6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who You'll Work With Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family. Your Impact You will be responsible for developing deep Retail domain understanding in at least one of the following areas –Customer Analytics, Pricing Optimization, Inventory Management, E-commerce, or Digital Transformation. Collaboration with business stakeholders, engineers and internal teams to build and implement extraordinary retail focused data products (reusable asset) and solutions and delivering them right to the client will be of utmost importance. You will work on the frameworks and libraries that our teams of Data Scientists and Data Engineers use to progress from data to impact and guide global companies through data science solutions to transform their businesses and enhance performance across industries including E-commerce, Grocery, F&B and CPG. 
Your Qualifications and Skills Master's or PhD degree in computer science, engineering or mathematics, or equivalent experience 6+ years of relevant experience with strong foundations of statistics and machine learning techniques Proven experience applying machine learning techniques to solve business problems Proven experience in translating technical methods to non-technical stakeholders Proven experience writing production-grade code (Python / PySpark) for machine learning in a professional setting Strong understanding of analytics libraries (e.g., pandas, numpy, matplotlib, scikit-learn, statsmodels, kedro, mlflow) Experience with any cloud platform (AWS, Azure, or GCP) Familiarity with containerization technologies (Docker, Docker Compose) Experience with Git-based workflows and automation frameworks, specifically GitHub Actions and GitLab CI/CD Familiarity or hands-on experience with data visualization tools (Power BI, Tableau, etc.)
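For context on the kind of production-grade Python and experiment tracking this listing describes, here is a minimal, illustrative sketch of a scikit-learn training run logged with MLflow. The dataset, column names, and experiment name are hypothetical assumptions, not part of the posting.

```python
# Minimal sketch: a tracked scikit-learn training run of the kind the listing describes.
# Dataset path, feature columns, and experiment name are hypothetical.
import mlflow
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("weekly_sales.csv")          # hypothetical retail dataset
X = df[["price", "promo_flag", "store_id"]]   # hypothetical feature columns
y = df["units_sold"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demand-forecasting")   # hypothetical experiment name
with mlflow.start_run():
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 300)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")  # persist the fitted model as a run artifact
```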
Posted 2 months ago
8.0 years
0 Lacs
India
On-site
Job Description First Solar reserves the right to offer you a role most applicable to your experience and skillset. Basic Job Functions: This position manages organizational strategy and operations through design and statistical analysis of business initiatives and experiments. Works with business partners to understand what the business needs and issues are to address. Applies advanced knowledge of statistics and data mining (e.g., predictive modelling, simulation) or other mathematical techniques to recognize patterns and create insights from business data. Designs, develops, and evaluates statistical and predictive models that lead to business solutions. Serves as lead statistician for the unit, providing expertise, oversight, and guidance on statistical analysis efforts. Communicates findings and recommendations to management across different departments. Supports implementation efforts. Should lead and mentor the team working in the Data Competency Centre. Education/Experience: Bachelor’s degree in Data Science, Computer Science, Electrical Engineering, Applied Statistics, or Physics with 8-10 years of relevant work experience in Data Science, Artificial Intelligence or Machine Learning algorithm development Master’s degree in Data Science, Computer Science, Electrical Engineering, Applied Statistics, or Physics with 5+ years of relevant work experience in Data Science, Artificial Intelligence or Machine Learning algorithm development PhD degree in Data Science, Computer Science, Electrical Engineering, Applied Statistics, or Physics with 4+ years of relevant work experience in Data Science, Artificial Intelligence or Machine Learning algorithm development Working knowledge of budgets and financial statements Required Skills/Competencies: Demonstrated skill in managing teams; ability to coach and mentor team members to drive results Demonstrated experience with programming languages and statistical software tools (Python, SAS, R, JMP or similar), relational databases (SQL Server), and data analysis and visualization software (preferably Power BI, SAS). Demonstrated experience with standard data science and machine learning packages such as NumPy, Pandas, Matplotlib, Seaborn, Bokeh, and Plotly Demonstrated experience with the machine learning model stack (regression, classification, neural networks, time series) and packages (scikit-learn, XGBoost, Keras, PyTorch, TensorFlow, statsmodels) Demonstrated experience with machine learning model trade-offs such as hyper-parameter tuning, regularization, cross-validation, skewness in data, dimensionality reduction, and complexity vs. interpretability. Demonstrated experience with utilizing advanced descriptive statistics and analysis techniques (such as forecasting, analysis of variance, t-tests, categorical data analysis, nonparametric data analysis, cluster analysis, factor analysis and multivariate statistical analysis) to design business experiments and measure the impact of business actions. Demonstrated experience with computer vision models (e.g., image classification, object detection, and segmentation) Demonstrated experience working with various business partners to scope and design the statistical framework for business solutions and to socialize and help integrate results. Strong communication skills to work with groups, and experience with communicating business implications of complex data relationships and results of statistical models to multiple business partners.
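As a rough illustration of the statistical testing this role calls for (t-tests and analysis of variance on business experiments), the sketch below uses scipy and statsmodels on a hypothetical experiment dataset; the file and column names are assumptions.

```python
# Illustrative sketch only: comparing a business metric across experiment groups
# with a Welch t-test and a one-way ANOVA. Columns ("group", "yield_pct") are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment_results.csv")    # hypothetical experiment dataset

# Two-sample t-test: treatment vs. control
treat = df.loc[df["group"] == "treatment", "yield_pct"]
ctrl = df.loc[df["group"] == "control", "yield_pct"]
t_stat, p_value = stats.ttest_ind(treat, ctrl, equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")

# One-way ANOVA across all groups via an OLS model
model = smf.ols("yield_pct ~ C(group)", data=df).fit()
print(anova_lm(model, typ=2))
```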
Essential Responsibilities: Statistical Analysis & Model Development Works with business partners to identify and scope new opportunities for statistical analysis applications to evaluate business performance and to support business decisions. Works internally and with I/S and Enterprise Data Management to define, secure and prepare datasets for statistical modeling. Explores data using a variety of statistical (e.g., data mining, regression, cluster analysis) techniques to answer business questions or guide future model development. Build programs for running statistical tests on data and for understanding correlation of various attributes. Builds hypotheses, identifies research data attributes and determines the best approach to address business issues. Working with business partners leads development of experimental design for business initiatives. Applies advanced statistical techniques, including analysis of variance, t-tests, factor analysis, regression and multivariate analyses or simulation, to analyze the effects of business initiatives. Build predictive models (e.g. logistic regression, generalized linear models) as appropriate to support business partner objectives. Builds and deploys computer vision models (e.g., image classification, object detection, segmentation) to meet business needs. Prepares testing scenarios and tests model performance. Incorporates findings and provides insights as part of model development and enhancement. Provides guidance and direction related to statistical analysis to less experienced Data Science & Analytics staff as needed. Provides peer review related to analytics methods and results. Responsible for advanced analytics, including experimental design and analysis, for the most complex business experiments. Model Consultation, Implementation, & Communication Serves as statistical expert within the unit as well as in consultation to various areas of the business, to support design and analysis of business experiments. Leads analytics projects or components related to large, complex business initiatives. Prepare recommendations and findings for business partners. Works with research and analytics staff and other areas of the business on model application and implementation. Effectively communicates and delivers statistical and predictive model results to business partners, supporting socialization and adoption of analysis results into business activities and decisions. Assists with knowledge transfer and training to business areas regarding new analytics applications as part of implementation process. Works closely with business stakeholders to identify and answer critical questions. Assists in the development of standard analytical approaches and methodologies for the department. Provides knowledge transfer and training on new modeling and statistical analysis tools and methodologies to less experienced staff. Industry Research Research and maintain awareness of industry’s best practices and business strategies. Proactively brings in new and innovative ideas and approaches to develop business solutions. Research and leverage new statistical techniques and technologies to apply in their statistical research work. Reporting Relationships: Will report to Head- Automation Digital Transformation USA This role will have direct reports Travel: Physical Requirements: Will sit and stand for long periods of time during the day. Will walk, climb stairs and on equipment. May reach above shoulder heights and below the waist May lift up to 50 lbs. 
Required to use hands to finger, lift, handle, carry or feel objects. May stoop, kneel, bend, talk and hear. Specific vision abilities are required. All associates working on the production floor may be required to wear a respirator at any given time and thus, the ability to wear a respirator is a condition of employment and continued employment (require little or no facial hair). Office Physical Requirements: All positions in our office require interaction with people and technology while either standing or sitting. To best service our customers, internal and external, all associates must be able to communicate face-to-face and on the phone with or without reasonable accommodation. First Solar is committed to compliance with its obligations under all applicable state and federal laws prohibiting employment discrimination. In keeping with this commitment, it attempts to reasonably accommodate applicants and employees in accordance with the requirements of the disability discrimination laws. It also invites individuals with disabilities to participate in a good faith, interactive process to identify reasonable accommodations that can be made without imposing an undue hardship. Potential candidates will meet the education and experience requirements provided on the above job description and excel in completing the listed responsibilities for this role. All candidates receiving an offer of employment must successfully complete a background check and any other tests that may be required. Equal Opportunity Employer Statement: First Solar is an Equal Opportunity Employer that values and respects the importance of a diverse and inclusive workforce. It is the policy of the company to recruit, hire, train and promote persons in all job titles without regard to race, color, religion, sex, age, national origin, veteran status, disability, sexual orientation, or gender identity. We recognize that diversity and inclusion is a driving force in the success of our company. I have read and understand the above ‘Roles and Responsibilities’ for this position. I believe I meet the ‘Minimum Qualifications’ to perform the required ‘Essential Functions & Responsibilities’ to the best of my abilities.
Posted 2 months ago
2.0 years
10 Lacs
Gurgaon
On-site
Gurgaon, India We are seeking an Associate Consultant to join our India team based in Gurgaon. This role at Viscadia offers a unique opportunity to gain hands-on experience in the healthcare industry, with comprehensive training in core consulting skills such as critical thinking, market analysis, and executive communication. Through project work and direct mentorship, you will develop a deep understanding of healthcare business dynamics and build a strong foundation for a successful consulting career. ROLES AND RESPONSIBILITIES Technical Responsibilities Design and build full-stack forecasting and simulation platforms using modern web technologies (e.g., React, Node.js, Python) hosted on AWS infrastructure (e.g., Lambda, EC2, S3, RDS, API Gateway). Automate data pipelines and model workflows using Python for data preprocessing, time-series modeling (e.g., ARIMA, Exponential Smoothing), and backend services. Develop and enhance product positioning, messaging, and resources that support the differentiation of Viscadia from its competitors. Conduct research and focus groups to elucidate key insights that augment positioning and messaging Replace legacy Excel/VBA tools with scalable, cloud-native applications, integrating dynamic reporting features and user controls via web UI. Use SQL and cloud databases (e.g., AWS RDS, Redshift) to query and transform large datasets as inputs to models and dashboards. Develop interactive web dashboards using frameworks like React + D3.js or embed tools like Power BI/Tableau into web portals to communicate insights effectively. Implement secure, modular APIs and microservices to support modularity, scalability, and seamless data exchange across platforms. Ensure cost-effective and reliable deployment of solutions via AWS services, CI/CD pipelines, and infrastructure-as-code (e.g., CloudFormation, Terraform). Business Responsibilities Support the development and enhancement of forecasting and analytics platforms tailored to the needs of pharmaceutical clients across various therapeutic areas Build in depth understanding of pharma forecasting concepts, disease areas, treatment landscapes, and market dynamics to contextualize forecasting models and inform platform features Partner with cross-functional teams to ensure forecast deliverables align with client objectives, timelines, and decision-making needs Contribute to a culture of knowledge sharing and continuous improvement by mentoring junior team members and helping codify best practices in forecasting and business analytics Grow into a client-facing role, combining an understanding of commercial strategy with forecasting expertise to lead engagements and drive value for clients QUALIFICATIONS Bachelor’s degree (B.Tech/B.E.) 
from a premier engineering institute, preferably in Computer Science, Information Technology, Electrical Engineering, or related disciplines 2+ years of experience in full-stack development, with a strong focus on designing, developing, and maintaining AWS-based applications and services SKILLS & TECHNICAL PROFICIENCIES Technical Skills Proficient in Python, with practical experience using libraries such as pandas, NumPy, matplotlib/seaborn, and statsmodels for data analysis and statistical modeling Strong command of SQL for data querying, transformation, and seamless integration with backend systems Hands-on experience in designing and maintaining ETL/ELT data pipelines, ensuring efficient and scalable data workflows Solid understanding and applied experience with cloud platforms, particularly AWS; working familiarity with Azure and Google Cloud Platform (GCP) Full-stack web development expertise, including building and deploying modern web applications, web hosting, and API integration Proficient in Microsoft Excel and PowerPoint, with advanced skills in data visualization and delivering professional presentations Soft Skills Excellent verbal and written communication skills, with the ability to effectively engage both technical and non-technical stakeholders Strong analytical thinking and problem-solving abilities, with a structured and solution-oriented mindset Demonstrated ability to work independently as well as collaboratively within cross-functional teams Adaptable and proactive, with a willingness to thrive in a dynamic, fast-growing environment Genuine passion for consulting, with a focus on delivering tangible business value for clients Domain Expertise (Good to have) Strong understanding of pharmaceutical commercial models, including treatment journeys, market dynamics, and key therapeutic areas Experience working with and interpreting industry-standard datasets such as IQVIA, Symphony Health, or similar secondary data sources Familiarity with product lifecycle management, market access considerations, and sales performance tracking metrics used across the pharmaceutical value chain
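For reference, a minimal sketch of the time-series modeling this listing names (ARIMA and Exponential Smoothing via statsmodels) is shown below; the input series, frequency, and horizon are hypothetical.

```python
# Minimal sketch of ARIMA and Exponential Smoothing forecasting with statsmodels.
# The input file, series frequency, and forecast horizon are hypothetical.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

sales = (
    pd.read_csv("monthly_product_sales.csv", parse_dates=["month"], index_col="month")["units"]
    .asfreq("MS")  # monthly frequency so forecasts get a proper date index
)

# ARIMA(1,1,1) as a simple baseline
arima_fit = ARIMA(sales, order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=12)

# Holt-Winters with additive trend and seasonality
hw_fit = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12).fit()
hw_forecast = hw_fit.forecast(12)

print(pd.DataFrame({"arima": arima_forecast, "holt_winters": hw_forecast}))
```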
Posted 2 months ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Change the world. Love your job. Texas Instruments is seeking an experienced Data Scientist to join our team. As the Data Scientist in TI's Demand Analytics team, you will play a pivotal role in shaping and executing our demand planning and inventory buffer strategies for the company. You will be working side by side with a team of highly technical professionals that consists of application developers, system architects, data scientists and data engineers. This role will be responsible for solving complex business problems through innovative solutions that deliver tangible business value. This position requires a technical leader with a strong technical background in AI/ML, simulation solutions, strategic thinking, and a passion for innovation through data. This team is responsible for: portfolio management for demand forecasting algorithms, generation of inventory buffer targets, segmentation of TI's products and simulation/validation frameworks, defining specs/reference architectures to best achieve business outcomes, and ensuring security and interoperability between capabilities. Roles And Duties Stakeholder engagement: Work collaboratively and strategically with stakeholder groups to achieve TI business strategy and goals Communicate complex technical concepts and influence final business outcomes with stakeholders effectively Partner with cross-functional teams to identify and prioritize actionable, high-impact insights across a variety of core business areas Technology and platforms: Build simple, scalable and modular technology stacks using modern technologies and software engineering principles. Simulate real-world scenarios with various models and approaches to determine the best fit of algorithms by varying the inputs across hundreds to thousands of variables Research, experiment and implement new approaches and models that flex with the business strategy transformations Leads data acquisition and engineering efforts Develops and applies machine learning, AI and data engineering framework Solutions, writes and debugs code for complex development projects Oversees, evaluates and determines the best modeling techniques for various scenarios, letting the data drive the conversation Qualifications Minimum requirements: MS or PhD in a quantitative field (e.g., Computer Science, Statistics, Engineering, Mathematics) or equivalent practical experience. 8+ years of professional experience in data science or a related role. 5+ years of hands-on experience developing and deploying time series forecasting models in a professional setting. Demonstrated experience in the supply chain domain (e.g., Semiconductor, Retail, CPG, Pharmaceutical), with a deep understanding of concepts like demand forecasting, S&OP, or inventory management. Expert-level proficiency in Python and its core data science libraries (e.g., Pandas, NumPy, Scikit-learn, Statsmodels) and forecasting packages (e.g., Prophet, PyTorch Forecasting). Proven experience taking machine learning models from prototype to production, including knowledge of CI/CD and model monitoring. Preferred Qualifications Experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, Airflow, Docker, Kubernetes). Practical experience with cloud data science platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform). Familiarity with advanced forecasting techniques such as probabilistic forecasting, causal inference, or using Transformers for time series.
Experience applying NLP to extract features from unstructured text to enhance forecasting models. Strong SQL skills and experience working with large-scale data warehousing solutions (e.g., Snowflake, BigQuery, Redshift). About Us Why TI? Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics. We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us. About Texas Instruments Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com . Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment. If you are interested in this position, please apply to this requisition. About The Team TI does not make recruiting or hiring decisions based on citizenship, immigration status or national origin. However, if TI determines that information access or export control restrictions based upon applicable laws and regulations would prohibit you from working in this position without first obtaining an export license, TI expressly reserves the right not to seek such a license for you and either offer you a different position that does not require an export license or decline to move forward with your employment.
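As an illustration of the forecasting stack named in this listing, here is a minimal Prophet sketch on a hypothetical demand series; the input file and horizon are assumptions.

```python
# Minimal sketch using Prophet, one of the forecasting packages named above.
# Prophet expects a DataFrame with columns "ds" (date) and "y" (value);
# the input file and 13-week horizon are hypothetical.
import pandas as pd
from prophet import Prophet

demand = pd.read_csv("weekly_demand.csv")  # hypothetical: columns "ds", "y"

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(demand)

future = model.make_future_dataframe(periods=13, freq="W")  # 13 weeks ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```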
Posted 2 months ago
2.0 years
0 Lacs
Gurgaon Rural, Haryana, India
On-site
We are seeking a Full Stack Developer (Associate Consultant) to join our India team based in Gurgaon. This role at Viscadia offers a unique opportunity to gain hands-on experience in the healthcare industry, with comprehensive training in core consulting skills such as critical thinking, market analysis, and executive communication. Through project work and direct mentorship, you will develop a deep understanding of healthcare business dynamics and build a strong foundation for a successful consulting career. ROLES AND RESPONSIBILITIES Technical Responsibilities Design and build full-stack forecasting and simulation platforms using modern web technologies (e.g., React, Node.js, Python) hosted on AWS infrastructure (e.g., Lambda, EC2, S3, RDS, API Gateway). Automate data pipelines and model workflows using Python for data preprocessing, time-series modeling (e.g., ARIMA, Exponential Smoothing), and backend services. Replace legacy Excel/VBA tools with scalable, cloud-native applications, integrating dynamic reporting features and user controls via web UI. Use SQL and cloud databases (e.g., AWS RDS, Redshift) to query and transform large datasets as inputs to models and dashboards. Develop interactive web dashboards using frameworks like React + D3.js or embed tools like Power BI/Tableau into web portals to communicate insights effectively. Implement secure, modular APIs and microservices to support modularity, scalability, and seamless data exchange across platforms. Ensure cost-effective and reliable deployment of solutions via AWS services, CI/CD pipelines, and infrastructure-as-code (e.g., CloudFormation, Terraform). Business Responsibilities Support the development and enhancement of forecasting and analytics platforms tailored to the needs of pharmaceutical clients across various therapeutic areas Build an in-depth understanding of pharma forecasting concepts, disease areas, treatment landscapes, and market dynamics to contextualize forecasting models and inform platform features Partner with cross-functional teams to ensure forecast deliverables align with client objectives, timelines, and decision-making needs Contribute to a culture of knowledge sharing and continuous improvement by mentoring junior team members and helping codify best practices in forecasting and business analytics Grow into a client-facing role, combining an understanding of commercial strategy with forecasting expertise to lead engagements and drive value for clients QUALIFICATIONS Bachelor’s degree (B.Tech/B.E.)
from a premier engineering institute, preferably in Computer Science, Information Technology, Electrical Engineering, or related disciplines 2+ years of experience in full-stack development, with a strong focus on designing, developing, and maintaining AWS-based applications and services SKILLS AND TECHNICAL PROFICIENCIES Technical Skills Proficient in Python, with practical experience using libraries such as pandas, NumPy, matplotlib/seaborn, and statsmodels for data analysis and statistical modeling Strong command of SQL for data querying, transformation, and seamless integration with backend systems Hands-on experience in designing and maintaining ETL/ELT data pipelines, ensuring efficient and scalable data workflows Solid understanding and applied experience with cloud platforms, particularly AWS; working familiarity with Azure and Google Cloud Platform (GCP) Full-stack web development expertise, including building and deploying modern web applications, web hosting, and API integration Proficient in Microsoft Excel and PowerPoint, with advanced skills in data visualization and delivering professional presentations Soft Skills Excellent verbal and written communication skills, with the ability to effectively engage both technical and non-technical stakeholders Strong analytical thinking and problem-solving abilities, with a structured and solution-oriented mindset Demonstrated ability to work independently as well as collaboratively within cross-functional teams Adaptable and proactive, with a willingness to thrive in a dynamic, fast-growing environment Genuine passion for consulting, with a focus on delivering tangible business value for clients Domain Expertise Strong understanding of pharmaceutical commercial models, including treatment journeys, market dynamics, and key therapeutic areas Experience working with and interpreting industry-standard datasets such as IQVIA, Symphony Health, or similar secondary data sources Familiarity with product lifecycle management, market access considerations, and sales performance tracking metrics used across the pharmaceutical value chain
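To make the API side of this full-stack role concrete, a minimal FastAPI sketch for serving a forecast endpoint follows; the route, request model, and placeholder logic are illustrative assumptions, not the actual Viscadia platform.

```python
# Minimal sketch of a forecast-serving API with FastAPI.
# The route name, request schema, and placeholder logic are hypothetical;
# a real service would load a trained model instead of returning a flat baseline.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ForecastRequest(BaseModel):
    product_id: str
    horizon_months: int = 12

@app.post("/forecast")
def forecast(req: ForecastRequest):
    baseline = [100.0] * req.horizon_months  # placeholder forecast values
    return {"product_id": req.product_id, "forecast": baseline}
```

Assuming the file is saved as app.py, it could be run locally with `uvicorn app:app --reload`.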
Posted 2 months ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Urgently Hiring for AI/ML Role We are seeking an experienced AI/ML Engineer with 3–5 years of hands-on experience in building and deploying machine learning solutions in the fintech and spend management domain. You will work on real-time forecasting, intelligent document processing (invoices/receipts), fraud detection, and other AI-powered features that enhance our finance intelligence platform. This role demands expertise in both time series forecasting and computer vision, as well as a solid understanding of how ML applies to enterprise finance operations. Key Responsibilities: Design, train, and deploy ML models for spend forecasting, budget prediction, expense categorization, and risk scoring. Build and optimize OCR-based invoice and receipt parsing systems using computer vision and NLP techniques. Implement time-series models (Prophet, ARIMA, LSTM, XGBoost, etc.) for forecasting trends in financial transactions, expenses, and vendor payments. Work on intelligent document classification, key-value extraction, and line-item detection from unstructured financial documents (PDFs, scanned images). Collaborate with product and finance teams to define high-impact AI use cases and deliver business-ready solutions. Integrate ML pipelines into production using scalable tools and platforms (Docker, CI/CD, cloud services). Monitor model performance post-deployment, conduct drift analysis, and implement retraining strategies. Required Skills & Qualifications: Core Machine Learning Strong knowledge of supervised and unsupervised ML techniques applied to structured and semi-structured financial data Experience in time-series analysis and forecasting algorithms such as ARIMA, SARIMA, Facebook Prophet, XGBoost for regression, and LSTM/GRU models for sequential data Proficiency in Python and key libraries: scikit-learn, Pandas, NumPy, StatsModels, PyTorch, TensorFlow. Computer Vision & Document AI Hands-on experience with OCR tools such as Tesseract, Google Vision API, or AWS Textract. Knowledge of document layout analysis and field-level extraction using OpenCV, LayoutLM, or Google Document AI. Familiarity with annotation tools (Label Studio, CVAT) and post-processing OCR outputs for structured data extraction. Deployment & Engineering Experience in exposing ML models via Flask or FastAPI. Model packaging and deployment with Docker, version control with Git, and ML lifecycle tools like MLflow or DVC. Working knowledge of cloud platforms (AWS/GCP/Azure) and integrating models with backend microservices. Data & Domain Understanding of financial documents: invoices, receipts, expense reports, and GL data. Ability to work with tabular, image-based, and PDF-based financial datasets. SQL proficiency and familiarity with financial databases or ERP systems is a plus.
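As a rough sketch of the OCR-based receipt parsing this role involves, the example below extracts raw text from a scanned receipt with pytesseract and pulls a total amount with a simple regex; the image path and regex are illustrative assumptions, and production systems would add layout-aware models and post-processing.

```python
# Rough sketch of OCR-based receipt parsing with Tesseract via pytesseract.
# The image path and the regex for the total are illustrative assumptions.
import re
from PIL import Image
import pytesseract

image = Image.open("sample_receipt.png")        # hypothetical scanned receipt
raw_text = pytesseract.image_to_string(image)

# Naive key-value extraction: find a line such as "Total: 1,234.56"
match = re.search(r"total[:\s]*([\d,]+\.\d{2})", raw_text, flags=re.IGNORECASE)
total_amount = float(match.group(1).replace(",", "")) if match else None

print("Extracted total:", total_amount)
```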
Posted 2 months ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Scientist Type: Contract Experience: 5+ years Contract Duration: 6 months, extendable Location: Remote Time zone: IST/UK Shift Key Responsibilities: Develop predictive models using time series, regression, and related techniques Perform diagnostic analysis to identify patterns, outliers, and potential drivers of campaign performance Evaluate data quality and recommend strategies to address gaps or improve model accuracy Work closely with the data engineering team Translate business objectives into data science approaches that are explainable and reliable Required Skills: Strong experience in statistical modeling, machine learning, and time series forecasting Proficiency in Python (Pandas, scikit-learn, statsmodels) and SQL Experience building rule-based or constraint-based decision models Comfort designing interpretable logic or simulation frameworks to support “what-if” planning and tactical recommendations Experience using/implementing LLMs or NLP Nice to Have: Familiarity with attribution modeling, marketing mix modeling Prior work on AI-driven reporting/insight tools using LLMs or NLP Experience working with marketing/media data Exposure to cloud-based environments (AWS preferred)
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Senior Data Scientist with 5+ years of experience, you will be responsible for designing and implementing models, mining data for insights, and interpreting complex data structures to drive business decision-making. Your expertise in machine learning, including areas such as NLP, machine vision, and time series, will be essential in this role. You will be expected to have strong skills in model tuning, model validation, supervised and unsupervised learning, and hands-on experience with model development, data preparation, training, and inference-ready deployment of models. Your proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis will help in developing code for reproducible analysis of data. Experience with AWS services like SageMaker, Lambda, Glue, Step Functions, and EC2 is necessary, along with knowledge of Databricks, the Anaconda distribution, and similar data science code development and deployment IDEs. Your familiarity with ML algorithms related to time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis will be highly valued. You should have expertise in Hive/Impala, Spark, Python, Pandas, Keras, scikit-learn, StatsModels, TensorFlow, and PyTorch. End-to-end model deployment and production experience of at least 1 year is required, along with a good understanding of model deployment on the Azure ML platform, Anaconda Enterprise, or AWS SageMaker. Basic knowledge of deep learning algorithms such as Mask R-CNN and YOLO, and familiarity with visualization and analytics/reporting tools like Power BI, Tableau, and Alteryx, will be considered advantageous for this role.
Posted 2 months ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What You'll Work On Develop state-of-the-art time series models for anomaly detection and forecasting in observability data. Design a root cause analysis system using LLMs, causal analysis, machine learning and anomaly detection algorithms. Develop Large Language Models for time series analysis. Create highly scalable ML pipelines for real-time monitoring and alerting. Build and maintain ML Ops workflows for model deployment, evaluation, monitoring, and updates. Build frameworks to evaluate AI agents. Handle large datasets using Python and its ML ecosystem (e.g., NumPy, Pandas, Scikit-Learn, TensorFlow, PyTorch, Statsmodels). Use Bayesian methods, Granger causality, counterfactual analysis, and other techniques to derive meaningful system insights. Collaborate with other teams to deploy ML-driven observability solutions in production. What We're Looking For 5+ years of hands-on experience in Machine Learning, Time Series Analysis, and Causal Analytics. Bachelor's degree in Computer Science, Mathematics or Statistics; a Master's or PhD is a plus. Strong proficiency in Python and libraries like Scikit-Learn, TensorFlow, PyTorch, Statsmodels, Prophet, or similar. Deep understanding of time-series modeling, forecasting, and anomaly detection techniques. Expertise in causal inference, Bayesian statistics, and causal graph modeling. Experience in ML Ops, model evaluation, and deployment in production environments. Working knowledge of databases and data processing frameworks (SQL, Spark, Dask, etc.). Experience in observability, monitoring, or AI-driven system diagnostics is a big plus. Background in AI Agent evaluation and optimization is a plus. Working with LLMs, fine-tuning LLMs, LLM Ops, and LLM Agents is a big plus. Our Values Loyalty & Long-term Commitment - We invest in people who invest in us. Opinionated yet Open-Minded - We value strong perspectives but encourage constructive discussions. Passion - We seek individuals who are passionate about their craft. Humility & Integrity - Honest, transparent, and accountable team members are key. Adaptability & Self-Sufficiency - Ability to thrive in a fast-paced and evolving environment. Build Fast and Break Fast - We believe in rapid iteration and learning from failures. What You'll Work On You will be instrumental in building the next-generation Observability platform for automated Root Cause Analysis using LLMs and Machine Learning algorithms. You will be innovating on building LLMs for time series analysis. You'll have the opportunity to work with an experienced team, gain deep insights into how startups are built, and be at the forefront of disruptive innovation in Observability. Compensation - up to 2Cr+ (ref:hirist.tech)
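To make the anomaly-detection theme concrete, a minimal rolling z-score sketch on a hypothetical observability metric is shown below; the metric file, window size, and threshold are assumptions, and the systems described in this listing would layer forecasting models and causal analysis on top.

```python
# Minimal sketch: rolling z-score anomaly detection on an observability metric.
# The metric file, 60-sample window, and 3-sigma threshold are illustrative assumptions.
import pandas as pd

metric = pd.read_csv("latency_p95.csv", parse_dates=["ts"], index_col="ts")["ms"]

window = 60                                     # trailing 60 samples
rolling_mean = metric.rolling(window).mean()
rolling_std = metric.rolling(window).std()
z_score = (metric - rolling_mean) / rolling_std

anomalies = metric[z_score.abs() > 3]           # flag points beyond 3 standard deviations
print(anomalies.tail())
```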
Posted 2 months ago
175.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? This position, in the GSG Advanced Analytics team as part of GSG MIS COE, is looking for full-time candidates as Data Science Analysts. The Advanced Analytics team works across a portfolio of Data Science/Machine Learning projects spanning multiple areas of Servicing within the GSG organization. Understand the overall business perspective and help conceptualize the business problem into a Data Science/ML roadmap Have a research bent of mind and read/engage/apply new techniques and algorithms in the field to generate efficiencies/process improvements. Rigorous testing of algorithms as per business norms, delivering significant working leverage over the status quo and generating value for the business. Capability of writing, debugging and compiling code in multiple Machine Learning environments and an understanding of the Python stack is necessary Research focus on problem solving through NLP (Natural Language Processing) is critical. Clear application of performance testing and validation frameworks for machine learning models Understand and deploy mathematical foundations of cutting-edge NLP techniques on varied sources of data Willingness to derive insights from terabyte-sized data and the capability to design scalable solutions is paramount Minimum Qualifications · Proven experience applying Machine Learning techniques like Regression, Classification, Supervised or Unsupervised Recommenders, Deep Learning etc. · Ability to work in cross-functional teams · Excellent data wrangling and visualization skills · Hands-on knowledge of SQL is expected · A research mindset with a zeal to experiment with new algorithms is expected Preferred Qualifications: · Master’s in a quantitative field (e.g., Finance, Engineering, Mathematics, Statistics, Computer Science or Economics) from a top institute. · A set of high-impact research papers on applied NLP/ Deep Learning/ GenAI · Prior experience working with Transformers/LSTMs/CNNs preferred · Working knowledge of fine-tuning Large Language Models (LLMs) is a plus · Deep knowledge of Statistics and Maths and the ability to dissect problems from first principles. Exposure to fields like Linear Algebra, Bayesian Statistics, Group theory is desirable · Complete grip on the Python environment and libraries (pandas, numpy, nltk, statsmodels, gensim, pyspark, spacy, transformers); deep learning expertise in any of TensorFlow/Torch is preferred. We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 months ago
175.0 years
2 - 2 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? This position, in the GSG Advanced Analytics team as part of GSG MIS COE, is looking for full-time candidates as Data Science Analysts. The Advanced Analytics team works across a portfolio of Data Science/Machine Learning projects spanning multiple areas of Servicing within the GSG organization. Understand the overall business perspective and help conceptualize the business problem into a Data Science/ML roadmap Have a research bent of mind and read/engage/apply new techniques and algorithms in the field to generate efficiencies/process improvements. Rigorous testing of algorithms as per business norms, delivering significant working leverage over the status quo and generating value for the business. Capability of writing, debugging and compiling code in multiple Machine Learning environments and an understanding of the Python stack is necessary Research focus on problem solving through NLP (Natural Language Processing) is critical. Clear application of performance testing and validation frameworks for machine learning models Understand and deploy mathematical foundations of cutting-edge NLP techniques on varied sources of data Willingness to derive insights from terabyte-sized data and the capability to design scalable solutions is paramount Minimum Qualifications Proven experience applying Machine Learning techniques like Regression, Classification, Supervised or Unsupervised Recommenders, Deep Learning etc. Ability to work in cross-functional teams Excellent data wrangling and visualization skills Hands-on knowledge of SQL is expected A research mindset with a zeal to experiment with new algorithms is expected Preferred Qualifications: Master’s in a quantitative field (e.g., Finance, Engineering, Mathematics, Statistics, Computer Science or Economics) from a top institute. A set of high-impact research papers on applied NLP/ Deep Learning/ GenAI Prior experience working with Transformers/LSTMs/CNNs preferred Working knowledge of fine-tuning Large Language Models (LLMs) is a plus Deep knowledge of Statistics and Maths and the ability to dissect problems from first principles. Exposure to fields like Linear Algebra, Bayesian Statistics, Group theory is desirable Complete grip on the Python environment and libraries (pandas, numpy, nltk, statsmodels, gensim, pyspark, spacy, transformers); deep learning expertise in any of TensorFlow/Torch is preferred. We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 months ago
0 years
0 Lacs
Madurai, Tamil Nadu, India
On-site
Role: AIML Engineer
Location: Madurai / Chennai
Language: Python
DBs: SQL
Core Libraries:
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
SOTA ML: ML models, boosting & ensemble models, etc.
Explainability: SHAP / LIME
Required skills:
Deep Learning: PyTorch, PyTorch Forecasting
Data Processing: Pandas, NumPy, Polars (optional), PySpark
Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning
Deployment & MLOps: Batch & real-time with API endpoints, MLflow
Serving: TorchServe, SageMaker endpoints / batch
Containerization: Docker
Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines
AWS Services: SageMaker (Training, Inference, Tuning), S3 (Data Storage), CloudWatch (Monitoring), Lambda (Trigger-based Inference), ECR, ECS or Fargate (Container Hosting)
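A minimal sketch of the pmdarima usage implied by the stack above, run on a hypothetical monthly series, follows; the input file and forecast horizon are assumptions.

```python
# Minimal sketch: automatic ARIMA order selection with pmdarima, one of the
# forecasting libraries listed above. The input series and horizon are hypothetical.
import pandas as pd
import pmdarima as pm

y = pd.read_csv("monthly_orders.csv", parse_dates=["month"], index_col="month")["orders"]

model = pm.auto_arima(y, seasonal=True, m=12, stepwise=True, suppress_warnings=True)
print(model.summary())

forecast, conf_int = model.predict(n_periods=6, return_conf_int=True)
print(forecast)
```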
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary JD - Having a background in Retail will be a big plus. Advanced in Python: Proven experience with core libraries like pandas, numpy, scikit-learn, and matplotlib, plus advanced tools like statsmodels, xgboost, lightgbm, prophet, and deep learning-based forecasting (e.g., NeuralForecast). Advanced Forecasting Techniques: Experience with ensemble models, hierarchical forecasting, probabilistic forecasting, and multivariate time series. Pricing Models: Background in price elasticity modeling; experience in optimization is good to have (for at least one of the two resources). Model Evaluation: Familiarity with time-series cross-validation, backtesting, and metrics such as MAE, MAPE, RMSE, SMAPE, and prediction intervals. SQL Proficiency: Ability to query and manage data from relational databases. Version Control: Comfortable working with Git/GitHub for collaboration and code management. Mandatory skills - Python and advanced forecasting skills with models like ARIMA, etc. Shift - Second shift Location - Bangalore
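As a sketch of the price-elasticity modeling mentioned above, a log-log OLS regression with statsmodels is shown below; the coefficient on log price approximates the own-price elasticity. The dataset and column names are assumptions.

```python
# Sketch of price elasticity estimation via a log-log OLS regression:
# the coefficient on np.log(price) approximates the own-price elasticity.
# Dataset and column names ("units", "price", "promo") are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sku_weekly_sales.csv")

model = smf.ols("np.log(units) ~ np.log(price) + promo", data=df).fit()
print(model.summary())
print("Estimated price elasticity:", model.params["np.log(price)"])
```

An elasticity estimate near -1 would suggest roughly proportional demand response to price changes, which is the kind of signal a pricing optimization model would consume.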
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary JD - Having a background in Retail will be a big plus. Advanced in Python: Proven experience with core libraries like pandas, numpy, scikit-learn, and matplotlib, plus advanced tools like statsmodels, xgboost, lightgbm, prophet, and deep learning-based forecasting (e.g., NeuralForecast). Advanced Forecasting Techniques: Experience with ensemble models, hierarchical forecasting, probabilistic forecasting, and multivariate time series. Pricing Models: Background in price elasticity modeling; experience in optimization is good to have (for at least one of the two resources). Model Evaluation: Familiarity with time-series cross-validation, backtesting, and metrics such as MAE, MAPE, RMSE, SMAPE, and prediction intervals. SQL Proficiency: Ability to query and manage data from relational databases. Version Control: Comfortable working with Git/GitHub for collaboration and code management. Mandatory skills - Python and advanced forecasting skills with models like ARIMA, etc. Shift - Second shift Location - Bangalore
Posted 2 months ago
4.0 years
0 Lacs
Madurai
On-site
Job Location: Madurai Job Experience: 4-15 Years Model of Work: Work From Office Technologies: Artificial Intelligence Machine Learning Functional Area: Software Development Job Summary: Job Title: ML Engineer – TechMango Location: TechMango, Madurai Experience: 4+ Years Employment Type: Full-Time Role Overview We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments. Key Responsibilities: Core Technical Skills Languages & Databases Programming Language: Python Databases: SQL Core Libraries & Tools Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet Machine Learning Models: State-of-the-art ML models, including boosting and ensemble methods Model Explainability: SHAP, LIME Deep Learning & Data Processing Frameworks: PyTorch, PyTorch Forecasting Libraries: Pandas, NumPy, PySpark, Polars (optional) Hyperparameter Tuning Tools: Optuna, Amazon SageMaker Automatic Model Tuning Deployment & MLOps Model Deployment: Batch & real-time with API endpoints Experiment Tracking: MLFlow Model Serving: TorchServe, SageMaker Endpoints / Batch Containerization & Pipelines Containerization: Docker Orchestration: AWS Step Functions, SageMaker Pipelines AWS Cloud Stack SageMaker (Training, Inference, Tuning) S3 (Data Storage) CloudWatch (Monitoring) Lambda (Trigger-based inference) ECR / ECS / Fargate (Container Hosting) Candidate Requirements Strong problem-solving and analytical mindset Hands-on experience with end-to-end ML project lifecycle Familiarity with MLOps workflows in production environments Excellent communication and documentation skills Comfortable working in agile, cross-functional teams
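To illustrate the model-explainability tooling (SHAP) named in this listing, a brief sketch follows; the dataset, target, and model choice are hypothetical stand-ins.

```python
# Brief sketch of SHAP-based explanation for a tree ensemble, as referenced above.
# The dataset, target column, and model hyperparameters are hypothetical stand-ins.
import pandas as pd
import shap
import xgboost as xgb

df = pd.read_csv("orders_features.csv")         # hypothetical feature table
X = df.drop(columns=["demand"])
y = df["demand"]

model = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)               # global feature-importance view
```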
Posted 2 months ago
8.0 years
0 Lacs
India
Remote
Location: Remote | Commitment: Full-time Company Overview Insight Fusion Analytics turns complex data into actionable insight for clients across finance, retail, and professional sport. Our sports-analytics unit builds predictive systems that transform raw match, athlete, and biomechanical data into winning strategies. Role Summary We’re hiring a Lead Statistician with deep expertise in sports prediction to architect, validate, and continuously refine our forecasting engines. You’ll own the statistical core—from feature-engineering pipelines through probabilistic calibration—working with ML engineers and domain analysts to produce production-ready forecasts that thrive in real-world conditions. What You’ll Do Model Architecture & Validation – Design Bayesian and frequentist frameworks (hierarchical Elo, Poisson-Gamma, state-space models) and build leakage-proof cross-validation strategies. Feature Engineering & Experimental Design – Derive advanced spatio-temporal, biometric, and contextual features; run A/B and multivariate tests to quantify lift. Uncertainty Quantification – Produce calibrated predictive intervals, scenario simulations, and decision-theoretic metrics (Brier, CRPS, EVaR). Mentorship & Review – Set statistical standards, review code/notebooks, and mentor junior analysts. Stakeholder Communication – Translate complex statistical results into concise recommendations for coaches, product managers, and executives. Must-Have Qualifications 8+ years professional experience (or PhD + 5 years ) in applied statistics, econometrics, or quantitative social science. Documented track record building sports prediction systems. Expert proficiency with Python (NumPy, SciPy, Pandas, statsmodels, PyMC/Stan) and SQL; R a plus. Mastery of resampling methods, hierarchical models, time-series analysis, Monte-Carlo simulation, and causal inference. Proven success preventing data leakage and look-ahead bias in live pipelines. Strong communication skills for both technical and non-technical audiences. Nice-to-Have Familiarity with deep-learning frameworks (TensorFlow/PyTorch) for hybrid stat-ML architectures. Experience deploying models on AWS, GCP, or Azure using containerized workflows. Publications or conference talks in sports analytics (MIT Sloan, NESSIS, MathSport). How to Apply Send the following to insightfusionanalytics@gmail.com : CV highlighting sports-analytics projects and publications. Portfolio or repo links demonstrating end-to-end statistical modelling for sports prediction. A one-page brief describing your proudest predictive model: objective, methodology, error analysis, and business impact.
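As a minimal sketch of one of the frameworks named above, the example below fits a Poisson goal model as a GLM with statsmodels on a hypothetical match dataset; the file, columns, and team names are assumptions.

```python
# Minimal sketch of a Poisson goal model, one of the frameworks named above.
# The match dataset, column names, and team names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

matches = pd.read_csv("matches.csv")   # hypothetical: columns goals, team, opponent, home (0/1)

model = smf.glm(
    "goals ~ home + C(team) + C(opponent)",
    data=matches,
    family=sm.families.Poisson(),
).fit()
print(model.summary())

# Expected goals for a hypothetical fixture (teams assumed present in the training data)
fixture = pd.DataFrame({"home": [1], "team": ["Team A"], "opponent": ["Team B"]})
print("Expected goals:", model.predict(fixture).iloc[0])
```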
Posted 2 months ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior ML Engineer Minimum 4 to 8+ years of experience in ML development in a product-based company Location: Bangalore (Onsite) Why should you choose us? Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan’s newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let’s build the future of mobile telecommunications together! Required Skills and Expertise: Candidates must have experience working in a product-based company. Should be able to build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers. Should have experience manipulating and analysing large-scale datasets using Python, Pandas, NumPy, and Dask Apply advanced fine-tuning techniques (Full Fine-Tuning, PEFT) and strategies to large language and vision models. Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc. Develop and deploy scalable APIs for ML models using FastAPI. Should have experience performing data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh. Collaborate with cross-functional teams to deliver end-to-end ML solutions. Deploy machine learning models for diverse business applications in cloud-native and on-premise environments Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models. Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production. Stay current with the latest advancements in machine learning, deep learning, and AI. Our commitment to you: - Rakuten Group’s mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, - We aim to advance and enrich society. - To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value. RAKUTEN SHUGI PRINCIPLES: Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success. Always improve, always advance. Only be satisfied with complete success - Kaizen. Be passionately professional. Take an uncompromising approach to your work and be determined to be the best. Hypothesize - Practice - Validate - Shikumika. Use the Rakuten Cycle to succeed in unknown territory. Maximize Customer Satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile. Speed!! Speed!! Speed!! Always be conscious of time. Take charge, set clear goals, and engage your team.
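As a rough, hedged sketch of the PEFT fine-tuning mentioned in this listing, the snippet below wraps a Hugging Face model with a LoRA adapter via the peft library; the base checkpoint, target modules, and hyperparameters are illustrative choices, not the team's actual setup.

```python
# Rough sketch of parameter-efficient fine-tuning (LoRA via the peft library),
# as mentioned above. The base checkpoint, target modules, and hyperparameters
# are illustrative; training code (Trainer loop, data) is omitted.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # attention projection layers in DistilBERT
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```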
Posted 2 months ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Description We are looking for a Data Analyst with 3+ years of experience. BlueOptima is on a mission to maximize the economic and social value that software engineering organizations are capable of delivering. Our vision is to become the global reference for the optimization of the performance of Software Engineering. Our technology is used by some of the world’s largest organizations, including nine of the world’s top twelve Universal Banks, and a number of large corporates. We are a global organization with headquarters in London and additional offices in India, Mexico, and the US. We are made up of 100+ individuals from more than 20 different countries. We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment. Location: Bangalore Department: Data Engineering Job Description Job summary: Our ground-breaking technology is built on top of billions of data points that are representative of a developer’s interaction with Source Code and Task Tracking systems. The enormous amount of data BlueOptima processes daily requires specialists to dive into the dataset, identify insights from the data points, and devise solutions to extend and enhance BlueOptima’s product suite. We are looking for talented data analysts who are critical of data and curious to determine the story it narrates, explore vast datasets, and are aptly able to use any and all tools available at their disposal to interrogate the data. A successful candidate will turn data into information, information into insight, and insight into valuable product features. Responsibilities and tasks: Collaborate with the marketing team to produce impactful technical whitepapers by conducting thorough data collection and analysis and contributing to content development. Partner with the Machine Learning and Data Engineering team to develop and implement innovative solutions for our Developer Analytics and Team Lead Dashboard products. Provide insightful data analysis and build actionable dashboards to empower data-driven decision-making across business teams (Sales, Customer Success, Marketing). Deliver compelling data visualizations and reports using tools like Tableau and Grafana to communicate key insights to internal and external stakeholders. Identify and implement opportunities to automate data analysis, reporting, and dashboard creation processes to improve efficiency. Qualifications Technical Must have a minimum of 3 years of relevant work experience in Data Science, Data Analytics, or Business Intelligence Demonstrate advanced SQL expertise, including performance tuning (indexing, query optimization) and complex data transformation (window functions, CTEs) for extracting insights from large datasets Demonstrate intermediate-level Python skills for data analysis, including statistical modeling, machine learning (scikit-learn, statsmodels), and creating impactful visualizations, with a focus on writing well-documented, reusable code Possess strong data visualization skills with proficiency in at least one enterprise-level tool (e.g., Tableau, Grafana, Power BI), including dashboard creation, interactive visualizations, and data storytelling.
Behavioral
Communicates well and is able to express ideas clearly and in a thoughtful manner.
Comes up with a range of possible directions to analyse data when presented with ill-defined / open-ended problem statements.
Provides a rationale for each analytical direction, with pros and cons, without any support.
Showcases lateral thinking ability by approaching a problem from creative directions.
Demonstrates strong analytical project management skills, with the ability to break down complex data analysis initiatives into well-defined phases (planning, data acquisition, EDA, modeling, visualization, communication), ensuring efficient execution and impactful outcomes.

Your career progression: At BlueOptima, we strive to strengthen your skills, widen your scope of work, and develop your career fast. For this role, you can expect to become more autonomous and start working on your own individual projects. This will also lead to supporting or managing a specific area of metrics for the business (e.g. revenue metrics) and potentially growing into a mentor or Team Lead position.

Additional Information
Why join our team?
Culture and Growth:
Global team with a creative, innovative and welcoming mindset.
Rapid career growth and the opportunity to be an outstanding and visible contributor to the company's success.
Freedom to create your own success story in a high-performance environment.
Training programs and Personal Development Plans for each employee.
Benefits:
33 days of holidays - 18 annual leave + 7 sick leaves + 8 public and religious holidays.
Contributions to your Provident Fund, which can be matched by the company above the statutory minimum as agreed.
Private Medical Insurance provided by the company.
Gratuity payments.
Claim mobile/internet expenses and professional development costs.
Leave Travel Allowance.
Flexible work-from-home policy - 2 days at home per week.
Free drinks and snacks in the office.
International travel opportunities.
Global annual meet-up (most recent meet-ups have been held in Thailand and Cancun).
High-quality equipment (ergonomic chairs and 32" screens).
Stay connected with us on LinkedIn or keep an eye on our career page for future opportunities!
Posted 2 months ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
We are looking for an experienced and strategic Senior Data Scientist to join our high-impact analytics team in the FMCG sector. In this role, you will lead the development of a Resource Allocation Model that directly influences how we allocate marketing budgets, drive consumer demand, and enhance retail performance. This is a hands-on technical role requiring deep expertise in data science, machine learning, and business acumen to solve complex problems in a fast-paced, consumer-centric environment.

Key tasks & accountabilities
Develop, validate, and scale a Resource Allocation Model to quantify the impact of Sales & Marketing packages on sales and brand performance.
Implement optimization algorithms to inform budget allocation and maximize marketing ROI across geographies and product portfolios (an illustrative allocation sketch appears at the end of this posting).
Lead the development of predictive and prescriptive models to support commercial, trade, and brand teams.
Leverage PySpark to manage and transform large-scale retail, media, and consumer datasets.
Build and deploy ML models using Python and TensorFlow, ensuring robust model performance and business relevance.
Collaborate with marketing, category, and commercial stakeholders to embed insights into strategic decisions.
Use GitHub Actions for version control and CI/CD workflows, and DVC for data versioning and reproducible ML pipelines.
Present findings through compelling data storytelling and dashboards for senior leadership.
Mentor junior data scientists and contribute to a culture of innovation and excellence.

Qualifications, Experience, Skills
Level of Educational Attainment Required
Master’s or PhD in a quantitative discipline (e.g., Data Science, Statistics, Computer Science, Economics).
Previous Work Experience
5+ years of hands-on experience in data science, preferably within the FMCG or retail domain.
Skills Required
Proven track record of building and deploying Marketing Mix Models and/or media attribution models.
Deep knowledge of optimization techniques (e.g., linear programming, genetic algorithms, constrained optimization).
Advanced programming skills in Python (pandas, scikit-learn, statsmodels, TensorFlow).
Expertise in PySpark for distributed data processing and transformation.
Experience with Git and GitHub Actions for collaborative development and CI/CD pipelines.
Strong grounding in statistics, experimental design (A/B testing), and causal inference.
Preferred Skills
Experience working with syndicated retail data (e.g., Nielsen, IRI) and media data (e.g., Meta, Google Ads).
Exposure to cloud platforms like AWS, GCP, or Azure.
Familiarity with FMCG metrics (e.g., brand health, share of shelf, volume uplift, promotional ROI).
Ability to translate complex models into business actions in cross-functional environments.
And above all of this, an undying love for beer! We dream big to create a future with more cheers.
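The role emphasises optimization techniques such as linear programming for budget allocation. As an illustrative sketch only, the snippet below allocates a fixed budget across three channels with SciPy's linprog; the ROI coefficients, total budget, and channel caps are made-up numbers, and in practice a resource-allocation model would derive them from a fitted marketing-mix model.

```python
# Illustrative budget-allocation LP: maximise expected uplift subject to a total
# budget and per-channel caps. All numbers below are invented for the example.
from scipy.optimize import linprog

roi = [1.8, 1.3, 0.9]            # assumed incremental sales per unit spend for TV, digital, trade
total_budget = 100.0             # hypothetical total budget
channel_caps = [60.0, 50.0, 40.0]

# linprog minimises, so negate the ROI vector to maximise total uplift.
res = linprog(
    c=[-r for r in roi],
    A_ub=[[1.0, 1.0, 1.0]],      # spend across channels cannot exceed the budget
    b_ub=[total_budget],
    bounds=list(zip([0.0] * 3, channel_caps)),
    method="highs",
)
print(dict(zip(["tv", "digital", "trade"], res.x)), "expected uplift:", -res.fun)
```

Constrained or genetic-algorithm variants would follow the same shape: an objective built from the response model, plus business constraints expressed as bounds and inequalities.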
Posted 2 months ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Derangula Consulting Inc. offers business intelligence, IT consultancy & support, and staffing services tailored to meet your business needs. Our team of experts is committed to helping you maximize the benefits of our services and support your business growth.

Looking for a Python Developer for Time Series Forecasting (10 Years of Historical Data)
We are seeking an experienced Python developer with a strong background in data analysis and forecasting to work on a key component of our Business Intelligence (BI) system.

Project Overview:
You will be working with 10 years of historical data to develop a forecasting model that captures underlying trends and seasonality. The forecasted data will be integrated into our existing BI reports to enhance strategic decision-making. A sketch of this kind of pipeline appears at the end of this posting.

Requirements:
• Proficiency in Python, especially with data science libraries like pandas, NumPy, scikit-learn, statsmodels, Prophet, or similar
• Experience with time series analysis and forecasting techniques
• Ability to clean, preprocess, and analyze historical data
• Build and validate forecasting models based on previous trends
• Generate forecasted datasets that align with business metrics and timelines
• Deliver output suitable for integration into BI tools such as Power BI, Qlik, or Tableau

🎯 Deliverables:
• A Python-based forecasting pipeline or script
• Forecasted data for relevant KPIs
• Documentation on the model used and how to update it in the future
• Optional: visualizations or charts demonstrating forecasting accuracy/trends

🧠 Nice to Have:
• Familiarity with BI tools or experience working alongside BI teams
• Understanding of domain-specific KPIs (e.g., finance, sales, operations, etc.)

If you’re passionate about turning historical data into actionable insights and have a track record of delivering reliable forecasts, we’d love to hear from you.
Feel free to call us: 9390571086
Mail: derangulamahesh61@gmail.com
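For the forecasting work described above, one common approach with the listed libraries is a seasonal ARIMA model from statsmodels. The sketch below is a minimal, assumption-laden example: the CSV name, column names, monthly frequency, and model orders are placeholders that would normally come from the BI extract and from model selection on the 10 years of history.

```python
# Hedged sketch of a seasonal forecasting pipeline with statsmodels SARIMAX.
# File names, columns, and (p,d,q)(P,D,Q,s) orders are placeholders.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("monthly_kpi.csv", parse_dates=["month"], index_col="month")
series = df["revenue"].asfreq("MS")  # 10 years of monthly data is ~120 points

model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.get_forecast(steps=12)
out = forecast.summary_frame()        # mean forecast plus confidence intervals
out.to_csv("forecast_for_bi.csv")     # hand-off file for Power BI / Qlik / Tableau
```

Prophet or a scikit-learn regressor with lag features could slot into the same pipeline; the hand-off to the BI layer stays a flat, dated table either way.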
Posted 2 months ago
2.0 - 4.0 years
2 - 8 Lacs
Gurgaon
On-site
Machine Learning Engineer (L1)
Experience Required: 2-4 years

As a Machine Learning Engineer at Spring, you’ll help bring data-driven intelligence into our products and operations. You’ll support the development and deployment of models and pipelines that power smarter decisions, more personalized experiences, and scalable automation. This is an opportunity to build hands-on experience in real-world ML and AI systems while collaborating with experienced engineers and data scientists.

You’ll work on data processing, model training, and integration tasks, gaining exposure to the entire ML lifecycle, from experimentation to production deployment. You’ll learn how to balance model performance with system requirements, and how to structure your code for reliability, observability, and maintainability.

You’ll use modern ML/AI tools such as scikit-learn, Hugging Face, and LLM APIs, and be encouraged to explore AI techniques that improve our workflows or unlock new product value. You’ll also be expected to help build and support automated data pipelines, inference services, and validation tools as part of your contributions.

You’ll work closely with engineering, product, and business stakeholders to understand how models drive value. Over time, you’ll build the skills and judgment needed to identify impactful use cases, communicate technical trade-offs, and contribute to the broader evolution of ML at Spring.

What You’ll Do
Support model development and deployment across structured and unstructured data and AI use cases.
Build and maintain automated pipelines for data processing, training, and inference.
Use ML and AI tools (e.g., scikit-learn, LLM APIs) in day-to-day development.
Collaborate with engineers, data scientists, and product teams to scope and deliver features.
Participate in code reviews, testing, and monitoring practices.
Integrate ML systems into customer-facing applications and internal tools.
Identify differences in data distribution that could affect model performance in real-world applications.
Stay up to date with developments in the machine learning industry.

Tech Expectations
Core Skills: Curiosity, attention to detail, strong debugging skills, and eagerness to learn through feedback. Solid foundation in statistics and data interpretation. Strong understanding of data structures, algorithms, and software development best practices. Exposure to data pipelines, model training and evaluation, or training workflows.
Languages - Must Have: Python, SQL
ML Algorithms - Must Have: Traditional modeling techniques (e.g., tree models, Naive Bayes, logistic regression); ensemble methods (e.g., XGBoost, Random Forest, CatBoost, LightGBM). An illustrative tuning sketch appears at the end of this posting.
ML Libraries / Frameworks - Must Have: scikit-learn, Hugging Face, statsmodels, Optuna. Good to Have: SHAP, Pytest
Data Processing / Manipulation - Must Have: pandas, NumPy
Data Visualization - Must Have: Plotly, Matplotlib
Version Control - Must Have: Git
Others - Good to Have: AWS (e.g., EC2, SageMaker, Lambda), Docker, Airflow, MLflow, GitHub Actions
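Since the must-have stack pairs ensemble methods with Optuna, the sketch below shows one plausible way they fit together: cross-validated tuning of an XGBoost classifier on synthetic data. The dataset, search ranges, and metric are illustrative choices, not Spring's actual setup.

```python
# Small sketch of the listed stack: an ensemble model tuned with Optuna and
# evaluated with scikit-learn cross-validation. The dataset is synthetic.
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def objective(trial):
    model = XGBClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 500),
        max_depth=trial.suggest_int("max_depth", 2, 8),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        eval_metric="logloss",
    )
    # Mean ROC-AUC across folds is the value Optuna tries to maximise.
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```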
Posted 2 months ago
0.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Location: Gurugram, India

Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.

Job Profile
Statistical Arbitrage & Strategy Development
Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS).
Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation (an illustrative pairs sketch appears at the end of this posting).
Optimize execution using transaction cost analysis (TCA).

Correlation & Volatility Analysis
Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions.
Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions.
Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.

Seasonality & Pattern Recognition
Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets.
Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.

Back Testing & Automation
Build Python-based back-testing frameworks (Backtrader, Qlib) to validate strategies.
Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards.
Integrate Bloomberg/Refinitiv APIs for real-time data feeds.

Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years

Skill Set
Must have: Strong grasp of probability theory, stochastic calculus (Itô's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
Back testing: Backtrader, Zipline, or custom event-driven frameworks.
Data handling: SQL, Dask (for large datasets).
Excel tooling: Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH), and VBA scripting for various tools and automation.
Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical computing).
Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing.
Credit spreads, CDS pricing, and bond-CDS basis arbitrage.
Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics.
Understanding of CCIL, NDS-OM (Indian market infrastructure).
Ability to translate intuition and patterns into quant models.
Strong problem-solving and communication skills (must be able to explain complex models to non-quants).
Comfortable working in a fast-paced work environment.
Work hours will be aligned to APAC markets.
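As a rough illustration of the Engle-Granger workflow named in the job profile, the sketch below tests two simulated price series for cointegration with statsmodels, estimates a hedge ratio by OLS, and z-scores the spread as a mean-reversion signal. The simulated data and the 60-period rolling window are assumptions; real inputs would come from Bloomberg/Refinitiv feeds.

```python
# Engle-Granger style pairs check on simulated data: cointegration test,
# OLS hedge ratio, and a rolling z-score of the spread as a signal.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=1000))               # shared stochastic trend
a = pd.Series(100 + common + rng.normal(scale=0.5, size=1000))
b = pd.Series(50 + 0.5 * common + rng.normal(scale=0.5, size=1000))

t_stat, p_value, _ = coint(a, b)                        # Engle-Granger two-step test
hedge = sm.OLS(a, sm.add_constant(b)).fit().params.iloc[1]
spread = a - hedge * b
zscore = (spread - spread.rolling(60).mean()) / spread.rolling(60).std()
print(f"cointegration p-value={p_value:.3f}, latest spread z={zscore.iloc[-1]:.2f}")
```

A Kalman filter would replace the static OLS hedge ratio with a time-varying one; the downstream signal logic stays the same.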
Posted 2 months ago
0.0 - 5.0 years
5 - 20 Lacs
Gurgaon
On-site
Assistant Manager
Requisition: EXL/AM/1349734
Services, Gurgaon
Posted On: 30 May 2025
End Date: 14 Jul 2025
Required Experience: 0 - 5 Years

Basic Section
Number Of Positions: 1
Band: B1
Band Name: Assistant Manager
Cost Code: D003152
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: Backfill
Max CTC: 500000.0000 - 2000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational
Group: Analytics
Sub Group: Banking & Financial Services
Organization: Services
LOB: Services
SBU: Analytics
Country: India
City: Gurgaon
Center: Gurgaon-SEZ BPO Solutions

Skills
Skill: PYTHON, SQL
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Description
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities:
Design, build, and maintain data pipelines using Python and PySpark (an illustrative pipeline sketch appears at the end of this posting).
Develop and optimize SQL queries for data extraction and transformation.
Create interactive dashboards and visualizations using Tableau.
Implement data models to support analytics and business needs.
Collaborate with cross-functional teams to understand data requirements.
Ensure data integrity, security, and governance across platforms.
Utilize version control tools like Git and Bitbucket for code management.
Leverage SAS and banking domain knowledge to improve data insights.

Required Skills:
Strong proficiency in Python and PySpark for data processing.
Advanced SQL skills for data manipulation and querying.
Experience with Tableau for data visualization and reporting.
Familiarity with database systems and data warehousing concepts.

Preferred Skills:
Knowledge of SAS and its applications in data analysis.
Experience working in the banking domain.
Understanding of version control systems, specifically Git and Bitbucket.
Knowledge of pandas, NumPy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy.

Qualifications:
Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
Excellent problem-solving and analytical skills.
Ability to work collaboratively in a fast-paced environment.

Workflow
Workflow Type: L&S-DA-Consulting
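To make the pipeline responsibilities concrete, here is a hedged PySpark sketch of a batch job that filters settled transactions and writes daily spend aggregates. The storage paths, column names, and banking example are hypothetical and are not taken from the posting.

```python
# Illustrative PySpark batch pipeline: read raw records, filter, aggregate, write.
# Paths and column names below are placeholders for the sketch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-aggregates").getOrCreate()

txns = spark.read.parquet("s3://raw-zone/transactions/")   # hypothetical location

daily = (
    txns.filter(F.col("status") == "SETTLED")
        .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
        .agg(
            F.sum("amount").alias("daily_spend"),
            F.count("*").alias("txn_count"),
        )
)

# Partitioning by date keeps downstream Tableau extracts and SQL queries cheap.
daily.write.mode("overwrite").partitionBy("txn_date").parquet("s3://curated-zone/daily_spend/")
```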
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Roles & Responsibilities: • Be responsible for the development of the conceptual, logical, and physical data models • Work with application/solution teams to implement data strategies, build data flows and develop/execute logical and physical data models • Implement and maintain data analysis scripts using SQL and Python. • Develop and support reports and dashboards using Google Plx/Data Studio/Looker. • Monitor performance and implement necessary infrastructure optimizations. • Demonstrate ability and willingness to learn quickly and complete large volumes of work with high quality. • Demonstrate excellent collaboration, interpersonal communication and written skills with the ability to work in a team environment. Minimum Qualifications • Hands-on experience with design, development, and support of data pipelines • Strong SQL programming skills (Joins, sub queries, queries with analytical functions, stored procedures, functions etc.) • Hands-on experience using statistical methods for data analysis • Experience with data platform and visualization technologies such as Google PLX dashboards, Data Studio, Tableau, Pandas, Qlik Sense, Splunk, Humio, Grafana • Experience in Web Development like HTML, CSS, jQuery, Bootstrap • Experience in Machine Learning Packages like Scikit-Learn, NumPy, SciPy, Pandas, NLTK, BeautifulSoup, Matplotlib, Statsmodels. • Strong design and development skills with meticulous attention to detail. • Familiarity with Agile Software Development practices and working in an agile environment • Strong analytical, troubleshooting and organizational skills • Ability to analyse and troubleshoot complex issues, and proficiency in multitasking • Ability to navigate ambiguity is great • BS degree in Computer Science, Math, Statistics or equivalent academic credentials
Posted 2 months ago
5.0 - 9.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Job Summary
ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.

About The Role
As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.

Responsibilities
Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models.
Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making (an illustrative geospatial sketch appears at the end of this posting).
Design and implement algorithms for predictive and causal modeling.
Evaluate and fine-tune model performance.
Communicate recommendations based on insights to both technical and non-technical stakeholders.

Requirements
A Ph.D. in computer science, statistics, or a related field.
5+ years of experience in data science.
Experience in geospatial data science is an added advantage.
Proficiency in Python (Pandas, NumPy, scikit-learn, PyTorch, statsmodels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage.
Strong communication and presentation skills.
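Given that geospatial experience with GeoPandas and Shapely is called out, the sketch below shows one illustrative spatial slice: loading project records with coordinates, projecting to a metric CRS, and counting projects within 5 km of a site. The file name, columns, site location, and CRS choices are assumptions made for the example.

```python
# Hedged GeoPandas/Shapely sketch: build a GeoDataFrame from lon/lat columns and
# count records inside a 5 km buffer around a hypothetical site.
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

df = pd.read_csv("projects.csv")  # assumed to contain lon and lat columns
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df["lon"], df["lat"]), crs="EPSG:4326"
)

site = gpd.GeoSeries([Point(78.48, 17.38)], crs="EPSG:4326")  # hypothetical site near Hyderabad
# Project to a metric CRS (UTM zone 44N here) so the 5 km buffer is meaningful,
# then bring the buffer back to lon/lat for the containment check.
buffer_5km = site.to_crs(epsg=32644).buffer(5000).to_crs(epsg=4326).iloc[0]
nearby = gdf[gdf.within(buffer_5km)]
print(len(nearby), "projects within 5 km of the site")
```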
Posted 2 months ago