
64 Statsmodels Jobs - Page 2

Set up a Job Alert
JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Madurai

On-site

Job Location: Madurai Job Experience: 4-15 Years Model of Work: Work From Office Technologies: Artificial Intelligence Machine Learning Functional Area: Software Development Job Summary: Job Title: ML Engineer – TechMango Location: TechMango, Madurai Experience: 4+ Years Employment Type: Full-Time Role Overview We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments. Key Responsibilities: Core Technical Skills Languages & Databases Programming Language: Python Databases: SQL Core Libraries & Tools Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet Machine Learning Models: State-of-the-art ML models, including boosting and ensemble methods Model Explainability: SHAP, LIME Deep Learning & Data Processing Frameworks: PyTorch, PyTorch Forecasting Libraries: Pandas, NumPy, PySpark, Polars (optional) Hyperparameter Tuning Tools: Optuna, Amazon SageMaker Automatic Model Tuning Deployment & MLOps Model Deployment: Batch & real-time with API endpoints Experiment Tracking: MLFlow Model Serving: TorchServe, SageMaker Endpoints / Batch Containerization & Pipelines Containerization: Docker Orchestration: AWS Step Functions, SageMaker Pipelines AWS Cloud Stack SageMaker (Training, Inference, Tuning) S3 (Data Storage) CloudWatch (Monitoring) Lambda (Trigger-based inference) ECR / ECS / Fargate (Container Hosting) Candidate Requirements Strong problem-solving and analytical mindset Hands-on experience with end-to-end ML project lifecycle Familiarity with MLOps workflows in production environments Excellent communication and documentation skills Comfortable working in agile, cross-functional teams
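For readers scanning this listing, the sketch below gives a minimal, hedged illustration of the kind of time-series forecasting work described (statsmodels SARIMAX on a synthetic monthly series with a 12-month holdout). The series, the (1,1,1)x(1,1,1,12) order, and the validation window are assumptions for the example, not details from the employer.

```python
# Minimal SARIMAX forecasting sketch (illustrative only; synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("2015-01-01", periods=120, freq="MS")
noise = np.random.default_rng(0).normal(0, 0.5, 120)
y = pd.Series(10 + 0.05 * np.arange(120)
              + 2 * np.sin(2 * np.pi * np.arange(120) / 12) + noise, index=idx)

train, test = y[:-12], y[-12:]          # hold out the last 12 months

model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.get_forecast(steps=12)
pred = forecast.predicted_mean          # point forecasts
ci = forecast.conf_int()                # 95% prediction intervals

mape = float(np.mean(np.abs((test - pred) / test))) * 100
print(f"MAPE on the 12-month holdout: {mape:.2f}%")
```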

Posted 4 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Location: Remote | Commitment: Full-time Company Overview Insight Fusion Analytics turns complex data into actionable insight for clients across finance, retail, and professional sport. Our sports-analytics unit builds predictive systems that transform raw match, athlete, and biomechanical data into winning strategies. Role Summary We’re hiring a Lead Statistician with deep expertise in sports prediction to architect, validate, and continuously refine our forecasting engines. You’ll own the statistical core—from feature-engineering pipelines through probabilistic calibration—working with ML engineers and domain analysts to produce production-ready forecasts that thrive in real-world conditions. What You’ll Do Model Architecture & Validation – Design Bayesian and frequentist frameworks (hierarchical Elo, Poisson-Gamma, state-space models) and build leakage-proof cross-validation strategies. Feature Engineering & Experimental Design – Derive advanced spatio-temporal, biometric, and contextual features; run A/B and multivariate tests to quantify lift. Uncertainty Quantification – Produce calibrated predictive intervals, scenario simulations, and decision-theoretic metrics (Brier, CRPS, EVaR). Mentorship & Review – Set statistical standards, review code/notebooks, and mentor junior analysts. Stakeholder Communication – Translate complex statistical results into concise recommendations for coaches, product managers, and executives. Must-Have Qualifications 8+ years professional experience (or PhD + 5 years ) in applied statistics, econometrics, or quantitative social science. Documented track record building sports prediction systems. Expert proficiency with Python (NumPy, SciPy, Pandas, statsmodels, PyMC/Stan) and SQL; R a plus. Mastery of resampling methods, hierarchical models, time-series analysis, Monte-Carlo simulation, and causal inference. Proven success preventing data leakage and look-ahead bias in live pipelines. Strong communication skills for both technical and non-technical audiences. Nice-to-Have Familiarity with deep-learning frameworks (TensorFlow/PyTorch) for hybrid stat-ML architectures. Experience deploying models on AWS, GCP, or Azure using containerized workflows. Publications or conference talks in sports analytics (MIT Sloan, NESSIS, MathSport). How to Apply Send the following to insightfusionanalytics@gmail.com : CV highlighting sports-analytics projects and publications. Portfolio or repo links demonstrating end-to-end statistical modelling for sports prediction. A one-page brief describing your proudest predictive model: objective, methodology, error analysis, and business impact.
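As a small, hedged illustration of the Poisson-style scoring frameworks this posting references, the sketch below fits a statsmodels Poisson GLM to invented match data; the features, values, and column names are made up to keep the example self-contained and do not represent Insight Fusion's actual models.

```python
# Toy Poisson goal model: goals ~ Poisson(exp(b0 + b1*home + b2*attack_diff)).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

matches = pd.DataFrame({
    "goals":       [2, 1, 0, 3, 1, 2, 0, 1],
    "home":        [1, 0, 1, 1, 0, 0, 1, 0],
    "attack_diff": [0.4, -0.2, -0.5, 0.8, 0.1, 0.3, -0.3, -0.1],
})

model = smf.glm("goals ~ home + attack_diff", data=matches,
                family=sm.families.Poisson()).fit()
print(model.summary())

# Expected goals for a home side with a +0.5 attack-rating edge.
new = pd.DataFrame({"home": [1], "attack_diff": [0.5]})
print("Expected goals:", float(model.predict(new).iloc[0]))
```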

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior ML Engineer | Minimum 4 to 8+ years of experience in ML development in a product-based company | Location: Bangalore (Onsite)

Why should you choose us? Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan's newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let's build the future of mobile telecommunications together!

Required Skills and Expertise: Must have experience working in a product-based company. Build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers. Manipulate and analyse large-scale datasets using Python, Pandas, NumPy, and Dask. Apply advanced fine-tuning techniques (Full Fine-Tuning, PEFT) and strategies to large language and vision models. Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc. Develop and deploy scalable APIs for ML models using FastAPI (see the sketch below). Perform data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh. Collaborate with cross-functional teams to deliver end-to-end ML solutions. Deploy machine learning models for diverse business applications in cloud-native and on-premise environments. Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models. Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production. Stay current with the latest advancements in machine learning, deep learning, and AI.

Our commitment to you: Rakuten Group's mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, we aim to advance and enrich society. To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value.

RAKUTEN SHUGI PRINCIPLES: Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success. Always improve, always advance – only be satisfied with complete success (Kaizen). Be passionately professional – take an uncompromising approach to your work and be determined to be the best. Hypothesize, Practice, Validate (Shikumika) – use the Rakuten Cycle to succeed in unknown territory. Maximize Customer Satisfaction – the greatest satisfaction for workers in a service industry is to see their customers smile. Speed!! Speed!! Speed!! – always be conscious of time, take charge, set clear goals, and engage your team.
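The stack above lists FastAPI for serving ML models. The sketch below is a minimal, hypothetical /predict endpoint wrapping a toy scikit-learn classifier; the endpoint name, payload schema, and iris model are illustrative assumptions, not Rakuten's actual service.

```python
# Minimal FastAPI inference service (illustrative sketch, toy model).
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="demo-ml-service")

# Train a toy model at startup; a real service would load a serialized artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    row = [[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]]
    return {"predicted_class": int(model.predict(row)[0])}

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```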

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Company Description: We are looking for a Data Analyst with 3+ years of experience. BlueOptima is on a mission to maximize the economic and social value that software engineering organizations are capable of delivering. Our vision is to become the global reference for the optimization of the performance of Software Engineering. Our technology is used by some of the world's largest organizations, including nine of the world's top twelve Universal Banks, and a number of large corporates. We are a global organization with headquarters in London and additional offices in India, Mexico, and the US. We are made up of 100+ individuals from more than 20 different countries. We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment.

Location: Bangalore. Department: Data Engineering.

Job summary: Our ground-breaking technology is built on top of billions of data points that are representative of a developer's interaction with source code and task-tracking systems. The enormous amount of data BlueOptima processes daily requires specialists to dive into the dataset, identify insights from the data points, and devise solutions to extend and enhance BlueOptima's product suite. We are looking for talented data analysts who are critical of data and curious to determine the story it narrates, explore vast datasets, and are aptly able to use any and all tools available at their disposal to interrogate the data. A successful candidate will turn data into information, information into insight, and insight into valuable product features.

Responsibilities and tasks: Collaborate with the marketing team to produce impactful technical whitepapers by conducting thorough data collection and analysis and contributing to content development. Partner with the Machine Learning and Data Engineering team to develop and implement innovative solutions for our Developer Analytics and Team Lead Dashboard products. Provide insightful data analysis and build actionable dashboards to empower data-driven decision-making across business teams (Sales, Customer Success, Marketing). Deliver compelling data visualizations and reports using tools like Tableau and Grafana to communicate key insights to internal and external stakeholders. Identify and implement opportunities to automate data analysis, reporting, and dashboard creation processes to improve efficiency.

Qualifications (Technical): Must have a minimum of 3 years of relevant work experience in Data Science, Data Analytics, or Business Intelligence. Demonstrate advanced SQL expertise, including performance tuning (indexing, query optimization) and complex data transformation (window functions, CTEs) for extracting insights from large datasets. Demonstrate intermediate-level Python skills for data analysis, including statistical modeling, machine learning (scikit-learn, statsmodels), and creating impactful visualizations, with a focus on writing well-documented, reusable code. Possess strong data visualization skills with proficiency in at least one enterprise-level tool (e.g., Tableau, Grafana, Power BI), including dashboard creation, interactive visualizations, and data storytelling.
Behavioral: Communicates well and expresses ideas clearly and thoughtfully. Comes up with a range of possible directions to analyse data when presented with ill-defined / open-ended problem statements. Provides rationale for each analytical direction, with pros and cons, without needing support. Showcases lateral thinking by approaching a problem from creative directions. Demonstrates strong analytical project management skills, with the ability to break down complex data analysis initiatives into well-defined phases (planning, data acquisition, EDA, modeling, visualization, communication), ensuring efficient execution and impactful outcomes.

Your career progression: At BlueOptima, we strive to strengthen your skills, widen your scope of work, and develop your career fast. For this role, you can expect to become more autonomous and start working on your own individual projects. This will also lead to supporting or managing a specific area of metrics for the business (e.g. revenue metrics) and potentially growing into a mentor or Team Lead position.

Additional Information – Why join our team? Culture and Growth: Global team with a creative, innovative, and welcoming mindset. Rapid career growth and the opportunity to be an outstanding and visible contributor to the company's success. Freedom to create your own success story in a high-performance environment. Training programs and Personal Development Plans for each employee.

Benefits: 33 days of holidays (18 annual leave + 7 sick leaves + 8 public and religious holidays). Contributions to your Provident Fund, which can be matched by the company above the statutory minimum as agreed. Private Medical Insurance provided by the company. Gratuity payments. Claim mobile/internet expenses and professional development costs. Leave Travel Allowance. Flexible work-from-home policy (2 days at home per week). Free drinks and snacks in the office. International travel opportunities. Global annual meet-up (most recent meet-ups have been held in Thailand and Cancun). High-quality equipment (ergonomic chairs and 32-inch screens).

Stay connected with us on LinkedIn or keep an eye on our career page for future opportunities!

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Senior Data Scientist Location: Bangalore Reporting to: Senior Manager Analytics Purpose of the role We are looking for an experienced and strategic Senior Data Scientist to join our high-impact analytics team in the FMCG sector. In this role, you will lead the development of Resource Allocation Model that directly influences how we allocate marketing budgets, drive consumer demand, and enhance retail performance. This is a hands-on technical role requiring deep expertise in data science, machine learning, and business acumen to solve complex problems in a fast-paced, consumer-centric environment. Key tasks & accountabilities Develop, validate, and scale Resource Allocation Model to quantify the impact of Sales & Marketing packages on sales and brand performance. Implement optimization algorithms to inform budget allocation and maximize marketing ROI across geographies and product portfolios. Lead the development of predictive and prescriptive models to support commercial, trade, and brand teams. Leverage PySpark to manage and transform large-scale retail, media, and consumer datasets. Build and deploy ML models using Python and TensorFlow, ensuring robust model performance and business relevance. Collaborate with marketing, category, and commercial stakeholders to embed insights into strategic decisions. Use GitHub Actions for version control, CI/CD workflows, DVC for data versioning, and reproducible ML pipelines. Present findings through compelling data storytelling and dashboards for senior leadership. Mentor junior data scientists and contribute to a culture of innovation and excellence. 3. Qualifications, Experience, Skills Level Of Educational Attainment Required Master’s or PhD in a quantitative discipline (e.g., Data Science, Statistics, Computer Science, Economics). Previous Work Experience 5+ years of hands-on experience in data science, preferably within the FMCG or retail domain. Skills Required Proven track record of building and deploying Marketing Mix Models and/or media attribution models. Deep knowledge of optimization techniques (e.g., linear programming, genetic algorithms, constrained optimization). Advanced programming skills in Python (pandas, scikit-learn, statsmodels, TensorFlow). Expertise in PySpark for distributed data processing and transformation. Experience with Git and GitHub Actions for collaborative development and CI/CD pipelines. Strong grounding in statistics, experimental design (A/B testing), and causal inference. Master’s or PhD in a quantitative discipline (e.g., Data Science, Statistics, Computer Science, Economics). 
Preferred Skills: Experience working with syndicated retail data (e.g., Nielsen, IRI) and media data (e.g., Meta, Google Ads). Exposure to cloud platforms like AWS, GCP, or Azure. Familiarity with FMCG metrics (e.g., brand health, share of shelf, volume uplift, promotional ROI). Ability to translate complex models into business actions in cross-functional environments. And above all of this, an undying love for beer! We dream big to create a future with more cheers.
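The role above names linear programming among the optimization techniques used for budget allocation. Below is a minimal, hedged sketch using scipy.optimize.linprog to split a fixed budget across three channels under simple ROI and spend-cap constraints; every number here is invented for illustration.

```python
# Toy budget-allocation LP: maximize total return subject to a budget and caps.
import numpy as np
from scipy.optimize import linprog

roi = np.array([1.8, 1.4, 1.1])           # assumed incremental revenue per unit spend
budget = 100.0                            # total budget available
caps = [60.0, 50.0, 40.0]                 # assumed maximum spend per channel

# linprog minimizes, so negate ROI to maximize total return.
res = linprog(
    c=-roi,
    A_ub=[[1, 1, 1]], b_ub=[budget],      # total spend <= budget
    bounds=[(0, cap) for cap in caps],
    method="highs",
)
print("Optimal spend per channel:", np.round(res.x, 1))
print("Expected incremental revenue:", round(-res.fun, 1))
```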

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description: Derangula Consulting Inc. offers business intelligence, IT consultancy & support, and staffing services tailored to meet your business needs. Our team of experts is committed to helping you maximize the benefits of our services and support your business growth. We are looking for a Python Developer for Time Series Forecasting (10 years of historical data) — an experienced Python developer with a strong background in data analysis and forecasting to work on a key component of our Business Intelligence (BI) system.

Project Overview: You will be working with 10 years of historical data to develop a forecasting model that captures underlying trends and seasonality (a minimal sketch follows below). The forecasted data will be integrated into our existing BI reports to enhance strategic decision-making.

Requirements: • Proficiency in Python, especially with data science libraries like pandas, NumPy, scikit-learn, statsmodels, Prophet, or similar • Experience with time series analysis and forecasting techniques • Ability to clean, preprocess, and analyze historical data • Build and validate forecasting models based on previous trends • Generate forecasted datasets that align with business metrics and timelines • Deliver output suitable for integration into BI tools such as Power BI, Qlik, or Tableau

🎯 Deliverables: • A Python-based forecasting pipeline or script • Forecasted data for relevant KPIs • Documentation on the model used and how to update it in the future • Optional: visualizations or charts demonstrating forecasting accuracy/trends

🧠 Nice to Have: • Familiarity with BI tools or experience working alongside BI teams • Understanding of domain-specific KPIs (e.g., finance, sales, operations, etc.)

If you're passionate about turning historical data into actionable insights and have a track record of delivering reliable forecasts, we'd love to hear from you. Call us at 9390571086 or email derangulamahesh61@gmail.com.
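To make the deliverable concrete, here is a hedged sketch of the general shape of such a pipeline: Holt-Winters exponential smoothing from statsmodels fitted to ten years of synthetic monthly data, with the forecast exported as a CSV that a BI tool could ingest. The KPI, file name, and additive seasonal settings are assumptions for illustration only.

```python
# Illustrative forecasting pipeline step (synthetic data, assumed settings).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2014-01-01", periods=120, freq="MS")   # ten years, monthly
history = pd.Series(
    100 + 0.3 * np.arange(120) + 10 * np.sin(2 * np.pi * np.arange(120) / 12)
    + np.random.default_rng(1).normal(0, 3, 120),
    index=idx, name="kpi",
)

model = ExponentialSmoothing(history, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(24)             # two years ahead

out = pd.concat([history.rename("actual"), forecast.rename("forecast")], axis=1)
out = out.reset_index().rename(columns={"index": "month"})
out.to_csv("kpi_forecast.csv", index=False)   # hand-off file for Power BI / Tableau
```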

Posted 1 month ago

Apply

2.0 - 4.0 years

2 - 8 Lacs

Gurgaon

On-site

Machine Learning Engineer (L1) Experience Required: 2-4 years As a Machine Learning Engineer at Spring, you’ll help bring data-driven intelligence into our products and operations. You’ll support the development and deployment of models and pipelines that power smarter decisions, more personalized experiences, and scalable automation. This is an opportunity to build hands-on experience in real-world ML and AI systems while collaborating with experienced engineers and data scientists. You’ll work on data processing, model training, and integration tasks — gaining exposure to the entire ML lifecycle, from experimentation to production deployment. You’ll learn how to balance model performance with system requirements, and how to structure your code for reliability, observability, and maintainability. You’ll use modern ML/AI tools such as scikit-learn, HuggingFace, and LLM APIs — and be encouraged to explore AI techniques that improve our workflows or unlock new product value. You’ll also be expected to help build and support automated data pipelines, inference services, and validation tools as part of your contributions. You’ll work closely with engineering, product, and business stakeholders to understand how models drive value. Over time, you’ll build the skills and judgment needed to identify impactful use cases, communicate technical trade-offs, and contribute to the broader evolution of ML at Spring. What You’ll Do Support model development and deployment across structured and unstructured data and AI use cases. Build and maintain automated pipelines for data processing, training, and inference. Use ML and AI tools (e.g., scikit-learn, LLM APIs) in day-to-day development. Collaborate with engineers, data scientists, and product teams to scope and deliver features. Participate in code reviews, testing, and monitoring practices. Integrate ML systems into customer-facing applications and internal tools. Identify differences in data distribution that could affect model performance in real-world applications. Stay up to date with developments in the machine learning industry. Tech Expectations Core Skills Curiosity, attention to detail, strong debugging skills, and eagerness to learn through feedback Solid foundation in statistics and data interpretation Strong understanding of data structures, algorithms, and software development best practices Exposure to data pipelines, model training and evaluation, or training workflows Languages Must Have: Python, SQL ML Algorithms Must Have: Traditional modeling techniques (e.g., tree models, Naive Bayes, logistic regression) Ensemble methods (e.g., XGBoost, Random Forest, CatBoost, LightGBM) ML Libraries / Frameworks Must Have: scikit-learn, Hugging Face, Statsmodels, Optuna Good to Have: SHAP, Pytest Data Processing / Manipulation Must Have: pandas, NumPy Data Visualization Must Have: Plotly, Matplotlib Version Control Must Have: Git Others – Good to Have AWS (e.g., EC2, SageMaker, Lambda) Docker Airflow MLflow Github Actions
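To make the scikit-learn + Optuna pairing in the must-have list concrete, here is a small, hedged sketch of cross-validated hyperparameter tuning on a bundled toy dataset; the search space, metric, and trial count are arbitrary choices for the example rather than Spring's actual setup.

```python
# Optuna study tuning a scikit-learn random forest via 5-fold cross-validation.
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        "max_depth": trial.suggest_int("max_depth", 2, 12),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    clf = RandomForestClassifier(random_state=0, **params)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print("Best AUC:", study.best_value)
print("Best params:", study.best_params)
```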

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Location: Gurugram, India Position Summary We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting development and execution of quantitative strategies across financial markets. Job Profile Statistical Arbitrage & Strategy Development Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA). Correlation & Volatility Analysis Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly. Seasonality & Pattern Recognition Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends. Back Testing & Automation Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds. Requirements Education Qualifications B.Tech Work Experience 0-3 years Skill Set Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH). Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques. Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning). Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models). Back testing: Backtrader, Zipline, or custom event-driven frameworks. Data handling: SQL, Dask (for large datasets). Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH). VBA scripting for various tools and automation. Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistics). Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure). Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must explain complex models to non-quants). Comfortable working in a fast-paced work environment. Work hours will be aligned to APAC Markets.
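A hedged sketch of the Engle-Granger-style cointegration check this profile mentions, using statsmodels' coint() on two synthetic series that share a stochastic trend; in a real desk workflow the inputs would be bond or futures prices rather than simulated data.

```python
# Engle-Granger cointegration test plus a z-scored spread (synthetic data).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)
common = np.cumsum(rng.normal(0, 1, 500))        # shared stochastic trend
x = common + rng.normal(0, 0.5, 500)
y = 0.8 * common + rng.normal(0, 0.5, 500)

t_stat, p_value, _ = coint(y, x)
print(f"Engle-Granger t-stat = {t_stat:.2f}, p-value = {p_value:.3f}")

# Hedge ratio from OLS, then a z-scored spread as a simple mean-reversion signal.
beta = sm.OLS(y, sm.add_constant(x)).fit().params[1]
spread = y - beta * x
z = (spread - spread.mean()) / spread.std()
print("Latest spread z-score:", round(float(z[-1]), 2))
```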

Posted 1 month ago

Apply

0.0 - 5.0 years

5 - 20 Lacs

Gurgaon

On-site

Assistant Manager (EXL/AM/1349734) – Services, Gurgaon. Posted On: 30 May 2025 | End Date: 14 Jul 2025 | Required Experience: 0 - 5 Years | Number Of Positions: 1 | Band: B1 (Assistant Manager) | Cost Code: D003152 | Campus/Non Campus: Non Campus | Employment Type: Permanent | Requisition Type: Backfill | Max CTC: 500000 - 2000000 | Complexity Level: Not Applicable | Work Type: Hybrid – working partly from home and partly from office | Organisational Group: Analytics | Sub Group: Banking & Financial Services | Organization: Services | LOB: Services | SBU: Analytics | Country: India | City: Gurgaon | Center: Gurgaon-SEZ BPO Solutions | Skills: Python, SQL | Minimum Qualification: B.Tech/B.E | Certification: No data available | Workflow Type: L&S-DA-Consulting

Job Description: We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like GIT and BitBucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities: Design, build, and maintain data pipelines using Python and PySpark. Develop and optimize SQL queries for data extraction and transformation. Create interactive dashboards and visualizations using Tableau. Implement data models to support analytics and business needs. Collaborate with cross-functional teams to understand data requirements. Ensure data integrity, security, and governance across platforms. Utilize version control tools like GIT and BitBucket for code management. Leverage SAS and banking domain knowledge to improve data insights.

Required Skills: Strong proficiency in Python and PySpark for data processing. Advanced SQL skills for data manipulation and querying. Experience with Tableau for data visualization and reporting. Familiarity with database systems and data warehousing concepts.

Preferred Skills: Knowledge of SAS and its applications in data analysis. Experience working in the banking domain. Understanding of version control systems, specifically GIT and BitBucket. Knowledge of pandas, numpy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy.

Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, or a related field. Excellent problem-solving and analytical skills. Ability to work collaboratively in a fast-paced environment.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Roles & Responsibilities: • Be responsible for the development of the conceptual, logical, and physical data models • Work with application/solution teams to implement data strategies, build data flows and develop/execute logical and physical data models • Implement and maintain data analysis scripts using SQL and Python. • Develop and support reports and dashboards using Google Plx/Data Studio/Looker. • Monitor performance and implement necessary infrastructure optimizations. • Demonstrate ability and willingness to learn quickly and complete large volumes of work with high quality. • Demonstrate excellent collaboration, interpersonal communication and written skills with the ability to work in a team environment. Minimum Qualifications • Hands-on experience with design, development, and support of data pipelines • Strong SQL programming skills (Joins, sub queries, queries with analytical functions, stored procedures, functions etc.) • Hands-on experience using statistical methods for data analysis • Experience with data platform and visualization technologies such as Google PLX dashboards, Data Studio, Tableau, Pandas, Qlik Sense, Splunk, Humio, Grafana • Experience in Web Development like HTML, CSS, jQuery, Bootstrap • Experience in Machine Learning Packages like Scikit-Learn, NumPy, SciPy, Pandas, NLTK, BeautifulSoup, Matplotlib, Statsmodels. • Strong design and development skills with meticulous attention to detail. • Familiarity with Agile Software Development practices and working in an agile environment • Strong analytical, troubleshooting and organizational skills • Ability to analyse and troubleshoot complex issues, and proficiency in multitasking • Ability to navigate ambiguity is great • BS degree in Computer Science, Math, Statistics or equivalent academic credentials

Posted 1 month ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Job Summary: ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.

About The Role: As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.

Responsibilities: Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models. Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making. Design and implement algorithms for predictive and causal modeling. Evaluate and fine-tune model performance. Communicate recommendations based on insights to both technical and non-technical stakeholders.

Requirements: A Ph.D. in computer science, statistics, or a related field. 5+ years of experience in data science. Experience in geospatial data science is an added advantage. Proficiency in Python (Pandas, NumPy, scikit-learn, PyTorch, statsmodels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage. Strong communication and presentation skills.
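Because the role centres on large spatiotemporal datasets, here is a small illustrative sketch of clustering synthetic latitude/longitude points with scikit-learn's KMeans; a real pipeline would use project data (likely via GeoPandas), and all coordinates below are invented.

```python
# Toy spatial clustering of synthetic (lat, lon) points with KMeans.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
centers = np.array([[17.38, 78.48], [12.97, 77.59], [19.07, 72.88]])  # made-up sites
points = np.vstack([c + rng.normal(0, 0.05, (100, 2)) for c in centers])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print("Recovered cluster centroids (lat, lon):")
print(np.round(km.cluster_centers_, 3))
```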

Posted 1 month ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting development and execution of quantitative strategies across financial markets. Job Profile Statistical Arbitrage & Strategy Development Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA). Correlation & Volatility Analysis Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly. Seasonality & Pattern Recognition Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends. Back testing & Automation Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds. Requirements Education Qualifications B.Tech Work Experience 0-3 years Skill Set Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH). Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques. Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning). Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models). Back testing: Backtrader, Zipline, or custom event-driven frameworks. Data handling: SQL, Dask (for large datasets). Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH). VBA scripting for various tools and automation. Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistics). Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure). Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must explain complex models to non-quants). Comfortable working in a fast-paced work environment. Location: Gurugram. Work hours will be aligned to APAC markets.
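The skill set above names the arch package for GARCH models; this is a minimal, hedged sketch fitting a GARCH(1,1) to simulated daily returns and producing a one-step-ahead variance forecast. Real use would fit bond-yield or futures returns rather than random noise.

```python
# GARCH(1,1) fit and one-step variance forecast with the `arch` package.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
returns = rng.normal(0, 1, 1000)          # stand-in for daily % returns

model = arch_model(returns, vol="Garch", p=1, q=1, mean="Constant", dist="normal")
res = model.fit(disp="off")
print(res.summary())

fcast = res.forecast(horizon=1)
print("Next-day variance forecast:", float(fcast.variance.values[-1, 0]))
```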

Posted 1 month ago

Apply

0.0 years

4 - 5 Lacs

Gurgaon

On-site

Location Gurugram, India Position Summary We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting development and execution of quantitative strategies across financial markets. Job Profile Statistical Arbitrage & Strategy Development Design and implement pairs, mean-reversion, and relative value strategies in fixed income (govvies, corporate bonds, IRS). Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation. Optimize execution using transaction cost analysis (TCA). Correlation & Volatility Analysis Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions. Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions. Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly. Seasonality & Pattern Recognition Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets. Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends. Back testing & Automation Build Python-based back testing frameworks (Backtrader, Qlib) to validate strategies. Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards. Integrate Bloomberg/Refinitiv APIs for real-time data feeds. Requirements Education Qualifications B.Tech Work Experience 0-3 years Skill Set Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH). Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques. Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning). Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models). Back testing: Backtrader, Zipline, or custom event-driven frameworks. Data handling: SQL, Dask (for large datasets). Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH). VBA scripting for various tools and automation. Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistics). Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing. Credit spreads, CDS pricing, and bond-CDS basis arbitrage. Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics. Understanding of CCIL, NDS-OM (Indian market infrastructure). Ability to translate intuition and patterns into quant models. Strong problem-solving and communication skills (must explain complex models to non-quants). Comfortable working in a fast-paced work environment. Location: Gurugram. Work hours will be aligned to APAC markets.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge. Job Description: The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the need of patients, our people, and the planet. Bengaluru, the city, which is India’s epicenter of Innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement. At Takeda’s ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and abilities they bring to our company. We are continuously improving our collaborators journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team. The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process. Objectives: Collaborate with different business users, mainly Supply Chain/Manufacturing to understand the current state and identify opportunities to transform the business into a data-driven organization. Translate processes, and requirements into analytics solutions and metrics with effective data strategy, data quality, and data accessibility for decision making. Operationalize decision support solutions and drive use adoption as well as gathering feedback and metrics on Voice of Customer in order to improve analytics services. Understand the analytics drivers and data to be modeled as well as apply the appropriate quantitative techniques to provide business with actionable insights and ensure analytics model and data are access to the end users to evaluate “what-if” scenarios and decision making. Evaluate the data, analytical models, and experiments periodically to validate hypothesis ensuring it continues to provide business value as requirements and objectives evolve. Accountabilities: Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights. Work with internal and external partners to develop analytics vision and programs to advance BI solutions and practices. Understands data and sources of data. Strategizes with IT development team and develops a process to collect, ingest, and deliver data along with proper data models for analytical needs. Interacts with business users to define pain points, problem statement, scope, and analytics business case. Develops solutions with recommended data model and business intelligence technologies including data warehouse, data marts, OLAP modeling, dashboards/reporting, and data queries. 
Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications. Collaborates with Enterprise Data and Analytics Team to design data model and visualization solutions that synthesize complex data for data mining and discovery. Assists in defining requirements and facilitates workshops and prototyping sessions. Develops and applies technologies such as machine-learning, deep-learning algorithm to enable advanced analytics product functionality. EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS: Bachelors’ Degree, from an accredited institution in Data Science, Statistics, Computer Science, or related field. 3+ years of experience with statistical modeling such as clustering, segmentation, multivariate, regression, etc. and analytics tools such as R, Python, Databricks, etc. required Experience in developing and applying predictive and prescriptive modeling, deep-learning, or other machine learning techniques a plus. Hands-on development of AI solutions that comply with industry standards and government regulations. Great numerical and analytical skills, as well as basic knowledge of Python Analytics packages (Pandas, scikit-learn, statsmodels). Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources. Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline. Experience with BI and Visualization tools (f. e. Qlik, Power BI), ETL, NoSQL and proven design skills a plus. Excellent written and verbal communication skills including the ability to interact effectively with multifunctional teams. Experience with working with agile teams. WHAT TAKEDA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth. BENEFITS: It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are Competitive Salary + Performance Annual Bonus Flexible work environment, including hybrid working Comprehensive Healthcare Insurance Plans for self, spouse, and children Group Term Life Insurance and Group Accident Insurance programs Health & Wellness programs including annual health screening, weekly health sessions for employees. Employee Assistance Program 3 days of leave every year for Voluntary Service in additional to Humanitarian Leaves Broad Variety of learning platforms Diversity, Equity, and Inclusion Programs Reimbursements – Home Internet & Mobile Phone Employee Referral Program Leaves – Paternity Leave (4 Weeks) , Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days) ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization. 
Locations: IND - Bengaluru Worker Type: Employee Worker Sub-Type: Regular Time Type: Full time

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Manager- GBS Commercial Location: Bangalore Reporting to: Senior Manager - GBS Commercial Purpose of the role This role sits at the intersection of data science and revenue growth strategy, focused on developing advanced analytical solutions to optimize pricing, trade promotions, and product mix. The candidate will lead the end-to-end design, deployment, and automation of machine learning models and statistical frameworks that support commercial decision-making, predictive scenario planning, and real-time performance tracking. By leveraging internal and external data sources—including transactional, market, and customer-level data—this role will deliver insights into price elasticity, promotional lift, channel efficiency, and category dynamics. The goal is to drive measurable improvements in gross margin, ROI on trade spend, and volume growth through data-informed strategies. Key tasks & accountabilities Design and implement price elasticity models using linear regression, log-log models, and hierarchical Bayesian frameworks to understand consumer response to pricing changes across channels and segments. Build uplift models (e.g., Causal Forests, XGBoost for treatment effect) to evaluate promotional effectiveness and isolate true incremental sales vs. base volume. Develop demand forecasting models using ARIMA, SARIMAX, and Prophet, integrating external factors such as seasonality, promotions, and competitor activity. time-series clustering and k-means segmentation to group SKUs, customers, and geographies for targeted pricing and promotion strategies. Construct assortment optimization models using conjoint analysis, choice modeling, and market basket analysis to support category planning and shelf optimization. Use Monte Carlo simulations and what-if scenario modeling to assess revenue impact under varying pricing, promo, and mix conditions. Conduct hypothesis testing (t-tests, ANOVA, chi-square) to evaluate statistical significance of pricing and promotional changes. Create LTV (lifetime value) and customer churn models to prioritize trade investment decisions and drive customer retention strategies. Integrate Nielsen, IRI, and internal POS data to build unified datasets for modeling and advanced analytics in SQL, Python (pandas, statsmodels, scikit-learn), and Azure Databricks environments. Automate reporting processes and real-time dashboards for price pack architecture (PPA), promotion performance tracking, and margin simulation using advanced Excel and Python. 
Lead post-event analytics using pre/post experimental designs, including difference-in-differences (DiD) methods to evaluate business interventions. Collaborate with Revenue Management, Finance, and Sales leaders to convert insights into pricing corridors, discount policies, and promotional guardrails. Translate complex statistical outputs into clear, executive-ready insights with actionable recommendations for business impact. Continuously refine model performance through feature engineering, model validation, and hyperparameter tuning to ensure accuracy and scalability. Provide mentorship to junior analysts, enhancing their skills in modeling, statistics, and commercial storytelling. Maintain documentation of model assumptions, business rules, and statistical parameters to ensure transparency and reproducibility. Other Competencies Required Presentation Skills: Effectively presenting findings and insights to stakeholders and senior leadership to drive informed decision-making. Collaboration: Working closely with cross-functional teams, including marketing, sales, and product development, to implement insights-driven strategies. Continuous Improvement: Actively seeking opportunities to enhance reporting processes and insights generation to maintain relevance and impact in a dynamic market environment. Data Scope Management: Managing the scope of data analysis, ensuring it aligns with the business objectives and insights goals. Act as a steadfast advisor to leadership, offering expert guidance on harnessing data to drive business outcomes and optimize customer experience initiatives. Serve as a catalyst for change by advocating for data-driven decision-making and cultivating a culture of continuous improvement rooted in insights gleaned from analysis. Continuously evaluate and refine reporting processes to ensure the delivery of timely, relevant, and impactful insights to leadership stakeholders while fostering an environment of ownership, collaboration, and mentorship within the team. Technical Skills - Must Have Data Manipulation & Analysis: Advanced proficiency in SQL, Python (Pandas, NumPy), and Excel for structured data processing. Data Visualization: Expertise in Power BI and Tableau for building interactive dashboards and performance tracking tools. Modeling & Analytics: Hands-on experience with regression analysis, time series forecasting, and ML models using scikit-learn or XGBoost. Data Engineering Fundamentals: Knowledge of data pipelines, ETL processes, and integration of internal/external datasets for analytical readiness. Proficient in Power BI, Advanced MS Excel (Pivots, calculated fields, Conditional formatting, charts, dropdown lists, etc.), MS PowerPoint SQL & Python. Business Environment Work closely with Zone Revenue Management teams. Work in a fast-paced environment. Provide proactive communication to the stakeholders. This is an offshore role and requires comfort with working in a virtual environment. GCC is referred to as the offshore location. The role requires working in a collaborative manner with Zone/country business heads and GCC commercial teams. Summarize insights and recommendations to be presented back to the business. Continuously improve, automate, and optimize the process. Geographical Scope: Global 3. Qualifications, Experience, Skills Level Of Educational Attainment Required Bachelor or Post-Graduate in the field of Business & Marketing, Engineering/Solution, or other equivalent degree or equivalent work experience. 
Previous Work Experience: 5-8 years of experience in the Retail/CPG domain. Extensive experience solving business problems using quantitative approaches. Comfort with extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources. And above all of this, an undying love for beer! We dream big to create a future with more cheer.
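The key tasks above include log-log price elasticity models. The sketch below is a minimal, hedged illustration with statsmodels OLS on synthetic weekly data, where the coefficient on log(price) reads directly as the elasticity; the data, the true elasticity of -1.5, and the promo term are invented for the example.

```python
# Log-log elasticity regression on synthetic weekly sales data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 156                                   # three years of weekly observations
price = rng.uniform(80, 120, n)
promo = rng.integers(0, 2, n)
volume = np.exp(10 - 1.5 * np.log(price) + 0.3 * promo + rng.normal(0, 0.1, n))

df = pd.DataFrame({"volume": volume, "price": price, "promo": promo})
model = smf.ols("np.log(volume) ~ np.log(price) + promo", data=df).fit()

print(model.params)                       # coefficient on np.log(price) ~= elasticity
print(f"Estimated price elasticity: {model.params['np.log(price)']:.2f}")
```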

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Roles and Responsibilities: Analyze category performance across sales channels (D2C, marketplaces, offline). Track KPIs like revenue, ASP, margin, sell-through, stock cover, and inventory turns. Conduct pricing, discount, and profitability analysis at SKU and category levels. Identify top-performing or underperforming products and uncover performance drivers. Build dashboards and automated reports for category health and inventory planning. Collaborate with marketing, SCM, and category teams to inform business decisions. Perform trend, seasonality, and cohort analysis to improve demand forecasting. Use customer behavior data (views, clicks, conversions) to support assortment planning. Automate reporting workflows and optimize SQL/Python pipelines. Support new product launches with benchmarks and success prediction models.

Skills & Qualifications: 0–2 years of experience in a data analytics role, preferably in e-commerce or retail. Proficiency in MySQL: writing complex queries, joins, window functions. Advanced Excel/Google Sheets: pivot tables, dynamic dashboards, conditional formatting. Experience in Python: Pandas, automation scripts, statsmodels/scikit-learn. Comfort with data visualization: Power BI / Tableau / Looker Studio. Understanding of product lifecycle, inventory metrics, pricing levers, and customer insights. Strong foundation in statistics: descriptive stats, A/B testing, forecasting models (see the sketch below). Excellent problem-solving, data storytelling, and cross-functional collaboration skills.

Preferred / Bonus Skills: Experience with Shopify, Magento, or other e-commerce platforms. Familiarity with Google Analytics 4 (GA4). Knowledge of merchandising or visual analytics. Exposure to machine learning (e.g., clustering, success prediction). Experience with VBA or Google Apps Script for reporting automation.
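The posting above lists A/B testing among the required statistics skills; the sketch below is a small, hedged example of a conversion-rate comparison using statsmodels' two-sample proportions z-test, with counts invented purely for illustration.

```python
# Two-sample proportions z-test for an A/B conversion comparison (toy numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 560]        # variant A and B conversions (assumed)
sessions = [10_000, 10_000]     # sessions per variant (assumed)

z_stat, p_value = proportions_ztest(count=conversions, nobs=sessions)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Conversion-rate difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected at the 5% level.")
```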

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like GIT and BitBucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities: Design, build, and maintain data pipelines using Python and PySpark. Develop and optimize SQL queries for data extraction and transformation. Create interactive dashboards and visualizations using Tableau. Implement data models to support analytics and business needs. Collaborate with cross-functional teams to understand data requirements. Ensure data integrity, security, and governance across platforms. Utilize version control tools like GIT and BitBucket for code management. Leverage SAS and banking domain knowledge to improve data insights.

Required Skills: Strong proficiency in Python and PySpark for data processing. Advanced SQL skills for data manipulation and querying. Experience with Tableau for data visualization and reporting. Familiarity with database systems and data warehousing concepts.

Preferred Skills: Knowledge of SAS and its applications in data analysis. Experience working in the banking domain. Understanding of version control systems, specifically GIT and BitBucket. Knowledge of pandas, numpy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy.

Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, or a related field. Excellent problem-solving and analytical skills. Ability to work collaboratively in a fast-paced environment.
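As a tiny illustration of the Python + PySpark pipeline work described above, here is a hedged sketch that starts a local Spark session, loads a few made-up rows, and runs a grouped aggregation; the columns and values are invented, and a real pipeline would read from the bank's source systems instead.

```python
# Minimal PySpark aggregation step (local session, synthetic rows).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demo-pipeline").getOrCreate()

rows = [("savings", 1200.0), ("current", 450.5), ("savings", 300.0), ("loan", 9800.0)]
df = spark.createDataFrame(rows, ["product", "amount"])

summary = (df.groupBy("product")
             .agg(F.sum("amount").alias("total_amount"),
                  F.round(F.avg("amount"), 2).alias("avg_amount"))
             .orderBy("product"))

summary.show()
spark.stop()
```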

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Summary: UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling, state of the art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role would also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, develop and help implement robust and scalable solutions. You will be in a high visibility position with the opportunity to interact with the senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to be able to present your cutting-edge solutions to both technical and business leaderships. Responsibilities Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods and AI Be actively involved in understanding and converting business use cases to technical requirements for modelling. Query, analyze and extract insights from large-scale structured and unstructured data from different data sources utilizing different platforms, methods and tools like BigQuery, Google Cloud Storage, etc. Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling. Actively drive modelling of business problem into ML/AI models, work closely with the stakeholders for model evaluation and acceptance. Work closely with the MLOps team to productionize new models, support enhancements and resolving any issues within existing production AI applications. Prepare extensive technical documentation, dashboards and presentations for technical and business stakeholders including leadership teams. Qualifications Expertise in Python, SQL. Experienced in using data science-based packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc. Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.), machine learning techniques for regression, classification, clustering problems, including neural networks and deep learning. Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle. Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamline. Solid oral and written communication skills, especially around analytical concepts and methods.
Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audience. Master’s Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience. Bonus Qualifications NLP, Gen AI, LLM knowledge/experience Knowledge of Operations Research methodologies and experience with packages like CPLEX, PULP, etc. Knowledge and experience in MLOps principles and tools in GCP. Experience working in an Agile environment, understanding of Lean Agile principles. Type De Contrat en CDI Chez UPS, égalité des chances, traitement équitable et environnement de travail inclusif sont des valeurs clefs auxquelles nous sommes attachés. Show more Show less
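As a minimal illustration of the hypothesis-testing and regression skills this listing names (synthetic data and hypothetical feature names, not taken from the posting), the sketch below fits an OLS model with statsmodels and reads the coefficient p-values:

```python
# Minimal sketch: OLS regression and coefficient hypothesis tests with statsmodels.
# Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "shipment_volume": rng.normal(100, 15, 500),   # hypothetical feature
    "route_distance": rng.normal(250, 40, 500),    # hypothetical feature
})
# Synthetic target with a known linear relationship plus noise
df["delivery_time"] = (0.3 * df["shipment_volume"]
                       + 0.1 * df["route_distance"]
                       + rng.normal(0, 5, 500))

X = sm.add_constant(df[["shipment_volume", "route_distance"]])
model = sm.OLS(df["delivery_time"], X).fit()

# summary() reports coefficients, t-statistics, and p-values for the null
# hypothesis that each coefficient equals zero.
print(model.summary())
print(model.pvalues)
```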

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description

Job Summary
The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. The role also supports debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position works with multiple stakeholders across different levels of the organization to understand the business problem and to develop and help implement robust, scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership and bring innovation to the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
Be actively involved in understanding business use cases and converting them into technical requirements for modelling.
Query, analyze, and extract insights from large-scale structured and unstructured data from different data sources, using platforms, methods, and tools such as BigQuery and Google Cloud Storage.
Understand and apply appropriate methods for cleaning and transforming data and for engineering relevant features to be used in modelling.
Actively drive the modelling of business problems into ML/AI models; work closely with stakeholders on model evaluation and acceptance.
Work closely with the MLOps team to productionize new models, support enhancements, and resolve issues in existing production AI applications.
Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
Expertise in Python and SQL; experienced in using data science packages such as scikit-learn, NumPy, pandas, TensorFlow, Keras, and statsmodels.
Strong understanding of statistical concepts and methods (hypothesis testing, descriptive statistics, etc.) and of machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
Proficient in using GCP tools such as Vertex AI, BigQuery, and GCS for model development and other activities in the ML lifecycle.
Strong ownership and collaborative qualities in the relevant domain; takes initiative to identify and drive opportunities for improvement and process streamlining.
Solid oral and written communication skills, especially around analytical concepts and methods; able to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
NLP, Gen AI, or LLM knowledge/experience.
Knowledge of Operations Research methodologies and experience with packages such as CPLEX and PuLP.
Knowledge and experience of MLOps principles and tools in GCP.
Experience working in an Agile environment and understanding of Lean Agile principles.

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
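This listing also names querying large-scale data with BigQuery. As a hedged sketch only (the project and table names below are hypothetical, and valid GCP credentials plus the pandas/db-dtypes extras are assumed), pulling a query result into pandas looks like this:

```python
# Minimal sketch: query BigQuery into a pandas DataFrame.
# Project id and table are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

query = """
    SELECT route_id, AVG(delivery_minutes) AS avg_delivery_minutes
    FROM `my-gcp-project.ops.deliveries`          -- hypothetical table
    GROUP BY route_id
    ORDER BY avg_delivery_minutes DESC
    LIMIT 100
"""

df = client.query(query).to_dataframe()  # requires the pandas/db-dtypes extras
print(df.head())
```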

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India's epicenter of innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda's ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.

Objectives:
Collaborate with different business users, mainly in Supply Chain/Manufacturing, to understand the current state and identify opportunities to transform the business into a data-driven organization.
Translate processes and requirements into analytics solutions and metrics with an effective data strategy, data quality, and data accessibility for decision making.
Operationalize decision-support solutions and drive user adoption, gathering feedback and metrics on the Voice of the Customer in order to improve analytics services.
Understand the analytics drivers and the data to be modeled, apply the appropriate quantitative techniques to provide the business with actionable insights, and ensure that analytics models and data are accessible to end users to evaluate "what-if" scenarios and support decision making.
Evaluate the data, analytical models, and experiments periodically to validate hypotheses, ensuring they continue to provide business value as requirements and objectives evolve.

Accountabilities:
Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights.
Works with internal and external partners to develop the analytics vision and programs to advance BI solutions and practices.
Understands data and sources of data; strategizes with the IT development team and develops a process to collect, ingest, and deliver data along with proper data models for analytical needs.
Interacts with business users to define pain points, problem statement, scope, and the analytics business case.
Develops solutions with recommended data models and business intelligence technologies, including data warehouses, data marts, OLAP modeling, dashboards/reporting, and data queries.
Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications.
Collaborates with the Enterprise Data and Analytics Team to design data model and visualization solutions that synthesize complex data for data mining and discovery.
Assists in defining requirements and facilitates workshops and prototyping sessions.
Develops and applies technologies such as machine-learning and deep-learning algorithms to enable advanced analytics product functionality.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
Bachelor's degree from an accredited institution in Data Science, Statistics, Computer Science, or a related field.
3+ years of experience with statistical modeling, such as clustering, segmentation, multivariate analysis, regression, etc., and with analytics tools such as R, Python, Databricks, etc., required.
Experience in developing and applying predictive and prescriptive modeling, deep learning, or other machine learning techniques a plus.
Hands-on development of AI solutions that comply with industry standards and government regulations.
Great numerical and analytical skills, as well as basic knowledge of Python analytics packages (Pandas, scikit-learn, statsmodels).
Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources.
Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline.
Experience with BI and visualization tools (e.g. Qlik, Power BI), ETL, NoSQL, and proven design skills a plus.
Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.
Experience working with agile teams.

WHAT TAKEDA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Our benefits include:
Competitive salary + annual performance bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health & wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
3 days of leave every year for voluntary service, in addition to humanitarian leave
Broad variety of learning platforms
Diversity, Equity, and Inclusion programs
Reimbursements for home internet & mobile phone
Employee Referral Program
Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
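This role names clustering and segmentation with Python analytics packages. As a minimal, hedged sketch (synthetic data and placeholder column names, not Takeda's data or pipeline), a basic segmentation workflow with scikit-learn looks like this:

```python
# Minimal sketch: standardize features and cluster them into segments.
# Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "order_frequency": rng.poisson(5, 300),
    "avg_order_value": rng.gamma(2.0, 50.0, 300),
    "lead_time_days": rng.normal(12, 3, 300),
})

X = StandardScaler().fit_transform(df)                     # scale before k-means
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["segment"] = kmeans.labels_

print(df.groupby("segment").mean())                        # profile each segment
```

The number of clusters here is arbitrary; in practice it would be chosen with diagnostics such as silhouette scores or the elbow method.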

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

It's about Being What's next. What's in it for you?
A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, which is engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing and building new AI products for a vast number of use cases across Linde's business and value chain. You will collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team, alongside others in Linde's Global AI team. As a Data Scientist AI, you will support Linde's AI team in extending existing and building new AI products for a vast number of use cases across Linde's business and value chain.
At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Team
Making an impact. What will you do?
You will work directly with a variety of different data sources, types, and structures to derive actionable insights.
Developing, customizing, and managing AI software products based on Machine and Deep Learning backends will be among your tasks.
Your role includes strong support for the replication of existing products and pipelines to other systems and geographies.
In addition, you will support architectural design and the definition of data requirements for new developments.
It will be your responsibility to interact with business functions to identify opportunities with potential business impact and to support the development and deployment of models into production.

Winning in your role. Do you have what it takes?
You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research, or a related field.
You have a strong understanding of, and practical experience with, multivariate statistics, machine learning, and probability concepts.
Further, you have gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data.
You demonstrate hands-on experience with preprocessing, feature engineering, feature selection, and data cleansing on real-world datasets.
Preferably, you have work experience in an engineering or technology role.
You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.).
In addition, you have sound knowledge of data architectures and concepts, and practical experience in visualizing large datasets, e.g. with Tableau or Power BI.
A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation.
Very good English language skills are required.
As a plus, you have hands-on experience with DevOps and MS Azure, experience with Azure ML, Kedro, or Airflow, and experience with MLflow or similar.

Why you will love working for us!
Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies, and services which make our customers more successful and help to sustain and protect our planet.
On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment to sustainable development championed by both legacy organizations. It also takes forward the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it!
We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society, and the environment in every one of its business areas, regions, and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
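The preprocessing and feature-engineering experience this listing asks for can be illustrated with a small, hedged sketch (synthetic data, hypothetical sensor and site column names, not Linde's products): a scikit-learn ColumnTransformer that imputes, scales, and one-hot encodes features inside a single pipeline.

```python
# Minimal sketch: preprocessing + feature engineering in one sklearn pipeline.
# Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "flow_rate": rng.normal(100, 10, 200),           # hypothetical sensor feature
    "pressure": rng.normal(30, 5, 200),              # hypothetical sensor feature
    "plant_site": rng.choice(["A", "B", "C"], 200),  # hypothetical categorical feature
})
y = 0.5 * df["flow_rate"] - 0.2 * df["pressure"] + rng.normal(0, 2, 200)

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), ["flow_rate", "pressure"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plant_site"]),
])

model = Pipeline([("prep", preprocess), ("reg", Ridge(alpha=1.0))]).fit(df, y)
print(model.predict(df.head()))
```

Keeping preprocessing inside the pipeline means the same transformations are applied consistently at training and prediction time, which matters when pipelines are replicated to other systems and geographies.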

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Neo Group: Neo is a new-age, focused Wealth and Asset Management platform in India, catering to HNIs, UHNIs, and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency, and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 Billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 Million (₹3,000 Cr.) of Assets Under Management. We have recently partnered with Peak XV Partners via a USD 35 Million growth round. To know more, please visit: www.neo-group.in

Position: Senior Data Scientist
Location: Mumbai
Experience: 4-8 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, building, and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results in a clear manner
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before submitting code
• Collaborating with other team members and stakeholders
• Taking ownership of features and accountability

Requirements:
• 4+ years' experience in developing data models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Good experience with Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Experience building acquisition scorecard models
• Experience building behaviour scorecard models
• Experience creating threat detection models
• Experience creating risk profiling or classification models
• Experience building threat/fraud triggers from various sources of data
• Experience with data analysis libraries: NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment experience (Flask, TensorFlow Serving, Lambda functions, Docker) is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up to date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies, built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That's the kind of work you can expect to do at Neo.
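The scorecard and risk-profiling experience this listing asks for can be illustrated with a hedged sketch (fully synthetic data with hypothetical feature names, not Neo's models): a logistic-regression classifier evaluated with AUC, which is the usual starting point for acquisition or behaviour scorecards.

```python
# Minimal sketch: a scorecard-style binary classifier on synthetic data.
# Feature names and the target definition are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "utilisation_ratio": rng.uniform(0, 1, n),
    "months_on_book": rng.integers(1, 60, n),
    "late_payments_12m": rng.poisson(0.5, n),
})
# Synthetic default flag loosely driven by the features
logit = -2 + 3 * df["utilisation_ratio"] + 0.8 * df["late_payments_12m"]
df["default_flag"] = rng.binomial(1, 1 / (1 + np.exp(-logit.to_numpy())))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="default_flag"), df["default_flag"],
    test_size=0.3, random_state=7,
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

In a production scorecard, the fitted coefficients would typically be rescaled into points (for example via weight-of-evidence binning), but the modelling and validation flow is the same.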

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Neo Group: Neo is a new-age, focused Wealth and Asset Management platform in India, catering to HNIs, UHNIs, and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency, and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 Billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 Million (₹3,000 Cr.) of Assets Under Management. We have recently partnered with Peak XV Partners via a USD 35 Million growth round. To know more, please visit: www.neo-group.in

Position: Data Scientist
Location: Mumbai
Experience: 2-5 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, building, and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results in a clear manner
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before submitting code
• Collaborating with other team members and stakeholders
• Taking ownership of features and accountability

Requirements:
• 2+ years' experience in developing data models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Basic understanding of Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Experience building acquisition scorecard models
• Experience building behaviour scorecard models
• Experience creating threat detection models
• Experience creating risk profiling or classification models
• Experience building threat/fraud triggers from various sources of data
• Experience with data analysis libraries: NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment experience (Flask, TensorFlow Serving, Lambda functions, Docker) is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up to date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies, built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That's the kind of work you can expect to do at Neo.
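This listing also mentions Flask deployment as a plus. As a hedged sketch only (the model path, feature layout, and endpoint are hypothetical; TorchServe, Lambda, or Docker would be alternative deployment routes), a minimal prediction endpoint looks like this:

```python
# Minimal sketch: serve a pickled model behind a Flask prediction endpoint.
# "model.pkl" and the request payload shape are illustrative placeholders.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

with open("model.pkl", "rb") as f:    # hypothetical path to a trained model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                  # e.g. {"features": [[0.4, 12, 1]]}
    preds = model.predict(payload["features"])
    return jsonify({"predictions": [float(p) for p in preds]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```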

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Praxair India Private Limited | Business Area: Digitalisation
Data Scientist for AI Products (Global)
Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348

It's about Being What's next. What's in it for you?
A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, which is engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing and building new AI products for a vast number of use cases across Linde's business and value chain. You will collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team, alongside others in Linde's Global AI team. As a Data Scientist AI, you will support Linde's AI team in extending existing and building new AI products for a vast number of use cases across Linde's business and value chain.
At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Team
Making an impact. What will you do?
You will work directly with a variety of different data sources, types, and structures to derive actionable insights.
Developing, customizing, and managing AI software products based on Machine and Deep Learning backends will be among your tasks.
Your role includes strong support for the replication of existing products and pipelines to other systems and geographies.
In addition, you will support architectural design and the definition of data requirements for new developments.
It will be your responsibility to interact with business functions to identify opportunities with potential business impact and to support the development and deployment of models into production.

Winning in your role. Do you have what it takes?
You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research, or a related field.
You have a strong understanding of, and practical experience with, multivariate statistics, machine learning, and probability concepts.
Further, you have gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data.
You demonstrate hands-on experience with preprocessing, feature engineering, feature selection, and data cleansing on real-world datasets.
Preferably, you have work experience in an engineering or technology role.
You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.).
In addition, you have sound knowledge of data architectures and concepts, and practical experience in visualizing large datasets, e.g. with Tableau or Power BI.
A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation.
Very good English language skills are required.
As a plus, you have hands-on experience with DevOps and MS Azure, experience with Azure ML, Kedro, or Airflow, and experience with MLflow or similar.

Why you will love working for us!
Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies, and services which make our customers more successful and help to sustain and protect our planet.
On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment to sustainable development championed by both legacy organizations. It also takes forward the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it!
We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society, and the environment in every one of its business areas, regions, and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
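MLflow experience is listed as a plus here. As a minimal, hedged sketch (synthetic data, an illustrative run name, and a local tracking setup assumed), logging parameters, a metric, and a model to an MLflow run looks like this:

```python
# Minimal sketch: track an experiment run with MLflow.
# Data, parameters, and the run name are illustrative placeholders.
import numpy as np
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 2 + rng.normal(scale=0.5, size=200)

with mlflow.start_run(run_name="rf-baseline"):           # hypothetical run name
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestRegressor(**params, random_state=3).fit(X, y)
    mlflow.log_params(params)
    mlflow.log_metric("train_mae", mean_absolute_error(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")              # store the fitted model
```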

Posted 1 month ago

Apply

5.0 - 7.0 years

3 - 6 Lacs

Pune

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – AI and DATA – Statistical Modeler, Senior

As part of our EY GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Auto, Supply Chain, and Finance.

Technical Skills:
Statistical Programming Languages: Python, R
Libraries & Frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret
Data Manipulation Tools: SQL, Excel
Data Visualization Tools: Matplotlib, Seaborn, ggplot2
Machine Learning Techniques: supervised and unsupervised learning, model evaluation (cross-validation, ROC curves)
5-7 years of experience in building statistical forecast models for the pharma industry
Deep understanding of patient flows and treatment journeys across both oncology and non-oncology therapeutic areas (TAs)

What we look for
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching, and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction, and advisory services, we're using the finance products, expertise, and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
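The statistical forecasting skills at the core of this role can be illustrated with a minimal, hedged sketch (a synthetic monthly series, not pharma data, and Holt-Winters as just one of several reasonable statsmodels approaches alongside ARIMA/SARIMAX):

```python
# Minimal sketch: fit a seasonal Holt-Winters model and forecast 12 months ahead.
# The series is synthetic; in practice it would be, e.g., monthly demand or patient counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(11)
idx = pd.date_range("2021-01-01", periods=48, freq="MS")        # 4 years, monthly
trend = np.linspace(100, 160, 48)
seasonal = 10 * np.sin(2 * np.pi * np.arange(48) / 12)
series = pd.Series(trend + seasonal + rng.normal(0, 3, 48), index=idx)

fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast = fit.forecast(12)                                     # next 12 months
print(forecast.round(1))
```

Model choice and evaluation (e.g., rolling-origin cross-validation and error metrics such as MAPE) would follow the same statsmodels workflow.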

Posted 1 month ago

Apply