High-Performance Time Series
Become the time-series domain expert for your organization
Become the Time Series Expert for Your Organization
The High-Performance Time Series Forecasting Course is designed to teach Business Analysts and Data Scientists how to reduce forecast error using state-of-the-art techniques that have won forecasting competitions. You'll undergo a complete transformation, learning the most in-demand skills that organizations need right now. It's time to accelerate your career.
Discover What's Inside the High-Performance Time Series Forecasting Course
Crafted For Business Analysts & Data Scientists
Who need to reduce forecasting error and scale results for their organizations.
Everything You Need to Become the Time Series Expert for Your Organization
549 Lessons, 45.8 Hours of Video, and 3 Challenges to Test Your Skills
This is possibly my most challenging course ever. You'll learn the time series skills that have taken me 10 years of study, practice, and experimentation.
My talk on High-Performance Time Series Forecasting
This course gives you the tools you need to meet today's forecasting demands.
A full year was spent building two of the software packages you'll learn, modeltime and timetk. Plus, I'm teaching you GluonTS, a state-of-the-art deep learning framework for time series written in Python.
This course will challenge you. It will change you. It did me.
- Matt Dancho, Course Instructor & Founder of Business Science
Undergo a Complete Transformation
By learning forecasting techniques that get results
With High-Performance Forecasting, you will undergo a complete transformation by learning the most in-demand skills for creating high-accuracy forecasts.
Through this course, you will learn and apply:
- Machine Learning & Deep Learning
- Feature Engineering
- Visualization & Data Wrangling
- Transformations
- Hyperparameter Tuning
- Forecasting at Scale (Time Series Groups)
Get started now!
How it works
Your path to becoming an Expert Forecaster is simplified into 3 streamlined steps.
Part 1
Time Series Feature Engineering
First, we build your time series feature engineering skills (a short code sketch follows this list). You learn:
- Visualization: Identifying features visually using the most effective plotting techniques
- Data Wrangling: Aggregating, padding, cleaning, and extending time series data
- Transformations: Rolling, Lagging, Differencing, Creating Fourier Series, and more
- Feature Engineering: Over 3 hours of content on introductory and advanced feature engineering
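To give you a feel for Part 1, here is a minimal sketch of a timetk feature engineering pipeline. It assumes timetk's built-in bike_sharing_daily dataset and illustrative lag, rolling-window, and Fourier settings; it is not the course's exact code.

```r
library(tidyverse)
library(timetk)

# Illustrative data: timetk ships with a daily bike-sharing dataset
data_tbl <- bike_sharing_daily %>%
  select(date = dteday, value = cnt)

# Visualization: plot the series to spot trend & seasonality
data_tbl %>%
  plot_time_series(date, value, .interactive = FALSE)

# Data wrangling: pad any missing days, then extend 30 days into the future
data_prep_tbl <- data_tbl %>%
  pad_by_time(date, .by = "day") %>%
  future_frame(date, .length_out = 30, .bind_data = TRUE)

# Transformations & feature engineering: lags, rolling means, Fourier terms
features_tbl <- data_prep_tbl %>%
  tk_augment_lags(value, .lags = c(7, 14, 28)) %>%
  tk_augment_slidify(value_lag28, .f = mean, .period = c(7, 30),
                     .align = "center", .partial = TRUE) %>%
  tk_augment_fourier(date, .periods = c(7, 365), .K = 2)
```

Note that the rolling means are computed on a lagged column rather than the raw value, which keeps those features computable over the 30-day future window.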
Part 2
Machine Learning for Time Series
Next, we build your time series machine learning skills (a short modeltime sketch follows this list). You learn:
- 17 Algorithms: 8 hours of content on 17 top algorithms, divided into 5 groups:
  - ARIMA
  - Prophet
  - Exponential Smoothing - ETS, TBATS, Seasonal Decomposition
  - Machine Learning - Elastic Net, MARS, SVM, KNN, Random Forest, XGBOOST, Cubist, NNET & NNETAR
  - Boosted Algorithms - Prophet Boost & ARIMA Boost
- Hyperparameter Tuning: Strategies to reduce overfitting & increase model performance
- Time Series Groups: Scale your analysis from one time series to hundreds
- Parallel Processing: Needed to speed up hyperparameter tuning and forecasting at scale
- Ensembling: Combining many algorithms into a single super learner
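As a taste of the Part 2 workflow, here is a hedged sketch of fitting two of the algorithms and comparing them with modeltime. The dataset, the 3-month test window, and the model settings are illustrative assumptions, not the course's solutions.

```r
library(tidymodels)
library(modeltime)
library(timetk)

# Illustrative train/test split on a single daily series
splits <- bike_sharing_daily %>%
  select(date = dteday, value = cnt) %>%
  time_series_split(date, assess = "3 months", cumulative = TRUE)

# Model 1: Auto ARIMA
model_arima <- arima_reg() %>%
  set_engine("auto_arima") %>%
  fit(value ~ date, data = training(splits))

# Model 2: Prophet with yearly seasonality
model_prophet <- prophet_reg(seasonality_yearly = TRUE) %>%
  set_engine("prophet") %>%
  fit(value ~ date, data = training(splits))

# Compare out-of-sample accuracy on the hold-out window
modeltime_table(model_arima, model_prophet) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```

The same table-based workflow extends to the hyperparameter tuning, grouped (panel) forecasting, and ensembling covered later in Part 2.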
Part 3
Deep Learning for Time Series
Next, we build your time series deep learning skills (a short GluonTS sketch follows this list). You learn:
- GluonTS: A state-of-the-art forecasting package that's built on top of MXNet (made by Amazon)
- Algorithms: Learn DeepAR, DeepVAR, NBEATS, and more!
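And to give a flavor of Part 3, here is a minimal modeltime.gluonts sketch of a DeepAR model trained across a group of time series. It assumes timetk's m4_daily panel dataset, illustrative (untuned) hyperparameters, and a working GluonTS/Python environment.

```r
library(tidymodels)
library(modeltime)
library(modeltime.gluonts)
library(timetk)

# DeepAR spec: the hyperparameter values below are illustrative, not tuned
model_deepar <- deep_ar(
    id                = "id",   # column identifying each time series
    freq              = "D",    # daily frequency
    prediction_length = 28,     # forecast horizon
    lookback_length   = 56,     # history window fed to the network
    epochs            = 5
  ) %>%
  set_engine("gluonts_deepar") %>%
  fit(value ~ date + id, data = m4_daily)

# Build a 28-day future frame per group and forecast it
future_tbl <- m4_daily %>%
  group_by(id) %>%
  future_frame(date, .length_out = 28) %>%
  ungroup()

modeltime_table(model_deepar) %>%
  modeltime_forecast(new_data = future_tbl, actual_data = m4_daily) %>%
  plot_modeltime_forecast(.interactive = FALSE)
```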
Challenges & Cheat Sheets
Finally, we help you test and lock in your new skills. You get:
- Cheat Sheets: Developed to make your forecasting workflow reproducible on any problem
- Challenges: Designed to test your abilities & solidify your knowledge
Summary of what you get
- A methodical training plan that goes from concept to production ($10,000 value)
- Part 1 - Feature Engineering with Timetk
- Part 2 - Machine Learning with Modeltime
- Part 3 - Deep Learning with GluonTS
- Challenges & Cheat Sheets
$10,000 Value
Get started now!
Your Instructor
Matt Dancho, Founder of Business Science and general business & finance guru, has worked with many clients, from Fortune 500 companies to high-octane startups! Matt loves educating data scientists on how to apply powerful tools within their organizations to yield ROI. He doesn't rest until he gets results (literally, he doesn't sleep, so don't be surprised if he responds to your email at 4 AM)!
Course Curriculum
- High-Performance Time Series - Become the Time Series Expert for Your Organization (2:34)
- Private Slack Channel - How to Join
- Video Subtitles (Captions)
- What is a High-Performance Forecasting System?
- [IMPORTANT] System Requirements - R + Python Requirements & Common Issues
- Would You Like To Become An Affiliate (And Earn 20% On Your Sales)?
- The Forecasting Competition Review & Course Progression (3:34)
- 2014 Kaggle Walmart Recruiting Challenge (5:11)
- 2018 M4 Competition (3:37)
- 2018 Kaggle Wikipedia Website Traffic Forecasting Competition (4:30)
- 2020 M5 Competition (5:59)
- 5 Key Takeaways from the Forecast Competition Review (5:41)
- Establish Relationships, Part 1 - Google Analytics Summary Dataset (4:11)
- Establish Relationships, Part 2 - Google Analytics Top 20 Pages (5:23)
- Build Relationships - Mailchimp & Learning Lab Events (4:49)
- Generate Course Revenue - Transaction Revenue & Product Events (3:03)
- 🔽 Code Checkpoint (File Download) (0:54)
- Why is Variance Reduction Important? (4:43)
- Log - Log (and Log1P) Transformation (4:17)
- Log - Assessing the Benefit of Log1P Transformation (2:51)
- Log - Groups & Inversion (3:43)
- Box Cox - What is the Box Cox Transformation? (2:34)
- Box Cox - Assessing the Benefit (4:04)
- Box Cox - Inversion (2:05)
- Box Cox - Managing Grouped Transformations & Inversion (8:36)
- Introduction to Rolling & Smoothing (1:49)
- 🔽 Rolling Windows - What is a Moving Average? (File Download) (3:53)
- Rolling Windows - Moving Average & Median Applied (8:53)
- Loess Smoother (7:02)
- Rolling Correlation - Slidify, Part 1 (4:16)
- Rolling Correlation - Slidify, Part 2 (7:40)
- [BUSINESS SPOTLIGHT] The Problem with Forecasting using a Moving Average (6:43)
- Introduction to Lags & Differencing (1:08)
- Lags - What is a Lag? (1:49)
- Lags - Lag Detection with ACF/PACF (3:54)
- Lags - Regression with Lags (5:06)
- Differencing - Growth vs Change (4:00)
- Differencing - Acceleration (6:22)
- Differencing - Comparing Multiple Time Series (4:44)
- Differencing - Inversion (0:57)
- What is the Log Interval Transformation? (5:47)
- Visualizing the Transformation (4:12)
- Transformations & Preprocessing (5:09)
- Modeling (6:29)
- Preparing Future Data (3:36)
- Making Predictions (1:05)
- Combining the Forecast Data (4:08)
- Estimating Confidence Intervals (8:24)
- Visualizing Confidence Intervals (2:10)
- Inverting the Log Interval Transformation (4:08)
- The Time Series Signature (7:55)
- Feature Removal (3:28)
- Linear Trend (2:10)
- Non-Linear Trend - Basis Splines (4:41)
- Non-Linear Trend - Natural Splines (Stiffer than Basis Splines) (4:29)
- Seasonal Features - Weekday & Month (3:21)
- Seasonal Features - Combining with Trend (5:23)
- 🔽 Solution, Part 1 (File Download) - Collect & Prepare Data (3:49)
- Solution, Part 2 - Visualizations (3:19)
- Solution, Part 3A - Create Full Dataset (5:46)
- Solution, Part 3B - Visualize the Full Dataset (3:47)
- Solution, Part 4 - Model/Forecast Data Split (1:05)
- Solution, Part 5 - Train/Test Data Split (0:56)
- Solution, Part 6 - Feature Engineering (4:18)
- Solution, Part 7 - Modeling: Spline Model (6:08)
- Solution, Part 8 - Modeling: Lag Model (2:25)
- Solution, Part 9 - Modeltime (4:03)
- Solution, Part 10 - Forecast (6:49)
- 🔽 Setup (File Download) - Modeltime New Features (1:53)
- Expedited Forecasting - Modeltime Table (5:20)
- Expedited Forecasting - Skip Straight to Forecasting (2:20)
- Visualizing a Fitted Model (2:57)
- Calibration - In-Sample vs Out-of-Sample Accuracy (5:25)
- Residual Diagnostics - Getting Residuals (2:16)
- Residuals - Time Plot (2:39)
- Residuals - Plot Customization (2:29)
- Residuals - ACF Plot (4:06)
- Residuals - Seasonality Plot (3:50)
- Auto-Regressive Functions: ar() & arima() (5:15)
- Auto-Regressive (AR) Modeling with Linear Regression (3:11)
- Single-Step Forecast for AR Models (4:43)
- Multi-Step Recursive Forecasting for AR Models (4:44)
- Integration (Differencing) (5:42)
- Moving Average (MA) Process (Error Modeling) (7:36)
- Seasonal ARIMA (SARIMA) (4:29)
- Adding XREGS (SARIMAX) (4:44)
- Implementing Auto ARIMA in Modeltime (1:49)
- How Auto ARIMA Works - Lazy Grid Search (1:27)
- Comparing ARIMA & Auto ARIMA (3:15)
- Adding Fourier Features to Pick Up More than 1 Seasonality (3:49)
- Adding Event Features to Improve R-Squared (Variance Explained) (1:33)
- Refitting & Reviewing the Forecast (2:57)
- Adding Month Features to Account for February Increase - BEST MAE 0.564 (3:35)
- 🔽 Solution, Part 1 - Train/Test Setup (Solution File Download) (1:55)
- Solution, Part 2 - ARIMA (Model 1): Basic Auto ARIMA (3:03)
- Solution, Part 3 - ARIMA (Model 2): Auto ARIMA + Adding Product Events (2:14)
- Solution, Part 4 - ARIMA (Model 3): Auto ARIMA + Events + Seasonality (2:08)
- Solution, Part 5 - ARIMA (Model 4): Forcing Seasonality with Manual ARIMA (1:17)
- Solution, Part 6 - ARIMA (Model 5): Auto ARIMA + Events + Fourier Series (0:57)
- Solution, Part 7 - ARIMA - Modeltime Workflow (2:26)
- Solution, Part 8 - ARIMA - Forecast Review (3:18)
- Solution, Part 9 - Prophet Models: Basic (6), Yearly Seasonality (7), Events (8), Events + Fourier (9) (2:52)
- Solution, Part 10 - Prophet - Modeltime Workflow (1:38)
- Solution, Part 11 - Prophet - Forecast Review (3:13)
- Solution, Part 12 - Exponential Smoothing Models: ETS (10), TBATS (11) (3:24)
- Solution, Part 13 - Exponential Smoothing - Modeltime Workflow (1:45)
- Solution, Part 14 - Exponential Smoothing - Forecast Review (1:30)
- Solution, Part 15 - Forecasting the Future Data - ARIMA, Prophet & ETS/TBATS (3:40)
- Solution, Part 16 - Final Review - ARIMA, Prophet, & ETS/TBATS (2:47)
- Strengths/Weakness - KNN & Tree-Based Algorithms Can't Predict Beyond the Min/Max (1:24)
- KNN vs GLMNET - Making Sample Data with Trend (2:08)
- KNN vs GLMNET - Making Simple Trend Models (4:12)
- KNN vs GLMNET - Visualize the Trend Predictions w/ Modeltime - Yikes, GLMNET just schooled KNN (4:14)
- Organizing in a Modeltime Table (4:22)
- Updating the Descriptions Programmatically (4:02)
- Model Selection - Process & Tips (using Accuracy Table) (3:39)
- Model Inspection - Process & Tips (using Test Forecast Visualization) (3:03)
- Model Inspection - Visualizing the Future Forecast (5:42)
- Recipe for Prophet Boost (3:33)
- Model Strategy - Using XGBOOST for Seasonality/XREG Modeling (4:39)
- Workflow - No Parameter Tweaking (3:41)
- 💡 [KEY CONCEPT] Prophet Boost - Modeling Trend with Prophet, Residuals with XGBoost (3:00)
- Prophet Boost - Tweaking Parameters - BEST MAE 0.457 🚀 (6:33)
- What are Sequential Models? (& Why do we need to tune them differently?) (2:55)
- Extracting the Workflow from a Modeltime Table: pluck_modeltime_model() (1:40)
- Time Series Cross Validation (TSCV) Specification, Part 1: time_series_cv() (4:34)
- Time Series Cross Validation (TSCV), Part 2: plot_time_series_cv_plan() (4:14)
- Identify Tuning Parameters - Recipe Spec (3:07)
- Identify Tuning Parameters - Model Spec (5:14)
- Make a Grid for Parameters - Grid Spec (5:55)
- Introduction to Meta-Learner Ensembling with Modeltime Ensemble (3:57)
- Resampling: Time Series Cross Validation (TSCV) Strategy (5:17)
- Making Sub-Model CV Predictions - modeltime_fit_resamples() (4:27)
- Resampling & Sub-Model Prediction: K-Fold Strategy (6:28)
- Linear Regression Stack - TSCV - RMSE 1.00 (Ouch!) 🤮 (7:16)
- Linear Regression Stack - K-Fold - RMSE 0.651 (Much Better, but We Can Do Better) 😀 (3:25)
- GLMNET Stack - RMSE 0.641 (On the right track) 👍 (6:38)
- Modeltime Ensemble - In-Sample Prediction Error - Bug Squashed (1:10)
- Random Forest Stack - RMSE 0.587!!! (7% improvement) 🤑🚀 (4:33)
- Neural Net Stack - RMSE 0.643 (4:05)
- XGBoost Stack - RMSE 0.585!!! 💥💥💥 (4:29)
- Cubist Stack - RMSE 0.649 (3:11)
- SVM Stack - RMSE 0.608!! 💪 (3:26)
- Data Understanding (4:33)
- Data Prep, Part 1: Padding by Group | Ungrouped Log Transformation (3:53)
- Data Prep, Part 2: Extend by Group (2:44)
- Data Prep, Part 3: Fourier Features & Lag Features by Group (6:03)
- Data Prep, Part 4: Rolling Features by Group | Adding a Row ID (4:59)
- Future & Prepared Data - Preparation (7:34)
- Panel Model 1: Prophet with Regressors (2:11)
- UPDATE: HARDHAT 1.0.0 FIX
- Panel Model 2: XGBoost (2:41)
- Panel Model 3: Prophet Boost (1:57)
- Panel Model 4: SVM (Radial) (2:02)
- Panel Model 5: Random Forest (1:31)
- Panel Model 6: Neural Net (1:27)
- Panel Model 7: MARS (1:27)
- Accuracy Check - This will help us select models for tuning (3:22)
- Tuning Resamples: K-Fold Cross Validation (2:45)
- Panel Model 8: XGBoost Tuned | Tunable Workflow Spec (3:37)
- Panel Model 8: XGBoost Tuned | Hyperparameter Tuning (8:12)
- Panel Model 9: Random Forest Tuned | Tunable Workflow Spec (1:56)
- Panel Model 9: Random Forest Tuned | Hyperparameter Tuning (3:28)
- Panel Model 10: MARS Tuned | Tunable Workflow Spec (2:00)
- Panel Model 10: MARS Tuned | Hyperparameter Tuning (3:07)
- Ensemble Average (Mean) & Sub-Model Selection (2:47)
- Accuracy (Test Set, No Inversion) (1:18)
- Forecast Visualization (Test Set, Inverted) (3:57)
- Accuracy by Group (Test Set, Inverted): summarize_accuracy_metrics() [MAE 46 💪] (4:29)
- Refitted Ensemble & Future Forecast (6:11)
- Ensemble Median: Avoid Overfitting (3:29)
- Getting the Data | GA Webpage Visits Daily (2:17)
- Full Data | Padding the Data (4:02)
- Alternative Padding Strategy
- Full Data | Log1P Transformation (Target) (1:01)
- Full Data | Extend (Future Frame) (1:41)
- Full Data | Group-Wise Fourier Series (2:33)
- Full Data | Group-Wise Adding Lagged Features (1:47)
- Full Data | Group-Wise Rolling Features (3:10)
- Full Data | Adding a Row ID (0:52)
- Data Prepared | skimr::skim() - Watch out for missing data (2:11)
- Future Data | skimr::skim() - Watch out for missing data (4:07)
- Split Data Prepared (Train/Test) (2:15)
- Visually Inspect the Train/Test Splits - Inspect for missing groups (3:37)
- Modeltime GluonTS Recipe (4:07)
- DeepAR (Model 1) | Understanding deep_ar() & Training Our 1st Model (9:56)
- DeepAR (Model 1) | Model Accuracy Evaluation (MAE 0.546) (4:07)
- Ahhh My Model Errored (Skimr to the Rescue!) (3:59)
- DeepAR (Model 2) | Adjusting Hyperparameters (4:19)
- DeepAR (Model 2) | Model Accuracy Evaluation (MAE 0.537) (1:49)
- DeepAR (Model 3) | Scaling by Group (3:31)
- DeepAR (Model 3) | Model Accuracy (MAE 0.509) (1:17)
- N-BEATS (Model 4) | Understanding nbeats() & Training Our 1st N-BEATS Model (9:57)
- N-BEATS (Model 5) | Improving our model with a new loss_function (MAE 0.611) (4:25)
- N-BEATS (Model 6) | Ensemble Multiple N-BEATS (7:09)
- N-Beats (Model 6) | Model Accuracy (MAE: 0.544) (3:04)
- Future Forecast | Inspect Refitted Models (6:01)