Time Series in Driverless AI¶
Time-series forecasting is one of the most common and important tasks in business analytics. There are many real-world applications, such as sales, weather, stock markets, and energy demand, just to name a few. At H2O, we believe that automation can help our users deliver business value in a timely manner. Therefore, we combined advanced time series analysis and our Kaggle Grand Masters’ time-series recipes into Driverless AI.
The key features/recipes that make automation possible are:
- Automatic handling of time groups (e.g., different stores and departments)
- Robust time-series validation
- Accounts for gaps and forecast horizon
- Uses past information only (i.e., no data leakage)
- Time-series-specific feature engineering recipes
- Date features like day of week, day of month, etc.
- AutoRegressive features, like optimal lag and lag-features interaction
- Different types of exponentially weighted moving averages
- Aggregation of past information (different time groups and time intervals)
- Target transformations and differentiation
- Integration with existing feature engineering functions (recipes and optimization)
- Automatic pipeline generation (See “From Kaggle Grand Masters’ Recipes to Production Ready in a Few Clicks” blog post.)
Understanding Time Series¶
Modeling Approach¶
Driverless AI uses GBMs, GLMs and neural networks with a focus on time-series-specific feature engineering. The feature engineering includes:
- Autoregressive elements: creating lag variables
- Aggregated features on lagged variables: moving averages, exponential smoothing, descriptive statistics, correlations
- Date-specific features: week number, day of week, month, year
- Target transformations: Integration/Differentiation, univariate transforms (like logs, square roots)
This approach is combined with AutoDL features as part of the genetic algorithm, and selection is still based on validation accuracy. In other words, the same transformations/genes apply, plus there are new transformations that come from time series. Some transformations (like target encoding) are deactivated.
When running a time-series experiment, Driverless AI builds multiple models by rolling the validation window back in time (and potentially using less and less training data).
User-Configurable Options¶
Gap and Horizon¶
The guiding principle for properly modeling a time series forecasting problem is to use the historical data in the model training dataset such that it mimics the data/information environment at scoring time (i.e. deployed predictions). Specifically, you want to partition the training set to account for: 1) the information available to the model when making predictions and 2) the length of predictions to make.
Given a training dataset, gap and prediction length are parameters that determine how to split the training dataset into training samples and validation samples.
Gap is the number of missing time bins between the end of the training set and the start of the test set (with regard to time). For example:
- Assume you have daily data with days 1/1, 1/2, 1/3, 1/4 in train.
- The corresponding time bins would be 1, 2, 3, 4 for a time period of 1 day.
- Given that, the first valid time bin to predict is 5.
- As a result, Gap = min(time bin test) - max(time bin train) - 1. For example, if the test set starts at time bin 6, the gap is 6 - 4 - 1 = 1 (one missing bin, day 5); see the sketch below.
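Below is a minimal sketch (not Driverless AI code) of how the gap follows from the time bins in the example above; the test bins are illustrative.

```python
# Illustrative gap calculation for the daily-data example above.
train_bins = [1, 2, 3, 4]   # days 1/1 .. 1/4 in the training set
test_bins = [6, 7]          # hypothetical test set starting at day 6

# Number of missing time bins between the end of train and the start of test.
gap = min(test_bins) - max(train_bins) - 1
print(gap)  # 1 -> day 5 is missing between train and test
```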
Quite often, it is not possible to have the most recent data available when applying a model (or it is costly to update the data table too often); hence models need to be built accounting for a “future gap”. For example, if it takes a week to update a certain data table, ideally we would like to predict “7 days ahead” with the data as it is “today”; hence a gap of 7 days would be sensible. Not specifying a gap and predicting 7 days ahead with the data as it will be in 7 days is unrealistic (and cannot happen, since we update the data on a weekly basis in this example).
Similarly, gap can be used by those who want to forecast further in advance. For example, if users want to know what will happen 7 days in the future, they can set the gap to 7 days.
Horizon (or prediction length) is the period that the test data spans (for example, one day, one week, etc.). In other words, it is the future period that the model can make predictions for.
The periodicity of updating the data may require model predictions to account for a significant time in the future. In an ideal world where data can be updated very quickly, predictions can always be made with the most recent data available. In that scenario there is no need for a model to predict cases that are well into the future; it can instead focus on maximizing short-term accuracy. However, this is not always the case, and a model may need to make predictions that span deep into the future because it may be too costly to make predictions every single day after the data gets updated.
In addition, not every future data point is equally easy to predict. For example, predicting tomorrow with today’s data is easier than predicting 2 days ahead with today’s data. Hence, specifying the horizon can facilitate building models that optimize prediction accuracy for these future time intervals.
Groups¶
Groups are categorical columns in the data that can significantly help predict the target variable in time series problems. For example, one may need to predict sales given information about stores and products. Being able to identify that the combination of store and product can lead to very different sales is key for predicting the target variable, as a big store or a popular product will have higher sales than a small store or an unpopular product.
For example, if the store is not identified in the data and we look at the distribution of sales over time (with all stores mixed together), it may look like this:
The same graph grouped by store gives a much clearer view of what the sales look like for different stores.
Lag¶
The primary generated time series features are lag features, which are a variable’s past values. At a given sample with time stamp \(t\), features at some time difference \(T\) (lag) in the past are considered. For example, if the sales today are 300, and sales of yesterday are 250, then the lag of one day for sales is 250. Lags can be created on any feature as well as on the target.
As previously noted, the training dataset is appropriately split such that the number of validation samples equals the number of testing dataset samples. To determine valid lags, we must consider what happens when we evaluate the model on the testing dataset. Essentially, the minimum lag size must be greater than the gap size.
Aside from the minimum usable lag, Driverless AI attempts to discover predictive lag sizes based on auto-correlation.
“Lagging” variables are important in time series because knowing what happened in different time periods in the past can greatly facilitate predictions for the future. Consider the following example showing lags of 1 and 2 days (a pandas sketch follows the table):
Date | Sales | Lag1 | Lag2 |
---|---|---|---|
1/1/2018 | 100 | - | - |
2/1/2018 | 150 | 100 | - |
3/1/2018 | 160 | 150 | 100 |
4/1/2018 | 200 | 160 | 150 |
5/1/2018 | 210 | 200 | 160 |
6/1/2018 | 150 | 210 | 200 |
7/1/2018 | 160 | 150 | 210 |
8/1/2018 | 120 | 160 | 150 |
9/1/2018 | 80 | 120 | 160 |
10/1/2018 | 70 | 80 | 120 |
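As an illustration (not the internal Driverless AI implementation), the Lag1 and Lag2 columns above can be reproduced with pandas, assuming daily data:

```python
import pandas as pd

# Rebuild the example table: lags are simply the Sales column shifted into the past.
df = pd.DataFrame({
    "Date": pd.date_range("2018-01-01", periods=10, freq="D"),
    "Sales": [100, 150, 160, 200, 210, 150, 160, 120, 80, 70],
})
df["Lag1"] = df["Sales"].shift(1)  # value 1 day ago
df["Lag2"] = df["Sales"].shift(2)  # value 2 days ago
print(df)
```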
Settings Determined by Driverless AI¶
Window/Moving Average¶
Using the above Lag table, a moving average of 2 would constitute the average of Lag1 and Lag2:
Date | Sales | Lag1 | Lag2 | MA2 |
---|---|---|---|---|
1/1/2018 | 100 | - | - | - |
2/1/2018 | 150 | 100 | - | - |
3/1/2018 | 160 | 150 | 100 | 125 |
4/1/2018 | 200 | 160 | 150 | 155 |
5/1/2018 | 210 | 200 | 160 | 180 |
6/1/2018 | 150 | 210 | 200 | 205 |
7/1/2018 | 160 | 150 | 210 | 180 |
8/1/2018 | 120 | 160 | 150 | 155 |
9/1/2018 | 80 | 120 | 160 | 140 |
10/1/2018 | 70 | 80 | 120 | 100 |
Aggregating multiple lags together (instead of just one) can add stability when predicting the target variable. The aggregation may include various lag values, for example lags [1-30], lags [20-40], or lags [7-70 by 7].
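Continuing the sketch above (again illustrative, not Driverless AI internals), the MA2 column is a 2-period moving average computed only from past values:

```python
import pandas as pd

# MA2 = mean(Lag1, Lag2): shift by 1 so only past values enter the rolling window.
sales = pd.Series([100, 150, 160, 200, 210, 150, 160, 120, 80, 70])
ma2 = sales.shift(1).rolling(window=2).mean()
print(ma2.tolist())
# [nan, nan, 125.0, 155.0, 180.0, 205.0, 180.0, 155.0, 140.0, 100.0]
```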
Exponential Weighting¶
Exponential weighting is a form of weighted moving average where more recent values have higher weight than less recent values. The weight decreases exponentially over time based on an alpha (a) hyperparameter in (0, 1), which is normally within the range [0.9, 0.99]. For example:
- Exponential Weight = a**(time)
- If sales 1 day ago = 3.0, sales 2 days ago = 4.5, and a = 0.95 (worked out in the sketch below):
- Exp. smooth = (3.0*(0.95**1) + 4.5*(0.95**2)) / ((0.95**1) + (0.95**2)) ≈ 3.73
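The same calculation written out as a short sketch (values and alpha taken from the example above):

```python
# Exponentially weighted average of past sales: weight = a**(days ago).
a = 0.95
past_sales = {1: 3.0, 2: 4.5}  # {days ago: sales}

weighted = sum(v * a**t for t, v in past_sales.items()) / sum(a**t for t in past_sales)
print(round(weighted, 2))  # 3.73
```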
Time Series Constraints¶
Dataset Size¶
Usually, the forecast horizon (prediction length) \(H\) equals the number of time periods in the testing data \(N_{TEST}\) (i.e., \(N_{TEST} = H\)). You want enough training data time periods \(N_{TRAIN}\) to score well on the testing dataset. At a minimum, the training dataset should contain at least three times as many time periods as the testing dataset (i.e., \(N_{TRAIN} \geq 3 \times N_{TEST}\)). This allows the training dataset to be split into a validation set with the same number of time periods as the testing dataset while maintaining enough historical data for feature engineering.
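A minimal sketch of this rule of thumb (the function name is hypothetical, not part of Driverless AI):

```python
# Rule of thumb: the training data should cover at least 3x as many
# time periods as the testing data (N_TRAIN >= 3 * N_TEST).
def has_enough_history(n_train_periods: int, n_test_periods: int) -> bool:
    return n_train_periods >= 3 * n_test_periods

print(has_enough_history(156, 39))  # True: 156 weekly periods vs. a 39-week horizon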
Time Series Use Case: Sales Forecasting¶
Below is a typical example of sales forecasting based on the Walmart competition on Kaggle. In order to frame it as a machine learning problem, we formulate the historical sales data and additional attributes as shown below:
Raw data
Data formulated for machine learning
The additional attributes are attributes that we will know at the time of scoring. In this example, we want to forecast the next week of sales, so all of the attributes included in our data must be known at least one week in advance. In this case, we assume that we will know whether or not a Store and Department will be running a promotional markdown. We will not use features like the temperature of the week, since we will not have that information at the time of scoring.
Once you have your data prepared in tabular format (see raw data above), Driverless AI can formulate it for machine learning and sort out the rest. If this is your very first session, the Driverless AI assistant will guide you through the journey.
Similar to previous Driverless AI examples, you need to select the dataset for training/test and define the target. For time-series, you need to define the time column (by choosing AUTO or selecting the date column manually). If weighted scoring is required (like the Walmart Kaggle competition), you can select the column with specific weights for different samples.
If you prefer to use automatic handling of time groups, you can leave the setting for time groups columns as AUTO.
Expert users can define specific time groups and change other settings as shown below. The Driverless AI time series expert settings provide a matrix of gap and prediction length combinations to choose from. The options for prediction length are based on quantiles of valid training dataset splits. Also, notice that the maximum prediction length (39 weeks) is set to exactly the size of the testing dataset. Driverless AI attempts to auto-detect the gap based on the training and testing datasets, and it indicates better splits (i.e., gap and prediction length combinations) by the brightness of the matrix cells.
Once the experiment is finished, you can make new predictions and download the scoring pipeline just like with any other Driverless AI experiment.
Time Series EXPERT SETTINGS¶
The user may further configure time series experiments with a dedicated set of options available through the EXPERT SETTINGS. The EXPERT SETTINGS panel is available from within the experiment page, right above the INTERPRETABILITY radar. This is a different panel from the one referenced above and can be accessed as shown below:
List of Time Series Parameters
Parameter | Explanation |
---|---|
lag-based recipe | If enabled, Driverless AI will attempt to generate lags and other autoregressive features as part of the feature engineering process. If disabled, feature engineering will be limited to what is used for regular tabular data. In both cases, the validation schema will always have models built on past data and evaluated on future data. |
lags override | A list of lags (as integers) may be provided to be used when generating time series features. For example, if [1,5,10] is passed, only lags of 1, 5, and 10 will be considered. |
probability to create non-target | When lag features are evaluated by Driverless AI’s genetic algorithm, they are most commonly derived from lags of the target variable. However, with a small probability, Driverless AI will also consider lagging other variables. Increasing this probability will result in Driverless AI evaluating other variables (besides the target) more often in the feature engineering process. |
Holiday Features | If enabled, Driverless AI will generate some time series features based on whether a specific day is a bank holiday or not. This is currently based on US bank holidays. |
Using a Driverless AI Time Series Model to Forecast¶
When you set the experiment’s forecast horizon, you are telling the Driverless AI experiment the dates this model will be asked to forecast for. In the Walmart Sales example, we set the Driverless AI forecast horizon to 1 (1 week in the future). This means that Driverless AI expects this model to be used to forecast 1 week after training ends. Since the training data ends on 2012-10-26, then this model should be used to score for the week of 2012-11-02.
What should the user do once the 2012-11-02 week has passed?
There are two options:
- Trigger a Driverless AI experiment to be trained once the forecast horizon ends
- a Driverless AI experiment will need to be re-trained every week
- Use Test Time Augmentation to update historical features so that we can use the same model to forecast outside of the forecast horizon
Test Time Augmentation refers to the process where the model stays the same but the features are refreshed using the latest data. In our Walmart Sales Forecasting example, a feature that may be very important is the Weekly Sales from the previous week. Once we move outside of the forecast horizon, our model no longer knows the Weekly Sales from the previous week. By performing Test Time Augmentation, Driverless AI will automatically generate these historical features if new data is provided.
In Option 1, we would launch a new Driverless AI experiment every week with the latest data and use the resulting model to forecast the next week. In Option 2, we would continue using the same Driverless AI experiment outside of the forecast horizon by using Test Time Augmentation.
Both options have their advantages and disadvantages. By re-training an experiment with the latest data, Driverless AI has the ability to possibly improve the model by changing the features used, choosing a different algorithm, and/or selecting different parameters. As the data changes over time, for example, Driverless AI may find that the best algorithm for this use case has changed.
Using Test Time Augmentation to be able to continue using the same experiment over a longer period of time means there would be no need to continually repeat a model review process. The model may become out of date, however, and the MOJO scoring pipeline is not supported.
Scoring Method | Retraining Model | Test Time Augmentation |
---|---|---|
Driverless AI Scoring | Supported | Supported |
Python Scoring Pipeline | Supported | Supported |
MOJO Scoring Pipeline | Supported | Not Supported |
For different use cases, there may be clear advantages for retraining an experiment after each forecast horizon or for using Test Time Augmentation. In this notebook, we show how to perform both and compare the performance: Time Series Model Rolling Window.
How to trigger Test Time Augmentation?
To tell Driverless AI to perform Test Time Augmentation, simply create your forecast data to include any data that occurred after the training data ended, up to the date you want a forecast for. The dates for which you want Driverless AI to forecast should have NA in the target column. Here is an example of forecasting 2012-11-09.
Date | Store | Dept | Mark Down 1 | Mark Down 2 | Weekly_Sales |
---|---|---|---|---|---|
2012-11-02 | 1 | 1 | -1 | -1 | $40,000 |
2012-11-09 | 1 | 1 | -1 | -1 | NA |
If we do not include an NA in the Target column for the date we are interested in forecasting, then Test Time Augmentation will not be triggered.
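For illustration, a forecast frame like the one above could be assembled as follows (a pandas sketch; the column names follow the example, and the NA target on the final row marks the date to forecast):

```python
import pandas as pd

# Rows after the end of training carry observed values so Driverless AI can
# refresh lag features; the row to forecast (2012-11-09) has NA in the target.
forecast_df = pd.DataFrame({
    "Date": ["2012-11-02", "2012-11-09"],
    "Store": [1, 1],
    "Dept": [1, 1],
    "MarkDown1": [-1, -1],
    "MarkDown2": [-1, -1],
    "Weekly_Sales": [40000, pd.NA],  # NA triggers Test Time Augmentation
})
print(forecast_df)
```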