Driverless AI - Training Time Series Model¶

The purpose of this notebook is to show an example of using Driverless AI to train a time series model. Our goal will be to forecast the Weekly Sales for a particular Store and Department for the next week. The data used in this notebook comes from the Walmart Kaggle competition, where features.csv and train.csv have been joined together.

Note: This notebook was tested and run on Driverless AI 1.8.1.

Workflow¶

1. Import data into Python

2. Format data for Time Series

3. Upload data to Driverless AI

4. Launch Driverless AI Experiment

5. Evaluate model performance

[1]:

import pandas as pd
from h2oai_client import Client

%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates


Step 1: Import Data¶

We will begin by importing our data using pandas. We are going to first work with the data in Python to correctly format it for a Driverless AI time series use case.

[2]:

sales_data = pd.read_csv("./walmart_train.csv")
sales_data.head()

[2]:

Store Dept Date Weekly_Sales Temperature Fuel_Price MarkDown1 MarkDown2 MarkDown3 MarkDown4 MarkDown5 CPI Unemployment IsHoliday sample_weight
0 1 1 2010-02-05 24924.50 42.31 2.572 -1.0 -1.0 -1.0 -1.0 -1.0 211.096358 8.106 0 1
1 1 2 2010-02-05 50605.27 42.31 2.572 -1.0 -1.0 -1.0 -1.0 -1.0 211.096358 8.106 0 1
2 1 3 2010-02-05 13740.12 42.31 2.572 -1.0 -1.0 -1.0 -1.0 -1.0 211.096358 8.106 0 1
3 1 4 2010-02-05 39954.04 42.31 2.572 -1.0 -1.0 -1.0 -1.0 -1.0 211.096358 8.106 0 1
4 1 5 2010-02-05 32229.38 42.31 2.572 -1.0 -1.0 -1.0 -1.0 -1.0 211.096358 8.106 0 1
[3]:

# Convert Date column to datetime
sales_data["Date"] = pd.to_datetime(sales_data["Date"], format="%Y-%m-%d")


Step 2: Format Data for Time Series¶

The data has one record per Store, Department, and Week. Our goal for this use case will be to forecast the total sales for the next week.

The only features we should use as predictors are ones that will be available at the time of scoring. Features like Temperature, Fuel Price, and Unemployment will not be known in advance. Therefore, before we start our Driverless AI experiment, we will replace the Temperature, Fuel Price, Unemployment, and CPI attributes with their values from the previous week, which we will know at scoring time.

[4]:

lag_variables = ["Temperature", "Fuel_Price", "CPI", "Unemployment"]
dai_data = sales_data.set_index(["Date", "Store", "Dept"])
lagged_data = dai_data.loc[:, lag_variables].groupby(level=["Store", "Dept"]).shift(1)

[5]:

# Join lagged predictor variables to training data
dai_data = dai_data.join(lagged_data.rename(columns=lambda x: x + "_lag"))

[6]:

# Drop the original (unlagged) predictor variables - we do not want to use these in the model
dai_data = dai_data.drop(lag_variables, axis=1)
dai_data = dai_data.reset_index()

[7]:

dai_data.head()

[7]:

Date Store Dept Weekly_Sales MarkDown1 MarkDown2 MarkDown3 MarkDown4 MarkDown5 IsHoliday sample_weight Temperature_lag Fuel_Price_lag CPI_lag Unemployment_lag
0 2010-02-05 1 1 24924.50 -1.0 -1.0 -1.0 -1.0 -1.0 0 1 NaN NaN NaN NaN
1 2010-02-05 1 2 50605.27 -1.0 -1.0 -1.0 -1.0 -1.0 0 1 NaN NaN NaN NaN
2 2010-02-05 1 3 13740.12 -1.0 -1.0 -1.0 -1.0 -1.0 0 1 NaN NaN NaN NaN
3 2010-02-05 1 4 39954.04 -1.0 -1.0 -1.0 -1.0 -1.0 0 1 NaN NaN NaN NaN
4 2010-02-05 1 5 32229.38 -1.0 -1.0 -1.0 -1.0 -1.0 0 1 NaN NaN NaN NaN

Now that our training data is correctly formatted, we can run a Driverless AI experiment to forecast the next week’s sales.
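The groupby + shift pattern used above can be sketched on a tiny hypothetical frame to confirm it only looks backwards within each (Store, Dept) series: the first observation of every group has no prior week, so its lag is NaN rather than a value leaked from another store.

```python
# Toy illustration (hypothetical data, not part of the original notebook):
# the same groupby + shift pattern on a frame with two (Store, Dept) groups.
import pandas as pd

toy = pd.DataFrame({
    "Date": pd.to_datetime(["2010-02-05", "2010-02-12"] * 2),
    "Store": [1, 1, 2, 2],
    "Dept": [1, 1, 1, 1],
    "Temperature": [42.31, 38.51, 40.19, 39.93],
}).set_index(["Date", "Store", "Dept"])

# Shift within each (Store, Dept) series so each row sees last week's value
lagged = toy.groupby(level=["Store", "Dept"])["Temperature"].shift(1)
print(lagged)
```

Each group's first row becomes NaN, which is why the first week of the real training data shows NaN lags in the table above.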

Step 3: Upload Data to Driverless AI¶

We will split our data into two pieces: training and test (which consists of the last week of data).

[8]:

train_data = dai_data.loc[dai_data["Date"] < "2012-10-26"]
test_data = dai_data.loc[dai_data["Date"] == "2012-10-26"]


[9]:

address = 'http://<ip_where_driverless_is_running>:12345'
username = 'username'
password = 'password'

h2oai = Client(address=address, username=username, password=password)
# make sure to use the same username and password when signing in through the GUI

[10]:

train_path = "./train_data.csv"
test_path = "./test_data.csv"

train_data.to_csv(train_path, index=False)
test_data.to_csv(test_path, index=False)

[11]:

# Add datasets to Driverless AI
train_dai = h2oai.upload_dataset_sync(train_path)
test_dai = h2oai.upload_dataset_sync(test_path)


Step 4: Launch Driverless AI Experiment¶

We will now launch the Driverless AI experiment. To do that we will need to specify the parameters for our experiment. Some of the parameters include:

• Target Column: The column we are trying to predict.

• Dropped Columns: The columns we do not want to use as predictors such as ID columns, columns with data leakage, etc.

• Is Time Series: Whether or not the experiment is a time-series use case.

• Time Column: The column that contains the date/date-time information.

• Time Group Columns: The categorical columns that indicate how to group the data so that there is one time series per group. In our example, our Time Group Columns are Store and Dept. Each (Store, Dept) pair corresponds to a single time series.

• Number of Prediction Periods: How far in the future do we want to predict?

• Number of Gap Periods: After how many periods can we start predicting? If we assume that we can start forecasting right after the training data ends, then the Number of Gap Periods will be 0.

For this experiment, we want to forecast next week’s sales for each Store and Dept. Therefore, we will use the following time series parameters:

• Time Group Columns: [Store, Dept]

• Number of Prediction Periods: 1 (a.k.a., horizon)

• Number of Gap Periods: 0

Note that the period size is unknown to the Python client. To overcome this, you can also specify the optional time_period_in_seconds parameter, which lets you state the horizon in real time units. If this parameter is omitted, Driverless AI will automatically detect the period size in the experiment, and the horizon value will be interpreted in units of that period. For example, if you are sure your data has a one-week period, you can express a 14-week horizon as num_prediction_periods=14; if the detected period differs from what you assumed, the model may not behave as intended.
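As a sketch of what that means in practice (assuming, as in this dataset, a regular weekly cadence), the period size can be inferred from the Date column by taking the most common spacing between consecutive dates and converting it to seconds for time_period_in_seconds:

```python
# Sketch (assumption: a regular weekly Date column, as in this notebook):
# infer the period size to pass as time_period_in_seconds.
import pandas as pd

dates = pd.Series(pd.date_range("2010-02-05", periods=5, freq="7D"))
period = dates.diff().dropna().mode()[0]         # most common spacing between dates
period_in_seconds = int(period.total_seconds())  # 7 days -> 604800 seconds
print(period_in_seconds)
```

In the notebook itself you would run the same computation on sales_data["Date"].drop_duplicates().sort_values().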

[12]:

experiment = h2oai.start_experiment_sync(dataset_key=train_dai.key,
                                         testset_key=test_dai.key,
                                         target_col="Weekly_Sales",
                                         is_classification=False,
                                         cols_to_drop=["sample_weight"],
                                         accuracy=5,
                                         time=3,
                                         interpretability=1,
                                         scorer="RMSE",
                                         enable_gpus=True,
                                         seed=1234,
                                         time_col="Date",
                                         time_groups_columns=["Store", "Dept"],
                                         num_prediction_periods=1,
                                         num_gap_periods=0)


Step 5. Evaluate Model Performance¶

Now that our experiment is complete, we can view the model performance metrics within the experiment object.

[13]:

print("Validation RMSE: ${:,.0f}".format(experiment.valid_score))
print("Test RMSE: ${:,.0f}".format(experiment.test_score))

Validation RMSE: $2,281
Test RMSE: $2,483


We can also plot the actual versus predicted values from the test data.

[14]:

plt.scatter(experiment.test_act_vs_pred.x_values, experiment.test_act_vs_pred.y_values)
plt.plot([0, max(experiment.test_act_vs_pred.x_values)],[0, max(experiment.test_act_vs_pred.y_values)], 'b--',)
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.show()


Lastly, we can download the test predictions from Driverless AI and examine the forecasted sales vs actual for a selected store and department.

[15]:

preds_path = h2oai.download(src_path=experiment.test_predictions_path, dest_dir=".")
forecast_predictions = pd.read_csv(preds_path)
forecast_predictions.columns = ["predicted_Weekly_Sales"]

actual = test_data[["Date", "Store", "Dept", "Weekly_Sales"]].reset_index(drop=True)
forecast_predictions = pd.concat([actual, forecast_predictions], axis=1)
forecast_predictions.head()

[15]:

Date Store Dept Weekly_Sales predicted_Weekly_Sales
0 2012-10-26 1 1 27390.81 28837.857422
1 2012-10-26 1 2 43134.88 43528.121094
2 2012-10-26 1 3 9350.90 8774.910156
3 2012-10-26 1 4 36292.60 35721.511719
4 2012-10-26 1 5 25846.94 23501.814453
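With actuals and predictions side by side, the reported test RMSE can be reproduced by hand. A minimal sketch with hypothetical numbers (in the notebook you would apply the same formula to forecast_predictions):

```python
# Recompute RMSE directly from an actual/predicted frame.
# Toy values shown here for illustration only.
import pandas as pd

df = pd.DataFrame({
    "Weekly_Sales":           [27390.81, 43134.88, 9350.90],
    "predicted_Weekly_Sales": [28837.86, 43528.12, 8774.91],
})
rmse = ((df["Weekly_Sales"] - df["predicted_Weekly_Sales"]) ** 2).mean() ** 0.5
print(round(rmse, 2))
```

Running the same formula on the full forecast_predictions frame should match experiment.test_score up to rounding.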
[16]:

selected_ts = sales_data.loc[(sales_data["Store"] == 1) & (sales_data["Dept"] == 1)].tail(n = 51)

selected_ts_forecast = forecast_predictions.loc[(forecast_predictions["Store"] == 1) &
(forecast_predictions["Dept"] == 1)]

[17]:

# Plot the forecast for a selected store and department,
# ticking the x-axis by month with abbreviated month labels
months = mdates.MonthLocator()
monthsFmt = mdates.DateFormatter('%b')

fig, ax = plt.subplots()
ax.plot(selected_ts["Date"], selected_ts["Weekly_Sales"], label="Actual")
ax.plot(selected_ts_forecast["Date"], selected_ts_forecast["predicted_Weekly_Sales"], marker='o', label="Predicted")
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
plt.legend(loc='upper left')
plt.show()
