# Section 7 - Predictive Modeling

Now that the data has been cleaned and we have a game plan for assessing the efficacy of the models, we finally have everything we need to start making predictive models.

## 7.1 Example Simple Model

We can start by making a simple linear regression model:

lm(formula = target_price_24h ~ ., data = cryptodata)
##
## Call:
## lm(formula = target_price_24h ~ ., data = cryptodata)
##
## Coefficients:
##      (Intercept)        symbolAPPC        symbolARDR         symbolASP
##  -16620.04599720        1.00495938       -0.14596393       -0.26160803
##        symbolAVA         symbolBAT         symbolBCD        symbolBCHA
##       0.02201066       -0.29759662        0.39647067        3.05323266
##        symbolBMC         symbolBNT         symbolBRD         symbolBSV
##      -1.74612051        0.37084571       -1.62633773       17.02439519
##        symbolBTC         symbolBTG         symbolBTM        symbolBZRX
##    3194.74865093        5.89559182        1.00433207       -0.21310697
##        symbolCBC         symbolCCE         symbolCHZ         symbolCKB
##      -0.14935893        2.30597966       -0.22222540        0.01469076
##        symbolCND       symbolCOCOS        symbolCOTI         symbolCRO
##       3.24092203       -2.16085036       -0.18022761        0.20741334
##        symbolCRV         symbolDCR        symbolDENT         symbolDGB
##       0.26741734       11.99307432       -0.24856900        1.04374885
##       symbolDGTX        symbolDOGE         symbolELF         symbolENJ
##      -0.13956600       -0.03131772       -0.49849531       -0.05243196
##        symbolEOS         symbolETC         symbolETH         symbolETP
##       0.29313821        3.23310724      199.55311942       -0.56624556
##       symbolEURS         symbolEVX         symbolFIL         symbolFTM
##       0.76632018       -2.65038835        7.25150338       -0.21967216
##       symbolGASP          symbolHT         symbolINJ        symbolIOST
##      -0.21185775        2.03020189        0.63753401       -0.02881264
##         symbolIQ         symbolJST         symbolKMD         symbolKNC
##      -0.20425642       -0.75978874       -0.01903277        0.15256388
##        symbolLEO        symbolLEVL         symbolLTC        symbolMANA
##      -0.40758317       -0.74269513       16.48353875       -0.11634954
##        symbolMBL         symbolMKR        symbolNEXO         symbolNXT
##      -0.63341970      267.05275986       -0.05456474       -0.46513037
##        symbolOAX         symbolPPC         symbolRCN       symbolSMART
##      -0.76068821       -0.92989702       -0.63593489       -6.52839513
##        symbolSNC         symbolSNX         symbolSRN       symbolSTORJ
##      -1.08871599        1.62602924        5.82422664       -5.47026395
##      symbolSUSHI         symbolTRX          symbolTV         symbolUNI
##       0.60109306       -0.25851754        1.11272275        1.88380237
##        symbolUNO         symbolVIB        symbolVSYS        symbolWAXP
##       9.18433236       -2.00165955        4.23725555       -0.14562995
##        symbolXEM         symbolXMR         symbolXTZ         symbolYFI
##      -0.16970116       23.71945076        1.09290985     3381.78396384
##        symbolZEC         symbolZIL         symbolZRX     date_time_utc
##      14.06901655       -0.58840902       -0.09138059        0.00002641
##             date         price_usd   lagged_price_1h   lagged_price_2h
##      -1.39188040        0.91283803       -0.06979335        0.00011554
##  lagged_price_3h   lagged_price_6h  lagged_price_12h  lagged_price_24h
##       0.06952041       -0.15702252       -0.06373733        0.27337649
##  lagged_price_3d      trainingtest     trainingtrain             split
##      -0.03871917        5.57495064       25.75882145      -29.53617048

We defined the formula for the model as target_price_24h ~ ., which means that we want to make predictions for the target_price_24h field using (~) every other column found in the data (.). In other words, we specified a model that uses the target_price_24h field as the dependent variable and all other columns (.) as the independent variables. The target_price_24h field is the only column that refers to the future, so we are using all the information available at the time the rest of the data was collected to infer statistical relationships that can help us forecast target_price_24h when it is still unknown on new data we want to make predictions for.

In the example above we used the cryptodata object, which contains all the non-nested data; this was a big oversimplification of the process we will actually use.
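
To make the formula shorthand concrete, here is a sketch using a small hypothetical dataframe (toy, with made-up values, not the actual cryptodata object) showing that the . notation simply expands to every column other than the dependent variable:

```r
# Hypothetical toy data to illustrate formula notation; not the real cryptodata
toy <- data.frame(target_price_24h = c(1, 2, 3, 4),
                  price_usd        = c(10, 20, 30, 40),
                  lagged_price_1h  = c(9, 19, 29, 39))

# "." expands to all remaining columns, so these two calls fit the same model:
lm(target_price_24h ~ ., data = toy)
lm(target_price_24h ~ price_usd + lagged_price_1h, data = toy)
```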

### 7.1.1 Using Functional Programming

From this point forward we will work with the new dataset cryptodata_nested; review the previous section where it was created if you missed it. Here is a preview of the data again:

cryptodata_nested
## # A tibble: 395 x 5
## # Groups:   symbol, split [395]
##    symbol split train_data          test_data          holdout_data
##    <chr>  <dbl> <list>              <list>             <list>
##  1 BTC        1 <tibble [246 x 11]> <tibble [80 x 11]> <tibble [81 x 11]>
##  2 ETH        1 <tibble [230 x 11]> <tibble [80 x 11]> <tibble [84 x 11]>
##  3 EOS        1 <tibble [246 x 11]> <tibble [80 x 11]> <tibble [81 x 11]>
##  4 LTC        1 <tibble [245 x 11]> <tibble [80 x 11]> <tibble [84 x 11]>
##  5 ADA        1 <tibble [245 x 11]> <tibble [80 x 11]> <tibble [84 x 11]>
##  6 BSV        1 <tibble [246 x 11]> <tibble [80 x 11]> <tibble [81 x 11]>
##  7 TRX        1 <tibble [246 x 11]> <tibble [80 x 11]> <tibble [80 x 11]>
##  8 ZEC        1 <tibble [244 x 11]> <tibble [80 x 11]> <tibble [80 x 11]>
##  9 HT         1 <tibble [208 x 11]> <tibble [79 x 11]> <tibble [81 x 11]>
## 10 KNC        1 <tibble [216 x 11]> <tibble [79 x 11]> <tibble [80 x 11]>
## # ... with 385 more rows

Because we are now dealing with a nested dataframe, performing operations on the individual nested datasets is not as straightforward. We could extract the individual elements out of the data using indexing; for example, we can return the first element of the column train_data by running this code:

cryptodata_nested$train_data[[1]]
## # A tibble: 246 x 11
##    date_time_utc       date       price_usd target_price_24h lagged_price_1h
##    <dttm>              <date>         <dbl>            <dbl>           <dbl>
##  1 2021-04-02 00:00:01 2021-04-02    58724.           58953.          58998
##  2 2021-04-02 01:00:01 2021-04-02    58646.           59388.          58724.
##  3 2021-04-02 02:00:01 2021-04-02    58890.           59267.          58646.
##  4 2021-04-02 03:00:01 2021-04-02    59710.           59419.          58890.
##  5 2021-04-02 04:00:01 2021-04-02    59860            59215.          59710.
##  6 2021-04-02 05:00:00 2021-04-02    59566.           59387.          59860
##  7 2021-04-02 06:00:01 2021-04-02    59540.           59440           59566.
##  8 2021-04-02 07:00:00 2021-04-02    59539.           59398.          59540.
##  9 2021-04-02 08:00:01 2021-04-02    59430.           59292.          59539.
## 10 2021-04-02 09:00:00 2021-04-02    59472.           59060.          59430.
## # ... with 236 more rows, and 6 more variables: lagged_price_2h <dbl>,
## #   lagged_price_3h <dbl>, lagged_price_6h <dbl>, lagged_price_12h <dbl>,
## #   lagged_price_24h <dbl>, lagged_price_3d <dbl>

Next, remove STORJ to resolve a problem that arose on March 3rd, 2021:

cryptodata_nested <- filter(cryptodata_nested, symbol != "STORJ")

As we already saw, dataframes are really flexible as a data structure. We can create a new column in the data to store the models themselves, one associated with each row of the data. There are several ways we could go about doing this (this tutorial itself was written to execute the same commands using three fundamentally different methodologies), but here we will take a functional programming approach. This means we will focus on the actions we want to perform, which can be contrasted with a for loop, which emphasizes the objects themselves using indexing similar to the example above showing the first element of the train_data column. When using a functional programming approach, we first need to create functions for the operations we want to perform.
Let’s wrap the lm() function we used as an example earlier into a new custom function called linear_model, which takes a dataframe as an input (the train_data we will provide for each row of the nested dataset) and generates a linear regression model:

linear_model <- function(df){
  lm(target_price_24h ~ . -date_time_utc -date, data = df)
}

We can now use the map() function from the purrr package in conjunction with the mutate() function from dplyr to create a new column in the data containing an individual linear regression model for each row of train_data:

mutate(cryptodata_nested, lm_model = map(train_data, linear_model))
## # A tibble: 390 x 6
## # Groups:   symbol, split [390]
##    symbol split train_data          test_data         holdout_data      lm_model
##    <chr>  <dbl> <list>              <list>            <list>            <list>
##  1 BTC        1 <tibble [246 x 11]> <tibble [80 x 11~ <tibble [81 x 11~ <lm>
##  2 ETH        1 <tibble [230 x 11]> <tibble [80 x 11~ <tibble [84 x 11~ <lm>
##  3 EOS        1 <tibble [246 x 11]> <tibble [80 x 11~ <tibble [81 x 11~ <lm>
##  4 LTC        1 <tibble [245 x 11]> <tibble [80 x 11~ <tibble [84 x 11~ <lm>
##  5 ADA        1 <tibble [245 x 11]> <tibble [80 x 11~ <tibble [84 x 11~ <lm>
##  6 BSV        1 <tibble [246 x 11]> <tibble [80 x 11~ <tibble [81 x 11~ <lm>
##  7 TRX        1 <tibble [246 x 11]> <tibble [80 x 11~ <tibble [80 x 11~ <lm>
##  8 ZEC        1 <tibble [244 x 11]> <tibble [80 x 11~ <tibble [80 x 11~ <lm>
##  9 HT         1 <tibble [208 x 11]> <tibble [79 x 11~ <tibble [81 x 11~ <lm>
## 10 KNC        1 <tibble [216 x 11]> <tibble [79 x 11~ <tibble [80 x 11~ <lm>
## # ... with 380 more rows

Awesome! Now we can use the same tools we learned in the high-level version to make a wider variety of predictive models to test.

## 7.2 Caret

Refer back to the high-level version of the tutorial for an explanation of the caret package, or consult this document: https://topepo.github.io/caret/index.html

### 7.2.1 Parallel Processing

R is a single-threaded application, meaning it only uses one CPU at a time when performing operations.
The step below is optional and uses the parallel and doParallel packages to allow R to use more than a single CPU when creating the predictive models, which will speed up the process considerably:

cl <- makePSOCKcluster(detectCores()-1)
registerDoParallel(cl)

### 7.2.2 More Functional Programming

Now we can repeat the process we used earlier to create a column with the linear regression models, creating the exact same models but this time using the caret package:

linear_model_caret <- function(df){
  train(target_price_24h ~ . -date_time_utc -date, data = df,
        method = 'lm',
        trControl = trainControl(method = "none"))
}

We specified the method as lm for linear regression. See the high-level version for a refresher on how to use different methods to make different models: https://cryptocurrencyresearch.org/high-level/#/method-options.

The trControl argument tells the caret package to avoid additional resampling of the data. By default, caret will resample the data and perform hyperparameter tuning to select the parameter values that give the best results, but we will avoid this discussion in this tutorial. See the official caret documentation for more details.
The full list of models that can be made using the caret package and the steps described in the high-level version of the tutorial is available in the caret documentation linked above.

We can now use the new function we created, linear_model_caret, in conjunction with map() and mutate() to create a new column in the cryptodata_nested dataset called lm_model, with the trained linear regression model for each split of the data (by cryptocurrency symbol and split):

cryptodata_nested <- mutate(cryptodata_nested, lm_model = map(train_data, linear_model_caret))

We can see the new column called lm_model alongside the nested dataframe grouping variables:

select(cryptodata_nested, lm_model)
## # A tibble: 390 x 3
## # Groups:   symbol, split [390]
##    symbol split lm_model
##    <chr>  <dbl> <list>
##  1 BTC        1 <train>
##  2 ETH        1 <train>
##  3 EOS        1 <train>
##  4 LTC        1 <train>
##  5 ADA        1 <train>
##  6 BSV        1 <train>
##  7 TRX        1 <train>
##  8 ZEC        1 <train>
##  9 HT         1 <train>
## 10 KNC        1 <train>
## # ... with 380 more rows

And we can view the summarized contents of the first trained model:

cryptodata_nested$lm_model[[1]]
## Linear Regression
##
## 246 samples
##  10 predictor
##
## No pre-processing
## Resampling: None

### 7.2.3 Generalize the Function

We can adapt the function we built earlier for the linear regression models using caret, and add a parameter that allows us to specify the method we want to use (as in what predictive model):

model_caret <- function(df, method_choice){
  train(target_price_24h ~ . -date_time_utc -date, data = df,
        method = method_choice,
        trControl = trainControl(method = "none"))
}
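
As a quick sanity check, the generalized function can be called directly on a single training set before mapping it across all rows. This is a sketch that assumes the caret package is loaded and cryptodata_nested exists as shown above:

```r
# Fit one linear regression on the first nested training set by passing the
# method name explicitly; any valid caret method string would work here
single_model <- model_caret(cryptodata_nested$train_data[[1]], "lm")

# The trained caret object records which method was used
single_model$method   # "lm"
```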

### 7.2.4 XGBoost Models

Now we can do the same thing we did earlier for the linear regression models, but use the new function called model_caret with the map2() function, which lets us also pass the method xgbLinear to create an XGBoost model:

cryptodata_nested <- mutate(cryptodata_nested,
                            xgb_model = map2(train_data, "xgbLinear", model_caret))

We won’t dive into the specifics of each individual model as the correct one to use may depend on a lot of factors and that is a discussion outside the scope of this tutorial. We chose to use the XGBoost model as an example because it has recently gained a lot of popularity as a very effective framework for a variety of problems, and is an essential model for any data scientist to have at their disposal.

There are several possible configurations for XGBoost models, you can find the official documentation here: https://xgboost.readthedocs.io/en/latest/parameter.html
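
If we did want to control those parameters rather than accept the defaults, caret's tuneGrid argument accepts them; a minimal sketch follows (the parameter values are illustrative only, not recommendations):

```r
# Sketch: explicitly fix the xgbLinear tuning parameters instead of letting
# caret tune them. nrounds, lambda, alpha, and eta are the tuning parameters
# caret exposes for the xgbLinear method.
train(target_price_24h ~ . -date_time_utc -date,
      data = cryptodata_nested$train_data[[1]],
      method = "xgbLinear",
      trControl = trainControl(method = "none"),
      tuneGrid = data.frame(nrounds = 50, lambda = 0, alpha = 0, eta = 0.3))
```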

### 7.2.5 Neural Network Models

We can keep adding models. As we saw, caret allows for the usage of over 200 predictive models. Let’s make another set of models, this time setting the method to dnn to create deep neural networks:

cryptodata_nested <- mutate(cryptodata_nested,
                            nnet_model = map2(train_data, "dnn", model_caret))

Again, we will not dive into the specifics of the individual models, but a quick Google search will return a myriad of information on the subject.

### 7.2.6 Random Forest Models

Next, let’s create Random Forest models using the method ctree:

cryptodata_nested <- mutate(cryptodata_nested,
                            rf_model = map2(train_data, "ctree", model_caret))

### 7.2.7 Principal Component Regression

For one last set of models, let’s make Principal Component Regression models using the method pcr:

cryptodata_nested <- mutate(cryptodata_nested,
                            pcr_model = map2(train_data, "pcr", model_caret))

### 7.2.8 Caret Options

Caret offers some additional options to help pre-process the data as well. We outlined an example of this in the high-level version of the tutorial when showing how to make a Support Vector Machine model, which requires the data to be centered and scaled to avoid running into problems (which we won’t discuss further here).
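
As a sketch of what that looks like with the pattern used above, caret's preProcess argument handles the centering and scaling inside train(). The svmRadial method shown here (backed by the kernlab package) is illustrative and was not fit elsewhere in this section:

```r
# Sketch: center and scale the predictors before fitting, as required by
# methods like support vector machines ("svmRadial" is illustrative)
train(target_price_24h ~ . -date_time_utc -date,
      data = cryptodata_nested$train_data[[1]],
      method = "svmRadial",
      preProcess = c("center", "scale"),
      trControl = trainControl(method = "none"))
```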

## 7.3 Make Predictions

Awesome! We have trained the predictive models, and we want to start getting a better understanding of how accurate the models are on data they have never seen before. In order to make these comparisons, we will want to make predictions on the test and holdout datasets, and compare those predictions to what actually ended up happening.

In order to make predictions, we can use the predict() function. Here is an example on the first elements of the nested dataframe:

predict(object = cryptodata_nested$lm_model[[1]],
        newdata = cryptodata_nested$test_data[[1]],
        na.action = na.pass)
##        1        2        3        4        5        6        7        8
## 59701.54 59757.26 59645.53 59690.59 59667.93 59558.21 59916.59 59941.95
##        9       10       11       12       13       14       15       16
## 59962.79 59673.85 59487.78 59540.96 59492.67 59740.15 60546.53 60677.81
##       17       18       19       20       21       22       23       24
## 60799.02 61113.54 61228.48 61112.92 61542.03 61766.83 61615.48 61854.63
##       25       26       27       28       29       30       31       32
## 61560.35 61760.53 62089.32 61903.50 62009.03 62125.46 61714.01 61735.84
##       33       34       35       36       37       38       39       40
## 61660.98 62009.22 62307.48 62224.92 62536.85 62438.99 61814.86 61820.10
##       41       42       43       44       45       46       47       48
## 62137.67 62152.75 61842.97 62006.33 61570.84 61345.42 61662.33 60882.88
##       49       50       51       52       53       54       55       56
## 60944.51 60937.99 60684.06 60847.88 60832.78 60749.69 60734.71 60719.41
##       57       58       59       60       61       62       63       64
## 60904.96 60771.99 60674.60 60811.68 60493.30 60533.74 60547.20 60773.96
##       65       66       67       68       69       70       71       72
## 60430.90 60346.10 60528.92 60631.31 60738.92 60828.14 60691.54 60938.07
##       73       74       75       76       77       78       79       80
## 61050.06 61349.31 61210.68 61202.54 61232.30 61177.77 61265.24 61117.55

Now we can create a new custom function called make_predictions that wraps this functionality in a way that we can use with map() to iterate through all rows of the nested dataframe:

make_predictions <- function(model, test){
  predict(object = model, newdata = test, na.action = na.pass)
}
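
Before mapping it over every row, the wrapper can be sanity-checked on a single model/test pair; this sketch (assuming the objects created above) should return the same values as the predict() call shown earlier:

```r
# Call the wrapper directly on the first trained model and its test set
preds <- make_predictions(cryptodata_nested$lm_model[[1]],
                          cryptodata_nested$test_data[[1]])
head(preds)
```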

Now we can create the new columns lm_test_predictions and lm_holdout_predictions with the predictions:

cryptodata_nested <- mutate(cryptodata_nested,
                            lm_test_predictions = map2(lm_model,
                                                       test_data,
                                                       make_predictions),
                            lm_holdout_predictions = map2(lm_model,
                                                          holdout_data,
                                                          make_predictions))

The predictions were made using the models that had only seen the training data, and we can start assessing how good the model is on data it has not seen before in the test and holdout sets. Let’s view the results from the previous step:

select(cryptodata_nested, lm_test_predictions, lm_holdout_predictions)
## # A tibble: 390 x 4
## # Groups:   symbol, split [390]
##    symbol split lm_test_predictions lm_holdout_predictions
##    <chr>  <dbl> <list>              <list>
##  1 BTC        1 <dbl [80]>          <dbl [81]>
##  2 ETH        1 <dbl [80]>          <dbl [84]>
##  3 EOS        1 <dbl [80]>          <dbl [81]>
##  4 LTC        1 <dbl [80]>          <dbl [84]>
##  5 ADA        1 <dbl [80]>          <dbl [84]>
##  6 BSV        1 <dbl [80]>          <dbl [81]>
##  7 TRX        1 <dbl [80]>          <dbl [80]>
##  8 ZEC        1 <dbl [80]>          <dbl [80]>
##  9 HT         1 <dbl [79]>          <dbl [81]>
## 10 KNC        1 <dbl [79]>          <dbl [80]>
## # ... with 380 more rows

Now we can do the same for the rest of the models:

cryptodata_nested <- mutate(cryptodata_nested,
                            # XGBoost:
                            xgb_test_predictions = map2(xgb_model,
                                                        test_data,
                                                        make_predictions),
                            # holdout
                            xgb_holdout_predictions = map2(xgb_model,
                                                           holdout_data,
                                                           make_predictions),
                            # Neural Network:
                            nnet_test_predictions = map2(nnet_model,
                                                         test_data,
                                                         make_predictions),
                            # holdout
                            nnet_holdout_predictions = map2(nnet_model,
                                                            holdout_data,
                                                            make_predictions),
                            # Random Forest:
                            rf_test_predictions = map2(rf_model,
                                                       test_data,
                                                       make_predictions),
                            # holdout
                            rf_holdout_predictions = map2(rf_model,
                                                          holdout_data,
                                                          make_predictions),
                            # PCR:
                            pcr_test_predictions = map2(pcr_model,
                                                        test_data,
                                                        make_predictions),
                            # holdout
                            pcr_holdout_predictions = map2(pcr_model,
                                                           holdout_data,
                                                           make_predictions))

We are done using the caret package and can stop the parallel processing cluster:

stopCluster(cl)

In this example we used the caret package because it provides a straightforward way to create a variety of models, but there are several great alternatives in both R and Python. Some noteworthy mentions are tidymodels, mlr, and scikit-learn.
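
As a sketch of how one of those alternatives compares, the same linear regression could be specified through tidymodels' parsnip interface (assuming the parsnip package is installed; this model was not used elsewhere in this tutorial):

```r
library(parsnip)

# Sketch: declare a linear regression model specification backed by lm(),
# then fit it to the first nested training set with the same formula as before
lm_spec <- set_engine(linear_reg(), "lm")
fit(lm_spec,
    target_price_24h ~ . -date_time_utc -date,
    data = cryptodata_nested$train_data[[1]])
```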

## 7.4 Timeseries

Because this tutorial is already very dense, we will just focus on the models we created above. When creating predictive models on timeseries data there are other excellent options, which take into account when the information was collected in similar but more intricate ways than we did when creating the lagged variables.

For more information on using excellent tools for ARIMA and ETS models, consult the high-level version of this tutorial where they were discussed.
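
As a minimal sketch of what those look like (assuming the forecast package is installed and using the hourly price series of the first nested training set), auto.arima() selects an ARIMA specification automatically and ets() fits an exponential smoothing model:

```r
library(forecast)

# Sketch: fit timeseries models on the hourly price_usd series of the first
# nested training set; the column name matches the data shown earlier
prices <- cryptodata_nested$train_data[[1]]$price_usd

arima_fit <- auto.arima(prices)               # automatic ARIMA order selection
ets_fit   <- ets(ts(prices, frequency = 24))  # exponential smoothing (ETS)

# Forecast the next 24 hours with each model
forecast(arima_fit, h = 24)
forecast(ets_fit, h = 24)
```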

Move on to the next section ➡️ to assess the accuracy of the models, following the plan described in an earlier section.