7 Predictive Modeling

Now that the data has been cleaned and we have a game plan for assessing the efficacy of the models, we finally have everything we need to start making predictive models.

7.1 Example Simple Model

We can start by making a simple linear regression model:

lm(formula = target_price_24h ~ ., data = cryptodata)
## 
## Call:
## lm(formula = target_price_24h ~ ., data = cryptodata)
## 
## Coefficients:
##      (Intercept)         symbolADA        symbolAGIX         symbolANT  
##  -1399.669703572       0.000727820      -0.000843216       0.043642200  
##       symbolAPFC         symbolAPT          symbolAR         symbolARB  
##     -0.012471406       0.049727644      -0.294902923      -0.000336209  
##       symbolATOM         symbolAVA        symbolBAND         symbolBAT  
##      0.062588077       0.005111093      -0.316767155      -0.000247131  
##       symbolBCUG        symbolBICO         symbolBNT         symbolBSV  
##     -0.013246274      -0.324688137       0.004045397       0.226332950  
##        symbolBSW         symbolBTC         symbolBTG        symbolCELO  
##     -0.326170630     268.411401316       0.151465155      -0.323020575  
##        symbolCFX         symbolCHR         symbolCHZ        symbolCOMP  
##     -0.322415484      -0.322637079      -0.001474554       0.250594689  
##        symbolCTC        symbolCTXC         symbolDAR        symbolDASH  
##      0.072392083      -0.321899576      -0.325939785       0.149972713  
##        symbolDOT        symbolDYDX         symbolELF         symbolENJ  
##      0.048712357       0.023372407       0.001437382       0.000382460  
##        symbolEOS         symbolETH        symbolETHW        symbolFLUX  
##      0.005354454      17.396663770      -0.041121556      -0.325259311  
##        symbolGAL       symbolGLEEC         symbolGMT        symbolGMTT  
##     -0.313410832      -0.244946366      -0.012872750       0.124790127  
##       symbolGODS          symbolHT         symbolICP          symbolID  
##      0.027356616       0.159021847      -0.293334722       0.087649445  
##        symbolILV         symbolIMX        symbolIOTA         symbolKNC  
##      0.084052098      -0.321475802      -0.000906154       0.004603228  
##        symbolKSM       symbolLAZIO         symbolLDO         symbolLOC  
##     -0.176790622       0.017893486      -0.313179647      -0.314199609  
##       symbolLQTY         symbolLTC       symbolMAGIC        symbolMANA  
##     -0.030220719       0.476772549      -0.324276880       0.000844253  
##        symbolMDT        symbolNEXO         symbolNMR         symbolOMG  
##     -0.326042950      -0.000117699      -0.220944666      -0.320880132  
##        symbolONT          symbolOP         symbolOXT         symbolPAR  
##     -0.296068144      -0.300901240      -0.001330831      -0.352605389  
##       symbolPERP       symbolPORTO         symbolQNT         symbolRAD  
##      0.001339182      -0.300053698       0.860296987      -0.316497969  
##       symbolRARE         symbolREN         symbolRIF      symbolSANTOS  
##     -0.322000268      -0.002013056      -0.001768840      -0.293441752  
##        symbolSKL       symbolSTETH         symbolSTG       symbolSTORJ  
##     -0.002027771      17.208506736      -0.308874167       0.037665358  
##      symbolSUSHI       symbolTHETA        symbolTOMO         symbolTRU  
##     -0.316880083      -0.002480189       0.012059432      -0.322677782  
##        symbolTRX         symbolVGX         symbolXCH         symbolXDC  
##     -0.001133702       0.022938520       0.323430424      -0.012710507  
##        symbolXLM         symbolXMR         symbolZEC         symbolZRX  
##     -0.324625434       1.442381187       0.191251222      -0.000348183  
##    date_time_utc              date         price_usd   lagged_price_1h  
##      0.000002723      -0.163566221       0.852828758       0.051042097  
##  lagged_price_2h   lagged_price_3h   lagged_price_6h  lagged_price_12h  
##      0.030771219       0.062446343       0.049346762      -0.091555625  
## lagged_price_24h   lagged_price_3d      trainingtest     trainingtrain  
##      0.057169582      -0.024022330       1.190654197       0.851961086  
##            split  
##     -1.636303243

We defined the formula for the model as target_price_24h ~ ., meaning we want to make predictions for the target_price_24h field using (~) every other column found in the data (.). In other words, we specified a model with target_price_24h as the dependent variable and all other columns (.) as the independent variables. target_price_24h is the only column that refers to the future, so we use all the information available at the time the rest of the data was collected to infer statistical relationships that can help us forecast target_price_24h on new data, where it is still unknown.
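
To make the shorthand concrete, we can store the model in an object and inspect which predictors the . expanded to (a quick check, assuming the cryptodata object from above is still loaded):

simple_model <- lm(target_price_24h ~ ., data = cryptodata)
# List the independent variables the "." shorthand expanded to
attr(terms(simple_model), "term.labels")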

In the example above we used the cryptodata object, which contained all the non-nested data; this was a big oversimplification of the process we will actually use.

7.1.1 Using Functional Programming

From this point forward we will work with the new cryptodata_nested dataset; review the previous section where it was created if you missed it. Here is a preview of the data again:

cryptodata_nested
## # A tibble: 435 x 5
## # Groups:   symbol, split [435]
##    symbol split train_data          test_data          holdout_data      
##    <chr>  <dbl> <list>              <list>             <list>            
##  1 BTC        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  2 ETH        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  3 EOS        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [61 x 11]>
##  4 LTC        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  5 BSV        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  6 ADA        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  7 ZEC        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
##  8 HT         1 <tibble [116 x 11]> <tibble [47 x 11]> <tibble [51 x 11]>
##  9 TRX        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
## 10 KNC        1 <tibble [158 x 11]> <tibble [58 x 11]> <tibble [62 x 11]>
## # ... with 425 more rows

Because we are now dealing with a nested dataframe, performing operations on the individual nested datasets is not as straightforward. We could extract individual elements out of the data using indexing; for example, we can return the first element of the train_data column by running this code:

cryptodata_nested$train_data[[1]]
## # A tibble: 158 x 11
##    date_time_utc       date       price_usd target_price_24h lagged_price_1h
##    <dttm>              <date>         <dbl>            <dbl>           <dbl>
##  1 2023-06-30 00:00:00 2023-06-30    30459.           30477.          30422.
##  2 2023-06-30 01:00:00 2023-06-30    30442.           30473.          30459.
##  3 2023-06-30 02:00:00 2023-06-30    30387.           30449.          30442.
##  4 2023-06-30 03:00:00 2023-06-30    30667.           30427.          30387.
##  5 2023-06-30 04:00:00 2023-06-30    30747.           30395           30667.
##  6 2023-06-30 05:00:00 2023-06-30    30884.           30400.          30747.
##  7 2023-06-30 06:00:00 2023-06-30    30735            30424.          30884.
##  8 2023-06-30 07:00:00 2023-06-30    30674.           30456.          30735 
##  9 2023-06-30 08:00:00 2023-06-30    30769.           30461.          30674.
## 10 2023-06-30 09:00:00 2023-06-30    30852.           30442.          30769.
## # ... with 148 more rows, and 6 more variables: lagged_price_2h <dbl>,
## #   lagged_price_3h <dbl>, lagged_price_6h <dbl>, lagged_price_12h <dbl>,
## #   lagged_price_24h <dbl>, lagged_price_3d <dbl>

First, remove STORJ to resolve a problem with its data that arose on March 3rd, 2021:

cryptodata_nested <- filter(cryptodata_nested, symbol != "STORJ")

As we already saw, dataframes are really flexible as a data structure. We can create a new column in the data to store the models themselves, one per row. There are several ways we could go about doing this (this tutorial itself was written to execute the same commands using three fundamentally different methodologies), but here we will take a functional programming approach. This means we will focus on the actions we want to perform, in contrast to a for loop, which emphasizes the objects and indexes into them using the same kind of structure we used above to show the first element of the train_data column.
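
For contrast, here is a sketch of what the same idea would look like as a for loop, indexing into each nested dataset the way we did above (shown for comparison only; we will use the functional approach instead):

models <- vector("list", nrow(cryptodata_nested))
for (i in seq_len(nrow(cryptodata_nested))) {
  # Pull out the i-th nested training dataset and fit a model to it
  models[[i]] <- lm(target_price_24h ~ ., data = cryptodata_nested$train_data[[i]])
}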

When using a functional programming approach, we first need to create functions for the operations we want to perform. Let’s wrap the lm() function we used as an example earlier in a new custom function called linear_model, which takes a dataframe as input (the train_data we will provide for each row of the nested dataset) and generates a linear regression model, this time excluding the two date fields from the predictors:

linear_model <- function(df){
  # Use every column except the two date fields as predictors
  lm(target_price_24h ~ . -date_time_utc -date, data = df)
}
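
We can test the new function by hand on the first nested training dataset (the same element we indexed above):

linear_model(cryptodata_nested$train_data[[1]])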

We can now use the map() function from the purrr package in conjunction with the mutate() function from dplyr to create a new column in the data which contains an individual linear regression model for each row of train_data:

mutate(cryptodata_nested, lm_model = map(train_data, linear_model))
## # A tibble: 430 x 6
## # Groups:   symbol, split [430]
##    symbol split train_data          test_data         holdout_data      lm_model
##    <chr>  <dbl> <list>              <list>            <list>            <list>  
##  1 BTC        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  2 ETH        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  3 EOS        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [61 x 11~ <lm>    
##  4 LTC        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  5 BSV        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  6 ADA        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  7 ZEC        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
##  8 HT         1 <tibble [116 x 11]> <tibble [47 x 11~ <tibble [51 x 11~ <lm>    
##  9 TRX        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
## 10 KNC        1 <tibble [158 x 11]> <tibble [58 x 11~ <tibble [62 x 11~ <lm>    
## # ... with 420 more rows

Awesome! Now we can use the same tools we learned in the high-level version to make and test a wider variety of predictive models.

7.2 Caret

Refer back to the high-level version of the tutorial for an explanation of the caret package, or consult this document: https://topepo.github.io/caret/index.html

7.2.1 Parallel Processing

R is a single-threaded application, meaning it only uses one CPU at a time when performing operations. The step below is optional: it uses the parallel and doParallel packages to allow R to use more than one CPU when creating the predictive models, which speeds up the process considerably:

library(parallel)
library(doParallel)

cl <- makePSOCKcluster(detectCores() - 1)  # leave one core free for the system
registerDoParallel(cl)

7.2.2 More Functional Programming

Now we can repeat the process we used earlier to create a column of linear regression models, producing the exact same models, but this time using the caret package.

linear_model_caret <- function(df){
  
  train(target_price_24h ~ . -date_time_utc -date, data = df,
        method = 'lm',
        trControl = trainControl(method = "none"))
  
}

We specified the method as lm for linear regression. See the high-level version for a refresher on how to use different methods to make different models: https://cryptocurrencyresearch.org/high-level/#/method-options. The trControl argument tells the caret package to avoid additional resampling of the data. By default, caret re-samples the data and performs hyperparameter tuning to select the best parameter values, but we will avoid that discussion in this tutorial. See the official caret documentation for more details.
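
For reference, if we did want caret to resample the data (and tune hyperparameters for methods that have them), we would swap in a resampling scheme such as 5-fold cross-validation. This is only a sketch of that option; we will not use it in this tutorial:

train(target_price_24h ~ . -date_time_utc -date,
      data = cryptodata_nested$train_data[[1]],
      method = 'lm',
      trControl = trainControl(method = "cv", number = 5))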

The full list of models we can make using the caret package and the steps described in the high-level version of the tutorial is available in the official documentation: https://topepo.github.io/caret/available-models.html

We can now use the new linear_model_caret function in conjunction with map() and mutate() to create a new column in the cryptodata_nested dataset called lm_model, containing the trained linear regression model for each split of the data (by cryptocurrency symbol and split):

cryptodata_nested <- mutate(cryptodata_nested, 
                            lm_model = map(train_data, linear_model_caret))

We can see the new lm_model column; because the dataframe is grouped, select() also keeps the grouping variables symbol and split:

select(cryptodata_nested, lm_model)
## # A tibble: 430 x 3
## # Groups:   symbol, split [430]
##    symbol split lm_model
##    <chr>  <dbl> <list>  
##  1 BTC        1 <train> 
##  2 ETH        1 <train> 
##  3 EOS        1 <train> 
##  4 LTC        1 <train> 
##  5 BSV        1 <train> 
##  6 ADA        1 <train> 
##  7 ZEC        1 <train> 
##  8 HT         1 <train> 
##  9 TRX        1 <train> 
## 10 KNC        1 <train> 
## # ... with 420 more rows

And we can view the summarized contents of the first trained model:

cryptodata_nested$lm_model[[1]]
## Linear Regression 
## 
## 158 samples
##  10 predictor
## 
## No pre-processing
## Resampling: None
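
If we ever need the underlying lm object itself (for example to call summary() on it), caret stores it in the finalModel element of the train object:

summary(cryptodata_nested$lm_model[[1]]$finalModel)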

7.2.3 Generalize the Function

We can adapt the function we built earlier for the linear regression models using caret, adding a parameter that allows us to specify which method (i.e., which predictive model) we want to use:

model_caret <- function(df, method_choice){
  
  train(target_price_24h ~ . -date_time_utc -date, data = df,
        method = method_choice,
        trControl = trainControl(method = "none"))

}
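
As a quick sanity check, calling the generalized function with the method lm on the first training dataset should reproduce the linear model from the previous step:

model_caret(cryptodata_nested$train_data[[1]], "lm")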

7.2.4 XGBoost Models

Now we can do the same thing we did earlier for the linear regression models, but with the new model_caret function, using map2() to also pass xgbLinear as the method and create XGBoost models:

cryptodata_nested <- mutate(cryptodata_nested, 
                            xgb_model = map2(train_data, "xgbLinear", model_caret))

We won’t dive into the specifics of each individual model, as the correct one to use depends on many factors and that discussion is outside the scope of this tutorial. We chose the XGBoost model as an example because it has recently gained a lot of popularity as a very effective framework for a variety of problems, and it is an essential model for any data scientist to have at their disposal.

There are several possible configurations for XGBoost models; you can find the official documentation here: https://xgboost.readthedocs.io/en/latest/parameter.html
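
As an illustration, caret’s xgbLinear method exposes the tuning parameters nrounds, lambda, alpha, and eta, which can be pinned to specific values through the tuneGrid argument. The values below are illustrative defaults, not tuned choices:

train(target_price_24h ~ . -date_time_utc -date,
      data = cryptodata_nested$train_data[[1]],
      method = "xgbLinear",
      trControl = trainControl(method = "none"),
      # Fix one set of hyperparameters since we are not resampling
      tuneGrid = data.frame(nrounds = 100, lambda = 0, alpha = 0, eta = 0.3))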

7.2.5 Neural Network Models

We can keep adding models. As we saw, caret allows for the usage of over 200 predictive models. Let’s make another set of models, this time setting the method to dnn to create deep neural networks:

cryptodata_nested <- mutate(cryptodata_nested, 
                            nnet_model = map2(train_data, "dnn", model_caret))

Again, we will not dive into the specifics of the individual models, but a quick Google search will return a myriad of information on the subject.

7.2.6 Random Forest Models

Next, let’s create Random Forest models using the method ctree (strictly speaking, ctree fits a single conditional inference tree, a close relative of the Random Forest):

cryptodata_nested <- mutate(cryptodata_nested, 
                            rf_model = map2(train_data, "ctree", model_caret))

7.2.7 Principal Component Regression

For one last set of models, let’s make Principal Component Regression models using the method pcr:

cryptodata_nested <- mutate(cryptodata_nested, 
                            pcr_model = map2(train_data, "pcr", model_caret))

7.2.8 Caret Options

Caret offers some additional options to help pre-process the data as well. We outlined an example of this in the high-level version of the tutorial when showing how to make a Support Vector Machine model, which requires the data to be centered and scaled to avoid running into problems (which we won’t discuss further here).
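
As a sketch, that centering and scaling can be requested directly through caret’s preProcess argument; for example, a Support Vector Machine (method svmRadial, with illustrative untuned parameter values) could be specified like this:

train(target_price_24h ~ . -date_time_utc -date,
      data = cryptodata_nested$train_data[[1]],
      method = "svmRadial",
      # Center and scale the predictors before training
      preProcess = c("center", "scale"),
      trControl = trainControl(method = "none"),
      tuneGrid = data.frame(sigma = 0.01, C = 1))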

7.3 Make Predictions

Awesome! We have trained the predictive models, and we want to start getting a better understanding of how accurate the models are on data they have never seen before. In order to make these comparisons, we will want to make predictions on the test and holdout datasets, and compare those predictions to what actually ended up happening.

In order to make predictions, we can use the predict() function. Here is an example on the first elements of the nested dataframe:

predict(object = cryptodata_nested$lm_model[[1]],
        newdata = cryptodata_nested$test_data[[1]],
        na.action = na.pass)
##        1        2        3        4        5        6        7        8 
## 30557.13 30478.61 30423.94 30419.53 30465.80 30444.40 30409.10 30458.36 
##        9       10       11       12       13       14       15       16 
## 30373.08 30379.42 30375.10 30390.94 30331.10 30333.53 30344.79 30240.48 
##       17       18       19       20       21       22       23       24 
## 30269.10 30223.12 30323.38 30364.17 30391.42 30433.82 30430.94 30464.13 
##       25       26       27       28       29       30       31       32 
## 30470.69 30471.30 30550.17 30523.83 30541.54 30545.87 30586.81 30541.61 
##       33       34       35       36       37       38       39       40 
## 30549.07 30539.98 30533.24 30556.14 30539.80 30539.85 30576.98 30573.24 
##       41       42       43       44       45       46       47       48 
## 30552.60 30586.36 30602.81 30583.04 30544.57 30543.25 30556.46 30538.85 
##       49       50       51       52       53       54       55       56 
## 30543.22 30557.75 30566.09 30556.66 30539.34 30552.22 30538.06 30554.86 
##       57       58 
## 30526.89 30530.63

Now we can create a new custom function called make_predictions that wraps this functionality so we can use it with map2() to iterate through all rows of the nested dataframe:

make_predictions <- function(model, test){
  # na.pass keeps rows with missing values instead of dropping them
  predict(object = model, newdata = test, na.action = na.pass)
}
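
As a quick check, the wrapper reproduces the earlier predict() example:

make_predictions(cryptodata_nested$lm_model[[1]], cryptodata_nested$test_data[[1]])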

Now we can create the new columns lm_test_predictions and lm_holdout_predictions with the predictions:

cryptodata_nested <- mutate(cryptodata_nested, 
                            lm_test_predictions =  map2(lm_model,
                                                   test_data,
                                                   make_predictions),
                            
                            lm_holdout_predictions =  map2(lm_model,
                                                      holdout_data,
                                                      make_predictions))

The predictions were made using the models that had only seen the training data, and we can start assessing how good the model is on data it has not seen before in the test and holdout sets. Let’s view the results from the previous step:

select(cryptodata_nested, lm_test_predictions, lm_holdout_predictions)
## # A tibble: 430 x 4
## # Groups:   symbol, split [430]
##    symbol split lm_test_predictions lm_holdout_predictions
##    <chr>  <dbl> <list>              <list>                
##  1 BTC        1 <dbl [58]>          <dbl [62]>            
##  2 ETH        1 <dbl [58]>          <dbl [62]>            
##  3 EOS        1 <dbl [58]>          <dbl [61]>            
##  4 LTC        1 <dbl [58]>          <dbl [62]>            
##  5 BSV        1 <dbl [58]>          <dbl [62]>            
##  6 ADA        1 <dbl [58]>          <dbl [62]>            
##  7 ZEC        1 <dbl [58]>          <dbl [62]>            
##  8 HT         1 <dbl [47]>          <dbl [51]>            
##  9 TRX        1 <dbl [58]>          <dbl [62]>            
## 10 KNC        1 <dbl [58]>          <dbl [62]>            
## # ... with 420 more rows

Now we can do the same for the rest of the models:

cryptodata_nested <- mutate(cryptodata_nested, 
                            # XGBoost:
                            xgb_test_predictions =  map2(xgb_model,
                                                         test_data,
                                                         make_predictions),
                            # holdout
                            xgb_holdout_predictions =  map2(xgb_model,
                                                            holdout_data,
                                                            make_predictions),
                            # Neural Network:
                            nnet_test_predictions =  map2(nnet_model,
                                                          test_data,
                                                          make_predictions),
                            # holdout
                            nnet_holdout_predictions =  map2(nnet_model,
                                                             holdout_data,
                                                             make_predictions),
                            # Random Forest:
                            rf_test_predictions =  map2(rf_model,
                                                        test_data,
                                                        make_predictions),
                            # holdout
                            rf_holdout_predictions =  map2(rf_model,
                                                           holdout_data,
                                                           make_predictions),
                            # PCR:
                            pcr_test_predictions =  map2(pcr_model,
                                                         test_data,
                                                         make_predictions),
                            # holdout
                            pcr_holdout_predictions =  map2(pcr_model,
                                                            holdout_data,
                                                            make_predictions))

We are done using the caret package and can stop the parallel processing cluster:

stopCluster(cl)

In this example we used the caret package because it provides a straightforward way to create a variety of models, but there are several great alternatives for building many kinds of models in both R and Python. Some noteworthy mentions are tidymodels, mlr, and scikit-learn.
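
As a small taste of one alternative, here is a minimal sketch of fitting the same linear model through the parsnip package from tidymodels (assuming it is installed; shown for comparison only):

library(parsnip)

# Specify a linear regression that delegates to lm() under the hood
lm_spec <- set_engine(linear_reg(), "lm")
fit(lm_spec, target_price_24h ~ . -date_time_utc -date,
    data = cryptodata_nested$train_data[[1]])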

7.4 Timeseries

Because this tutorial is already very dense, we will focus only on the models we created above. When creating predictive models on timeseries data, there are other excellent options that account for when each observation was collected, in a similar but more intricate way than our lagged variables do.

For more information on using excellent tools for ARIMA and ETS models, consult the high-level version of this tutorial where they were discussed.

Move on to the next section ➡️ to assess the accuracy of the models as described in the previous section.