# Section 4 - Data Prep

Next we will do some data cleaning to make sure the data is in the format we need. For a gentler introduction to data prep using the dplyr package (Wickham, François, et al. 2020), consult the high-level version.

## 4.1 Remove Nulls

First off, we aren’t able to do anything at all with a row of data if we don’t know when the data was collected. The specific price doesn’t matter if we can’t tie it to a timestamp, given by the date_time_utc field.

We can exclude all rows where the date_time_utc field has a Null (missing) value by using the filter() function from the dplyr package:

cryptodata <- filter(cryptodata, !is.na(date_time_utc))

This step removed 0 rows from the data on the latest run (2023-08-26). The is.na() function finds all cases where the date_time_utc field has a Null value. The function is preceded by the ! operator, which tells the filter() function to exclude these rows from the data.
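As a minimal sketch of how this pattern behaves (using a made-up toy tibble, not our actual dataset):

```r
library(dplyr)

# A toy tibble with one missing timestamp (hypothetical data)
toy <- tibble(date_time_utc = as.POSIXct(c("2023-08-26 00:00:04", NA,
                                           "2023-08-26 02:00:01"), tz = "UTC"),
              price = c(26060, 26045, 26050))

# is.na() flags the missing value; ! negates it, so filter() keeps complete rows
filter(toy, !is.na(date_time_utc))
# The row with the NA timestamp is dropped, leaving 2 rows
```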

By the same logic, if we don’t know what the price was for any of the rows, the whole row of data is useless and should be removed. But how will we define the price of a cryptocurrency?

## 4.2 Calculate price_usd Column

In the previous section we discussed the intricacies of a cryptocurrency’s price. We could complicate our definition of a price by considering both the bid and ask prices from the perspective of someone who wants to perform trades, but this is not a trading tutorial. Instead, we will define the price of a cryptocurrency as the price we could purchase it for. We will calculate the price_usd field using the cheapest price available from the ask side where at least $15 worth of the cryptocurrency is being sold.

Therefore, let’s figure out the lowest price from the order book data that would allow us to purchase at least $15 worth of the cryptocurrency. To do this, for each ask price and quantity, let’s figure out the value of the trade in US Dollars. We can create each of the new trade_usd columns using the mutate() function. The trade_usd_1 column should be calculated as the ask_1_price multiplied by the ask_1_quantity. The next one, trade_usd_2, should consider the ask_2_price, but be multiplied by the sum of ask_1_quantity and ask_2_quantity, because at the ask_2_price pricepoint we can also purchase the quantity available at the ask_1_price pricepoint:

cryptodata <- mutate(cryptodata,
trade_usd_1 = ask_1_price * ask_1_quantity,
trade_usd_2 = ask_2_price * (ask_1_quantity + ask_2_quantity),
trade_usd_3 = ask_3_price * (ask_1_quantity + ask_2_quantity + ask_3_quantity),
trade_usd_4 = ask_4_price * (ask_1_quantity + ask_2_quantity + ask_3_quantity + ask_4_quantity),
trade_usd_5 = ask_5_price * (ask_1_quantity + ask_2_quantity + ask_3_quantity + ask_4_quantity + ask_5_quantity))

We can confirm that the trade_usd_1 field is calculating the $ value of the lowest ask price and quantity:

head(select(cryptodata, symbol, date_time_utc, ask_1_price, ask_1_quantity, trade_usd_1))
## # A tibble: 6 x 5
##   symbol date_time_utc       ask_1_price ask_1_quantity trade_usd_1
##   <chr>  <dttm>                    <dbl>          <dbl>       <dbl>
## 1 BTC    2023-08-26 00:00:04    26060.            0.02      521.
## 2 ETH    2023-08-26 00:00:05     1654.            0.106     175.
## 3 EOS    2023-08-26 00:00:06        0.588         0.17        0.0999
## 4 LTC    2023-08-26 00:00:07       65.2           4.13      269.
## 5 BSV    2023-08-26 00:00:12       30.1          52.7      1588.
## 6 ADA    2023-08-26 00:00:13        0.261      1937         506.

Now we can use the mutate() function to create the new field price_usd and find the lowest price at which we could have purchased at least $15 worth of the cryptocurrency. We can use the case_when() function to find the first trade_usd value that is greater than or equal to $15, and assign the correct ask_price to the new column price_usd:

cryptodata <- mutate(cryptodata,
price_usd = case_when(
cryptodata$trade_usd_1 >= 15 ~ cryptodata$ask_1_price,
cryptodata$trade_usd_2 >= 15 ~ cryptodata$ask_2_price,
cryptodata$trade_usd_3 >= 15 ~ cryptodata$ask_3_price,
cryptodata$trade_usd_4 >= 15 ~ cryptodata$ask_4_price,
cryptodata$trade_usd_5 >= 15 ~ cryptodata$ask_5_price))

Let’s also remove any rows that have Null values for the new price_usd field, like we did for the date_time_utc field in a previous step. These will mostly be made up of rows where the value of trades through the 5th lowest ask price was lower than $15.

cryptodata <- filter(cryptodata, !is.na(price_usd))

This step removed 37598 rows on the latest run.

## 4.3 Clean Data by Group

In the high-level version of this tutorial we only dealt with one cryptocurrency. In this version however, we will be creating independent models for each cryptocurrency. Because of this, we need to ensure data quality not only for the data as a whole, but also for the data associated with each individual cryptocurrency. Instead of considering all rows when applying a transformation, we can group the data by the individual cryptocurrency and apply the transformation to each group. This will only work with compatible functions from dplyr and the tidyverse.

For example, we could count the number of observations by running the count() function on the data:

count(cryptodata)
## # A tibble: 1 x 1
##        n
##    <int>
## 1 262402

But what if we wanted to know how many observations in the data are associated with each cryptocurrency separately?

We can group the data using the group_by() function from the dplyr package and group the data by the cryptocurrency symbol:

cryptodata <- group_by(cryptodata, symbol)

Now if we run the same operation using the count() function, the operation is performed grouped by the cryptocurrency symbol:

count(cryptodata)
## # A tibble: 295 x 2
## # Groups:   symbol [295]
##    symbol     n
##    <chr>  <int>
##  1 1INCH   1444
##  2 AAB      719
##  3 ABBC     138
##  4 ACT      329
##  6 AGIX    1444
##  7 AKRO     158
##  8 ALCX      69
##  9 ALI      267
## 10 ALICE    268
## # ... with 285 more rows

We can remove the grouping at any point by running the ungroup() function:

count(ungroup(cryptodata))
## # A tibble: 1 x 1
##        n
##    <int>
## 1 262402

### 4.3.1 Remove symbols without enough rows

Because this dataset evolves over time, we will need to be proactive about issues that may arise even if they aren’t currently a problem.

What happens if a new cryptocurrency gets added to the cryptocurrency exchange? If we only had a couple of days of data for an asset, that would not be enough information to build effective predictive models. We may even run into actual errors, since the data will be further split into more groups to validate the results of the models against several datasets using cross validation (more on that later).
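To illustrate how a group-aware filter behaves, here is a minimal sketch on a made-up toy tibble (not our actual dataset), using a smaller threshold of 3 rows:

```r
library(dplyr)

# Toy tibble: the hypothetical symbol "NEW" only has 2 observations
toy <- tibble(symbol    = c("BTC", "BTC", "BTC", "NEW", "NEW"),
              price_usd = c(26060, 26045, 26050, 1.02, 1.03))

toy <- group_by(toy, symbol)

# n() is evaluated per group, so the filter keeps or drops whole symbols
filter(toy, n() >= 3)
# Only the 3 BTC rows remain; the "NEW" group is dropped entirely
```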

To ensure we have a reasonable amount of data for each individual cryptocurrency, let’s filter out any cryptocurrencies that don’t have at least 1,000 observations using the filter() function:

cryptodata <- filter(cryptodata, n() >= 1000)

The number of rows for the cryptodata dataset before the filtering step was 262402 and is now 219714. This step removed 127 cryptocurrencies from the analysis that did not have enough observations associated with them.

### 4.3.2 Remove symbols without data from the last 3 days

If there was no data collected for a cryptocurrency over the last 3 days, let’s exclude that asset from the dataset, since we are only looking to model data that is currently flowing through the process. If an asset is removed from the exchange (for example because a project is a scam) or is no longer being actively captured by the data collection process, we can’t make new predictions for it, so we might as well exclude it ahead of time.

cryptodata <- filter(cryptodata, max(date) > Sys.Date()-3)

The number of rows for the cryptodata dataset before this filtering step was 219714 and is now 215024.

## 4.4 Calculate Target

Our goal is to be able to make predictions on the price of each cryptocurrency 24 hours into the future from when the data was collected. Therefore, the target variable for our predictive models is the price 24 hours into the future relative to when the data was collected.

To do this we can create a new column in the data that is the price_usd offset by 24 rows (one for each hour), but before we can do that we need to make sure there are no gaps anywhere in the data.
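To see why gaps matter, here is a minimal sketch (with made-up numbers) showing that lead() offsets by rows rather than by elapsed time:

```r
library(dplyr)

# Hypothetical hourly prices with a missing hour (02:00 was never collected)
prices <- c(100, 101, 103, 104)  # hours 00, 01, 03, 04

# lead() shifts by a fixed number of ROWS, not by elapsed time
lead(prices, 1)
# Returns 101 103 104 NA: the value paired with hour 01 is actually
# the price from hour 03, two hours later, because of the gap
```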

### 4.4.1 Convert to tsibble

We can fill any gaps in the data using the tsibble package (Wang et al. 2020), which was covered in more detail in the high-level version of the tutorial.

#### 4.4.1.1 Convert to hourly data

The data we are using was collected between the 0th and the 5th minute of every hour; it is collected in the same order every hour to try and get the timing as consistent as possible for each cryptocurrency, but the cadence is not exactly one hour. Therefore, if we convert the data now to a tsibble object, it would recognize the data as being collected on the wrong cadence.
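The idea behind the fix can be sketched in base R on a made-up timestamp (the pkDummy-based approach used below achieves the same thing with the fields already in our data):

```r
# Toy example: truncating a hypothetical collection timestamp to the hour
collected <- as.POSIXct("2023-08-26 14:03:27", tz = "UTC")

# format() keeps the date and hour, generalizing minutes/seconds to 00:00
as.POSIXct(format(collected, "%Y-%m-%d %H:00:00"), tz = "UTC")
# "2023-08-26 14:00:00 UTC", a timestamp on an exact hourly cadence
```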

To fix this issue, let’s create a new column called ts_index using the mutate() function which will keep the information relating to the date and hour collected, but generalize the minutes and seconds as “00:00”, which will be correctly recognized by the tsibble package as being data collected on an hourly basis. The pkDummy field contains the date and hour, so we can add the text “:00:00” to the end of that field, and then convert the new string to a date time object using the anytime() function from the anytime package (Eddelbuettel 2020):

cryptodata <- mutate(cryptodata, ts_index = anytime(paste0(pkDummy,':00:00')))

Before we can convert the data to be a tsibble and easily fill in the gaps, we also need to make sure there are no duplicate values in the ts_index column for each cryptocurrency. There shouldn’t be any duplicates, but just in case any make their way into the data somehow, we can use the distinct() function from the dplyr package to prevent the issue from potentially arising:

cryptodata <- distinct(cryptodata, symbol, ts_index, .keep_all=TRUE)

Now we can finally convert the table to a tsibble data type by using the as_tsibble() function from the tsibble package (Wang et al. 2020), and providing the symbol column for the key parameter to preserve the grouped structure:

cryptodata <- as_tsibble(cryptodata, index = ts_index, key = symbol)

Notice how the preview of the data below looks a bit different from the summary we were seeing up to this point: it now says “A tsibble”, and next to the table dimensions it says [1h], indicating the observations are 1 hour apart from each other. The second row tells us the “Key” of the tsibble is the symbol column:

cryptodata
## # A tsibble: 215,024 x 34 [1h] <UTC>
## # Key:       symbol [164]
## # Groups:    symbol [164]
##    <chr> <chr>  <chr>                <dbl>          <dbl>       <dbl>
##  1 1INC~ 1INCH  USD                  0.320          963.        0.320
##  2 1INC~ 1INCH  USD                  0.320          963.        0.320
##  3 1INC~ 1INCH  USD                  0.324          963.        0.324
##  4 1INC~ 1INCH  USD                  0.326            0.1       0.326
##  5 1INC~ 1INCH  USD                  0.324          963.        0.324
##  6 1INC~ 1INCH  USD                  0.324          892.        0.324
##  7 1INC~ 1INCH  USD                  0.322          963.        0.322
##  8 1INC~ 1INCH  USD                  0.320         9604.        0.320
##  9 1INC~ 1INCH  USD                  0.322          963.        0.322
## 10 1INC~ 1INCH  USD                  0.320          963.        0.321
## # ... with 215,014 more rows, and 28 more variables: ask_2_quantity <dbl>,
## #   bid_1_price <dbl>, bid_1_quantity <dbl>, bid_2_price <dbl>,
## #   bid_2_quantity <dbl>, bid_3_price <dbl>, bid_3_quantity <dbl>,
## #   bid_4_price <dbl>, bid_4_quantity <dbl>, bid_5_price <dbl>,
## #   bid_5_quantity <dbl>, date_time_utc <dttm>, date <date>, pkDummy <chr>,
## #   trade_usd_4 <dbl>, trade_usd_5 <dbl>, price_usd <dbl>, ts_index <dttm>

### 4.4.2 Fill gaps

Now we can use the fill_gaps() function from the tsibble package to fill any gaps found in the data as implicit missing values. Meaning, we will add these rows into the data with NA values for everything except the date time field. This will allow us to safely compute the target price found 24 hours into the future relative to when each row was collected.
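As a minimal sketch of what fill_gaps() does (on a made-up two-row tsibble, not our actual dataset):

```r
library(tsibble)

# Toy tsibble with a missing hour between the two observations
toy <- tsibble(ts_index  = as.POSIXct(c("2023-08-26 00:00:00",
                                        "2023-08-26 02:00:00"), tz = "UTC"),
               price_usd = c(100, 103),
               index = ts_index)

fill_gaps(toy)
# Adds a row for 01:00:00 with price_usd = NA, so every hour is present
```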

cryptodata <- fill_gaps(cryptodata)

Now looking at the data again, there are 9063 additional rows that were added as implicitly missing in the data:

cryptodata
## # A tsibble: 224,087 x 34 [1h] <UTC>
## # Key:       symbol [164]
## # Groups:    symbol [164]
##    <chr> <chr>  <chr>                <dbl>          <dbl>       <dbl>
##  1 1INC~ 1INCH  USD                  0.320          963.        0.320
##  2 1INC~ 1INCH  USD                  0.320          963.        0.320
##  3 1INC~ 1INCH  USD                  0.324          963.        0.324
##  4 1INC~ 1INCH  USD                  0.326            0.1       0.326
##  5 1INC~ 1INCH  USD                  0.324          963.        0.324
##  6 1INC~ 1INCH  USD                  0.324          892.        0.324
##  7 1INC~ 1INCH  USD                  0.322          963.        0.322
##  8 1INC~ 1INCH  USD                  0.320         9604.        0.320
##  9 1INC~ 1INCH  USD                  0.322          963.        0.322
## 10 1INC~ 1INCH  USD                  0.320          963.        0.321
## # ... with 224,077 more rows, and 28 more variables: ask_2_quantity <dbl>,
## #   bid_1_price <dbl>, bid_1_quantity <dbl>, bid_2_price <dbl>,
## #   bid_2_quantity <dbl>, bid_3_price <dbl>, bid_3_quantity <dbl>,
## #   bid_4_price <dbl>, bid_4_quantity <dbl>, bid_5_price <dbl>,
## #   bid_5_quantity <dbl>, date_time_utc <dttm>, date <date>, pkDummy <chr>,
## #   trade_usd_4 <dbl>, trade_usd_5 <dbl>, price_usd <dbl>, ts_index <dttm>

Now that all of the gaps have been filled in, let’s convert the data back to a tibble, which is the data structure that supports the grouping we discussed previously, and let’s group the data by the symbol again:

cryptodata <- group_by(as_tibble(cryptodata), symbol)

### 4.4.3 Calculate Target

Now we finally have everything we need to calculate the target variable containing the price 24 hours into the future relative to when the data was collected. We can use the usual mutate() function to add a new column to the data called target_price_24h, and use the lead() function from dplyr to offset the price_usd column by 24 hours:

cryptodata <- mutate(cryptodata,
target_price_24h = lead(price_usd, 24, order_by=ts_index))

### 4.4.4 Calculate Lagged Prices

What about doing the opposite? If we added a new column showing the price from 24 hours earlier, could the price movement between then and when the data was collected help us predict where the price is headed next? If the price has gone down significantly over the previous 24 hours, is the price for the next 24 hours more likely to increase or decrease? What if the price has gone down significantly over the previous 24 hours, but has increased significantly since the past hour?

These relationships around the sensitivity of a price to recent price changes may help our models come up with more accurate forecasts about the future, so let’s go ahead and add some lagged prices using the same methodology used to calculate the target variable, but this time using the lag() function to get past observations instead of the lead() function used before:

cryptodata <- mutate(cryptodata,
lagged_price_1h  = lag(price_usd, 1, order_by=ts_index),
lagged_price_2h  = lag(price_usd, 2, order_by=ts_index),
lagged_price_3h  = lag(price_usd, 3, order_by=ts_index),
lagged_price_6h  = lag(price_usd, 6, order_by=ts_index),
lagged_price_12h = lag(price_usd, 12, order_by=ts_index),
lagged_price_24h = lag(price_usd, 24, order_by=ts_index),
lagged_price_3d  = lag(price_usd, 24*3, order_by=ts_index))

This step can be thought of as data engineering more than data cleaning, because rather than fixing an issue we are enhancing the dataset with columns that may help with the forecasts.

Let’s view an example of the oldest 30 rows of data associated with the Bitcoin cryptocurrency (symbol == "BTC"). With the oldest data starting from the top, the lagged_price_1h field should have an NA value for the first row because we don’t have any prices before that point. By the same logic, the lagged_price_24h column should be missing the first 24 values, with the last 6 values shown matching the first 6 rows of the price_usd column. The target_price_24h column, on the other hand, should have values for the oldest data, because the opposite is true there: we don’t know the values for the most recent 24 rows of the data:

print(select(filter(cryptodata, symbol == 'BTC'),
ts_index, price_usd, lagged_price_1h,
lagged_price_24h, target_price_24h), n=30)
## # A tibble: 1,453 x 6
## # Groups:   symbol [1]
##    symbol ts_index            price_usd lagged_price_1h lagged_price_24h
##    <chr>  <dttm>                  <dbl>           <dbl>            <dbl>
##  1 BTC    2023-06-27 00:00:00    30276.             NA               NA
##  2 BTC    2023-06-27 01:00:00    30359.          30276.              NA
##  3 BTC    2023-06-27 02:00:00    30374.          30359.              NA
##  4 BTC    2023-06-27 03:00:00    30493.          30374.              NA
##  5 BTC    2023-06-27 04:00:00    30370.          30493.              NA
##  6 BTC    2023-06-27 05:00:00    30353.          30370.              NA
##  7 BTC    2023-06-27 06:00:00    30320.          30353.              NA
##  8 BTC    2023-06-27 07:00:00    30329.          30320.              NA
##  9 BTC    2023-06-27 08:00:00    30411.          30329.              NA
## 10 BTC    2023-06-27 09:00:00    30395.          30411.              NA
## 11 BTC    2023-06-27 10:00:00    30382.          30395.              NA
## 12 BTC    2023-06-27 11:00:00    30660.          30382.              NA
## 13 BTC    2023-06-27 12:00:00    30735.          30660.              NA
## 14 BTC    2023-06-27 13:00:00    30718.          30735.              NA
## 15 BTC    2023-06-27 14:00:00    30612.          30718.              NA
## 16 BTC    2023-06-27 15:00:00    30809.          30612.              NA
## 17 BTC    2023-06-27 16:00:00    30501.          30809.              NA
## 18 BTC    2023-06-27 17:00:00    30716.          30501.              NA
## 19 BTC    2023-06-27 18:00:00    30737.          30716.              NA
## 20 BTC    2023-06-27 19:00:00    30693.          30737.              NA
## 21 BTC    2023-06-27 20:00:00       NA           30693.              NA
## 22 BTC    2023-06-27 21:00:00       NA              NA               NA
## 23 BTC    2023-06-27 22:00:00       NA              NA               NA
## 24 BTC    2023-06-27 23:00:00       NA              NA               NA
## 25 BTC    2023-06-28 00:00:00    30696.             NA            30276.
## 26 BTC    2023-06-28 01:00:00    30613.          30696.           30359.
## 27 BTC    2023-06-28 02:00:00    30505.          30613.           30374.
## 28 BTC    2023-06-28 03:00:00    30418.          30505.           30493.
## 29 BTC    2023-06-28 04:00:00    30459.          30418.           30370.
## 30 BTC    2023-06-28 05:00:00    30489.          30459.           30353.
## # ... with 1,423 more rows, and 1 more variable: target_price_24h <dbl>

We can wrap the code used above in the tail() function to show the most recent data and see the opposite dynamic with the new fields we created:

print(tail(select(filter(cryptodata, symbol == 'BTC'),
ts_index, price_usd, lagged_price_24h,
target_price_24h),30), n=30)
## # A tibble: 30 x 5
## # Groups:   symbol [1]
##    symbol ts_index            price_usd lagged_price_24h target_price_24h
##    <chr>  <dttm>                  <dbl>            <dbl>            <dbl>
##  1 BTC    2023-08-25 07:00:00    26018.           26452.           26075.
##  2 BTC    2023-08-25 08:00:00    25979.           26476.           26051.
##  3 BTC    2023-08-25 09:00:00    26095.           26504.           26044.
##  4 BTC    2023-08-25 10:00:00    26077.           26443.           26014.
##  5 BTC    2023-08-25 11:00:00    26091.           26470.           26030.
##  6 BTC    2023-08-25 12:00:00    26103.           26445.           26048.
##  7 BTC    2023-08-25 13:00:00    26106.           26388.              NA
##  8 BTC    2023-08-25 14:00:00    26068.           26361.              NA
##  9 BTC    2023-08-25 15:00:00    25832.           26120.              NA
## 10 BTC    2023-08-25 16:00:00    25927.           26004.              NA
## 11 BTC    2023-08-25 17:00:00    25932.           26109.              NA
## 12 BTC    2023-08-25 18:00:00    25897.           26065.              NA
## 13 BTC    2023-08-25 19:00:00    26056.           26027.              NA
## 14 BTC    2023-08-25 20:00:00    25978.           26045.              NA
## 15 BTC    2023-08-25 21:00:00    26057.           26034.              NA
## 16 BTC    2023-08-25 22:00:00    26081.           26096.              NA
## 17 BTC    2023-08-25 23:00:00    26052.           26117.              NA
## 18 BTC    2023-08-26 00:00:00    26060.           26184.              NA
## 19 BTC    2023-08-26 01:00:00    26045.           26144.              NA
## 20 BTC    2023-08-26 02:00:00    26050.           26153.              NA
## 21 BTC    2023-08-26 03:00:00    26077.           26132.              NA
## 22 BTC    2023-08-26 04:00:00    26093.           26109.              NA
## 23 BTC    2023-08-26 05:00:00    26079.           26030.              NA
## 24 BTC    2023-08-26 06:00:00    26060.           26056.              NA
## 25 BTC    2023-08-26 07:00:00    26075.           26018.              NA
## 26 BTC    2023-08-26 08:00:00    26051.           25979.              NA
## 27 BTC    2023-08-26 09:00:00    26044.           26095.              NA
## 28 BTC    2023-08-26 10:00:00    26014.           26077.              NA
## 29 BTC    2023-08-26 11:00:00    26030.           26091.              NA
## 30 BTC    2023-08-26 12:00:00    26048.           26103.              NA

Reading the code shown above is less than ideal. One of the more popular tools introduced by the tidyverse is the %>% operator, which starts with the object/data you want to manipulate and then applies each transformation step by step. It’s simply a way of re-writing the same code so it is easier to read, splitting out each function call rather than nesting functions inside each other on a single line that becomes really hard to read. In the example above it is difficult to keep track of where things begin, the order of operations, and the parameters associated with each function. Compare that to the code below:

# Start with the object/data to manipulate
cryptodata %>%
# Filter the data to only the BTC symbol
filter(symbol == 'BTC') %>%
# Select columns to display
select(ts_index, price_usd, lagged_price_24h, target_price_24h) %>%
# Show the last 30 elements of the data
tail(30) %>%
# Show all 30 elements instead of the default 10 for a tibble dataframe
print(n = 30)
## # A tibble: 30 x 5
## # Groups:   symbol [1]
##    symbol ts_index            price_usd lagged_price_24h target_price_24h
##    <chr>  <dttm>                  <dbl>            <dbl>            <dbl>
##  1 BTC    2023-08-25 07:00:00    26018.           26452.           26075.
##  2 BTC    2023-08-25 08:00:00    25979.           26476.           26051.
##  3 BTC    2023-08-25 09:00:00    26095.           26504.           26044.
##  4 BTC    2023-08-25 10:00:00    26077.           26443.           26014.
##  5 BTC    2023-08-25 11:00:00    26091.           26470.           26030.
##  6 BTC    2023-08-25 12:00:00    26103.           26445.           26048.
##  7 BTC    2023-08-25 13:00:00    26106.           26388.              NA
##  8 BTC    2023-08-25 14:00:00    26068.           26361.              NA
##  9 BTC    2023-08-25 15:00:00    25832.           26120.              NA
## 10 BTC    2023-08-25 16:00:00    25927.           26004.              NA
## 11 BTC    2023-08-25 17:00:00    25932.           26109.              NA
## 12 BTC    2023-08-25 18:00:00    25897.           26065.              NA
## 13 BTC    2023-08-25 19:00:00    26056.           26027.              NA
## 14 BTC    2023-08-25 20:00:00    25978.           26045.              NA
## 15 BTC    2023-08-25 21:00:00    26057.           26034.              NA
## 16 BTC    2023-08-25 22:00:00    26081.           26096.              NA
## 17 BTC    2023-08-25 23:00:00    26052.           26117.              NA
## 18 BTC    2023-08-26 00:00:00    26060.           26184.              NA
## 19 BTC    2023-08-26 01:00:00    26045.           26144.              NA
## 20 BTC    2023-08-26 02:00:00    26050.           26153.              NA
## 21 BTC    2023-08-26 03:00:00    26077.           26132.              NA
## 22 BTC    2023-08-26 04:00:00    26093.           26109.              NA
## 23 BTC    2023-08-26 05:00:00    26079.           26030.              NA
## 24 BTC    2023-08-26 06:00:00    26060.           26056.              NA
## 25 BTC    2023-08-26 07:00:00    26075.           26018.              NA
## 26 BTC    2023-08-26 08:00:00    26051.           25979.              NA
## 27 BTC    2023-08-26 09:00:00    26044.           26095.              NA
## 28 BTC    2023-08-26 10:00:00    26014.           26077.              NA
## 29 BTC    2023-08-26 11:00:00    26030.           26091.              NA
## 30 BTC    2023-08-26 12:00:00    26048.           26103.              NA

There are several advantages to writing code the tidy way, but some love it while others hate it, so we won’t force anyone to understand how the %>% operator works. We have stayed away from its use in the rest of the code shown, but we do encourage the use of this tool: https://magrittr.tidyverse.org/reference/pipe.html

## 4.5 Remove Nulls

We can’t do anything with a row of data if we don’t know when the data was collected, so let’s just double confirm that all rows have a value for the date_time_utc field by using the filter() function from the dplyr package to exclude any rows with NA values for the column:

# Remove all NA values of date_time_utc:
cryptodata <- filter(cryptodata, !is.na(date_time_utc))

This step removed 9063 rows from the data, and mainly helps us avoid issues when programmatically labeling charts in the next section. Move on to the next section ➡️ to learn some amazingly powerful tools to visualize data!

### References

Eddelbuettel, Dirk. 2020. Anytime: Anything to Posixct or Date Converter. http://dirk.eddelbuettel.com/code/anytime.html.

Wang, Earo, Di Cook, Rob Hyndman, and Mitchell O’Hara-Wild. 2020. Tsibble: Tidy Temporal Data Frames and Tools. https://tsibble.tidyverts.org.

Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2020. Dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.