IMPORTANT

Please note that the following price predictions are for reference only. The stock market, other financial markets, and the economy in general are complex systems that are beyond accurate prediction. As George E. P. Box suggested, all models are wrong; we can only hope that the model we have built is useful. Consequently, we shall NOT be responsible for any loss or damage, howsoever caused, arising from reliance on the information in this report.

Date: 2022-11-22

We specialize in forecasting short-term price movements of stocks, commodities, exchange rates and other assets using machine learning.

The predicted closing prices (the Forecast column in the table below) for the coming five days are as follows. If a day has no trading, the prediction represents the price that would have been expected had trading occurred.

## # A tsibble: 5 x 4 [1D]
##   Date       Forecast                  80%                  95%
##   <date>        <dbl>                 <hilo>                 <hilo>
## 1 2022-11-22    1737. [1716.505, 1756.631]80 [1707.013, 1767.591]95
## 2 2022-11-23    1736. [1707.849, 1764.709]80 [1693.671, 1778.675]95
## 3 2022-11-24    1735. [1700.865, 1770.470]80 [1683.444, 1788.698]95
## 4 2022-11-25    1734. [1694.674, 1774.668]80 [1675.346, 1797.355]95
## 5 2022-11-26    1734. [1690.250, 1777.821]80 [1666.965, 1803.656]95

The predictions, together with the 80% and 95% prediction intervals (if available), are displayed graphically below:

The best model used for the above predictions, that is, the model with the minimum error, is Neural Network Autoregression.

The following might be more interesting to technically oriented readers.

The price predictions are based on a competition among the following machine-learning forecasting models:

1. Additive Holt-Winters Method AAA
2. ARIMA
3. Auto ETS
4. Exponential Smoothing with Box-Cox Transformation
5. Holt Linear Method AAN
6. Holt-Winters Damped Method with Multiplicative Error MAdM
7. Mean
8. Multiplicative Holt-Winters Method MAM
9. Neural Network Autoregression
10. Random Walk
11. Random Walk Drift
12. SNAIVE
13. Theta Additive
14. Theta Multiplicative
15. TSLM
16. Combined Model based on a weighted average of the above models.
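The last entry combines the individual forecasts into one. A minimal sketch of such a weighted average, with hypothetical model names and weights (the report does not state how the weights are chosen):

```python
def combine(forecasts, weights):
    """Weighted average of equal-length forecast lists, one per model."""
    total = sum(weights.values())
    horizon = len(next(iter(forecasts.values())))
    return [sum(weights[m] * forecasts[m][h] for m in forecasts) / total
            for h in range(horizon)]

# Toy two-day forecasts from two models, weighted 2:1.
forecasts = {"ARIMA": [100.0, 102.0], "Random Walk": [98.0, 98.0]}
weights = {"ARIMA": 2.0, "Random Walk": 1.0}
combined = combine(forecasts, weights)
# combined[0] = (2*100 + 1*98) / 3
```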

The machine learning program splits the data into a training dataset and a testing dataset. The training dataset is used to build the forecasting models listed above, and the testing dataset is then used to determine which of those models has the lowest prediction error. The following error measures are computed for each model on the testing dataset:

1. RMSE or root mean squared error
2. MAE or mean absolute error
3. MAPE or mean absolute percentage error
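The three error measures can be computed as in the following sketch (toy numbers, not taken from the report):

```python
import math

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (undefined if any actual value is 0)."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 110.0, 120.0]    # toy test-set prices
forecast = [98.0, 112.0, 121.0]   # toy model forecasts
```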

The models are ranked separately on each of the above error measures. The rank sum of each model is then computed, and the best model is the one with the lowest rank sum.
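The selection rule can be sketched as follows. The scores are (RMSE, MAE, MAPE) triples from three rows of the error table further below; `rank_sum` is a hypothetical helper, not the program's actual code, and ties are ignored for simplicity:

```python
def rank_sum(scores):
    """Rank models on each metric separately and sum the ranks per model."""
    sums = {m: 0 for m in scores}
    for metric in range(3):                      # 0=RMSE, 1=MAE, 2=MAPE
        ordered = sorted(scores, key=lambda m: scores[m][metric])
        for rank, m in enumerate(ordered, start=1):
            sums[m] += rank
    return sums

scores = {
    "Neural Network Autoregression": (55.9, 43.1, 2.48),
    "ARIMA": (69.2, 52.7, 3.02),
    "Mean": (639.0, 636.0, 37.4),
}
rs = rank_sum(scores)
best = min(rs, key=rs.get)   # lowest rank sum wins
```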

After the best model is identified, it is refit using the full dataset, and the refitted model is used to predict the next 5 observations.

Details of the best model created are as follows:

## Series: price_adjusted
## Model: NNAR(1,1,2)[7]
##
## Average of 20 networks, each of which is
## a 2-2-1 network with 9 weights
## options were - linear output units
##
## sigma^2 estimated as 2.231e-05
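In this notation, NNAR(p, P, k)[m] feeds the network lags 1..p plus seasonal lags m, 2m, ..., P*m. For NNAR(1,1,2)[7] the inputs are y[t-1] and y[t-7], giving the reported 2-2-1 network: 2*2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias = 9 weights. A minimal sketch of how the lagged training rows are formed (the helper is illustrative, not the actual implementation):

```python
def nnar_inputs(y, p=1, P=1, m=7):
    """Return (inputs, target) training rows built from lags 1..p and m..P*m."""
    lags = list(range(1, p + 1)) + [m * k for k in range(1, P + 1)]
    start = max(lags)            # first index with all lags available
    return [([y[t - lag] for lag in lags], y[t]) for t in range(start, len(y))]

y = list(range(10))              # toy series: 0, 1, ..., 9
rows = nnar_inputs(y)
# first row pairs inputs [y[6], y[0]] with target y[7]
```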

A Box-Cox transformation is applied when necessary, with the lambda chosen so that the residuals are roughly homoscedastic.
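For a positive series y, the Box-Cox transform is (y^lambda - 1)/lambda when lambda != 0 and log(y) when lambda = 0. A minimal sketch of the transform itself (the method used to choose lambda is not stated in the report):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: (v**lam - 1)/lam for lam != 0, log(v) for lam == 0."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

y = [1.0, 2.0, 4.0]
logs = box_cox(y, 0)       # lambda = 0: natural logarithm
halves = box_cox(y, 0.5)   # lambda = 0.5: square-root-like transform
```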

The errors of the competing models are as follows. The lower the Rank Sum, the more accurate the model.

## # A tibble: 18 x 5
##    Model                                             RMSE   MAE  MAPE Rank Sum
##    <chr>                                            <dbl> <dbl> <dbl>      <dbl>
##  1 Neural Network Autoregression                     55.9  43.1  2.48          3
##  2 ARIMA                                             69.2  52.7  3.02          6
##  3 Random Walk Drift                                 69.6  52.9  3.03          9
##  4 Holt Linear Method AAN                            69.8  53.0  3.04         12
##  5 Theta Additive                                    70.8  53.8  3.08         15
##  6 Theta Multiplicative                              71.0  53.9  3.09         18
##  7 Exponential Smoothing with Box-Cox Transformati~  71.2  54.0  3.10         21
##  8 Additive Holt-Winters Method AAA                  71.8  54.4  3.12         24
## 18 Mean                                             639.  636.  37.4          54