Multivariate volatility forecasting (3), Exponentially weighted model

Broadly speaking, complex models can achieve great predictive accuracy. Nonetheless, the winner of a Kaggle competition is only required to attach the code that replicates the winning result. She is not required to teach anyone which elements of her model give it the specific edge over other competitors. In a corporate setting, your manager, and her manager, and so forth, MUST feel comfortable with the underlying model. Mumbling something like “This artificial neural network is obtained using a grid search over a range of parameters and connection weights, where the architecture itself is fixed beforehand…”, forget it!

Your audience needs to understand, and understand fast. They don’t have the will or the time to pick up anything too tedious, even if it is slightly more accurate. Simplicity is a very important model-selection criterion in business. In multivariate volatility estimation, the simplest option is the historical covariance matrix. But that is too simple; we already know volatility is time-varying. You often see practitioners use a rolling standard deviation as a way to model time-varying volatility. It may be less accurate than state-of-the-art methods like Range-based Covariance Estimation, but it is very simple to implement and easy to explain.

This is where exponentially weighted covariance estimation steps in. What is a rolling-window estimate if not an equally weighted average of the past within the window, with zero weight outside the window? If we have a vector of 5 observations and we use a window of 2, then the vector of weights for estimation is [0, 0, 0, 0.5, 0.5]. A step further is to give at least some weight to the more distant past, but to weight the most recent observations more heavily, say the weight vector [0.05, 0.1, 0.15, 0.3, 0.4]. This simple idea of weights which decay towards zero as observations recede into the past is old, yet still prosperous in the time series literature.
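To make the two weighting schemes concrete, here is a minimal sketch in R (the variable names are mine, purely for illustration):

# Illustrative weight vectors over 5 past observations (oldest first)
n <- 5

# Rolling window of length 2: equal weight inside the window, zero outside
w_rolling <- c(rep(0, n - 2), rep(1/2, 2))   # 0.00 0.00 0.00 0.50 0.50

# Exponentially decaying weights with decay parameter lambda,
# rescaled so that they sum to one
lambda <- 0.94
w_ewma <- lambda^((n - 1):0)                 # most recent observation gets the largest raw weight
w_ewma <- w_ewma / sum(w_ewma)
round(w_ewma, 3)                             # weights increase towards the most recent observation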

In accordance with the stylized fact that low volatility follows low volatility and high volatility follows high volatility (volatility clustering), this idea is perfectly suited for multivariate volatility forecasting. Consider the following:

(1)   \begin{equation*}  D_t = (1-\lambda) \sum_{i=1}^{\infty} \lambda^{i-1} \varepsilon_{t-i}\varepsilon^\prime_{t-i} = (1-\lambda)\varepsilon_{t-1}\varepsilon^\prime_{t-1}+\lambda D_{t-1}, \end{equation*}

where D_t is the current estimate of the covariance matrix, and D_{t-1} is the covariance matrix estimated from the past, up until period t-1. Another interpretation I carry is that we use the simplest estimate possible, the historical covariance matrix, but add some weight (1-\lambda, to be exact) to a covariance matrix estimated from only the most recent observation. This is really easy to explain, which makes it very popular, an industry standard almost. We can estimate how fast we would like the weights to decline, but you can also rely on previous research; RiskMetrics, for example, sets the decay parameter at 0.94 for daily data.
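To make the recursion concrete, here is a minimal sketch in R of equation (1) (illustrative only, and not the function used below), assuming rets is a T x N matrix of demeaned returns and using the sample covariance of the series as the starting value:

ewma_cov <- function(rets, lambda = 0.94) {
  # Minimal EWMA covariance recursion, illustrating equation (1)
  rets <- as.matrix(rets)
  TT <- nrow(rets); N <- ncol(rets)
  Dt <- array(NA, dim = c(TT, N, N))
  Dt[1, , ] <- cov(rets)                 # starting value: the sample covariance matrix
  for (t in 2:TT) {
    eps <- rets[t - 1, ]                 # the most recent available observation
    Dt[t, , ] <- (1 - lambda) * tcrossprod(eps) + lambda * Dt[t - 1, , ]
  }
  Dt                                     # a TT x N x N array of covariance estimates
}

Correlation matrices can then be recovered by applying cov2cor() to each time-slice of the returned array.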

A function written by Eric Zivot does all the work for us (scroll down for the function’s code). To see how it looks, I use the same data as in my previous posts and plot the correlation estimate over time for a couple of different lambda values:

[Figure: EWMA correlation estimates over time for different lambda values]
You can see that if you assign 15% weight to the most recent observation (lambda = 0.85) you get somewhat volatile estimates. A weight of only 5% (lambda = 0.95) gives a smoother, but perhaps less accurate, estimate.

Apart from the simplicity, another important advantage is that there is no need to worry about invertibility, since at each point in time the estimate is simply a weighted average of two valid covariance matrices (see post number (1) for more information on that). You can also apply this method to any financial instrument, liquid or illiquid, which is another reason for its popularity.
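For completeness, the argument behind this is one line: for any vector x,

\begin{equation*} x^\prime D_t x = (1-\lambda)(\varepsilon^\prime_{t-1} x)^2 + \lambda \, x^\prime D_{t-1} x \geq 0, \end{equation*}

so as long as the starting matrix is a proper (positive definite) covariance matrix and 0 < \lambda < 1, every subsequent D_t remains positive definite, and hence invertible.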

References
Hyndman, R.J., Koehler, A.B., Ord, J.K. and Snyder, R.D. (2008). Forecasting with Exponential Smoothing: The State Space Approach. Springer Series in Statistics, Springer.

UPDATE on 2015-Oct-14:
An astute reader rightly commented that the function is not suitable for out-of-sample prediction, and this is correct. The reason is that we shrink towards the sample covariance matrix, which is based on the full sample, and the full sample is not known before it ends. In a realistic setting we can only use information available up to the point at which we wish to predict. Subsequently, I altered the original function to include one extra parameter: the initial window length for estimating the covariance matrix. The initial covariance matrix is then computed using only information up to the time the prediction is made, as is the scaling. The figures do not change much. The new modified function is below.
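As a rough illustration of that modification (my own sketch, not the modified function itself; the init argument and all names are mine), the starting covariance matrix is computed from the first init observations only, and the recursion then moves forward using only past information:

ewma_cov_oos <- function(rets, lambda = 0.94, init = 50) {
  # EWMA covariance with an initial estimation window, so the estimate at
  # time t uses only information available up to time t (illustrative sketch)
  rets <- as.matrix(rets)
  TT <- nrow(rets); N <- ncol(rets)
  Dt <- array(NA, dim = c(TT, N, N))
  Dt[init, , ] <- cov(rets[1:init, , drop = FALSE])  # first init observations only
  for (t in (init + 1):TT) {
    eps <- rets[t - 1, ]
    Dt[t, , ] <- (1 - lambda) * tcrossprod(eps) + lambda * Dt[t - 1, , ]
  }
  Dt   # entries before init remain NA; later entries rely only on past data
}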

2 comments on “Multivariate volatility forecasting (3), Exponentially weighted model”

  1. Eran,

    Is this method an online and vectorized method? Because I don’t see you calling a for loop beyond getting the instruments. Is this output something that can be used for a rolling backtest?

    -Ilya
