The evaluation of volatility models is greatly complicated by the fact that, unlike other time series, even the realization is not observable. Two researchers would never disagree about yesterday's stock price, but they can easily disagree about yesterday's stock volatility. Because we don't observe volatility directly, each of us uses a proxy of our own choosing. There are many ways to skin this cat (more on volatility proxies here).

In a previous post, Univariate volatility forecast evaluation, we considered common ways to evaluate how good our volatility model is, dealing with one time series at a time. But how do we evaluate, or compare, two models in a multivariate setting, with two covariance *matrices*?

The solution can be found in Robert Engle's research factory. Specifically: Testing and Valuing Dynamic Correlations for Asset Allocation, Journal of Business & Economic Statistics, 2006, Vol. 24(2). The paper has around 200 citations, which makes it highly under-cited (and under-utilized, if you believe the connection between citations and usage).

You are welcome to read the full paper, but the main idea can be summarized quite quickly and simply. This is the key message directly from the paper:

Theorem 1 says that the conditional variance of every optimized portfolio will be greater than or equal to the conditional variance of the portfolio optimized on the true covariance matrix.

…

Theorem 1 offers a strategy for comparing covariance matrices, based on the idea of choosing covariance matrices that achieve lowest portfolio variance for all relevant expected returns.

The idea is quite intuitive, but it is useful that someone has also formally pinned down the theory behind it. So, given two estimates of the covariance matrix, construct a portfolio from each. Of the two, the portfolio built on the more accurate covariance estimate will have the lower variance. This holds for any vector of expected returns and any required excess return. With this move from covariance matrices to portfolios, you are back from the multivariate setting to the univariate setting!

Formally, if μ is the vector of expected excess returns and μ_0 is the required return, you can find the portfolio weights w_t using H_t as your covariance at time t:

(1)   w_t = argmin_w w' H_t w   subject to   w' μ = μ_0,

whose solution is w_t = μ_0 · H_t^{-1} μ / (μ' H_t^{-1} μ).

Once you have the weights, you can construct the portfolio and voilà. The lower the variance of the portfolio compared with that of another portfolio, the better the underlying covariance estimate. Note: the only thing that changes should be the covariance matrix, so that you compare apples to apples (no change in the vector of expected returns or in the required return, for example).
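As a toy illustration of Theorem 1, here is a minimal base-R sketch. All the numbers are made up for demonstration, and the second estimate is deliberately misspecified (it ignores the correlation): evaluated under the true covariance, the portfolio built from the true matrix never has higher variance.

```r
# Toy illustration of the Engle-Colacito idea (simulated, made-up numbers).
true_cov <- matrix(c(1, .5, .5, 2), 2, 2)  # the "true" covariance
bad_cov  <- diag(c(1, 2))                  # misspecified estimate: ignores correlation
mu  <- c(.05, .08)                         # expected excess returns
mu0 <- .05                                 # required excess return

# Minimum-variance weights subject to w' mu = mu0:
ec_weights <- function(H, mu, mu0) {
  w <- solve(H, mu) / drop(t(mu) %*% solve(H, mu))
  w * mu0
}
w_true <- ec_weights(true_cov, mu, mu0)
w_bad  <- ec_weights(bad_cov,  mu, mu0)

# Portfolio variance evaluated under the true covariance:
var_true <- drop(t(w_true) %*% true_cov %*% w_true)
var_bad  <- drop(t(w_bad)  %*% true_cov %*% w_bad)
var_true <= var_bad  # TRUE: the true covariance attains the lower variance
```

Both weight vectors satisfy the same required-return constraint, so the comparison is apples to apples, exactly as the theorem requires.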

In the same paper, the authors show that, following this idea, you can test the equality of two models pretty much the way it is done in the univariate case. The Diebold-Mariano-type test and the Mincer-Zarnowitz regression test are mentioned. I see no reason why the Jarque-Bera test would not qualify here as well (but it is not mentioned in the paper).
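A minimal sketch of such a Diebold-Mariano-type comparison, assuming `port_a` and `port_b` hold the two competing portfolios' return series over the same dates (placeholder simulated series stand in here). With squared returns as the (noisy) variance proxy, we test whether the mean loss differential is zero; a proper implementation would use a HAC standard error for the differential.

```r
# Sketch of a Diebold-Mariano-type test between two portfolios' variances.
# port_a and port_b are placeholders; plug in your own portfolio returns.
set.seed(2)
port_a <- rnorm(252, sd = 0.010)
port_b <- rnorm(252, sd = 0.012)

d <- port_a^2 - port_b^2  # loss differential, squared returns as variance proxy
dm_stat <- mean(d) / (sd(d) / sqrt(length(d)))  # simple s.e.; use a HAC s.e. in practice
p_value <- 2 * pnorm(-abs(dm_stat))
```

A significantly negative `dm_stat` would favor the first portfolio (lower variance), and hence the covariance estimate behind it.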

I have shown how to estimate the covariance matrix using quite a few techniques in a series of six posts (and counting). Now, I have pulled some history for the following tickers: ` c('XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU')`

, and constructed global minimum variance portfolios using different covariance estimates. I also kept one year as an out-of-sample period, so we construct the portfolio based on, say, nine years, and then check the variance of the portfolio in the subsequent year. Here is the code:

**Pull the data:**

```r
library(quantmod)

k <- 10  # how many years back?
end <- format(Sys.Date(), "%Y-%m-%d")
start <- format(Sys.Date() - (k * 365), "%Y-%m-%d")
symetf <- c('XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU')
l <- length(symetf)
w0 <- NULL
for (i in 1:l) {
  dat0 <- getSymbols(symetf[i], src = "yahoo", from = start, to = end,
                     auto.assign = FALSE, warnings = FALSE, symbol.lookup = FALSE)
  w1 <- weeklyReturn(dat0)
  w0 <- cbind(w0, w1)
}
time <- as.Date(substr(index(w0), 1, 10))
w0 <- as.matrix(w0)
colnames(w0) <- symetf
head(w0, 3)
TT <- NROW(w0)
```

**Estimate the volatility of the individual series (no covariance yet):**

```r
library(rugarch)
# Alexios Ghalanos (2014). rugarch: Univariate GARCH models. R package version 1.3-4.

gjrtspec <- ugarchspec(mean.model = list(armaOrder = c(1, 0)),
                       variance.model = list(model = "gjrGARCH"),
                       distribution = "std")
volatility_fit <- matrix(nrow = TT, ncol = l)
for (i in 1:l) {
  Tgjrmodel <- ugarchfit(gjrtspec, w0[, i])
  volatility_fit[, i] <- as.numeric(sigma(Tgjrmodel))
}
colnames(volatility_fit) <- symetf
```

I use a few methods for covariance estimation, all of which have been discussed on this blog in the past: the sample covariance (sample); CCC; DCC; equicorrelation, where we shrink the correlations towards their off-diagonal average; and equicorrelation, where we shrink them towards the average of the whole matrix. I also include a portfolio based on a simple combination of all the covariance estimates, and an equal-weights portfolio.

For finding the portfolio weights I used the `globalMin.portfolio` function, written by Eric Zivot and Tim Hesterberg (the code is given at the bottom of this post).
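Under the hood, that function implements the closed-form global minimum variance weights, w = Σ⁻¹1 / (1'Σ⁻¹1). A quick sketch on toy data (the return matrix here is simulated for illustration):

```r
# Global minimum variance weights in closed form: Sigma^{-1} 1 / (1' Sigma^{-1} 1).
set.seed(4)
R <- matrix(rnorm(300), ncol = 3)  # toy returns: 100 periods, 3 assets
Sigma <- cov(R)
w_gmv <- rowSums(solve(Sigma)) / sum(solve(Sigma))
sum(w_gmv)  # weights sum to one
```

Any other fully invested weight vector has weakly higher variance under Σ, which is exactly why the portfolio variance can serve as the yardstick for the covariance estimate.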

**Covariance estimation and portfolio construction:**

```r
# Containers for the covariance matrices over time:
samp_cov <- samp_cor_tv <- cov_dcc <- cov_ccc <- cov_dec2 <- cov_dec <-
  array(dim = c(l, l, TT))

ss <- 52    # leave out (out-of-sample period)
wind <- 52  # rolling window for estimation

# Sample unconditional correlation matrix; remains constant throughout the loop (for the CCC):
samp_cor <- cor(w0[1:(TT - ss), ])
mu <- apply(w0[1:(TT - ss), ], 2, mean)

port_weights <- list()
port_weights$Ccc  <- matrix(nrow = TT, ncol = l)
port_weights$Dcc  <- matrix(nrow = TT, ncol = l)
port_weights$Dec  <- matrix(nrow = TT, ncol = l)
port_weights$Dec2 <- matrix(nrow = TT, ncol = l)
port_weights$comb <- matrix(nrow = TT, ncol = l)
port_weights$samp <- matrix(nrow = TT, ncol = l)

for (j in (wind + 1):TT) {
  dl <- diag(l)
  dt <- volatility_fit[j, ] * dl
  cov_ccc[,,j] <- dt %*% samp_cor %*% dt
  samp_cor_tv[,,j] <- cor(w0[(j - wind):j, ])
  samp_cov[,,j] <- cov(w0[(j - wind):j, ])
  cov_dcc[,,j] <- dt %*% samp_cor_tv[,,j] %*% dt
  cross_mean  <- mean(samp_cor_tv[,,j][upper.tri(dl, diag = FALSE)]) # note diag = FALSE
  cross_mean2 <- mean(samp_cor_tv[,,j][upper.tri(dl, diag = TRUE)])  # diag = TRUE shrinks towards higher correlation
  eq_cor2 <- eq_cor <- dl
  # fill upper and lower triangles with cross_mean2
  eq_cor2[upper.tri(eq_cor2, diag = FALSE)] <- cross_mean2
  eq_cor2[lower.tri(eq_cor2, diag = FALSE)] <- cross_mean2
  # fill upper and lower triangles with cross_mean
  eq_cor[upper.tri(eq_cor, diag = FALSE)] <- cross_mean
  eq_cor[lower.tri(eq_cor, diag = FALSE)] <- cross_mean
  # Compute the covariances
  cov_dec[,,j]  <- dt %*% eq_cor %*% dt
  cov_dec2[,,j] <- dt %*% eq_cor2 %*% dt
  port_weights$Ccc[j, ]  <- globalMin.portfolio(52 * mu, cov_ccc[,,j])$weights
  port_weights$Dcc[j, ]  <- globalMin.portfolio(52 * mu, cov_dcc[,,j])$weights
  port_weights$Dec[j, ]  <- globalMin.portfolio(52 * mu, cov_dec[,,j])$weights
  port_weights$Dec2[j, ] <- globalMin.portfolio(52 * mu, cov_dec2[,,j])$weights
  port_weights$samp[j, ] <- globalMin.portfolio(52 * mu, samp_cov[,,j])$weights
  cov_comb <- (cov_dec2[,,j] + cov_dec[,,j] + cov_ccc[,,j] + cov_dcc[,,j] + samp_cov[,,j]) / 5
  port_weights$comb[j, ] <- globalMin.portfolio(52 * mu, cov_comb)$weights
}

port_ccc   <- apply(port_weights$Ccc[-TT, ]  * w0[-1, ], 1, sum)
port_dcc   <- apply(port_weights$Dcc[-TT, ]  * w0[-1, ], 1, sum)
port_dec   <- apply(port_weights$Dec[-TT, ]  * w0[-1, ], 1, sum)
port_dec2  <- apply(port_weights$Dec2[-TT, ] * w0[-1, ], 1, sum)
port_samp  <- apply(port_weights$samp[-TT, ] * w0[-1, ], 1, sum)
port_comb  <- apply(port_weights$comb[-TT, ] * w0[-1, ], 1, sum)
port_equal <- apply(1/l * w0[-1, ], 1, sum)
port_perf <- cbind(port_ccc, port_dcc, port_equal, port_dec, port_dec2, port_comb, port_samp)
colnames(port_perf) <- c("CCC", "DCC", "Equal", "Equi1", "Equi2", "Comb", "Sample")
```

Here are the results:

In red: the full sample; in black: the out-of-sample period only (52 weeks, since we use weekly data here). On the Y-axis is the annualized standard deviation of the portfolio.
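The numbers behind the chart are simply annualized standard deviations of the columns of `port_perf`. A sketch of how they can be computed (a toy stand-in matrix is used here, since the real `port_perf` comes from the code above):

```r
# Annualized standard deviation per portfolio (weekly data, so scale by sqrt(52)).
# Toy stand-in for port_perf; in the post it is built from the loop above.
set.seed(3)
port_perf <- matrix(rnorm(520 * 2, sd = 0.01), ncol = 2,
                    dimnames = list(NULL, c("CCC", "DCC")))

ann_sd_full <- apply(port_perf, 2, sd, na.rm = TRUE) * sqrt(52)            # full sample (red)
ann_sd_oos  <- apply(tail(port_perf, 52), 2, sd, na.rm = TRUE) * sqrt(52)  # last 52 weeks (black)
```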

We see that, generally speaking, the most recent 52 weeks are less volatile than the whole sample period. A portfolio based on a simple average of all five covariance estimates achieves the lowest volatility over the full sample, and is second best over the out-of-sample period. Most strikingly, the sample covariance matrix (computed on a rolling window in this case) is good enough. My take is that since we only use nine names, and we have quite a bit of data, the sample covariance estimate (which is also the Gaussian maximum likelihood estimate) is adequate. Thinking about the other models, the DCC is motivated mainly by stylized facts (heteroscedasticity and volatility clustering), and it is nothing more than a functional form; there is no theoretical reason for the CCC or DCC to perform better. For example, as far as I know there is no proof that the DCC actually converges to the real covariance matrix. As for the equicorrelation covariance, I strongly believe in the idea. Clements and friends (2015) find that shrinking is particularly helpful in times of crisis, but if the period is calm there is not much advantage.

Main takeaway: if you have a good ratio of data points to names (that is, of rows to columns), then you can argue that the sample covariance matrix, the most straightforward thing you can do, is adequate.

```r
globalMin.portfolio <- function(er, cov.mat) {
  # Eric Zivot and Tim Hesterberg
  # Compute global minimum variance portfolio
  #
  # inputs:
  # er       N x 1 vector of expected returns
  # cov.mat  N x N return covariance matrix
  #
  # output is a portfolio object with the following elements
  # call     original function call
  # er       portfolio expected return
  # sd       portfolio standard deviation
  # weights  N x 1 vector of portfolio weights
  call <- match.call()

  #
  # check for valid inputs
  #
  asset.names <- names(er)
  er <- as.vector(er)  # assign names if none exist
  cov.mat <- as.matrix(cov.mat)
  if (length(er) != nrow(cov.mat))
    stop("invalid inputs")
  if (any(diag(chol(cov.mat)) <= 0))
    stop("Covariance matrix not positive definite")
  # remark: could use generalized inverse if cov.mat is positive semi-definite

  #
  # compute global minimum portfolio
  #
  cov.mat.inv <- solve(cov.mat)
  one.vec <- rep(1, length(er))
  # w.gmin <- cov.mat.inv %*% one.vec / as.vector(one.vec %*% cov.mat.inv %*% one.vec)
  w.gmin <- rowSums(cov.mat.inv) / sum(cov.mat.inv)
  w.gmin <- as.vector(w.gmin)
  names(w.gmin) <- asset.names
  er.gmin <- crossprod(w.gmin, er)
  sd.gmin <- sqrt(t(w.gmin) %*% cov.mat %*% w.gmin)
  gmin.port <- list("call" = call,
                    "er" = as.vector(er.gmin),
                    "sd" = as.vector(sd.gmin),
                    "weights" = w.gmin)
  class(gmin.port) <- "portfolio"
  gmin.port
}
```

Sir,

there was an error with regard to the term `mean_rho`; I wished to know how it has been defined.

It was replaced by `cross_mean`, so `mean_rho` is not relevant for this post. I fixed that line.