Statistical Shrinkage

Shrinkage in statistics has grown in popularity over the decades. Today statistical shrinkage is commonplace, whether applied explicitly or implicitly.

But when do we actually need shrinkage? The answer depends, at least in part, on the signal-to-noise ratio.

Introduction

The term shrinkage is, I think, the most underappreciated umbrella term in statistics, because it often hides under different names. Take the widely used RiskMetrics estimator for volatility:

    \[\sigma^2_{t+1} = \lambda  \sigma^2_{t} +  (1-\lambda)  r_{t}^2.\]

What do we see here? If E(r_t) = 0 (assuming, for example, that the expected return for the day is zero), the r_{t}^2 term is simply an estimate of the volatility based only on today’s observation. With \lambda = 0.94 (the standard RiskMetrics choice for daily data) we exert a strong pull on that single-observation estimate, pushing it towards the estimate based on the full sample so far: \sigma^2_{t}. This heavy pull (0.94-heavy) is shrinkage towards a “grand total”. You are now one step closer to recognizing shrinkage wherever you look: Bayesian methods and forecast ensembles, to name only a couple of topics, are all shrinkage.
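
To make the recursion concrete, here is a minimal sketch of the RiskMetrics update in R. The simulated return series, its length and the initialisation of the variance are illustrative assumptions, not part of the original post.

    # Minimal sketch of the RiskMetrics (EWMA) recursion shown above.
    # The simulated returns and the initial variance are illustrative assumptions.
    set.seed(1)
    r <- rnorm(250, mean = 0, sd = 0.01)   # stand-in daily returns
    lambda <- 0.94                         # RiskMetrics decay factor for daily data
    sigma2 <- numeric(length(r))
    sigma2[1] <- var(r)                    # initialise with the sample variance
    for (t in 1:(length(r) - 1)) {
      # pull today's squared return towards the running estimate
      sigma2[t + 1] <- lambda * sigma2[t] + (1 - lambda) * r[t]^2
    }
    head(sqrt(sigma2))                     # the resulting volatility estimates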

One shrink and one no-shrink

On Thursday, November 24th 2016, one of the strongest statistical minds, Charles Stein, passed away. Stein shocked the statistical world when, together with Willard James, he showed in 1961 that the Maximum Likelihood Estimation (MLE) method which we love so much can be bested using but a pinch of shrinkage. The amazement was complete when it was shown that the dominance is uniform (as opposed to case-specific, or holding only over a particular region of the parameter space).

Thinking in regression terms, I admit the intuition is not obvious. “Let me tell you what you should do: use an excellent unconstrained optimizer” (OLS); “once you are done, you had better bias it.” You would think that if biasing improved the results, the optimizer would deliver that better version directly. The reason this is not the case is quite nuanced, and is related to the Optimism of the Training Error Rate.

Let’s skip the formulae for the James-Stein (JS) shrinkage estimator and go directly to the code. In a simulated regression setting we can compute the MLE (which is also the OLS) and then see how much shrinkage the JS estimator tells us to apply. We simulate 100 observations with 10 explanatory variables, each of which has a true coefficient of one.
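
The original code chunk is not reproduced on this page, so below is a minimal sketch of the setting just described. The names sdd (noise standard deviation), bet (true coefficient) and B (shrinkage factor) mirror the arguments referred to later in the post, but the exact implementation, including the particular James-Stein-type formula for B, is an assumption; the 0.78 factor quoted next comes from the original run, and this sketch, with its own seed, will produce a different number.

    # Sketch of the simulated regression setting (assumed implementation).
    set.seed(2017)
    n   <- 100      # observations
    p   <- 10       # explanatory variables
    bet <- 1        # true coefficient for every variable
    sdd <- 10       # noise standard deviation; large relative to the signal

    x <- matrix(rnorm(n * p), nrow = n, ncol = p)
    y <- drop(x %*% rep(bet, p)) + rnorm(n, sd = sdd)

    ols  <- lm(y ~ x - 1)                    # the MLE, which is also the OLS
    bhat <- coef(ols)
    sig2 <- sum(residuals(ols)^2) / (n - p)  # estimated error variance

    # One common James-Stein-type shrinkage factor towards zero
    # (other variants of the constant exist):
    B <- 1 - (p - 2) * sig2 / as.numeric(t(bhat) %*% crossprod(x) %*% bhat)
    B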

Ok, we need to take our MLE estimate for beta and shrink it by 0.78:
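
Continuing the sketch above, applying the factor is a single line:

    # Shrink the MLE/OLS coefficients towards zero by the factor B
    bhat_js <- B * bhat
    round(cbind(MLE = bhat, JS = bhat_js), 3)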

You can see that the coefficients are now pulled towards zero, which is why it is called shrinkage.

Of course, the in-sample RMSE is lower when using the MLE:
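
A sketch of that comparison; the small rmse helper is mine, not from the original post:

    # In-sample RMSE: the unshrunk MLE always wins on the training data
    rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))
    rmse(y, x %*% bhat)      # MLE / OLS
    rmse(y, x %*% bhat_js)   # James-Stein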

But going forward? Let’s simulate new data from the same setting 500 times (say, 500 different possible futures) and apply the estimates we obtained above:
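
One way to run that exercise, continuing the assumed code from above:

    # Draw 500 fresh datasets from the same data-generating process and
    # compare the out-of-sample RMSE of the two sets of coefficients
    nsim <- 500
    rmse_mle <- rmse_js <- numeric(nsim)
    for (i in 1:nsim) {
      x_new <- matrix(rnorm(n * p), nrow = n, ncol = p)
      y_new <- drop(x_new %*% rep(bet, p)) + rnorm(n, sd = sdd)
      rmse_mle[i] <- rmse(y_new, x_new %*% bhat)
      rmse_js[i]  <- rmse(y_new, x_new %*% bhat_js)
    }
    c(MLE = mean(rmse_mle), JS = mean(rmse_js))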

Plotting the results, we see that the James-Stein estimator is more accurate than the MLE:
[Figure: out-of-sample RMSE of the MLE versus the James-Stein estimator (Ridge_RMSE_2)]
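
The original figure is not reproduced here; a minimal sketch of such a comparison plot, continuing the code above:

    # A simple way to visualise the 500 out-of-sample RMSEs
    boxplot(cbind(MLE = rmse_mle, `James-Stein` = rmse_js),
            ylab = "Out-of-sample RMSE")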

We don’t always need shrinkage. You may have noticed the sdd argument in the first code chunk above; we set it to 10, while bet was set to 1. This means the signal is quite weak compared with the noise, which is exactly when we would like to shrink. Changing sdd to 1 and executing the same code again delivers the following vanilla figure:
[Figure: out-of-sample RMSE of the MLE versus the James-Stein estimator with sdd = 1 (Ridge RMSE)]

If the signal-to-noise ratio is reasonable, we don’t need to call shrinkage techniques to our aid.

What about the theory underpinning statistical shrinkage? Thus far, despite great efforts, there is no proper analogue of the optimality theory found in the unbiased-estimation literature. There is no theory to help us choose the correct amount of shrinkage to apply. We know shrinkage can be advantageous, but that is about it. Therefore, data-driven methods such as cross-validation are now standard, despite computational difficulties and the lack of theoretical assurance (after all, a theoretical assurance is just that).

Very recently, from econometrics rather than statistics, a paper titled “Efficient shrinkage in parametric models” by Prof. Bruce Hansen echoes the 1961 James-Stein paper in that “if the shrinkage dimension exceeds two, the asymptotic risk of the shrinkage estimator is strictly less than that of the maximum likelihood estimator (MLE).” The “parametric models” in the title means the paper is concerned with shrinkage applied to estimates obtained using MLE. The paper contains a derivation which seems to me analogous to the term B in our code above, but in a more general setting (equations 10-13 in the paper). A must-read for those interested in shrinkage estimation (free download below).

After you’ve read this, you may wonder how the MLE method survives such brutal attacks. Why keep using MLE if there are more accurate estimators out there? There are good reasons, but we will discuss them in another post.

References

Hansen, B. E., “Efficient shrinkage in parametric models”
RiskMetrics Technical Document
Lehmann, E. L. and Casella, G., Theory of Point Estimation (Chapter 5 in particular)
