# Understanding Multicollinearity

Roughly speaking, multicollinearity occurs when two or more regressors are highly correlated. As with heteroskedasticity, students often know what it means, how to detect it, and how to cope with it, but not why it is so. From Wikipedia: “In this situation (multicollinearity) the coefficient estimates may change erratically in response to small changes in the model or the data.” The Wikipedia entry goes on to discuss detection, implications and remedies. Here I try to provide the intuition.

You can think about it as an identification issue. If $x_2 \approx a x_1$ for some constant $a$, and you estimate

$$y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \varepsilon_t,$$

how can you tell whether $y$ is moving due to $x_1$ or due to $x_2$? If you have perfect multicollinearity, meaning $x_2$ is an exact linear function of $x_1$, your software will just refuse to even try.
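As a quick illustration of the perfect case (a Python sketch of my own, not code from the post): when one regressor is an exact multiple of another, the design matrix loses a rank and the normal equations have no unique solution.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 2 * x1                              # x2 is an exact linear function of x1
y = x1 + x2 + rng.normal(size=100)

# Design matrix with intercept: its third column is 2x the second,
# so the rank is 2 rather than 3, and X'X is singular.
X = np.column_stack([np.ones(100), x1, x2])
print(np.linalg.matrix_rank(X))          # 2, not 3

# Solving the normal equations directly would fail or return nonsense:
# np.linalg.solve(X.T @ X, X.T @ y)
```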

For illustration, consider the model

$$y_t = \beta_1 x_{1,t} + \beta_2 x_{2,t} + \varepsilon_t,$$

where $x_1$ and $x_2$ are highly correlated, so it is hard to get the correct $\beta$'s, in this case $\beta_1 = \beta_2 = 1$.

The following function generates data from this model, using a “cc” parameter which determines how correlated the two x’s are.
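A minimal Python version of such a data-generating function might look like this (the function name `generate_data` is mine; only the `cc` parameter comes from the post):

```python
import numpy as np

def generate_data(n, cc, rng):
    """Simulate y = x1 + x2 + eps, with corr(x1, x2) close to cc."""
    x1 = rng.normal(size=n)
    # Mixing x1 with independent noise gives a population correlation of cc:
    x2 = cc * x1 + np.sqrt(1 - cc**2) * rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)     # true coefficients are 1 and 1
    return y, x1, x2
```

With `cc = 1` the two regressors are identical and the regression is unestimable; values just below 1 produce the near-collinear designs studied here.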

Now we generate many such datasets, looping over a sequence of values which controls the correlation:
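A self-contained Python sketch of such a simulation loop (the grid of correlation values, the sample size `n = 100`, and the helper `ols_betas` are my own illustrative choices; the 50 iterations per setting follow the post):

```python
import numpy as np

def ols_betas(y, x1, x2):
    """OLS coefficients from a regression of y on a constant, x1 and x2."""
    X = np.column_stack([np.ones(len(y)), x1, x2])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
n, reps = 100, 50                        # 50 iterations per correlation setting
cc_grid = np.arange(0.0, 0.99, 0.05)     # sequence which controls the correlation

sd_beta1 = []                            # spread of the beta_1 estimates per setting
for cc in cc_grid:
    b1 = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = cc * x1 + np.sqrt(1 - cc**2) * rng.normal(size=n)
        y = x1 + x2 + rng.normal(size=n)   # true beta_1 = beta_2 = 1
        b1.append(ols_betas(y, x1, x2)[1])
    sd_beta1.append(np.std(b1))
```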

To produce the following figure:

It is also known that if you do not care about inference, but are only interested in point forecasts, you need not concern yourself with multicollinearity. This is shown via the solid line. Despite the difficulty in estimating the individual effects, the overall effect, in this case their sum, is correct: still 2, as it should be.
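This point can be checked numerically (a Python sketch of my own, not code from the post): under severe collinearity the individual OLS estimates wander, but their sum stays pinned near 2, so the fitted values, and hence point forecasts, are hardly affected.

```python
import numpy as np

rng = np.random.default_rng(3)
n, cc = 100, 0.98                        # severe, but not perfect, collinearity
x1 = rng.normal(size=n)
x2 = cc * x1 + np.sqrt(1 - cc**2) * rng.normal(size=n)
y = x1 + x2 + rng.normal(size=n)         # true coefficients: 1 and 1

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# The individual estimates b[1] and b[2] are poorly pinned down,
# but their sum is estimated precisely, close to 2:
print(b[1], b[2], b[1] + b[2])
```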

How severe the problem is depends, of course, on the degree of collinearity: the higher the degree, the more problematic the situation. And the increase is not linear:
You can see that while the correlation is around 0.8, inference is limited but perhaps manageable. As we move up in correlation, the standard deviation of the estimate gets ridiculously high, in the sense that the true coefficient is 1, but it can quite easily be estimated as 0 or 2. This is shown by the increased standard deviation of the estimate, especially the sharp increase once correlation rises above 0.95.

Finally, and a bit more delicate: what is plotted in figure 2 is not the actual standard deviation but an estimate of the real, unobserved standard deviation parameter. Since we already simulated 50 iterations per setting, we can have a look at the simulation variance of this estimate. In the code above it is specified as “stdstdestimate”, which is the standard deviation of the many standard deviation estimates we get for each correlation value. Note the Y axis: what is plotted now is the (simulation) standard deviation of the estimate for the standard deviation of the $\widehat{\beta}$’s. It is a higher-order examination (variance of variance). What it shows is that you know that you know less (higher standard deviation for $\widehat{\beta}$), but also that you are less sure how much less you know (higher standard deviation of your estimate for the standard deviation of $\widehat{\beta}$). Fattening food for thought.
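A “stdstdestimate”-style quantity can be reproduced along these lines (a Python sketch with my own function and variable names; the post’s original code differs):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 100, 50

def se_beta1_hat(cc):
    """Classical OLS standard-error estimate of beta_1 for one simulated sample."""
    x1 = rng.normal(size=n)
    x2 = cc * x1 + np.sqrt(1 - cc**2) * rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (n - X.shape[1])     # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)         # estimated covariance of b
    return np.sqrt(cov[1, 1])

results = {}
for cc in (0.5, 0.95):
    ses = [se_beta1_hat(cc) for _ in range(reps)]
    # Mean of the SE estimates, and their simulation spread --
    # the latter is the variance-of-variance quantity:
    results[cc] = (np.mean(ses), np.std(ses))
```

Both numbers rise with the correlation: the standard error itself is larger, and so is your uncertainty about that standard error.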
