# Omitted Variable Bias

Frequently we see the term ‘control variables’: the researcher introduces dozens of explanatory variables she has no real interest in. This is done in order to avoid the so-called ‘Omitted Variable Bias’.

## What is Omitted Variable Bias?

In general, the OLS estimator has great properties, not the least important being that for a finite number of observations you can faithfully retrieve the marginal effect of X on Y, that is, $E(\widehat{\beta}) = \beta$. This is very much not the case when a variable that should be included in the model is left out. As in my previous posts about multicollinearity and heteroskedasticity, I only try to provide the intuition, since you are probably familiar with the result itself.

For illustration, consider the model

$$y = \beta_1 x_1 + \beta_2 x_2 + \varepsilon, \qquad \beta_1 = \beta_2 = 1,$$

so Y is the sum, i.e. $y = x_1 + x_2 + \varepsilon$. What happens to our estimate of $\beta_1$ when we do not include $x_2$ in the model? Mathematically we get the simple result*:

$$E(\widehat{\beta}_1 \mid x_1, x_2) = \beta_1 + \beta_2 \widehat{\gamma}.$$

The second term on the RHS is bad news. Here $\widehat{\gamma}$ is the estimate of the coefficient from the (hypothetical) regression

$$x_2 = \gamma x_1 + u.$$

In words, the term $\beta_2 \widehat{\gamma}$ represents the bias. It is influenced by:
1. The real, unknown value of $\beta_2$. If the true effect of $x_2$ on Y is small in absolute terms, it pushes the combined term toward zero and the bias is small.
2. How closely $x_2$ is related to $x_1$. This is less trivial: if $x_2$ has nothing to do with $x_1$, and you are lucky enough for the estimate $\widehat{\gamma}$ to show it, the multiplicand goes to zero and the bias is small. You need to be lucky because the estimate (and hence the bias) depends on the actual sample you have; you can be unlucky and get an estimate that is large in absolute terms even when the X's are independent at the population level. This subtlety deserves a clearer treatment in classical textbooks.
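The bias formula above is easy to verify by simulation. Below is a minimal sketch (the sample size, correlation, and coefficient values are my own illustrative choices, not from any particular textbook): when we regress y on $x_1$ alone, the estimates center on $\beta_1 + \beta_2 \rho$ rather than on $\beta_1$.

```python
import numpy as np

# Simulate y = x1 + x2 + eps with corr(x1, x2) = rho, then omit x2.
rng = np.random.default_rng(0)
n, reps = 200, 2000
beta1, beta2, rho = 1.0, 1.0, 0.5

biased = np.empty(reps)
for r in range(reps):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)  # corr(x1, x2) = rho
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    biased[r] = (x1 @ y) / (x1 @ x1)  # OLS slope of y on x1 alone, x2 omitted

# The population gamma from x2 = gamma * x1 + u equals rho here, so the
# estimates should center on beta1 + beta2 * rho = 1.5 rather than on 1.0.
print(biased.mean())
```

The average estimate lands near 1.5, the true coefficient plus the $\beta_2 \widehat{\gamma}$ term, exactly as the formula predicts.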

Now, why is this so? The unaccounted-for influence of $x_2$ on Y pushes through anyway. Mr. $x_2$ tells himself that if he is out, he is going to do what he can from the outside. He talks to $x_1$, and depending on how muscular Mr. $x_2$ is (the real value of $\beta_2$) and on the nature of their relationship (as in 2 above), $x_1$ is going to accommodate his request. If they do not know each other, i.e. the correlation is zero, $x_1$ will ignore this harassment and the bias is unlikely to be strong.

## Illustration of Omitted Variable Bias

For a couple more important insights, I need an illustration. The fact that you have 500 instead of 50 observations does absolutely nothing to mitigate this problem. This is to say that the estimate is not only biased but also inconsistent, which means we cannot easily get around the problem by collecting more data.
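The inconsistency is straightforward to check directly. Here is a sketch under the same hypothetical setup as before (unit coefficients, correlation 0.5, my own choices): the average estimate from the misspecified regression sits at the same biased value whether we use 50 observations or 5,000.

```python
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, rho = 1.0, 1.0, 0.5

def mean_estimate(n, reps=1000):
    """Average OLS slope of y on x1 alone, with x2 wrongly omitted."""
    est = np.empty(reps)
    for r in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
        est[r] = (x1 @ y) / (x1 @ x1)
    return est.mean()

# Both hover around 1.5, not 1.0: more data does not remove the bias.
print(mean_estimate(50), mean_estimate(5000))
```

Increasing n only tightens the estimates around the wrong value.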

Finally, have a look at this absurd situation. I plot the standard deviation of the estimate when there is a bias and when there isn't. In black: the standard deviation when the model is correctly specified; the parabolic shape is due to multicollinearity. In red: the (estimated) standard deviation from the incorrectly specified model.
It is absurd. When the bias is small, around zero, we find it harder to estimate the parameter (the standard deviation of the estimate is relatively high). On the other hand, when the bias is strong, the standard deviation is lower. So the more severely misspecified the model, the more "accurate" the estimate looks. Talk about dangerous inference.
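One way to reproduce the two ends of that plot numerically (a sketch under my own hypothetical parameter choices, not the original figure): compare the spread of the correctly specified and misspecified estimators at low and at high correlation between the regressors.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 2000
beta1, beta2 = 1.0, 1.0

def sds(rho):
    """Std dev of the beta_1 estimate: full model vs x2 omitted."""
    full, omit = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
        X = np.column_stack([x1, x2])
        full[r] = np.linalg.lstsq(X, y, rcond=None)[0][0]  # correct model
        omit[r] = (x1 @ y) / (x1 @ x1)                     # x2 omitted
    return full.std(), omit.std()

# Near-zero correlation (small bias): the misspecified estimate is noisier.
# Strong correlation (large bias): the misspecified estimate looks tighter.
for rho in (0.0, 0.9):
    print(rho, sds(rho))
```

At $\rho = 0$ the omitted-variable estimator is the more variable one, while at $\rho = 0.9$ its standard deviation drops below that of the correct model, whose own variance is inflated by multicollinearity.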

* Derivation on page 149 of this book.