Create Your Own Recession Indicator Using Mixture Models

Context

Broadly speaking, we can classify financial market conditions into two categories: bull and bear. The first is an "all is well" market, tranquil and generally upward sloping. The second describes a market in a downtrend, usually more volatile. It is thought that the bull/bear terms originate from the way those animals supposedly attack: a bull thrusts its horns up while a bear swipes its paws down. At any given moment we can only guess which state we are in; there is no way of telling really, simply because those two states don't have uniformly exact definitions. So we never actually observe the regime membership of an observation. In this post we are going to use (finite) mixture models to try and assign daily equity returns to their bull/bear subgroups. It is essentially an unsupervised clustering exercise. We will create our own recession indicator to help us quantify whether the equity market is contracting or not. We use minimal inputs, nothing but equity return data. We start with a short description of finite mixture models and then move on to a hands-on practical example.

Mixture Models

Easy. Rather than assuming each observation comes from a single well-defined or familiar distribution such as the Gaussian, we assume the observation comes from a mixture of a few distributions. Sometimes the term components is used for the individual distributions being mixed, so as to reserve the term distribution for the overall distribution. Omitting the dependence on the individual parameters, we can formally express a mixture of two distributions as:

    \[g(x_i) = \sum_{j=1}^2 \big(   \lambda_j f_j(x_i)  \big),\]

where g() is the overall distribution, f_1() is for example a normal distribution with some mean and variance, and f_2() is again a normal distribution but with a different mean and a different variance. \lambda_1 = (1 - \lambda_2), so the weights sum to one. Hence \lambda_j can be interpreted as the probability of an observation coming from component j. Theoretically speaking, with enough f components, g() can be successfully approximated no matter how complicated or flexible it is in reality*. This is part of the reason mixture models are found in so many areas of application. Campbell Harvey and Yan Liu use them in their very nice paper Rethinking Performance Evaluation to help better understand the differences between money managers, e.g. which money managers are drawn from an f with positive alpha.
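As a toy illustration (my own sketch, not from the original post), here is how to simulate from such a two-component Gaussian mixture in R; the weights, means and standard deviations used are the values we will estimate later:

# Simulate from a two-component Gaussian mixture:
# with probability 0.75 draw from the calm component,
# with probability 0.25 from the volatile one.
set.seed(1)
n <- 1000
component <- sample(1:2, n, replace = TRUE, prob = c(0.75, 0.25))
x <- ifelse(component == 1,
            rnorm(n, mean = 0.09, sd = 0.66),   # calm component
            rnorm(n, mean = -0.13, sd = 2.0))   # volatile component
hist(x, breaks = 50, main = "Draws from a two-component Gaussian mixture")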

Mixture Models in R

You will be surprised to see how easy it is:
1. Pull some data on the SPY ETF and convert to daily returns.

2. Use the open-source R package mixtools to estimate g and the f's. In the code below, k is the number of components and lambda is an initial value for the mixing proportions.
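A minimal sketch of both steps, assuming the quantmod package for the data pull and mixtools for the estimation; the start date and starting values are illustrative choices, not necessarily the original ones:

library(quantmod)   # getSymbols, dailyReturn
library(mixtools)   # normalmixEM

# Step 1: pull SPY prices and convert to daily returns (in percent)
getSymbols("SPY", from = "1995-01-01")
ret <- 100 * as.numeric(dailyReturn(Ad(SPY)))

# Step 2: fit a two-component Gaussian mixture with the EM algorithm
set.seed(1)
mix_mod <- normalmixEM(ret, k = 2, lambda = c(0.5, 0.5))
mix_mod$lambda   # estimated mixing proportions
mix_mod$mu       # component means
mix_mod$sigma    # component standard deviations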

The estimation is done using the Expectation-Maximization (EM) algorithm. The computer says we have two distributions: one more stable, with lower volatility (~0.66) and a positive mean (~0.087), and another with higher volatility (~2.0) and a negative mean (~ -0.13). Also, lambda eventually settles such that about 75% of the time we are in the stable environment, while 25% of observations belong to the more volatile regime. So with this limited information set we got something quite reasonable. For each observation we now have the posterior probability of it coming from the first or the second component. To actually decide which observation belongs to which regime, we can round that probability: if an observation is more likely to come from the volatile regime, that is how it gets classified. Coding-wise this means rounding the posterior:
regime <- apply(mix_mod$posterior, 2, round)
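If you prefer a single label per day instead of a matrix of rounded probabilities, a small convenience sketch (my own code, assuming the second column is the volatile component; check mix_mod$mu to confirm the ordering):

post <- mix_mod$posterior        # n x 2 matrix of posterior probabilities
regime_label <- max.col(post)    # 1 = stable component, 2 = volatile component
table(regime_label) / length(regime_label)   # empirical share of each regime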

This is how the two regimes look once the observations are classified:

[Figure: daily SPY returns over time (%), observations classified by regime]

[Figure: density estimates of the two regimes]

So based on the return data alone, the numerical algorithm created those two regimes, which are quite intuitive. Armed with this knowledge, we can now create our own recession indicator.

Create Your Own Recession Indicator

One way to create a recession indicator is to count the number of observations classified into the bear regime within some moving window. The volatility-clustering stylized fact (calm days and turbulent days tend to bunch together) makes this a sensible idea. We use a 120-day moving window and standardize the result to put all of history on the same footing, as sketched below.
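A minimal sketch of the rolling indicator, assuming the zoo package; the 120-day window follows the text, the rest are my own illustrative choices:

library(zoo)   # rollsumr

is_bear <- as.numeric(regime_label == 2)       # 1 if the day is classified as bear
bear_count <- rollsumr(is_bear, k = 120)       # trailing 120-day count of bear days
recession_ind <- as.numeric(scale(bear_count)) # standardize: mean 0, sd 1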

[Figure: recession indicator over time]
We are almost there. It is better to have a probability of a recession on the left-hand side. We can easily do that using a sigmoid mapping:
recession_prob <- recession_ind %>% sigmoid
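For completeness, a self-contained version of that line (my own sketch: sigmoid here is the standard logistic function, and the %>% pipe comes from magrittr):

library(magrittr)                            # %>% pipe
sigmoid <- function(x) 1 / (1 + exp(-x))     # standard logistic function
recession_prob <- recession_ind %>% sigmoid  # map the indicator into (0, 1)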
Resulting in:
[Figure: recession probability over time]
In my opinion the figure above reflects a much more realistic situation: it shows how difficult it is for a money manager to evaluate which regime we are in. Compare our recession indicator with other, more traditional recession indicators, for example the one below from the Fed, which looks ridiculously smooth**:

[Figure: the Fed's smoothed recession probabilities]

References, code and some footnotes

Benaglia, Tatiana, Didier Chauveau, David R. Hunter, and Derek S. Young. "mixtools: An R Package for Analyzing Finite Mixture Models." Journal of Statistical Software 32(6) (2009): 1-29.

* Although the paper On approximations via convolution-defined mixture models states that this is more or less a folk theorem.
** It should be smoother, since it concerns the economy rather than the equity market, but still.
