When you google “Kurtosis”, you encounter many formulas to help you calculate it, talk of how this measure is used to evaluate the “peakedness” of your data, maybe some other related measures, perhaps a sudden side step toward Skewness, and how both Skewness and Kurtosis are *higher moments* of the distribution. This is all very true, but maybe you just want to understand what Kurtosis means and how to interpret it, similarly to the way you interpret the standard deviation (the average distance from the average). Here I take a shot at a more intuitive interpretation.
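One way to build that intuition is to simply compute the measure on simulated data and see how it reacts to heavier tails. A minimal sketch (in Python, for illustration only; the data is simulated and hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(x):
    """Standardized fourth moment; for the normal distribution this is about 3."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean()

normal = rng.normal(size=100_000)
laplace = rng.laplace(size=100_000)  # same bell shape, but heavier tails

print(kurtosis(normal))   # close to 3
print(kurtosis(laplace))  # around 6: extreme observations are far more common
```

Because the deviations are raised to the fourth power, a handful of far-out observations dominate the sum, which is why kurtosis is best read as a measure of tail weight rather than only “peakedness”.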

## Marriage is good for your income

For those of you who are into machine learning, here you can find a cool collection of databases to play around with using your favorite algorithm. I chose one out of the available 200 and fit a logistic regression model. The idea is to see what kind of properties are common among those who earn above 50K a year. Our data is such that the “y” variable is binary: a value of 1 if the individual earns above 50K and 0 if below. We know many things about each individual: years of education, age, marital status, place of origin, employment sector, weekly working hours, race, and more. We fit a logistic regression, which is quite standard for a binary dependent variable, and see which variables are important.

## Backtesting trading strategies with R

Few weeks back I gave a talk about Backtesting trading strategies with R, got a few requests for the slides so here they are:

## Most profitable hedge fund style

This is not an investment advice!!

Couple of weeks back, during amst-R-dam user group talk on backtesting trading strategies using R, I mentioned the most effective style for hedge funds is relative value statistical arbitrage, I read it somewhere. After the talk was over, I was not sure anymore if it was correct to say it and decided to check it.

## Bootstrap example

Bootstrap your way into robust inference. Wow, that was fun to write..

**Introduction**

Say you made a simple regression, now you have your . You wish to know if it is significantly different from (say) zero. In general, people look at the statistic or p.value reported by their software of choice, (heRe). Thing is, this p.value calculation relies on the distribution of your dependent variable. Your software assumes normal distribution if not told differently, how so? for example, the (95%) confidence interval is , the 1.96 comes from the normal distribution.

It is advisable not to do that, the beauty in bootstrapping* is that it is distribution untroubled, it’s valid for dependent which is Gaussian, Cauchy, or whatever. You can *defend* yourself against misspecification, and\or use the tool for inference when the underlying distribution is unknown.

## Europe most dangerous cities

When I was searching for data about U.S prison population, for another post, I ran across eurostat, a nice source for data to play around with. I pooled some numbers, specifically homicides recorded by the police. A panel data for 36 cities over time, from 2000 to 2009. Lets see which are the cities that have problems in this area.

## U.S. prison population

I recently finished reading: The author writes that 6.6% of U.S. American residents will find themselves at some point in their life incarcerated, about 20 million people. A big number on anyone’s scale. You can also find disturbing figures in Wikipedia: figure. Are these facts misleading? we need to account for population growth. The prison population should naturally rise even if the proportion of crime in the general population is constant, since the population itself is growing. Here I show that these facts are NOT misleading, and that the system is indeed not fulfilling its purpose.

## Spurious Regression illustrated

Spurious Regression problem dates back to Yule (1926): “Why Do We Sometimes Get Nonsense Correlations between Time-series?”. Lets see what is the problem, and how can we fix it. I am using Morgan Stanley (MS) symbol for illustration, pre-crisis time span. Take a look at the following figure, generated from the regression of MS on the S&P, *actual prices* of the stock, *actual prices* of the S&P, when we use actual prices we term it regression in levels, as in price levels, as oppose to log transformed or returns.

## Live Rolling Correlation Plot

Open source is amazing! I cannot even start to imagine the amount of work invested in R, in firefox browser (Mozilla), or Rstudio IDE, all of which are used extensively around the globe, **free**. Not *free* as in: *free sample till you decide to upgrade*, or: *sure it’s free, just watch this one minute commercial every time you need to use it*, but free, as in: *we think it might make your life better, enjoy*. Warms the heart, in direct opposite to the *fabulous fabs* out there, that instead of contributing to a better, safer society, set it back and get paid for it (see appendix). Character is also normally distributed I guess.

## piecewise regression

A *beta* of a stock generally means its relation with the market, how many percent move we should expect from the stock when the market moves one percent.

Market, being a somewhat vague notion is approximated here, as usual, using the S&P 500. This aforementioned relation (henceforth, *beta*) is detrimental to many aspects of trading and risk management. It is already well established that volatility has different dynamics for rising markets and for declining market. Recently, I read few papers that suggest the same holds true for *beta*, specifically that the *beta* is not the same for rising markets and for declining markets. We anyway use regression for estimation of *beta*, so piecewise linear regression can fit right in for an investor/speculator who wishes to accommodate himself with this asymmetry.

## Resistant Regression

It is a fact that on most days, not much is going on in the stock market. When we estimate the relation of a stock with the market, or the “beta” of a stock, we use all available daily returns. This might not be wise as some days are not really typical and contaminate our estimate. For example, Steve Jobs past away recently, AAPL moved quite a bit as a result. However, this is a distinct event that does not reflect on the relation with the market, but is company specific. Our aim is to exclude such observations, taking into consideration that we don’t want to lose too much information, not all large swings are irrelevant.

## Price is right, part two – Trading strategy.

Having stock market in mind, in the previous post: “Price is right, part one.”, I stated that we should not think in terms of “the price went up/down too much” but that “the current price level is wrong since…. and the market is not getting it because…”, bearing in mind that Mr. Market is not a weak player to say the least.

In this post I back this claim with the examination of a trading strategy that ignores economical arguments, thus is only based on relative price moves. Say you believe my previous post is horseshit, wouldn’t it be nice to short the market if it’s “too high” and to long it when it “went down too much”? Fine!, let’s have a look at the performance of such a strategy.

## Price is right, part one.

Efficient Market Hypothesis states that the price you see is the price you *should* see. The price that exactly reflects the expectation of engaging a lottery.

## Pairs Trading Issues

A few words for those of you who are not familiar with the “pairs trading” concept. First you should understand that the movement of every stock is dominated not by the companies performance but by the general market movement. This is the origin of many “factor models”, the factor that drives the every stock is the *market factor*, which is approximated by the S&P index in most cases.

## Do they really know what they are doing?

I am talking here about money managers. for those of us who have one. We assume they understand about markets in such a way that they can, and will generate at least the benchmark returns, what ever this benchmark may be.