Hyper-Parameter Optimization using Random Search

Hyper-parameters are parameters which are not estimated as an integral part of the model. We decide on their values ourselves; they are set not within the estimation, but beforehand. Hence the name hyper-parameters, as in “above” the model.

Almost all machine learning algorithms have some hyper-parameters. A data-driven choice of hyper-parameters typically means that you re-estimate the model and check its performance under different hyper-parameter configurations. This adds considerable computational burden. One popular approach is a grid-search over possible values, scored on a validation set. Faster and simpler ways to intelligently choose hyper-parameter values would go a long way in keeping the already-stretched computational cost at a level you can tolerate.

Enter the paper “Random Search for Hyper-Parameter Optimization” by James Bergstra and Yoshua Bengio, which suggests, with a straight face, not to use grid-search but instead to look for good values completely at random. This is very counterintuitive: how can random guesses within some region compete with systematically covering that same region? What’s the story there?

Below I share the message of that paper, along with what I personally believe is actually going on (and the two are very different).
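Before diving in, a toy sketch of the flavour (my own code, not the paper’s): suppose only one of two hyper-parameters really matters. With the same budget of 25 model evaluations, a 5 x 5 grid tries only 5 distinct values along the important dimension, while random search tries 25:

```r
set.seed(2021)
# a stand-in for out-of-sample loss over two hyper-parameters;
# by construction, only the first one matters much
loss <- function(a, b) (a - 0.3)^2 + 0.01 * sin(10 * b)

budget <- 25
# grid search: a 5 x 5 grid, so only 5 distinct values per dimension
grid <- expand.grid(a = seq(0, 1, length.out = 5),
                    b = seq(0, 1, length.out = 5))
grid_best <- min(mapply(loss, grid$a, grid$b))
# random search: 25 uniform draws, so 25 distinct values per dimension
rand_a <- runif(budget); rand_b <- runif(budget)
rand_best <- min(mapply(loss, rand_a, rand_b))
c(grid = grid_best, random = rand_best)
```

On most runs the random search lands closer to the optimum along the dimension that matters, which previews the paper’s argument.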

Continue reading

What is the Kernel Trick?

Every so often I read about the kernel trick. Each time I read about it I need to relearn what it is. Now I am thinking “Eran, don’t you have this fancy blog of yours where you write about statistics you don’t want to forget?” and then: “why, indeed I do have a fancy blog where I write about statistics I don’t want to forget”. So in this post I explain the “trick” in kernel trick and why it is useful.
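As a teaser, a tiny demonstration (my own, ahead of the full post): the polynomial kernel k(x, y) = (x′y)² gives the inner product in a quadratic feature space without ever computing the feature map.

```r
# the homogeneous polynomial kernel of degree 2 equals the inner product
# of explicit quadratic feature maps phi(v) = (v1^2, sqrt(2) v1 v2, v2^2)
x <- c(1, 2); y <- c(3, 4)
phi <- function(v) c(v[1]^2, sqrt(2) * v[1] * v[2], v[2]^2)
sum(phi(x) * phi(y))   # explicit feature space: 121
(sum(x * y))^2         # kernel trick, no feature map needed: 121
```

Both lines print 121; the second never touches the feature space.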

Continue reading

Local Linear Forests

Random forests is one of the most powerful pure-prediction algorithms, immensely popular with modern statisticians. Despite its potent performance, improvements to the basic random forests algorithm are still possible. One such improvement is put forward in a recent paper called Local Linear Forests, which I review in this post. To enjoy the read you need to be already familiar with the basic version of random forests.

Continue reading

Most popular posts – 2021

Kind of sad, but the same intro which served last year befits this year also.

Littered with Corona, this year was not easy. But looking around me, I feel grateful. The following quote by Socrates comes to mind:

“If all our misfortunes were laid in one common heap whence everyone must take an equal portion, most people would be content to take their own and depart.”

On topic: as with previous years, I checked my website traffic analytics. Without further ado, here are the three most popular posts of 2021.

Continue reading

Publication in Significance – code

A couple of months ago I published a paper in Significance – a couple of pages describing the essence of deep learning algorithms, and why they are so popular. I got a few requests for the code which generated the figures in that paper. This weekend I reviewed my code and was content to see that I used pseudorandom numbers with a seed (as opposed to completely random numbers, without a seed). So the figures are exactly reproducible. The actual code to produce the figures, and the figures themselves (e.g. for teaching purposes), are provided below.
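The reproducibility point in one minimal sketch: seeded pseudorandom draws are identical across runs, unseeded draws are not.

```r
set.seed(42); rnorm(3)  # the same three numbers on every run
set.seed(42); rnorm(3)  # identical to the line above
rnorm(3)                # unseeded: different on every run
```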

Continue reading

A New Parameterization of Correlation Matrices

In volatility modelling, a typical challenge is to keep the covariance matrix estimate valid, meaning (1) symmetric and (2) positive semi-definite*. A new paper published in Econometrica (citing from the paper) “introduces a novel parametrization of the correlation matrix. The reparametrization facilitates modeling of correlation and covariance matrices by an unrestricted vector, where positive definiteness is an innate property” (emphasis mine). Econometrica is known to publish ground-breaking research, and you may wonder: what is the big deal in being able to reparametrise the correlation matrix?
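To give a flavour, here is a sketch of the forward map as I understand it from the paper (my own code, not the authors’): the unrestricted vector is the off-diagonal of the matrix logarithm of the correlation matrix. Going back requires finding the unique diagonal such that the matrix exponential has ones on its diagonal, which is what the paper establishes.

```r
# matrix logarithm of a correlation matrix via the eigendecomposition
C <- matrix(c(1.0, 0.5, 0.2,
              0.5, 1.0, 0.3,
              0.2, 0.3, 1.0), nrow = 3)
eig <- eigen(C, symmetric = TRUE)
logC <- eig$vectors %*% diag(log(eig$values)) %*% t(eig$vectors)
gamma <- logC[lower.tri(logC)]  # the unrestricted parameter vector
gamma
```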

Continue reading

What’s the big idea? Deep learning algorithms

Deep learning algorithms are increasingly featuring in popular news outlets, large-scale media events and academic conferences. But what makes them so popular? Why now?

I recently published what I hope is an easy read for all of you modern-statistics lovers, explaining the thrust behind this class of machine-learning models.

You can download the two-pager from Significance, specifically here (subscription required).

Continue reading

Asking Good Questions

Recently, I was lucky enough to speak at the 7th International Conference on Time Series and Forecasting (ITISE). The conference itself had an excellent collection of talks, with applications in completely different fields: energy, neuroscience and, how can we not, a great deal of COVID19-related forecasting papers. It was a mix of online and in-person presentations, and with a slew of technical hiccups consuming a lot of valuable minutes, time was of the essence. Very few minutes, if any, for questions. I attended my first conference well over a decade ago, and my strong feeling is that things have not changed much since. There is simply not enough training when it comes to how slides should (and should not) look, how to deliver a 20-minute talk about a paper which took a year to draft, and indeed, which questions are good and which are just expensive folly.

Continue reading

R tips and tricks – shell.exec

When you start up your machine, the first thing you do is open the various programs you work with. Examples: your note-taking program, the pdf file you need to read, the ppt file you were last working on, and of course your strongest link with the outside world nowadays: your email box. This post shows how to automate this process. Windows machines notoriously need restarting for every little (un)install, so I trust you will find this startup automation advice handy.
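The gist, as a minimal sketch (the file paths below are hypothetical placeholders; the full post has the details): base R’s shell.exec opens each file or URL with its associated Windows program, so a short script can open your whole working set in one go.

```r
# hypothetical paths; replace with your own working set
to_open <- c(
  "C:/notes/todo.txt",            # note-taking
  "C:/papers/to_read.pdf",        # the pdf to read
  "C:/slides/current_talk.pptx",  # the ppt last worked on
  "https://mail.google.com"       # the email box
)
# open each item with its associated program (Windows only)
invisible(lapply(to_open, shell.exec))
```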

Continue reading

Bayesian vs. Frequentist in Practice, part 3

This post is inspired by Leo Breiman’s opinion piece “No Bayesians in foxholes”. The saying “there are no atheists in foxholes” refers to the fact that when you are in the foxhole (being bombarded..), you pray! Leo’s paraphrase indicates that when complex, real problems are present, there are no Bayesians to be found.

Continue reading

Random forest importance measures are NOT important

Random Forests (RF from here onwards) is a widely used pure-prediction algorithm. This post assumes good familiarity with RF; if you are not familiar with the algorithm, stop here and see the first reference below for an easy tutorial. If you have used RF before, then you have probably encountered those “importance of the variables” plots. We start with a brief explanation of those plots and of how the importance scores are calculated. The main takeaway from the post: don’t use those importance plots, because they are simply misleading. They are a wrong turn taken by our human tendency to look for reason, whether it is there or not.
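To see the plots in question (a toy of mine, not the post’s code), fit an RF with the randomForest package on data where two predictors are pure noise; the noise variables still earn themselves positive importance scores.

```r
library(randomForest)
set.seed(1)
n <- 500
# one real predictor and two pure-noise predictors
df <- data.frame(x_signal = rnorm(n), x_noise1 = rnorm(n), x_noise2 = rnorm(n))
df$y <- df$x_signal + rnorm(n)
fit <- randomForest(y ~ ., data = df, importance = TRUE)
importance(fit)   # the noise variables still get nonzero scores
varImpPlot(fit)   # the very plot this post warns about
```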

Continue reading

R tips and tricks – readClipboard

Here is a small utility function to save you some boring work.

Say you have a file to read into R. The file path is C:\Users\folder1\folder2\folder3\mydata.csv. So what do you do? You copy the path, paste it into the editor, and start reversing the backslashes into forward slashes so that R can read your file.

With the help of the rstudioapi package, the readClipboard function and the following function:

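A minimal sketch of what such a get_path function can look like (assuming a Windows machine, since readClipboard is Windows-specific; the sketch leans only on base R’s utils::readClipboard, while the full post brings in rstudioapi as well):

```r
get_path <- function() {
  pathh <- utils::readClipboard()        # grab the copied path from the clipboard
  gsub("\\", "/", pathh, fixed = TRUE)   # backslashes -> forward slashes
}
```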
You can:
1. Simply copy the path C:\Users\folder1\folder2\folder3\mydata.csv
2. Execute pathh <- get_path()
3. Use pathh, which is now R-ready.

No more reversing or escaping backslashes.

Beta in the tails

Every form of strength is also a form of weakness*. I love statistics, but I focus too much on methodology, which is not for everyone. Some people (rightly or wrongly) ask: “wonderful sir, but what can I do with it?”.

A new paper titled “Beta in the tails” is a showcase application of why we should focus on the correlation structure rather than on average correlation. The authors discuss the question: do hedge funds hedge? The reply: no, they don’t!

The paper “Beta in the tails” was published in the Journal of Econometrics, but you can find a link to a working paper version below. We start with a figure replicated from the paper, go through its meaning and interpretation, and explain the methods used thereafter.
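A toy illustration of the distinction (my own simulation, not the paper’s method): a fund whose overall correlation with the market looks modest, but whose correlation conditional on a market crash is much higher.

```r
set.seed(7)
n <- 5000
market <- rnorm(n)
crash <- market < quantile(market, 0.05)        # the market's left tail
fund <- ifelse(crash, market + 0.2 * rnorm(n),  # strong co-movement in crashes
               0.2 * market + rnorm(n))         # weak co-movement otherwise
cor(fund, market)                # the "average" correlation looks modest
cor(fund[crash], market[crash])  # the correlation in the tail is much higher
```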

Continue reading

How flexible are neural networks really?

Very!

A distinctive power of neural networks (neural nets from here on) is their ability to flex themselves in order to capture complex underlying data structure. This post shows that the expressive power of neural networks can be quite swiftly taken to the extreme, in a bad way.

What does it mean? A paper from 1989 (the universal approximation theorem, reference below) shows that any reasonable function can be approximated arbitrarily well by a fairly shallow neural net.

Speaking freely, if one wants to abuse the data, to overfit it like there is no tomorrow, then neural nets are the way to go; with neural nets you can perfectly map your fitted values to any data shape. Let’s code an example and explain what this means.
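The post’s own example is behind the link; as a minimal stand-in (assuming the nnet package), here is a wide single-hidden-layer net driven to chase pure noise:

```r
library(nnet)
set.seed(314)
n <- 30
x <- matrix(seq(0, 1, length.out = n))
y <- rnorm(n)   # pure noise, there is nothing to learn
# 25 hidden units for 30 observations: far more flexibility than the data merits
fit <- nnet(x, y, size = 25, linout = TRUE, decay = 0,
            maxit = 5000, trace = FALSE)
# tiny in-sample error: the fitted values chase every wiggle of the noise
mean((y - predict(fit, x))^2)
```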

Continue reading