
Comment on post “On Information Criteria for Autoregression”

Comment on post “nonfarm payroll number”

Comments on “Multivariate volatility forecasting, part 6”

Comment on “Linear regression assumes nothing about your data”

Comment on “Detecting bubbles in real time”
Interesting, but have you ever considered D. Sornette’s papers on LPPL (log-periodic power law) models to detect bubbles (http://www.slideshare.net/arbuzov1989/seminarpsu21102013financialbubblediagnosticsbasedonlogperiodicpowerlawmodel), or more generally (link)?
Interesting post, but the problem with a simple ADF test is that explosive behavior is not captured, because it is treated as an outlier. Phillips et al. (2014) propose a recursive ADF test which overcomes this problem; here is the link if you are interested. It isn’t very hard to implement in R and it will detect multiple bubbles in the series (link).
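For readers who want to try the recursive idea, here is a minimal sketch of a forward (expanding-window) ADF in the spirit of the Phillips et al. sup-ADF test — the toy series, minimum window of 40 observations, and use of `tseries::adf.test` are my own assumptions, not code from the post or the paper:

```r
# Sketch: forward recursive ADF; requires the tseries package.
library(tseries)
set.seed(1)
y <- cumsum(rnorm(150))                     # random walk segment
y <- c(y, y[150] * cumprod(rep(1.05, 30)))  # explosive (bubble-like) segment at the end
r0 <- 40                                    # minimum window length (assumption)
stats <- sapply(r0:length(y), function(r)
  suppressWarnings(adf.test(y[1:r]))$statistic)
sup_adf <- max(stats)  # compare against right-tail critical values from the paper
plot(r0:length(y), stats, type = "l",
     xlab = "window end", ylab = "ADF statistic")
```

The statistic sequence itself is what lets you date-stamp multiple bubbles, rather than getting a single full-sample verdict.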

Comment on “Bias vs. Consistency”

Comment on “look under the hood of a function in R”

Comment on “Out-of-sample data snooping”

Comment on “Bootstrapping time series”

Comment on “Volatility forecast evaluation”

Comment on “Pairs Trading Issues”
Under the link you will find a full search within some (ARMA) DGP class by AIC, BIC, and SIC, which can be plotted as a heatmap as well.
This procedure helped me to get a feeling for the choice of information criterion as well as its behaviour.
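As a rough illustration of that kind of search (my own sketch, not the code behind the link): fit ARMA(p, q) models over a grid, collect the AIC values into a matrix, and let `image()` display it as a heatmap.

```r
# Sketch: grid search over ARMA(p,q) orders by AIC; series and grid sizes are toy choices.
set.seed(2)
y <- arima.sim(model = list(ar = 0.5, ma = 0.3), n = 300)
pmax <- 3; qmax <- 3
aic_grid <- matrix(NA, pmax + 1, qmax + 1,
                   dimnames = list(paste0("p=", 0:pmax), paste0("q=", 0:qmax)))
for (p in 0:pmax) for (q in 0:qmax)
  aic_grid[p + 1, q + 1] <- tryCatch(AIC(arima(y, order = c(p, 0, q))),
                                     error = function(e) NA)
image(0:pmax, 0:qmax, aic_grid, xlab = "p", ylab = "q")   # heatmap of AIC
which(aic_grid == min(aic_grid, na.rm = TRUE), arr.ind = TRUE)
```

Swapping `AIC` for `BIC` in the loop gives the corresponding BIC heatmap, which makes the criteria easy to compare visually.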
What would you say are your top three books/resources for learning about novel time series forecasting methods such as the one above? (Preferably ones that don’t assume a highly technical background and provide some R code.)
I’ve read/skimmed through the commonly cited ones – Ruey Tsay, Rob Shumway, Jonathan Cryer, etc. – but I’m keenly interested in hearing your suggestions.
You have:
– R for Time Series, a free resource.
– Forecasting: Principles and Practice, with a free soft copy.
– New Introduction to Multiple Time Series Analysis
“linear regression makes the assumption that the data is normally distributed” – correct, OLS regression doesn’t make any such assumption.
But a regression will always pass through the mean of the data. That means the regression is only an effective estimator if the (arithmetic) mean is the most likely value, i.e. if your data is normally distributed. If your data is lognormally distributed, then the mean is often way off from the most likely values. Also, taking the logarithm of your data will ensure that the line passes through the geometric mean of the data, which may or may not be desirable.
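The “passes through the mean” point is easy to verify numerically; this is a quick sketch of my own, with a lognormal response purely for illustration:

```r
# Sketch: an OLS line with an intercept always passes through (mean(x), mean(y)),
# and on the log scale through the log of the geometric mean of y.
set.seed(3)
x <- rnorm(200)
y <- exp(1 + 0.5 * x + rnorm(200))   # lognormally distributed response
fit  <- lm(y ~ x)
lfit <- lm(log(y) ~ x)
# fitted value at mean(x) equals mean(y) ...
all.equal(unname(predict(fit, data.frame(x = mean(x)))), mean(y))
# ... and on the log scale it equals mean(log(y)), i.e. the log geometric mean
all.equal(unname(predict(lfit, data.frame(x = mean(x)))), mean(log(y)))
```

Both checks come from the normal equations: with an intercept, the residuals sum to zero, so the fitted line must pass through the point of means.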
I also found this example for (4), from Davidson 2004, page 96.
y_t = B1 + B2*(1/t) + u_t with iid u_t has unbiased Bs but an inconsistent B2.
also
mu_hat = 0.01*y_1 + (0.99/(n-1)) * sum_{t=2}^{n} y_t
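A quick simulation (my own sketch; mu = 5 and the sample sizes are arbitrary) makes the inconsistency of that last estimator visible: the weights sum to one, so it is unbiased, but the fixed weight of 0.01 on y_1 never vanishes, so the variance does not shrink to zero as n grows.

```r
# Sketch: mu_hat = 0.01*y1 + (0.99/(n-1)) * sum(y[2:n]) is unbiased but inconsistent,
# since the 0.01 weight on y1 keeps the variance bounded below by 0.01^2 * var(y1).
set.seed(4)
mu <- 5
sim_moments <- function(n, reps = 5000) {
  est <- replicate(reps, {
    y <- rnorm(n, mean = mu)
    0.01 * y[1] + (0.99 / (n - 1)) * sum(y[-1])
  })
  c(mean = mean(est), var = var(est))
}
sapply(c(50, 500, 5000), sim_moments)
# the means stay near 5; the variances decline but floor at about 0.01^2 = 1e-4
```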
.. a very thorough answer to the question, “How do I view the source code for a function?“.
In addition to what you’ve described, we also show how to find source code for S4 methods, functions that call compiled code (in a base, recommended, or other package), and compiled code built into the R interpreter.
Reminds me of this paper and the de Prado et al. paper cited by it. My current hobby is taking Quantopian strategies and plotting their Sharpe ratios over time.
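For the rolling-Sharpe hobby, something like this works; it is entirely my own toy code, and the 60-day window and simulated returns are assumptions:

```r
# Sketch: rolling annualized Sharpe ratio over a 60-day window, on simulated daily returns.
set.seed(5)
ret <- rnorm(500, mean = 0.0004, sd = 0.01)  # toy daily strategy returns
w <- 60                                      # rolling window length (assumption)
roll_sharpe <- sapply(w:length(ret), function(i) {
  r <- ret[(i - w + 1):i]
  sqrt(252) * mean(r) / sd(r)                # annualized Sharpe over the window
})
plot(roll_sharpe, type = "l", xlab = "time", ylab = "rolling Sharpe")
```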
I would like to point out that for the sieve bootstrap, the column-wise loop can be vectorized, resulting in much faster code:
res.star[,] <- sample(na.omit(scaled.res), (n-numlags)*R, replace = TRUE)
obs.star[1:numlags,] <- b1 # for the first obs we plug in the original data
for (j in (numlags+1):n)
obs.star[j,] <- ar1$x.intercept + ar1$ar %*% obs.star[(j-1):(j-numlags),,drop=FALSE] + res.star[(j-numlags),]
1) In the DM test in the forecast package we can choose power 1 or 2 for the loss function.
Does this mean the MAE and MSE loss functions correspond to power 1 and 2, respectively?
2) Is it common to find different results if we choose power 1 or 2?
I mean, should the relative performance of the two models be the same regardless of the power function?
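To make the question concrete, this is how the two choices look side by side (a sketch of my own; the forecast errors are simulated, and I am assuming the usual `forecast::dm.test` interface with its `power` argument):

```r
# Sketch: Diebold-Mariano test under absolute-error (power = 1) vs
# squared-error (power = 2) loss; the two p-values need not agree.
library(forecast)
set.seed(6)
e1 <- rnorm(200, sd = 1.0)   # forecast errors of model 1
e2 <- rnorm(200, sd = 1.1)   # forecast errors of model 2
dm.test(e1, e2, h = 1, power = 1)  # MAE-type loss
dm.test(e1, e2, h = 1, power = 2)  # MSE-type loss
```

Because power 2 penalizes large errors much more heavily, an error distribution with fat tails can flip which model looks better between the two runs.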
1. These tests do not have high power.
2. They are not consistent with each other. Each captures a different aspect.
My advice:
– Check all and hope for consistency.
– Try to increase power using the bootstrap.
– You can find papers in the literature showing that averaging can be a practical solution. That is, instead of deciding between the two measures, just average them.
Hi: the question of what to use for the regression approach that you are discussing can be answered in the following way:
A) If the log(stock prices) of the two assets are cointegrated, then use log(prices) and do the regression. Testing for cointegration is relatively straightforward in the bivariate case, and any decent time series econometrics book will discuss that.
B) If the two stock prices are not cointegrated, then use returns.
Still, neither A) nor B) will necessarily exhibit more stability in the relationship than the other; i.e., the parameter estimates can definitely change over time and exhibit lots of instability.
Also, neither A) nor B) addresses the problem of X and Y not being “symmetric”. Paul Teetor has written a nice paper that addresses this issue through the use of total least squares regression. See the link below. I’m not sure whether Paul’s idea helps with the stability issue.
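A minimal version of the check in A), Engle–Granger style, might look like this; it is my own sketch, with simulated prices and `tseries` standing in for whatever book treatment you follow:

```r
# Sketch: Engle-Granger two-step check for cointegration of two log-price series.
library(tseries)
set.seed(7)
common <- cumsum(rnorm(500))                 # shared stochastic trend
lp1 <- common + rnorm(500, sd = 0.5)         # log prices of asset 1
lp2 <- 0.8 * common + rnorm(500, sd = 0.5)   # log prices of asset 2
fit <- lm(lp1 ~ lp2)                         # cointegrating regression
# ADF on the residuals; strictly, the critical values should be the
# Engle-Granger ones, not the plain ADF values this call reports
suppressWarnings(adf.test(residuals(fit)))
coef(fit)                                    # hedge ratio estimate
```

If the residuals reject a unit root, the spread `lp1 - coef(fit)[2]*lp2` is the mean-reverting series you would trade; otherwise fall back to B) and work with returns.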