See, for instance, "A Brief Parable of Over-Differencing" by Cochrane.

On the other hand, just from the estimates, the second regression has a significant t-stat while the first is too weak. I see no reason why these results don't present a defensible case for your hypothesis.

In addition to quoting some pairs one way and others another, I've always thought they should be labeled the opposite way (USD/JPY should mean the number of dollars you can buy with one yen). I convinced myself that what they do makes sense, but if I hadn't known which currency was more valuable to begin with, I don't know if I ever would have made sense of it.

I feel like the question you are asking is: "Can the t-statistics/R^2 be deflated by serial correlation in the residuals?" If so, then the change in your results could be caused by a shift in the error structure and not by a change in the underlying relationship between the two variables. Unfortunately, the answer is "Yes". Serial correlation is in essence a kind of misspecification problem (Greene, 5th ed., p. 253), and its likely effect is to lower the amount of variation in the dependent variable that the model can explain (the in-model variation).

I'm not entirely sure that this matters: your stated null is that the correlations before and after 1931 are the same. Thus, to my (admittedly simplistic) mind, the obvious thing to do is compute the correlation coefficient in each sample and test whether or not they are different.

http://vassarstats.net/rdiff.html

The D-W statistic matters a lot more if you are asking causal questions, but you don't seem to be…
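For reference, the Durbin-Watson statistic is simple to compute from regression residuals. A minimal sketch (the residuals below are illustrative, not from the regressions being discussed):

```python
# Durbin-Watson statistic: DW = sum((e_t - e_{t-1})^2) / sum(e_t^2).
# Values near 2 suggest no first-order serial correlation; near 0,
# positive serial correlation; near 4, negative serial correlation.

def durbin_watson(residuals):
    num = sum((e1 - e0) ** 2 for e0, e1 in zip(residuals, residuals[1:]))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Alternating residuals are strongly negatively autocorrelated:
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0
```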

These correlation coefficients can be extracted from the statistics you posted with a bit of least-squares algebra (presuming this is from a simple regression!). I got them as .076 and .489 respectively, and viewed as a one-tailed test via the website above, the difference is significant at the 5% level.

(I don’t vouch for the accuracy of my calculations, here’s how I got them though: http://arhouser.net/odds-ends/recovering_r.pdf)
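For what it's worth, both steps can be sketched in code: recovering r from a simple regression's slope t-stat uses r = t / sqrt(t^2 + df), and the test on the linked page is Fisher's z-transformation. The sample sizes below are made-up placeholders, since the comment doesn't state them:

```python
import math

def r_from_t(t, df):
    """Recover the correlation coefficient from a simple regression's
    slope t-statistic and residual degrees of freedom (df = n - 2)."""
    return t / math.sqrt(t ** 2 + df)

def fisher_z_test(r1, n1, r2, n2):
    """z-statistic for H0: the two population correlations are equal."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical sample sizes of 60 per subsample; compare |z| to 1.645
# for a one-tailed 5% test.
z = fisher_z_test(0.076, 60, 0.489, 60)
```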

You're right: Yahoo would be smart to offer more ways to view exchange rates than market convention, which annoys even practitioners. Pricing everything in one base currency isn't the best way, though. Bloomberg has this down pat…

They use a table: the leftmost column is the denominator and the top row is the numerator. You can choose between displaying 'last price' or '% change', and the cells are highlighted with a red-green heatmap. This layout lets you easily see asymmetries in the movements between different FX cross rates. For example, today the SEK has strengthened against all the major currencies, but it has strengthened most against NZD, AUD, CAD, and JPY.
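A table like that is easy to derive from a single set of USD quotes, since every cross rate is a ratio of two USD rates. A minimal sketch (the rates below are made-up round numbers, not live quotes):

```python
# usd_per[ccy] = USD received for 1 unit of ccy (made-up figures).
usd_per = {"USD": 1.00, "EUR": 1.10, "GBP": 1.30, "JPY": 0.0070}

def cross_rate(base, quote):
    """Price of 1 unit of `base` expressed in the `quote` currency."""
    return usd_per[base] / usd_per[quote]

# Full table: rows are the base currency, columns the quote currency.
table = {b: {q: cross_rate(b, q) for q in usd_per} for b in usd_per}
```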

Re: your paper dilemma, Ironman has it right. You're showing a change of state, so a movement from one variable having zero explanatory power to 21% is interesting.

For the first 20 years of my life we used to say that the Canadian dollar was worth 70 cents. Sometime in the '90s it flip-flopped, and we said that the USD was worth 1.35 CAD.

It used to be that English-speaking countries' currencies were quoted as:

unit of currency / USD (GBP, AUD, NZD, CAD), while non-English-speaking countries' currencies were quoted as USD / unit of currency.

When the euro was introduced, the ECB announced its preference to be quoted as EUR/USD.

On regressions: you cannot regress price levels, only price changes.

Regression 1 shows no observed correlation. You cannot say that the correlation doesn't exist, just that you were unable to see it.

Regression 2 is still pretty weakly correlated. I would not make too much out of a non-correlation turning into a weakly correlated series.
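The "don't regress levels" point is the classic spurious-regression problem: two independent random walks will often show a sizable R^2 in levels even though their changes are unrelated. A minimal simulation sketch:

```python
import random

def r_squared(x, y):
    """R^2 of a simple regression of y on x (the squared correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

random.seed(0)
steps_x = [random.gauss(0, 1) for _ in range(500)]
steps_y = [random.gauss(0, 1) for _ in range(500)]

# Levels: cumulative sums (random walks) of the independent steps.
levels_x, levels_y = [], []
tx = ty = 0.0
for dx, dy in zip(steps_x, steps_y):
    tx += dx; ty += dy
    levels_x.append(tx); levels_y.append(ty)

print(r_squared(levels_x, levels_y))  # often sizable, purely spurious
print(r_squared(steps_x, steps_y))    # near zero, as it should be
```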

Historically everything used to be quoted with GBP as the base currency so the USD/GBP continues to be quoted that way. Pretty much everything else is quoted with USD as the base currency.

The main exception is the Euro. I don’t know why this is. I suspect it has something to do with European pretentiousness.
