
Volatility in Housing Markets (Part 1 of 2) July 20, 2008

Posted by DustinRJay in Calgary real estate, volatility.

In general, housing prices have low volatility compared to other asset classes.  This is because the underlying fundamental value (rents) is a relatively stable cash flow.  Stocks, by contrast, have larger variance in earnings and therefore larger volatility in price.

A look back at historical real estate volatility can help frame a forecast probability cloud.  By comparison, the S&P 500 has the VIX index, which represents expected S&P 500 volatility over the next 30 days and is referred to by some as the fear index.

Binning the quarterly year-over-year price changes for Calgary real estate from Q3 1977 to Q1 2008 into a histogram helps identify the scale of price changes that could occur in one year.  The results are below, with a sketch of the calculation after the list:

  • P90: -5.6% (90% chance of price growth being greater than -5.6%)
  • P50: +5.9% (50% chance of price growth being greater than +5.9%)
  • P10: +19.9% (10% chance of price growth being greater than +19.9%)
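
As a rough illustration, the percentiles above can be reproduced along the following lines.  This is a minimal sketch in Python; the file name and column names are hypothetical stand-ins for the quarterly Calgary price series, which is not published here.

    # A minimal sketch of the percentile calculation described above.
    # The CSV path and column names are hypothetical placeholders.
    import numpy as np
    import pandas as pd

    prices = pd.read_csv("calgary_avg_price_quarterly.csv",   # hypothetical file
                         parse_dates=["quarter"], index_col="quarter")["price"]

    # Year-over-year change for each quarter: price vs. the same quarter a year earlier.
    yoy = prices.pct_change(periods=4).dropna()

    # "P90" follows the post's convention: the growth rate exceeded 90% of the time,
    # i.e. the 10th percentile of the distribution of year-over-year changes.
    p90, p50, p10 = np.percentile(yoy, [10, 50, 90])
    print(f"P90: {p90:+.1%}  P50: {p50:+.1%}  P10: {p10:+.1%}")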

Furthermore, assuming each year is independent, the probability of an outcome above the P10 or below the P90 occurring for 5 consecutive years is 1 in 100,000 for each (i.e. (1/10)^5 = 1/100,000).  The shortcoming of this approach is that the yearly changes are not statistically independent, as bear and bull markets typically last between 2 and 10 years.

What this analysis demonstrates is that even if a bearish scenario is the right call, Mr. Market could take a very long time to unwind.  The following graph illustrates what 5 consecutive P10, P50 and P90 events would look like; these paths are meant to represent the best case, best guess and worst case respectively.
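
The compounding behind those three paths is simple to reproduce.  A minimal sketch, using the percentile values listed above and a starting index of 100:

    # Five consecutive P90, P50 or P10 events, compounded from an index of 100.
    scenarios = {"P90 (worst case)": -0.056,
                 "P50 (best guess)": +0.059,
                 "P10 (best case)":  +0.199}

    for name, r in scenarios.items():
        path = [100 * (1 + r) ** year for year in range(6)]
        print(name, [round(level, 1) for level in path])

    # Probability of either tail scenario repeating 5 years running,
    # under the (admittedly shaky) independence assumption:
    print("P(5 consecutive tail events) =", (1 / 10) ** 5)  # 1 in 100,000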


Comments»

1. nonplused - July 24, 2008

I think there is something wrong with this analysis. Perhaps the last 2 years, 2006 on, need to be excluded as outliers. The reason I would do this is that the dynamics of the market were temporarily distorted by the 0/40 CMHC mortgage, which will slowly be undone in the next few years.

To me, the results now do not pass the “common sense” test. Is it really possible that in 10 years the average price of a house in Calgary could be $10,000,000? Or is it more likely that something is wrong with the 2006–2007 data and it should be excluded when drawing the trend? If you draw a straight line through all the data to about 2004 and then extrapolate, the P90 case is actually the expected case, and only the history of the last 2 or 3 years changes this. And if it is a bubble and it corrects as fast as it inflated, the P50 case should be about -15% for 2 years, down to the trend.

2. radley77 - July 24, 2008

Each point in the P90 case represents a 1 in 10 chance of happening. That means the chance of it happening twice in a row would be 1 in 100, assuming the events are independent of each other. The chance of the P10 case you illustrated occurring for 10 years in a row would be (1/10)^10, or 1 in 10 billion. This equates to a minuscule probability for an absurdly high theoretical number.

Of the 118 samples available, only one point was less than a -15% year-over-year change. A forecast of two consecutive years of 15% drops is extremely improbable given the low volatility of the housing market.

To me, the case for even a single 15% year-over-year drop would have to be incredibly well laid out (again, you're making a bet that has occurred only once in the past 118 samples, a 0.8% chance). To expect it to occur two years in a row would be unprecedented and falls outside the limits of what could be expected from historical volatility.

I hope to have another chart up shortly when I get the time…

3. nonplused - July 25, 2008

There may be only one case of a 15% drop, true, but that isn’t how volatility is measured. It’s the absolute change, so if there are cases of 30% increases, it implies a 30% drop is likely as well, with the same frequency (sort of: real estate is a commodity, so the changes should be log-normally rather than normally distributed). I think the data should be fitted to a log-normal curve rather than using a simple histogram. A histogram assumes you have sufficient data points to define the entire distribution, which I think is highly unlikely.
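
For concreteness, a minimal sketch of the fit nonplused describes: model log(1 + r) as normal, so the price relatives (1 + r) are log-normal, then read tail probabilities off the fitted curve instead of the raw histogram.  The yoy series is assumed to come from the earlier sketch, not from published data.

    # Fit a normal distribution to the log price relatives, log(1 + r),
    # which makes (1 + r) log-normal; `yoy` is the year-over-year series
    # from the earlier sketch (an assumption, not published data).
    import numpy as np
    from scipy import stats

    log_rel = np.log1p(yoy)
    mu, sigma = log_rel.mean(), log_rel.std(ddof=1)

    # Probability of a one-year drop worse than -15% under the fitted model:
    p_drop = stats.norm.cdf(np.log1p(-0.15), loc=mu, scale=sigma)
    print(f"P(yoy < -15%) ≈ {p_drop:.3%}")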

4. plusedplused - July 25, 2008

I have to agree with nonplused. Just looking at the graph, the best guess clearly depends on a much higher rate of appreciation than the historical averages would dictate. I can only imagine the last few years of data being the reason for this.

I’d be interested to see this type of analysis limited to the periods when bubbles occurred, to see what the most likely outcome of this bubble would be.

5. section31 - July 28, 2008

I think 1980 to 1982 contradicts the 2009-and-beyond extrapolation.

