
Two Cheers for This Year’s Nobel Prize

Well, three cheers for Daniel Kahneman, for he has won the Nobel prize, and he is by all accounts a very fine bloke. Vernon Smith shared the prize; about him I know nothing. Kahneman’s work with Tversky pretty much kick-started the behavioural finance revolution, bringing the experimental psychology literature into the economics of expectations formation. But only two cheers for the contribution he’s made to economics, for reasons that I will endeavour to explain.

For those people who read this blog without the benefit of an economics degree, fuck off and don’t come back until you’ve got one. In fact get a PhD. I’m certainly not going to waste my time talking to people who only have an undergraduate degree … sorry, I appear to be channeling the Monetary Analysis department of the Bank of England there. Anyway, for the benefit of people who aren’t up to speed on why Kahneman’s work matters, here’s a potted history of expectations in economics.

Expectations in economics matter because the whole point of an economic model is that people change their behaviour based on the interaction between their own preferences and the environment that they find themselves in. Lots of economic models are “static”, in that they basically abstract away from the fact that actions happen over time; these are “one-period” models, and they are popular because their mathematical simplicity means that they can be taught to even the stupidest undergraduate students, and thus serve as a means by which one can get across the basic economic insights. The problem arises, of course, when the undergraduate students get elected to be President of the United States. But anyway, one-period, static models (like, say, the supply and demand curves) are the mainstay of economics.

But you can’t do much in the way of real-world applications with a one-period model. You have to find some way of taking into account the fact that people base their actions not just on what’s happening now, but also what they expect to happen in the near future. To take a concrete example, consider the price of a share of Amalgamated Widgets, an imaginary company which fortunately happens to have just filed its accounts for the last twelve months and to have earnings per share of exactly one dollar. How much would you pay for a share of AW?

Obviously, that’s going to depend on your expectation of what it’s going to do in the future. In fact, Paul Samuelson demonstrated that, under innocent-looking assumptions, your valuation of AW is equal to your expectation of its cash earnings for every period from now to the end of time, with each future period discounted at the appropriate rate of interest. This ought to be true whether you’re planning to hold onto the stock forever or to flip it in the next five minutes, because the price you can sell it for in five minutes’ time depends on what someone *else* thinks it’s worth, and so on and so on. Since *someone*’s going to be holding the stock forever, *everyone* involved with its price, even the shortest-term trader, has to be concerned with long-term earnings. Samuelson referred to this as the “Law of Iterated Expectations”; my expectation of your expectation of what AW is worth is the same as my expectation of what AW is worth, which is the same as the properly discounted value of what I expect AW to earn. No matter how many times we “iterate” the expectations by adding more and more middlemen, it all comes down to this.
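If you’d rather see the arithmetic than take my word for it, here’s a minimal sketch of the discounted sum. Every number in it is invented for illustration: $1 a year of expected earnings, a flat 5% discount rate.

```python
# Samuelson's discounted sum: the value of a share is the sum of expected
# earnings per period, each discounted back to today. All numbers invented:
# $1/share expected every year, a flat 5% discount rate.
def present_value(expected_earnings, r, horizon):
    """Sum of E[earnings_t] / (1 + r)**t for t = 1..horizon."""
    return sum(expected_earnings(t) / (1 + r) ** t
               for t in range(1, horizon + 1))

def flat_dollar(t):
    return 1.0  # expect exactly $1 of earnings every year

pv = present_value(flat_dollar, 0.05, horizon=1000)
# for a long enough horizon this converges on the perpetuity value,
# 1 / 0.05 = $20 per share
```

Note that the $20 falls straight out of the discount rate: value a constant $1 stream as a perpetuity and you get 1/r.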

So what would my expectations be, then? Well, the simplest way to model them is to assume that I have myopic expectations; I assume that all future periods will be exactly the same as this one. Then, next period, when I turn out to be wrong, I revise my assumption so that all future periods will be exactly the same as the new one.

The myopic expectations model (known in forecasting circles as “the naive forecast”) is such a transparent and stupid way of thinking about the future that it’s embarrassing how difficult it is to come up with a method that works better in practice. Lots and lots of professional models, the kind of services that you pay thousands of dollars for, underperform either the pure myopic model or the myopic model with respect to growth rates. Which is a comfort of sorts to academic economists, who would otherwise probably be moved to tears and suicide by the differential between professorial salaries and the kind of sums that a charmer with a bow tie can pull down in the prediction industry.
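For the avoidance of doubt, both flavours of naive forecast fit in a handful of lines. The earnings history here is entirely made up:

```python
# Two flavours of the naive forecast. Earnings history entirely invented.
earnings = [1.00, 1.10, 1.21]  # hypothetical last three years of EPS

# Pure myopic: next year will be exactly like this year.
naive = earnings[-1]

# Myopic in growth rates: the most recent growth rate carries on.
growth = earnings[-1] / earnings[-2]
naive_with_growth = earnings[-1] * growth
```

That’s the whole model, which is rather the point: this is the benchmark the expensive services fail to beat.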

Somewhat more sophisticated, we have the “adaptive expectations” model, whereby I have some idea of what would be a normal earnings number for AW, and I update that in a sort of Bayesian fashion with every earnings announcement. Adaptive expectations never really took off in finance theory, but they were very big as the engine of a lot of Keynesian macro models in the early 1970s, shortly before the “revolution” which brought us …
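The adaptive updating rule is just partial adjustment of the belief towards each new announcement. Here’s a sketch; the adjustment speed of 0.3 and the earnings numbers are entirely my invention:

```python
# Adaptive expectations: partial adjustment of the belief towards each
# new announcement. The speed lam = 0.3 and all numbers are invented.
def update(belief, announced, lam=0.3):
    """Move the belief a fraction lam of the way to the announcement."""
    return belief + lam * (announced - belief)

belief = 1.00  # my idea of a "normal" earnings number for AW
for announced in [1.50, 1.50, 1.50]:
    belief = update(belief, announced)
# after three identical $1.50 announcements the belief has crept up to
# about $1.33 -- it approaches $1.50 but never jumps straight there
```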

Rational expectations! Beloved of a million and one young and ambitious graduate students with spots, straggly beards and a head for calculus, and loudly derided by a million and two youngish journalists with tweed jackets, hazy left wing politics and a two hundred word screed to write before lunchtime. The rational expectations model and its close cousin, the “efficient markets theory” are actually based on a perfectly sensible intuition; that the mistakes people make are likely to cancel each other out, so when you’re talking about a large population, you should assume that they do cancel each other out. The trouble comes when actually existing rational expectations guys forget the derivation of the theory and start applying “efficient markets” concepts to real life problems as if it was a conclusive argument that the market is always right.
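The “mistakes cancel out” intuition is easy to demonstrate in a sketch, assuming (and this assumption is the whole ballgame) that the errors really are independent and unbiased. Numbers invented:

```python
# The rational-expectations intuition: lots of individually noisy
# forecasts, averaged over a large population, land very close to the
# truth -- provided the errors are independent and unbiased.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

true_value = 100.0
forecasts = [true_value + random.gauss(0, 10) for _ in range(100_000)]
crowd_forecast = sum(forecasts) / len(forecasts)
# any individual may be off by 20 or 30, but the crowd average comes
# out within a whisker of 100
```

Make the errors correlated or biased and the whole trick falls apart, which is roughly where Kahneman and Tversky come in below.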

Of course, the big problem with rational expectations models is that when the rubber met the road, in empirical testing, they performed disastrously (by which I mean, really really disastrously). They started off with a few successes; Patrick Minford’s model at the University of Liverpool seemed to be predicting the early 1980s recession pretty well. But after a few years, it was noticed that Minford’s model predicted recessions more or less all the time, and thus occasionally did very well in the same way in which a stopped clock is occasionally right. This was the problem with rational-expectations models from a mathematical point of view; they tend to have “positive feedback” characteristics which make them sink into certain preferred solutions, leading to forecasting performance which, in the words of Steve Nickell, “was enough to make you weep if you cared about that sort of thing”. How bad was it, Virginia? I’ll tell you how bad. Robert Lucas, who won a Nobel Prize himself for inventing rational expectations models, has more or less given up on them. Yup, these models were bad enough to make an economist admit he was wrong. (We should note at this point that it is my personal view that “rational expectations” as a principle of economic modelling was never really given a fair go. The limitations in the mathematical form of the models actually used were so severe that they would have cocked up in exactly this way more or less whatever the economic intuition they were modelling. But here we are departing from the broad historical sweep).

So then, we reach Kahneman and Tversky (in case you’re interested, Tversky unfortunately died, and hence isn’t eligible for the Nobel, which is not awarded posthumously[1]). While the empirical battle over rational expectations was dragging out to a bloody end, they came in with a theoretical contribution to the debate. This basically involved taking some of the results from experimental psychology about how people formed expectations and made decisions (fixation on key levels, loss aversion, concerns for distributive equity, etc.) and using them as the basis for a case that people did, in fact, make systematic errors of judgement, and that it was not safe to assume that errors would cancel each other out. Rational expectations in macro was dead, and the “behavioural finance” crowd have been chipping away at efficient markets ever since.

So, certainly a hurray from me as a heterodox economist. Rational expectations was a bad theory, and it is good that it no longer rules the roost, and Kahneman certainly did his bit. But only two cheers, because in my view, economics is never really going to get to grips with time and uncertainty as long as it is stuck in a paradigm under which the response to the failure of the last approach is to start tinkering with the way in which we model expectations.

To start with, consider the mathematical structure of the models we’ve been talking about (if you aren’t familiar with the mathematical structure, just adopt a thoughtful facial expression). Although the concept of future time is present through the expectations operator, they’re actually static models. “Taking expectations” is not really a way of modelling the future; it’s a way of avoiding modelling the future by flattening it down into the present, only treating it at all through its effect on today’s expectations. Kahneman’s work doesn’t actually change this fundamental characteristic; behavioural models of expectations are still, structurally, one-period models being forced to do the work of genuine dynamic modelling.

The real work that needs to be done is in attacking the fundamental assumptions of “expectations” modelling in economics. I mentioned above that Samuelson’s assumptions underlying the Law of Iterated Expectations were “innocent-looking”, which they are, but they’re actually extremely restrictive. Importantly (and this is a topic I’ve harped on about before), they’re only valid for expectations of *ergodic* processes.

What the hell is an “ergodic process” when it’s at home?

Ergodicity is a statistical property. A data generating process is “ergodic” if the data that it generates is “well-behaved” in the sense that you can take a sample of it and that sample will be in some way representative of the whole. Imagine a random number generator, spewing out numbers, and yourself sitting in front of it, writing the numbers down. After 1000 numbers, you calculate the mean of the observations. If the random number generator is driven by an ergodic process, you now have a decent estimate of what the mean will be after 10,000 observations. With ergodic stochastic processes, collecting more data gets you a better and better estimate of what the underlying parameters of the process are, as the “noise” cancels itself out in some statistically well-defined way.
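Here’s the random-number-generator thought-experiment as a sketch, using i.i.d. normal draws as my stand-in for an ergodic process:

```python
# An ergodic process: i.i.d. draws from a fixed distribution. The sample
# mean settles down, and collecting more data buys a better estimate of
# the underlying parameter (here, a true mean of 0).
import random

random.seed(1)  # fixed seed so the sketch is reproducible
draws = [random.gauss(0, 1) for _ in range(100_000)]

mean_1k = sum(draws[:1_000]) / 1_000
mean_100k = sum(draws) / 100_000
# the two estimates agree closely, and both sit near the true mean of 0
```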

But imagine if you were in front of the machine, and you kept on collecting more and more data, but the average after 1000 numbers was completely different from the average after 10,000, which was nothing like the average after 100,000 and so on. Imagine further that it *never* settled down, no matter how much data you collected. That would be a strongly nonergodic process; over time periods of around a week to a month, lots of weather data appears to be nonergodic, which is why medium term weather forecasting is so difficult. It’s clear here that to talk about “expectations” of the future states of a nonergodic system is meaningless; people might have opinions about the future, but there aren’t the solid linkages between these views and the actual data which one would need to call them “expectations”. Certainly, there isn’t enough to support the trick used by economists of using the expectations operator to make dynamic processes static so that they can be modelled tractably.
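And here’s the nonergodic version of the same thought-experiment. I’ve used a random walk, the simplest nonergodic process going:

```python
# A nonergodic process: a random walk. The running mean of the walk's
# position drifts along with the walk itself and never settles down,
# however much data you collect.
import random

random.seed(2)  # fixed seed so the sketch is reproducible
position, running_sum, means = 0.0, 0.0, {}
for t in range(1, 100_001):
    position += random.gauss(0, 1)  # the walk takes another step
    running_sum += position
    if t in (1_000, 10_000, 100_000):
        means[t] = running_sum / t
# means[1_000], means[10_000] and means[100_000] are typically miles
# apart -- collecting more data pins nothing down
```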

So what? Well, so this:

1. Most processes which are characterised by positive feedback are nonergodic.

2. Most economic processes of interest are subject to significant, destabilising positive feedback.

It’s a real problem, and in my opinion, Paul Davidson ought to be looking at something like a Nobel Prize for being one of the few economists to take it seriously. (He won’t get it, of course, because this is way out of the mainstream of academic economics, where it is still considered the mark of a clever young man to say that “chaos theory never amounted to much”). Kahneman’s work is important, but in order to be a constructive contribution to some future correct theory of economics, it needs to be thought of as a description of human decision making and expectation forming, not as a way of rescuing the broken models of expectations economics. So a hearty “Hip hip” from me, but I’ll be keeping the champagne on ice for the meantime …


[1] In fact, you can be awarded the Nobel Prize posthumously if you pop your clogs between the time it is announced and the actual awards ceremony.
