I am currently reading Tony Lawson’s seminal Economics and Reality – seminal for philosophers of economics, though sadly not a staple read for the average Economics PhD. His broad target is the methodologies underlying modern neoclassical economic thought – economic “pure theory”, and the use of econometrics. Although an econometrician by profession and background, he argues for a radical philosophical departure from the metaphysics and epistemology of the neoclassical tradition. Also, he taught John Latsis, one of my most engaging and awesome tutors. So in many ways, a cool guy. I went to see him in Cambridge last December, and I was struck by how warm he was to me, a stranger. He has a youthful manner despite his age, and a lot of humility despite his success. I imagine this is because he has always been on the naughty side of the economics fence.
The following post reconstructs Lawson’s arguments concerning the validity of the search for econometric laws of human economic nature.
Posts to follow will look at, among other things, the application and extension of Lawson’s critique of econometrics in the field of development economics.
I. What is Econometrics?
“Economics, so far, has not led to very accurate and universal laws like those obtaining in the natural sciences.” – Haavelmo (1944)
Haavelmo’s observation led him to develop probabilistic methods in what we nowadays recognise as statistics or econometrics (the latter being the application of the former in economic research). However, Lawson contends that, despite sixty years of econometric research and application, we are no closer to the holy grail of laws like those of the natural sciences. His proposed remedy is to abandon the project as currently conceived, and to adopt a fundamentally different methodological approach.
(It seems that the holy grail is nonetheless still holy – although I haven’t read enough of Lawson to get a good grasp of his position, he does think that economics can be a successful science, just under a different conception of science than the law-like one.)
Econometrics is used to extract information about the true population values of certain variables from finite samples. To take a step back, econometrics as a whole has two major projects.
The first project in econometrics is to develop metrics that describe different kinds of distributions of values – that is, the population moments – and ways to estimate them from finite sample data. This gives us probabilistic estimates of the true population values, telling us what to expect from population data we have not yet seen. This project is extremely philosophically interesting in its own right: when we make inferences from finite samples (the heights of all the people in the room) to larger or potentially infinite populations (the heights of all people), we must use probabilistic methods of reasoning.
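As a toy illustration of this first project, here is a minimal sketch in Python (all the numbers are invented for illustration): we treat the sample mean and standard deviation as estimates of the population moments, and attach a standard error to say how far off we might be.

```python
import random
import statistics

random.seed(1)

# A hypothetical "population": the heights (cm) of 100,000 people.
population = [random.gauss(170, 10) for _ in range(100_000)]

# We only get to see a finite sample of 200 of them.
sample = random.sample(population, 200)

# Sample moments serve as estimates of the population moments.
est_mean = statistics.mean(sample)
est_sd = statistics.stdev(sample)  # uses the n-1 correction

# A rough 95% interval for the population mean, via the standard error.
se = est_sd / (len(sample) ** 0.5)
print(f"estimated mean: {est_mean:.1f} +/- {1.96 * se:.1f}")
print(f"true pop. mean: {statistics.mean(population):.1f}")
```

The interval quantifies exactly the probabilistic character of the inference: the sample tells us roughly where the population mean lies, never exactly.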
Probabilities pile up in all parts of statistics. Take a variable that we are investigating – the wage someone earns – and suppose we are testing the hypothesis that women’s wages differ from men’s. We have two populations of interest: all of the women in the economy, and all of the men. If we had access to all the wage data, we could simply check whether the population averages differed. Since we only have access to sample subsets of the populations, however, we resort to frequentist hypothesis testing.
In this case, we find that x differs from y at a significance level of p. Imagine that there is an underlying distribution of wages for men and for women, for which the averages are the same. Say we were to pluck, at random, men and women from the population to use in our samples, and we did this 100 times, generating 100 different samples (which can be overlapping: we release the people back into the wild each time). In p% of these 100 samples, we would find a difference in sample means at least as large as the one we actually observed. How low does p have to be for us to be confident that we’re not in that sceptical scenario? That is one way in which probability underlies statistics. But it goes deeper: the only reason that we can use this method of probabilistic inference at all is that we think the Central Limit Theorem applies in this case.
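The repeated-sampling story above can be simulated directly. The sketch below (Python, with an invented wage distribution and an invented “observed” gap) builds two populations whose true means are equal, draws many paired samples, and counts how often chance alone produces a gap at least as large as the observed one – a simulated p-value.

```python
import random
import statistics

random.seed(0)

# Hypothetical populations: men's and women's wages drawn from the SAME
# distribution, so the true population means are equal (the null hypothesis).
population_men = [random.gauss(30_000, 5_000) for _ in range(100_000)]
population_women = [random.gauss(30_000, 5_000) for _ in range(100_000)]

observed_gap = 1_200  # an invented gap "observed" in one real-world sample

def sample_mean_gap(n=50):
    """Draw one sample of n men and n women, and return the absolute
    difference of the two sample means."""
    men = random.sample(population_men, n)
    women = random.sample(population_women, n)
    return abs(statistics.mean(men) - statistics.mean(women))

# Repeat the sampling 1000 times: how often does chance alone produce
# a gap at least as large as the one we observed?
gaps = [sample_mean_gap() for _ in range(1000)]
p = sum(g >= observed_gap for g in gaps) / len(gaps)
print(f"simulated p-value: {p:.3f}")
```

If p comes out small, samples this different are rare when the populations are truly identical, and we start to doubt the null hypothesis.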
One side-question so far, for this first project: How does this work in a Bayesian and not a frequentist model of probability? Is there an alternative interpretation? I don’t quite understand Bayesianism, because I don’t get what credences actually mean. Are they meant to relate to a mental process that actually occurs, or that we can consciously be aware of, or that we can introspect about philosophically?
The second major project in econometrics, which relies on but goes further than the first, is to estimate the causal effect of independent variables x1, x2…xn on a dependent variable of interest, y. How should we interpret the idea of “causal effect” here? I believe that econometricians themselves are going beyond the idea of apparently constant conjunctions, which is what they are often portrayed as looking for. When we say that we have found a robust econometric relationship from a regression of y on x, I believe we mean something like this:
(Mechanism) There is some well-picked-out mechanism that operates between x and y, such that a change in x will feed through to (i.e. “cause”) a change in y.
We are not saying something like this:
(Constant Conjunction) A difference in x will always be found to be accompanied by a corresponding difference in y.
Why do I think that the mechanism view is what econometricians are actually committed to? Well, take the spectre that is haunting econometric civilisation: omitted variable bias. Fear of omitted variable bias has driven successive refinements of the theory of instrumental variables and identification.
To illustrate omitted variable bias, most textbooks drag out the too-familiar case of estimating the effect of some policy intervention on the education of a bunch of kids. One might have the theory that class size is inversely related to exam performance, because more intensive teaching produces better results. However, class size is also correlated with family income, which may in turn be correlated with the amount of help the pupil gets at home. If we were simply to regress exam performance on class size, we would get a biased estimate of the effect of class size on performance, because we would be “counting in” the added effect of family income on performance. This is what we call omitted variable bias.
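A quick simulation makes the bias visible. In the hypothetical data-generating process below (all coefficients invented for illustration), richer families end up in smaller classes and also boost exam scores directly; the true effect of class size on scores is -0.5. Regressing scores on class size alone “counts in” the income effect, while controlling for income recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented data-generating process:
income = rng.normal(50, 10, n)                         # family income, in £1000s
class_size = 40 - 0.4 * income + rng.normal(0, 3, n)   # richer -> smaller classes
# True effect of class size on score is -0.5; income also helps directly.
score = 70 - 0.5 * class_size + 0.3 * income + rng.normal(0, 5, n)

# Simple regression of score on class size alone (slope = cov / var):
naive_slope = np.cov(class_size, score)[0, 1] / np.var(class_size, ddof=1)

# Multiple regression controlling for income, via least squares:
X = np.column_stack([np.ones(n), class_size, income])
beta = np.linalg.lstsq(X, score, rcond=None)[0]

print(f"naive slope:      {naive_slope:.2f}")  # biased: more negative than -0.5
print(f"controlled slope: {beta[1]:.2f}")      # close to the true -0.5
```

The naive slope overstates the benefit of small classes, because small classes come bundled with high incomes; only once income is held fixed does the coefficient isolate the class-size mechanism itself.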
II. Lawson’s argument for the abandoning of econometrics as we know it
The first part of Lawson’s argument is the empirical claim that econometric forecasting has been largely useless. As Kay (1995:19) says in a study of 34 UK forecasting groups: “Economic forecasters do not speak with discordant voices; [keeping an eye on each other] they all say more or less the same thing at the same time. And what they say is almost always wrong. The differences between forecasts are trivial relative to the differences between all forecasts and what happens.”
The second part of Lawson’s argument is that econometric forecasting will necessarily continue to be useless. This is because the law-like view of the world underlying econometric forecasting does not hold in interesting ways for the social world. Lawson’s broad strategy in Economics and Reality is that of transcendental analysis. This means that he asks, of the different methodologies of orthodox economics, the question: What would the social/economic world have to be like for such methodologies to work? He then shows us that the ontology presupposed by econometrics and economic theory is not rich enough to account for the real social world.
Every methodology presupposes a metaphysics; that is, every way of investigating the social world presupposes a certain view of the nature of what there is to be investigated. The methodology of econometrics presupposes the metaphysics of “regularity stochasticism”:
(Regularity stochasticism) For every measurable economic state of affairs y, there exists a set of conditions x1, x2…xn, such that y and (x1…xn) are regularly conjoined under some well-behaved probabilistic function.
“In other words”, as Lawson says, “stochastic closures are everywhere assumed to hold; for any event y a stable and recoverable relationship between a set of conditions (x1…xn) and the expected value of y (conditional upon x1…xn) is postulated.”
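Lawson’s wording can be compressed into a formula (my notation, not Lawson’s): regularity stochasticism postulates, for every event y, a stable and recoverable function f such that

```latex
\mathbb{E}[\, y \mid x_1, \dots, x_n \,] = f(x_1, \dots, x_n)
```

where f is assumed to hold across time and context – this is the stochastic closure.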
What does it take for stochastic closures to hold for the economic structures we want to investigate? Two conditions: intrinsic and extrinsic closure.
Intrinsic closure: the objects of study cannot in themselves change. That is, the object of study – say, an individual – must have an internal structure such that they behave in the same way in response to a set of economic events – say, being offered the choice between two goods at a certain price. If the internal structure of the individual changes, e.g. depending on mood, time since last meal, or other internal states, then the regularity breaks down.
Extrinsic closure: external forces not accounted for within the system under study must not impinge on the objects of study. That is, there must not exist external (exogenous) influences that can significantly affect our dependent variable y but are not captured by our independent variables (x1…xn). If there are, and these external influences impinge on our system of study, then our regularities may break down, or even be spurious.
Lawson argues that individuals are sufficiently complex that intrinsic closure does not hold, and that the social world is sufficiently rich that extrinsic closure does not hold either. However, because econometrics is wedded to these two assumptions, we see economists on the one hand chasing extrinsic closure, creating models of isolated systems (“closed economies”) or adding ever more variables to their regressions; and on the other hand, chasing intrinsic closure, characterising individuals as atomistic (“in effect as lacking intrinsic structure”).
The search for isolated systems and atomistic individuals sets us off in “the search for systems so large that they exclude nothing, couched in terms of single individuals so small that they include nothing. This is the direction in which the familiar responses to econometric failure ultimately lead”. Lawson calls us, as philosophers and economists, to abandon this search by abandoning the regularity stochasticism view of the social world, and start searching in another direction.