
Assumptions in Empirical Economics


When a theorist in economics presents a model, the first thing he usually discusses, or the first question asked by the audience, concerns the assumptions made when formulating it. The rationale is straightforward: the assumptions one makes are the building blocks of the model. If they are solid and make sense, then we can trust the outcome more than if they are based on something cooked up in the researcher's mind.

A simple example showing the importance of assumptions is the following:

Suppose that I want to show that, in an economy where only beef and cheese are produced, the production of cheese is economically disastrous. Then, by assuming a utility function (a measure of how happy consumption makes me) of U = (Consumption of Beef - Consumption of Cheese)^(1/γ), I could use a few derivations to show that cheese is bad. Why? Because, as the reader may observe in the utility function, cheese consumption lowers my potential happiness; thus, the only way for me to be "happier" is not to consume any cheese at all. By imposing an unrealistic assumption (even if it might hold for a very small percentage of the population), I can prove practically anything.
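To make the "derivations" explicit (writing B for beef consumption and C for cheese consumption, and assuming, as seems intended, that B > C and γ > 0), the marginal utility of cheese is negative by construction:

\[
U(B, C) = (B - C)^{1/\gamma}, \qquad
\frac{\partial U}{\partial C} = -\frac{1}{\gamma}\,(B - C)^{\frac{1}{\gamma} - 1} < 0,
\]

so utility is maximised by setting C = 0. The "result" that cheese is disastrous is baked into the assumption, not derived from any feature of the economy.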

Obviously, economists are smarter than this and try to avoid such blatant issues. More commonly, they mask the lack of a coherent relation between the world and what they assume by imposing more elaborate assumptions, such as the ones Paul Pfleiderer dissects in his deconstruction of some peer-reviewed papers. A notable example is one where "the intermediary can threaten not to contribute his specific collection skills and thereby capture a rent from investors". This might sound somewhat appealing: if I can threaten to stop being the middleman and they cannot do the job on their own, then I can get people to pay me for doing it.

Yet, as Pfleiderer correctly notes, "who at JP Morgan or Barclays will threaten to stop using his skills? The loan officer or the CEO?" With at least hundreds of people willing to take his place, which official would be willing to stop offering his services when he knows he can be replaced by someone else? And even if JP Morgan, as a firm, decided to make such a threat, dozens of other firms would be willing to take its place, meaning that the threat is not only non-credible but hazardous to the firm making it. Yet famous economists had no problem using it.

A more serious problem occurs when empirical research is used to reach conclusions. The main issue is that assumptions are made here as well, yet they are not as obvious as the ones in theoretical work. For example, if in a regression the dependent variable is real GDP and the independent variables are Government Debt and Bank Loans, then I am implicitly assuming that the latter two variables affect real GDP. Yet this assumption is not easily challenged, especially if the regression coefficients turn out to be statistically significant.
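A minimal sketch of what such a regression looks like in practice (the file name and column names are hypothetical, and statsmodels is used purely for illustration). Note that the causal assumption is made the moment real GDP is placed on the left-hand side; nothing in the output ever tests it.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset with columns: real_gdp, gov_debt, bank_loans
df = pd.read_csv("macro_data.csv")

# Putting real GDP on the left-hand side *is* the implicit causal assumption;
# the regression itself never checks which way the influence runs.
X = sm.add_constant(df[["gov_debt", "bank_loans"]])
model = sm.OLS(df["real_gdp"], X).fit()

print(model.summary())  # significant coefficients say nothing about causality
```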

What is important to note here is that statistically significant coefficients do not, by themselves, really mean anything. Have a look at the estimates in Westling (2011), listed in the related reading below, and specifically at equation 3 of that paper.

Although the estimates appear to be significant at, at worst, the 10% significance level, you might be surprised to learn that the ORGAN variable, significant at the 1% level, measures the size of the male organ. Thus, the author is clearly making the following assumption, even if it is never explicitly stated in the paper: penis size has an effect on GDP.

Admittedly, this paper was more or less presented as a joke (I hope!), yet it does not fail to show the shortcomings of regression analysis. In essence, the coefficient values, as well as their significance, tell us nothing about causality. All they actually measure is whether some data move with some other data, for God knows what reasons. Remember that regression is mathematics, and mathematics does not care what you name your data; as long as a series exists, mathematics can give you an analysis, yet one which means nothing more than that the two series have some mathematical connection.
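To see how little the labels matter, here is a toy simulation (entirely invented; the variable names are deliberately arbitrary): regressing one random walk on another, completely unrelated one will very often produce a "significant" coefficient, the classic spurious regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Two completely unrelated random walks; the names are deliberately arbitrary.
gdp = np.cumsum(rng.normal(size=200))
organ = np.cumsum(rng.normal(size=200))

# Regressing one non-stationary series on another routinely yields
# "significant" coefficients: the textbook spurious regression.
result = sm.OLS(gdp, sm.add_constant(organ)).fit()
print(result.params, result.pvalues)
```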

The issue is exacerbated further if one considers how much these implicit assumptions affect economic thought: if, in the regression of real GDP on Bank Loans and Government Debt mentioned above, the latter variable's coefficient came out negative and statistically significant, would that mean that debt reduces real GDP? The answer is simply that we do not know. In the data we have fed the computer this appears to be the case, but whether one causes the other we have no idea.

As Fischer Black (of the Black-Scholes option pricing model) noted more than 30 years ago: “since an econometric model is by definition a causal model, and since regression coefficients are closely related to partial correlation coefficients, the use of regressions to develop or refine an econometric model usually amounts to interpreting correlation as implying causation. (..) the confusion [of correlation and causality] is covered up by the use of language that avoids the word “cause” and its derivatives. People often use “determine”, “influence” and “predict” instead of “cause””.

Black does not stop there: in a simple example, he presents a table of correlations between family members’ heights, with f denoting the father’s height, m the mother’s, s the son’s and d the daughter’s.

The causal relationship here is obvious: fathers and mothers contribute equally to their offspring’s heights. Yet the correlations indicate that a number of other equations could also be used and would be “correct” on the basis of that table (with h being the overall average height for an adult).
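Schematically, with a and b standing in as placeholder coefficients for the values implied by Black’s correlation table (the exact numbers are not essential to the argument), the four equations discussed below take roughly this form:

\[
\begin{aligned}
(1)\quad f &= h + a\,(s - h) \\
(2)\quad m &= h + a\,(d - h) \\
(3)\quad f &= h + b\,(m - h) \\
(4)\quad m &= h + b\,(f - h)
\end{aligned}
\]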

The 1st equation in essence states that the father’s height depends on his son’s and on the average, while the 3rd states that the father’s height depends on the mother’s (i.e. his spouse’s) and on the average. For equations 2 and 4, just replace father with mother and son with daughter. Even if we could assume that the father’s height is determined by the son’s (I would go as far as to say that it can be predicted by it), equations 3 and 4 have no possible causal relationship behind them: your spouse’s height has no causal effect on how tall you are. Yet, given the correlations, the equations hold and would most likely give us robust regression coefficients, even though the causal inference is completely wrong.
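A small simulation makes the point concrete (the data-generating process below is invented for illustration: spouses' heights are given a modest positive correlation as a stand-in for assortative mating, and the son's height is driven equally by both parents plus noise). Regressing the father's height on the son's, or on the mother's, still produces highly "significant" coefficients, even though neither regression describes a causal mechanism.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# Invented data-generating process: spouses' heights are positively correlated
# (assortative mating); the son's height depends equally on both parents.
f = 178 + 7 * rng.normal(size=n)                      # fathers
m = 165 + 0.3 * (f - 178) + 6 * rng.normal(size=n)    # mothers
s = (f + m) / 2 + 5 * rng.normal(size=n)              # sons

# "Equation 1": father regressed on son, the reverse of the causal direction.
print(sm.OLS(f, sm.add_constant(s)).fit().pvalues)

# "Equation 3": father regressed on mother, with no causal mechanism at all.
print(sm.OLS(f, sm.add_constant(m)).fit().pvalues)
```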

An additional problem often arises with regard to the significance of regression coefficients. As most of us have learned through arduous classes at various universities around the world, a coefficient is significant if the null hypothesis (i.e. that the coefficient is equal to 0) is rejected at the 5% significance level. Yet, as Ziliak and McCloskey point out, the fact that a coefficient is statistically significant does not warrant its economic significance: a coefficient whose value is economically trivial can still produce a large t-statistic (and thus reject the null) if its variance is small enough, while an economically large coefficient with a large variance may leave us unable to reject the null. The distinction between statistical significance and economic significance is not often addressed in textbooks. One of the few that do is Wooldridge, who discusses it directly (p. 135).

He adds (p. 136): “Some researchers insist on using smaller significance levels as the sample increases (..) Using a smaller significance level means that economic and statistical significance are more likely to coincide but there are no guarantees (..)”
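A toy illustration of the Ziliak-McCloskey point (all numbers invented): with a large enough sample and little noise, an economically negligible effect becomes highly "significant", while a small and noisy sample can render an economically large effect "insignificant".

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Economically trivial effect (0.001), huge sample, little noise: "significant".
x1 = rng.normal(size=1_000_000)
y1 = 0.001 * x1 + rng.normal(scale=0.1, size=x1.size)
print(sm.OLS(y1, sm.add_constant(x1)).fit().pvalues)   # tiny p-value

# Economically large effect (2.0), small noisy sample: often "insignificant".
x2 = rng.normal(size=15)
y2 = 2.0 * x2 + rng.normal(scale=10, size=x2.size)
print(sm.OLS(y2, sm.add_constant(x2)).fit().pvalues)   # typically above 0.05
```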

The aforementioned issues persist even if we use other, more elaborate ways of obtaining coefficients, such as VAR models: the answer is still the same. This is not causality; it is merely a coefficient value derived from partial correlation relationships (one of the benefits of VAR models is that coefficient significance is seldom emphasised). For example, if one variable changes, we expect the change in the other variable to be of the size the estimated coefficient states. Yet we have no idea whether the change was caused by a shock exogenous to both variables, which we may have left out of the model specification either intentionally or unintentionally (or, even worse, could not measure), or whether it was indeed the change in the independent variable that caused the dependent variable to change (all variables are treated as dependent somewhere in a VAR specification, but the main point is the same).

In addition, even when we acknowledge that we cannot make causal inferences from such a model (this is known as the identification problem), econometricians often like to employ instrumental variables, which are supposed to be connected with just one of the variables and not at all with the others; yet the results still do not establish any form of causality, and the availability of such instruments is highly debatable. As Black comments, “Do such variables exist?”
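A minimal sketch of how such a model is typically estimated, using statsmodels' VAR (the file and column names are hypothetical). The point to notice is where the hidden assumption enters: the orthogonalised impulse responses rely on a Cholesky decomposition, so they depend on the order in which the variables happen to be listed.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical stationary series, e.g. growth rates rather than levels.
df = pd.read_csv("macro_growth.csv")[["gdp_growth", "debt_growth", "loan_growth"]]

model = VAR(df)
results = model.fit(maxlags=4, ic="aic")

# Orthogonalised impulse responses use a Cholesky decomposition, so they
# depend on the column ordering chosen above: an assumption, not a finding.
irf = results.irf(10)
irf.plot(orth=True)
```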

So far, the only way we know of examining “what affects what” is Granger causality, which is not causality per se: it merely tests whether the past values of one variable improve the prediction of another beyond what the latter’s own past values already achieve. As econometricians themselves put it, this is “predictive causality”.
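In practice this is a standard, one-call test; a sketch using statsmodels (with the same hypothetical column names as above), where the null hypothesis is that debt_growth does not Granger-cause gdp_growth:

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# The test asks whether lags of the second column (debt_growth) improve the
# prediction of the first (gdp_growth) beyond gdp_growth's own lags.
df = pd.read_csv("macro_growth.csv")
grangercausalitytests(df[["gdp_growth", "debt_growth"]], maxlag=4)
```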

Today, empirical research is employed very differently. In the past, it was a test of theory, examining whether a theoretician's postulates hold in the real world. Today, econometricians make implicit assumptions about the causality of variables and use data to test those relations. This might not be an issue if their results did not dictate policy, but they do. It has not been a year since Reinhart and Rogoff were proven wrong, yet their "contribution" still holds: people fear that above some level of debt real GDP growth falls, and thus engage in devastating austerity measures to prevent that from happening.

As empirical work gathers more and more power, it becomes more and more independent of theory. That would not necessarily be bad if results that are never replicated by others, and assumptions that are never questioned for their validity, did not have such an effect on policy. The problem is that we do not usually question the empirical assumptions being made, whether they concern variable ordering, the orthogonality of shocks or stationarity, and we thus fall victim to the "biases" of researchers (biases in the sense of believing that the specific assumption made is correct).

Remember that we all have biases, whether it is believing that QE causes hyperinflation, that debt causes growth to stop, or anything else for that matter. What matters is not that we have biases: it is whether they are right or wrong. Unfortunately, the only way to test them is through empirical research.

Related Reading

Jeffrey M. Wooldridge – Introductory Econometrics: A Modern Approach (5th edition)

Stephen T. Ziliak and Deirdre McCloskey (2009) – The Cult of Statistical Significance

Fischer Black (1982) – The Trouble with Econometric Models

Paul Pfleiderer (2013) – Chameleon Models: The Misuse of Theoretical Research in Financial Economics

Tatu Westling (2011) – Male Organ and Economic Growth: Does Size Matter?



Comments


So do tell me Jon, since I do not have a clue about econometrics: aren't you making assumptions when estimating a model? Or do IV and fixed effects (or whatever other technique you might use) measure causality instead of correlation? They are vastly more complicated than in the 1980s, I'll give you that. But deep down, whether you are using regression or maximum likelihood, you have to make assumptions: what is on the right-hand side of the equation affects the left-hand side (although, as already said, this is not quite the case with VAR models, even though the ordering of a variable matters with regards to its impulse response).

Thus do tell me: even though we have greatly increased our level of expertise since the 1980s (by the way, Wooldridge is a 2012 edition and probably the most widely used textbook in econometrics), did we really manage to get past those initial assumptions? No, not really.

I'm sorry, but it is obvious the author has no clue about microeconometrics. Everything here is about time series, and pretty outdated at that. Fischer Black might have been a great mind, but he died almost 20 years ago and missed big advances in credible identification and causality: IV, fixed effects, randomization, control functions, non-parametric techniques, the use of natural experiments, and I'm not even mentioning more structural methods, all of which are vastly better than in the 1980s, when I assume the author took those courses.

I think you are answering your own question, Steve. This is statistics; nothing can be proven. It can only be shown that the data fail to disprove something, or that they disprove it.

Euronomist: Do you think an opposite-sign correlation has greater power to *dis*prove a causal assertion? Not definitively, of course (there may be a positive cause that is overwhelmed by a negative unknown confounding cause, resulting in a false negative correlation). But it seems that correlations which are opposite to those expected under a causal prediction are more powerful in disproving a causal assertion than a "correct" correlation is in proving one.

