Tuesday, September 13, 2011

Making Inferences about the Effect of the Stimulus

Did the stimulus help the US economy? In the wake of the debt ceiling debacle, Obama cannot expect to get additional stimulus passed, given Republican opposition to increased spending now that the debt has grown as large as it has. So I want to stop and think about how we would know whether the stimulus worked. One approach is what's called an interrupted time-series research design: the economy was bad before the stimulus, then there was the stimulus, and the economy is still not "good"; therefore, the stimulus had no effect. The trouble is that this is a pretty crummy research design. The post-stimulus world differs from the pre-stimulus world in many ways besides the stimulus, so it is hard to compare the two. We could only really tell what effect the stimulus had by comparing a pair of USAs that are exactly identical except that one got the stimulus and the other didn't. Unfortunately, we never observe the no-stimulus world, and we would need to see that alternate universe in order to make a causal claim. This is what's known as the Fundamental Problem of Causal Inference (FPCI): to know how a cause (or "treatment") affects an outcome, you would need to compare the real world that received the treatment with an identical world in which the only difference is that the treatment was not received.
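To make the FPCI concrete, here is a minimal simulation sketch. Every number in it is invented for illustration; the point is only that each unit carries two potential outcomes but reveals just one of them.

```python
import random

random.seed(42)

# Ten hypothetical units, each with TWO potential outcomes (all numbers
# invented): y0 = outcome without the treatment, y1 = outcome with it.
units = [{"y0": random.gauss(0, 1), "y1": random.gauss(1, 1)} for _ in range(10)]

# Only in a simulation can we compute the unit-level effect y1 - y0:
true_effects = [u["y1"] - u["y0"] for u in units]
print("true average effect:", sum(true_effects) / len(true_effects))

# In reality, each unit reveals exactly ONE of its potential outcomes.
for u in units:
    u["treated"] = random.random() < 0.5
    u["observed"] = u["y1"] if u["treated"] else u["y0"]

# The other outcome is missing for every unit; no amount of data on the
# observed column recovers it. That is the FPCI.
```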

We don't know how well the stimulus worked because the possible states of the world include the following:
  1. The economic stimulus had no effect; the economy has not changed as a result.
  2. The economic stimulus had a large effect; the economy would have been much worse without it.
  3. The economic stimulus had a large effect; the economy would have been much better without it.
Not to mention the many states of the world between those three. We could be in state 2 or state 3 and we would not be able to differentiate between them. State 2 includes possible worlds in which both output and employment grow substantially, as well as worlds where economic indicators do better than they otherwise would have but still aren't positive.
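A toy calculation makes this identification problem plain. The numbers below are invented: the observed outcome is identical in every scenario, and only the unobserved counterfactual differs, so the implied effect can be anything.

```python
# Toy numbers (invented): observed post-stimulus growth is the same in
# every scenario; only the unobserved no-stimulus counterfactual changes.
observed_growth = -1.0  # percent, hypothetical

scenarios = {
    "state 1: no effect":             -1.0,  # counterfactual growth
    "state 2: large positive effect": -5.0,
    "state 3: large negative effect":  3.0,
}

for name, counterfactual in scenarios.items():
    effect = observed_growth - counterfactual
    print(f"{name}: implied stimulus effect = {effect:+.1f} points")

# All three scenarios produce identical observed data, so the data alone
# cannot distinguish among them.
```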

Just to be clear: this means that we cannot tell whether the stimulus was effective by comparing the state of the economy before the stimulus with the state of the economy afterward. We could instead compare the behavior of firms that received stimulus funds with those that received no funds. It's not quite the stimulus vs. no-stimulus world, but it's closer, especially if we think that direct and indirect spillovers across firms were minimal (a big problem that I do not address here). A recent paper by Jones and Rothschild (h/t to Tyler Cowen for covering the paper) attempts part of this comparison. The study claims that the stimulus should have been larger to be effective, because only 42 percent of the workers hired with stimulus funds were unemployed. To reach that conclusion, the authors survey firms that received stimulus funds to find out how they behaved with those funds.

What's wrong with this picture? It's the FPCI: by studying only firms that received stimulus funds, we learn nothing about how firms that did not receive funds behaved. We cannot assess how an outcome varies using a variable that doesn't vary (in this case, a binary indicator of receiving funds, which equals one for every firm in the sample), and we cannot assess the effect of the funds without observing the counterfactual state. That is, how would these firms have behaved if they had not received stimulus funds? The authors attempt to mitigate this by asking their respondents to think counterfactually, but this fails to solve the problem: we can't expect the individuals involved in decision-making to know how they would have behaved in different circumstances. We only know how they did behave. More specifically, in the case of this paper, we don't know whether non-stimulus firms were shedding workers at drastic rates (so that 42% looks really, really good in comparison) or were hiring from the ranks of the unemployed at much higher rates (so that 42% looks pretty poor).
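Here is a small sketch of why the 42 percent figure is uninterpretable on its own. The two baseline values for non-recipient firms are invented; the paper does not report them, which is exactly the problem.

```python
# Share of stimulus hires who were unemployed, as reported by the paper.
stimulus_share = 0.42

# Two invented baselines for firms that received NO stimulus funds.
for label, baseline in [
    ("non-recipient firms mostly shed workers", 0.10),
    ("non-recipient firms hire heavily from the unemployed", 0.70),
]:
    verdict = "looks good" if stimulus_share > baseline else "looks poor"
    print(f"if {label} (baseline {baseline:.0%}), then 42% {verdict}")

# Same 42%, opposite conclusions: the comparison group determines the verdict.
```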

There are at least two ways to address this problem. One may already be lurking in their data; the other requires significant additional work. One way to (attempt to) solve the FPCI is to make relevant comparisons. We could compare the US to another, similar country that didn't attempt stimulus, or that attempted less stimulus. But more specifically, we could focus on the implicit comparison the authors make: we can compare firms that received lots of stimulus to those that received less, or compare firms that received stimulus to those that received none. As I mentioned, the first comparison might already be in their data. It makes sense that they would have asked how much stimulus money each firm received; they could then calculate a per-employee or per-output measure of stimulus received to make the numbers comparable across firms. Alternatively, they could conduct another survey and attempt to match firms that didn't receive stimulus with ones that did.
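Here is a rough sketch of that dose-response comparison, assuming the survey recorded each firm's funds, employment, and hiring. All of the firm records are invented, and the field names (`funds`, `employees`, `new_hires`) are placeholders, not variables from the actual study.

```python
# Hypothetical firm records (every value invented) illustrating the
# dose-response comparison: normalize stimulus funds per employee, then
# compare hiring rates across low- and high-dose firms.
firms = [
    {"funds": 500_000,   "employees": 50,  "new_hires": 5},
    {"funds": 100_000,   "employees": 40,  "new_hires": 1},
    {"funds": 2_000_000, "employees": 200, "new_hires": 30},
    {"funds": 0,         "employees": 80,  "new_hires": 2},
]

for f in firms:
    f["dose"] = f["funds"] / f["employees"]          # stimulus per employee
    f["hire_rate"] = f["new_hires"] / f["employees"]  # hires per employee

doses = sorted(f["dose"] for f in firms)
median_dose = doses[len(doses) // 2]
high = [f["hire_rate"] for f in firms if f["dose"] >= median_dose]
low = [f["hire_rate"] for f in firms if f["dose"] < median_dose]
print("mean hire rate, high-dose firms:", sum(high) / len(high))
print("mean hire rate, low-dose firms: ", sum(low) / len(low))
```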

Both of these approaches may still run afoul of the FPCI. No matter how many factors these firms share, unless stimulus funds were awarded by lottery, it will be difficult to know for certain that the difference we attribute to the stimulus is not, in fact, the result of some other factor.
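A simple simulation shows why the lottery matters. By construction the true effect below is +1.0; when "healthier" firms are more likely to win funds, the naive difference in means overstates the effect, while random assignment recovers it. All values are invented.

```python
import random

random.seed(0)

# Invented firms: an unobserved "health" factor drives hiring whether or
# not the firm receives funds.
firms = [{"health": random.gauss(0, 1)} for _ in range(10_000)]

def outcome(firm, treated):
    # The true treatment effect is exactly +1.0 by construction.
    return firm["health"] + (1.0 if treated else 0.0) + random.gauss(0, 1)

def diff_in_means(firms):
    t = [f["y"] for f in firms if f["treated"]]
    c = [f["y"] for f in firms if not f["treated"]]
    return sum(t) / len(t) - sum(c) / len(c)

# Selection: healthier firms are more likely to win funds (no lottery).
for f in firms:
    f["treated"] = f["health"] + random.gauss(0, 1) > 0
    f["y"] = outcome(f, f["treated"])
print("estimate under selection:", diff_in_means(firms))  # biased above 1

# Lottery: assignment is independent of health.
for f in firms:
    f["treated"] = random.random() < 0.5
    f["y"] = outcome(f, f["treated"])
print("estimate under lottery:  ", diff_in_means(firms))  # close to 1
```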

Wednesday, September 7, 2011

Two Roads Diverging? Scientific Realism and Rational Choice

Hampsher-Monk and Hindmoor (2010; here, gated) argue that game theoretic models may be instrumentalist, structuralist, or realist. Whether a model is realist hinges on whether "interpretive" evidence, that is, evidence about how the actors in question think about an issue, counts as evidence in favor of the model.

In instrumentalist models, the truth of the assumptions is irrelevant because a model's utility lies in its predictions. In structuralist models, actors are forced into their responses by external structures, so how actors think and process information is irrelevant. In realist models, the assumptions need to be true because the models are meant to be both descriptive and explanatory.

It seems to me, however, that interpretive evidence should also be useful in structural models, because a structural model may also be realist. In arguing that structures determine behavior, such a model makes no claim about actors' reasoning about their behavior. In some sense, it is really a realist argument that reasoning doesn't matter for behavior, which implies we should be able to observe some disjunction between reasoning and behavior across different actors. Interpretive evidence would be useful for establishing exactly that.

It does strike me that they are correct, however, about instrumentalist models and interpretive evidence. It also suggests that a realist can legitimately argue that some factors aren't relevant as part of an explanation, but this strikes me as different from assuming a priori that something isn't relevant. Is this correct?

But this hinges on a bigger question: can a scientific realist make simplifying assumptions (and know that he or she is making them) and still be a realist? Or does being a realist mean you don't have the ability (luxury?) to make assumptions?