Thursday, May 5, 2011

What I learned in econ grad school, Part 2
[Image: Greg Mankiw]
When I wrote my earlier post recounting what I learned in econ grad school, I realized shortly after I finished it that I might have sounded like I was being a little too harsh on my own econ department, which is really quite a good one. That's why I added the following:
In my second year I took a macro field sequence, which taught me all about demand-based models, frictions, heterogeneity, and other interesting stuff. I don't want to make it sound like graduate school taught me nothing about how to understand the recession...it taught me plenty. It just all came in the field course...
I realize now that this update deserved its own post. After all, the course I described in my last post was one single semester, out of four that I've spent learning macro. If we're trying to assess how well grad school trains macroeconomists, we should talk about the field classes that they're required to take.

My second-year field class was divided into four half-semester portions. Each had its own theme. Broadly, these were: 1) Heterogeneous-agent models, 2) Sticky-price models, 3) Neo-monetarism, and 4) Labor search. Some highlights:

* We spent quite a lot of time on heterogeneous-agent models, e.g. the Krusell-Smith model. These models turn out to be very tricky to solve numerically. So far, they have also been mostly wrong in their predictions. But they are very interesting nonetheless.

* We learned about sticky-price models and their cousins, Greg Mankiw's sticky-information models (Mankiw is pictured above). I really liked Mankiw's model; although it (like most macro models) is a "storytelling" model with some implausible assumptions and no real predictive power, the story it tells points in some very interesting research directions, since it involves much more interesting microfoundations than the standard "tastes and technology."

* We briefly covered structural vector autoregressions, or SVARs (I also learned these in a stats class). I liked these because the focus was on making forecasts...finally, someone calculating something! Also, the people who used them were honest about their limitations: the standard error bars were so big that the models had very little predictive power more than one quarter into the future, but they admitted and prominently displayed this fact, instead of using something like "moment matching" to exaggerate their empirical success. (A toy sketch of how fast those error bands fan out appears just after this list.)

* We studied this very interesting paper by Basu, Fernald and Kimball. Basically, the paper constructs a very general form of the RBC model, and finds that it can't explain economic fluctuations. The reason is that improvements in technology, which are what cause booms in Prescott's original RBC setup, actually cause recessions once you allow for things like imperfect competition. This reinforces similar results by Jordi Gali, who used SVARs but arrived at the exact same conclusion.

* We learned some neo-monetarist models (by the way, what I was taught under the name "neo-monetarism" seems very different from what Stephen Williamson means by it). The neo-monetarist policy response to recessions, I learned, is quantitative easing. Or, as my advisor Miles Kimball put it: "Print money and buy stuff!" (He actually repeated this line four times in a row. When I asked him later what he thought of Bernanke's response to the recession, he grinned hugely and said "He printed money and bought stuff!") I also learned that some neo-monetarist models have a role for fiscal policy, but only for a short time after a particularly severe drop in investment.

* We studied labor search models, e.g. the Mortensen-Pissarides model (which recently won its creators the pseudo-Nobel). Although these models, like the heterogeneity models, make some incorrect predictions, they are commendable for admitting this fact. I liked these models because they relied on interesting and observable microfoundations, e.g. the job matching function (a toy version of which is sketched just below this list).
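
(A quick aside, to make the SVAR point concrete. Below is a toy sketch of the forecasting step: it fits a small reduced-form VAR, which is the forecasting engine underneath an SVAR, to simulated stand-in data and prints how quickly the 95% bands fan out. This is my own illustration, not anything from the course, and every series, parameter, and number in it is made up.)

```python
# Toy sketch: reduced-form VAR forecasts with fast-widening error bands.
# All data here are simulated stand-ins for, say, GDP growth and inflation.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
y = np.zeros((T, 2))
for t in range(1, T):                     # two weakly persistent series
    y[t] = 0.5 * y[t - 1] + rng.normal(size=2)

data = pd.DataFrame(y, columns=["gdp_growth", "inflation"])
res = VAR(data).fit(maxlags=4, ic="aic")  # lag length chosen by AIC

# Point forecasts and 95% intervals, 8 quarters ahead.
point, lower, upper = res.forecast_interval(data.values[-res.k_ar:], steps=8, alpha=0.05)
for h in range(8):
    width = upper[h, 0] - lower[h, 0]
    print(f"h={h + 1}: gdp_growth {point[h, 0]:+.2f}, 95% band width {width:.2f}")
# The bands widen until the forecast is just the unconditional mean plus noise:
# honest, but not much use more than a quarter or two out.
```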
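(And here is what I mean by an observable microfoundation: a toy Cobb-Douglas matching function of the kind the Mortensen-Pissarides model is built on. The efficiency and elasticity values below are invented for illustration, not estimates.)

```python
# Toy Cobb-Douglas matching function: matches = A * u^alpha * v^(1 - alpha).
# A and alpha are made-up illustration values, not estimates.

def matches(u: float, v: float, A: float = 0.6, alpha: float = 0.5) -> float:
    """New hires per period, given unemployment rate u and vacancy rate v."""
    return A * u**alpha * v**(1 - alpha)

u, v = 0.08, 0.04                       # shares of the labor force
theta = v / u                           # market tightness, the model's key ratio
job_finding_rate = matches(u, v) / u    # chance an unemployed worker is matched
vacancy_fill_rate = matches(u, v) / v   # chance a vacancy is filled

print(f"tightness {theta:.2f}: job-finding rate {job_finding_rate:.2f}, "
      f"vacancy fill rate {vacancy_fill_rate:.2f}")
# With constant returns, both rates depend only on tightness theta = v/u,
# which is why the matching function can be estimated directly from
# unemployment and vacancy data.
```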

The field course addressed some, but not all, of the complaints I had had about my first-year course. There was more focus on calculating observable quantities, and on making predictions about phenomena other than the ones that inspired a model's creation. That was very good.

But it was telling that even when the models made wrong predictions, this was not presented as a reason to reject them (as it would be in, say, biology). This was how I realized that macroeconomics is a science in its extreme infancy. Basically, we don't have any macro models that really work, in the sense that models "work" in biology or meteorology. The measure of a good theory, therefore, is often whether it seems to point us in the direction of models that might work someday.

Anyway, Brad DeLong would still probably have some issues with my field course. We did learn a lot of demand-side models, and a bit of history as well (I learned about Wicksell, and about the Great Depression, both for the first time). But never once was finance mentioned. I learned about the existence of financial accelerator models in an email from a friend at Berkeley...

There were two other big conclusions I drew from that course.

The first was that the DSGE framework is a straitjacket that is strangling the field. It's very costly, in time and computing resources, to solve a model with more than one or two "frictions" (i.e. realistic elements), more than a few structural parameters, hysteresis, or heterogeneity. This means that what ends up getting published are the very simplest models - the basic RBC model, for example. (Incidentally, that also biases the field toward models in which markets are close to efficient, and in which government policy thus plays only a small role.)

Worse, all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability (and, in almost all cases, forecasts no better than an SVAR's). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.

Finally, my field course taught me what a bad deal the whole neoclassical paradigm was. When people like Jordi Gali found that RBC models didn't square with the evidence, it did not give any discernible pause to the multitudes of researchers who assume that technology shocks cause recessions. The aforementioned paper by Basu, Fernald and Kimball uses RBC's own framework to show its internal contradictions - it jumps through all the hoops set up by Lucas and Prescott - but I don't exactly expect it to derail the neoclassical program any more than did Gali.

It was only after taking the macro field course that I began to suspect that there might be a political motive behind the neoclassical research program (I catch on quick, eh?). "Why does anyone still use RBC?" I asked one of the profs (not an RBC supporter himself). "Well," he said, stroking his chin, "it's very politically appealing to a lot of people. There's no role for government." 

That made me mad! "Politically appealing"?! What about Science? What about the creation of technologies that give humankind mastery over our universe? Maybe macro models aren't very useful right now, but might they not be in the future? The fact is, there are plenty of smart, serious macroeconomists out there trying to find something that works. But they are swimming against not one, but three onrushing tides - the limited nature of the data, the difficulty of replicating a macroeconomy, and the political pressure for economists to come up with models that tell the government to sit on its hands.

Macro is a noble undertaking, but it's 0.01 steps forward, N(0,1) steps back...
