Friday, 18 October 2013

Eugene Fama explained. Kind of. Part 2: Asset pricing


Following up on my post on Fama’s corporate governance contributions, let’s turn to a mildly technical explanation of Fama’s asset pricing work, for those who haven’t read finance papers since 1990 or so. A lot of this was joint work with Ken French, who is great—don’t believe everything you read on the Internet.* If I have time and interest, I might do a third post on miscellaneous Famacana.

You might think that a field called “asset pricing” would explain the prices of assets. You would be wrong. Instead, it is mostly about asset returns and expected returns.

They’re not that different because a return is simply one price divided by the previous one. To actually get to prices, you need to estimate cash flows (or earnings or dividends), and academics, with some exceptions, hate doing that. So we’re left with these weird ratios of prices known as “returns”.

In my post on Lars Peter Hansen, I wrote down an asset pricing model based on a representative consumer utility maximization problem. Using slightly different assumptions, we can justify the so-called Capital Asset Pricing Model or CAPM, which is an older model that holds that returns can be described by \begin{equation} \label{capm} E_t[r_{i,t+1} - r_{f,t+1}] = \beta_i \lambda, \end{equation} where \( r_{f,t+1} \) is the risk-free rate and \( \lambda \) is a number known as the risk premium and \( \beta_i \) is the regression coefficient from a regression of asset returns \( r_{i, t+1} \) on the market return, \( r_{m, t+1} \). (That’s assuming there is a risk free asset. If not, the result changes slightly.) This implies that assets that are more correlated with the market have a higher expected (excess) return, while assets that are uncorrelated or even negatively correlated with the market have a lower expected return, because they provide more insurance against fluctuations in the market.

These days almost every stock picking site lists “beta”, usually the slope of a regression of returns on the S&P 500 index or perhaps a broader market index. An implication of CAPM in the form of equation \( \eqref{capm} \) is that the intercept of that regression, known as Jensen’s alpha or just alpha or abnormal return, is zero.
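For concreteness, here is a minimal sketch (in Python, with made-up return series standing in for real data) of the regression those sites run: the slope is the stock's beta and the intercept is Jensen's alpha. Everything in it is illustrative rather than a description of any particular data source.

```python
import numpy as np

# Hypothetical monthly excess returns; purely illustrative numbers.
rng = np.random.default_rng(0)
r_market = rng.normal(0.005, 0.04, size=240)                      # market excess return
r_asset = 0.001 + 1.2 * r_market + rng.normal(0, 0.02, size=240)  # asset excess return

# Regress asset excess returns on market excess returns:
# r_asset = alpha + beta * r_market + error
X = np.column_stack([np.ones_like(r_market), r_market])
coef, *_ = np.linalg.lstsq(X, r_asset, rcond=None)
alpha, beta = coef

print(f"beta  = {beta:.3f}")   # exposure to market risk
print(f"alpha = {alpha:.4f}")  # Jensen's alpha, zero under CAPM
```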

Ever since this stuff was first proposed, it has been well known that alpha is not zero when you actually run those regressions. (Of course there are a ton of econometric disputes in this area. Nobody does empirical research because it is easy and fun.) It’s not zero for individual stocks, but individual stocks are weird and maybe that’s because of noise or other shenanigans.

What was more worrying is that the returns on somewhat mechanical trading strategies did not have zero alpha: A portfolio of stocks with a high ratio of book value to market value (“value stocks”) has a higher alpha than one with a low such ratio (“growth stocks”). A portfolio of stocks of small companies has a higher alpha than a portfolio of large-company stocks. There are other examples like this, and they suggest that CAPM does not provide a good explanation of the cross section of expected stock returns, or why different stocks have different expected returns.

That’s worrying for the efficient markets hypothesis if CAPM is The True Model. The results of Fama and French (1992) and (1993) suggest that it may not be. Based on the empirical evidence, they propose that expected stock returns are related not just to the stock’s exposure to market risk, but also to two additional factors: The return on a portfolio that is long value stocks and short growth stocks (“HML” or high minus low), and the return on a portfolio that is long small stocks and short big stocks (“SMB” or small minus big). If you do a multivariate regression \begin{equation} r_{i,t+1} = \alpha + \beta_m r_{m,t+1} + \beta_{\mathit{HML}} r_{\mathit{HML},t+1} + \beta_{\mathit{SMB}} r_{\mathit{SMB}, t+1} + \varepsilon_{t+1}, \end{equation} you have an alpha against what is now called the Fama–French 3-factor model. When you let \( r_{i,t+1} \) be returns on portfolios of stocks sorted by either value, size, or both, the resulting 3-factor alphas are a lot closer to zero, as the t-statistics reported in their tables show.
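To make the mechanics concrete, here is a minimal sketch of that time-series regression with placeholder data; in practice the market, HML, and SMB series come from Ken French's data library and the left-hand side is one of the size- and value-sorted test portfolios.

```python
import numpy as np

T = 600  # hypothetical number of monthly observations
rng = np.random.default_rng(1)

# Placeholder factor and portfolio returns, for illustration only.
r_mkt = rng.normal(0.005, 0.045, T)   # market excess return
r_hml = rng.normal(0.003, 0.030, T)   # value minus growth
r_smb = rng.normal(0.002, 0.030, T)   # small minus big
r_port = 0.9 * r_mkt + 0.4 * r_hml + 0.2 * r_smb + rng.normal(0, 0.02, T)

# Time-series regression of the portfolio return on the three factors.
X = np.column_stack([np.ones(T), r_mkt, r_hml, r_smb])
coef, *_ = np.linalg.lstsq(X, r_port, rcond=None)
resid = r_port - X @ coef

# Conventional (homoskedastic) standard errors and the t-statistic on alpha.
sigma2 = resid @ resid / (T - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("alpha =", coef[0], " t(alpha) =", coef[0] / se[0])
```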


It’s not perfect, but it’s better than people were able to do before. If Fama–French is the correct model, EMH is in slightly better shape.

Since then a ton of researchers have tried to add factors to the model to better explain the cross section of expected returns, the most widely used being the Carhart momentum factor, to form a 4-factor model. The 3-factor and 4-factor models are the most widely used models in finance for almost any setting where expected and abnormal returns are studied.

There have been many attempts to explain why the value and size factors exist and what explains the risk premia associated with them, i.e. the size of the premium those stocks command. Most of them revolve around hypotheses that the market index does not fully capture the systematic, undiversifiable risk that investors are exposed to. For example, one explanation is that both value and size factors are related to distress risk, the risk of being exposed to extra costs associated with financial distress that are not fully captured in the market return measure.

Most papers that propose new factors—someone once claimed that there are 50 factors in the literature explaining returns, but I find that figure quite low—propose some kind of explanation. Some of the arguments build on consumption based asset pricing. One example is a factor relating to takeover risk: the hypothesis is that for companies that are likely to be bought, much of the expected return comes from a potential takeover premium. But takeovers are cyclical and come in waves in a way that you can’t diversify away, so investors have to be rewarded for that risk in addition to market risk (and whatever value and small stock premia represent).

* The photos on that piece are now lost to posterity. The first one is reproduced above, courtesy of mahalanobis. The last one was French with some blonde models. Also, I think an MBA from Rochester costs a lot more than $19.95.

Wednesday, 16 October 2013

Eugene Fama explained. Kind of. Part 1: Corporate governance


Nobel Fever has broken out here at Not Quite Noahpinion, so let’s continue with some overlooked corners of the laureates’ œuvres. According to Google Scholar, Fama and Jensen (1983) is Fama’s third most cited paper, yet it isn’t mentioned at all in the scientific summary put out by the Economic Sciences Prize Committee of the Royal Swedish Academy of Sciences, which, if I remember correctly, sometimes does mention work that is not directly related to what the prize is being awarded for. In this case it is particularly tragic because Fama’s corporate governance papers are very good, and considered by some* to be much better than his asset pricing work. (A phrase which rhymes with “feta pining” is sometimes whispered in connection with the latter.)

I am talking about a pair of papers published back to back in the same issue of the Journal of Law and Economics in 1983; this is the other one. They are joint work with Michael Jensen, who is well known for his later corporate governance work and who is also the Jensen of “Jensen’s alpha” (these guys were generalists, of sorts). Go read them right now. They are very good, there is no math, I have linked to ungated versions, and their explanation is probably clearer than mine.

Both papers are about the separation of ownership and control, a feature seen in most organizations, both for-profit and non-profit. The academic study of organizations dates back to at least 1889, or even 1776, but who still reads Adam Smith?

If the owner of an enterprise exercises full control over its operations, as is the case in a sole proprietorship with few or no employees, there is no agency problem: the manager–owner can be trusted to make whatever decisions will maximize his profit (or whatever he is maximizing, when it is not a for-profit firm), and will even make appropriate tradeoffs between the short term and the long term.

In the real world, that’s not practical. Despite what Adam Smith might have preferred, modern firms (both for-profit and non-profit)—or even the commercial partnerships in the Middle Ages that Weber studied—are too large to be owned by one person or even a family and too complicated to be managed by a large group of stakeholders. If contracts could be perfectly specified and made contingent on every eventuality, there wouldn’t be a problem: the management contract would specify what the manager has to do no matter what happens, and the contract would be written to maximize profits. Of course this is even more unrealistic than thousands or millions of shareholders collectively making all the decisions of a firm.

Fama and Jensen’s analysis starts with the role of residual claims. Some claimants on a firm are promised fixed amounts that they will receive when the cash flows are paid out. For example, a bank or a group of bondholders that makes a loan to a business will only receive the amount that was actually borrowed, plus interest. The firm might go bankrupt, but in all probability the bank will simply be paid what it was promised. This means that by and large, they won’t care too much about the firm’s management decisions. Only the residual claimants, who in a typical corporation are the shareholders, need to fret about the firm’s management from day to day. In turn, they will bear most of the risk associated with the firm’s operations, and also receive the rewards if the firm is better managed than expected.

At a granular level, the agency problem is about making decisions. Fama and Jensen split up the decision process into decision management, which involves initiating decisions (coming up with the idea) and implementing decisions (actually doing the work), and decision control, which involves ratifying or approving decisions and monitoring that they are carried out faithfully. We now end up with three roles: residual claimant, decision management and decision control.

Some undertakings are “noncomplex”: the necessary information can be concentrated in a few people. Examples include small firms, as well as large firms with relatively simple decisions such as mutual funds and other mutual financial firms. In noncomplex organizations it can be optimal to combine decision management and decision control to economize on decision costs, but then you would have the foxes watching the henhouse. Who looks after the interests of residual claimants? The answer is that when decision management and decision control are combined, you would often restrict residual claims to a small group of people who either are managers or trust them, for example family members and close business associates. Partnerships and small corporations with stock transfer restrictions are examples of what Fama and Jensen call “closed corporations”. The tradeoff is that you reduce the possibility of risk sharing and some efficiency is lost that way.

In more complex organizations, decision management would be separated from decision control. The information necessary to make decisions may be diffused among a lot of people, who would each be responsible for initiating and implementing decisions in their little area, but a few people—the managers, who would also be the residual claimants or close family members or business associates—could handle decision control, ratifying and monitoring the decision managers.

In large corporations with a lot of assets, you need more residual claimants to share risk, which in turn makes it impractical for all the residual claimants to participate in monitoring, and the agency problems associated with combining decision management and decision control are worsened. For such organizations, Fama and Jensen hypothesize, decision management and decision control will tend to be separated. In the largest corporations, this separation is complete, and residual claimants have almost zero participation in decision control.

Fama and Jensen also survey a variety of organizational forms that have different tradeoffs between the three roles and ways in which separation is achieved, for example with expert boards and through the market for takeovers. In financial mutuals, a large body of customers are also owners, and decision control is delegated to a professional board of directors, but an additional control function exists in the ability of each residual claimant to quickly and easily terminate his or her claims by redeeming the claim.

In nonprofit organizations, there are no residual claimants as such, which Fama and Jensen justify as a means of reducing the conflict between donors and residual claimants. Who would want to give a donation that will ultimately end up in the pockets of residual claimants? The solution: eliminate the latter. In US nonprofits, decision control is typically exercised by a board of directors that consists of large donors.

In the Roman Catholic Church, control is not exercised by donors (parishioners), but by the church hierarchy itself, and ultimately the Pope, with almost no separation between decision management and decision control. The solution is vows of chastity and obedience that bind the hierarchy to the organization, in exchange for lifetime employment. Fama and Jensen go on to claim that Protestantism is a response to the breakdown of this contract, and that “the evolution of Protestantism is therefore an example of competition among alternative contract structures to resolve an activity’s major agency problem—in this case monitoring important agents to limit expropriation of donations.”

* weasel words, I know. Sorry.

Tuesday, 15 October 2013

Rationality: The Issue Is Not the Issue

The latest round of sort-of-Nobels, awarded to Fama, Hansen and Shiller for work on (or in Hansen’s case related to) the dynamics of asset markets, has brought up once again the question of whether economic models should assume that individuals are rational.  Mark Thoma makes the case for rationality as a reference assumption which we can vary to see how the real world might differ from the “ideal” one.  Undoubtedly, the politics of the Fama-Shiller divide map into intuitions about rationality and the role of the state.  In the Fama/freshwater world people are rational, and the government can succeed only by duping them, which it can do only temporarily at long-term cost.  In the Shiller/saltwater world people are less than fully rational, and careful government action can improve market outcomes over some potential time horizons. Thus the intensity with which otherwise rational people debate rationality.

Here I would like to propose that the rationality dispute, while important, is not that important (an obscure reference to Lewis Mumford on Wilhelm Reich).  In fact, for most macroeconomic questions whether or not you assume individuals are rational has almost no bearing at all.  Here is an off-the-top-of-my-head list of modeling assumptions modern macroeconomists make that are not about rationality but actually drive their results.

1. Market-clearing equilibria are stable everywhere.  This is the default assumption.  The main departure that gets invoked to yield “Keynesian” outcomes is delayed adjustment to clearing, as in a two-period model where adjustment occurs with a lag.  Another is the use of steady-state outcomes in matching models of the labor market, in which wages adjust to achieve a constant ratio of unemployed workers to job vacancies.  But stability concerns run deeper.  (a) Many markets appear to adjust to a permanent state of excess supply; in fact this is one of the hallmarks of a capitalist economy, just as permanent shortages result from central planning.  We don’t have a good understanding at present of how this works, and this includes not having a good understanding of the adjustment process.  (b) Expectations feed back into the adjustment process of markets through changes in the valuations people give to surpluses and shortages, as Keynes noted in The General Theory.  The difficulty in drawing up realistic models of expectation formation with heterogeneous agents is not just a matter of identifying the location of an equilibrium but also its stability properties.  (c) The Sonnenschein-Debreu-Mantel result is about the stability of multi-market equilibria.  It says: in general they aren’t stable.  Rather, out-of-equilibrium trading generates path dependence.

2. Firms are long run profit-maximizers.  It is convenient to be able to treat households and firms symmetrically, but firms are not individuals.  (And households are not individuals either, but that may or may not wash out at the macro level.)  The structure of incentives that translates individual choice into the collective choice of organizations can hardly be assumed to achieve perfect correspondence between one and the other.  As we’ve seen, this issue is especially acute when financial institutions are involved.

3. Default does not occur in equilibrium.  This is a corner solution and therefore intrinsically implausible.  Rational decision-makers will usually accept some positive default risk, which means that defaults can and will happen; they are not disequilibrium phenomena.  Gradually we are beginning to see models that consider the systemic implications of ubiquitous default risk.  In the future, I hope that no macro model without default will be taken seriously.

4. All uncertainty is statistical.  No it’s not.  And any model in which, for instance, individuals fine-tune today’s consumption as part of a maximization that includes their consumption decades later, based on a calculable lifetime expected budget constraint, is grossly implausible.  Similarly with long-lived investments.  This is the point of Keynes’ animal spirits, and it strikes at the heart of conventional intertemporal modeling.

5. There is no interdependence among individuals or firms other than what is channeled through the market.  Everything else is a matter of simply adding up: me plus you plus her plus Microsoft plus the local taco truck.  We live in isolated worlds, only connecting through the effects of our choices on the prices others face in the market.  This assumption denies all the other ways we affect one another; it is very eighteenth century.  In all other social sciences it has disappeared, but in economics it rules.  Yet non-market interconnections matter mightily at all levels.  There are discourses in asset markets, for instance: different narratives that compete and draw or lose strength from the choices made by market participants and the reasons they give for them.  Consumption choices are profoundly affected by the consumption of others—this is what consumption norms are about.  The list of interactions, of what makes us members of a society and not just isolated individuals, is as long as you care to make it.  Technically, the result is indeterminacy—multiple equilibria and path dependence—as well as the inability to interpret collective outcomes normatively (“social optima”).  Also, such models quickly become intractable.

6. Rational decision-making takes the form of maximizing utility.  Utility maximization is a representation of rational decision-making in a world in which only outcomes count and all outcomes are commensurable.  This is another eighteenth century touch, this time at the level of individual psychology.  Today almost no one believes this except economists.  It is not a matter of whether people are rational, by the way, but what the meaning and role of rationality is.  It is not irrational to care about processes as well as outcomes, or to recognize that some outcomes do not trade off against others to produce only a net result.  (This is what it means to be torn.)  The utilitarian framework is a big problem in a lot of applied micro areas; it is less clear what damage it does in macro.  I suspect it plays a role in wage-setting and perhaps price setting in contexts where long-term supplier-customer relationships are established.  We have a few models from the likes of George Akerlof that begin to get at these mechanisms, but their utilitarian scaffolding remains a constraint.

These are enormous problem areas for economic theorizing, and none of them are about whether or not people are “irrational”.  Rather, taking in the picture as a whole, debates over rationality seem narrow and misplaced.  Alas, the current Riksbank award will tend to focus attention on the status of the rationality assumption, as if it were the fault line on which economic theory sits.

I don’t want to be a complete curmudgeon, however.  I think microeconomics is steadily emerging from the muck, thanks to an ever-increasing role for open-minded empirical methods.  Not only is the percentage of empirical papers increasing at the expense of theory-only exercises, but the theoretical priors within empirical work are becoming less constraining as techniques improve for deriving better results from fewer assumptions.  Macro is another story, though.  Here the problem is simply lack of data: there are too many relevant variables and too few observations; traction requires lots of theory-derived structure.  If the theory is poor, so is the “evidence”.  How do we escape?

If Fama were Newton, would Shiller be Einstein?



OK OK, another quick break from my blogging break. An Econ Nobel for behavioral finance is just too juicy to resist commenting on.

Before the prize was announced, I said that it would be funny if Fama and Shiller shared the prize. But I also think it's perfectly appropriate and right that they did. Many people have called this prize self-contradictory, but I don't think that's the case at all. If Newton and Einstein had lived at the same time, and received prizes in the same year, would people say that was a contradiction? I hope not! And although it's probably true that no economist is on the same intellectual plane as those legendary physicists, Fama's Efficient Markets Hypothesis (which was actually conceived earlier, in different forms, by Bachelier and Samuelson and probably others) seems to me to be the closest thing finance has to Newton's laws, and behavioral finance - of which Shiller is one of the main inventors - seems like an update to the basic EMH theory, sort of like relativity and quantum mechanics were updates to Newton.

People criticize the Efficient Markets Hypothesis a lot, because it fails to account for a lot of the things we see in real financial markets - momentum, mean reversion, etc. But that's like criticizing Newton's laws of motion because they fail to explain atoms. Atoms are obviously important. But the fact is, Newton's laws explain a heck of a lot of stuff really well, from cannonballs to moon rockets. And the EMH explains a heck of a lot of real-world stuff really well, including:

* Why most mutual funds can't consistently beat the market without taking a lot of risk.

* Why almost all of you should invest in an index fund instead of trying to pick stocks.

* Why "technical analysis", or chartism, doesn't work (except perhaps in the highest-frequency domains).

* Why many measures of risk are correlated with high average returns.

These are important empirical, real-world victories for the EMH. Just like Newton's Laws will help you land on the moon, the EMH will help you make more money with less effort. I don't know what more you could ask of a scientific theory. If macroeconomics had a baseline theory as good as the EMH, it would be in much better shape than it is! (And if you think I'm being contentious by saying that, see what Gene Fama has to say about our understanding of recessions...)

Update: EMH-haters should also realize that the EMH implies that there are large market failures in the finance industry, because of the continued popularity of active management. So informational efficiency implies economic inefficiency in finance.

Anyway, sticking with the physics analogy, Shiller is really more like Michelson & Morley than Einstein. He found an anomaly - mean reversion in stock prices - that provoked a paradigm shift. In other words, Shiller really discovered behavioral finance rather than inventing it (actually, Fama, whose work is mainly empirical, isn't really like Newton either). The theorizers who came up with the reasons why markets couldn't be completely efficient were people like Joseph Stiglitz, Sanford Grossman, Paul Milgrom, and Nancy Stokey. I hope those people win Econ Nobels as well (Stiglitz, of course, already has one). But that paradigm shift built on the EMH, it didn't knock it down.

I think Mark Thoma puts it very well. The EMH is a simple model that works well in some cases, not so well in others. It's a great guide for most investors. It's not a great guide for policy, because bubbles are real, and they hurt the economy even if you can't reliably predict when they'll pop. And it's not a great guide for executive compensation, because stock prices fluctuate more than the value of firms.

But that's how science should work. You find a useful model, and then you take it as far as you can. When you hit the limits of its usefulness, you look for new stuff to add. And, by and large, that is what finance theory has done over the last half-century.

Anyway, for more info and opinions on this year's Econ Nobelists, check out Mark Thoma's link roundup. I especially like this one by Daniel of Crooked Timber. Also, see the various posts by my co-bloggers. Now back to work for yours truly...

Robert Shiller and Radical Financial Innovation


Robert Shiller, who shares this year's Nobel Prize with Eugene Fama and Lars Peter Hansen, is perhaps most famous for his ability to "predict the future." But he also has an impressive grasp of the past. As just one example, in my recent blog post on the history of inflation-protected securities, Shiller's paper on "The Invention of Inflation-Indexed Bonds in Early America" was the most useful reference. Shiller's ability to develop intuition from financial history has, I believe, contributed to his success in behavioral finance, or "finance from a broader social science perspective including psychology and sociology."

Rather than attempting a comprehensive overview of Shiller's work, in this post I would like to focus on "Radical Financial Innovation," which appeared as a chapter in Entrepreneurship, Innovation and the Growth Mechanism of the Free Market Economies, in Honor of William Baumol (2004).

The chapter begins with some brief but powerful observations:
"According to the intertemporal capital asset model... real consumption fluctuations are perfectly correlated across all individuals in the world. This result follows since with complete risk management any fluctuations in individual endowments are completely pooled, and only world risk remains. But, in fact, real consumption changes are not very correlated across individuals. As Backus, Kehoe, and Kydland (1992) have documented, the correlation of consumption changes across countries is far from perfect…Individuals do not succeed in insuring their individual consumption risks (Cochrane 1991). Moreover, individual consumption over the lifecycle tends to track individual income over the lifecycle (Carroll and Summers 1991)... The institutions we have tend to be directed towards managing some relatively small risks."
Shiller notes that the ability to risk-share does not simply arise from thin air. Rather, the complete markets ideal of risk sharing developed by Kenneth Arrow "cannot be approached to any significant extent without an apparatus, a financial and information and marketing structure. The design of any such apparatus is far from obvious." Shiller observes that we have well-developed institutions for managing the types of risks that were historically important (like fire insurance) but not for the significant risks of today. "This gap," he writes, "reflects the slowness of invention to adapt to the changing structure of economic risks."

The designers of risk management devices face both economic and human behavioral challenges. The former include moral hazard, asymmetric information, and the continually evolving nature of risks. The latter include a variety of "human weaknesses as regards risks." These human weaknesses or psychological barriers in the way we think about and deal with risks are the subject of the behavioral finance/economics literature. Shiller and Richard Thaler direct the National Bureau of Economic Research working group on behavioral economics.

To understand some of the obstacles to risk management innovation today, Shiller looks back in history to the development of life insurance. Life insurance, he argues, was very important in past centuries when the death of parents of young children was fairly common. But today, we lack other forms of "livelihood insurance" that may be much more important in the current risk environment.
"An important milestone in the development of life insurance occurred in the 1880s when Henry Hyde of the Equitable Life Assurance Society conceived the idea of creating long-term life insurance policies with substantial cash values, and of marketing them as investments rather than as pure insurance. The concept was one of bundling, of bundling the life insurance policy together with an investment, so that no loss was immediately apparent if there was no death. This innovation was a powerful impetus to the public’s acceptance of life insurance. It changed the framing from one of losses to one of gains…It might also be noted that an educational campaign made by the life insurance industry has also enhanced public understanding of the concept of life insurance. Indeed, people can sometimes be educated out of some of the judgmental errors that Kahneman and Tversky have documented…In my book (2003) I discussed some important new forms that livelihood insurance can take in the twenty-first century, to manage risks that will be more important than death or disability in coming years. But, making such risk management happen will require the same kind of pervasive innovation that we saw with life insurance."
Shiller has also done more technical theoretical work on the most important risks to hedge:
"According to a theoretical model developed by Stefano Athanasoulis and myself, the most important risks to be hedged first can be defined in terms of the eigenvectors of the variance matrix of deviations of individual incomes from world income, that is, of the matrix whose ijth element is the covariance of individual i’s income change deviation from per capita world income change with individual j’s income change deviation from per capita world income change. Moreover, the eigenvalue corresponding to each eigenvector provides a measure of the welfare gain that can be obtained by creating the corresponding risk management vehicle. So a market designer of a limited number N of new risk management instruments would pick the eigenvectors corresponding to the highest N eigenvalues."
Based on his research, Shiller has been personally involved in the innovation of new risk management vehicles. In 1999, he and Allan Weiss obtained a patent for "macro securities," although their attempt in 1990 to develop a real estate futures market never took off.
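As a rough sketch of the construction described in the quoted passage above, with entirely made-up income data: form each individual's deviation from the per-capita world income change, take the covariance matrix of those deviations, and rank its eigenvectors by eigenvalue. The sizes and series below are illustrative only, not anything from the Athanasoulis and Shiller paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 200, 6   # hypothetical time periods and individuals (or countries)

# Made-up income changes; rows are periods, columns are individuals.
income_changes = rng.normal(0.02, 0.05, size=(T, n))

# Deviation of each individual's income change from the per-capita world change.
world_change = income_changes.mean(axis=1, keepdims=True)
deviations = income_changes - world_change

# Covariance matrix of the deviations and its eigen-decomposition.
cov = np.cov(deviations, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh: symmetric matrix

# Rank from largest to smallest eigenvalue; the top N columns define the
# N hypothetical risk-sharing instruments with the largest welfare gain.
order = np.argsort(eigenvalues)[::-1]
N = 2
top_vectors = eigenvectors[:, order[:N]]
print("largest eigenvalues:", eigenvalues[order[:N]])
```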

Monday, 14 October 2013

Lars Peter Hansen explained. Kind of.



The entire econo-blogosphere has its usual pieces up explaining the work of two of this year’s Nobel(ish) laureates in economics, Gene Fama and Bob Shiller. Most of them just handwave when it comes to Lars Peter Hansen’s contributions.

Disclaimer: Skip this post if you already know all about GMM and can spout out the Hansen (1982) results without even looking. Also, I am not an econometrician and I learned this stuff years ago. Substantive corrections and amplifications are very welcome in the comments. I will try not to assume that the reader knows everything in finance and econometrics except GMM. I will fail.

(Haters: Yes, yes, I’m sure Noah would never do an entire post just on the incredibly banal concept of GMM. Too bad, he is away this fall. Except of course when he’s not. Also to haters: My putting out crappy posts is an incentive for him to come back sooner.)

Generalized Method of Moments, or GMM, is a method for estimating statistical models. It allows you to write down models, estimate parameters, and test hypotheses (restrictions) on those models. It can also provide an overarching framework for econometrics. Fumio Hayashi’s textbook, which old-timers shake their heads at, uses GMM as the organizing framework of its treatment, and derives many classical results as special cases of GMM.

Since the claim is that GMM is particularly useful to financial data, let’s motivate with a financial model based on Hansen and Singleton (1983). It may seem preposterously unrealistic, but this is what asset pricing folks do, and you can always consider it a starting point for better models, as we do with Modigliani-Miller. Suppose the economy consists of a single, infinitely-lived representative consumer, whose von Neumann-Morgenstern utility function exhibits constant relative risk aversion, \begin{equation} U(c_t) = \frac{c_t^\gamma} \gamma, \quad \gamma < 1, \end{equation} where \(c_t\) is consumption in period t and \(\gamma\) is the coefficient of relative risk aversion. She maximizes her expected utility \begin{equation} E_0 \left[ \sum_{t=0}^\infty \beta^t U(c_t) \right], \quad 0 < \beta < 1, \end{equation} where \(E_0\) means expectation with information available at the beginning of the problem and \(\beta\) is a discount factor to represent pure time preference. This utility function implies that the representative consumer prefers more consumption, other things being equal, with more weight on consumption that happens sooner rather than later, but also wants to avoid a future consumption path that is too risky. She will invest in risky assets, but exactly how risky?

If we introduce multiple assets, then try to solve this model by differentiating the expected utility function and setting it equal to zero, we get a first order condition \begin{equation} \label{returns-moment} E_t \left[ \beta \left( \frac{c_{t+1}}{c_t} \right) ^\alpha r_{i,t+1} \right] = 1, \quad i = 1, \ldots, N, \end{equation} where \( \alpha \equiv \gamma - 1 \) and \(r_{i,t+1}\) is the return on asset i from time t to time t+1. This approach is the basis of the entire edifice of “consumption based asset pricing” and it provides a theory for asset returns: they should be related to consumption growth, and in particular, assets that are highly correlated with consumption growth (have a high “consumption beta”) should have higher returns because they provide less insurance against consumption risk.

Equation \( \eqref{returns-moment} \) contains some variables such as \( c_t \) and \( r_{i,t+1} \) that we should hopefully be able to read in the data. It also contains parameters \( \beta \) and \( \alpha \) (or, if you prefer, \( \gamma \)), that we would like to estimate, and then judge whether the estimates are realistic. We would also like to test whether \( \eqref{returns-moment} \) provides a good description of the consumption and returns data, or in other words, whether this is a good model.

The traditional organizing method of statistics is maximum likelihood. To apply it to our model, we would have to add an error term \( \varepsilon_t \) that represents noise and unobserved variables, specify a full probability distribution for it, and then find parameters \( \beta \) and \( \alpha \) that maximize the likelihood (which is kind of like a probability) that the model generates the data we actually have. We then have several ways to test the hypothesis that this model describes the data well.

The problem with maximum likelihood methods is that we have to specify a full probability distribution for the data. It’s common to assume a normal distribution for \( \varepsilon_t \). Sometimes you can assume normality without actually imposing too many restrictions on the model, but some people always like to complain whenever normal distributions are brought up.

Hansen’s insight, based on earlier work, was that we could write down the sample analog of \( \eqref{returns-moment} \), \begin{equation} \label{sample-analog} \frac 1 T \sum_{t=1}^T \beta \left( \frac{ c_{t+1}}{c_t} \right)^\alpha r_{i,t+1} = 1, \end{equation} where instead of an abstract expected value we have an actual sample mean. Equation \( \eqref{sample-analog} \) can be filled in with observed values of consumption growth and stock returns, and then solved for \( \beta \) and \( \alpha \). Hansen discovered the exact assumptions for when this is valid statistically. He also derived asymptotic properties of the resulting estimators and showed how to test restrictions on the model, so we can test whether the restriction represented by \( \eqref{returns-moment} \) is supported by the data.
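To make that concrete, here is a minimal sketch of a one-step GMM estimation of \( \beta \) and \( \alpha \) from the sample moment conditions; the consumption-growth and return series are random placeholders, and for simplicity the weighting matrix is the identity rather than the efficient choice.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, N = 400, 3   # hypothetical sample size and number of assets

# Placeholder data: gross consumption growth c_{t+1}/c_t and gross returns r_{i,t+1}.
cons_growth = rng.lognormal(mean=0.005, sigma=0.01, size=T)
returns = rng.lognormal(mean=0.01, sigma=0.05, size=(T, N))

def moments(theta):
    """Sample moments: mean over t of beta*(c_{t+1}/c_t)^alpha * r_{i,t+1} - 1, one per asset."""
    beta, alpha = theta
    sdf = beta * cons_growth ** alpha            # stochastic discount factor
    return (sdf[:, None] * returns - 1.0).mean(axis=0)

def objective(theta):
    """One-step GMM objective g(theta)' W g(theta) with W = identity."""
    g = moments(theta)
    return g @ g

result = minimize(objective, x0=np.array([0.95, -2.0]), method="Nelder-Mead")
beta_hat, alpha_hat = result.x
print(f"beta = {beta_hat:.3f}, alpha = {alpha_hat:.3f} (gamma = {alpha_hat + 1:.3f})")
```

In a second step one would replace the identity with the inverse of an estimated covariance matrix of the moments, which gives the efficient GMM estimator and the J-test of the over-identifying restrictions.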

One big puzzle in consumption based asset pricing is that consumption is much smoother relative to stock returns than the theory predicts (I haven’t derived that, but manipulate \( \eqref{returns-moment} \) a little and you will see it); one of my favorite papers in this literature uses garbage as a proxy for consumption.

How does GMM relate to other methods? It turns out that you can view maximum likelihood estimation as a special case of GMM. Maximum likelihood estimation involves maximizing the likelihood function (hence the name), which implies taking a derivative and setting the derivative (called the score function in this world) equal to zero. Well, that’s just GMM with a moment condition saying the score function is equal to zero. Similarly, Hayashi lays out how various other classical methods in econometrics such as OLS, 2SLS and SUR can be viewed as special cases of GMM.
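As a tiny illustration of the OLS case: the moment condition is \( E[x_t(y_t - x_t'b)] = 0 \), it is exactly identified, and solving its sample analog reproduces the familiar least-squares estimator. A sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept and one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# OLS moment condition: (1/n) * X'(y - Xb) = 0. It is exactly identified,
# so solving the sample moment condition gives the usual OLS estimator.
b_gmm = np.linalg.solve(X.T @ X, X.T @ y)
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b_gmm, b_ols))   # True: identical estimates
```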

People who are not expert theoretical econometricians often have to derive their own estimators for some new-fangled model they have come up with. In many contexts it is simply more natural, and easier, to use moment conditions as a starting point than to try to specify the entire (parameterized) probability distribution of errors.

One paper that I find quite neat is Richardson and Smith (1993), who propose a multivariate normality test based on GMM. For stock returns, skewness and excess kurtosis are particularly relevant, and normality implies that they are both zero. Since skewness and excess kurtosis are moments, it is natural to specify as moment conditions that they are zero, estimate using GMM, and then use the J-test to see if the moment conditions hold.
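Here is a back-of-the-envelope version of that idea rather than Richardson and Smith's exact procedure: under normality, sample skewness and excess kurtosis are asymptotically independent with variances 6/T and 24/T, so a quadratic form in the two moments is asymptotically chi-squared with two degrees of freedom (with this simple weighting it is essentially the familiar Jarque-Bera statistic).

```python
import numpy as np
from scipy.stats import chi2

def normality_j_test(r):
    """Test that skewness and excess kurtosis are zero, in the spirit of a GMM J-test.

    Under normality the two sample moments are asymptotically independent with
    variances 6/T and 24/T, so the statistic is asymptotically chi-squared(2).
    """
    r = np.asarray(r, dtype=float)
    T = r.size
    z = (r - r.mean()) / r.std()
    skew = np.mean(z ** 3)
    exkurt = np.mean(z ** 4) - 3.0
    stat = T * (skew ** 2 / 6.0 + exkurt ** 2 / 24.0)
    return stat, chi2.sf(stat, df=2)

# Example with placeholder returns: fat tails should push the p-value toward zero.
rng = np.random.default_rng(5)
print(normality_j_test(rng.standard_t(df=4, size=2000)))
```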

PS. Noah will tell me I am racist for getting my Japanese names confused. I was going to add that in addition to econometrics, Hayashi is also known for his work on the economy of Ancient Greece. That’s actually Takeshi Amemiya, whose Advanced Econometrics is a good overview of the field as it stood right before the “GMM revolution”.

Is Niall Ferguson Right?


Seems implausible, doesn't it? In fact, I'm sure that after reading the title to this post, many of you are already preparing to launch another round of pleas to Noah to cast me into the outer darkness, where there is weeping and gnashing of teeth. But hear me out for a bit.

I pretty much agree with everything our beloved blog owner said in his response to Ferguson on Friday. Professor Ferguson has clearly let his critics get under his skin, and he really ought to have better uses for his time than writing a series of extremely long anti-Krugman articles which, like the Police Academy movies, get less compelling with each installment.

Nevertheless, in his first installment Ferguson does land a direct hit on the issue of the Euro, documenting Krugman's repeated and sometimes confident predictions that the Eurozone would collapse between 2010 and 2012:
By my reckoning, Krugman wrote about the imminent break-up of the euro at least eleven times between April 2010 and July 2012:
1. April 29, 2010: "Is the euro itself in danger? In a word, yes. If European leaders don't start acting much more forcefully, providing Greece with enough help to avoid the worst, a chain reaction that starts with a Greek default and ends up wreaking much wider havoc looks all too possible."
2. May 6, 2010: "Many observers now expect the Greek tragedy to end in default; I'm increasingly convinced that they're too optimistic, that default will be accompanied or followed by departure from the euro."
3. September 11, 2011: "the euro is now at risk of collapse. ... the common European currency itself is under existential threat."
4. October 23, 2011: "[the] monetary system ... has turned into a deadly trap. ... it's looking more and more as if the euro system is doomed."
5. November 10, 2011: "This is the way the euro ends ... Not long ago, European leaders were insisting that Greece could and should stay on the euro while paying its debts in full. Now, with Italy falling off a cliff, it's hard to see how the euro can survive at all."
6. March 11, 2012: "Greece and Ireland ... had and have no good alternatives short of leaving the euro, an extreme step that, realistically, their leaders cannot take until all other options have failed - a state of affairs that, if you ask me, Greece is rapidly approaching."
7. April 15, 2012: "What is the alternative? ... Exit from the euro, and restoration of national currencies. You may say that this is inconceivable, and it would indeed be a hugely disruptive event both economically and politically. But continuing on the present course, imposing ever-harsher austerity on countries that are already suffering Depression-era unemployment, is what's truly inconceivable."
8. May 6, 2012: "One answer - an answer that makes more sense than almost anyone in Europe is willing to admit - would be to break up the euro, Europe's common currency. Europe wouldn't be in this fix if Greece still had its drachma, Spain its peseta, Ireland its punt, and so on, because Greece and Spain would have what they now lack: a quick way to restore cost-competitiveness and boost exports, namely devaluation."
9. May 17, 2012: "Apocalypse Fairly Soon ... Suddenly, it has become easy to see how the euro - that grand, flawed experiment in monetary union without political union - could come apart at the seams. We're not talking about a distant prospect, either. Things could fall apart with stunning speed, in a matter of months."
10. June 10, 2012: "utter catastrophe may be just around the corner."
11. July 29, 2012: "Will the euro really be saved? That remains very much in doubt." 
 Reading this section of Ferguson's article was particularly painful for me, as I was very much in agreement with Krugman on the likelihood of a Eurozone breakup. And, in fact, I still think the economic arguments Krugman used to support his position were sound.

And yet, it didn't happen. The Euro not only didn't implode, but as Ferguson notes, new countries continue to join.

The fact that Ferguson turned out to be wrong about hyperinflation or the budget deficit is easy to explain: he didn't know what he was talking about. Yet Krugman and others did know what they were talking about. And yet they were still wrong. 

As the political scientist Philip Tetlock has demonstrated, experts tend to be pretty bad at making accurate predictions even in areas that involve their subject of expertise:
In the most comprehensive analysis of expert prediction ever conducted, Philip Tetlock assembled a group of some 280 anonymous volunteers—economists, political scientists, intelligence analysts, journalists—whose work involved forecasting to some degree or other. These experts were then asked about a wide array of subjects. Will inflation rise, fall, or stay the same? Will the presidential election be won by a Republican or Democrat? Will there be open war on the Korean peninsula? Time frames varied. So did the relative turbulence of the moment when the questions were asked, as the experiment went on for years. In all, the experts made some 28,000 predictions. Time passed, the veracity of the predictions was determined, the data analyzed, and the average expert’s forecasts were revealed to be only slightly more accurate than random guessing—or, to put more harshly, only a bit better than the proverbial dart-throwing chimpanzee.
Not only do experts only do slightly better than chance, on average, but it turns out when it comes to making accurate predictions, level of expertise faces rapidly diminishing marginal returns:
In political and economic forecasting, we reach the inflection point surprisingly quickly. It lies in the vicinity of attentive readers of high-quality news outlets, such as The Economist. The predictive value added of Ph.Ds, tenured professorships and Nobel Prizes is not zero but it is disconcertingly close to zero.
When experts do turn out to be wrong, Tetlock reports, they typically do not concede that they have made a real error. Instead, they are inclined to make excuses: my predictions were right, it's just that the timing was off. I would've been right, but an unforeseeable factor changed the equation. And so on.

Certainly when it comes to the Eurozone crisis, these are the excuses that jump readily to my mind. Sure, Europe has managed to stave off collapse for now. But the problems with the Euro are still there, and eventually they will out. The Eurozone would have collapsed if Draghi hadn't said that the ECB would do what it took to prevent it, and who could've foreseen that? And is Niall Ferguson really the best person to be making accusations here? Back during the height of the crisis he was speculating about a Eurozone break up too.

I would like to go along with such excuses, but I can't quite make myself do it. While Draghi's intervention wasn't foreseen by a lot of people, it's hard to say that it wasn't foreseeable. And claiming that the Eurozone will collapse "someday" is all but unfalsifiable. I'm afraid the lesson here is that even getting the economics right doesn't help you much in predicting an outcome that depends not only on economics but also on politics (which, broadly speaking, most macroeconomic issues do).

According to Tetlock, there are ways to improve one's predictive accuracy. Experts "who tended to use one analytical tool in many different domains... preferred keeping their analysis simple and elegant by minimizing 'distractions'... [and] were unusually confident" were less accurate on average. By contrast, the better predictors:
used a wide assortment of analytical tools, sought out information from diverse sources, were comfortable with complexity and uncertainty, and were much less sure of themselves—they tended to talk in terms of possibilities and probabilities and were often happy to say “maybe.” In explaining their forecasts, they frequently shifted intellectual gears, sprinkling their speech with transition markers such as “although,” “but,” and “however.”

So I think that Ferguson is right that a little more humility is in order, even if he himself doesn't really practice what he preaches. 

Thursday, 10 October 2013

The Ferg-beast attacks



I'm breaking my blogging hiatus to briefly respond to Niall Ferguson, not because I'm mad, but because it's fun. For those who haven't seen it, Ferguson has posted a massive three-part rant (part 1, part 2, part 3) in which he attempts to take Paul Krugman to task for making wrong predictions, for being a big meanie, etc. etc. The series is titled "Krugtron the Invincible", after my blog post of the same name. In fact, Ferguson references me in part 1 of his rant:
The resemblance between Krugman and Voltron was suggested by one of the gaggle of bloggers who are to Krugman what Egyptian plovers are to crocodiles.
...and again in part 3:
For too long, Paul Krugman has exploited his authority as an award-winning economist and his power as a New York Times columnist to heap opprobrium on anyone who ventures to disagree with him. Along the way, he has acquired a claque of like-minded bloggers who play a sinister game of tag with him, endorsing his attacks and adding vitriol of their own. I would like to name and shame in this context Dean Baker, Josh Barro, Brad DeLong, Matthew O'Brien, Noah Smith, Matthew Yglesias and Justin Wolfers. Krugman and his acolytes evidently relish the viciousness of their attacks, priding themselves on the crassness of their language.
Having been named, but not successfully shamed, I feel I ought to make a few points in response:

1. The Egyptian plover does not, in fact, eat meat out of the mouths of Nile crocodiles. That is a myth.

2. Ferguson attempts to peg me as a Krugman yes-man. This is lazy and uninformed on his part, since everyone who reads my blog or Twitter feed knows that I am a Brad DeLong yes-man, not a Krugman yes-man.

3. Some of Krugman's critics have good points. Others seem driven mainly by personal dislike for Krugman's polemical style or liberal politics. But a few seem to be motivated by a peculiar type of intellectual inferiority complex. They seem to believe that by catching a famous "smart guy" like Krugman making a mistake, they can prove themselves to be highly intelligent. Naturally, this doesn't actually work. Ferguson should worry about whether he falls into this latter category of Krugman detractors.

4. Of course, there is a fourth reason for criticizing a public intellectual: pure enjoyment. Ferguson seems too angry and bitter for his Krugman-bashing to be motivated by pure fun, but my own occasional Ferguson-bashing (see here and here) was motivated each time by the peculiar glee that comes from giving a really bad article the thorough point-by-point thrashing that it deserved.

5. My advice to Ferguson would be: Unless you're aiming for your legacy as a public intellectual to be "that British guy who constantly went after Paul Krugman", I'd suggest finding something else to write about. History, for example. I hear you're a very good historian.

6. To readers: If you really want to read more Ferguson-bashing, see Matt O'Brien and Josh Barro.

Monday, 07 October 2013

Borrowing from the Future — Except that We Aren't


This morning’s New York Times featured an utterly clueless op-ed by Stephen D. King (not the horror writer, at least not intentionally).  It would take too long to go through all the errors and misrepresentations he managed to pull together, but one in particular deserves mention, since it is a common meme: fiscal deficits mean that “we are borrowing from future generations”.

Let’s look at this calmly.  First, suppose we have a closed economy; there is no trade and no international flows of lending and debt service.  In this state of happy isolation, the government may choose to run a budget deficit.  If it does, it issues bonds.  People who want this kind of asset buy the bonds, and it is their purchase that funds government spending not covered by tax revenue.  These bonds have maturities, so what happens when their time is up?

One of three things.  First, the bonds could be redeemed with no new bonds issued.  This would reduce the amount of bonds in private portfolios, but it would give the ex-bondholders an equivalent amount of cash to spend on something else.  The cash comes from a budget surplus, which means that taxpayers have paid in extra.  Those who don’t own bonds are now net payers to those who do.  From an accounting standpoint, however, they also gain in the sense that they are no longer liable to pay a stream of interest on the bonds into the future.

Historically, this doesn’t happen very much.  The best example is the huge deficits that the US and Britain used to finance WWII.  Neither country ran equivalent surpluses later on to “pay off” the debt.  Their economies grew, inflation eroded debt claims, and over time the debt-to-GDP levels slid back down to more reasonable levels.  Specifically, when the war bonds came due these governments simply issued more bonds to finance their redemption; they rolled over the debt.  That’s what happens in 99% of the cases, and you can see this by looking at how small and infrequent fiscal surpluses actually are.  This refinancing through issuance of new debt is option number two.

The third possibility is default.  Banana republics and governments under the influence of suicide cults sometimes do this.  ‘Nuf said.

So where is “borrowing from the future”?  Well, all government borrowing is borrowing from some people to pay other people, and paying back these debts, should it ever happen, simply reverses that flow.  Either way, money is making its way from one group to another at the same point in time.  Note that interest on the debt has nothing to do with present vs future either: the current generation, which incurs the debt, begins paying interest immediately in exactly the same way their distant descendants will.

None of this means, of course, that deficits are innocent in every respect.  To run them, the government has to be able to sell its bonds or, if the central bank is going to monetize them, the economic environment has to permit this.  (It can’t be too inflationary.)  That can break down.  Past experience can guide us in assessing how close we are to the upper limit of our fiscal space.  (The struggling countries in the Eurozone are a special case, because they don’t borrow in their own currencies.)

Ah, but you say, we are not in a closed economy.  Some of our borrowing is from foreigners, and debt service transfers money from us to them.  Some responses:

1. This is not about present versus future, but our people and their people.  After all, international interest payments, like intranational ones, don’t wait to be paid; we begin paying them right away.

2. The world as a whole is a closed economy.  If “we” are everyone in the world (and in some senses we are), there is no outside, at least economically.

3. The amount of foreign borrowing to finance fiscal deficits is not directly a function of the size of the deficit but of whatever determines our international payments position—our current and capital accounts.  To make the case that government borrowing increases the current account deficit, you have to, well, make the case.  For instance, is there a historical relationship between the size of US fiscal deficits as a proportion of GDP and corresponding current account deficits?  Show this if you can.  (In general there isn't, although the equality of the current account and the sum of net domestic borrowing, both public and private, is an accounting identity.  If two things are related by an identity, one does not "cause" the other because they are both the same thing.)  On a theoretical level, the idea is that issuing more public debt raises domestic interest rates, which in turn raises exchange rates, which in turn increases trade deficits.  OK, try to show that.  (The US record doesn’t provide much support.)

So there you have it.  At an individual level, borrowing is truly borrowing from the future.  At a population level, borrowing is the creation of assets and liabilities across different people.  People like King are committing a fallacy of composition.

Incidentally, we are borrowing from the future.  We are shirking the investments we need to make so that our children and grandchildren can live in a habitable world with a well-educated population that enjoys a productive infrastructure.  No fallacy of composition there.

In Defense of the Constitution



Democracy is dead, and the Constitution is responsible. That, at any rate, seems to be the emerging consensus among some of America's smarter left of center bloggers. Last week, Dylan Matthews of Wonkblog laid the blame for the current government shutdown not on President Obama or Ted Cruz, but on James Madison:
This week's shutdown is only the latest symptom of an underlying disease in our democracy whose origins lie in the Constitution and some supremely misguided ideas that made their way into it in 1787, and found their fullest exposition in Madison's Federalist no. 51.
Matthews' opinion was seconded by Matt Yglesias, who argued that ultimately "America was doomed" because of our presidential system of government. And this weekend Jonathan Chait argued that the current budget battles might have been "destined all along" due to our constitutional structure. 

Matthews, Yglesias, and Chait all based their claims on a 1990 essay by political scientist Juan Linz, The Perils of Presidentialism. Written during the wave of democratization that coincided with the end of the Cold War, Linz's essay lists a number of dangers arising from a strong presidency, and argues that newly democratic nations ought to consider adopting a parliamentary system of government (like that of Germany) rather than a presidential one (like we have here in the U.S.).  

It's a good essay, and if one were designing a system of government from scratch, particularly in a society without a long tradition of democratic self-government, Linz's concerns ought to be taken seriously. But there is something a little odd about seeing The Perils of Presidentialism being cited as an argument by those who wish that the House of Representatives would be more accommodating to the President. As the title suggests, The Perils of Presidentialism is chiefly concerned with the dangers that come from the president having too much power.

For example, Linz argues that because the president alone is elected by the whole nation, he and his supporters "may be tempted to define his policies as reflections of the popular will and those of his opponents as the selfish designs of narrow interests." This certainly is a temptation. In fact, it seems to be a strong temptation for Chait himself, who describes today's GOP as "a party large enough to control a chamber of Congress yet too small to win the presidency." Matthews likewise suggests that his personal favorite theory of democracy is summed up in a statement by Max Weber: "In a democracy the people choose a leader in whom they trust. Then the chosen leader says, 'Now shut up and obey me.'"

It gets worse. According to Linz, there is also a danger that the president will "use ideological formulations to discredit his foes," perhaps, for example, by referring to them as nihilists, anarchists, terrorists, etc.

And then there is the timing factor. Linz also argues that, because of presidential term limits, a president's
awareness of the time limits facing him and the program to which his name is tied cannot help but affect his political style...  This exaggerated sense of urgency on the part of the president may lead to ill-conceived policy initiatives, overly hasty stabs at implementation, unwarranted anger at the lawful opposition, and a host of other evils. 
Ultimately, Linz fears, frustration with legislative unwillingness to go along with presidential policy might lead to a military coup or other forms of violence.

While it's fun to speculate about alternative systems of government, America is not at risk of devolving into a military dictatorship any time soon. For all our faults, the U.S. is in fact one of the most successful countries in the history of the planet, and our constitutional structure, with its system of divided power, is one of the reasons for that. If you want to attack the House Republicans, be my guest. But leave James Madison out of it.   

UPDATE: On Twitter, Adam Gurri directs me to a piece of his that sums up my thinking perfectly: 
Rather than judging institutions on the basis of theory, we ought to be looking at their resilience; how they stand the test of time. The electoral college is frequently a target of criticism and ridicule by people who feel it is outdated, but it is precisely because its life can be measured in centuries rather than decades that we should trust it by default. At least, we should trust it more than the simple stories proffered by pundits and scholars. 
The more I learn about the Swiss canton system, the crazier it seems to me. Yet there are some cantons that have been in continual existence for something like 700 years—and Switzerland is a very wealthy and very peaceful nation. We should not conclude from this that their system of government should be spread to every corner of the Earth, but it is clear that there is something about the system as it operates in Switzerland that works.  

Minggu, 06 Oktober 2013

Metaphysics and the Breaking Bad Finale


If we shadows have offended, think but this and all is mended
That you have but slumbered here, while these visions did appear

Last weekend was the series finale of Breaking Bad, the critically acclaimed drama about an Albuquerque chemistry teacher who becomes a meth dealer after being diagnosed with cancer. From what I can gather, most fans were satisfied by the show's resolution. There is a small but growing cadre of viewers, however, who found parts of the episode a bit unrealistic, and have responded by arguing that the ending was in fact all a dream. Here, for example, is Emily Nussbaum:

[H]alfway through, at around the time that Walt was gazing at Walt, Jr., I became fixated on the idea that what we were watching must be a dying fantasy on the part of Walter White, not something that was actually happening—at least not in the “real world” of the previous seasons. … I mean, wouldn’t this finale have made far more sense had the episode ended on a shot of Walter White dead, frozen to death, behind the wheel of a car he couldn’t start? Certainly, everything that came after that moment possessed an eerie, magical feeling—from the instant that key fell from the car’s sun visor, inside a car that was snowed in. Walt hit the window, the snow fell off, and we were off to the races. Even within this stylized series, there was a feeling of unreality—and a strikingly different tone from the episode that preceded this one. 

This isn’t the first time that a show or movie has inspired some creative theorizing. The end of The Sopranos sparked debates over whether Tony Soprano had or had not been killed at the moment after the episode ended. Fans and critics alike engaged in elaborate exegesis over whether Leonardo DiCaprio was awake or dreaming at the end of Inception. And the ending of Lost, well, what can you say.

These sorts of arguments come so naturally to people that it is easy to gloss over a certain strangeness underlying the whole debate. Breaking Bad, after all, is a work of fiction. Walter White does not exist. As such, it’s not entirely clear what it means to make claims about what happened to him or to other fictional characters beyond the explicit descriptions of what happened on the screen.

This sort of attitude is perhaps best expressed by the late novelist Douglas Adams (of Hitchhiker’s Guide to the Galaxy fame). Asked by a fan about some detail of Arthur Dent’s life not mentioned in his books, Adams responded:

The book is a work of fiction. It’s a sequence of words arranged to unfold a story in a reader’s mind. There is no such actual, real person as Arthur Dent. He has no existence outside the sequence of words designed to create an idea of this imaginary person in people’s minds. There is no objective real world I am describing, or which I can enter, and pick up his computer, look at it and tell you what model it is, or turn it over and read off its serial number for you. It doesn’t exist.

 On one level, Adams is perfectly correct. Still, there is something unsatisfying about this point of view. It’s not clear that one could enjoy fiction were it to be held consistently. 


You may say I'm a dreamer, but I'm not the only one.

An alternate perspective was offered by the philosopher David Lewis in his paper Truth in Fiction. Lewis is best known as an advocate of Modal Realism (the view that every possible world really exists). But while Truth in Fiction utilizes Lewis’ possible world semantics, I don’t think one needs to accept Modal Realism in order to accept his basic argument.

According to Lewis: 

A sentence of the form 'In the fiction f, x' is non-vacuously true iff, whenever w is one of the collective belief worlds of the community of origin of f, then some world where f is told as known fact and x is true differs less from the world w, on balance, than does any world where f is told as known fact and x is not true. It is vacuously true iff there are no possible worlds where f is told as known fact.
Okay, so that's a bit of a mouthful, but basically Lewis is saying that the claim "Walter White froze to death at the end of Breaking Bad" is true if a world in which that happened is closer to the actual world than a world in which everything depicted in the show up to that point happened and he did not die that way. (The bit about collective beliefs is meant to handle cases where the community a story comes from holds factually wrong beliefs that are built into it: the solution to the Sherlock Holmes story "The Adventure of the Speckled Band" rests on an error about snakes, but Lewis doesn't think we should therefore conclude that Holmes got the wrong man.)
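For readers who like their philosophy in symbols, here is one way to schematize Lewis's Analysis 2 (the notation is mine, not Lewis's; I write \( \varphi \) for the claim in question and treat "differs less from" as a comparative closeness relation):

\begin{equation}
\text{In } f,\ \varphi \text{ is true} \iff \forall w \in B_f\ \ \exists w_1 \in F_{\varphi}\ \ \forall w_2 \in F_{\neg\varphi}:\ w_1 \text{ is closer to } w \text{ than } w_2 \text{ is},
\end{equation}

where \( B_f \) is the set of collective belief worlds of the community in which \( f \) originated, \( F_{\varphi} \) (respectively \( F_{\neg\varphi} \)) is the set of worlds where \( f \) is told as known fact and \( \varphi \) is true (respectively false), and the sentence counts as vacuously true when no world tells \( f \) as known fact.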

This, however, poses a bit of a problem. There are, after all, plenty of fictional tales containing elements that are implausible or even impossible in our world. A world in which everything past the first couple of scenes of Buffy the Vampire Slayer was a dream would be more similar to the actual world than a world in which it all really happened (and the collective-belief clause doesn’t help here, since viewers of Buffy did not mistakenly believe that vampires were real). On the other hand, it’s not clear that Lewis is entitled to rule out dream explanations per se. After all, there are works of fiction where large portions of the narrative turn out to be a dream (the ninth season of Dallas perhaps being the most famous example).

So while I don’t find the Breaking Bad dreamers’ arguments terribly plausible, their theories seem to have done something I would have thought even less plausible, namely, refute one of the Twentieth Century’s most capable philosophers.   

Sabtu, 05 Oktober 2013

How important was the “structural balance” screw-up in driving European austerity?


I’m really glad to see that this European screw-up is finally making headlines and that the European Commission is reconsidering the operational details of its production function method for estimating potential output and structural deficits (see WSJ).
As reported by the WSJ’s Real Time Brussels blog, the issue has become important because the new European Fiscal Compact, which entered into force on 1 January 2013, requires that the structural deficit of euro-area Member States be less than 0.5% of GDP. And, as the blog puts it, “the European Commission uses [the structural balance] metric – the actual government budget balance adjusted for the strength of the economy – to determine how much austerity is needed; getting it wrong has big consequences.”
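To see why getting potential output wrong matters so much, here is a minimal sketch of the cyclical-adjustment arithmetic, in Python. The numbers and the 0.5 budgetary semi-elasticity are purely illustrative, not the Commission's actual parameters:

# Minimal sketch of the cyclical-adjustment arithmetic behind the
# "structural balance".  All numbers are illustrative, not official.

def output_gap(actual_gdp, potential_gdp):
    """Output gap in percent of potential GDP."""
    return 100.0 * (actual_gdp - potential_gdp) / potential_gdp

def structural_balance(headline_balance, gap, semi_elasticity=0.5):
    """Headline budget balance (% of GDP) corrected for the cycle.

    structural = headline - semi_elasticity * output_gap
    A deeper slump (more negative gap) flatters the structural balance;
    a smaller estimated gap makes it look worse.
    """
    return headline_balance - semi_elasticity * gap

headline = -6.0        # headline deficit of 6% of GDP
actual_gdp = 100.0

# Pre-crisis (2008 vintage) estimate of potential output ...
gap_2008 = output_gap(actual_gdp, potential_gdp=106.0)
# ... versus a revised-down (2010 vintage) estimate.
gap_2010 = output_gap(actual_gdp, potential_gdp=102.0)

print(structural_balance(headline, gap_2008))   # about -3.2 (% of GDP)
print(structural_balance(headline, gap_2010))   # about -5.0 (% of GDP)

Same headline deficit, but the revised-down potential output makes the measured structural deficit look almost two points of GDP worse; under a structural-deficit rule, that translates directly into demands for more consolidation.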
The question is whether “getting the structural balance wrong” in 2010 – the time at which Europe started to become obsessed with fiscal austerity – mattered in driving the amount of consolidation.
I first came across the weakness of the European Commission’s structural balance calculations in 2010 and pointed it out in a series of Goldman Sachs publications, which I then summarized with my co-author Natacha Valla on VoxEU. The basic storyline was and remains simple. The European Commission drastically revised downward its estimates of potential GDP as the crisis hit, which automatically increased its structural deficit measures for European countries.
This raised two questions:
1/ Did it make sense? 2/ Did the downward revisions to potential GDP matter for the size of consolidation packages?
First, did the downward revisions to potential GDP – that the European Commission introduced early in the crisis – make sense?
The European Commission uses a production function methodology for calculating potential growth rates and output gaps (see here). It features a simple Cobb-Douglas specification in which potential output depends on TFP and a combination of factor inputs (potential labor and capital). Importantly for what follows, potential labor input is calculated as:
Working-age population × Participation rate × Average hours worked × (1 − NAWRU)
It’s important to focus on potential labor input, since the bulk of the revisions to potential GDP between the 2008 and 2010 vintages arose from revisions to labor input.
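Here is a minimal toy version of that mechanics, again in Python. The input values and the 0.65 labor share are made up for illustration; the Commission's actual exercise uses trended TFP and filtered inputs, so this only shows the direction of the effect:

# Toy version of the production-function approach.
# All inputs are illustrative, not actual Commission estimates.

def potential_labor_input(working_age_pop, participation, avg_hours, nawru):
    """Potential hours worked = WAP x participation x hours x (1 - NAWRU)."""
    return working_age_pop * participation * avg_hours * (1.0 - nawru)

def potential_output(tfp, labor, capital, labor_share=0.65):
    """Cobb-Douglas potential output with an illustrative labor share."""
    return tfp * labor ** labor_share * capital ** (1.0 - labor_share)

wap, part, hours, capital, tfp = 30e6, 0.73, 1700.0, 3.0e12, 5.0

# Same economy, two NAWRU assumptions: pre-crisis versus revised upward.
y_low_nawru = potential_output(tfp, potential_labor_input(wap, part, hours, 0.09), capital)
y_high_nawru = potential_output(tfp, potential_labor_input(wap, part, hours, 0.16), capital)

print(100.0 * (y_high_nawru / y_low_nawru - 1.0))   # about -5%

In this toy example, raising the assumed NAWRU from 9% to 16% cuts potential labor input by roughly 8% and potential output by roughly 5%; a lower potential output then means a smaller estimated output gap and, as in the sketch above, a larger measured structural deficit.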
Figure 1. Decomposition of the revisions (2010 vs. 2008 vintage) to potential GDP: Spain
Source: VoxEU. Note: The vertical axis is in percentage points.
And as you can see from the decomposition in Figure 1, a large part of that decrease came from an increase in the Commission’s estimate of structural unemployment: the now infamous NAWRU. The European Commission uses the non-accelerating wage rate of unemployment (NAWRU) as an estimate of structural unemployment. We can discuss the pros and cons of NAWRU as a dynamic measure of structural unemployment in normal times, but the following graph should basically scream at you that this measure has failed to separate cyclical from structural unemployment during these extraordinary times.
Figure 2: Actual unemployment and NAWRU: Spain
Source: European Commission, 2013 spring forecast exercise
So one can argue that a crisis reduces potential labor input temporarily (workers need time to adjust to the new sectoral and geographical composition of jobs) or even permanently (because of hysteresis effects); but the size of the adjustments the European Commission applied looks clearly excessive.
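To see the mechanical problem, consider the following toy illustration. It is emphatically not the Commission's NAWRU procedure (which is a Kalman-filter model built around a Phillips curve); it just applies a simple one-sided trend to a stylized unemployment series, to show how a trend-chasing estimator reclassifies a cyclical spike as structural:

# Toy illustration with stylized, made-up unemployment numbers.
# NOT the Commission's NAWRU method; the point is only that a
# trend-following estimator mechanically drags the "structural"
# rate up when actual unemployment spikes in a crisis.

unemployment = [8, 8, 8, 8, 9, 12, 18, 20, 22, 25, 26]   # percent, stylized
years = list(range(2003, 2014))

def one_sided_trend(series, smoothing=0.3):
    """Exponentially weighted trend using only current and past data."""
    trend = [float(series[0])]
    for x in series[1:]:
        trend.append((1.0 - smoothing) * trend[-1] + smoothing * x)
    return trend

for year, u, t in zip(years, unemployment, one_sided_trend(unemployment)):
    print(f"{year}: actual {u:4.1f}%   'structural' trend {t:4.1f}%")

Even this crude filter drags the "structural" rate from 8% to above 20% within a few crisis years, without using any structural information whatsoever; the complaint in the text is that the official NAWRU in Figure 2 behaves in much the same trend-chasing way.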
Second, did it matter and was it more important than the fiscal multiplier screw-up in driving austerity?
As pointed out by Real Time Brussels, a reconsideration of the methods that the European Commission uses for estimating potential output could now cut the estimated structural deficits of the periphery countries and mean less austerity, given the 0.5% rule of the Fiscal Compact. But I don’t think that, back in 2010, the structural deficit numbers played an important role for the countries under heavy pressure from bond markets, since those countries mostly focused on nominal deficit and debt-to-GDP metrics in designing their consolidation packages.
The structural balance screw-up may have mattered, however, for Germany (and for other core countries to the extent that their fiscal strategy mostly mimicked Germany’s), since Germany designed its consolidation plan for the 2011-2016 period around the structural balance metric. In my VoxEU piece, you can see that the downward revision to potential GDP applied to core countries was not negligible. For Germany, the European Commission revised down the potential GDP growth rate by about 0.5 percentage points.
I need to find the output gap and the cyclical-adjustment parameters used by the European Commission in both 2008 and 2010 to quantify by how much austerity would have decreased had Germany followed the same consolidation path (similar speed and end-points in terms of structural deficit) but used the 2008 vintage of its potential output estimates rather than the revised 2010 estimates. But my guess is that the consolidation efforts Germany planned for were to a significant extent unnecessarily restrictive, even taking as given its objective of converging to a structural deficit of 0.35% by 2016.
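In the meantime, a back-of-envelope calculation gives an order of magnitude. The only number taken from the text is the roughly 0.5-percentage-point revision to Germany's potential growth rate; the assumption that it applies throughout the 2011-2016 horizon and the 0.5 semi-elasticity are mine, so treat the result as illustrative rather than an estimate:

# Back-of-envelope only.  The 0.5pp growth revision is the figure quoted
# in the text; assuming it applies to each year of 2011-2016 and using a
# 0.5 semi-elasticity are illustrative assumptions, not Commission numbers.

growth_revision_pp = 0.5     # potential growth revised down, pp per year (assumed)
horizon_years = 6            # 2011-2016
semi_elasticity = 0.5        # illustrative cyclical-adjustment parameter

# A lower potential growth rate cumulates into a lower potential-output level ...
potential_level_shortfall = growth_revision_pp * horizon_years    # about 3% of GDP by 2016

# ... which shrinks the estimated output gap by the same amount and therefore
# worsens the measured structural balance by semi_elasticity times as much.
extra_measured_structural_deficit = semi_elasticity * potential_level_shortfall

print(extra_measured_structural_deficit)   # 1.5 (% of GDP)

Under those assumptions, hitting the same structural-deficit end-point would have required on the order of one and a half points of GDP of additional consolidation by 2016 relative to the 2008 vintage, which is what makes me suspect the planned effort was significantly more restrictive than necessary.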
Given the sheer size of the core countries within the euro area, this operational mess-up may have been quite significant in delivering an inappropriate overall fiscal stance and hindering rebalancing. As a matter of fact, I wonder if this operational mess-up wasn’t more important than the fiscal multiplier screw-up in delivering this outcome. Granted, fiscal consolidation would have been less drastic in the periphery without the fiscal multiplier screw-up. But the biggest European fiscal policy failure wasn’t so much that of the periphery, which had pretty much no choice but to consolidate drastically in the absence of a proper lender of last resort. The biggest fiscal policy mistake was the early consolidation effort of the core, which started in 2010 – and was amplified by the structural balance screw-up – even though, in the absence of default risk, debt adjustment should be very gradual.
Jeremie Cohen-Setton (@JCSBruegel) is a PhD candidate in economics at UC Berkeley and an Affiliate Fellow at Bruegel. He specializes in Macroeconomic Policies and Macroeconomic History and worked previously as an economist at HM Treasury and at Goldman Sachs. Jeremie blogs at ecbwatchers.org and is the main author of the blogs review at bruegel.org.