Saturday, October 27, 2012

Does the "Entrepreneurship Subculture" prevent big ideas?


Try this: Think of an animal that isn't an elephant.

What was the first animal you thought of? For a hefty fraction of you, it was probably an elephant. But if I had just said "Think of an animal", only a few of you would have thought of an elephant. The moral of this story is that when you try to fixate on "thinking different," often you just fixate on the same-old, same-old.

This is a colloquial description of a line of research by Steven M. Smith, a cognitive psychologist at Texas A&M University (disclosure: Steven M. Smith is my father). In this book chapter, Smith describes how "initial ideas" can constrain creativity. The results are best summed up by this abstract (omitted from the final draft):
The first ideas to be considered during creative idea generation can have profoundly constraining effects on the scope of the ideas that are subsequently generated. Even if initial ideas are intended to serve as helpful examples, or they are given simply to get the creative process going, the constraints of initial ideas may be inescapable. Such constraints can impede successful problem solving and inhibit creative invention. Overcoming these constraints can be enhanced by reconsidering initially failed problems in new contexts.
Here is another paper by Smith, along with Nick Kohn, detailing how group brainstorming can lead to "collaborative fixation", in which everyone in the group starts fixating on whatever ideas get suggested first.

Why am I bringing this up? Well, in the past few years, I've been reading - and hearing - a lot about the Entrepreneurship Subculture. You all know what this is. It's mostly young people, mostly in urban areas (especially SF and NYC). It's mostly (but not exclusively) made up of entrepreneurs in the fields of technology and media. It includes media outlets like TechCrunch, books like The Lean Startup, "incubators" like YCombinator, forums like Quora, and other outlets like the TED and TEDx talks. I myself have come in contact with this subculture just by dint of being friends with a lot of engineers, and with Peter Chang, whose own venture-funded media startup covers a lot of entrepreneurship-related events out in the Bay Area.

At this point, given the often combative nature of this blog, as well as the title, you might expect me to reveal that I am a detractor of the Entrepreneurship Subculture - a hater, so to speak. But that is not the case. I love the Entrepreneurship Subculture. The sheer intellectual energy of the movement is intoxicating. The people are, by and large, wonderful human beings. And the work being done by those involved in the Subculture is some of the most valuable stuff being done anywhere. In an age when much of highly-educated white-collar America spends its time performing unnecessary medical tests, trying to trick suckers into buying overvalued financial assets, or lobbying government for pork, the crowd at your local TechCrunch Disrupt conference are the real heroes of the economy.

But - as you may expect - I wonder if the Entrepreneurship Subculture isn't creating some unintended adverse effects. By putting entrepreneurs in such close and constant contact with each other, does the Subculture foment creativity and cross-collaboration? Probably. But it also may inadvertently stifle creativity, by exactly the process that Steven Smith's research describes. Working in incubators, attending entrepreneur conferences, reading entrepreneurship publications, and talking constantly to other entrepreneurs may cause "collaborative fixation". Everyone will end up thinking of the same stuff, even if they try to think of something new and differentiated. Especially if they try. 

If that happens, what we'll see is a lot of "me too" products. A social network for miniature terriers. Yet another mobile social local photo sharing app (that line is plagiarized from somewhere, but I can't remember where). Not the kind of big conceptual breakthroughs that really disrupt the industrial structure. Not the kind of big ideas that build really huge and successful companies.

This is important because America needs good entrepreneurship - and especially good tech entrepreneurship - more than ever. Rates of new business formation are falling. The venture capital sector is not making good returns (though much of its profit may simply have been taken by angel investors). And then there is that Great Stagnation.

So what is the solution? How can we make sure that a Subculture designed to foment entrepreneurship doesn't end up accidentally encouraging groupthink? Smith's research suggests a way out: a change of context. Give entrepreneurs a break. Send them out into the wilderness - to other countries, or to small towns away from the beating heart of innovation. Or just encourage them to spend periods of time away from the Subculture, avoiding conferences, not talking to other entrepreneurs, not reading TechCrunch. Have them live life, read some science fiction, visit factories and farms and retail outlets and non-tech office parks. Have them talk to friends (or just make friends) who work in other areas of the economy. Get them out of the bubble. Have them write down ideas, keep diaries, etc. I strongly suspect that when they come back, many of them will have new, weird, different ideas that they would not otherwise have had.

A famous Japanese artist once told me "It's impossible to think of anything new in a city." I countered "Yes, but it's hard to collaborate out in the country." We agreed that one needs to alternate. In the same way, I think that the Entrepreneurship Subculture should emphasize the importance of changes in context. Disrupt your own ideas.

Wednesday, October 24, 2012

Damage to Rented Premises

Any time a business rents or leases a space to operate from, it signs a contract. That contract contains insurance requirements stating that the tenant will carry certain liability limits. Normally the landlord will ask the tenant to carry a commercial general liability policy, and more often than not will ask for at least a $1,000,000 per occurrence limit. The reason for this is that if the tenant causes a fire or other damage to the rented building, the landlord wants the tenant's insurance, not the landlord's own, to pay for the damages.

Commercial General Liability addresses a lease contract with two different types of coverage. The first is the $1,000,000 per occurrence limit I mentioned above. This coverage, however, only gets the tenant halfway there. The per occurrence limit doesn't cover the actual areas of the building that the tenant rents or leases; it will pay only for the parts of the building that the tenant does not rent. An example might help explain this better.

Example:

Let’s say that business XYZ, Inc. rents unit A of a four-unit office building. If XYZ, Inc. causes a fire that damages both unit A and unit B, the per occurrence portion of its insurance policy will only cover damages to unit B. It will not pay for damages to unit A, because unit A is the space XYZ, Inc. leases.

Damage to Rented Premises (sometimes called Fire Legal Liability) is the other coverage a tenant needs when they rent space. This coverage is often included in a general liability policy as well but many times is not specifically mentioned in lease contracts. In the example above, Damage to Rented Premises would be the coverage that would pay for unit A that XYZ, Inc. rented.

The reason I bring this up as a blog article topic is that Damage to Rented Premises coverage is often overlooked. Since it is left out of many lease contracts, businesses don’t think to check with their insurance carrier about it. Your typical commercial general liability policy will only include $100,000 to $500,000 of this coverage. If company XYZ, Inc. in the above example rented a large space, this may not be enough coverage, and it could end up paying for some of the damages out of pocket.
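The arithmetic of that coverage gap can be sketched in a few lines. This is a toy illustration, not an actual claims calculation; all the dollar figures are invented for the example.

```python
# Hypothetical illustration of the tenant's out-of-pocket exposure when
# the Damage to Rented Premises limit is smaller than the actual loss.
# All dollar amounts are made up for this example.

def out_of_pocket(fire_damage, premises_limit):
    """Amount the tenant pays itself once the policy limit is exhausted."""
    return max(0, fire_damage - premises_limit)

# Suppose XYZ, Inc. rents a large space and a fire causes $750,000 of
# damage to its own rented unit, but the policy carries only a typical
# $300,000 Damage to Rented Premises limit.
gap = out_of_pocket(750_000, 300_000)
print(gap)  # 450000 paid by the tenant, not the insurer
```

With a limit sized to the actual space (say, $800,000 in this hypothetical), the same function returns zero, which is the point of having the lease reviewed before signing.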

So next time you rent a space for your business, be sure to have Fey Insurance Services review the lease and double-check your commercial general liability insurance limits to make sure you are covered in case of a large fire.

Tuesday, October 23, 2012

Apple and Sony: an eerie parallel?


(Note: This post is a break from "serious" econ blogging...)

OK, don't go and make any financial trades based on this (or on anything you read in any one blog post), but check this out:


Charts courtesy of Yahoo Finance, idea courtesy of my friend Dayv Wachell.

It's well known that Steve Jobs idolized Sony, especially its founder Akio Morita. Morita was a Jobs-like figure, maniacally focused on design and on pleasing the consumer (while his partner, Masaru Ibuka, the Wozniak of Sony, handled the initial technical wizardry), and loved by the public. Sony even inspired Jobs' wardrobe.

There are other parallels. Sony's big break was a portable music device; so was Apple's. Like Apple today, Sony in the 90s was known for having cult-like fans in its domestic market; these "Sony-heads" would essentially buy anything that Sony put out, even as the company's image slipped internationally. I've even heard people allege that those fans provided the company with a profit cushion that allowed it to ignore problems on the horizon.

In any case, when Sony's charismatic founder died, the firm was at the top of its game, and on top of the world. After Morita's death in October of 1999, Sony's stock price rose dramatically, only to crater a few months later.

Since Steve Jobs' death a year ago, Apple's stock has soared. But is there a parallel here? Is Apple's recent downtick the start of the kind of epic fall suffered by Sony in 2000? If so, that'll be kind of neat.

As I wrote at the top of this post, the point here is not to say "Apple = Sony! Sell!" Don't do that. (If you're interested in trading Apple stock, go be an Apple analyst; otherwise, keep your money in a nice diversified portfolio and trade only once per year.) The point is to wonder about the effects of iconic founders. Do celebrity founders who reach the iconic status of a Jobs or a Morita endow their companies with fat profit margins, by creating a group of super-fans who would pay $600 for a brick as long as it sported the company logo? Does a diehard core of super-fans lead a company to become complacent after the death of the iconic founder?

And does the death of an iconic founder lead to a predictable rise in a firm's stock price? Do investors implicitly believe - for a little while, anyway - that the "spirit" of an iconic founder lives on in the company he founded, causing them to overestimate the company's prospects, leading to a predictable crash in price? And can the timing of that price crash be predicted?

It's an interesting question, and one that's hard to evaluate econometrically, since iconic founders seem like they should be pretty rare. But maybe they're not as rare as I think, and someone out there will find a way to investigate this empirically and write a paper on it. In which case, any market-beating investment opportunity based on iconic founder deaths will promptly disappear...

Update: Apple has fallen to $550 since the writing of this post...

Sunday, October 21, 2012

Money is just little green pieces of paper!


Have you ever heard people say that "money is just little green pieces of paper"? Well, that is exactly what Steve Williamson claims in this post. Most of the post is an anti-Krugman volley, but buried in one of Steve's points is the following extremely interesting claim:
What is a bubble? You certainly can't know it's a bubble by just looking at it. You need a model. (i) Write down a model that determines asset prices. (ii) Determine what the actual underlying payoffs are on each asset. (iii) Calculate each asset's "fundamental," which is the expected present value of these underlying payoffs, using the appropriate discount factors. (iv) The difference between the asset's actual price and the fundamental is the bubble. Money, for example, is a pure bubble, as its fundamental is zero. (emphasis mine)
Can this be true? Is money fundamentally worth nothing more than the paper it's printed on (or the bytes that keep track of it in a hard drive)? It's an interesting and deep question. But my answer is: No.

First, consider the following: If money is a pure bubble, then nearly every financial asset is a pure bubble. Why? Simple: because most financial assets entitle you only to a stream of money. A bond entitles you to coupons and/or a redemption value, both of which are paid in money. Equity entitles you to dividends (money), and a share of the (money) proceeds from a sale of the company's assets. If money has a fundamental value of zero, and a bond or a share of stock does nothing but spit out money, the fundamental value of every bond or stock in existence is precisely zero.

That's a weird way of thinking about the world.  It would mean that the size of a stock bubble, measured in percentage terms, is always and everywhere infinite. It would mean that the size of a stock bubble, measured in absolute terms, is just the price of the stock - that Google's stock now has a bigger "bubble component" than Pets.com's ever did, simply because Google's stock price is higher than Pets.com's ever was. If money is a pure bubble, this must be the case.

So it's a weird way of thinking about the world...but is it correct?

It seems to hinge on the definition of "fundamental value". Usually we define "fundamental value" as the (discounted) amount of money you'll have if you hold on to an asset. But if money has no fundamental value, then this is zero.

So what is "fundamental value"? Is it consumption value? If that's the case, then a toaster has zero fundamental value, since you can't eat a toaster (OK, you can fling it at the heads of your enemies, but let's ignore that possibility for now). A toaster's value is simply that it has the capability to make toast, which is what you actually want to consume. So does a toaster have zero fundamental value, or is its fundamental value equal to the discounted expected consumption value of the toast that you will use it to produce?

If it's the latter, then why doesn't money have fundamental value for the exact same reason? After all, I can use money to buy a toaster, then use a toaster to make toast, then eat the toast. If the toaster has fundamental value, the money should too.

So does saying "money is a pure bubble" mean that toasters have no fundamental value, and that therefore, the price of toasters - or, indeed, of any non-consumable good - is a pure bubble? If "fundamental value" = "consumption value", it seems that it must mean exactly that. Now we are into a very weird way of thinking about the world.

Or is there another way to define "fundamental value", besides "expected discounted stream of money payments" or "expected discounted consumption value"? I can't think of one...any takers?


Update: Brad DeLong and Paul Krugman weigh in. Paul suggests a more expansive definition of "bubble", while Brad conjectures about what Steve Williamson might mean. And yes, it feels weird calling Paul Krugman by his first name when we've never actually met...

Update 2: Steve Williamson weighs in:
The payoffs on my stocks and bonds, and the sale of my house, may be denominated in dollars, but that does not mean that the value of those assets is somehow derived from the value of money.
Not in general, no. But if the fundamental value of money is precisely, exactly zero then it does mean that. Any finite number multiplied by zero is still zero, so using Steve's definition of "fundamental value" - whatever the heck that is - the expected discounted present "fundamental" value of the stream of (money) payments from any stock or bond is precisely, exactly zero. As for the definition of "bubble", Steve claims that I disagree with his definition ("price > fundamental value"), but actually I do not disagree; that is one of the two main definitions out there (the other being "a rapid rise and crash of prices"), and I think it's perfectly fine.

Update 3: Nick Rowe has some thoughts.

Update 4: David Glasner thinks I've made a mistake. But I haven't made a mistake. If there exists a machine whose only possible function or use is to spit out assets that have zero fundamental value, then that machine has zero fundamental value. There exist many financial assets whose only possible function or use is to spit out fiat money. If the fundamental value of fiat money is always identically zero (as Williamson claims), then the fundamental value of those financial assets is always identically zero.

Update 5: David Andolfatto attempts to rebut my claim that if FV(money)=0, then the FV of most financial assets is also identically zero. Here is his attempted rebuttal:
What of Noah's claim that if money is a bubble, then nearly every financial asset is a bubble? This just seems plain wrong to me. Financial assets are typically backed by physical assets. For example, the banknotes issued by private banks in the U.S. free-banking era (1836-63) were not only redeemable in specie, but they constituted senior claims against the bank's physical assets in the event of bankruptcy. Mortgages are backed by real estate, etc.
I don't think this is a very good rebuttal. Sure, there are examples of financial assets that can be exchanged directly for real assets (without being first exchanged for money). But these are few and far between. Most financial assets only pay you in money, no matter what happens. So I don't think David's rebuttal really works.

Note: None of these critics has yet offered a concrete definition of "fundamental value". The whole point of my post is to ask for a concrete definition of fundamental value...so far I haven't got one.

Update 6: Steve Williamson finally does provide a definition of fundamental value, cribbing from Allen, Morris, & Postlewaite (1993) (which, by the way, is an excellent paper which you should read if you have time):
To summarize, we are arguing that the fundamental value of an asset is the present value of the stream of the market value of dividends or services generated by that asset.
According to this definition, money is priced above its fundamental value, because money pays no dividends and thus has a fundamental value of zero. Also note that according to this definition, T-bills have a fundamental value of zero, since they pay no coupons. In other words, by this definition, the market value of the redemption payment of a bond does not count toward its fundamental value.

It's not the definition I'd choose; I'd include the redemption in the fundamental (in which case money would have a positive fundamental, since you can "redeem" it for itself). But "fundamental value = dividends" is a perfectly consistent definition. Great! Steve also writes:
I'll leave you to judge whether Allen, Morris, and Postlewaite are better or worse economic theorists than Paul Krugman or Noah Smith.
I can resolve half of this question for you right now: Allen and Morris (and almost certainly Postlewaite, though this is the only paper of his I've read) are better theorists than I am. All you aspiring theorists out there, take a lesson from those guys, and remember to define your terms explicitly and precisely!
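The difference between the two definitions can be sketched numerically. This is a toy calculation under invented assumptions: a hypothetical two-year bond, made-up coupon and redemption amounts, and an arbitrary 5% discount rate.

```python
# Sketch of the two competing definitions of "fundamental value",
# applied to a hypothetical 2-year bond. All numbers are invented.

def pv(cashflows, r):
    """Present value of cashflows[t] received at the end of year t+1."""
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cashflows))

r = 0.05
coupons = [4, 4]     # annual coupon payments
redemption = 100     # principal repaid at maturity (end of year 2)

# Dividends-only definition (the one Williamson cribs from
# Allen, Morris, & Postlewaite): count only coupons/dividends.
fv_dividends_only = pv(coupons, r)

# Redemption-inclusive definition (the one I'd choose): coupons
# plus the discounted redemption payment.
fv_with_redemption = pv(coupons, r) + redemption / (1 + r) ** 2

# A zero-coupon T-bill pays no dividends at all, so its fundamental
# is exactly zero under the first definition but positive under the second.
fv_tbill_dividends_only = pv([], r)  # 0.0
```

Under the first definition the T-bill, like money, is "all bubble"; under the second it is not. The disagreement is entirely about which cash flows count, not about the discounting arithmetic.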

Thursday, October 18, 2012

Reinhart-Rogoff vs. Bordo-Haubrich (with grandstanding by John Taylor)


If you follow econ blogs at all, you'll have been reading lots about the dustup between Carmen Reinhart & Kenneth Rogoff, whose research argues that financial crises cause slow economic recoveries, and Michael Bordo & Joseph Haubrich, whose research argues that recoveries after financial crises are usually very rapid. Here is a Bloomberg op-ed by R&R defending their work.

The argument is politically important, because it tells us how good a job the Obama administration has been doing. If R&R are right, then Obama has been a good steward of the economy, since America's recovery has slightly outperformed the average of their sample of historical post-crisis recoveries. But if B&H are right, then Obama has done a historically bad job. Thus it is no surprise to find Mitt Romney's economic advisors, in particular John Taylor, hawking the Bordo-Haubrich research and disparaging that of Reinhart and Rogoff.

First of all, do not listen to John Taylor. He is not being a scientist right now, he is being a politician. Paul Krugman is right; this is an example of how politics hurts the academic discipline of economics. But unlike Krugman I think it's inevitable; you can hardly expect John Taylor not to do his job and support his boss. People know to take that into account when reading what he writes, and Taylor knows they take it into account. Are we ever going to get economists to stop advising political candidates? Are we ever going to get political candidates to stop insisting that their advisors support their campaign narrative? To each of these questions I answer: Maybe, but I am not optimistic.

But do pay attention to the academic dispute between R&R and B&H. It's very interesting. How do the two research teams arrive at such different conclusions? Essentially, there are three big differences in the methodologies used by the two teams. 

Difference 1: R&R compare recoveries across different countries. B&H only look at the U.S.

Difference 2: R&R define the "strength of a recovery" as the time required to reach the pre-crisis level of GDP per capita; B&H define the "strength of a recovery" as the rate of total GDP growth at a certain time following the trough of the recession.

Difference 3: R&R define a "financial crisis" much more narrowly than B&H.

Let's talk about Difference #1. Because B&H include only the U.S., they ignore episodes like Japan's crisis-and-recovery in the early 1990s. This means that, for one thing, B&H have a much smaller sample than R&R. If you believe that every nation is fundamentally different, this is unavoidable; but if you believe that "financial crises" are a universal phenomenon, then B&H are making a big mistake. 

It also means that B&H are comparing across different periods of history. This doesn't seem appropriate to me. For one thing, in its earlier history, the United States was experiencing "catch-up growth", which means that the trend rate of growth was much higher than it is now. For another thing, past eras had considerably higher productivity growth than the current era, which also raised the trend rate of U.S. growth. Finally, as R&R point out in their op-ed, U.S. population growth was higher in the past. B&H, by failing to detrend their GDP series, leave out all of these important facts.

Basically, I think R&R's methodology is much better here. B&H, by refusing to even look at other countries, are potentially throwing away a huge amount of information. Sure, combining samples across countries introduces a lot of omitted variables, but you can always just compare within-country analyses to cross-country analyses and note whether and how the two are different. And you can always just make a list of potential cross-country structural differences. Then you let the reader decide for herself whether cross-country or single-country makes more sense. I think this is much better than simply choosing one specification and sticking with it.

OK, let's talk about Difference #2. This is partly a case of an apples-to-oranges comparison; the two research teams are measuring different things, and their stories are not necessarily incompatible. B&H tell a story of a "string-plucking" effect, where financial crises are followed by very deep recessions, and deeper recessions mean faster, but longer, recoveries. R&R's observation that recoveries from financial crises take longer than others could be consistent with that string-plucking story. 

(The point of contention appears to be over the "shape" of recoveries - R&R contend that financial crises produce L-shaped recoveries, while B&H say there is no conclusive evidence of that. The difference is caused by the difference in the definition of "financial crises", which we'll discuss in a moment.)

Note, by the way, that this second point shows that John Taylor is being a bit disingenuous when he uses B&H's results as a stick with which to beat the Obama Administration. Here, and again here, Taylor agrees with B&H and R&R that "there is no disagreement that recessions associated with financial crises have tended to be deeper than those without financial crises." In the "string-plucking" model proposed in the appendix of B&H's paper, they claim that deeper recessions will be followed by faster recoveries; in this model, one reason for a slower recovery under Obama is that the recession of 2009 was not as deep as recessions during the 1800s. So John Taylor is overlooking the obvious implication of B&H's model - that Obama slowed the recovery by reducing the severity of the recession.

OK, on to Difference #3 - the definition of a "financial crisis". My instincts tell me that B&H's more expansive definition of financial crisis is wrongheaded - after all, they include 1981 as a "financial crisis", even though basically everyone believes that that was a "Fed recession" caused by the Volcker disinflation. Intuition strongly suggests that R&R's restrictive definition of a "financial crisis" is much more credible.

BUT, I don't think we should always trust our intuition. It is certainly possible that R&R constructed their definition of "financial crises" by looking at the data, picking out L-shaped recoveries, noticing that what happened to the financial systems of countries right before those L-shaped recoveries looked different in some respects from what happened prior to V-shaped recoveries, and then defined those observed differences as "financial crises". 

Is this a bad or wrong approach? Heck no! It's exactly what I would have done. It's a naturalistic approach. You observe patterns in nature and you write them down. That's how science gets all of its insights.

But it's an incomplete approach. If you observe a pattern and then conclude that the pattern is structural, you are data-mining. Before we believe a theory, we need to use it to make out-of-sample predictions. In this case, what that means is that before we accept R&R's definition of "financial crisis", we really need to wait and watch history unfold, and see if subsequent L-shaped recoveries still correlate with the things R&R define as the essential characteristics of a "financial crisis". That will take a long time.

Alternatively, we could use microfoundations. If we successfully identified the processes by which R&R-defined financial crises affect recoveries (and B&H-defined crises don't), we could conclude in favor of R&R's definition without having to wait for out-of-sample crises to unfold.

But until we do at least one of those things, I am not willing to say with certainty that R&R's definition of crises, intuitive though it may be, is better than B&H's.

So, in conclusion: I like R&R's approach better than B&H's, because it comes at the problem from more different angles. This is how I think the best empirical research is done; you ask a question, and then you attack that question with multiple data sources, multiple alternative assumptions, and multiple models. This is how Justin Wolfers, for example, attacked the question of whether prediction markets or opinion polls do a better job of forecasting election results. B&H don't do this; they throw away the information contained in other countries, and they don't try alternative definitions of "financial crisis". In addition, I think they make a mistake by not adjusting their GDP growth data for long-term trends.

And I think no one should take John Taylor's promotion of B&H's results seriously, since he is part of Team Romney.

However, this does not mean I totally believe the results of Reinhart & Rogoff. The fact that their results ring true to me might just be a function of how long those results have been publicized in the media. The fact is, the data sample they have to work with is small and riddled with all kinds of potential confounding effects and omitted variables. That is what macro has to deal with, folks. It ain't pretty.

Wednesday, October 17, 2012

What is math, and why should we use it in economics?


In my last post, I pointed out that the Nobel Prize-winning work of Lloyd Shapley and Al Roth makes heavy use of mathematics, and indeed would be completely impossible without math. This, I said, is evidence against the idea that economics doesn't need (or shouldn't use) math.

But then some commenters asked me: What do you mean by "math"? And I thought that was an interesting question.

There is no "correct" definition of the word "math", any more than there is a correct definition of the word "art", or the word "love". There are many different definitions, all of which are drawn from similar connotations; in other words, people look at a bunch of things, say "This is math, and that is math", and then try to distill and formalize the similarities between the things that seem like math. For example, the definition I tended to like in college was called the "formalist" definition: 
"Mathematics is the manipulation of the symbols of a language according to explicit, syntactical rules."
Basically, this just means "math" = "logic". Philosophically, I'm fine with that. It's an expansive definition. But it's not very helpful when talking about economic methods, since it includes lots of stuff that people wouldn't normally call "math".

So what do I think is a useful definition? When it comes to scientific methodology, I think of "math" as basically being the same thing as "precision of meaning." This working definition is not a yes-or-no sort of thing; it's a sliding scale. Methods can be more math-y or less. 

So what do I mean by "precision of meaning"? Basically, something with a precise meaning has fewer alternative things that it could mean. For example, compare the two scientific propositions:

1. If you push something, it will push you back.

2. Momentum is conserved.

The second statement has a more precise meaning than the first. For example, the first statement could mean "If I push on something with a force of 5 Newtons, it will push on me with a force of 5 Newtons in the exact opposite direction that I pushed." Or, it could just as easily mean "If I push on something with a force of 5 Newtons, it will push on me with a force of anywhere between 1 to 1,000,000 Newtons, in a direction 15 degrees east of the direction I pushed." But the second statement can only mean the first of those two things, not the second. 

Therefore, I would say that the second statement is more mathematical than the first. Note that both of these statements are logical statements; for example, you can apply the rules of first-order logic to either statement to rule out the situation where I push something and it doesn't push me back at all. By the formalist definition, we can do "math" with either statement. But my "precision" definition makes a distinction between the two.

So by this definition, are probabilistic statements less mathy than deterministic ones? No, as long as they are explicit about the fact that they are probabilistic statements.

Are qualitative statements less mathy than quantitative statements? Not necessarily ("The sign of the first derivative is positive" is qualitative but is precise in its meaning), but in practice, this often tends to be the case. Quantitative statements must be precise, while qualitative statements may or may not be. This is just due to differences in the languages we use for expressing qualitative and quantitative statements. And this tendency is why people usually think math is about numbers and/or symbols that stand for numbers.

What, then, to raise the old question once more, is mathematics? The answer, it appears, is that any argument which is carried out with sufficient precision is mathematical, and the reason that your friends and ours cannot understand mathematics is not because they have no head for figures, but because they are unable [or unwilling, DRH] to achieve the degree of concentration required to follow a moderately involved sequence of inferences. This observation will hardly be news to those engaged in the teaching of mathematics, but it may not be so readily accepted by people outside of the profession. For them the foregoing may serve as a useful illustration.
So there you go. Great minds think alike...and mine occasionally happens to stumble to the same conclusions.

So why should we use math in economics? Well, I can think of a number of reasons:

1. We may want to make precise predictions about what will happen in a market.

2. We may want to make precise predictions about the conditions under which things will happen in a market.

3. Precise statements often help resolve debates, avoiding the phenomenon of "talking past each other".

4. Precise statements often lead to unintuitive but logically inescapable results.

5. It is usually easier to check sets of precise statements for logical inconsistencies.

I think all of these reasons are good reasons sometimes and bad reasons sometimes (note how imprecise of a statement that is!). I have no hard-and-fast rule about how much precision to use, and when. But I do know that if you tried to implement a Shapley-Roth matching algorithm without mathematically precise statements about what happens when, it would be hopeless. 

And I also know that in the blogosphere, many debates go on and on without being resolved, when both sides are really just talking past each other. Egos get bruised, grudges develop, and understanding is not advanced, even when the different sides' positions are not mutually incompatible or even that far off. That's why, when debates get really long and confusing, I think it's time to whip out the math, define terms, and get really precise. (By the way: In my experience, defining terms is really the critical piece of this. It's very very hard to make imprecise statements when all your words are precisely defined!)

So are there times when we should use less math in economics? Sure. Sometimes we understand a phenomenon so little that imprecise statements are more valuable than precise ones; precise formulations, if we believe them, give us the illusion of understanding, while imprecise statements, by pointing us in many directions at once, give us a menu of options for seeking the truth. And I also suspect (without proof) that some authors use excessive precision as a form of obscurantism, cloaking simple ideas in daunting reams of equations, or performing byzantine manipulations of simplistic assumptions, in order to deter outsiders from entering their hyper-specialized sub-field and criticizing their work. 

But these are cases in which the purpose of imprecision is to lead us to greater future truth. And that truth, if it is found, will certainly be expressed with great precision - i.e., if there is an economic theory that really works, it's going to use some math. The only time not to use math in econ is when we haven't found the right math yet.

And in practice, I find that a few of the people calling for less math in economics (You know who you are!) don't seem to have any such goal in mind. There are a few people out there who would rather econ stay imprecise forever - so that nobody will ever be proved wrong or right, and we can let a million flowers bloom, and everyone's scholarly opinion about the economy will be equally valid. Paul Krugman discusses these folks when he says:
[Some people] claim to reject neoclassical economics, but their alternative is not an alternative model but a lot of verbiage; they talk at the economy, and imagine that by so doing they achieve a higher level of sophistication and realism than economists who try to express their ideas in terms of little models. 
And they’re kidding themselves; all they’ve done is hide their implicit models and prejudices behind a dust cloud.
Agreed. Math is not always the most appropriate tool in economics. But the more real successes economics achieves, the more good math it will use.

Update: And here is a useful reminder that the things people call "math" don't always meet my definition...computer-generated gibberish was accepted for publication in a math journal. Gibberish, of course, has no precision of meaning at all.

Update 2: Alex Marsh has a good post that discusses the pitfalls of using math in economics. The main pitfall he identifies is that people start to believe in their own math because it's simple. Marsh is absolutely right. Making simplifications is a necessary evil, and when people do it, sometimes they forget - or decide not to believe - that the things they left out of the model still exist. Believing that your own oversimplifications are the Laws of the Universe is easy, seductive, and deadly. Only empiricism - the relentless insistence on matching models to real-world data - can provide an effective check on this tendency.

Monday, October 15, 2012

A Nobel for economics that really works


Out here in the blogosphere, it is common to hear things like the following:

1. "Economics doesn't work; it has no practical applications."

2. "Economics will never discover any stable scientific laws, because human behavior changes."

3. "Economics shouldn't use math, because math can't describe human behavior."

4. "Economics is not a science."

I have some sympathy for these viewpoints. But economics is a very broad discipline, and I think there are many cases in which these criticisms couldn't be more wrong. There are cases in which economics works, in which it does discover "laws", and in which difficult math is absolutely essential. For example, consider the theories that won the Economics Nobel Prize this week.

The prize, given to Lloyd Shapley (who, by the way, spends his summers at Stony Brook) and to Alvin Roth (who was recently hired by Stanford despite being commonly cited as working for Harvard), was awarded for the invention of Matching Theory. Matching Theory is basically an algorithm - a mathematical technology - for finding optimal matches between pairs or groups of people. It incorporates human preferences, optimization, and strategic behavior, so it is economics. Alex Tabarrok gives a great introduction to the theory in this blog post.
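
The core of the theory is the Gale-Shapley "deferred acceptance" algorithm. Here is a minimal sketch in Python; the doctor and hospital names are made up for illustration:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance; each prefs dict maps a name to an
    ordered list of acceptable partners, most preferred first."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_pick = {p: 0 for p in proposer_prefs}  # next index each proposer will try
    engaged = {}                                # receiver -> current proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_pick[p]]     # p's best not-yet-tried receiver
        next_pick[p] += 1
        if r not in engaged:
            engaged[r] = p                      # r tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:  # r trades up to a better proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                      # r rejects p; p tries again later
    return {p: r for r, p in engaged.items()}

# Made-up example: two doctors matched to two one-slot hospitals.
doctors = {"alice": ["city", "county"], "bob": ["city", "county"]}
hospitals = {"city": ["bob", "alice"], "county": ["alice", "bob"]}
match = deferred_acceptance(doctors, hospitals)
print(match)  # {'bob': 'city', 'alice': 'county'} -- a stable matching
```

Roth's kidney-exchange and residency-match designs are substantially more elaborate than this toy version, but they build on the same deferred-acceptance logic.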

I would like to point out some things about Matching Theory:

1. It is testable, tested, and correct for a very broad class of situations. There are many situations in which the theory makes predictions about when matching will be stable. As Tabarrok points out, these predictions have been confirmed in a number of real-world situations (and, I am sure, in controlled experiments as well).

2. It is practically applicable. Implementing the algorithms designed by Al Roth has resulted in much improved availability of kidney transplants.

3. The theory uses a lot of math. It does not rely on verbal characterizations of human behavior, but on hard quantitative predictions derived from non-trivial mathematics. Without that math, the theory would be useless.

In other words, Matching Theory is what most scientists would call science. Nor, I believe, is it the only such example. Critics of the economics profession should realize this. It's an important fact that not all fields of economics - and not all techniques and theories and schools of thought within each field - are created equal, in terms of their testability, real-world applicability, and appropriate use of mathematics.

As for the Nobel, I see this decision as increasing the credibility of the prize itself. The econ Nobel traditionally lies somewhere in between the Peace Prize - which everyone knows is a big joke - and the Physics, Chemistry, and Medicine Prizes, which have managed to maintain very high levels of credibility. But the Economics Prize has always seemed to alternate between testable, applicable, "science-y" sorts of economics (think James Heckman's selection models, Daniel Kahneman's experiments, William Vickrey's auction theory, or the 2007 prize for mechanism design) and less testable, more "storytelling" kinds of econ (I am sure you know which ones I'm talking about). It is no coincidence that the former tend to be prizes for microeconomics and the latter tend to go to macroeconomists.

My guess - and this is just a wild guess - is that in the years since the Crisis of 2008, the enormous wave of criticism directed at the econ profession has not been lost on the Nobel Prize committee. The only macroeconomists selected for the prize in the past few years have been Chris Sims and Thomas Sargent - two hardcore empiricists whose work serves to illuminate the data limitations and huge error margins faced by macroeconomics. It is my sincere hope that the prize will continue to move in the direction of the science prizes, toward testable, applicable theories and credible empirical results.

And in the meantime, my heartiest congratulations to Lloyd Shapley and Al Roth, who richly deserved the prize.

Update: Mark Thoma is thinking along similar lines.

Packaging Health Plan Fee Details for a Post-Election Launch

Self-insured employers have been waking up in recent weeks and months to the reality that they will soon be hit with new fees to finance a transitional reinsurance program provided for in the Affordable Care Act (ACA).  But they are likely going to have to wait on the details until after the November elections.

As a quick refresher, the fees will be earmarked to capitalize reinsurance facilities in each state that serve as financial backstops for health insurance companies which offer individual coverage plans through public health insurance exchanges slated to come on-line in 2014.  Health insurance companies will also be subject to this fee.

What has caused some confusion is that the statute and a precursor rule finalized earlier this year reference that third party administrators (TPAs), on behalf of self-insured plans, will be responsible for paying the fee.  In private meetings over the summer, regulators clarified that the intent was not for TPAs to be financially liable for these fees, but rather that they will be expected to assist in the collection of these fees from their clients.  Those details, along with the specific fee amounts, are still under wraps.

This blog has learned that an increasing number of large self-insured employers have been complaining directly to senior White House officials that the fee is fundamentally unfair because it helps to support the profitability of health insurance companies, with no direct benefit for employers.  Responses have ranged from “we hear you but there is nothing we can do” to “there should be no complaining now because you (the employer community) signed off on this ACA provision during the legislative process.”

The former response is expected, but the latter response deserves some fact checking.

According to a source directly involved with drafting this section of the ACA, there is an interesting back story that is not widely known.  When legislative language was being developed, Democratic drafters did not understand the difference between independent TPAs and insurance company-owned administrative services only (ASO) units, nor that ASOs are typically separate business entities from their insurance company parents.

This is important because ACA legislative drafters recognized that it did not make sense to impose fees on self-insured plans to subsidize insurance companies.  They figured that by referencing TPAs they would exclusively tap the fully-insured marketplace, on the assumption that all TPAs were owned by insurance companies.

Only later in the legislative drafting process did they come to understand that many self-insured employers had no insurance company connection.  But by that time there was no turning back, and there was no alternative to collecting the necessary revenue – all self-insured employers were going to have to pay.  No wonder that the regulators have been slow with details on how this is all going to work.

So this brings us back to the timing of when these details will be published.  Clearly, if the Administration thought the employer community was going to be happy with the new rules, they would be released prior to Election Day.  But the best intel suggests that the proposed rules are done and sitting right now at the Office of Management & Budget (OMB), awaiting a green light for release, likely shortly after Election Day.

The one positive detail is that the rules will be coming out in proposed form, so there will be an opportunity for formal stakeholder input -- just another thing to look forward to as we enter the holiday season.

Sunday, October 14, 2012

Michigan Health Care Claims Tax May Just Be The Opening Bid

This blog has previously reported about the one percent health care claims tax that the state of Michigan has imposed on all payers, including self-insured group health plans.  We have also commented on the refusal of most within the employer community to support a legal challenge to the law, which should be preempted by the Employee Retirement Income Security Act (ERISA).

While one prominent Michigan employer has privately been a big financial supporter of this self-insurance legal defense initiative, the state’s largest employer organizations, as well as at least one major national association focused on ERISA preemption issues, have been on the sidelines.

Now, it’s probably unrealistic to expect that the average self-insured employer will take the time to think about the longer term implications of ERISA preemption erosions.  Significant as these implications are, those employers are more concerned about the immediate financial implications.

Fair enough.  Let’s talk about this shorter term perspective.

We have just learned from a very reliable source that the revenue collected so far this year from the health claims tax is much lower than projected -- so much lower, in fact, that the state Legislature will likely consider a proposal to raise it early next year.

Employers who ran the numbers and determined that they could absorb a one percent tax should get ready to do a new set of calculations, perhaps on a yearly basis going forward, should a federal appeals court not strike down the law.  At some point this health care tax could become an important factor as employers consider whether self-insurance is as cost effective as it otherwise would be.

And in case you think this issue is contained to Michigan, think again.  Other cash-strapped states are watching how things play out in Michigan, and at least some are likely to follow suit if they believe such action will go unchallenged.

When a camel gets its nose under the tent, the occupants should not be surprised that the damage often cannot be contained.  Self-insured employers with workers in Michigan may soon learn this important lesson.

Saturday, October 13, 2012

Debt and the burden on future generations, Part MMMVIII


I don't want to bore people, but once again this question has come up (see here, here, here, here, here, here, and here for the whole battle royale), and I thought I'd blog about it, because hey, every econ blog should occasionally do some little "thought experiment" type stuff, even if it doesn't draw quite as much traffic as making fun of commenters.

The question, once again, is: "Does government debt impose a burden on future generations?" I took a crack at this question back in January, and my answer is still the same, but I'd like to phrase it more concretely.

Here's how I like to think about this question. In my mind, to "impose a burden on future generations" means  "to decrease the consumption possibilities of future generations". So the question is really whether or not the size of today's stock of government debt reduces the total consumption possibilities of people not currently born. In other words, if government debt is $1,000,000,000 today, does that mean that the consumption of future people must be lower than if government debt were $1 today?

Let's assume a closed economy. In that case, the economy's maximum potential consumption at any point in time is determined by the productive capacity of the economy at that time. Productive capacity is determined by the size of the capital stock, the labor force, the availability of natural resources, and the level of production technology. (For convenience, I'm defining the "capital stock" as including all consumer durables, and defining "consumption" as including the flow of services from those durables.) Now let's assume that the technology level, the labor force, and the amount of natural resources are all completely exogenous, so that the government cannot affect these things (this may not be realistic but we could always drop that assumption later). So the productive capacity of the economy at any point in time is just a monotonic function of the economy's capital stock - more capital at time T means more potential consumption at time T.

Now let's define "burden on future generations". That means that at some time T > 0 (t=0 being today), the potential consumption of the economy will be lower. Since the potential consumption of the economy at any time t is determined entirely by the size of the capital stock at time t, what we are really asking is whether or not the following proposition is true:

∀{D_t},{C_t} ∃T>0 s.t. K_T = f(D_0), where f'(D_0) < 0 

Here K is the capital stock, D is government debt, f is some function, t=0 is today, D_0 is today's stock of government debt, {D_t} is the path of government debt between t=0 and t=T, and {C_t} is the path of consumption between t=0 and t=T. If this proposition is true, then no matter what anybody does in the future, higher debt today necessarily means a smaller capital stock at some point in the future. 

Note that this proposition is not stated as formally as it could be or really should be, for which I apologize.

So now, let's think about what determines the capital stock at a future time T. This is determined by the sequences of consumption and investment from t=0 to t=T-1. In order for K_T to be constrained to be lower than it would otherwise be, it must be the case that K_(T-1) is lower than it would otherwise be (this follows easily from the assumption that the production function is monotonic in the level of the capital stock). By backwards induction, the above proposition can only hold if the following proposition holds:

K_1 = f(D_0), where f'(D_0) < 0 

Remember, t=1 means tomorrow. In other words, only if tomorrow's capital stock depends in a negative way on today's stock of government debt can it be true that a higher D_0 forces K_T to be lower at some point in time.

Tomorrow's capital stock depends entirely on today's level of investment (today's level of production is fixed, because today's capital stock is fixed). So our question now reduces to:

Question: If I_0 = g(D_0), where I_0 is today's investment and g is some function, what is the sign of g'(D_0)? 

If g'(D_0) is positive, then a higher government debt stock today means that the economy will invest more today; this means that government debt will impose no burden on future generations.

So is it possible that g'(D_0) > 0? In other words, given two societies that are identical in all respects except that Society 1 has a higher stock of government debt than Society 2, is it possible that Society 1 will invest more today (and consume less today) than Society 2?

Of course it's possible. The investment/consumption choice is entirely behavioral. And when I say "behavioral" I am including the behavior of the government. If Society 1's government chooses to cut welfare and use the money to build a bunch of roads, for example, it could easily invest more and consume less today than Society 2; the high level of D_0 in Society 1 would not prevent it from being able to do this.
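
The two-society comparison can be sketched numerically. Every number below is made up for illustration; the point is only that tomorrow's capital stock depends on today's consumption/investment choice, not on D_0 itself:

```python
# Two-period sketch; all numbers are hypothetical.
def capital_tomorrow(k_today, output, consumption, depreciation=0.1):
    investment = output - consumption            # closed economy: Y = C + I
    return (1 - depreciation) * k_today + investment

k0, y0 = 100.0, 20.0   # identical capital stock and output in both societies

# Society 1: huge government debt, but its government cuts welfare
# spending and builds roads, so consumption today is lower.
k1_high_debt = capital_tomorrow(k0, y0, consumption=12.0)

# Society 2: negligible government debt, but it consumes more today.
k1_low_debt = capital_tomorrow(k0, y0, consumption=16.0)

# The high-debt society ends the period with MORE capital,
# so g'(D_0) > 0 is perfectly possible.
assert k1_high_debt > k1_low_debt
```

Notice that `capital_tomorrow` doesn't even take D_0 as an argument: the debt stock only matters through whatever behavioral rule links it to the consumption choice.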

So government debt need not be a burden on future generations. It all depends on how economy-wide consumption/savings decisions react to the size of the stock of government debt. And that is heavily dependent on the behavioral model one chooses. Might a higher stock of government debt outstanding induce a society to invest less and consume more (which would constrain future consumption to be lower under certain additional assumptions)? Sure.

So the answer to the question is: It depends. What does it depend on? It depends on how consumption/savings decisions react to the size of the stock of government debt, which depends on the behavior of the government, firms, and households. Modeling that behavior is a major challenge.

Also, note that this does not answer the question of "Does government borrowing impose a burden on future generations?" This is because the economy's consumption-savings choices may respond differently to changes in debt than to levels of debt. But in general, the answer will have the same form.

So to sum up:
  • Must higher government debt today lead to lower potential consumption sometime in the future? No.
  • Does higher government debt today lead to lower potential consumption sometime in the future? Maybe; I don't know.
  • Does higher government debt today lead to lower actual consumption sometime in the future? Maybe; I don't know.
  • Must higher government borrowing today lead to lower potential consumption sometime in the future? No.
  • Does higher government borrowing today lead to lower potential consumption sometime in the future? Maybe; I don't know.
  • Does higher government borrowing today lead to lower actual consumption sometime in the future? Maybe; I don't know.

(Just in case you were wondering: The example Nick Rowe creates here is a case of higher government borrowing today leading to lower actual consumption in the future. He uses a "fruit-tree economy" with no capital (or if you prefer, with K fixed), so potential consumption in each period is fixed. In that sort of economy, it is impossible for anything to "impose a burden" on any cohort, using my definition of "imposing a burden".) 

Update: More interesting conversation between me and Nick over at his blog, as well as in the comment section of this post. We look deeper into the issue and get some more interesting results.

Update 2: Nick and I have been discussing the issue. I think we agree on everything now, and a number of interesting conclusions have emerged. Let me see if I can translate them into plain English...

The "Burden" Result: It is possible that the existence of past government transfers can ensure that either currently living people or as-yet-unborn (or both) must get screwed, relative to the baseline in which no transfers occurred. These past government transfers can be accomplished by government borrowing and spending; in that case, the past government transfers will affect the value of today's government debt. This is the upshot of Nick's model.

The "No Future Burden" Result: However, no matter what transfers happened in the past or how much government debt we have today, then given some simple assumptions, it is always possible to get away with only screwing people who are currently alive (and yes, you can quote me on that!). This is the upshot of my proof.

Note that these two results are not incompatible at all. So Nick and I don't disagree.

The "Dues Paid" Result: Given some more simple assumptions, it is always possible to limit the total amount of screwage (in consumption terms, not utility terms) to the amount of consumption that was, in the past, transferred away from people who are currently alive. In other words, the total amount of screwage never has to be bigger than the total "dues" already paid by currently living people. This is something I realized while talking to Nick over at his blog. I think it's kind of interesting.

The "Debt Does Not Equal Burden" Result: This means that the govt. debt number may not equal the burden number (and in general does not). The size of the current stock of government debt may be much larger than the total amount of the aforementioned screwage. In other words, govt. debt may be $10,000,000,000 today, but the total amount of necessary screwage might be much smaller, or might even be zero. This can happen, for example, if the government spends money on the same people it taxes, or if people leave government bonds to their children in a certain way. So debt is not a book-keeping device that faithfully records the amount of necessary future screwage.

(Note that this means that government debt's effect on society is very different from the effect of one household's debt on that household. If I borrow $10,000 and spend it today, I'm going to need to take a $10,000 hit in the future in order to pay it back. But if the government borrows $10,000 today, it's quite possible that nobody ever has to take a hit at all. I am not sure, but I think that this might be Paul Krugman's main point.)

(Update: Antonio Fatas thinks that this last result should be the main takeaway from the debate.)

In conclusion: When you ask "Does debt impose a burden on future generations?", you have to be very careful about exactly what you mean when you ask that question. But if you are careful - if you use math in your explanation, state all definitions and assumptions clearly, and above all think clearly and don't get mad - then the truth will out.

Stop-Loss Regulation and the Coming Zombie Apocalypse

Key regulatory officials made some interesting comments about their interest in self-insured health plans utilizing stop-loss insurance at an American Bar Association event last week in Washington, DC.

Phyllis Borzi, assistant secretary at the U.S. Department of Labor, said her agency is working on two ACA-required studies, one on wellness that is due in 2014 and an annual report to Congress on self-insured plans.

“To try and help get information on self-insured plans, a couple of things have happened. Probably most recently what we asked for was we put out a tri-agency request for information (RFI),” Borzi said.

George Bostick, benefits tax counsel at the U.S. Treasury Department, said the RFI “produced a number of paranoid responses,” but Borzi then assured the audience that there were no ulterior motives to the RFI.

“It is what it is. We don't have enough information, we think. It's not like we have some hidden agenda, pro- or anti-stop-loss; we just want to find out what's going on out there,” Borzi said.

Another panelist, Amy Turner, senior adviser and special projects manager in EBSA's Office of Health Plan Standards and Compliance Assistance, echoed Ms. Borzi's comments about the departments needing more information on stop-loss insurance and wanting feedback from a “broad group of stakeholders.”

The departments are sifting through the comment letters responding to the RFI, but Turner said not to expect any stop-loss guidance in the near future.

“To the extent that some people maybe saw the RFI and thought, ‘Oh my goodness! Is something like the zombie apocalypse going to happen?’ I think we're just working on the comment letters. I wouldn't expect any major guidance from the departments very quickly on this,” Turner said.

This blog will give Ms. Turner the benefit of the doubt that a zombie apocalypse is probably not in the offing, regardless of any further regulatory action that may be taken.

That said, the regulators will have to forgive the “paranoia” expressed by self-insurance industry stakeholders.  After all, the current administration has proven to be very adept at sidestepping normal legislative procedures and inclined to give regulatory agencies the green light to test the bounds of statutory authority when political needs arise.

Speaking of political needs, it’s worth reminding everyone how the regulators explained the reason for the RFI.  The following is an excerpt from the RFI introduction:

It has been suggested that some small employers with healthier employees may self-insure and purchase stop-loss insurance with relatively low attachment points to avoid being subject to certain consumer protection requirements while exposing themselves to little risk.  This practice, if widespread, could worsen the risk pool and increase premiums in the fully-insured small group market, including in the Small Business Health Options Program (SHOP) exchanges that begin in 2014.

If, in fact, the regulators reach these same conclusions, is it reasonable to believe they will simply sit on their hands?  We’ll be sure to keep an eye out for zombies as these developments continue to play out, just in case.

Wednesday, October 10, 2012

Deductible Basics

When a covered insurance claim happens, the insured will, in many cases, be responsible for the first few dollars of the loss. The amount they are responsible for is called the deductible. More often than not, deductibles are associated only with property damage to the insured’s own possessions, whether that is a damaged vehicle, damage to their contents or buildings, or even loss of income. On occasion you may see deductibles on liability claims, but not often.

Deductibles can come in many different forms on insurance policies. You can have a flat dollar amount, say $500; oftentimes you see this type of deductible on home insurance or business property insurance. Some deductibles are a percentage of the loss, like 1% or 10%; you will sometimes see this type on a home or business, but many times it is associated specifically with earthquake coverage. Deductibles can also be vanishing deductibles: as the insured racks up years with no losses, the deductible gradually drops each year until eventually it is $0.

In most cases the deductible is per claim. This means that each time you have a claim you pay a deductible. It isn’t like your typical health insurance policy where you have an out of pocket deductible for the year and once you meet that limit you are done with the deductible. In property and casualty, if you have a $500 flat per claim deductible you will pay $500 each time you have a claim no matter how many you have in a given year.

Deductibles can be a helpful cost-savings tool. They can be raised to help drop premiums, but the insured needs to understand that by raising deductibles they have taken on a bit more of the burden of possible claims.

It is important for insureds to understand what their deductible is so that they can be prepared to meet it financially if a claim were to happen. I mention this especially in connection with percentage deductibles: the insured should know whether the percentage applies to the cost of the claim or to the coverage limit. For example, if a person had a $200,000 house and an insurance policy with a 5% deductible on the coverage limit, it would be best to know before a claim happens that the deductible is $10,000. Someone who doesn't know their policy might assume it is 5% of the cost of the claim.
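
The difference between the two percentage bases is easy to see with a small, hypothetical helper function (the function name and numbers are made up for illustration, not an industry API):

```python
def deductible_owed(claim, coverage_limit, pct, basis="limit"):
    """Insured's out-of-pocket share under a percentage deductible.
    `basis` selects whether the percentage applies to the coverage
    limit or to the claim amount. Illustrative helper only."""
    base = coverage_limit if basis == "limit" else claim
    return min(claim, pct * base)  # you never pay more than the claim itself

# $200,000 coverage limit, 5% deductible, $12,000 claim:
on_limit = deductible_owed(12_000, 200_000, 0.05, basis="limit")  # $10,000
on_claim = deductible_owed(12_000, 200_000, 0.05, basis="claim")  # $600
assert on_limit > on_claim
```

On that $12,000 claim the two readings differ by $9,400, which is exactly the kind of surprise worth avoiding.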

Deductibles are just one of many facets of an insurance policy. Be sure to familiarize yourself with your policy and its coverages, and consult your independent insurance agent whenever you have any questions.



Tuesday, October 9, 2012

Will econ blogging hurt your career?


Many people ask me this. Short answer: It's impossible to know.

Reason 1: The number of bloggers is small, and blogging is new. In terms of econ grad student bloggers, there have been me, Steve Randy Waldman, Adam Ozimek, Daniel Kuehn, Kevin Bryan, JW Mason, and maybe one or two others. That's not a statistically large sample, and there are lots of confounding factors (research quality, blog subject matter, etc.). So it's basically impossible to do any kind of quantitative analysis to answer the question.

Reason 2: Very few of the people you annoy will actually inform you of the fact. For example, I've criticized modern macro a lot. Maybe that has annoyed tons of macroeconomists, to the point where they wouldn't consider working with me, hiring me, or allowing my papers into a journal that they refereed. But if so, they're not going to write me emails and say "Hey, I think you're a jerk." They're just going to quietly decide that I'm a jerk, and I'll never know why my paper really got rejected.

So basically, nobody will know for a long time how blogging impacts people's careers. Those of us who have tried it are basically just very tolerant of Knightian uncertainty. In fact, I love uncertainty. In many situations I'd rather try something just to see what happens. I'm the character that gets killed first in every horror movie, but that's fine with me, since life is not generally like a horror movie.

But here are a few reasons to think that blogging won't be as bad for your career as many people fear:

Reason 1: Blogging is great for meeting people. Through blogging, I've met awesome people like Richard Thaler, Erik Brynjolfsson, George Akerlof, James Heckman, Betsey Stevenson, and Roger Farmer, not to mention fellow blogger/economists like Mark Thoma, Tyler Cowen, Alex Tabarrok, Justin Wolfers, Brad DeLong, John Cochrane, Greg Mankiw, Robert Waldmann, Scott Sumner, Steve Williamson, David Andolfatto, and others (I still haven't met Paul Krugman, in case you were wondering). That doesn't mean those people think I'm an elite researcher just because I blog, or will do me any personal career-related favors. Blogs are not a good-ol'-boy network. But it's very helpful to meet this sort of person, to get ideas and perspective, learn how to think about things, and see what's going on in the world of economics. Not to mention networking: senior people advise younger people, and younger people are potential co-authors.

Reason 2: Blogging really doesn't take up much time. It's like any other hobby; it may put a crimp in your social life, but work will still come first. Heavy blogging can require two hours a day, but I would say I spend an average of only 20-30 minutes a day on it. And I never feel pressured to post more. Blogging is not a job, so it's not an obligation.

Reason 3: Blogging helps you think. It helps to get things down on paper. Sometimes you have an idea, and then when you start to write it down you realize how vague and/or implausible and/or illogical it is. Writing an idea clarifies the idea, and it helps you practice communicating your idea to others. This will help with writing papers. Even the "blog-fights" help with logical thinking and being able to dissect arguments.

Reason 4: Name recognition is somewhat important in economics, and there is a bit of evidence that blogging helps to build name recognition. This evidence should be taken with several grains of salt, of course, for reasons discussed above.

So there are reasons that blogging might be good for one's career. But these should be viewed simply as mitigating the (unknowable) risks of blogging, not as reasons to start blogging in the first place. The real reason to blog is to affect the national conversation, to get involved with policy and national affairs in some small way, and simply for the sheer joy of thinking about stuff. In other words, the same reasons that people should go into academia in the first place. If your main goal is to make money and wear a suit and have a swank office - and there's absolutely nothing wrong with that - go find a nice safe job in a bank!

Update: In the comments, Steve Williamson adds:
Here's an old blogger perspective. There is risk associated with getting into anything you have not tried before. When you're young you have to take risks, otherwise you never get anywhere. There is risk in blogging just as there is risk in anything else we do. You can say foolish things in a blog post. You can say foolish things when you present a paper at a conference. In the first case you reveal your foolishness to more people, but they are more forgetful. Tomorrow they will move on to another idiot. Old economists who have had some success in the profession and have tenure can coast - they don't have to take risks. But coasting is no fun, and rust never sleeps. If you have tenure, you can use it to your advantage. Offend a few people. Speak your mind. Maybe it matters.
Sounds like great advice to me...

Update 2: In the comments, Frances Woolley adds:

Academic publishing is becoming increasingly dysfunctional. People have to publish to get tenure/promotions/government or other funding, so everyone wants to get stuff out, but no one wants to referee for journals or read their contents (except for a dozen or so top journals).  
As academic journals lose relevance, conferences, high profile working paper series like the NBER, and blogs are gaining. I get way more eyeballs - and way more ideas out there into the public domain - by blogging than I would by publishing stuff in mid-ranked journals.
Interesting...