In an upcoming New York Times piece, David Leonhardt points out that people aren't good at reasoning about low-probability events:
We make two basic — and opposite — types of mistakes. When an event is difficult to imagine, we tend to underestimate its likelihood. This is the proverbial black swan. Most of the people running Deepwater Horizon probably never had a rig explode on them. So they assumed it would not happen, at least not to them.
On the other hand, when an unlikely event is all too easy to imagine, we often go in the opposite direction and overestimate the odds. After the 9/11 attacks, Americans canceled plane trips and took to the road. There were no terrorist attacks in this country in 2002, yet the additional driving apparently led to an increase in traffic fatalities.
I would probably call these the same mistake, rather than two different ones, but that's a minor quibble. I'll also avoid taking issue with his references to the "true probability" of an event, which would probably take us into areas too philosophical for a family-friendly blog such as this one.
A more major quibble is with the following:
When the stakes are high enough, it falls to government to help its citizens avoid these entirely human errors. The market, left to its own devices, often cannot do so.
I'm not sure which government Leonhardt is thinking of, but the only ones I'm familiar with are made up of humans who routinely commit, um, "entirely human errors."
I'm willing to agree that "markets" are pretty lousy at predicting the odds of never-before-seen events, though BP's safety procedures are presumably set through some combination of BP internal policy (not a market) and government regulation (also not a market). But are governments any better at predicting the odds of never-before-seen events?
The article admits that maybe they're not:
Yet in the case of Deepwater Horizon, government policy actually went the other way. It encouraged BP to underestimate the odds of a catastrophe.
In a little-noticed provision in a 1990 law passed after the Exxon Valdez spill, Congress capped a spiller’s liability over and above cleanup costs at $75 million for a rig spill. Even if the economic damages — to tourism, fishing and the like — stretch into the billions, the responsible party is on the hook for only $75 million.
Exxon Mobil earns $6 billion a quarter; $75 million is about 1.25% of that amount. Even if you had an oil spill every quarter, that damage cap would be a rounding error relative to profits. It might as well not exist.
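The arithmetic is easy to check. A quick sketch, using the round figures from the post:

```python
# Round figures from the post: Exxon Mobil's quarterly profit and the
# 1990 law's liability cap for a rig spill.
quarterly_profit = 6_000_000_000  # $6 billion per quarter
liability_cap = 75_000_000        # $75 million statutory cap

# The cap as a fraction of a single quarter's profit.
ratio = liability_cap / quarterly_profit
print(f"Cap is {ratio:.2%} of one quarter's profit")  # Cap is 1.25% of one quarter's profit
```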
In fact, if you dig deeper, this has nothing to do with cognitive error at all. It's not that people working in government can't estimate the odds of a spill, it's that they're more concerned that their friends in the oil industry don't suffer too much when a spill inevitably occurs:
Senators David Vitter of Louisiana and Jeff Sessions of Alabama, both Republicans, introduced a bill yesterday that would create a liability cap equal to the last four quarters of the responsible party’s profits or double the current limit, whichever is larger.
“Making a company at fault pay their last four quarters of profits is [a] much more effective way to ensure that energy companies actually pay for their mistakes without chasing many of them out of business,” said Vitter, in a release.
Yes, well, it's also an effective way to ensure that they take more huge-downside risks than they would if you didn't cap their damages. But the important thing is that they stay in business! Just like the important thing was that AIG stay in business, and that GM stay in business, and that Fannie Mae stay in business.
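For concreteness, here is what the cap formula in the bill amounts to. This is a minimal sketch; the profit figures passed in are hypothetical:

```python
# Liability cap proposed in the Vitter-Sessions bill: the larger of
# (a) the responsible party's last four quarters of profits and
# (b) double the current $75 million limit.
CURRENT_CAP = 75_000_000  # the 1990 law's cap

def proposed_cap(last_four_quarters_profits):
    """Cap under the proposed bill, given a list of the responsible
    party's profits for its last four quarters (hypothetical inputs)."""
    return max(sum(last_four_quarters_profits), 2 * CURRENT_CAP)

# A highly profitable firm would owe four quarters of profit:
print(proposed_cap([6_000_000_000] * 4))  # 24000000000 ($24 billion)

# A firm with no recent profits would owe only double the old cap,
# so the downside of a catastrophe stays capped either way:
print(proposed_cap([0, 0, 0, 0]))  # 150000000 ($150 million)
```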
In markets, companies fail. Companies that can't meet their liabilities go out of business, and new companies replace them. That's how markets work. That's the discipline that markets rely on. Those are the incentives that proponents of markets have in mind.
Because when you artificially insulate companies from the consequences of their actions, that's not a market. It's a recipe for reckless risk-taking and gross malfeasance.
To be sure, I'm not claiming that absent this 1990 law and similar policies, BP would have made smart decisions about the drilling risks they took. But they almost certainly would have made smarter decisions.
If only there were a term for the cognitive bias wherein one systematically overestimates the odds that a law written by self-serving politicians will make a complex situation better.