At work, we received a response to a request for proposals that was incredible and fantastic. I don’t mean that the proposal would save money, reduce confusion, reduce false denials and holds on services, or even give a senior executive a suite full of nubile young women whose virtue had already been negotiated. I truly mean it was incredible along the lines of the product shitting cupcakes out of a unicorn’s ass incredible. However, on the first read, the response looks really good. The second read is when the bullshit starts to become obvious. My boss knew it was bullshit but could not quite put her finger on why, so I spent the past two days deconstructing the proposal and thinking about bullshit.
There are a couple of obvious signposts of bullshit in an argument that I think are relevant to general policy analysis. If you start to see the following signs, you are either engaging with a sophomore in college who just learned something really cool in an introductory class but has neither the advanced classes in the field nor the experience to know better, or you are seeing bullshit. These two categories are not mutually exclusive.
The units of analysis make no sense
Avik Roy’s “study” of sticker shock in 2014, based on average prices per county, used the county as its unit of analysis. The county is a reasonable first unit of analysis, as most state regulators regulate plans at the county level. However, it is a shitty final unit of analysis, as there are 3,144 counties in the US. Eight counties contain slightly more than 10% of the US population, and the most populous county in the US, Los Angeles County, is roughly 120,000 times larger than the least populated, Loving County, Texas. In his “analysis”, these two counties count the same.
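The distortion is easy to show with a toy calculation. All the numbers below are hypothetical, chosen only to illustrate the mechanism: when every county counts the same, a handful of tiny counties can drag the average far away from what the average person actually experiences.

```python
# Hypothetical premiums: one huge county and three tiny ones.
counties = [
    {"name": "Big County",    "population": 10_000_000, "avg_premium": 250},
    {"name": "Tiny County 1", "population": 1_000,      "avg_premium": 400},
    {"name": "Tiny County 2", "population": 1_000,      "avg_premium": 400},
    {"name": "Tiny County 3", "population": 1_000,      "avg_premium": 400},
]

# Unweighted: every county counts the same, regardless of who lives there.
unweighted = sum(c["avg_premium"] for c in counties) / len(counties)

# Population-weighted: what the average *person* actually pays.
total_pop = sum(c["population"] for c in counties)
weighted = sum(c["avg_premium"] * c["population"] for c in counties) / total_pop

print(f"Unweighted county average:  ${unweighted:.2f}")  # $362.50
print(f"Population-weighted average: ${weighted:.2f}")   # ~$250.04
```

The unweighted county average says premiums are over $360; nearly everyone in this toy market pays about $250. That is the gap between counting counties and counting people.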
The comparisons are wildly bizarre
Again, Avik Roy compares community-rated insurance with a fairly rich benefit package to underwritten insurance with significant exclusions of coverage. As I showed last year, this study included plans that excluded mental health coverage, excluded maternity coverage, and included plans that rejected outright a quarter of the individuals who applied for coverage. It is really easy for an insurance company to offer low prices when it is statistically unlikely to pay big claims because it has screened the risk pool. So any comparison between underwritten policies and community-rated policies has to be taken with extreme caution. It can be done, but straight-up comparisons can’t be made.
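The screening effect can also be sketched with made-up numbers. In the hypothetical pool below, an underwritten plan that rejects the sickest quarter of applicants can charge a fraction of the community-rated premium without being any more efficient at delivering care:

```python
# Hypothetical expected annual claims for eight applicants,
# sorted from healthiest to sickest.
expected_claims = [500, 800, 1_200, 2_000, 3_500, 6_000, 12_000, 40_000]

# Community rating: everyone is accepted, so the break-even premium
# must cover the average expected claim of the whole pool.
community_premium = sum(expected_claims) / len(expected_claims)

# Underwriting: reject the sickest quarter of applicants outright,
# then price to the average of who is left.
accepted = sorted(expected_claims)[: int(len(expected_claims) * 0.75)]
underwritten_premium = sum(accepted) / len(accepted)

print(f"Community-rated premium: ${community_premium:,.2f}")   # $8,250.00
print(f"Underwritten premium:    ${underwritten_premium:,.2f}")  # ~$2,333.33
```

The underwritten price looks like a bargain, but the difference is entirely who was allowed to buy the policy, not what the policy does for the people holding it.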
The claims are incredible
Timothy Jost looked at Avik Roy’s Obamacare replacement plan and made a note about an incredible set of claims that the free market/Universal Exchange would shit cupcakes out of its ass:
He claims it would increase access to providers by 4 percent (98 percent for Medicaid recipients) and average health outcomes by 21 percent, [my bold] while reducing the federal budget deficit by $29 billion over the first 10 years and $8 trillion over 30 years. It would, he claims, reduce average commercial premiums by 17 percent for individuals and 4 percent for families by 2023.
These claims are based on analysis of the proposal conducted by Stephen Parente, an American Enterprise Institute Scholar. I can find, however, no description of the methodology, or for that matter of the inputs, applied in this analysis. In particular, how Parente and Roy modeled an improvement in health outcomes, something the CBO never attempts, is a complete mystery.
The bolded part, increasing average health outcomes by 21%, flies in the face of most evidence, which suggests access to great medical care is a 10% to 15% determinant of health status. A 21% improvement in health status is an incredible claim, and it should have incredible evidence to support it. That evidence should be made public. However, it has not been disclosed, nor has anyone with significant credibility and the charge to conduct that type of analysis ever published anything similar to that model. It could happen, but the support for that number is extraordinarily weak.
The underpants gnomes dominate the theory of change
As we all know, the underpants gnomes have a simple business model/theory of change to get rich:
1) Steal underpants
2) ???
3) Get rich
When the underpants gnomes have to do the heavy lifting in a theory of change, it is either a first draft that needs to be fleshed out, an affinity scam, or bullshit. Congressman Ryan (R-Wis) wants to use dynamic scoring to get around the fact that he is making two incompatible promises — lower tax rates, especially on the wealthy, and revenue neutrality. Dynamic scoring is step two of the theory of change.
Don’t look at past predictions
Be extremely skeptical of people who don’t audit their past predictions. Jonathan Chait ripped Reason magazine’s Peter Suderman apart on his Obamacare predictions:
The latter study comes in for criticism by Peter Suderman, Reason’s indefatigable health-care analyst. Like the entire right-wing media, Suderman’s coverage of Obamacare has furnished an endless supply of mockery of the law’s endless failures and imminent collapse. While some of his points have validity, it’s fair to say that the broader narrative conveyed by his work, which certainly lies on the sophisticated end of the anti-Obamacare industry, has utterly failed to prepare his libertarian readers for the possibility that the hated health-care law will actually work more or less as intended.
And yet, in another way, the conservative media has provided a useful lagging indicator of Obamacare’s progress. The message of every individual story is that the law is failing, the administration is lying, and so on. The substance, when viewed as a whole, tells a different story. Here is how Suderman, to take just one example, has described the continuous advancement of the law’s coverage goals:
People get things wrong all the time. That is fine. It is not fine when there is no evaluation of the process that produces wrongness, as that guarantees the continuation of the Garbage In, Garbage Out loop.
There are plenty of other high-quality bullshit detection tools that are useful in policy analysis, but the tools above can be safely applied by anyone with some curiosity and interest in a subject.