Notes from the CBO score of the BCRA

The Congressional Budget Office released its score of the Senate’s Better Care Reconciliation Act (BCRA). It does not include the Cruz amendment. There is not much difference from the last score, as there are few large changes on the coverage side.

I just want to pull out a few things. The most important is Table 3, regarding Medicaid:

The largest savings would come from a reduction in total federal spending for Medicaid resulting both from provisions affecting health insurance coverage and from other provisions. By 2026, spending for that program would be reduced by 26 percent (see Table 3, at the end of this document).

It is a $575 billion cut to Medicaid. Throwing inadequate opioid-specific money or allowing a $200 billion back-door CSR funding stream won’t do anything remotely sufficient to help the people who lose coverage because of these cuts.

The next nugget is a repetition of the basic point that the value proposition of super-high deductibles is absolutely atrocious for lower-income individuals:

Because this legislation would change the benchmark plan (in part, by repealing the current-law federal subsidies to reduce cost-sharing payments), the average share of the cost of medical services paid by the plan would fall—for the 40-year-old with income at 175 percent of the FPL in 2026, from 87 percent to 58 percent—and his or her
payments in the form of cost sharing would rise. And the person’s net premiums would be higher under the legislation than under current law for plans of comparable actuarial value. Those changes, CBO and JCT estimate, would contribute significantly to a decrease in the number of lower-income people with coverage through the nongroup market under this legislation, compared with the number under current law.

The baseline deductible in 2026 is a mind-busting $13,000. This matters a lot for the people who are losing Medicaid. The deductibles are an absurdist joke.

For a single policyholder purchasing an illustrative benchmark plan (with an actuarial value of 58 percent) in 2026, the deductible for medical and drug expenses combined would be roughly $13,000, the agencies estimate… Under this legislation, in 2026, that deductible would exceed the annual income of $11,400 for someone with income at 75 percent of the FPL. For people whose income was at 175 percent of the FPL ($26,500) and 375 percent of the FPL ($56,800), the deductible would constitute about a half and a quarter of their income, respectively.
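The ratios in that passage are easy to check; a quick sketch, using only the deductible and income figures quoted above:

```python
# The $13,000 benchmark deductible and income figures from the CBO passage above.
deductible = 13_000
incomes = {"75% FPL": 11_400, "175% FPL": 26_500, "375% FPL": 56_800}

for level, income in incomes.items():
    print(f"{level}: deductible is {deductible / income:.0%} of income")
```

At 75 percent of FPL the deductible is more than the entire year’s income; at 175 and 375 percent it works out to roughly half and roughly a quarter, matching the CBO’s characterization.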

Finally, the CBO notes a clear mechanical problem that cannot be fixed without 60 votes:

The limit on out-of-pocket spending in 2026 is projected to be $10,900. (Under current regulations, the limit on out-of-pocket spending is defined by a formula based on projections of national health expenditures.) Therefore, plans with an actuarial value of 58 percent and a deductible of $13,000 would exceed that limit and would not comply with the law unless the formula used to calculate the limit was adjusted. CBO and JCT estimate that a plan with a deductible equal to the limit on out-of-pocket spending in 2026 would have an actuarial value of 62 percent. A person enrolled in such a plan would pay for all health care costs (except for preventive care) until the deductible was met and none thereafter until the end of the year.

The benchmark plan can’t be built.

Oops



Sunday Evening Open Thread: Can’t Take The Man Anywhere

You must be soooo proud, Repubs.

Apart from [facepalm]-ing, what’s on the agenda as we wrap up the weekend?



Care costs money

The most important concept in health finance is simple: sick people are expensive to cover. Let’s keep that in mind for the rest of the post.

The Independent Journalism Review captures the reaction of Rep. Mark Meadows (R-NC), head of the House Freedom Caucus, to the CBO score.

When reporters pointed out the portion of the CBO report saying individuals with preexisting conditions in waiver states would be charged higher premiums and could even be priced out of the insurance market — destabilizing markets in those states — under AHCA, Meadows seemed surprised.

“Well, that’s not what I read,” Meadows said, putting on his reading glasses and peering at the paragraph on the phone of a nearby reporter.

The CBO predicted:

“…the waivers in those states would have another effect: Community-rated premiums would rise over time, and people who are less healthy (including those with preexisting or newly acquired medical conditions) would ultimately be unable to purchase comprehensive non-group health insurance at premiums comparable to those under current law, if they could purchase it at all — despite the additional funding that would be available under H.R. 1628 to help reduce premiums.”

…
The CBO analysis was likewise adamant that AHCA’s current high-risk pool funding isn’t enough to cover sick people if states use the mandate waivers.

After reading the paragraph, Meadows told reporters he would go through the CBO analysis more thoroughly and run the numbers, adding he would work to make sure the high-risk pools are properly funded.

Meadows, suddenly emotional, choked back tears and said, “Listen, I lost my sister to breast cancer. I lost my dad to lung cancer. If anybody is sensitive to preexisting conditions, it’s me. I’m not going to make a political decision today that affects somebody’s sister or father because I wouldn’t do it to myself.”

He continued:

“In the end, we’ve got to make sure there’s enough funding there to handle preexisting conditions and drive down premiums. And if we can’t do those three things, then we will have failed.”

There is a plausible high-cost risk pool design that could theoretically work. It just costs a lot of money. The Urban Institute provides an updated floor for that type of design.

Government costs for the coverage and assistance typical of traditional high-risk pools would range from $25 billion to $30 billion in 2020 and from $359 to $427 billion over 10 years. (Figure 2)

I think this is a decent lower bound, as they don’t look at very high-cost but uncommon conditions like hematological defects, cystic fibrosis, major gastrointestinal conditions, slow-progressing cancers, or hundreds of other things. But Urban’s estimates point us in the right direction. Taking care of sick people costs somewhere between expensive and very expensive.

This is not new knowledge. Anyone of any ideological stripe who is actively trying to be a good-faith broker of information on health care finance has been shouting this basic insight for months. And yet the Senate just invited actuaries to talk with them for the first time this week. And yet the House voted on this bill without waiting for expert opinion. The bill was written without a public hearing. The product is the consequence of a process that deliberately excluded even friendly experts, who were having nervous breakdowns when they looked at the cash flows, to say nothing of the criticism of unfriendly but knowledgeable experts.

Healthcare for people with high needs is expensive.



How the CBO projects market failure

The Congressional Budget Office projects that the AHCA will lead to 15% of the population living in destabilized insurance markets because of the MacArthur/Upton amendments.

The agencies estimate that about one-sixth of the population resides in areas in which the nongroup market would start to become unstable beginning in 2020. That instability would result from market responses to decisions by some states to waive two provisions of federal law, as would be permitted under H.R. 1628. One type of waiver would allow states to modify the requirements governing essential health benefits (EHBs), which set minimum standards for the benefits that insurance in the nongroup and small-group markets must cover. A second type of waiver would allow insurers to set premiums on the basis of an individual’s health status if the person had not demonstrated continuous coverage; that is, the waiver would eliminate the requirement for what is termed community rating for premiums charged to such people. CBO and JCT anticipate that most healthy people applying for insurance in the nongroup market in those states would be able to choose between premiums based on their own expected health care costs (medically underwritten premiums) and premiums based on the average health care costs for people who share the same age and smoking status and who reside in the same geographic area (community-rated premiums).

What does that mean, and how does that happen? Let’s work through a simple model of a state with 1,000 people in its individual market.
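The full model is below the fold, but the mechanic the CBO describes, healthy people opting into medically underwritten premiums and leaving the community-rated pool ever sicker, can be sketched in a few lines. All of the numbers here are my own illustrative assumptions, not the post’s:

```python
# Illustrative adverse-selection spiral (all numbers are assumptions).
# Start with 1,000 people whose expected annual costs are heavily skewed,
# as real medical spending is: most people cheap, a few very expensive.
costs = [50 + 20_000 * (i / 999) ** 4 for i in range(1_000)]
pool = list(costs)
premium = sum(pool) / len(pool)  # initial community-rated premium

for year in range(1, 6):
    # Anyone whose own expected cost (their underwritten premium) is below
    # the community rate exits to an underwritten plan instead.
    pool = [c for c in pool if c >= premium]
    premium = sum(pool) / len(pool)
    print(f"year {year}: {len(pool):>4} people left, community premium ${premium:,.0f}")
```

Each round, the cheapest-to-cover people leave, the average cost of those remaining rises, and the community-rated premium chases it upward. That is the spiral: the community-rated market ends up holding only the people the CBO says would be “unable to purchase comprehensive nongroup health insurance at premiums comparable to those under current law.”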



Open Thread: We’ll Always Have Paris Snark

Absolutely *not* fleeing the Titanic, per fellow WH cronies:

The decision for Priebus to return to the US was pre-planned, not spur-of-the-moment, White House spokeswoman Sarah Huckabee Sanders and a second Trump adviser said.

“He was planning to come for the first stop and then head back for the budget roll out,” Sanders said.

The chaotic nature of this White House has prevented Trump’s team from doing much strategic planning, the second Trump adviser said. Leaving the trip early would give Priebus time to plan for the President’s return, the adviser said.

Some major issues are awaiting Trump back home, including the possible hiring of outside legal counsel in the Russia probe, the selection of a new FBI director, and the effort to pivot back to the President’s domestic agenda…

Yeah, like that’s any different than when Priebus was on the plane.

Reince’s job is to (try to) ram the oligarchs’ agenda through Congress, voters be damned. Trump’s agenda is to loot everything not nailed down, or that his thieving spawn can pry loose. No point in the GOP’s hand-chosen ‘Chief of Staff’ trying to keep them in line, as the Saudi portion of the trip has made abundantly clear.



Warning orders for Sunday morning

There are three important things to come out of this.

1) I am staying up past my bedtime
2) SNL is aiming for monster ratings
3) We’ll see an exacerbation of the rolling constitutional crisis on Sunday morning

I wish I were being hysterical.

Open Thread



The Great Vote Fraud Data Mistake…A Cautionary Tale

Just in time for the latest, greatest Shitgibbon pursuit of all those not-good-people who got to vote for his opponent, Maggie Koerth-Baker brings the hammer down. She’s written an excellent long read over at FiveThirtyEight on what went wrong in the ur-paper that has fed the right-wing fantasy that a gazillion undocumented brown people threw the election to the popular-vote winner, but somehow failed to actually turn the result.

The nub of the problem lies with a common error in data-driven research: a failure to come to grips with the statistical properties (the weaknesses) of the underlying sample or set. As Koerth-Baker emphasizes, this is hardly unusual, and usually not quite as consequential as it was when an undergraduate, working with her professor, found that, apparently, large numbers of non-citizens (14% of them) were registered to vote.

There was nothing wrong with the calculations they used on the raw numbers in their data set, drawn from a large survey of voters called the Cooperative Congressional Election Study. The problem was that they failed to fully grapple with the implications of the fact that the people they were interested in, non-citizens, were too small a fraction of the total sample to eliminate the impact of what are called measurement errors. Koerth-Baker writes:

Non-citizens who vote represent a tiny subpopulation of both non-citizens in general and of the larger community of American voters. Studying them means zeroing in on a very small percentage of a much larger sample. That massive imbalance in sample size makes it easier for something called measurement error to contaminate the data. Measurement error is simple: It’s what happens when people answer a survey or a poll incorrectly. If you’ve ever checked the wrong box on a form, you know how easy it can be to screw this stuff up. Scientists are certainly aware this happens. And they know that, most of the time, those errors aren’t big enough to have much impact on the outcome of a study. But what constitutes “big enough” will change when you’re focusing on a small segment of a bigger group. Suddenly, a few wrongly placed check marks that would otherwise be no big deal can matter a lot.

This is what critics of the original paper say happened to the claim that non-citizens are voting in election-shaping numbers:

Of the 32,800 people surveyed by CCES in 2008 and the 55,400 surveyed in 2010, 339 people and 489 people, respectively, identified themselves as non-citizens. Of those, Chattha found 38 people in 2008 who either reported voting or who could be verified through other sources as having voted. In 2010, there were just 13 of these people, all self-reported. It was a very small sample within a much, much larger one. If some of those people were misclassified, the results would run into trouble fast. Chattha and Richman tried to account for the measurement error on its own, but, like the rest of their field, they weren’t prepared for the way imbalanced sample ratios could make those errors more powerful. Stephen Ansolabehere and Brian Schaffner, the Harvard and University of Massachusetts Amherst professors who manage the CCES, would later say Chattha and Richman underestimated the importance of measurement error — and that mistake would challenge the validity of the paper.
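The mechanism is easy to demonstrate with a toy simulation. The sample sizes below roughly match the CCES 2008 figures quoted above; every rate is an illustrative assumption of mine, not a number from the paper:

```python
import random

# Toy demonstration of measurement error swamping a tiny subgroup.
# Sample sizes are roughly CCES-2008-shaped; the rates are assumptions.
random.seed(0)
n_citizens, n_noncitizens = 32_500, 300
citizen_vote_rate = 0.7       # most citizen respondents voted
misclass_rate = 0.001         # 0.1% of citizens tick the "non-citizen" box by mistake
true_noncitizen_voters = 0    # assume no non-citizen actually voted

# Citizens who voted AND checked the wrong citizenship box show up
# in the data as "non-citizen voters."
phantom_voters = sum(
    1 for _ in range(n_citizens)
    if random.random() < citizen_vote_rate and random.random() < misclass_rate
)
apparent_rate = (true_noncitizen_voters + phantom_voters) / (n_noncitizens + phantom_voters)
print(f"{phantom_voters} misclassified citizens create an apparent "
      f"non-citizen voting rate of {apparent_rate:.1%}")
```

Even with the true rate set to zero, a 0.1% checkbox error among 32,500 citizens produces on the order of 20 phantom “non-citizen voters,” the same ballpark as the 38 respondents the original paper leaned on.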

Koerth-Baker argues that Chattha (the undergraduate) and Richman, the authors of the original paper, are not really to blame for what came next: the appropriation of this result as a partisan weapon in the voter-suppression wars. She writes, likely correctly in my view, that political science and related fields are more prone to problems of methodology, especially in handling the relatively new (to these disciplines) pitfalls of big-, or even medium-, data research. The piece goes on to look at how and why this kind of not-great research can have such potent political impact, long after professionals within the field have recognized the problems and moved on. A sample of that analysis:

This isn’t the only time a single problematic research paper has had this kind of public afterlife, shambling about the internet and political talk shows long after its authors have tried to correct a public misinterpretation and its critics would have preferred it peacefully buried altogether. Even retracted papers — research effectively unpublished because of egregious mistakes, misconduct or major inaccuracies — sometimes continue to spread through the public consciousness, creating believers who use them to influence others and drive political discussion, said Daren Brabham, a professor of journalism at the University of Southern California who studies the interactions between online communities, media and policymaking. “It’s something scientists know,” he said, “but we don’t really talk about.”

These papers — I think of them as “zombie research” — can lead people to believe things that aren’t true, or, at least, that don’t line up with the preponderance of scientific evidence. When that happens — either because someone stumbled across a paper that felt deeply true and created a belief, or because someone went looking for a paper that would back up beliefs they already had — the undead are hard to kill.

There’s lots more at the link. Highly recommended. At the least, it will arm you for battle with Facebook natterers screaming about a non-existent voter fraud “emergency.”

Image: William Hogarth, The Humours of an Election: The Polling, 1754-55