I’ve been conferencing up and down the Eastern seaboard for the past week, and many of my colleagues and collaborators are at their specialties’ major conferences as well. I found this result from the American Society of Clinical Oncology (ASCO) meeting fascinating.
Real-world patients *do not* do as well as those enrolled in clinical trials.
Very cool analysis by Angela Green, @peterbachmd, Sham Mailankody &co @ASCO #ASC019 @carriebennette pic.twitter.com/hd3kHMfFB7
— Aaron Mitchell (@TheWonkologist) June 1, 2019
This study takes the survival improvements reported in the Phase 3 clinical trials of major (and expensive) new cancer drugs and compares them against results from a major cancer registry.
Ideally, the real-world results would match the clinical trial results, or perhaps even beat them slightly, since new technologies, new techniques, and learning by doing should produce a small incremental improvement wedge over time. That is not the usual case. Instead, the real-world results are a bit worse than the clinical trial results.
Why does this matter beyond the obvious that lower survival times are less desirable than longer survival times?
As we move toward value-based and outcome-based contracts, we need to figure out which evidence is reliable and which factors belong in the contracts themselves. The trial evidence is scientifically rigorous, but for some reason the translation to practice suffers degradation. Should initial contract phases be based on unadjusted clinical trial outcomes? Should there be a discount rate with some type of upside kicker, to account for the likely case of lower pragmatic performance while leaving room for gains in the case of a happy surprise? How long should contracts run before they are renegotiated or re-benchmarked against pragmatic, real-world evidence?
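To make the discount-plus-kicker idea concrete, here is a minimal sketch of how such a payment term might be structured. Every number in it (price per month of survival gained, 20% discount, 50% kicker share) is invented for illustration; nothing here comes from the study or from any actual contract.

```python
# Hypothetical sketch: pay for the trial-reported survival gain at a discount,
# then share part of any surplus if real-world performance beats the
# discounted benchmark. All parameters are illustrative assumptions.

def contract_payment(trial_gain_months, real_world_gain_months,
                     price_per_month=10_000, discount=0.20, kicker=0.50):
    """Return the payer's total payment under a discounted benchmark
    with an upside kicker for better-than-benchmarked real-world results."""
    benchmark = trial_gain_months * (1 - discount)  # discounted trial benefit
    payment = benchmark * price_per_month
    surplus = real_world_gain_months - benchmark
    if surplus > 0:  # the "happy surprise" case
        payment += surplus * price_per_month * kicker
    return payment

# Real-world result matches the trial: the payer kept the 20% discount,
# and the drugmaker earns back half the surplus over the benchmark.
print(contract_payment(trial_gain_months=10, real_world_gain_months=10))  # 90000.0

# Real-world result degrades, as the study suggests is typical: the
# discount absorbs the shortfall and no kicker is paid.
print(contract_payment(trial_gain_months=10, real_world_gain_months=7))  # 80000.0
```

The design choice worth noting: the discount protects the payer against the expected trial-to-practice degradation up front, while the kicker keeps the manufacturer's incentive to deliver (and document) real-world performance.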
I don’t know the answers to any of those questions, but this study puts their thorny implications squarely on the table.