Bloom’s Taxonomy of learning, first proposed by educational psychologist Benjamin Bloom in 1956, revolutionized the science of education by allowing the cognitive level at which students and teachers work to be classified on a simple scale. Professional academics, for example, regard any work that does not reach the sixth and highest level, Evaluation, as derivative. A proper work of Evaluation requires one not only to understand the fundamentals of a given topic, but also to weigh the competing perspectives of other scholars before reaching a coherent and original conclusion.
On the rare occasion when Megan McArdle bothered to ground her suppositions in fact, and therefore performed what a professional would call ‘learning’, McMegan arguably reached level one. McMegan correctly summarized the argument of one relatively dated theoretical report on healthcare spending and innovation, without noting that numerous equally qualified professionals disagree. Nor did McMegan note that the same authors later tested their model in the real world and concluded that their earlier study cannot fully explain what happens in the real world [correction].
One can also reach Bloom’s first level by opening the newspaper and reading a paragraph at random. Read two paragraphs in order and you will probably pick up context and reach level two. Middle schoolers who hope to earn an ‘A’ grade typically reach level three, Application, on a regular basis. Glibly making crap up, on the other hand, generally won’t net you better than a gentleman’s ‘D’.
Note the correction. Also, below the fold, I have reprinted with permission a summary that Tom sent me by email last night.
Megan McArdle’s response to being caught in a bit of make-believe is to assert her commitment to analytical rigor, a claim she defends by pointing to her mastery of the academic literature. In an example focusing on the question of whether or not a reduction in Big Pharma revenue will lead to a loss of life expectancy, she uses the rhetorical trick of both claiming and appealing to academic authority to suggest that advocating cost controls in health reform is tantamount to knocking grandma on the head.
Unfortunately, she gets the argument wrong, and she does so in a very suggestive way. She understands the form of academic discourse: citation of prior work is part of both the labor and the rhetoric of scientific communication. But she misses the actual point, which is that you have to check what people say; you can’t just take reported results on faith. This is especially true for those who are not themselves within the discipline being cited. Real experts develop all kinds of shortcuts to get to the point of new work (though they can certainly get tripped up too), but those of us who want to use such work to inform what we write for broad consumption have to put some effort into figuring out what is going on.
And what I spend way too many words doing is showing several of the different ways in which McArdle failed to do so. She didn’t detect, or perhaps didn’t care about, the obvious conflict-of-interest problem in the core piece of research she cites. She failed to notice what was missing from that paper: all the methods, assumptions, and cautions about limits that a careful reader of the literature would recognize as signifiers of serious work in anything making so large a claim, and whose absence suggests the reverse. She does not appear to have noticed, or at least questioned, the degree to which the conclusions turn on assumptions not in evidence, or not rigorously defended. (I’m thinking here of a very fraught claim about the relationship between drug-company innovation, as measured in drug approvals, and longevity. I didn’t go into this in an already too-long post, but the connection, and the assertions of very specific amounts of life lost, turn on essentially one researcher’s work: results that are not by any stretch taken as common wisdom at this point, and for a lot of good reasons.)
She didn’t ask, that is. She never seems to have done what any honest journalist, and any good scientist, would do as a matter of routine: think, just for a moment, “Does this make sense? What could be wrong here?”
If she had, she would have tumbled to the deeper and more important failure she committed here. In her attempt to demonstrate her morally superior attention to the actual research base, she cites a second paper that does not, in fact, say what she thinks it does. She either didn’t actually read it, or she didn’t understand it when she did. And then she committed the one true sin of any journalist: she didn’t pose the question to someone who could have straightened her out.