David Brooks revives Academically Adrift for another spin in the public consciousness.
In my opinion, the best review of the book came from Richard Haswell, writing one of multiple reviews in College Composition and Communication (PDF). CCC is an academic journal, so popular exposure to these reviews was likely limited, which is a shame; there’s a lot of good stuff in there. I focus on Haswell’s here for a simple reason: Richard Arum and Josipa Roksa claim to be looking at harsh reality by reference to quantitative assessment. And, indeed, this is precisely the tack that Brooks and others with political agendas have taken, filling their rhetoric with talk of cold truths, harsh data, and the like. As is typical, imagery overwhelms data. What Haswell’s review demonstrates quite ably is that, even setting aside philosophical and pedagogical objections to the nature of their assessment (and there are many valid ones), Arum and Roksa failed on their own terms. The methodology and statistics we are allowed to see are flawed, and crucial elements of the research are completely unavailable to us. As Haswell’s field is (like mine) the teaching of writing, he naturally focuses on that aspect of the book, but the methodological failures he identifies speak to the research as a whole.
In many forums, the narrative surrounding the reception of AA (which I read as part of a seminar in writing assessment last semester) has been one of recalcitrant professors who refuse to accept the book’s findings out of defensiveness, apathy, or fealty to whatever ideology they are assumed to have. The mundane reality is that the research doesn’t live up to the standards that Arum and Roksa claim. I don’t mean to push other kinds of critique of the book to the sidelines; there has been a lot of intelligent and important criticism that doesn’t concern methodology. But when people claim that cold numbers tell a particular story, it’s important to investigate whether they generated those numbers responsibly.
You should really read the whole thing, but here’s a nice taste:
What do they do? They create a self-selected set of participants and show little concern when more than half of the pretest group drops out of the experiment before the post-test. They choose to test that part of the four academic years when students are least likely to record gain, from the first year through the second year, ending at the well-known “sophomore slump.” They choose prompts that ask participants to write in genres they have not studied or used in their courses. They keep secret the ways that they measured and rated the student writing. They disregard possible retest effects. They run hundreds of tests of statistical significance looking for anything that will support the hypothesis of nongain and push their implications far beyond the data they thus generate.
This endorsement isn’t to say that Haswell’s critique is the only one that reveals serious methodological or statistical flaws in the book. Alexander Astin takes a similarly ruthless look at one of the claims that has been showing up, bold-faced, in most news stories about Academically Adrift. Like Haswell’s, Astin’s piece debunks a favored claim of the media’s before we even get to important discussions of epistemology or the purpose of the university. Since this information is out there, why have seemingly so few heard about it? I have three assumptions as I write this post. First, that most members of the news media who report on scholarly research don’t bother to actually read the research itself, but instead report based on abstracts, summaries, or what other journalists and pundits say is the takeaway. Second, that most of those who do bother to read the research lack the critical reading skills to adequately assess what they are reading. (Certainly, the point Astin makes about statistical significance and what it does not mean should be understandable to anybody with a basic understanding of the social sciences.) And third, that this particular book’s message is conducive to their interests, both in terms of sensationalism and a general anti-academic bias that has grown in our media in recent years. Unfortunately, as they are the ones who decide which narratives take hold, the methodological flaws in Academically Adrift are likely to remain outside the conventional wisdom.