A recent New York Magazine article set social media ablaze by asserting that college students are all using generative AI (artificial intelligence) to write their essays and that the result is a sharp decline in their critical thinking skills. Because the world loves nothing more than to declare the next generation on the brink of zombiedom, the article prompted accounts with large followings to spread the news of student brain rot:
There's nothing more amusing to me than people bemoaning the death of critical thinking in a post where they themselves refuse to use it.
Whenever I see an implausibly huge claim about a technology causing a massive cognitive shift, the first thing I do is look at the research it cites for its conclusions. I did this a few years ago when I took apart a study on which a New York Times op-ed decrying the use of laptops in the lecture hall was based. That Twitter thread is gone along with the rest of my account (had to happen), but if you're interested in the TL;DR, it has been memorialized in several places. Critically examining study design in my field is second nature to me. Some people do drunk history. I do this.
It turns out the AI-is-rotting-student-brains claim is based on one study, "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers," funded by Microsoft and published as part of conference proceedings. In other words, this article probably never went through peer review or was marked up by other scholars in any way before publication.
Reading the abstract, I could already tell we were in trouble, because the study's conclusions are based on surveys of 319 knowledge workers.
Folks: They didn't study even one student.
Exactly what are knowledge workers? People who work with knowledge for a living. This is a HUGE domain shift from college. These are people paid to churn out product at work. Work normalizes and even requires plagiarism all the time. Not so in college.
The researchers recruited people to participate in the study "through the Prolific platform who self-reported using GenAI tools at work at least once per week." So these are people who wanted to be involved in the study. They already use GenAI and they already had thoughts about it. They wanted to self-report those thoughts. The sample is already skewed.
We will bracket, for a moment, the fact that the authors are mostly corporate affiliates of Microsoft. To define "critical thinking," they draw on a taxonomy by a figure well known to education researchers, Benjamin S. Bloom. Bloom was doing cutting-edge research on critical thinking back in 1956.
Rather than view relying on nearly 70-year-old research on brains as a problem, the authors see it as an advantage: "The simplicity of the Bloom et al. framework — its small set of dimensions with clear definitions — renders it more suitable as the basis of a survey instrument."
In other words, they let their instrument define their object.
Defining your object of study by your preferred instrument is the easiest way to garbage your results. Critical thinking must be simple, because we just want to use surveys.
But critical thinking is hardly simple. And abundant research shows it is task- and context-dependent. This means "critical thinking" in the classroom is not defined the same way as "critical thinking" at work. The golden rule of literacy research is that literacy is always defined by context. The very notion of literacy is a historical construct. Read Harvey Graff for all of that.
What did the surveys in the Microsoft-funded study measure? Did they measure critical thinking? No. They measured "perception" of critical thinking: "(1) a binary measure of users' perceived enaction of critical thinking and (2) six five-point scales of users' perceived effort in cognitive activities associated with critical thinking."
Now let's go back to NYMag's claims and see what changed in its report of the study.
Citing this study in passing, the NYMag article leaps quickly from knowledge workers to all people. Then it takes out perception of critical thinking and substitutes critical thinking itself in its place. Then it leaps again to focus on Gen Z (current college students) and on an uncited, undefined, generalized "ruinous effect" of social media specifically on that generation's ability to tell fact from fiction.
As if a trillion Russian bots haven't shown us we all can't.
Every single generation thinks the next generation is ruining thinking because of a new technology. Every. Single. One. It's as perennial as apple pie. It's the literacy myth equivalent of "you'll go blind if you masturbate."
Is AI a problem in college? Yes, it is, but only because of a bigger problem: most professors never ask their students to do critical thinking in the first place.
They're not asking students to produce new knowledge in the classroom instead of regurgitating lecture notes. They're not scaffolding papers with multiple drafts and ongoing peer review so that students have to respond to local feedback. That would take effort. But here is what gets me: the people who could help these professors are just a short walk across campus. They could help you with your AI concerns. They could also help you look at the bigger picture of what you're asking students to do, and at why those students think they can do it by turning to ChatGPT.
So if you're mad that students are skimming rather than doing the deep research, if you're mad that they're just putting out opinions based on a few things they read online and took as truth, if you want all of that to end, I have a very simple message for you:
You first.
There have been a few articles here recently by college professors who report disengaged students, short attention spans, etc. I think some useful research could be done, more qualitatively, into student and professor perceptions and experiences of all this. Relying on outdated brain research seems ridiculous, not least because it ignores all the social and cultural stuff students are having to deal with and are engaged in. I fear, at base, that you've got an important point here; university teaching, at least in the UK, was in deep, disconnected-from-student-reality trouble long before ChatGPT came along.
A good read. This has my partner's imprimatur: her data-scientist background means she spends a great deal of time bemoaning the way the media prematurely grants gospel status to almost any "study" out there, recklessly amplifying low-quality science in pursuit of clickbait oversimplifications. Evidently, many journalists adopt a quasi-postmodern approach to published research, treating all the claims therein as if they were equally valid, despite lacking either the rigour or the expertise to judge.
She's also had personal frustrations with this kind of thing: on numerous occasions, journalists misrepresented research produced by her team. (We have the gin bills to prove it!) Nor were corrections typically published once the errors were brought to an editor's attention. The organisations in question were fairly well respected, too (the NYT, BBC, Guardian, Atlantic, New Statesman, Scientific American, etc.).
Though I was aware that errors like these weren't unheard of, I had no idea quite how endemic they were before we met. It has made me more sceptical of such articles ever since.