David Oks’s essay on citations is not central to my interests, but I know there are lots of Hatters who do science and will probably have things to say; I myself found it extremely enlightening. I’ll quote the start and let you click through for the rest:
Here are a few headlines from the world of science. […] So scientists are submitting AI-generated papers; reviewers are using AI to assess them; obviously some amount of low-quality AI-generated content will end up getting approved and published. Well-regarded journals have been caught publishing papers with classic ChatGPT-isms like “here is a possible introduction for your topic” or “as of my last knowledge update.” But that’s not all. Many of those AI-generated papers are being cited by articles in other peer-reviewed journals: and many of those articles, unsurprisingly, appear to be AI-generated themselves.
It’s pretty well-known now that science is “drowning in AI slop.” In that regard, it’s not alone: AI slop is steadily infiltrating every school and workplace in the country. But there’s something about all of this that puzzles me.
I get why students, for example, would want to avoid doing homework. But I don’t really understand why scientists would want to avoid doing science. Or, rather, why they’re so eager to use AI to produce a huge number of shoddy papers. No one forced them to become scientists. I imagine that most people who work as scientists chose to do so out of something like love for the subject. So why are scientists using AI to produce and submit so much garbage?
I don’t think that the answer actually has much to do with AI. It has to do, instead, with the incentives that govern scientific institutions. You could boil it down to one word: citations.