How AI slop is causing a crisis in computer science

AI slop is flooding computer science journals and conferences. Credit: Quality Stock/Alamy

Fifty-four seconds. That’s how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. “Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier,” wrote Wimmer, a researcher in human–computer interaction at the University of Regensburg in Germany, on Bluesky.

Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process. Although this can accelerate science, it also makes it easy to create fake or low-quality papers, known as AI slop.

Computer science was a growing field before the advent of LLMs, but it is now at breaking point. The 2026 International Conference on Machine Learning (ICML) has received more than 24,000 submissions, more than double the number for the 2025 meeting. One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December (ref. 1).

“It’s a volume far beyond what the current review system was designed to handle,” and makes “thorough and careful evaluation increasingly infeasible”, says Seulki Lee, a computer scientist at the Korea Advanced Institute of Science and Technology in Daejeon, South Korea.

Volume is not the only problem. Many authors fail to properly validate or verify AI-generated content, says Lee. Analyses of submissions to prominent AI conferences show that some papers are entirely AI-generated and dozens contain AI fabrications, known as hallucinations. Since the advent of ChatGPT in November 2022, the number of monthly submissions to the arXiv preprint repository has risen by more than 50%, and the number of articles rejected each month has risen fivefold to more than 2,400 (see ‘Rejection rates climb’).

REJECTION RATES CLIMB: the rejection rate for computer-science submissions to arXiv rose from roughly 3–4% in 2021–2023 to nearly 10% by early 2026, approximately doubling between January 2025 and January 2026.

One response is to fight fire with fire, by using AI in peer review or to weed out fake papers. Other options are blunter. The arXiv has, for example, added eligibility checks for first-time submitters and banned computer-science review articles that have not previously been accepted by a peer-reviewed outlet. The organizers of the International Joint Conferences on Artificial Intelligence (IJCAI), meanwhile, have sought to limit submissions by introducing a policy that requires researchers to pay US$100 for each paper after their first. These payments are then distributed among reviewers.

The stakes are high, says Lee. If the issue is not addressed, “trust in scientific research, particularly within computer science, faces a substantial risk of erosion”, he says.

AI slop

AI slop is hard to spot by conventional means, says Paul Ginsparg, a physicist at Cornell University in Ithaca, New York, and the founder of the arXiv. Volunteer moderators can no longer gauge a paper’s merit by how well it engages with the relevant literature and methods. “AI slop frequently can’t be discriminated just by looking at abstract, or even by just skimming full text,” he says. This makes it an “existential threat” to the system, he adds.

So far, conferences have coped mainly by boosting the reviewer pool. Last year, the International Conference on Learning Representations (ICLR) introduced a requirement for anyone submitting a paper to also peer review others’ submissions. The organizers of the high-profile NeurIPS conference acknowledged that their recruitment drive meant they would probably be using more junior reviewers. Several conferences already incentivize high-quality reviewing by offering the best reviewers benefits such as waived conference-registration fees, says Lee.

Several conferences are introducing new measures next year to deter slop. To stop authors from submitting multiple similar papers, ICML organizers now require authors to send reviewers copies of their other submitted articles; violations could result in all papers by those authors being rejected.

More-radical ideas include moving away from publishing through conferences to a rolling, journal-based model. This would avoid an annual crunch of peer review, but some researchers might not want to miss the prestige and networking opportunities of conferences, says Nancy Chen, an AI researcher at the A*Star Institute for Infocomm Research in Singapore, who served as a programme chair for NeurIPS last year.

Weeding out the chaff


