It is no longer just Robert F. Kennedy Jr. who publishes reports containing false references or misquoted authors. Dubious references are cropping up in a growing number of places, pointing to the misuse of artificial intelligence by some researchers.
Referred to as both “hallucinations” and “misstatements”, these erroneous pieces of information generated by AIs have been flagged across all spheres of activity for the past four years. But in science they take a distinctive form, one potentially harmful to the scientific literature: footnotes or bibliographies that cite real authors who never wrote the research attributed to them, or that point to research that does not exist. Moreover, because these AIs work through probabilities, their creators caution, such missteps are unlikely ever to be eliminated completely.
The first alarms were raised in 2024, in computer science in particular: since the advent of “large language models” such as ChatGPT and Claude, the number of submitted articles has skyrocketed. Concurrently, so has the number of articles that must be rejected, either because their “signatories” turn out to have had them written entirely by an AI, or because they contain the false references typical of AI. A study published last January estimated that 2.6% of the 18,000 articles submitted in 2025 to three computer science conferences contained at least one false citation, compared with 0.3% in 2024.

In October 2025, the preprint server arXiv, which has existed since the 1990s to let researchers post articles that have not yet been peer-reviewed, announced that for the first time it was blocking the publication of certain types of computer science articles, because verifying the flood of dubious submissions was consuming too much of its reviewers' time. The growth of “AI slop”, that is, low-quality machine-generated text, is a problem in itself; accompanied by invented citations, it puts the entire research ecosystem in crisis.
To gauge the extent of this crisis, the team of journalists at the journal Nature recently conducted an analysis, with the help of a British AI firm, which concluded that in 2025 “tens of thousands of publications”, including articles and books, “likely contain invalid references generated by AI.” The British firm makes no secret of its motive: it is seeking to develop tools to help publishers identify problematic publications.
Experiments conducted in 2025 with one of the most popular chatbots, which was asked to generate articles, revealed how it “works”: in one out of five cases, the erroneous reference was entirely invented; in almost half of the cases, it pointed to a real publication but contained one or more errors (author name, title, date, or URL). Similar patterns emerge from Nature’s recent analysis.
“Tens of thousands of publications” still represents a small percentage of the millions of scientific articles published each year. But researchers interviewed by Nature worry that we may be seeing only the beginning of a “flood of false references.”