Hallucinated citations are polluting scientific literature

Source: nature.com

The story at a glance

Fake citations hallucinated by AI tools are appearing in a growing number of scientific publications, especially in computer science. Researchers such as Guillaume Cabanac were among the first to spot them, and publishers including Elsevier, Springer Nature and Wiley now face submissions containing fabricated references. Nature's report comes amid surging LLM use in research and draws on 2025 analyses of conference papers and journals: at computer science conferences, the share of untraceable references jumped from 0.3% in 2024 to 2.6% in 2025.[[1]](https://www.nature.com/articles/d41586-026-00969-z)

Details and context

Fake citations are not new: human errors such as wrong DOIs or publication years have long occurred. But AI fabricates entirely phony references, and the problem is worsening as LLM adoption grows in fields like computer science. Publishers are seeing more of them in submissions; some cases lead to corrections when authors can explain the error (e.g., a translation tool introduced it), but many signal deeper problems with the content itself. Screening tools catch issues better before submission than after publication, yet they struggle with variations in journal reference formats, sources that are not indexed, and the overlap between human and AI errors.
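One part of that screening is mechanical and easy to sketch. The snippet below is a minimal illustration, not any publisher's actual pipeline: it flags references whose DOI fails the syntax pattern Crossref recommends for modern DOIs. A real tool would go further and query a registry such as the Crossref API to confirm each DOI actually resolves to the cited work; the `flag_suspect_dois` helper and the sample references here are hypothetical.

```python
import re

# Crossref's recommended pattern for modern DOIs. This is a syntax check
# only: a well-formed DOI can still be hallucinated, so a real screening
# tool would also look the DOI up in a registry (e.g., the Crossref API).
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def flag_suspect_dois(references):
    """Return the references whose DOI fails the basic syntax check."""
    return [ref for ref in references if not DOI_PATTERN.match(ref["doi"])]

# Hypothetical reference list for illustration.
refs = [
    {"title": "A real-looking paper", "doi": "10.1038/d41586-026-00969-z"},
    {"title": "A hallucinated paper", "doi": "10.9999/not a valid doi!"},
]
print(flag_suspect_dois(refs))  # flags only the second entry
```

Note the asymmetry the article describes: a failed syntax check is cheap to catch pre-submission, but a syntactically valid, fabricated DOI only surfaces when someone tries to resolve it.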

The rise tracks surveys showing heavy LLM use in research. Extrapolations suggest the problem extends well beyond the major publishers, risking a reproducibility crisis as readers chase citations to papers that do not exist. Publishers have responded with screening and investigations, but the scale of the problem demands better tools and greater author vigilance.

Key quotes

“Now the problem is not just inaccuracy, it’s about fake citations. It’s about fabricated citations, which is a whole different problem.” — Mohammad Hosseini, Northwestern University.[[1]](https://www.nature.com/articles/d41586-026-00969-z)

“We’re going to see a flood of fake references.” — Alison Johnston, Oregon State University.[[1]](https://www.nature.com/articles/d41586-026-00969-z)

“There have been cases where authors have been able to clearly document where issues have occurred... in which case the paper will be corrected.” — Chris Graf, Springer Nature.[[1]](https://www.nature.com/articles/d41586-026-00969-z)

Why it matters

AI hallucinations threaten the trustworthiness of the scientific literature, undermining citations, reproducibility, and knowledge building across fields. Researchers, reviewers, and readers waste time verifying fakes, while publishers face correction backlogs and eroding credibility. Things to watch: publisher adoption of screening tools, conference trends beyond 2026, and LLM safeguards; full fixes remain uncertain.