{"url":"https://www.nature.com/articles/d41586-026-00969-z","title":"Hallucinated citations are polluting scientific literature","domain":"nature.com","imageUrl":"https://images.pexels.com/photos/8326307/pexels-photo-8326307.jpeg?auto=compress&cs=tinysrgb&h=650&w=940","pexelsSearchTerm":"scientist checking references","category":"Tech","language":"en","slug":"889463f0","id":"889463f0-bf4b-4571-b45c-c59dab27458e","description":"AI-generated fake citations are rising in scientific papers as researchers use large language models for writing and searches.","summary":"## TL;DR\n- AI-generated fake citations are rising in scientific papers as researchers use large language models for writing and searches.\n- *Nature*'s analysis of over 4,000 publications estimates that tens of thousands contain invalid references, with manual checks confirming 65% of suspicious cases.\n- Publishers are deploying detection tools such as Veracity, but manual reviews remain essential to protect the integrity of the literature.\n\n## The story at a glance\nFake citations hallucinated by AI tools are appearing in growing numbers of scientific publications, especially in computer science. Researchers such as Guillaume Cabanac spotted them first, and publishers including Elsevier, Springer Nature and Wiley now face submissions with fabricated references. *Nature*'s report comes amid surging LLM use in research and draws on analyses of conference papers and journals from **2025**.
Computer science conferences saw untraceable references jump from **0.3% in 2024** to **2.6% in 2025**.[[1]](https://www.nature.com/articles/d41586-026-00969-z)\n\n## Key points\n- Computer scientist Guillaume Cabanac found a bogus citation in a **2025** *International Dental Journal* paper: a real **2021** preprint wrongly listed as a *Nature* article with an invalid DOI.\n- Experiments with GPT-4o produced literature reviews in which **20%** of references were fabricated and **45%** of the real ones contained errors, often mixing fragments into \"Frankenstein\" citations.\n- Analysis of nearly **18,000** papers from three computer-science conferences showed untraceable references at **2.6%** in **2025**, up from **0.3%** in **2024**; another study pegged rates at **2-6%** across four conferences.\n- *Nature*'s check, run with Grounded AI on **4,000+** publications from five big publishers, implies tens of thousands of invalid references; manual review of **100** suspicious papers found **65** with fakes, suggesting over **110,000** across the **7 million** works published in **2025**.\n- Publishers such as Frontiers flag **5%** of manuscripts with AI tools; editor Alison Johnston used iThenticate to reject **25%** of January **2025** submissions over fake references.\n- Grounded AI's Veracity tool, now used by IOP Publishing for screening, scores citation risk by matching error patterns learned from **20,000** synthetic papers.\n\n## Details and context\nFake citations are not new: human errors such as wrong DOIs or years have long occurred. But AI fabricates entirely phony references, and the problem is worsening with LLM adoption in fields such as computer science. Publishers are seeing more of them in submissions; some cases prompt corrections if authors can explain them (e.g., translation tools), but many signal deeper problems with a paper's content. Tools catch issues better before submission than after publication, yet struggle with variation in journal citation formats, unindexed sources, and overlap between human and AI errors.\n\nThe rise tracks surveys showing heavy LLM use in research.
Extrapolations suggest the problem extends beyond the major publishers, risking a reproducibility crisis as readers chase references that do not exist.\n\n## Key quotes\n“Now the problem is not just inaccuracy, it’s about fake citations. It’s about fabricated citations, which is a whole different problem.” — **Mohammad Hosseini**, Northwestern University.[[1]](https://www.nature.com/articles/d41586-026-00969-z)\n\n“We’re going to see a flood of fake references.” — **Alison Johnston**, Oregon State University.[[1]](https://www.nature.com/articles/d41586-026-00969-z)\n\n“There have been cases where authors have been able to clearly document where issues have occurred... in which case the paper will be corrected.” — **Chris Graf**, Springer Nature.[[1]](https://www.nature.com/articles/d41586-026-00969-z)\n\n## Why it matters\nAI hallucinations threaten the trustworthiness of the scientific literature, undermining citations, reproducibility, and the building of knowledge across fields. Researchers, reviewers, and readers waste time verifying fakes, while publishers face correction backlogs and eroding credibility. Watch for wider publisher adoption of screening tools, conference trends beyond **2026**, and LLM safeguards, though full fixes remain uncertain.","hashtags":["#science","#publishing","#ai","#ethics","#research","#integrity"],"sources":[{"url":"https://www.nature.com/articles/d41586-026-00969-z","title":"Original article"}],"viewCount":3,"publishedAt":"2026-04-07T20:44:13.256Z"}