FBD Enables LLM Debating for Fact Checks

Source: doi.org

TL;DR

The story at a glance

This IEEE conference paper proposes FBD (Fact-Based Debating), a method that employs large language models (LLMs) to debate facts in order to verify claims. The authors, primarily from Tianjin University's College of Intelligence and Computing, include Mufan Yu, Guozheng Rao, Xin Wang, Kaijia Tian, and Jiayin Zhang, with Li Zhang from Tianjin University of Science and Technology.[[3]](https://confcats-siteplex.s3.us-east-1.amazonaws.com/ijcnn25/IJCNN_2025_Program_77b2d8aef4.pdf) It appears in the proceedings of the International Joint Conference on Neural Networks (IJCNN) 2025, held June 30 to July 5 in Rome, Italy.[[5]](https://dblp.org/db/conf/ijcnn/index) The paper is now part of the post-conference publication on IEEE Xplore.

Details and context

The paper addresses fact verification, a key challenge in AI amid rising disinformation, by leveraging LLMs in a novel debating framework called FBD. Most authors affiliate with Tianjin University's College of Intelligence and Computing; Li Zhang is from the School of Economics and Management at Tianjin University of Science and Technology.[[3]](https://confcats-siteplex.s3.us-east-1.amazonaws.com/ijcnn25/IJCNN_2025_Program_77b2d8aef4.pdf)
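Because the full text is paywalled, the specifics of FBD are not available; the following is only a minimal sketch of the general multi-agent LLM-debate pattern for claim verification, not the authors' algorithm. The `query_llm` function is a hypothetical stub standing in for a real LLM API call, and the role names and verdict labels are assumptions for illustration.

```python
def query_llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g. an API request)."""
    # Canned responses keep the sketch self-contained and runnable.
    if role == "proponent":
        return "Argues the claim is supported by the evidence."
    if role == "opponent":
        return "Argues the evidence does not entail the claim."
    return "SUPPORTED"  # placeholder judge verdict


def run_debate(claim: str, evidence: list[str], rounds: int = 2):
    """Alternate proponent/opponent turns over a shared context, then
    ask a judge model for a final verdict on the claim."""
    transcript: list[tuple[str, str]] = []
    context = f"Claim: {claim}\nEvidence: {' '.join(evidence)}"
    for _ in range(rounds):
        for role in ("proponent", "opponent"):
            turn = query_llm(role, context)
            transcript.append((role, turn))
            context += f"\n{role}: {turn}"  # each turn sees prior arguments
    verdict = query_llm("judge", context)  # judge reads the full transcript
    return verdict, transcript
```

With a real model behind `query_llm`, the judge's verdict would typically map onto standard fact-verification labels such as SUPPORTED, REFUTED, or NOT ENOUGH INFO.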

IJCNN 2025, the premier neural networks conference, took place in Rome with over 2,300 attendees and submission numbers up from prior years.[[6]](https://arxiv.org/abs/2603.19244) The paper fits conference themes such as AI for verification, as seen in related workshops on deepfakes and fake news.[[3]](https://confcats-siteplex.s3.us-east-1.amazonaws.com/ijcnn25/IJCNN_2025_Program_77b2d8aef4.pdf)

Full text is paywalled on IEEE Xplore; metadata comes from conference listings and academic databases like Semantic Scholar and DBLP.[[1]](https://ieeexplore.ieee.org/document/11229172)

Why it matters

LLM-based fact-verification methods like FBD could improve reliability in combating misinformation across media and AI-generated content. Researchers and developers gain a concrete method for enhancing LLM reasoning in claim checking, potentially integrable into tools for journalism or social platforms. Watch for follow-up evaluations on benchmarks or real-world applications; performance details remain inaccessible behind the paywall.

FAQ

Q: What is FBD in this paper?

A: FBD stands for Fact-Based Debating, a technique that uses large language models to conduct structured debates grounded in facts for verifying claims. The paper presents it as a method tailored for fact verification tasks. It was developed by the Tianjin University team.[[1]](https://ieeexplore.ieee.org/document/11229172)

Q: Who authored the FBD paper?

A: Lead authors are Mufan Yu and Guozheng Rao from Tianjin University's College of Intelligence and Computing. Co-authors include Xin Wang, Kaijia Tian, and Jiayin Zhang from the same college, plus Li Zhang from Tianjin University of Science and Technology.[[3]](https://confcats-siteplex.s3.us-east-1.amazonaws.com/ijcnn25/IJCNN_2025_Program_77b2d8aef4.pdf)

Q: Where and when was this paper presented?

A: The paper appeared at the 2025 International Joint Conference on Neural Networks (IJCNN 2025) in Rome, Italy, from June 30 to July 5. It is part of the official proceedings on IEEE Xplore.[[7]](https://www.proceedings.com/82657.html)

Q: What conference accepted this work?

A: IJCNN 2025, which accepted 2,152 papers from 5,526 submissions, an acceptance rate of about 39%. The event focused on neural network theory, analysis, and applications.[[6]](https://arxiv.org/abs/2603.19244)