SciFact

Semantic Scholar • 2020
Due to the rapid growth in the scientific literature, there is a need for automated systems to assist researchers and the public in assessing the veracity of scientific claims. To facilitate the development of systems for this task, we introduce SciFact, a dataset of 1.4K expert-written claims, paired with evidence-containing abstracts annotated with veracity labels and rationales.
License: CC BY-NC 2.0
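To make the dataset's structure concrete, here is an illustrative sketch of a SciFact-style record: a claim paired with evidence abstracts, each carrying a veracity label and rationale sentence indices. The field names and the aggregation helper below are assumptions for illustration, not the official schema.

```python
# Hypothetical SciFact-style claim record (field names are assumptions,
# not the official release format): each evidence abstract carries a
# veracity label plus rationale sentence indices into that abstract.
claim = {
    "id": 1,
    "claim": "Vitamin D supplementation reduces the risk of respiratory infection.",
    "evidence": {
        "doc_17": {
            "label": "SUPPORT",             # or "CONTRADICT"
            "rationale_sentences": [2, 5],  # indices into the abstract
        }
    },
}

def verdict(record):
    """Aggregate per-abstract labels into a single claim-level verdict."""
    labels = {ev["label"] for ev in record["evidence"].values()}
    if labels == {"SUPPORT"}:
        return "SUPPORTED"
    if labels == {"CONTRADICT"}:
        return "REFUTED"
    return "MIXED" if labels else "NOT ENOUGH INFO"

print(verdict(claim))  # SUPPORTED
```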

Leaderboard

Top Public Submissions
Rank | Submission | Team | Created | Abstract Label-Only (R)
-----|------------|------|---------|------------------------
1    |            |      | 2/19/2021 | 73%
2    | ParagraphJoint | PLUS Lab: Xiangci Li (UT Dallas), Gully Burns (Chan Zuckerberg Initiative), Nanyun Peng (UCLA) | 1/26/2021 | 64%
3    |            |      | 1/26/2021 | 66%
4    | Law & Econ |      | 2/11/2021 | 63%
5    | VerT5erini (BM25 Retrieval) | University of Waterloo | 1/27/2021 | 61%

Law & Econ (rank 4): fine-tuning the e-FEVER system (https://truthandtrustonline.com/wp-content/uploads/2020/10/TTO04.pdf) on the SciFact dataset, plus some tricks for improved evidence retrieval. More details, an academic paper, and a link to a GitHub repository with the code will follow shortly.
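As a rough guide to the leaderboard column, here is a hedged sketch of an "abstract label-only" style metric: a predicted (abstract, label) pair counts as correct if the gold evidence assigns the same label to that abstract, and rationale overlap is ignored. The exact leaderboard metric may differ; this is an assumption for illustration only.

```python
# Hedged sketch of an abstract-level, label-only F1 (assumed metric shape,
# not necessarily the official leaderboard implementation).
def abstract_label_f1(gold, pred):
    """gold/pred: dicts mapping claim_id -> {doc_id: label}."""
    tp = fp = fn = 0
    for cid, gold_docs in gold.items():
        pred_docs = pred.get(cid, {})
        for doc, label in pred_docs.items():
            if gold_docs.get(doc) == label:
                tp += 1  # correct abstract with correct label
            else:
                fp += 1  # spurious abstract or wrong label
        fn += sum(1 for doc in gold_docs if pred_docs.get(doc) != gold_docs[doc])
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = {1: {"doc_17": "SUPPORT"}, 2: {"doc_3": "CONTRADICT"}}
pred = {1: {"doc_17": "SUPPORT"}, 2: {"doc_3": "SUPPORT"}}
print(round(abstract_label_f1(gold, pred), 2))  # 0.5
```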

Authors

David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi