SciFact

Semantic Scholar • 2020
Due to the rapid growth in the scientific literature, there is a need for automated systems to assist researchers and the public in assessing the veracity of scientific claims. To facilitate the development of systems for this task, we introduce SciFact, a dataset of 1.4K expert-written claims, paired with evidence-containing abstracts annotated with veracity labels and rationales.
License: CC BY-NC 2.0
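
To make the data layout concrete, here is a minimal sketch of loading and inspecting the claims. It assumes the JSON-lines layout of the public SciFact data release (a corpus.jsonl of abstracts and a claims_train.jsonl of claims, with evidence keyed by abstract doc_id); the file names and field names are taken from that release and are assumptions rather than anything stated on this page.

```python
import json

def load_jsonl(path):
    """Read a JSON-lines file into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# File and field names follow the public SciFact data release
# (corpus.jsonl, claims_train.jsonl); treat them as assumptions here.
corpus = {doc["doc_id"]: doc for doc in load_jsonl("corpus.jsonl")}
claims = load_jsonl("claims_train.jsonl")

for claim in claims[:3]:
    print(f"Claim {claim['id']}: {claim['claim']}")
    # `evidence` maps an abstract's doc_id to rationale annotations; each
    # annotation gives rationale sentence indices and a veracity label
    # (SUPPORT or CONTRADICT). Claims with no evidence are NOT-ENOUGH-INFO.
    for doc_id, annotations in claim.get("evidence", {}).items():
        doc = corpus[int(doc_id)]
        print(f"  Evidence abstract: {doc['title']}")
        for ann in annotations:
            rationale = " ".join(doc["abstract"][i] for i in ann["sentences"])
            print(f"    {ann['label']}: {rationale}")
```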

Leaderboard

Top Public Submissions
1. LongChecker
   Allen Institute for AI and University of Washington
   Created: 6/4/2021 | Abstract Label-Only (R): 71%

2. Created: 2/19/2021 | Abstract Label-Only (R): 73%

3. ARSJoint
   Zhiwei Zhang, Jiyi Li, Fumiyo Fukumoto and Yanming Ye
   Created: 8/13/2021 | Abstract Label-Only (R): 64%

4. ParagraphJoint
   PLUS Lab: Xiangci Li (UT Dallas), Gully Burns (Chan Zuckerberg Initiative), Nanyun Peng (UCLA)
   Created: 1/26/2021 | Abstract Label-Only (R): 64%

5. Created: 1/26/2021 | Abstract Label-Only (R): 66%
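
For orientation on the metric column: in the SciFact task, abstract-level "label-only" credit is given when a system identifies a gold evidence abstract for a claim and assigns it the correct SUPPORT/CONTRADICT label, without checking the rationale sentences. The sketch below computes precision, recall, and F1 over (claim, abstract) pairs under that definition; the plain-dict data layout is invented for illustration, this is not the official evaluation script, and whether the leaderboard's "(R)" column reports recall or another aggregate is not stated on this page.

```python
def abstract_label_only(gold, pred):
    """Abstract-level label-only precision / recall / F1.

    gold: {claim_id: {doc_id: label}} for gold evidence abstracts.
    pred: {claim_id: {doc_id: label}} for abstracts the system labeled
          SUPPORT or CONTRADICT (NOT-ENOUGH-INFO predictions omitted).
    A prediction counts as correct when the abstract is a gold evidence
    abstract for the claim and the predicted label matches the gold label.
    """
    correct = sum(
        1
        for cid, docs in pred.items()
        for did, label in docs.items()
        if gold.get(cid, {}).get(did) == label
    )
    n_pred = sum(len(docs) for docs in pred.values())
    n_gold = sum(len(docs) for docs in gold.values())
    precision = correct / n_pred if n_pred else 0.0
    recall = correct / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with made-up IDs and labels:
gold = {"1": {"42": "SUPPORT"}, "2": {"7": "CONTRADICT"}}
pred = {"1": {"42": "SUPPORT"}, "2": {"7": "SUPPORT"}}
print(abstract_label_only(gold, pred))  # (0.5, 0.5, 0.5)
```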

Authors

David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi