The goal of this project is to involve the public in the development of better AI models. We pair engaging games with state-of-the-art AI models to create an appealing experience for non-scientist users. We aim to improve the way data is collected for AI training and to surface the strengths and weaknesses of current models.
Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant • TACL 2020
Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight…
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant • arXiv 2020
To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent…