Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Searching for Scientific Evidence in a Pandemic: An Overview of TREC-COVID

Kirk Roberts, Tasmeer Alam, Steven Bedrick, W. Hersh
2021
arXiv

We present an overview of the TREC-COVID Challenge, an information retrieval (IR) shared task to evaluate search on scientific literature related to COVID-19. The goals of TREC-COVID include the… 

Improving the Accessibility of Scientific Documents: Current State, User Needs, and a System Solution to Enhance Scientific PDF Accessibility for Blind and Low Vision Users

Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Daniel S. Weld
2021
arXiv

The majority of scientific papers are distributed in PDF, which poses challenges for accessibility, especially for blind and low vision (BLV) readers. We characterize the scope of this problem by…

ManipulaTHOR: A Framework for Visual Object Manipulation

Kiana Ehsani, Winson Han, Alvaro Herrasti, R. Mottaghi
2021
arXiv

The domain of Embodied AI has recently witnessed substantial progress, particularly in navigating agents within their environments. These early successes have laid the building blocks for the… 

Bootstrapping Relation Extractors using Syntactic Search by Examples

Matan Eyal, Asaf Amrami, Hillel Taub-Tabib, Yoav Goldberg
2021
EACL

The advent of neural networks in NLP brought with it substantial improvements in supervised relation extraction. However, obtaining a sufficient quantity of training data remains a key challenge. In…

First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT

Benjamin Muller, Yanai Elazar, Benoît Sagot, Djamé Seddah
2021
EACL

Multilingual pretrained language models have demonstrated remarkable zero-shot cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a task of interest in one language and…
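
To make the transfer setup concrete, here is a minimal sketch, assuming a HuggingFace Transformers environment and an XNLI-style sentence-pair task (both illustrative choices, not the paper's exact experiments): a multilingual encoder is fine-tuned on English data only and then applied, unchanged, to inputs in another language.

# Zero-shot cross-lingual transfer sketch (illustrative; not the paper's exact setup).
# Step 1 would fine-tune the classifier head on English task data (loop omitted);
# step 2 evaluates the same model on a language it never saw during fine-tuning.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # multilingual BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# ... fine-tune on English premise/hypothesis pairs here ...

# Zero-shot evaluation on a French example (no French training data used).
premise = "Le chat dort sur le canapé."
hypothesis = "Un animal est en train de dormir."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    predicted_label = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_label)  # meaningful only after the fine-tuning step above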

Evaluating the Evaluation of Diversity in Natural Language Generation

Guy Tevet, Jonathan Berant
2021
EACL

Despite growing interest in natural language generation (NLG) models that produce diverse outputs, there is currently no principled method for evaluating the diversity of an NLG system. In this… 
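
As a point of reference for what such an evaluation has to measure, here is a minimal sketch of distinct-n, a common surface-level diversity heuristic; it is an illustrative baseline only, not the evaluation methodology proposed in this paper.

# distinct-n: ratio of unique n-grams to total n-grams across a set of generations.
from collections import Counter

def distinct_n(outputs, n=2):
    ngram_counts = Counter()
    for text in outputs:
        tokens = text.split()
        ngram_counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(ngram_counts.values())
    return len(ngram_counts) / total if total else 0.0

samples = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "a dog ran in the park",
]
print(round(distinct_n(samples, n=2), 3))  # higher values indicate more diverse outputs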

BERTese: Learning to Speak to BERT

Adi Haviv, Jonathan Berant, A. Globerson
2021
EACL

Large pre-trained language models have been shown to encode large amounts of world and commonsense knowledge in their parameters, leading to substantial interest in methods for extracting that… 
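
The kind of knowledge extraction referred to here is typically done with cloze-style queries against a masked language model. Below is a minimal sketch of such a query, assuming the HuggingFace Transformers fill-mask pipeline; BERTese's contribution, rewriting queries into forms BERT answers better, is not implemented here.

# Cloze-style knowledge query against a masked language model (illustrative only).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The capital of France is [MASK]."):
    # Each prediction carries the filled-in token and the model's confidence.
    print(prediction["token_str"], round(prediction["score"], 3))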

Discourse Understanding and Factual Consistency in Abstractive Summarization

Saadia Gabriel, Antoine Bosselut, Jeff Da, Yejin Choi
2021
EACL

We introduce Cooperative Generator-Discriminator Networks (Co-opNet), a general framework for abstractive summarization with distinct modeling of the narrative flow in the output summary. Most… 

Challenges in Automated Debiasing for Toxic Language Detection

Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi
2021
EACL

Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently…