Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents

Gregory Yauney, Jack Hessel, David Mimno
2020
EMNLP

Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and… 

Easy, Reproducible and Quality-Controlled Data Collection with Crowdaq

Qiang Ning, Hao Wu, Pradeep Dasigi, Z. Nie
2020
EMNLP • Demo

High-quality and large-scale data are key to success for AI systems. However, large-scale data annotation efforts are often confronted with a set of common challenges: (1) designing a user-friendly… 

Fact or Fiction: Verifying Scientific Claims

David Wadden, Kyle Lo, Lucy Lu Wang, Hannaneh Hajishirzi
2020
EMNLP

We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute… 

Grounded Compositional Outputs for Adaptive Language Modeling

Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith
2020
EMNLP

Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A… 

IIRC: A Dataset of Incomplete Information Reading Comprehension Questions

James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, Pradeep Dasigi
2020
EMNLP

Humans often have to read multiple documents to address their information needs. However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all… 

Improving Compositional Generalization in Semantic Parsing

Inbar Oren, Jonathan Herzig, Nitish Gupta, Jonathan Berant
2020
Findings of EMNLP

Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures… 

Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning

H. Trivedi, N. Balasubramanian, Tushar Khot, A. Sabharwal
2020
EMNLP

Has there been real progress in multi-hop question-answering? Models often exploit dataset artifacts to produce correct answers, without connecting information across multiple supporting facts. This… 

Learning from Task Descriptions

Orion Weller, Nick Lourie, Matt Gardner, Matthew Peters
2020
EMNLP

Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To… 

Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering

Harsh Jhamtani, P. Clark
2020
EMNLP

Despite the rapid progress in multihop question-answering (QA), models still have trouble explaining why an answer is correct, with limited explanation training data available to learn from. To… 

MedICaT: A Dataset of Medical Images, Captions, and Textual References

Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, Hannaneh Hajishirzi
2020
Findings of EMNLP

Understanding the relationship between figures and text is key to scientific document understanding. Medical figures in particular are quite complex, often consisting of several subfigures (75% of…