Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Measuring Association Between Labels and Free-Text Rationales

Sarah Wiegreffe, Ana Marasović, Noah A. Smith
2021
EMNLP

Interpretable NLP has taken increasing interest in ensuring that explanations are faithful to the model’s decision-making process. This property is crucial for machine learning researchers and… 

Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization

Ansong Ni, Matt Gardner, Pradeep Dasigi
2021
EMNLP

Question Answering (QA) tasks requiring information from multiple documents often rely on a retrieval model to identify relevant information from which the reasoning model can derive an answer. The… 

Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences

Denis Emelin, Ronan Le Bras, Jena D. Hwang, Yejin Choi
2021
EMNLP

In social settings, much of human behavior is governed by unspoken rules of conduct. For artificial systems to be fully integrated into social environments, adherence to such norms is a central… 

MS^2: Multi-Document Summarization of Medical Studies

Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Lucy Lu Wang
2021
EMNLP

To assess the effectiveness of any medical intervention, researchers must conduct a time-intensive and highly manual literature review. NLP systems can help to automate or assist in parts of this… 

Paired Examples as Indirect Supervision in Latent Decision Models

Nitish Gupta, Sameer Singh, Matt Gardner, and Dan Roth
2021
EMNLP

Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching… 

Parameter Norm Growth During Training of Transformers

William Merrill, Vivek Ramanujan, Yoav Goldberg, Noah A. Smith
2021
EMNLP

The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine,… 

Probing Across Time: What Does RoBERTa Know and When?

Leo Z. Liu, Yizhong Wang, Jungo Kasai, Noah A. Smith
2021
Findings of EMNLP

Models of language trained on very large corpora have been demonstrated useful for NLP. As fixed artifacts, they have become the object of intense study, with many researchers “probing” the extent… 

proScript: Partially Ordered Scripts Generation

Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
2021
Findings of EMNLP

Scripts, standardized event sequences describing typical everyday activities, have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated… 

Sentence Bottleneck Autoencoders from Transformer Language Models

Ivan Montero, Nikolaos Pappas, Noah A. Smith
2021
EMNLP

Representation learning for text via pretraining a language model on a large corpus has become a standard starting point for building NLP systems. This approach stands in contrast to autoencoders,… 

Sister Help: Data Augmentation for Frame-Semantic Role Labeling

Ayush Pancholy, Miriam R. L. Petruck, Swabha Swayamdipta
2021
EMNLP • LAW-DMR Workshop

While FrameNet is widely regarded as a rich resource of semantics in natural language processing, a major criticism concerns its lack of coverage and the relative paucity of its labeled data…