Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Finetuning Pretrained Transformers into RNNs

Jungo Kasai, Hao Peng, Yizhe Zhang, Noah A. Smith
2021
EMNLP

Transformers have outperformed recurrent neural networks (RNNs) in natural language generation. But this comes with a significant computational cost, as the attention mechanism’s complexity scales… 
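The abstract is cut off here, but the cost it points to is the familiar one: softmax attention forms an n × n score matrix over a length-n sequence, whereas an RNN carries a fixed-size state. Below is a minimal, generic linear-attention sketch of that contrast in NumPy; it is not this paper's conversion procedure, and the feature map `phi` is an illustrative assumption.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: an explicit (n x n) score matrix makes the
    # cost quadratic in sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Swapping softmax for a feature map phi lets causal attention
    # (the generation setting) run as an RNN-like recurrent state,
    # linear in n. This phi (shifted ReLU) is an assumption for the
    # sketch, not the paper's choice.
    out = np.zeros_like(V)
    S = np.zeros((K.shape[-1], V.shape[-1]))  # running sum of phi(k) v^T
    z = np.zeros(K.shape[-1])                 # running sum of phi(k)
    for t in range(Q.shape[0]):
        S += np.outer(phi(K[t]), V[t])
        z += phi(K[t])
        out[t] = phi(Q[t]) @ S / (phi(Q[t]) @ z)
    return out

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, n, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```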

Sentence Bottleneck Autoencoders from Transformer Language Models

Ivan Montero, Nikolaos Pappas, Noah A. Smith
2021
EMNLP

Representation learning for text via pretraining a language model on a large corpus has become a standard starting point for building NLP systems. This approach stands in contrast to autoencoders,… 

Think about it! Improving defeasible reasoning by first modeling the question scenario

Aman Madaan, Niket Tandon, Dheeraj Rajagopal, E. Hovy
2021
EMNLP

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence. Existing cognitive science literature on defeasible reasoning suggests that a… 

Finding needles in a haystack: Sampling Structurally-diverse Training Sets from Synthetic Data for Compositional Generalization

Inbar Oren, Jonathan Herzig, Jonathan Berant
2021
EMNLP

Modern semantic parsers suffer from two principal limitations. First, training requires expensive collection of utterance-program pairs. Second, semantic parsers fail to generalize at test time to… 

DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization

Zeqiu Wu, Bo-Ru Lu, Hannaneh Hajishirzi, Mari Ostendorf
2021
EMNLP

Identifying relevant knowledge to be used in conversational systems that are grounded in long documents is critical to effective response generation. We introduce a knowledge identification model… 

Container: Context Aggregation Network

Peng Gao, Jiasen Lu, Hongsheng Li, Aniruddha Kembhavi
2021
arXiv

Convolutional neural networks (CNNs) are ubiquitous in computer vision, with a myriad of effective and efficient variations. Recently, Transformers – originally introduced in natural language… 

SciA11y: Converting Scientific Papers to Accessible HTML

Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Daniel S. Weld
2021
ASSETS

We present SciA11y, a system that renders inaccessible scientific paper PDFs into HTML. SciA11y uses machine learning models to extract and understand the content of scientific PDFs, and reorganizes… 

Delphi: Towards Machine Ethics and Norms

Liwei Jiang, Jena D. Hwang, Chandrasekhar Bhagavatula, Yejin Choi
2021
arXiv

Failing to account for moral norms could notably hinder AI systems’ ability to interact with people. In practice, AI systems require social, cultural, and ethical norms to make moral judgments…

Can Machines Learn Morality? The Delphi Experiment

Liwei Jiang, Chandra Bhagavatula, Jenny Liang, Yejin Choi
2021
arXiv

As AI systems become increasingly powerful and pervasive, there are growing concerns about machines’ morality or a lack thereof. Yet, teaching morality to machines is a formidable task, as morality… 

Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

Peter West, Ximing Lu, Ari Holtzman, Yejin Choi
2021
ACL

Publicly available, large pretrained Language Models (LMs) generate text with remarkable quality, but only sequentially from left to right. As a result, they are not immediately applicable to…
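For context on the left-to-right constraint this snippet describes: standard autoregressive decoding draws each token conditioned only on the prefix before it, so text can only grow rightward. A toy sketch follows, where the bigram table stands in for a real LM and is purely an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                    # toy vocabulary size
bigram_logits = rng.normal(size=(V, V))  # stand-in "language model"

def sample_left_to_right(max_len=10, bos=0):
    # Autoregressive decoding: each token is sampled conditioned only
    # on the prefix to its left, so generation can only extend the
    # text rightward -- the constraint the abstract notes.
    tokens = [bos]
    for _ in range(max_len):
        logits = bigram_logits[tokens[-1]]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(V, p=probs)))
    return tokens

print(sample_left_to_right())
```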