
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Efficient Methods for Natural Language Processing: A Survey

Marcos Vinícius Treviso, Tianchu Ji, Ji-Ung Lee, Roy Schwartz
2023
TACL

Recent work in natural language processing (NLP) has yielded appealing results from scaling model parameters and training data; however, using only scale to improve performance means that resource… 

Words as Gatekeepers: Measuring Discipline-specific Terms and Meanings in Scholarly Publications

Li Lucy, Jesse Dodge, David Bamman, Katherine A. Keith
2023
Findings of ACL

Scholarly text is often laden with jargon, or specialized language that can facilitate efficient in-group communication within fields but hinder understanding for out-groups. In this work, we… 

Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents

Catherine Chen, Zejiang Shen, Dan Klein, Kyle Lo
2023
Findings of ACL

Recent work has shown that infusing layout features into language models (LMs) improves processing of visually-rich documents such as scientific papers. Layout-infused LMs are often evaluated on… 

Riveter: Measuring Power and Social Dynamics Between Entities

Maria Antoniak, Anjalie Field, Jimin Mun, Maarten Sap
2023
ACL

Riveter provides a complete, easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and…

Self-Instruct: Aligning Language Models with Self-Generated Instructions

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Hannaneh Hajishirzi
2023
ACL

Large “instruction-tuned” language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily… 

When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories

Alex Mallen, Akari Asai, Victor Zhong, Hannaneh Hajishirzi
2023
ACL

Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world… 

Task-aware Retrieval with Instructions

Akari Asai, Timo Schick, Patrick Lewis, Wen-tau Yih
2023
Findings of ACL

We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware… 

PuMer: Pruning and Merging Tokens for Efficient Vision Language Models

Qingqing Cao, Bhargavi Paranjape, Hanna Hajishirzi
2023
ACL

Large-scale vision language (VL) models use Transformers to perform cross-modal interactions between the input text and image. These cross-modal interactions are computationally expensive and… 

CREPE: Open-Domain Question Answering with False Presuppositions

Xinyan Velocity Yu, Sewon Min, Luke Zettlemoyer, Hannaneh Hajishirzi
2023
ACL

When asking about unfamiliar topics, information seeking users often pose questions with false presuppositions. Most existing question answering (QA) datasets, in contrast, assume all questions have… 

Nonparametric Masked Language Modeling

Sewon Min, Weijia Shi, M. Lewis, Luke Zettlemoyer
2023
Findings of ACL

Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked…