Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

VILA: Improving Structured Content Extraction from Scientific PDFs Using Visual Layout Groups

Zejiang Shen, Kyle Lo, Lucy Lu Wang, Doug Downey
2022
TACL

Accurately extracting structured content from PDFs is a critical first step for NLP over scientific papers. Recent work has improved extraction accuracy by incorporating elementary layout… 

What Language Model to Train if You Have One Million GPU Hours?

Teven Le Scao, Thomas Wang, Daniel Hesslow, Iz Beltagy
2022
EMNLP

The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations that transfer across tasks and scale,… 

Retrieval Data Augmentation Informed by Downstream Question Answering Performance

James Ferguson, Pradeep Dasigi, Tushar Khot, Hannaneh Hajishirzi
2022
ACL • FEVER

Training retrieval models to fetch contexts for Question Answering (QA) over large corpora requires labeling relevant passages in those corpora. Since obtaining exhaustive manual annotations of all… 

Quark: Controllable Text Generation with Reinforced Unlearning

Ximing Lu, S. Welleck, Liwei Jiang, Yejin Choi
2022
NeurIPS

Large-scale language models often learn behaviors that are misaligned with user expectations. Generated text may contain offensive or toxic language, contain significant repetition, or be of a… 

Multimodal Knowledge Alignment with Reinforcement Learning

Youngjae Yu, Jiwan Chung, Heeseung Yun, Yejin Choi
2022
CVPR

Large language models readily adapt to novel settings, even without task-specific training data. Can their zero-shot capacity be extended to multimodal inputs? In this work, we propose ESPER which… 

ProsocialDialog: A Prosocial Backbone for Conversational Agents

Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Maarten Sap
2022
EMNLP

Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce ProsocialDialog,… 

NaturalProver: Grounded Mathematical Proof Generation with Language Models

S. Welleck, Jiacheng Liu, Ximing Lu, Yejin Choi
2022
arXiv

Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of… 

Investigating the Benefits of Free-Form Rationales

Jiao Sun, Swabha Swayamdipta, Jonathan May, Xuezhe Ma
2022
arXiv

Free-form rationales aim to aid model interpretability by supplying the background knowledge that can help understand model decisions. Crowdsourced rationales are provided for commonsense QA… 

Improving the Generalizability of Depression Detection by Leveraging Clinical Questionnaires

Thong Nguyen, Andrew Yates, Ayah Zirikly, Arman Cohan
2022
ACL

Automated methods have been widely used to identify and analyze mental health conditions (e.g., depression) from various sources of information, including social media. Yet, deployment of such… 

Zero- and Few-Shot NLP with Pretrained Language Models

Iz Beltagy, Arman Cohan, Robert Logan IV, Sameer Singh
2022
ACL (tutorial)

The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically…