Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Efficient Hierarchical Domain Adaptation for Pretrained Language Models

Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
2022
NAACL

The remarkable success of large language models has been driven by dense models trained on massive unlabeled, unstructured corpora. These corpora typically contain text from diverse, heterogeneous… 

Few-Shot Self-Rationalization with Natural Language Prompts

Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters
2022
Findings of NAACL

Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however,… 

Literature-Augmented Clinical Outcome Prediction

Aakanksha Naik, S. Parasa, Sergey Feldman, Tom Hope
2022
Findings of NAACL

We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach for clinical outcome prediction that retrieves patient-specific medical literature and incorporates it into predictive… 

Long Context Question Answering via Supervised Contrastive Learning

Avi Caciularu, Ido Dagan, Jacob Goldberger, Arman Cohan
2022
NAACL

Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g.,… 

MultiVerS: Improving scientific claim verification with weak supervision and full-document context

David Wadden, Kyle Lo, Lucy Lu Wang, Hannaneh Hajishirzi
2022
Findings of NAACL

The scientific claim verification task requires an NLP system to label scientific documents which Support or Refute an input claim, and to select evidentiary sentences (or rationales) justifying… 

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

Ximing Lu, S. Welleck, Peter West, Yejin Choi
2022
NAACL

The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however,… 

Paragraph-based Transformer Pre-training for Multi-Sentence Inference

Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
2022
NAACL

Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show… 

Reframing Human-AI Collaboration for Generating Free-Text Explanations

Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Yejin Choi
2022
NAACL

Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions?… 

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models

Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Yejin Choi
2022
NAACL

The common practice for training commonsense models has gone from human, to corpus, to machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we… 

Time Waits for No One! Analysis and Challenges of Temporal Misalignment

Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Noah A. Smith
2022
NAACL

When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we…