Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

NaturalProver: Grounded Mathematical Proof Generation with Language Models

S. Welleck, Jiacheng Liu, Ximing Lu, Yejin Choi
2022
arXiv

Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of… 

Zero- and Few-Shot NLP with Pretrained Language Models

Iz Beltagy, Arman Cohan, Robert Logan IV, Sameer Singh
2022
ACL, tutorial

The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically… 

ABC: Attention with Bounded-memory Control

Hao Peng, Jungo Kasai, Nikolaos Pappas, Noah A. Smith
2022
ACL

Transformer architectures have achieved state-of-the-art results on a variety of sequence modeling tasks. However, their attention mechanism comes with a complexity quadratic in sequence length,… 

Cross-Task Generalization via Natural Language Crowdsourcing Instructions

Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hanna Hajishirzi
2022
ACL

Can we enable NLP models to appropriately respond to instructional prompts and consequently generalize to new tasks? To study this question, we leverage the existing NLP datasets and the… 

Extracting Latent Steering Vectors from Pretrained Language Models

Nishant Subramani, Nivedita Suresh, Matthew E. Peters
2022
Findings of ACL

Prior work on controllable text generation has focused on learning how to control language models through trainable decoding, smart-prompt design, or fine-tuning based on a desired objective. We… 

Generated Knowledge Prompting for Commonsense Reasoning

Jiacheng Liu, Alisa Liu, Ximing Lu, Hannaneh Hajishirzi
2022
ACL

Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense… 

Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi
2022
ACL

Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on,… 

Generating Scientific Definitions with Controllable Complexity

Tal August, Katharina Reinecke, Noah A. Smith
2022
ACL

Unfamiliar terminology and complex language can present barriers to understanding science. Natural language processing stands to help address these issues by automatically defining unfamiliar terms.… 

Is GPT-3 Text Indistinguishable from Human Text? SCARECROW: A Framework for Scrutinizing Machine Text

Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Yejin Choi
2022
ACL

Modern neural text generation systems can produce remarkably fluent and grammatical texts. While earlier language models suffered from repetition and syntactic errors, the errors made by contemporary… 

Reframing Instructional Prompts to GPTk's Language

Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hanna Hajishirzi
2022
Findings of ACL

How can model designers turn task instructions into effective prompts for language models? Backed by extensive empirical analysis on GPT3, we observe important features for successful instructional…