Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations

Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Chandra Bhagavatula
2023
ACL

Context is everything, even in commonsense moral reasoning. Changing contexts can flip the moral judgment of an action; lying to a friend is wrong in general, but may be morally acceptable if it is… 

CREPE: Open-Domain Question Answering with False Presuppositions

Xinyan Velocity Yu, Sewon Min, Luke Zettlemoyer, Hannaneh Hajishirzi
2023
ACL

When asking about unfamiliar topics, information seeking users often pose questions with false presuppositions. Most existing question answering (QA) datasets, in contrast, assume all questions have… 

Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest

Jack Hessel, Ana Marasović, Jena D. Hwang, Yejin Choi
2023
ACL

We challenge AI models to “demonstrate understanding” of the sophisticated multimodal humor of The New Yorker Caption Contest. Concretely, we develop three carefully circumscribed tasks for which… 

Do language models have coherent mental models of everyday things?

Yuling Gu, Bhavana Dalvi Mishra, Peter Clark
2023
ACL

When people think of everyday things like an “egg,” they typically have a mental image associated with it. This commonsense knowledge helps us understand how these everyday things work and how to… 

Efficient Methods for Natural Language Processing: A Survey

Marcos Vinícius Treviso, Tianchu Ji, Ji-Ung Lee, Roy Schwartz
2023
TACL

Recent work in natural language processing (NLP) has yielded appealing results from scaling model parameters and training data; however, using only scale to improve performance means that resource… 

Elaboration-Generating Commonsense Question Answering at Scale

Wenya Wang, Vivek Srikumar, Hannaneh Hajishirzi, Noah A. Smith
2023
ACL

In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. Yet the cost of working… 

Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation

Marius Mosbach, Tiago Pimentel, Shauli Ravfogel, Yanai Elazar
2023
Findings of ACL 2023

Few-shot fine-tuning and in-context learning are two alternative strategies for task adaptation of pre-trained language models. Recently, in-context learning has gained popularity over fine-tuning… 

FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning

Qinyuan Ye, Iz Beltagy, Matthew E. Peters, Hannaneh Hajishirzi
2023
ACL

Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated… 

HINT: Hypernetwork Instruction Tuning for Efficient Zero-Shot Generalisation

Hamish Ivison, Akshita Bhagia, Yizhong Wang, Matthew E. Peters
2023
ACL

Recent NLP models have shown a remarkable ability to generalise ‘zero-shot’ to new tasks using only an instruction as guidance. However, these approaches usually repeat their instructions with every input,… 

NarrowBERT: Accelerating Masked Language Model Pretraining and Inference

Haoxin Li, Phillip Keung, Daniel Cheng, Noah A. Smith
2023
ACL

Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining…