Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Improving the Generalizability of Depression Detection by Leveraging Clinical Questionnaires

Thong Nguyen, Andrew Yates, Ayah Zirikly, Arman Cohan
2022
ACL

Automated methods have been widely used to identify and analyze mental health conditions (e.g., depression) from various sources of information, including social media. Yet, deployment of such… 

Zero- and Few-Shot NLP with Pretrained Language Models

Iz Beltagy, Arman Cohan, Robert Logan IV, Sameer Singh
2022
ACL, tutorial

The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically… 

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

Jaehun Jung, Lianhui Qin, S. Welleck, Yejin Choi
2022
EMNLP

Despite their impressive capabilities, large pretrained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has… 

Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions

Emily Allaway, Jena D. Hwang, Chandra Bhagavatula, Yejin Choi
2022
arXiv

Generics express generalizations about the world (e.g., “birds can fly”). However, they are not universally true – while sparrows and penguins are both birds, only sparrows can fly and penguins… 

Cross-Task Generalization via Natural Language Crowdsourcing Instructions

Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hanna Hajishirzi
2022
ACL

Can we enable NLP models to appropriately respond to instructional prompts and consequently generalize to new tasks? To study this question, we leverage the existing NLP datasets and the… 

Reframing Instructional Prompts to GPTk's Language

Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hanna Hajishirzi
2022
Findings of ACL

How can model designers turn task instructions into effective prompts for language models? Backed by extensive empirical analysis on GPT3, we observe important features for successful instructional… 

Large Scale Substitution-based Word Sense Induction

Matan Eyal, Shoval Sadde, Hillel Taub-Tabib, Yoav Goldberg
2022
ACL

We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus which is… 

Hey AI, Can You Solve Complex Tasks by Talking to Agents?

Tushar Khot, Kyle Richardson, Daniel Khashabi, Ashish Sabharwal
2022
Findings of ACL

Humans often solve complex problems by interacting (in natural language) with existing agents, such as AI assistants, that can solve simpler sub-tasks. These agents themselves can be powerful… 

Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi
2022
ACL

Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on,… 

Generated Knowledge Prompting for Commonsense Reasoning

Jiachen Liu, Alisa Liu, Ximing Lu, Hannaneh Hajishirzi
2022
ACL

Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense…