Ai2 Research: Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Does progress on ImageNet transfer to real-world datasets?

Alexander W. Fang, Simon Kornblith, Ludwig Schmidt
2023
arXiv

Does progress on ImageNet transfer to real-world datasets? We investigate this question by evaluating ImageNet pre-trained models with varying accuracy (57%-83%) on six practical image…

ProKnow: Process knowledge for safety constrained and explainable question generation for mental health diagnostic assistance

Kaushik Roy, Manas Gaur, Misagh Soltani, Amit P. Sheth
2023
Frontiers in Big Data

Virtual Mental Health Assistants (VMHAs) are utilized in health care to provide patient services such as counseling and suggestive care. They are not used for patient diagnostic assistance because… 

MAUVE Scores for Generative Models: Theory and Practice

Krishna Pillutla, Lang Liu, John Thickstun, Z. Harchaoui
2022
arXiv

Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how… 

I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons

Pei Zhou, Andrew Zhu, Jennifer Hu, Prithviraj Ammanabrolu
2022
arXiv

We propose a novel task, G4C, to study teacher-student natural language interactions in a goal-driven and grounded environment. Dungeons and Dragons (D&D), a role-playing game, provides an ideal… 

Exploring the Challenges of Open Domain Multi-Document Summarization

John Giorgi, Luca Soldaini, Bo Wang, Arman Cohan
2022
arXiv

Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be… 

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation

Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Yejin Choi
2022
ACL

Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the…

Reproducible scaling laws for contrastive language-image learning

Mehdi Cherti, Romain Beaumont, Ross Wightman, J. Jitsev
2022
arXiv

Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size,… 

Hyperdecoders: Instance-specific decoders for multi-task NLP

Hamish Ivison, Matthew E. Peters
2022
Findings of EMNLP

We investigate input-conditioned hypernetworks for multi-tasking in NLP, generating parameter-efficient adaptations for a decoder using a hypernetwork conditioned on the output of an encoder. This… 

Continued Pretraining for Better Zero- and Few-Shot Promptability

Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Iz Beltagy
2022
EMNLP

Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods…

Exploring The Landscape of Distributional Robustness for Question Answering Models

Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt
2022
Findings of EMNLP

We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets,…