Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A machine learning parameterization of clouds in a coarse-resolution climate model for unbiased radiation

Brian Henn, Y. R. Jauregui, Spencer K. Clark, C. Bretherton
2023
ESSOAr

Coarse-grid weather and climate models rely particularly on parameterizations of cloud fields, and coarse-grained cloud fields from a fine-grid reference model are a natural target for a… 

PromptCap: Prompt-Guided Task-Aware Image Captioning

Yushi Hu, Hang Hua, Zhengyuan Yang, Jiebo Luo
2023
ICCV • Proceedings

Knowledge-based visual question answering (VQA) involves questions that require world knowledge beyond the image to yield the correct answer. Large language models (LMs) like GPT-3 are particularly… 

TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering

Yushi Hu, Benlin Liu, Jungo Kasai, Noah A. Smith
2023
ICCV • Proceedings

Despite thousands of researchers, engineers, and artists actively working on improving text-to-image generation models, systems often fail to produce images that accurately align with the text… 

Exploiting Generalization in Offline Reinforcement Learning via Unseen State Augmentations

Nirbhay Modhe, Qiaozi Gao, A. Kalyan, G. Sukhatme
2023
arXiv.org

Offline reinforcement learning (RL) methods strike a balance between exploration and exploitation by conservative value estimation -- penalizing values of unseen states and actions. Model-free… 

Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms

Organizers of Queer in AI, Nathaniel Dennler, Anaelia Ovalle, Jessica de Jesus de Pinho Pinhal
2023
AIES

Bias evaluation benchmarks, as well as dataset and model documentation, have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing… 

LEXPLAIN: Improving Model Explanations via Lexicon Supervision

Orevaoghene Ahia, Hila Gonen, Vidhisha Balachandran, Noah A. Smith
2023
*SEM • Proceedings

Model explanations that shed light on the model’s predictions are becoming a desired additional output of NLP models, alongside their predictions. Challenges in creating these explanations include… 

When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories

Alex Mallen, Akari Asai, Victor Zhong
2023
ACL

Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world… 

Data-Efficient Finetuning Using Cross-Task Nearest Neighbors

Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi
2023
ACL Findings

Language models trained on massive prompted multitask datasets like T0 (Sanh et al., 2021) or FLAN (Wei et al., 2021a) can generalize to tasks unseen during training. We show that training on a… 

Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal
2023
ACL

Prompting-based large language models (LLMs) are surprisingly powerful at generating natural language reasoning steps or Chains-of-Thoughts (CoT) for multi-step question answering (QA). They… 

DISCO: Distilling Phrasal Counterfactuals with Large Language Models

Zeming Chen, Qiyue Gao, Kyle Richardson, Ashish Sabharwal
2023
ACL

Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such…