Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Improving Language Models via Plug-and-Play Retrieval Feedback

Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Ashish Sabharwal
2023
arXiv

Large language models (LLMs) exhibit remarkable performance across various NLP tasks. However, they often generate incorrect or hallucinated information, which hinders their practical applicability… 

Learning to Generate Novel Scientific Directions with Contextualized Literature-based Discovery

Qingyun Wang, Doug Downey, Heng Ji, Tom Hope
2023
arXiv

Literature-Based Discovery (LBD) aims to discover new scientific knowledge by mining papers and generating hypotheses. Standard LBD is limited to predicting pairwise relations between discrete… 

SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration

Hwaran Lee, Seokhee Hong, Joonsuk Park, Jung-Woo Ha
2023
arXiv

The potential social harms that large language models pose, such as generating offensive content and reinforcing biases, are steeply rising. Existing works focus on coping with this concern while… 

Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback

Yao Fu, Hao Peng, Tushar Khot, Mirella Lapata
2023
arXiv

We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing. We are interested in this question because… 

LeTI: Learning to Generate from Textual Interactions

Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
2023
arXiv

Finetuning pre-trained language models (LMs) enhances the models' capabilities. Prior techniques fine-tune a pre-trained LM on input-output pairs (e.g., instruction fine-tuning), or with numerical… 

Pace v0.2: a Python-based performance-portable atmospheric model

Johann Dahm, Eddie Davis, Florian Deconinck, Oliver Fuhrer
2023
Geoscientific Model Development

Progress in leveraging current and emerging high-performance computing infrastructures using traditional weather and climate models has been slow. This has become known more broadly as the software… 

From Centralized to Ad-Hoc Knowledge Base Construction for Hypotheses Generation

Shaked Launer-Wachs, Hillel Taub-Tabib, Jennie Tokarev Madem, Y. Shamay
2023
Journal of Biomedical Informatics

Objective: To demonstrate and develop an approach enabling individual researchers or small teams to create their own ad-hoc, lightweight knowledge bases tailored for specialized scientific interests,… 

Machine-Learned Climate Model Corrections From a Global Storm-Resolving Model: Performance Across the Annual Cycle

Anna Kwa, Spencer K. Clark, Brian Henn, Christopher S. Bretherton
2023
Journal of Advances in Modeling Earth Systems

One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned (ML) state-dependent corrections to the prognosed model tendencies, such that the climate model… 

TESS: Text-to-Text Self-Conditioned Simplex Diffusion

Rabeeh Karimi Mahabadi, Jaesung Tae, Hamish Ivison, Arman Cohan
2023
arXiv

Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various domains with continuous-valued inputs. Despite the promises of fully non-autoregressive… 

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

Brihi Joshi, Ziyi Liu, Sahana Ramnath, Xiang Ren
2023
arXiv

Among the remarkable emergent capabilities of large language models (LMs) is free-text rationalization; beyond a certain scale, large LMs are capable of generating seemingly useful rationalizations,…