
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


LeTI: Learning to Generate from Textual Interactions

Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
2023
arXiv

Fine-tuning pre-trained language models (LMs) enhances the models' capabilities. Prior techniques fine-tune a pre-trained LM on input-output pairs (e.g., instruction fine-tuning), or with numerical…

TESS: Text-to-Text Self-Conditioned Simplex Diffusion

Rabeeh Karimi Mahabadi, Jaesung Tae, Hamish Ivison, Arman Cohan
2023
arXiv

Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various domains with continuous-valued inputs. Despite the promises of fully non-autoregressive… 

Binding Language Models in Symbolic Languages

Zhoujun Cheng, Tianbao Xie, Peng Shi, Tao Yu
2023
ICLR

Though end-to-end neural approaches have recently dominated NLP tasks in both performance and ease of use, they lack interpretability and robustness. We propose Binder, a training-free…

Complexity-Based Prompting for Multi-Step Reasoning

Yao Fu, Hao Peng, Ashish Sabharwal, Tushar Khot
2023
ICLR

We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences… 
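
The selection heuristic suggested by the title and abstract is easy to illustrate. Below is a minimal Python sketch, assuming reasoning steps can be approximated by non-empty lines in a chain of thought; the function names are hypothetical, not from the paper's code.

```python
def complexity(chain_of_thought: str) -> int:
    # Proxy for reasoning complexity: count non-empty lines,
    # treating each line as one reasoning step (an assumption).
    return sum(1 for line in chain_of_thought.splitlines() if line.strip())

def select_complex_exemplars(candidates: list[str], k: int = 8) -> list[str]:
    # Keep the k candidate chains with the most reasoning steps
    # to use as in-context exemplars.
    return sorted(candidates, key=complexity, reverse=True)[:k]
```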

Editing Models with Task Arithmetic

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ali Farhadi
2023
ICLR

Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine… 
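
The paper's central object, a task vector, can be sketched in a few lines. The snippet below is a minimal illustration, assuming `pretrained` and `finetuned` are state dicts (e.g., PyTorch tensors) from the same architecture; scaling or negating the vector strengthens or forgets the task.

```python
def task_vector(pretrained: dict, finetuned: dict) -> dict:
    # A task vector is the element-wise difference between
    # fine-tuned and pre-trained weights.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def edit_model(pretrained: dict, vector: dict, scale: float = 1.0) -> dict:
    # Apply a scaled task vector: scale > 0 adds the task,
    # scale < 0 negates (forgets) it. Vectors from several
    # tasks can be summed before applying.
    return {k: pretrained[k] + scale * vector[k] for k in pretrained}
```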

InSCIt: Information-Seeking Conversations with Mixed-Initiative Interactions

Zeqiu Wu, Ryu Parish, Hao Cheng, Hannaneh Hajishirzi
2023
TACL

In an information-seeking conversation, a user may ask questions that are under-specified or unanswerable. An ideal agent would interact by initiating different response types according to the… 

Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Yejin Choi
2023
ICLR

We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)… 
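
To make the sequential-decision framing concrete, here is a minimal REINFORCE-style loss in PyTorch: the log-likelihood of a sampled completion is scaled by the scalar reward that completion receives. This is a sketch of the general framing only, not the paper's NLPO algorithm or its benchmark code.

```python
import torch

def reinforce_loss(token_logprobs: torch.Tensor, reward: float) -> torch.Tensor:
    # Treat generation as a sequential decision process:
    # maximize reward-weighted sequence log-likelihood
    # (REINFORCE, no baseline or KL penalty).
    return -(reward * token_logprobs.sum())
```

In practice, a baseline or a KL penalty against the pre-trained LM is commonly added to stabilize training; benchmarking such building blocks is part of what the paper addresses.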

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Kyle Lo
2023
EACL

While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when… 

Selective Annotation Makes Language Models Better Few-Shot Learners

Hongjin Su, Jungo Kasai, Chen Henry Wu, Tao Yu
2023
ICLR

Many recent approaches to natural language tasks are built on the remarkable abilities of large language models, which can perform in-context learning, where they learn a new task…

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

Alexandra Chronopoulou, Matthew E. Peters, Alexander M. Fraser, Jesse Dodge
2023
Findings of EACL 2023

Pretrained language models (PLMs) are trained on massive corpora but often need to specialize to specific domains. One parameter-efficient adaptation method is to train an adapter for each…
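
The weight averaging the title refers to can be sketched directly. Below is a minimal illustration, assuming each adapter is a state dict with identical keys; choosing which adapters to average for a given test domain is the paper's contribution and is not shown here.

```python
def average_adapters(adapter_states: list[dict],
                     weights: list[float] | None = None) -> dict:
    # Average several adapters' parameters in weight space to
    # form a single "soup" adapter (uniform unless weights given).
    if weights is None:
        weights = [1.0 / len(adapter_states)] * len(adapter_states)
    keys = adapter_states[0].keys()
    return {k: sum(w * s[k] for w, s in zip(weights, adapter_states))
            for k in keys}
```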