Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Kyle Lo
2023
EACL

While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when… 

ArK: Augmented Reality with Knowledge Interactive Emergent Ability

Qiuyuan Huang, J. Park, Abhinav Gupta, Jianfeng Gao
2023
arXiv.org

Despite the growing adoption of mixed reality and interactive AI agents, it remains challenging for these systems to generate high quality 2D/3D scenes in unseen environments. The common practice… 

Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Yejin Choi
2023
ICLR

We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)… 

Editing Models with Task Arithmetic

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ali Farhadi
2023
ICLR

Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine… 

InSCIt: Information-Seeking Conversations with Mixed-Initiative Interactions

Zeqiu Wu, Ryu Parish, Hao Cheng, Hannaneh Hajishirzi
2023
TACL

In an information-seeking conversation, a user may ask questions that are under-specified or unanswerable. An ideal agent would interact by initiating different response types according to the… 

Selective Annotation Makes Language Models Better Few-Shot Learners

Hongjin Su, Jungo Kasai, Chen Henry Wu, Tao Yu
2023
ICLR • Proceedings

Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task… 

Binding Language Models in Symbolic Languages

Zhoujun Cheng, Tianbao Xie, Peng Shi, Tao Yu
2023
ICLR • Proceedings

Though end-to-end neural approaches have recently been dominating NLP tasks in both performance and ease-of-use, they lack interpretability and robustness. We propose Binder, a training-free… 

Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics

Kuo-Hao Zeng, Luca Weihs, Roozbeh Mottaghi, Ali Farhadi
2023
ICLR

A common assumption when training embodied agents is that the impact of taking an action is stable; for instance, executing the "move ahead" action will always move the agent forward by a fixed… 

Can AI language models replace human participants?

Danica Dillion, Niket Tandon, Yuling Gu, Kurt Gray
2023
Trends in Cognitive Sciences

Recent work suggests that language models such as GPT can make human-like judgments across a number of domains. We explore whether and when language models might replace human participants in… 

S2abEL: A Dataset for Entity Linking from Scientific Tables

Yuze Lou, Bailey Kuehl, Erin Bransom, Doug Downey
2023
EMNLP

Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications. When applied to tables in…