Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


In-Context Learning for Few-Shot Dialogue State Tracking

Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Mari Ostendorf
2022
Findings of EMNLP

Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, zero- and few-shot learning for dialogue tasks presents an exciting opportunity. In this work, we propose an… 

Knowledge Transfer from Answer Ranking to Answer Generation

Matteo Gabburo, Rik Koncel-Kedziorski, Siddhant Garg, Alessandro Moschitti
2022
EMNLP

Recent studies show that Question Answering (QA) based on Answer Sentence Selection (AS2) can be improved by generating an improved answer from the top-k ranked answer sentences (termed GenQA). This… 

Lexical Generalization Improves with Larger Models and Longer Training

Elron Bandel, Yoav Goldberg, Yanai Elazar
2022
Findings of EMNLP

While fine-tuned language models perform well on many tasks, they were also shown to rely on superficial surface features such as lexical overlap. Excessive utilization of such heuristics can lead to… 

Modeling Context With Linear Attention for Scalable Document-Level Translation

Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith
2022
Findings of EMNLP

Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations. However, these models, predominantly based on transformers, are… 

On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization

Shruti Palaskar, Akshita Bhagia, Yonatan Bisk, Ana Marasović
2022
Findings of EMNLP

Integrating vision and language has gained notable attention following the success of pretrained language models. Despite that, a fraction of emerging multimodal models is suitable for text… 

Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection

Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti
2022
EMNLP

An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant… 

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Daniel Khashabi
2022
EMNLP

How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce SUPER-NATURALINSTRUCTIONS, a benchmark of 1,616… 

Teaching Broad Reasoning Skills via Decomposition-Guided Contexts

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal
2022
EMNLP

Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion.… 

Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement

Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Clark
2022
EMNLP

Our goal is a teachable reasoning system for question-answering (QA), where a user can interact with faithful answer explanations, and correct its errors so that the system improves over time. Our… 

Twist Decoding: Diverse Generators Guide Each Other

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
EMNLP

Natural language generation technology has recently seen remarkable progress with large-scale training, and many natural language applications are now built upon a wide range of generation models.…