Ai2 Research: Papers

Explore a selection of our published work on a variety of key research challenges in AI.


CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

Abhilasha Ravichander, Matt Gardner, Ana Marasović
2022
EMNLP

The full power of human language-based communication cannot be realized without negation. All human languages have some form of negation. Despite this, negation remains a challenging phenomenon for… 

Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems

Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti
2022
Findings of EMNLP

Large transformer models can substantially improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore…
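
As a rough illustration of the ranking task AS2 poses (not the paper's efficient single-model ensemble), the sketch below scores each candidate sentence with several scorers and ranks by their mean; the toy scorers here are invented stand-ins for fine-tuned transformer rankers:

```python
# Minimal sketch of ensemble ranking for Answer Sentence Selection (AS2).
# The scorers are stand-ins for transformer cross-encoders; the paper
# realizes the ensemble far more efficiently inside a single model.
from typing import Callable, List, Tuple

Scorer = Callable[[str, str], float]

def rank_answers(question: str, candidates: List[str],
                 scorers: List[Scorer]) -> List[Tuple[str, float]]:
    """Score each candidate with every ensemble member; rank by the mean."""
    scored = []
    for cand in candidates:
        mean_score = sum(s(question, cand) for s in scorers) / len(scorers)
        scored.append((cand, mean_score))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy scorers standing in for fine-tuned rankers.
def overlap_scorer(q: str, a: str) -> float:
    q_tok, a_tok = set(q.lower().split()), set(a.lower().split())
    return len(q_tok & a_tok) / max(len(q_tok), 1)

def length_scorer(q: str, a: str) -> float:
    return 1.0 / (1.0 + abs(len(a.split()) - 10))

ranking = rank_answers(
    "When was the telephone invented?",
    ["The telephone was patented in 1876.", "Telephones are common today."],
    [overlap_scorer, length_scorer],
)
print(ranking)
```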

Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning

Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark
2022
EMNLP

Our goal is a question-answering (QA) system that can show how its answers are implied by its own internal beliefs via a systematic chain of reasoning. Such a capability would allow better…

GENIE: Toward Reproducible and Standardized Human Evaluation for Text Generation

Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Daniel S. Weld
2022
EMNLP

While often assumed to be a gold standard, effective human evaluation of text generation remains an important, open area for research. We revisit this problem with a focus on producing consistent…

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Michael Hassid, Hao Peng, Daniel Rotem, Roy Schwartz
2022
Findings of EMNLP

The attention mechanism is considered the backbone of the widely used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this…
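
For context, here is a minimal NumPy sketch of the standard attention computation the abstract refers to: each input yields its own attention matrix as the row-wise softmax of scaled dot products. The shapes and random input are illustrative only:

```python
# Minimal NumPy sketch of scaled dot-product (self-)attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns contextualized outputs."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # input-specific attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row softmax
    return weights @ V                        # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                   # 5 tokens, dimension 8
out = scaled_dot_product_attention(x, x, x)   # self-attention
print(out.shape)                              # (5, 8)
```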

In-Context Learning for Few-Shot Dialogue State Tracking

Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Mari Ostendorf
2022
Findings of EMNLP

Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, zero- and few-shot learning for dialogue tasks presents an exciting opportunity. In this work, we propose an…
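
Below is a hedged sketch of the in-context learning setup such work builds on; the exemplar format, slot names, and `build_dst_prompt` helper are invented for illustration. A few (dialogue, state) pairs are concatenated with the test dialogue, and a frozen language model completes the final `State:` line:

```python
# Sketch of prompt construction for in-context dialogue state tracking.
# Exemplars and slot format are invented; the paper's retrieval and
# prompt design differ in detail.

def build_dst_prompt(exemplars, test_dialogue):
    parts = []
    for dialogue, state in exemplars:
        parts.append(f"Dialogue: {dialogue}\nState: {state}\n")
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n".join(parts)

exemplars = [
    ("User: Book a table for two at 7pm.",
     "restaurant-people=2, restaurant-time=19:00"),
    ("User: I need a taxi to the airport.",
     "taxi-destination=airport"),
]
prompt = build_dst_prompt(exemplars, "User: Find me a cheap hotel downtown.")
print(prompt)  # a frozen LM would complete the final 'State:' line
```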

Knowledge Transfer from Answer Ranking to Answer Generation

Matteo Gabburo, Rik Koncel-Kedziorski, Siddhant Garg, Alessandro Moschitti
2022
EMNLP

Recent studies show that Question Answering (QA) based on Answer Sentence Selection (AS2) can be improved by generating a refined answer from the top-k ranked answer sentences (termed GenQA). This…
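
As a hedged sketch of the GenQA recipe the abstract describes, the function below fuses the question with the top-k ranked candidates into one generator input; `generate`, the `[CAND]` separator, and the toy echo generator are hypothetical stand-ins, not the paper's interface:

```python
# Sketch of the GenQA idea: condition a generator on the top-k sentences
# from an AS2 ranker to produce a single improved answer.

def genqa_answer(question, ranked_sentences, k, generate):
    """Fuse the question with the top-k ranked candidates into one input."""
    context = " ".join(f"[CAND] {s}" for s in ranked_sentences[:k])
    return generate(f"question: {question} {context}")

# Toy generator that just echoes the best candidate, for demonstration only.
echo = lambda prompt: prompt.split("[CAND]")[1].strip()
print(genqa_answer("Who patented the telephone?",
                   ["Bell patented the telephone in 1876.",
                    "Phones are common."],
                   k=2, generate=echo))
```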

Lexical Generalization Improves with Larger Models and Longer Training

Elron Bandel, Yoav Goldberg, Yanai Elazar
2022
Findings of EMNLP

While fine-tuned language models perform well on many tasks, they have also been shown to rely on superficial features such as lexical overlap. Excessive reliance on such heuristics can lead to…
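
To make the heuristic concrete, this snippet implements the lexical-overlap feature the abstract refers to, plus an example where it misleads: maximal overlap despite the sentences meaning different things. The tokenization is a deliberately crude assumption:

```python
# The kind of surface feature the abstract warns about: a lexical-overlap
# "score" that ignores word order and meaning entirely.
import string

def lexical_overlap(premise: str, hypothesis: str) -> float:
    strip = str.maketrans("", "", string.punctuation)
    tok = lambda s: set(s.lower().translate(strip).split())
    p, h = tok(premise), tok(hypothesis)
    return len(p & h) / max(len(h), 1)

# Maximal overlap, yet the two sentences say different things:
print(lexical_overlap("The doctor saw the lawyer.",
                      "The lawyer saw the doctor."))  # 1.0
```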

Modeling Context With Linear Attention for Scalable Document-Level Translation

Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith
2022
Findings of EMNLP

Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations. However, these models, predominantly based on transformers, are… 
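
For background, here is a minimal NumPy sketch of kernelized linear attention, the family of methods the title refers to: with a feature map phi, attention can be computed as phi(Q)(phi(K)^T V) with a per-query normalizer, which is linear rather than quadratic in sequence length. The elu(x)+1 feature map is a common choice from the linear-attention literature, not necessarily this paper's:

```python
# Minimal NumPy sketch of (non-causal) kernelized linear attention.
import numpy as np

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, a common choice

def linear_attention(Q, K, V):
    """Q, K, V: (seq_len, d). Cost O(seq_len * d^2), not O(seq_len^2 * d)."""
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                       # (d, d) summary of keys and values
    z = Qp @ Kp.sum(axis=0)             # per-query normalizer
    return (Qp @ kv) / z[:, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
print(linear_attention(x, x, x).shape)  # (6, 4)
```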

On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization

Shruti Palaskar, Akshita Bhagia, Yonatan Bisk, Ana Marasović
2022
Findings of EMNLP

Integrating vision and language has gained notable attention following the success of pretrained language models. Despite that, only a fraction of emerging multimodal models is suitable for text…