Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Better Retrieval May Not Lead to Better Question Answering

Zhengzhong Liang, Tushar Khot, Steven Bethard, Ashish Sabharwal
2022
arXiv

Considerable progress has been made recently in open-domain question answering (QA) problems, which require Information Retrieval (IR) and Reading Comprehension (RC). A popular approach to improve… 

Scaling Creative Inspiration with Fine-Grained Functional Facets of Product Ideas

Tom Hope, Ronen Tamari, Hyeonsu Kang, Dafna Shahaf
2022
CHI

Web-scale repositories of products, patents and scientific papers offer an opportunity for building automated systems that scour millions of existing ideas and assist users in discovering novel… 

Saturated Transformers are Constant-Depth Threshold Circuits

William Merrill, Ashish Sabharwal, Noah A. Smith
2022
TACL

Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that… 

The Curious Case of Commonsense Intelligence

Yejin Choi
2022
Daedalus

Commonsense intelligence is a long-standing puzzle in AI. Despite considerable advances in deep learning, AI continues to be narrow and brittle due to its lack of common sense. Why is… 

From Who You Know to What You Read: Augmenting Scientific Recommendations with Implicit Social Networks

Hyeonsu Kang, Rafal Kocielnik, Andrew Head, Jonathan Bragg
2022
CHI

The ever-increasing pace of scientific publication necessitates methods for quickly identifying relevant papers. While neural recommenders trained on user interests can help, they still result in… 

Inferring Implicit Relations with Language Models

Uri Katz, Mor Geva, Jonathan Berant
2022
NAACL • UnImplicit

A prominent challenge for modern language understanding systems is the ability to answer implicit reasoning questions, where the required reasoning steps for answering the question are not mentioned… 

LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models

Mor Geva, Avi Caciularu, Guy Dar, Yoav Goldberg
2022
arXiv

The opaque nature and unexplained behavior of transformer-based language models (LMs) have spurred a wide interest in interpreting their predictions. However, current interpretation methods mostly… 

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation

Ofir Press, Noah A. Smith, M. Lewis
2022
ICLR

Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question has yet to be answered: how does a model achieve extrapolation at inference time for sequences that… 

S2AMP: A High-Coverage Dataset of Scholarly Mentorship Inferred from Publications

Shaurya Rohatgi, Doug Downey, Daniel King, Sergey Feldman
2022
JCDL

Mentorship is a critical component of academia, but is not as visible as publications, citations, grants, and awards. Despite the importance of studying the quality and impact of mentorship, there… 

Bursting Scientific Filter Bubbles: Boosting Innovation via Novel Author Discovery

Jason Portenoy, Marissa Radensky, Jevin D. West, Tom Hope
2022
CHI

Isolated silos of scientific research and the growing challenge of information overload limit awareness across the literature and hinder innovation. Algorithmic curation and recommendation, which…