Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces

Kyle Lo, Joseph Chee Chang, Andrew Head, Daniel S. Weld
2023
arXiv

Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, the need… 

CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos

Seungju Han, Jack Hessel, Nouha Dziri, Youngjae Yu
2023
arXiv

Visual information is central to conversation: body gestures and facial expressions, for example, contribute to meaning that transcends words alone. To date, however, most neural conversational… 

Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models

Natalie Shapira, Mosh Levy, S. Alavi, Vered Shwartz
2023
EACL

The escalating debate on AI’s capabilities warrants developing reliable metrics to assess machine “intelligence.” Recently, many anecdotal examples were used to suggest that newer Large Language… 

Comparing Sentence-Level Suggestions to Message-Level Suggestions in AI-Mediated Communication

Liye Fu, Benjamin Newman, Maurice Jakesch, Sarah Kreps
2023
International Conference on Human Factors in Computing Systems

Traditionally, writing assistance systems have focused on short or even single-word suggestions. Recently, large language models like GPT-3 have made it possible to generate significantly longer… 

The Parallelism Tradeoff: Limitations of Log-Precision Transformers

William Merrill, Ashish Sabharwal
2023
TACL • ACL

Despite their omnipresence in modern NLP, characterizing the computational power of transformer neural nets remains an interesting open question. We prove that transformers whose arithmetic… 

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

Alexandra Chronopoulou, Matthew E. Peters, Alexander M. Fraser, Jesse Dodge
2023
Findings of EACL 2023

Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains. A parameter-efficient adaptation method suggests training an adapter for each… 

BotPercent: Estimating Twitter Bot Populations from Groups to Crowds

Zhaoxuan Tan, Shangbin Feng, Melanie Sclar, Yulia Tsvetkov
2023
arXiv

Twitter bot detection has become increasingly important in combating misinformation, identifying malicious online campaigns, and protecting the integrity of social media discourse. While existing… 

Specializing Smaller Language Models towards Multi-Step Reasoning

Yao Fu, Hao Peng, Litu Ou, Tushar Khot
2023
ICML

The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models (100+… 

Do Embodied Agents Dream of Pixelated Sheep?: Embodied Decision Making using Language Guided World Modelling

Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Roy Fox
2023
arXiv

Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world, which makes learning complex tasks with sparse rewards difficult. If initialized with knowledge… 

The Semantic Scholar Open Data Platform

Rodney Michael Kinney, Chloe Anastasiades, Russell Authur, Daniel S. Weld
2023
arXiv

The volume of scientific output is creating an urgent need for automated tools to help scientists keep up with developments in their field. Semantic Scholar (S2) is an open data platform and website…