
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents

Sam Powers, Eliot Xing, Eric Kolve, A. Gupta
2021
CoLLAs

Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present… 

Container: Context Aggregation Network

Peng Gao, Jiasen Lu, Hongsheng Li, Aniruddha Kembhavi
2021
arXiv

Convolutional neural networks (CNNs) are ubiquitous in computer vision, with a myriad of effective and efficient variations. Recently, Transformers – originally introduced in natural language… 

SciA11y: Converting Scientific Papers to Accessible HTML

Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Daniel S. Weld
2021
ASSETS

We present SciA11y, a system that renders inaccessible scientific paper PDFs into HTML. SciA11y uses machine learning models to extract and understand the content of scientific PDFs, and reorganizes… 

Can Machines Learn Morality? The Delphi Experiment

Liwei Jiang, Chandra Bhagavatula, Jenny Liang, Yejin Choi
2021
arXiv

As AI systems become increasingly powerful and pervasive, there are growing concerns about machines’ morality or a lack thereof. Yet, teaching morality to machines is a formidable task, as morality… 

Delphi: Towards Machine Ethics and Norms

Liwei Jiang, Jena D. Hwang, Chandrasekhar Bhagavatula, Yejin Choi
2021
arXiv

Failing to account for moral norms could notably hinder AI systems’ ability to interact with people. AI systems empirically require social, cultural, and ethical norms to make moral judgments… 

Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

Peter West, Ximing Lu, Ari Holtzman, Yejin Choi
2021
ACL

Publicly available, large pretrained Language Models (LMs) generate text with remarkable quality, but only sequentially from left to right. As a result, they are not immediately applicable to… 

SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts

Arie Cattan, Sophie Johnson, Daniel S. Weld, Tom Hope
2021
AKBC

Determining coreference of concept mentions across multiple documents is fundamental for natural language understanding. Work on cross-document coreference resolution (CDCR) typically considers… 

Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study

Rahul Nadkarni, David Wadden, Iz Beltagy, Tom Hope
2021
AKBC

Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes. Predicting missing links in these graphs can boost many important applications, such as drug… 

Competency Problems: On Finding and Removing Artifacts in Language Data

Matt Gardner, William Cooper Merrill, Jesse Dodge, Noah A. Smith
2021
EMNLP

Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have “spurious” instead of… 

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang
2021
ACL-IJCNLP

Is it possible to use natural language to intervene in a model’s behavior and alter its prediction in a desired way? We investigate the effectiveness of natural language interventions for…