About AllenNLP

The AllenNLP team envisions language-centered AI that equitably serves humanity. We work to improve the performance and accountability of NLP systems and to advance the scientific methodologies for evaluating and understanding those systems. We deliver high-impact research of our own, along with masterfully engineered open-source tools that accelerate NLP research around the world.

Featured Software

AllenNLP Library

A complete platform for building state-of-the-art natural language processing models and solving NLP tasks in PyTorch.

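As a rough illustration of the library's high-level interface, the sketch below loads a pretrained model through AllenNLP's Predictor API; the model archive URL and output keys are illustrative and may have changed.

```python
# A minimal sketch of using AllenNLP's high-level Predictor API.
# Assumes `pip install allennlp allennlp-models`; the archive URL below is
# illustrative and may have moved or been superseded.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)

# Run semantic role labeling on a single sentence; the output keys depend on
# the model that was loaded.
result = predictor.predict(sentence="AllenNLP helps researchers build NLP models quickly.")
print(result["verbs"])
```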

AI2 Tango

A Python library for choreographing your machine learning research. Construct machine learning experiments out of repeatable, reusable steps.

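As a hedged sketch of what a repeatable, reusable step can look like, the example below assumes Tango's Step base class and registration decorator; the step name and logic are invented for illustration.

```python
# A minimal sketch of an AI2 Tango step, assuming the `Step` base class and
# `Step.register` decorator from the tango package; the step name and logic
# are invented for illustration.
from tango import Step


@Step.register("count_tokens")
class CountTokens(Step):
    DETERMINISTIC = True  # the same input always produces the same output
    CACHEABLE = True      # Tango may cache the result and reuse it later

    def run(self, text: str) -> int:  # type: ignore[override]
        # A trivially simple "experiment" step: count whitespace-separated tokens.
        return len(text.split())
```

Steps like this are typically composed in a configuration file and executed with the `tango run` command, which caches each step's output so repeated experiments can reuse earlier work.
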
  • A Controllable Model of Grounded Response Generation

    Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan • AAAI 2022 • Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process. This control is essential to ensure that users' semantic intents are satisfied and to impose a degree of specificity…
  • FLEX: Unifying Evaluation for Few-Shot NLP

    Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy • NeurIPS 2021 • Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which…
  • Natural Adversarial Objects

    Felix Lau, Nishant Subramani, Sasha Harrison, Aerin Kim, E. Branson, Rosanne Liu • NeurIPS 2021 • Although state-of-the-art object detection methods have shown compelling performance, models often are not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of…
  • One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval

    Akari Asai, Xinyan Yu, Jungo Kasai, Hanna Hajishirzi • NeurIPS 2021 • We present CORA, a Cross-lingual Open-Retrieval Answer Generation model that can answer questions across many languages even when language-specific annotated data or knowledge sources are unavailable. We introduce a new dense passage retrieval algorithm that…
  • Teach Me to Explain: A Review of Datasets for Explainable NLP

    Sarah Wiegreffe and Ana Marasović • NeurIPS 2021 • Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated explanations. These explanations are used downstream in three ways: as data augmentation to improve performance on a predictive task, as a loss signal to train models to produce…

Qasper

Question Answering on Research Papers

A dataset containing 1,585 papers with 5,049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
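A minimal sketch of browsing Qasper is shown below, assuming the dataset is mirrored on the Hugging Face Hub under the id "allenai/qasper" (the id, split name, and fields are assumptions; consult the dataset card for the authoritative schema).

```python
# A minimal sketch of inspecting Qasper with the Hugging Face `datasets` library.
# The dataset id "allenai/qasper" and the split name are assumptions; check the
# dataset card for the exact schema.
from datasets import load_dataset

qasper = load_dataset("allenai/qasper", split="train")
print(len(qasper), "papers in this split")

paper = qasper[0]
print(sorted(paper.keys()))  # inspect the fields available for one paper
```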

IIRC

A Dataset of Incomplete Information Reading Comprehension Questions

13K reading comprehension questions on Wikipedia paragraphs that require following links in those paragraphs to other Wikipedia pages.

IIRC is a crowdsourced dataset of information-seeking questions that require models to identify and then retrieve necessary information missing from the original context. Each context is a paragraph from English Wikipedia accompanied by a set of links to other Wikipedia pages; answering a question requires finding the appropriate links to follow and retrieving the missing information from those linked pages.

ZEST: ZEroShot learning from Task descriptions

ZEST is a benchmark for zero-shot generalization to unseen NLP tasks, with 25K labeled instances across 1,251 different tasks.

ZEST tests whether NLP systems can perform unseen tasks in a zero-shot way, given a natural language description of the task. It is an instantiation of our proposed framework "learning from task descriptions". The tasks include classification, typed entity extraction and relationship extraction, and each task is paired with 20 different annotated (input, output) examples. ZEST's structure allows us to systematically test whether models can generalize in five different ways.

MOCHA

A benchmark for training and evaluating generative reading comprehension metrics.

Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations. MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train an evaluation metric: LERC, a Learned Evaluation metric for Reading Comprehension, to mimic human judgement scores.

The curse of neural toxicity: AI2 and UW researchers help computers watch their language

GeekWire
March 6, 2021

Green AI

CACM
November 18, 2020

Your favorite A.I. language tool is toxic

Fortune
September 29, 2020

Deep Learning’s Climate Change Problem

Forbes
June 17, 2020

Why are so many AI systems named after Muppets?

The Verge
December 11, 2019