Forward this message to friends and colleagues who want to stay up to date with research and news from the Allen Institute for AI.  Subscribe here

In this edition: Featured work from AI2 at EMNLP 2020, a new AI2 Incubator graduate, and the recipient of the Allen AI Outstanding Engineer Scholarship.

AI2 at EMNLP 2020

AI2 is proud to have a record-breaking 48 papers at the upcoming EMNLP 2020, with 32 appearing at the main conference and 16 accepted to the new sister publication, Findings.

Learn more about some of the newest research coming out of AI2 in the highlighted papers below, and check out the full list of EMNLP and Findings 2020 papers from AI2 on our website.
EMNLP | X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers describes how our new X-LXMERT model has image generation capabilities that rival existing state-of-the-art generative models while retaining question answering and captioning abilities. Try it out for yourself, and learn more in this coverage by MIT Tech Review: These weird, unsettling photos show that AI is getting smarter.
Findings | RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models investigates whether popular language models are safe to deploy, and what risks they pose of producing offensive, problematic, or toxic content. Check out our interactive Paper Explainer about this research, and this coverage by Fortune: Your favorite A.I. language tool is toxic.
EMNLP | SciSight: Combining faceted navigation and research group detection for COVID-19 exploratory scientific search presents a system that can explore associations between biomedical facets automatically extracted from papers (e.g., genes, drugs, diseases, patient outcomes), as well as combine textual and network information to search and visualize groups of researchers and their connections. Check out the SciSight demo here.
Findings | TLDR: Extreme Summarization of Scientific Documents presents a new automatic paper summarization model that leverages expert background knowledge and complex language understanding, plus an accompanying dataset. Try it out with the SciTLDR demo.
Findings | UnQovering Stereotyping Biases via Underspecified Questions discusses our work on identifying biases in question answering (QA) models. If these models are blindly deployed in real-life settings, the biases within them could cause real harm, raising the question: how extensive are social stereotypes in question-answering models? Explore more in the UnQover Demo.
Check out the full list of EMNLP and Findings 2020 papers from AI2 on our website →
More from AI2

WhyLabs funded

We're excited to announce the newest AI2 Incubator spinout, WhyLabs, which raised a $4M seed round from Madrona Venture Group, the AI2 Seed Fund, Bezos Expeditions, Defy Partners, and Ascend VC. Learn more in this coverage from GeekWire:
Amazon vets raise $4M from Madrona, Bezos Expeditions, others for AI2 spinout WhyLabs


Allen AI Outstanding Engineer Scholarship

AI2 is pleased to announce the recipient of the Allen AI Outstanding Engineer Scholarship for 2021: UW Allen School junior Sanjana Chintalapati. She is an excellent student and an accessibility advocate, and she has a bright future ahead of her in AI. Learn more in our article on the AI2 Blog.
Copyright © 2020 The Allen Institute for Artificial Intelligence, All rights reserved.
For media inquiries please contact
AI2 Newsletter Archive