AI2 Key Scientific Challenges Program 2017

The Allen Institute for Artificial Intelligence (AI2) has selected 12 proposals from researchers across several institutions to receive $10,000 in unrestricted funding to support their work. Their research will focus on key scientific challenges of interest to AI2 that also have the potential to provide significant advances to the greater artificial intelligence research community. Awardees receive unique access to AI2 resources, including an AI2 mentor, early access to new AI2 data and code, and a potential internship at AI2 to further develop particularly exciting advances. In return, we have asked them to commit to openly sharing the results of their work with the broader research community. Updates will be announced and results made available here as research progresses.

Key Scientific Challenges Awardees

  • David Alvarez-Melis, Massachusetts Institute of Technology: Perturbation-based approaches to interpretability for complex machine learning models
  • Antoine Bosselut, University of Washington: Learning Textual Simulators of Common Sense Knowledge
  • Anne Cocos, University of Pennsylvania: Convert PPDB into a taxonomically-structured, sense-aware, lexical knowledge base
  • Mayank Kejriwal, Information Sciences Institute: ELMO: Extracting, Linking, Modeling and Indexing Tables at Scale from Academic Literature
  • Jonathan Kummerfeld, University of Michigan: Scaling up NLP Datasets with Adaptive Annotation Algorithms
  • Craig Eric Larson, Virginia Commonwealth University: Geometry Problem Representations
  • Arindam Mitra, Arizona State University: Knowledge Representation, Reasoning and Declarative Problem Solving for Aristo
  • Minjoon Seo, University of Washington: Distributed Interrogative Decomposition of Natural Language for Learning to Read and Reason
  • Vered Shwartz, Bar-Ilan University: Injecting Lexical Inference Knowledge into Neural Textual Inference
  • Anders Søgaard, University of Copenhagen: Searching for Papers with Papers
  • Katie Stasaski, University of California, Berkeley: Rationale Generation: A Deep Learning Approach for Explaining Math Problems to Students
  • Alane Suhr, Cornell University: Mapping Context-Dependent Natural Language to Executable Code