Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Benchmarking Progress to Infant-Level Physical Reasoning in AI

Luca Weihs, Amanda Rose Yuile, Renée Baillargeon, Aniruddha Kembhavi
2022
TMLR

To what extent do modern AI systems comprehend the physical world? We introduce the open-access Infant-Level Physical Reasoning Benchmark (InfLevel) to gain insight into this question. We evaluate… 

Transparency Helps Reveal When Language Models Learn Meaning

Zhaofeng Wu, Will Merrill, Hao Peng, Noah A. Smith
2022
arXiv

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our… 

REV: Information-Theoretic Evaluation of Free-Text Rationales

Hanjie Chen, Faeze Brahman, Xiang Ren, Swabha Swayamdipta
2022
arXiv

Future work might explore evaluation that penalizes rationales which support incorrect predictions, thus bridging predictive performance with interpretability metrics.

Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

R. Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Yejin Choi
2022
arXiv

We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)… 

Machine-learned climate model corrections from a global storm-resolving model: Performance across the annual cycle

Anna Kwa, Spencer K. Clark, Brian Henn, and Christopher S. Bretherton
2022
ESSOAr

One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned state-dependent corrections to the prognosed model tendencies, such that the climate model… 

Pace v0.1: A python-based performance-portable implementation of the FV3 dynamical core

Johann Dahm, Eddie Davis, Florian Deconinck, and Oliver Fuhrer
2022
EGUsphere

Progress in leveraging current and emerging high-performance computing infrastructures using traditional weather and climate models has been slow. This has become known more broadly as the software… 

Correcting a 200 km Resolution Climate Model in Multiple Climates by Machine Learning From 25 km Resolution Simulations

S. Clark, Noah Brenowitz, B. Henn, L. Harris
2022
Journal of Advances in Modeling Earth Systems

Bretherton et al. (2022, https://doi.org/10.1029/2021MS002794) demonstrated a successful approach for using machine learning (ML) to help a coarse‐resolution global atmosphere model with real… 

Multi-Scale Contrastive Co-Training for Event Temporal Relation Extraction

Hao-Ren Yao, Luke Breitfeller, Aakanksha Naik, Carolyn Rosé
2022
arXiv

Extracting temporal relationships between pairs of events in texts is a crucial yet challenging problem for natural language understanding. Depending on the distance between the events, models must… 

Efficient Methods for Natural Language Processing: A Survey

Marcos Vinícius Treviso, Tianchu Ji, Ji-Ung Lee, Roy Schwartz
2022
arXiv

Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while remaining conservative with those resources. The resources may be data, time,… 

MetaICL: Learning to Learn In Context

Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi
2022
NAACL

We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set…