Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


I Can't Believe There's No Images! Learning Visual Tasks Using Only Language Supervision

Sophia Gu, Christopher Clark, Aniruddha Kembhavi
2022
ICCV (International Conference on Computer Vision)

Many high-level skills that are required for computer vision tasks, such as parsing questions, comparing and contrasting semantics, and writing descriptions, are also required in other domains such… 

Pace v0.1: A Python-based performance-portable implementation of the FV3 dynamical core

Johann Dahm, Eddie Davis, Florian Deconinck, and Oliver Fuhrer
2022
EGUsphere

Progress in leveraging current and emerging high-performance computing infrastructures using traditional weather and climate models has been slow. This has become known more broadly as the software… 

Correcting a 200 km Resolution Climate Model in Multiple Climates by Machine Learning From 25 km Resolution Simulations

S. Clark, Noah Brenowitz, B. Henn, L. Harris
2022
Journal of Advances in Modeling Earth Systems

Bretherton et al. (2022, https://doi.org/10.1029/2021MS002794) demonstrated a successful approach for using machine learning (ML) to help a coarse‐resolution global atmosphere model with real… 

Multi-Scale Contrastive Co-Training for Event Temporal Relation Extraction

Hao-Ren Yao, Luke Breitfeller, Aakanksha Naik, Carolyn Rosé
2022
arXiv

Extracting temporal relationships between pairs of events in texts is a crucial yet challenging problem for natural language understanding. Depending on the distance between the events, models must… 

Efficient Methods for Natural Language Processing: A Survey

Marcos Vinícius Treviso, Tianchu Ji, Ji-Ung Lee, Roy Schwartz
2022
arXiv

Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while being conservative with resources. Those resources may be data, time,… 

MetaICL: Learning to Learn In Context

Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi
2022
NAACL

We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set… 

Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks

Akari Asai, Matt Gardner, Hannaneh Hajishirzi
2022
NAACL

Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open-domain question answering and fact verification. These models are… 

Robust fine-tuning of zero-shot models

Mitchell Wortsman, Gabriel Ilharco, Mike Li, Ludwig Schmidt
2022
CVPR

Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset).… 

Noisy Channel Language Model Prompting for Few-Shot Text Classification

Sewon Min, Michael Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
2022
ACL

We introduce a noisy channel approach for language model prompting in few-shot text classification. Instead of computing the likelihood of the label given the input (referred to as direct models),… 

FaVIQ: FAct Verification from Information-seeking Questions

Jungsoo Park, Sewon Min, Jaewoo Kang, Hannaneh Hajishirzi
2022
ACL

Despite significant interest in developing general-purpose fact-checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims. Existing…