Ai2 Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Climate sensitivity and relative humidity changes in global storm-resolving model simulations of climate change

T. Merlis, Kai-Yuan Cheng, Ilai Guendelman, Stephan Fueglistaler
2024
Science Advances

The climate simulation frontier of a global storm-resolving model (GSRM; or k-scale model because of its kilometer-scale horizontal resolution) is deployed for climate change simulations. The… 

PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators

Kuo-Hao Zeng, Zichen Zhang, Kiana Ehsani, Luca Weihs
2024
CoRL

We present PoliFormer (Policy Transformer), an RGB-only indoor navigation agent trained end-to-end with reinforcement learning at scale that generalizes to the real world without adaptation despite…

Probabilistic Emulation of a Global Climate Model with Spherical DYffusion

Salva Rühling Cachay, Brian Henn, Oliver Watt-Meyer, Rose Yu
2024
ICML • ML4ESM

Data-driven deep learning models are on the verge of transforming global weather forecasting. It is an open question whether this success can extend to climate modeling, where long inference rollouts and…

PDDLEGO: Iterative Planning in Textual Environments

Li Zhang, Peter Jansen, Tianyi Zhang, Niket Tandon
2024
STARSEM

Planning in textual environments has been shown to be a long-standing challenge even for current models. A recent, promising line of work uses LLMs to generate a formal representation of the…

Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action

Jiasen Lu*, Christopher Clark*, Sangho Lee*, Aniruddha Kembhavi
2024
CVPR

We present Unified-IO 2, the first autoregressive multimodal model that is capable of understanding and generating images, text, audio, and action. To unify different modalities, we tokenize inputs… 

ADaPT: As-Needed Decomposition and Planning with Language Models

Archiki Prasad, Alexander Koller, Mareike Hartmann, Tushar Khot
2024
NAACL Findings

Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment. Recent works employ LLMs-as-agents in broadly two… 

Evaluating In-Context Learning of Libraries for Code Generation

Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi
2024
NAACL

Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly promising area is their ability to interpret code modules from… 

Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing

Jaehun Jung, Peter West, Liwei Jiang, Yejin Choi
2024
NAACL

We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization that distills a high-quality dataset and model from a low-quality teacher that itself cannot…

JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models

Jillian R. Fisher, Ximing Lu, Jaehun Jung, Yejin Choi
2024
NAACL

The permanence of online content, combined with enhanced authorship identification techniques, calls for stronger computational methods to protect the identity and privacy of online authorship…

Leveraging Code to Improve In-context Learning for Semantic Parsing

Ben Bogin, Shivanshu Gupta, Peter Clark, Ashish Sabharwal
2024
NAACL

In-context learning (ICL) is an appealing approach for semantic parsing due to its few-shot nature and improved generalization. However, learning to parse to rare domain-specific languages (DSLs)…