Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

Global Precipitation Correction Across a Range of Climates Using CycleGAN

Jeremy J. McGibbon, Spencer K. Clark, Brian Henn, Christopher S. Bretherton
2023
ESSOAr

Accurate precipitation simulations for various climate scenarios are critical for understanding and predicting the impacts of climate change. This study employs a Cycle-generative adversarial… 
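The abstract is truncated above; for orientation, here is a minimal sketch of the cycle-consistency objective that CycleGAN-style bias correction rests on. It is illustrative only, assuming toy 1x1-conv "generators" and random precipitation grids; it is not the paper's model, data, or training setup.

```python
# Minimal cycle-consistency sketch (illustrative; not the paper's model).
import torch
import torch.nn as nn

# Hypothetical generators mapping between a climate model's precipitation
# field (domain A) and a reference climate (domain B).
G_ab = nn.Conv2d(1, 1, kernel_size=1)  # model -> reference
G_ba = nn.Conv2d(1, 1, kernel_size=1)  # reference -> model
l1 = nn.L1Loss()

def cycle_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # A -> B -> A and B -> A -> B should round-trip to the inputs.
    return l1(G_ba(G_ab(a)), a) + l1(G_ab(G_ba(b)), b)

a = torch.rand(4, 1, 32, 32)  # toy simulated precipitation grids
b = torch.rand(4, 1, 32, 32)  # toy reference precipitation grids
print(cycle_loss(a, b))
```

In a full CycleGAN this term is trained jointly with adversarial losses on both domains; only the round-trip penalty is shown here.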

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation

Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Yejin Choi
2023
ACL

Commonsense capabilities of pre-trained language models dramatically improve with scale, leading many to believe that scale is the only winning recipe. But is it? Here, we investigate an alternative… 

Let Me Teach You: Pedagogical Foundations of Feedback for Language Models

Beatriz Borges, Niket Tandon, Tanja Käser, Antoine Bosselut
2023
arXiv

Natural Language Feedback (NLF) is an increasingly popular avenue to align Large Language Models (LLMs) to human preferences. Despite the richness and diversity of the information it can convey, NLF… 

Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance

Yao Fu, Litu Ou, Mingyu Chen, Tushar Khot
2023
ICML 2023 Workshop on Challenges in Deployable Generative AI

As large language models (LLMs) are continuously being developed, their evaluation becomes increasingly important yet challenging. This work proposes Chain-of-Thought Hub, an open-source evaluation… 
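The abstract above stops short of the mechanics; a common pattern in open chain-of-thought evaluation suites is to extract a final answer from the model's reasoning text and score exact-match accuracy. The sketch below assumes that pattern with hypothetical helper names (`extract_final_answer`, `accuracy`); it is not code from the Chain-of-Thought Hub repository.

```python
# Toy CoT scoring harness (hypothetical; not the Chain-of-Thought Hub code).
import re

def extract_final_answer(completion: str) -> str | None:
    # Heuristic: take the last number mentioned in the reasoning chain.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None

def accuracy(completions: list[str], golds: list[str]) -> float:
    hits = sum(extract_final_answer(c) == g for c, g in zip(completions, golds))
    return hits / len(golds)

print(accuracy(["... so 3 + 4 = 7. The answer is 7.",
                "... therefore the answer is 12."], ["7", "13"]))  # 0.5
```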

ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews

Mike D'Arcy, Alexis Ross, Erin Bransom, Doug Downey
2023
arXiv

Revising scientific papers based on peer feedback is a challenging task that requires not only deep scientific knowledge and reasoning, but also the ability to recognize the implicit requests in… 

Perspective: Large Language Models in Applied Mechanics

Neal R. Brodnik, Samuel Carton, Caelin Muir, S. Daly
2023
Journal of Applied Mechanics

Large language models (LLMs), such as ChatGPT and PaLM, are able to perform sophisticated text comprehension and generation tasks with little or no training. Alongside their broader societal… 

Phone2Proc: Bringing Robust Robots Into Our Chaotic World

Matt Deitke, Rose Hendrix, Luca Weihs, Aniruddha Kembhavi
2023
CVPR

Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to… 

Visual Programming: Compositional visual reasoning without training

Tanmay Gupta, Aniruddha Kembhavi
2023
CVPR

We present VISPROG, a neuro-symbolic approach to solving complex and compositional visual tasks given natural language instructions. VISPROG avoids the need for any task-specific training. Instead,… 
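To make the neuro-symbolic idea concrete: VISPROG has an LLM emit a short program whose steps invoke vision modules, and an interpreter executes it step by step. Below is a toy interpreter in that spirit; the program text is hard-coded rather than LLM-generated, and `FIND`/`COUNT` are hypothetical stand-ins, not the paper's modules.

```python
# Toy program interpreter in the spirit of VISPROG (all names hypothetical).
def FIND(image, obj):   # stand-in for an object-detection module
    return f"box({obj})"

def COUNT(boxes):       # stand-in for a counting module
    return 1

MODULES = {"FIND": FIND, "COUNT": COUNT}

def run_program(program: str, image) -> dict:
    env = {"IMAGE": image}
    for line in program.strip().splitlines():
        target, call = [s.strip() for s in line.split("=", 1)]
        name, argstr = call.split("(", 1)
        args = [env.get(a.strip(), a.strip().strip("'"))
                for a in argstr.rstrip(")").split(",")]
        env[target] = MODULES[name](*args)  # dispatch each step to its module
    return env

program = """
BOXES = FIND(IMAGE, 'dog')
ANSWER = COUNT(BOXES)
"""
print(run_program(program, image="img.jpg")["ANSWER"])  # -> 1
```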

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks

Nikil Selvam, Sunipa Dev, Daniel Khashabi, Kai-Wei Chang
2023
ACL

How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given language model? In this work, we study this question by… 

LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion

Dongfu Jiang, Xiang Ren, Bill Yuchen Lin
2023
ACL

We present LLM-Blender, an ensembling framework designed to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models (LLMs). Our…
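The abstract is cut off, but LLM-Blender's two stages are pairwise ranking of candidate outputs (PairRanker) followed by generative fusion of the top candidates (GenFuser). The sketch below shows only the pairwise-ranking stage, with a trivial length heuristic standing in for the trained comparator; `prefer` and `rank_candidates` are hypothetical names, not the paper's API.

```python
# Pairwise-ranking sketch (PairRanker idea; trivial stand-in comparator).
from itertools import combinations

def prefer(a: str, b: str) -> str:
    # A trained ranker would score (input, a, b); this toy prefers longer text.
    return a if len(a) >= len(b) else b

def rank_candidates(candidates: list[str]) -> list[str]:
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        wins[prefer(a, b)] += 1
    return sorted(candidates, key=wins.get, reverse=True)

outputs = ["short answer", "a somewhat longer candidate answer", "mid answer"]
print(rank_candidates(outputs)[:2])  # top candidates would go to the fuser
```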