Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.

LeTI: Learning to Generate from Textual Interactions

Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
2023
arXiv.org

Fine-tuning pre-trained language models (LMs) enhances the models' capabilities. Prior techniques fine-tune a pre-trained LM on input-output pairs (e.g., instruction fine-tuning) or with numerical…
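
The full paper concerns learning from textual interactions, such as interpreter error messages, rather than from numerical rewards alone. As a rough illustration (all names here are hypothetical, not the authors' code), one round of such a loop might collect feedback like this:

```python
# Illustrative sketch only: run a candidate program and keep the
# interpreter's error text as a textual training signal.
import subprocess
import sys
import tempfile

def run_and_collect_feedback(program: str) -> str:
    """Execute candidate code; return "" on success, else the traceback text."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.stderr  # the textual interaction: an error message, if any

def build_training_example(instruction: str, program: str, feedback: str) -> str:
    # Concatenate instruction, attempt, and textual feedback so the LM can be
    # fine-tuned to condition on the reported errors in later rounds.
    return f"{instruction}\n{program}\n### Feedback:\n{feedback or 'OK'}"
```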

Machine-Learned Climate Model Corrections From a Global Storm-Resolving Model: Performance Across the Annual Cycle

Anna Kwa, Spencer K. Clark, Brian Henn, Christopher S. Bretherton
2023
Journal of Advances in Modeling Earth Systems

One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned (ML) state-dependent corrections to the prognosed model tendencies, such that the climate model… 
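
Schematically (function names below are placeholders, not the paper's code), the learned correction is simply added to the model's own prognosed tendency at each time step:

```python
# Schematic only: a coarse-grid time step with an added machine-learned,
# state-dependent tendency correction.
import numpy as np

def corrected_step(state: np.ndarray, physics_tendency, ml_correction,
                   dt: float) -> np.ndarray:
    # physics_tendency: the coarse model's own prognosed tendency
    # ml_correction:    learned against a storm-resolving reference run
    tendency = physics_tendency(state) + ml_correction(state)
    return state + dt * tendency

# Example with dummy stand-ins for the two components:
state = np.zeros(4)
state = corrected_step(state, lambda s: -0.1 * s,
                       lambda s: 0.01 * np.ones_like(s), dt=900.0)
```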

From Centralized to Ad-Hoc Knowledge Base Construction for Hypotheses Generation

Shaked Launer-Wachs, Hillel Taub-Tabib, Jennie Tokarev Madem, Y. Shamay
2023
Journal of Biomedical Informatics

Objective: To demonstrate and develop an approach enabling individual researchers or small teams to create their own ad-hoc, lightweight knowledge bases tailored for specialized scientific interests,…

TESS: Text-to-Text Self-Conditioned Simplex Diffusion

Rabeeh Karimi Mahabadi, Jaesung Tae, Hamish Ivison, Arman Cohan
2023
arXiv.org

Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various domains with continuous-valued inputs. Despite the promises of fully non-autoregressive… 
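
For intuition, simplex-based text diffusion represents each token as a near-one-hot point over the vocabulary and corrupts it with Gaussian noise. The sketch below shows that forward mapping; the +k/-k mapping and the noising schedule are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a simplex-style token representation for diffusion.
import torch

def tokens_to_simplex_logits(token_ids: torch.Tensor, vocab_size: int,
                             k: float = 5.0) -> torch.Tensor:
    # +k at the true token index, -k elsewhere: a continuous-valued input a
    # diffusion model can noise and denoise, unlike discrete token ids.
    logits = torch.full((*token_ids.shape, vocab_size), -k)
    logits.scatter_(-1, token_ids.unsqueeze(-1), k)
    return logits

def noise(logits: torch.Tensor, t: float) -> torch.Tensor:
    # Variance-preserving forward step: pure signal at t=0, pure noise at t=1.
    return (1 - t) ** 0.5 * logits + t ** 0.5 * torch.randn_like(logits)
```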

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

Brihi Joshi, Ziyi Liu, Sahana Ramnath, Xiang Ren
2023
arXiv.org

Among the remarkable emergent capabilities of large language models (LMs) is free-text rationalization; beyond a certain scale, large LMs are capable of generating seemingly useful rationalizations,… 

Embedding Recycling for Language Models

Jon Saad-Falcon, Amanpreet Singh, Luca Soldaini, Doug Downey
2023
Findings of EACL

Training and inference with large neural models is expensive. However, for many application domains, while new tasks and models arise frequently, the underlying documents being modeled remain…
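
A minimal sketch of the recycling idea, under the assumption that activations from a frozen lower portion of the encoder are cached per document and reused by task-specific upper layers (hypothetical structure, not the paper's code):

```python
# Minimal sketch: compute lower-layer activations once per document, cache
# them, and let per-task upper layers reuse the cache on later runs.
import torch

activation_cache: dict[str, torch.Tensor] = {}

def encode_with_recycling(doc_id: str, tokens: torch.Tensor,
                          lower: torch.nn.Module,
                          upper: torch.nn.Module) -> torch.Tensor:
    if doc_id not in activation_cache:
        with torch.no_grad():                     # lower layers stay frozen
            activation_cache[doc_id] = lower(tokens)
    return upper(activation_cache[doc_id])        # only upper layers run per task
```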

Decomposed Prompting: A Modular Approach for Solving Complex Tasks

Tushar Khot, Harsh Trivedi, Matthew Finlayson, Ashish Sabharwal
2023
ICLR

Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual… 
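
The modular structure can be sketched in a few lines; the prompt strings and the `llm` callable below are placeholders for illustration, not the paper's actual few-shot prompts:

```python
# Rough sketch only: `llm` is any text-in/text-out callable.
from typing import Callable

def solve(task: str, llm: Callable[[str], str]) -> str:
    # 1. A decomposer prompt breaks the complex task into simpler sub-questions.
    sub_questions = llm(f"Decompose into sub-questions:\n{task}").splitlines()
    context = task
    for q in sub_questions:
        # 2. Each sub-question is routed to a dedicated sub-task prompt; its
        #    answer is appended so later steps can build on it.
        context += "\n" + llm(f"{context}\nQ: {q}\nA:")
    # 3. A final prompt composes the sub-answers into the overall answer.
    return llm(f"{context}\nFinal answer:")
```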

Complexity-Based Prompting for Multi-Step Reasoning

Yao Fu, Hao Peng, Ashish Sabharwal, Tushar Khot
2023
ICLR

We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences… 
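
The selection idea fits in a few lines: sample several reasoning chains, keep the most complex ones, and majority-vote over their answers. The step-counting criterion below is an assumed proxy for illustration:

```python
# Illustrative sketch: rank sampled (reasoning, answer) pairs by a simple
# step count and majority-vote over the most complex chains.
from collections import Counter

def complexity(chain: str) -> int:
    # Proxy for reasoning depth: count non-empty lines (assumed heuristic).
    return sum(1 for line in chain.splitlines() if line.strip())

def vote_over_complex_chains(samples: list[tuple[str, str]],
                             top_k: int = 5) -> str:
    ranked = sorted(samples, key=lambda s: complexity(s[0]), reverse=True)
    return Counter(answer for _, answer in ranked[:top_k]).most_common(1)[0][0]
```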

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Kyle Lo
2023
EACL

While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when… 

ArK: Augmented Reality with Knowledge Interactive Emergent Ability

Qiuyuan Huang, J. Park, Abhinav Gupta, Jianfeng Gao
2023
arXiv.org

Despite the growing adoption of mixed reality and interactive AI agents, it remains challenging for these systems to generate high-quality 2D/3D scenes in unseen environments. The common practice…