Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories
Given that Large Language Models (LLMs) have made significant progress in writing code, can they now be used to autonomously reproduce results from research repositories? Such a capability would be…
Scalable Data Ablation Approximations for Language Models through Modular Training and Merging
Training data compositions for Large Language Models (LLMs) can significantly affect their downstream performance. However, a thorough data ablation study exploring large sets of candidate data…
Merge to Learn: Efficiently Adding Skills to Language Models with Model Merging
Adapting general-purpose language models to new skills is currently an expensive process that must be repeated as new instruction datasets targeting new skills are created, or can cause the models…
ComPO: Community Preferences for Language Model Personalization
Conventional algorithms for training language models (LMs) with human feedback rely on preferences that are assumed to account for an "average" user, disregarding subjectivity and finer-grained…
CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization
Language agents have shown some ability to interact with an external environment, e.g., a virtual world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup…
m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks
Real-world multi-modal problems are rarely solved by a single machine learning model, and often require multi-step computational plans that involve stitching several models. Tool-augmented LLMs hold…
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
Today's most advanced multimodal models remain proprietary. The strongest open-weight models rely heavily on synthetic data from proprietary VLMs to achieve good performance, effectively distilling…
Application of the AI2 Climate Emulator to E3SMv2's global atmosphere model, with a focus on precipitation fidelity
Can the current successes of global machine learning-based weather simulators be generalized beyond 2-week forecasts to stable and accurate multiyear runs? The recently developed AI2 Climate…
OLMoE: Open Mixture-of-Experts Language Models
We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain…
Pushing the frontiers in climate modelling and analysis with machine learning
Climate modelling and analysis are facing new demands to enhance projections and climate information. Here we argue that now is the time to push the frontiers of machine learning beyond…