
NSF Mid-Scale RI-2: Open Multimodal AI Infrastructure to Accelerate Science

NSF OMAI

NSF OMAI - Developing fully open AI models to fuel scientific innovation

The NSF OMAI project is supported by the U.S. National Science Foundation (NSF) and NVIDIA to create national-level AI infrastructure, built by Ai2. As part of NSF OMAI, Ai2 will develop fully open AI models and solutions to accelerate scientific discovery, building on its fully open models like Olmo and Molmo.

NSF OMAI news

August 2025 - NSF and NVIDIA award Ai2 a combined $152M to support building a national-level, fully open AI ecosystem

Ai2 has been awarded a combined $152 million from the U.S. National Science Foundation (NSF) and NVIDIA as part…

March 2026 - MolmoPoint: Better pointing architecture for vision-language models

MolmoPoint is a new vision-language model architecture that replaces text-based coordinate outputs with a more natural, token-based pointing mechanism that directly selects regions from visual features.
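
A rough sketch of the general idea in Python (the names, shapes, and scoring scheme below are hypothetical illustrations, not the released MolmoPoint architecture): instead of decoding coordinates as text, a pointing head can score visual patch features directly and select a region.

    import torch

    # Hypothetical token-based pointing: score each visual patch embedding
    # against a pointing query produced by the language model, then pick
    # the best-matching region instead of generating "x=..., y=..." text.
    grid, dim = 24, 1024
    patch_feats = torch.randn(grid * grid, dim)   # from the vision encoder
    point_query = torch.randn(dim)                # from the language model

    scores = patch_feats @ point_query            # one score per patch
    probs = torch.softmax(scores, dim=-1)
    best = int(probs.argmax())
    row, col = divmod(best, grid)
    print(f"pointed at patch ({row}, {col}) with p={probs[best]:.3f}")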

March 2026 - Introducing Olmo Hybrid: Combining transformers and linear RNNs for superior scaling

Olmo Hybrid is a fully open 7B language model that combines transformer attention with linear RNN layers to achieve greater expressivity and significantly improved data and compute efficiency compared to pure transformer models.
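
A minimal sketch of the hybrid pattern, assuming a toy diagonal linear recurrence and a simple interleaving ratio (both illustrative; the post describes the actual design):

    import torch
    import torch.nn as nn

    class LinearRNN(nn.Module):
        """Toy diagonal linear recurrence: h_t = a * h_{t-1} + b * x_t."""
        def __init__(self, dim):
            super().__init__()
            self.a = nn.Parameter(0.9 * torch.rand(dim))  # per-channel decay
            self.b = nn.Parameter(torch.ones(dim))

        def forward(self, x):                    # x: (batch, seq, dim)
            h = x.new_zeros(x.shape[0], x.shape[2])
            outs = []
            for t in range(x.shape[1]):          # constant-size state, no KV cache
                h = self.a * h + self.b * x[:, t]
                outs.append(h)
            return torch.stack(outs, dim=1)

    # Interleave: mostly linear-RNN layers, full attention every few blocks.
    def hybrid_layers(dim=256, n_layers=8, attn_every=4):
        return nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            if (i + 1) % attn_every == 0 else LinearRNN(dim)
            for i in range(n_layers)
        )

The linear RNN layers carry a fixed-size state across the sequence, which is where hybrids gain efficiency over attention's growing key-value cache.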

January 2026 - Open Coding Agents: Fast, accessible coding agents that adapt to any repo

SERA is the first in our family of Open Coding Agents, achieving state-of-the-art performance at low cost.

December 2025 - Introducing Bolmo: Byteifying the next generation of language models

Bolmo is a new byte-level model family built by adapting Olmo 3 into a fast, flexible byte-based model with a short additional training run.
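
The byte-level framing is easy to see in plain Python, assuming the standard UTF-8-bytes-as-vocabulary setup (this illustrates the general approach, not Bolmo's specific design):

    # Byte-level models drop the learned subword vocabulary: every string
    # maps losslessly onto a fixed vocabulary of 256 byte values.
    text = "Olmo → Bolmo"                       # non-ASCII round-trips cleanly
    ids = list(text.encode("utf-8"))            # token ids, each in range(256)
    assert all(0 <= i < 256 for i in ids)
    assert bytes(ids).decode("utf-8") == text   # lossless round trip
    print(ids)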

December 2025 - Molmo 2: State-of-the-art video understanding, pointing, and tracking

Molmo 2, a new suite of state-of-the-art vision-language models with open weights, training data, and training code, can analyze videos and multiple images at once.

November 2025 - Olmo 3: Charting a path through the model flow to lead open-source AI

Our new flagship Olmo 3 model family empowers the open source community with not only state-of-the-art open…

November 2025 - DR Tulu: An open, end-to-end training recipe for long-form deep research

We introduce Deep Research Tulu (DR Tulu), an open post-training recipe and framework for long-form deep research agents.

July 2025 - BAR: Modular post-training with mixture-of-experts

BAR is a recipe for post-training language models one capability at a time—train domain experts independently, merge them into a single mixture-of-experts model, and upgrade any expert without impacting the others.
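
A toy sketch of the merge-and-route pattern, assuming simple top-1 routing over independently trained expert FFNs (the names and routing here are illustrative, not the BAR recipe itself):

    import torch
    import torch.nn as nn

    class MergedMoE(nn.Module):
        """Independently trained expert FFNs merged behind a shared router."""
        def __init__(self, experts, dim):
            super().__init__()
            self.experts = nn.ModuleList(experts)   # e.g. [math_ffn, code_ffn, ...]
            self.router = nn.Linear(dim, len(experts))

        def forward(self, x):                        # x: (tokens, dim)
            choice = self.router(x).argmax(dim=-1)   # top-1 expert per token
            out = torch.empty_like(x)
            for i, expert in enumerate(self.experts):
                mask = choice == i
                if mask.any():
                    out[mask] = expert(x[mask])
            return out

        def swap_expert(self, i, new_expert):
            # Upgrade one capability without touching the other experts.
            self.experts[i] = new_expert

    dim = 64
    experts = [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
               for _ in range(3)]
    moe = MergedMoE(experts, dim)
    y = moe(torch.randn(10, dim))

Because each expert is trained in isolation, swap_expert replaces a single capability without retraining or disturbing the rest.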

Explore NSF OMAI artifacts and resources

The team

Noah Smith

Principal Investigator
Ai2

Samuel Carton

Co-Principal Investigator
University of New Hampshire

Sarah Dreier

Co-Principal Investigator
University of New Mexico

Hannaneh Hajishirzi

Co-Principal Investigator
University of Washington

Travis Mandel

Co-Principal Investigator
University of Hawai'i

Taira Anderson

Project Manager
Ai2

Danielle May

Operations Manager
Ai2

NSF OMAI is supported by the OMAI Scientific and Technical Advisory Boards

Get in touch at omai@allenai.org

Subscribe to receive monthly updates about the latest NSF OMAI and Ai2 news.

Disclaimer & Acknowledgment

This material is based upon work supported by the U.S. National Science Foundation under Award No. 2413244. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.