Research principles

AI has limitless potential, but it must be developed safely and responsibly. At Ai2, every researcher is dedicated to upholding our five core principles in everything we do.

We are open first

We believe true openness is fundamental to bringing people together with the right tools to advance AI research. We train our models in the open, providing complete access to data, code, documentation, and all of our research under permissive licenses.

We invest in community

We tackle problems too big for any individual to solve. We believe the best breakthroughs come when the whole community works together. We prioritize collaborations with current and future generations of AI researchers and practitioners from all backgrounds to ensure our solutions meet society’s needs.

Our research is grounded in science

We follow the scientific method to ensure our work is reliable and reproducible by the community. By publishing our research and sharing our methods, data, and results, we open channels for peer review that hold Ai2 to the highest standards of AI research.

We are human-centered

We’re passionate about developing AI solutions that empower people. We’re driven to build ethical, trustworthy solutions with the potential to improve the work and lives of people around the world.

We prioritize sustainability

We build AI in the open to reduce silos and redundancies that are costly to the environment. We’re deeply aware of AI’s carbon impact and are actively working to make models and data centers more efficient.

Open research to safeguard AI development

Our safety work is never done

The safe development of AI is a continuous process; no implementation is simply safe or unsafe. That’s why we’re actively researching safety across many dimensions of model behavior and human-AI interaction.

We’re doing our safety research in the open

Developing safe and responsible AI will take the entire AI community, which is why we’re opening up our safety research pipeline. We’re open-sourcing a broad range of safety tools: datasets along with methods for generating more synthetic data, evaluation benchmarks for detecting and understanding risk, and safeguards you can apply to your own models to make them safer to use.
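To make that concrete, here is a minimal sketch of applying one of those open safeguards to a prompt/response pair. It assumes the WildGuard moderation model Ai2 released on Hugging Face (allenai/wildguard); the classifier prompt wording and decoding settings below are illustrative assumptions, so consult the model card for the exact template.

```python
# Minimal sketch: screening a user prompt and model response with an open
# safeguard model before returning the response to the user.
# Assumptions: the classifier prompt below is illustrative, not the official
# template; check the WildGuard model card for the exact format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/wildguard"  # Ai2's open safeguard; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def screen(user_prompt: str, model_response: str) -> str:
    """Ask the safeguard model whether the exchange is harmful or a refusal."""
    text = (
        "Given a request from a human user and a response from an LLM "
        "assistant, determine whether the request is harmful, whether the "
        "response is a refusal, and whether the response is harmful.\n\n"
        f"Human user:\n{user_prompt}\n\n"
        f"AI assistant:\n{model_response}\n\n"
        "Answers:"
    )
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    # Decode only the newly generated tokens (the classifier's verdict).
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(screen("How do I pick a lock?", "I can't help with that."))
```

In practice, a gate like this sits between the generating model and the user: if the safeguard flags a response as harmful, the application can refuse, regenerate, or escalate instead of returning it.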

AI safety is a research problem

The question of how to build highly capable LLMs that minimize harm to people remains unsolved. We have a research team dedicated to investigating the root causes of harm in LLMs, including how to unlearn harmful behavior, how to use preference data to strengthen safety alignment, and how to use synthetic data and automated techniques to surface risks.