Genmo

Building frontier models for video generation.

We are Genmo—a research lab dedicated to unlocking the right brain of artificial general intelligence through state-of-the-art video generation models.

Our Mission

Imagine AI that can simulate anything—possible or impossible.

Our video generation models act as world simulators, driving breakthroughs in embodied AI by enabling infinite explorations in synthetic realities. Video is the ultimate medium for human-AI interaction, seamlessly integrating text, audio, images, and 3D into one unified experience.

Mochi 1, our first public open-source release, is licensed under Apache 2.0 for both individual and commercial use.

Research Team

Our team includes original authors of foundational AI research.

50,000+ total citations: our research has shaped the foundation of modern video generation.

Our team has contributed to seminal works in this area.

Investors & Advisors

NEA
The House Fund
Gold House Ventures
WndrCo
Essence

Ion Stoica

UC Berkeley, Databricks Co-Founder

Pieter Abbeel

UC Berkeley, Deep RL pioneer, Covariant AI

Joseph Gonzalez

UC Berkeley, SysML pioneer

Abhay Parasnis

CEO of Typeface

Amjad Masad

CEO of Replit

Sabrina Hahn

Investor and author

Bonita Stewart

BAG, longtime Google executive

Michele Catasta

VP of AI at Replit

…and many more.

Video is the language of the future. Help us write the script.