Our Research

Building general-purpose multimodal simulators of the world.

We believe models that use video as their primary input and output modality, supplemented by other modalities such as text and audio, will form the next paradigm of computing.
Research from Runway
September 24, 2025
Autoregressive-to-Diffusion Vision Language Models
by Marianne Arriola, Naveen Venkat, Jonathan Granskog, Anastasis Germanidis
We develop a state-of-the-art diffusion vision language model, Autoregressive-to-Diffusion (A2D), by adapting an existing autoregressive vision language model for parallel diffusion decoding. Our approach makes it easy to unlock the speed-quality trade-off of diffusion language models without training from scratch, by leveraging existing pretrained autoregressive models....
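The parallel diffusion decoding that A2D unlocks can be illustrated with a toy iterative-unmasking loop. This is a conceptual sketch, not the A2D model: the oracle `predict` function, the `TARGET` sequence, and the confidence scores are all hypothetical stand-ins for a real language model's per-position predictions.

```python
import random

MASK = "<m>"
TARGET = ["a", "b", "c", "d", "e", "f"]  # hypothetical "ground truth" the toy model knows

def predict(seq, rng):
    """Toy stand-in for the language model: for every masked position,
    return a token guess together with a confidence score."""
    return {i: (TARGET[i], rng.random())
            for i, t in enumerate(seq) if t == MASK}

def diffusion_decode(length, steps, rng):
    """Decode all masked positions in parallel each step, committing only
    the most confident predictions and re-masking the rest - the basic
    speed-quality knob of diffusion decoding (fewer steps = faster)."""
    seq = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in seq:
        preds = predict(seq, rng)  # all masked positions, in parallel
        ranked = sorted(preds.items(), key=lambda kv: -kv[1][1])
        for i, (tok, _) in ranked[:per_step]:
            seq[i] = tok
    return seq

decoded = diffusion_decode(6, 3, random.Random(0))
```

With `steps=3` the six tokens are filled in two at a time, rather than one per step as in autoregressive decoding; `steps` is the trade-off dial the abstract refers to.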
June 2, 2025
Dual-Process Image Generation
by Grace Luo, Jonathan Granskog, Aleksander Hołyński, Trevor Darrell
Prior methods for controlling image generation are limited in their ability to be taught new tasks. In contrast, vision-language models, or VLMs, can learn tasks in-context and produce the correct outputs for a given input. We propose a dual-process distillation scheme that allows feed-forward image generators to learn new tasks from deliberative VLMs. Our scheme uses a VLM to rate the generated images and backpropagates this gradient to update the weights of the image generator. Our general framework enables a wide variety of new control tasks through the same text-and-image based interface. We showcase a handful of applications of this technique for different types of control signals, such as commonsense inferences and visual prompts. With our method, users can implement multimodal controls for properties such as color palette, line weight, horizon position, and relative depth within a...
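The core mechanic of the dual-process scheme, backpropagating a critic's score gradient into the generator's weights, can be sketched in one dimension. Everything here is a hypothetical stand-in: a one-parameter "generator" and a hand-coded quadratic "critic" replace the feed-forward image generator and the VLM rating described above.

```python
# Hypothetical stand-ins for the paper's components:
# - generate(w): a 1-parameter "generator" (real method: image generator)
# - critic score: -(x - target)^2 (real method: a VLM rating the image)
target = 0.8   # what the critic prefers, e.g. "warmer palette"
w = 0.0        # generator parameter to be updated
lr = 0.1

def generate(w):
    return w   # toy generator: parameter maps directly to the output

def critic_grad(x):
    # d/dx of score(x) = -(x - target)^2  ->  -2 * (x - target)
    return -2.0 * (x - target)

for _ in range(100):
    x = generate(w)
    # Chain rule: d(score)/dw = d(score)/dx * dx/dw, and dx/dw = 1 here,
    # so the critic's gradient flows straight into the generator weight.
    w += lr * critic_grad(x)
```

After the loop the generator's output has moved to what the critic rewards; in the real scheme the same gradient flow teaches the image generator a new control task from the VLM's in-context judgment.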
March 31, 2025
StochasticSplats: Stochastic Rasterization for Sorting-Free 3D Gaussian Splatting
by Shakiba Kheradmand, Delio Vicini, George Kopanas, Dmitry Lagun, Kwang Moo Yi, Mark Matthews, Andrea Tagliasacchi
3D Gaussian splatting (3DGS) is a popular radiance field method, with many application-specific extensions. Most variants rely on the same core algorithm: depth-sorting of Gaussian splats then rasterizing in primitive order. This ensures correct alpha compositing, but can cause rendering artifacts due to built-in approximations. Moreover, for a fixed representation, sorted rendering offers little control over render cost and visual fidelity. For example, and counter-intuitively, rendering a lower-resolution image is not necessarily faster. In this work, we address the above limitations by combining 3D Gaussian splatting with stochastic rasterization. Concretely, we leverage an unbiased Monte Carlo estimator of the volume rendering equation. This removes the need for sorting, and allows for accurate 3D blending of overlapping Gaussians. The number of Monte Carlo samples further imbues 3DG...
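The sorting-free idea can be demonstrated with a minimal stochastic-transparency estimator for one pixel: each fragment survives a sample with probability equal to its alpha, the nearest survivor contributes its color, and averaging many samples matches depth-sorted compositing in expectation. The fragment list below is made up for illustration; this is a sketch of the general Monte Carlo principle, not the paper's rasterizer.

```python
import random

# Toy overlapping translucent "splats" for one pixel: (depth, alpha, color).
fragments = [(2.0, 0.5, 1.0), (1.0, 0.3, 0.0), (3.0, 0.7, 0.5)]

def sorted_composite(frags):
    """Classic front-to-back alpha compositing - needs a depth sort."""
    out, transmittance = 0.0, 1.0
    for _, a, c in sorted(frags):          # sort by depth
        out += transmittance * a * c
        transmittance *= 1.0 - a
    return out

def stochastic_sample(frags, rng):
    """One stochastic sample: each fragment survives with probability
    alpha; the nearest survivor wins. P(fragment i wins) equals its
    sorted-compositing weight, so the estimator is unbiased."""
    survivors = [(d, c) for d, a, c in frags if rng.random() < a]
    return min(survivors)[1] if survivors else 0.0  # 0.0 = background

rng = random.Random(0)
n = 200_000
estimate = sum(stochastic_sample(fragments, rng) for _ in range(n)) / n
exact = sorted_composite(fragments)
```

The sample count `n` is the render-cost dial the abstract mentions: more Monte Carlo samples buy lower variance without ever sorting the primitives.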
We're advancing research in AI systems that can understand and simulate the world and its dynamics.
RNA Sessions
An ongoing series of talks about frontier research in AI and art, hosted by Runway.