Gen-1: The Next Step Forward for Generative AI

Use words and images to generate new videos out of existing ones.
Anastasis Germanidis Feb 2023
Authors
Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, Anastasis Germanidis

No lights. No camera. All action.

Realistically and consistently synthesize new videos by applying the composition and style of an image or text prompt to the structure of your source video. It's like filming something new, without filming anything at all.



Introducing a more expressive, cinematic, and consistent video generation technique.

Bringing the magic back to making movies.

Learn how Gen-1 can turn any video into a compelling piece of footage.

Mode 01: Stylization

Transfer the style of any image or prompt to every frame of your video.

Source Video
Generated Video
Mode 02: Storyboard

Turn mockups into fully stylized and animated renders.

Input Video
Generated Video
Mode 03: Mask

Isolate subjects in your video and modify them with simple text prompts.

Input Video
Generated Video
Mode 04: Render

Turn untextured renders into realistic outputs by applying an input image or prompt.

Input Video
Generated Video
Mode 05: Customization

Unleash the full power of Gen-1 by customizing the model for even higher fidelity results.

Input Video
Generated Video
The New Standard for Video Generation

In user studies, results from Gen-1 were preferred over those of existing methods for image-to-image and video-to-video translation.

73.53%
Preferred over Stable Diffusion 1.5
88.24%
Preferred over Text2Live
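As an aside, both preference figures are consistent with ratings over a shared pool of 34 comparisons (an assumed denominator, not stated here): 25/34 rounds to 73.53% and 30/34 rounds to 88.24%. A quick check of that arithmetic:

```python
# Verify that hypothetical counts of 25/34 and 30/34 reproduce the
# reported preference percentages after rounding to two decimals.
# The denominator 34 is an assumption for illustration only.
def pct(preferred: int, total: int) -> float:
    """Return the preference share as a percentage, rounded to 2 decimals."""
    return round(100 * preferred / total, 2)

print(pct(25, 34))  # 73.53 (vs. Stable Diffusion 1.5)
print(pct(30, 34))  # 88.24 (vs. Text2Live)
```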
A New Era for Motion (and) Pictures

Runway Research is dedicated to building the multimodal AI systems that will enable new forms of creativity. Gen-1 represents another pivotal step forward in this mission.
