Gen-2: Generate novel videos
with text, images or video clips
No lights. No camera. All action.
Realistically and consistently synthesize new videos, either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video), or by using nothing but words (Text to Video). It's like filming something new, without filming anything at all.
Bringing the magic back to making movies.
Learn more about the different ways Gen-2 can turn any image, video clip or text prompt into a compelling piece of film.
Mode 01: Text to Video
Synthesize videos in any style you can imagine using nothing but a text prompt. If you can say it, now you can see it.
Mode 02: Text + Image to Video
Generate a video using a driving image and a text prompt.
Mode 03: Image to Video
Generate a video using just a driving image (Variations Mode).
Mode 04: Stylization
Transfer the style of any image or prompt to every frame of your video.
Mode 05: Storyboard
Turn mockups into fully stylized and animated renders.
Mode 06: Mask
Isolate and modify subjects in your video using nothing but a text prompt.
Mode 07: Render
Turn untextured renders into realistic outputs by applying an input image or prompt.
Mode 08: Customization
Unleash the full power of Gen-2 by customizing the model for even higher fidelity results.
The New Standard for Video Generation
Based on user studies, results from Gen-2 are preferred over existing methods for image-to-image and video-to-video translation.
A New Era for Motion (and) Pictures
Runway Research is dedicated to building the multimodal AI systems that will enable new forms of creativity. Gen-2 is another pivotal step forward in this mission.