Presenting Multiband Format

Patricio Gonzalez Vivo
August 31, 2021
How Runway is building data-rich architecture that harnesses AI to multiply footage and empower creators.

Our ability to tell stories with video has always been tied to major technical breakthroughs: from the discovery in the early 19th century that sequences of still images could create the illusion of motion, which made the Lumière brothers' first cinematographic projections possible, to the ground-breaking CGI introduced in the early ’90s that set off the explosion of VFX-heavy films. Each of those technological moments has opened new creative routes and changed both our capacity and our understanding of what it means to tell stories with video. Today, we are in the midst of another major technological breakthrough, one that will change the landscape of video production and filmmaking forever.

The democratization of machine learning, neural rendering, and volumetric capture (with affordable LiDAR devices, 3D cameras, and markerless tracking), together with the reshaping of the studio into virtual production, is bringing all sorts of new materials and techniques to the video and filmmaking process.

At Runway, we are enabling that rapid transformation in filmmaking. We’ve developed a unique graphics pipeline that performs multi-model inference on any video type. Practically, this means Runway can automatically infer rich video metadata, like depth maps and optical flow, from any video input. We call this derived data “bands” and the entire architecture that interacts with them a Multi-Band Video System (MBVS). Similar to light stretched into a color spectrum through a prism, a combination of different AI models can infer a wide range of new data bands from a single input video. Runway’s platform converts regular videos into this augmented video format seamlessly and automatically, allowing for new storytelling possibilities.
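To make the idea concrete, you can picture a multi-band clip as a source video bundled with the frame-aligned data that models infer from it. The sketch below is purely illustrative; the class name, band names, and shapes are hypothetical for this post, not our actual implementation.

```python
# Illustrative sketch of a multi-band clip. Names and shapes are hypothetical.
from dataclasses import dataclass, field
from typing import Dict

import numpy as np


@dataclass
class MultiBandClip:
    """A source video plus the data bands inferred from it."""
    rgb: np.ndarray                                   # (frames, height, width, 3)
    bands: Dict[str, np.ndarray] = field(default_factory=dict)

    def add_band(self, name: str, data: np.ndarray) -> None:
        # Every band must stay frame-aligned with the source video.
        assert data.shape[0] == self.rgb.shape[0], "band must match frame count"
        self.bands[name] = data


# A depth model and an optical-flow model each contribute one band,
# "stretching" a single RGB input into a richer spectrum of data.
clip = MultiBandClip(rgb=np.zeros((48, 270, 480, 3), dtype=np.uint8))
clip.add_band("depth", np.zeros((48, 270, 480), dtype=np.float32))
clip.add_band("optical_flow", np.zeros((48, 270, 480, 2), dtype=np.float32))
```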

We are building the MBVS format into the core of all our products. All the components of a regular video editor are augmented to make use of these different bands. For example, you can use depth to drive a bokeh effect or add fog to a scene, or use optical flow to displace a texture or feed a particle system. Other features have challenged us to rethink video tools as a whole. What does it mean to blend layers using optical flow? How can we reimagine clip transitions using depth data? These questions are the drivers that fuel our daily imagination.
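As a toy illustration of how a band can drive an effect, here is a naive depth-weighted blur written against the hypothetical structure sketched above. It shows the idea of an effect reading a band, not how our bokeh effect is actually implemented.

```python
# Illustrative only: blur each pixel more the farther its depth is from the
# focal plane, using the clip's depth band to drive the effect.
import numpy as np
from scipy.ndimage import gaussian_filter


def depth_blur(frame: np.ndarray, depth: np.ndarray,
               focus: float, strength: float = 5.0) -> np.ndarray:
    """frame: (H, W, 3) image; depth: (H, W) band, assumed normalized to [0, 1]."""
    blurred = gaussian_filter(frame.astype(np.float32), sigma=(strength, strength, 0))
    # Per-pixel mix between sharp and blurred, driven by distance from the focal plane.
    weight = np.clip(np.abs(depth - focus), 0.0, 1.0)[..., None]
    return (1.0 - weight) * frame + weight * blurred
```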

Extra data bands mean more complexity, and demand more data storage and computational power. Our tools are constructed from the ground up with this expanded way of handling video in mind. We treat all these bands as a single clip layer for simple, seamless use, while it’s always possible to export them independently for maximum flexibility. All the inferred bands in our system run in parallel, frame by frame, always in sync, always together. This means you can do the work of an entire team in just a few minutes, without any advanced technical knowledge or specialized hardware.
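A minimal sketch of that frame-synchronized view, again with hypothetical names: the RGB frames and every inferred band advance together, so an effect always reads data that belongs to the exact frame it is processing.

```python
# Hypothetical sketch of frame-synchronized iteration over a clip's bands.
from typing import Dict, Iterator, Tuple

import numpy as np


def frames_with_bands(
    rgb: np.ndarray, bands: Dict[str, np.ndarray]
) -> Iterator[Tuple[np.ndarray, Dict[str, np.ndarray]]]:
    for i in range(rgb.shape[0]):
        yield rgb[i], {name: band[i] for name, band in bands.items()}


rgb = np.zeros((48, 270, 480, 3), dtype=np.uint8)
bands = {
    "depth": np.zeros((48, 270, 480), dtype=np.float32),
    "optical_flow": np.zeros((48, 270, 480, 2), dtype=np.float32),
}
for frame, frame_bands in frames_with_bands(rgb, bands):
    pass  # frame, frame_bands["depth"], and frame_bands["optical_flow"] are in sync
```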

At Runway, we are motivated by empowering artists with simple, cutting-edge tools that multiply their time and skills. We believe MBVS is the key that will turn an artist's voice into something that, not long ago, required hours of work from an entire team. We are excited to see what kinds of stories this augmented, data-rich format will bring.
