Introducing Act-One

A new way to generate expressive character performances using simple video inputs.
October 22, 2024
by Runway
At Runway, our mission is to build expressive and controllable tools for artists that can open new avenues for creative expression. Today, we're excited to release Act-One, a new state-of-the-art tool for generating expressive character performances inside Gen-3 Alpha.

Act-One can create compelling animations using video and voice performances as inputs. It represents a significant step forward in using generative models for expressive live action and animated content.





Animating a generated character using nothing more than a simple video of an actor's performance.
Eye-lines, micro expressions, pacing and delivery are all faithfully represented in the final generated output.
Capturing the Essence of a Performance

Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion-capture equipment, multiple footage references, and manual face rigging, among other techniques. The goal is to transpose an actor's performance onto a 3D model suitable for an animation pipeline. The key challenge with traditional approaches lies in preserving the emotion and nuance of the reference footage in the final digital character.

Our approach uses a completely different pipeline, driven directly and solely by an actor's performance and requiring no extra equipment.



Actors' performances captured with a simple single-camera setup can be used to animate generated characters.
Animation

Act-One can be applied to a wide variety of reference images. The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video. This versatility opens up new possibilities for inventive character design and animation.



A simple at-home camera setup captures an actor's driving performance to animate a generated character, with added voice alteration.


Top: Driving Performance. Bottom: Generated Character Animations.

Live Action

The model also excels in producing cinematic and realistic outputs, and is remarkably robust across camera angles while maintaining high-fidelity face animations. This capability allows creators to develop believable characters that deliver genuine emotion and expression, enhancing the viewer's connection to the content.



Top: Driving Performance. Bottom: Generated Character Animations.
New Creative Avenues

We've been exploring how Act-One can allow the generation of multi-turn, expressive dialogue scenes, which were previously challenging to create with generative video models. You can now create narrative content using nothing more than a consumer-grade camera and one actor reading and performing different characters from a script.



A multi-cam dialogue scene edited together using a single actor and camera setup to drive the performances of two unique generated characters.

Driving performance and generated output for Character A.
Driving performance and generated output for Character B.
Safety

As with all our releases, we're committed to responsible development and deployment. We're releasing this new tool with a comprehensive suite of content moderation and safety precautions.

Our Foundations for Safe Generative Media underpin our current and future releases, including Act-One.

Looking Ahead

We're excited to see what forms of creative storytelling Act-One brings to animation and character performance. Act-One is another step forward in our goal of bringing previously complex techniques to a broader range of creators and artists.

We look forward to seeing how artists and storytellers will use Act-One to bring their visions to life in new and exciting ways.

Access to Act-One will begin gradually rolling out to users today and will soon be available to everyone.