In the wake of the research release of Runway Gen-2, we are excited to announce that we’ve entered into a multi-year strategic research partnership with AWS, our preferred cloud provider, to scale our high-performance computing cluster and leverage their research infrastructure.
Through this collaboration, we have brought all of our model development and training in-house to accelerate both the pace of training and the deployment of new models and products. These efforts are in pursuit of bringing our users best-in-class experiences across our ever-expanding Generative Suite and making professional multimedia creation more accessible.
Runway’s Gen-2, a multimodal AI system that can generate novel videos from text, images, or video clips, was trained on AWS in collaboration with NVIDIA. It is a continuation of our work on multimodal generative models and represents a major improvement over state-of-the-art AI systems for video generation.
"The pioneering generative models Runway is testing, training, and scaling on AWS are setting the standard for the future of multimodal content," said Matt Garman, Senior Vice President, AWS Sales, Marketing, and Global Services. "We are excited to power Runway's innovations and collaborate to forever change how creatives harness AI to realize their vision."
Gen-2 can realistically and consistently synthesize new videos, either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video) or by using nothing but words (Text to Video). It's like filming something new, without filming anything at all. AWS was instrumental in the development and training of this groundbreaking video generation model, and we look forward to continuing to pioneer what’s possible with Generative AI together. We’re looking forward to scaling training and capacity over the course of this partnership.