
Oden Roberts serves as Director of AI VFX at Tool, where he leads the development and integration of AI-driven pipelines across commercial and narrative projects. He has collaborated on projects with NEON, Scott Free Productions and PBS, with work recognized at the Sundance Film Festival and South by Southwest. He has also directed and produced multiple commercials for Apple, Google, Airbnb and leading automotive manufacturers.
Since 2021, his work has focused on applying AI within production, particularly in high-fidelity automotive visualization and digital humans at 4K resolution. He has led AI-driven projects for Amazon and Procter & Gamble, as well as automotive manufacturers, and has collaborated with some of Hollywood’s leading directors to develop practical solutions within the latent space. His work has also leveraged tools like Runway to solve complex 2D and 3D tracking challenges within AI production pipelines. In this conversation, Oden discusses Tool’s custom “Cruise Control” pipeline, developed on Runway.
Can you tell us more about Tool, the types of brands you work with and the work that you do?
Tool is a creative production partner that bridges the gap between agency and production company expertise. We have an in-house AI Studio at the forefront of producing campaign content and automated systems at the intersection of story, design and craft.
With a 30-year history of creating high-end commercial content and 10 years of leveraging AI, we're the team clients look to for guidance on how AI can unlock creative production opportunities and efficiencies. This includes content production as well as building highly specialized workflows and pipelines for creating repeatable content at scale. Supported by a full-time team of AI VFX artists, designers, engineers, developers, producers and product managers, we work with brands across all industries, ranging from AMD to P&G to Amazon (and many others).
You’ve created a custom pipeline called “Cruise Control” – tell us about what it can do.
“Cruise Control” is our custom pipeline for generating production-ready driving footage with real-world physics and true 1:1 product fidelity.
The goal with Cruise Control is to give car manufacturers the ability to create on-demand running footage for both retail and campaign content. Traditionally, that content is handled by different divisions and the visual quality doesn't match across them. We envision Cruise Control unlocking a new way for these teams to work together and deliver high-quality content at scale.
It's trained via video LoRAs on footage of cars in motion, so it understands how a vehicle actually behaves, from weight transfer to motion blur. At the same time, the car itself is held in a strict 1:1 representation. Geometry, proportions, materials and reflections stay locked exactly as designed, with no drift or reinterpretation.
Background scenes, camera angles and car features like color or car racks can be generated in near real time via Cruise Control.
The result is cinematic driving footage where the performance feels real and the product remains perfectly accurate, making it viable for high-end commercial use. So far, we’ve worked on pilots with Honda, Acura, Hyundai and Genesis, and are rolling it out more widely.
How did you come up with the idea, and how did you build it?
We built Cruise Control out of necessity. Dealerships and manufacturers need regional, on-demand footage for new models, and traditional production can’t scale to meet that without sacrificing speed or cost.
The idea was shaped by Tool founder and director, Erich Joiner, who has 30 years of experience shooting cars. Most recently, he had a fun opportunity to collaborate with the team on F1: The Movie with Brad Pitt. His role was to capture the high-speed, on-track race sequences. He knows exactly how cars should behave on camera, and where things fall apart. That made it clear there was a gap between what AI could generate and what automotive clients actually need for true photoreal delivery.
We built a pipeline that bridges that gap. It’s trained on real driving footage through video LoRAs to capture true vehicle physics, while maintaining a strict 1:1 representation of the car. The result is a production-ready system that generates location-specific, physically accurate driving footage that actually holds up in commercial use.
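At a high level, the "video LoRA" approach described here comes down to fine-tuning small low-rank adapter weights on top of a frozen base model, so new behavior (like vehicle motion) is learned without disturbing what the base model already does. The sketch below is a minimal, hypothetical illustration of that idea in plain PyTorch; Tool's Cruise Control pipeline and Runway's training stack are not public, so the class, rank and shapes here are assumptions, not their actual code.

```python
# Hypothetical sketch of the low-rank adapter ("LoRA") idea described above.
# Illustrative only: names, ranks and shapes are assumptions, not Tool's or
# Runway's implementation.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update.

    Only the adapter matrices are trained, so the base model's behavior
    stays fixed while the adapter learns the new behavior from a
    comparatively small dataset.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the original weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)    # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    x = torch.randn(2, 512)
    print(layer(x).shape)  # torch.Size([2, 512])
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable adapter params: {trainable}")  # far fewer than the base layer
```

The practical appeal of this structure is that the frozen base carries the locked product representation, while the lightweight adapter carries the learned motion, which is consistent with the separation Oden describes.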
What role did Runway’s tools and platform play?
We used AI as the backbone of the pipeline, with Runway as the undercarriage.
Runway handled the heavy lifting for tracking and replacement, allowing us to swap environments and reflections while keeping everything locked to the original plate. On top of that, we trained custom video LoRAs to establish a strict 1:1 representation of the car, so its geometry, materials and proportions never drift.
In practice, that means we can change the world around the vehicle while the car itself remains perfectly accurate, giving us both flexibility and true product fidelity in every shot.
You’ve obviously been around creative production for a long time, and Tool was an early adopter of Runway’s tools. How has AI changed the production process for you, especially as the models have improved?
Being early with tools like Runway gave us a front-row seat to how quickly this shifted from experiment to actual production infrastructure.
At the beginning, AI was mostly about speed and exploration. It was great for generating ideas, rough comps and “what if” worlds, but it broke down the moment you needed precision. You couldn’t trust it with a client-facing frame.
As the models improved, that flipped. Now it’s less about replacing production and more about compressing it. We can move from concept to near-final imagery in days instead of weeks, while still maintaining control over things like camera, lighting and product fidelity.
The biggest change is that AI sits inside the pipeline, not outside of it. It’s part of pre-vis, part of production and part of post. We’re not just generating images, we’re designing systems that behave like a shoot. It’s essentially a shift from shortcut to infrastructure.
Have you learned anything from putting this together that you’ll apply to other industries or creative work? What’s next for you?
What we really learned is that the magic isn’t just in generating images, it’s in how deeply you can explore the latent space with control. The more we pushed into it, the more we realized you can get incredibly organic results, but only if you constrain it like a real-world system. Physics, camera rules, product lock – those constraints actually unlock better creativity, not limit it.
That applies way beyond automotive. Any industry that relies on precision plus variation, like fashion, architecture, retail or even narrative storytelling, can benefit from this. You can explore thousands of creative directions, but still land on something that’s usable, consistent and grounded.
What’s next is pushing further into that balance. More control and predictability, without losing the organic feel. Building systems that don’t just generate visuals, but understand how to behave like a production, so whether it’s a car, a character or a full world, it holds up from first frame to final delivery.

