
Tell us about BBC Studios and your role within it.
BBC Studios is the commercial arm of the BBC, which means our teams produce content for a range of channels and streamers globally, as well as being a leading distributor and channel operator.
My role is to lead a team that works across applications and engineering, supporting production and business teams to do their best work. AI became a formal part of the job about 12 months ago, when we started to look at opportunities to use AI across our value chain. As a content business, image and video are a key part of that, and particularly in terms of idea development and pre-visualization, we’ve been working with Runway to understand the possibilities and challenges of this technology.
What drew you to Runway initially?
Alongside other toolsets, we have been using Runway for a year or so now, primarily as a tool for development producers to pre-visualize ideas – to convey a concept in a tangible way. AI broadly has become a significant part of how our development teams think.
Outside of pre-visualization, we've also used AI video generation capabilities for things like digital backplates or video effects, taking care to be transparent about our use and to avoid anything misleading.
What was the context for the Morning Live avatar project?
The BBC ran an "AI Unpacked" themed week, using the full breadth of programming to explore what AI means: the good, the bad and the complicated, supporting what is an important discussion of our times. Morning Live is a daily live show, so it made sense to ask: what could we do here that would actually put AI to work rather than just talk about it?
My team was contacted by the production team, who already had a vision for what they wanted to do with an AI avatar, really to check what was possible. We’d first looked at the feasibility of interactive AI avatars, in various forms, about six to eight months earlier, but in reality we found the technology at that time wasn't usable for our needs. I knew that Runway had announced the coming availability of the models, so it made sense to look again to see what could be achieved. I should add, in total we had about a week from idea to delivery – so I wasn’t overly optimistic!
It was decided that alongside general chat, the production team would focus on a gardening segment, as it lends itself to questions from the audience and we have expertise on hand to verify and correct the answers. It also helps that the knowledge base for responses can be quite contained to a specific subject area.

Morning Live is a news show – how did you think about ensuring transparency in any AI interactions?
We always aim to be transparent about our use of AI, and with this piece in particular, being part of a live show, the presenters were able to clearly signpost that this was an AI avatar. Alongside that, the creative decision was taken that the avatar should clearly not be human-like, but we didn't want a standard computer-game-style character either – it was really that brief which shaped everything about the look and feel.
Morning Live draws a large daily audience, so in terms of having a conversation about AI, and the themes surrounding AI, using an avatar to do so really brought the subject to life.
What were the biggest technical challenges going into a live broadcast?
The avatar sections formed just over six minutes of the show – from a technical perspective, latency and integration with broadcast systems were the main challenges. It's a live show, so dead air is not an option, and a live studio with multiple cameras, audio feeds and so on is a complex environment.
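To make that constraint concrete, here is a minimal sketch of one way a latency budget with a fallback could work, so an avatar never leaves silence on air. Everything in it – the generate_avatar_reply function, the 2.5-second budget and the holding lines – is an illustrative assumption, not the actual Morning Live integration.

```python
import asyncio

# Hypothetical latency guard: if a generated reply doesn't arrive within the
# budget, fall back to a pre-approved holding line so presenters are never
# left with dead air. Names and timings are illustrative assumptions only.

LATENCY_BUDGET_S = 2.5
HOLDING_LINES = [
    "Let me think about that one for a moment.",
    "That's a good question - give me a second.",
]

async def generate_avatar_reply(question: str) -> str:
    # Placeholder for whatever model or service produces the avatar's answer.
    await asyncio.sleep(0.5)
    return f"Here's what I'd suggest for: {question}"

async def reply_within_budget(question: str, turn: int) -> str:
    try:
        return await asyncio.wait_for(
            generate_avatar_reply(question), timeout=LATENCY_BUDGET_S
        )
    except asyncio.TimeoutError:
        # Never leave silence on a live broadcast: use a canned line instead.
        return HOLDING_LINES[turn % len(HOLDING_LINES)]

if __name__ == "__main__":
    print(asyncio.run(reply_within_budget("How do I prune roses?", 0)))
```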
The other challenge was simply the time frame. We had very little time to prepare, which actually imposed useful guardrails in some ways. The Runway team was great to work with, helping us iterate quickly on the look, the feel, the response mechanism and the knowledge base. My personal view, once it was done: I was impressed it worked so well in that short space of time.
How did you build out the knowledge base?
We wanted to define a domain carefully so the avatar could give useful, credible responses without going off-script. Gardening turned out to be well suited to that – it feels broad, but it's actually quite bounded in terms of the knowledge it references. And we have experts in that space who can correct and build on the responses as needed.
The interaction model also helped. Because questions came in from viewers, the avatar was responding to actual people asking actual things. That's a more honest test than a scripted demo.
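As a rough illustration of what a bounded domain can look like in practice, the sketch below answers only questions that match a tiny gardening knowledge base and declines everything else. The snippets, the keyword-overlap retrieval and the refusal line are invented for this example and are not the setup used on the show.

```python
import re

# Illustrative only: a toy "bounded domain" lookup. The gardening snippets,
# scoring and refusal message are invented and do not reflect the knowledge
# base actually used for the broadcast.

GARDENING_KB = {
    "pruning roses": "Prune roses in late winter, cutting just above an outward-facing bud.",
    "watering tomatoes": "Water tomatoes deeply and consistently to avoid split fruit.",
    "autumn lawn care": "Scarify and aerate lawns in early autumn, then overseed bare patches.",
}

OUT_OF_SCOPE = "I'm only briefed on gardening today, so I'll leave that one to the presenters."

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Pick the entry whose key shares the most words with the question.
    best_key, best_overlap = None, 0
    for key in GARDENING_KB:
        overlap = len(words & set(key.split()))
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    # Refuse anything the bounded knowledge base can't support.
    return GARDENING_KB[best_key] if best_key else OUT_OF_SCOPE

print(answer("Any tips for pruning my roses?"))  # in scope: answered
print(answer("Who will win the election?"))      # off-topic: declined
```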
Where do you see the technology going from here?
On a personal level I'd love to use AI avatars again. Obviously finding the right format and context is a key challenge, and the real-time nature of the technology lends itself to a live context, but I think there are probably non-live contexts which could be genuinely compelling.
More broadly, what excites me isn't really the replication of things that already exist. What excites me is the next phase of creativity: things you couldn't do with a camera or effects package or any traditional tool. I think we're in the early innings of that with generative AI. Controllability improvements will be key – having granular control within real-time environments will reduce a lot of the trial and error that’s currently baked into using AI.
What advice would you give to other broadcasters and media organizations thinking about adopting AI?
Engineer agility into your planning. That's the honest answer. As technologists we are often hardwired to build architecture and infrastructure pipelines that take months to develop, and some of what you build will be obsolete in six to nine months because the models will have moved on. That's uncomfortable for teams used to long production cycles.
The phrase you hear constantly at events and forums is "this is the worst it'll ever be." That's exciting, but it's also a planning challenge when consistency and standards matter. The interoperability frameworks that exist in film and video (whether that's codecs, resolution, etc.) took years to develop, but they have been invaluable for technical teams architecting solutions. We're not there yet with AI. So my advice: stay engaged, stay curious and don't over-invest in any single pipeline. Build knowing you'll need to rebuild.

