Earlier this week, we launched Runway Characters – a real-time video agent API that lets you build fully custom conversational characters from a single image. Characters is built on our recently launched World Model, GWM-1, and represents a new type of real-time interaction with AI. These kinds of immersive simulated experiences are going to reshape the way we experience and engage with the internet. Building a technology this powerful means grappling with how it might be misused.
We want to think through those implications openly, because we believe that building responsibly starts with asking the right questions.
Before diving into risks, it's worth being clear about why we built this and what we think it enables.
Interactive avatars can make digital experiences more human. Text chatbots work, but they don't engage us the way a face does. We're wired to respond to eye contact, facial expressions and vocal tone. An avatar that listens, nods and responds naturally can make a customer support interaction feel less transactional, a tutoring session more engaging or a brand experience more memorable.
There's also real accessibility value here. An avatar can be available 24/7, in any language, with infinite patience. For a student who needs to practice a conversation a hundred times without judgment, or a customer in a time zone far from your support team, that matters.
And for creative applications, Runway Characters opens up possibilities that weren't feasible with pre-rendered video: characters an audience can actually talk to, and that respond in real time.
While these are valuable use cases, the same capabilities that enable them create real risks that we have to take seriously.
Identity: Whose Face, Whose Voice?
The most fundamental question with avatar technology is: who can this avatar look and sound like?
Characters can be generated from a single image. That's powerful for legitimate uses – bringing a historical figure to life for educational purposes, or letting a company build support avatars from its own employees with their consent. But it also means someone could potentially create an avatar that looks and sounds like a person who never agreed to be represented that way.
This raises hard questions about consent and likeness rights. What verification should be required before someone can create an avatar based on a real person? How do we balance ease of use against the risk of non-consensual digital impersonation?
There's also the question of synthetic identity disclosure. When someone interacts with a Character, should they always know it's AI-generated? In some contexts this may be obvious. In others, the line gets blurrier. We believe transparency matters, but the right implementation isn't always straightforward.
Finally, there are questions about representation. Avatar creation tools can inadvertently encourage stereotypes or caricatures. How do we ensure the range of avatars people create reflects human diversity thoughtfully rather than reductively?
Deception and Manipulation
Impersonation is particularly concerning. Unlike a pre-recorded deepfake video, an interactive avatar can respond to questions, adapt to context and overcome objections in real-time. This dramatically expands the potential for fraud and social engineering. A scammer impersonating a grandchild becomes far more convincing when they can actually hold a conversation and respond emotionally. An attacker impersonating a bank official or law enforcement officer becomes more coercive when the interaction feels genuinely live.
Even in legitimate applications, avatars raise questions about trust and manipulation. Humans instinctively trust faces. We read sincerity in eye contact, empathy in expression, attention in a nod. An avatar can perform all of these cues perfectly, every time, regardless of whether the underlying interaction deserves that trust. Does that create ethical obligations about how avatars are used in persuasion, sales or emotional contexts?
There's also the risk of parasocial attachment. An avatar that's always available, always attentive and always responsive could become a substitute for human connection rather than a supplement to it, particularly for lonely or vulnerable users. And when users bring genuine emotional distress to an avatar—disclosing mental health struggles, grief or crisis—the stakes for handling those moments effectively become very real.
New Safety Challenges
Interactive, real-time media generation creates safety challenges that don't exist with pre-rendered content. With a real-time avatar, content is generated in the moment. By the time a problem is detected, the user has already experienced it. This means safety systems need to be predictive, not just reactive, and moderation approaches designed for static content don't translate directly.
There's also the question of two-way abuse. Users may attempt to get avatars to produce harmful content – a challenge familiar from text-based AI systems. But users may also direct abusive, harassing or sexually explicit content at the avatar itself. The avatar isn't necessarily harmed, but the interaction may normalize harmful behaviors, particularly if younger users have access to the product. This is an area where research is ongoing and norms are still emerging.
Our Approach
The current state of the technology means that few people will genuinely believe they are talking to a real person. But as the technology progresses, that will change very quickly. We are trying to think through and address these risks proactively. We don't think we've solved these problems, but here are the safeguards we've put in place at launch:
Content and identity restrictions: We're applying robust filters aligned with our Usage Policy. At launch, we're not permitting avatars featuring children, public figures or IP-protected content without authorization. We're also preventing Characters from providing medical, legal or financial advice, and prohibiting uses that mimic professional therapeutic or counseling interventions. If you believe your likeness has been used without your consent, you can report it here.
Transparency expectations: We encourage—and in some contexts require—clear disclosure that users are interacting with an AI-generated character rather than a human.
Ongoing monitoring: We're committed to monitoring how Characters is actually used, learning from edge cases and misuse attempts, and updating our policies and technical safeguards accordingly. This includes ongoing red teaming to proactively surface vulnerabilities before bad actors do, and suspending users who violate our policies.
We recognize that addressing the challenges of safely deploying new types of AI-powered experiences will require close partnership with others in the industry and with the enterprises that are ultimately deploying these Characters. We are committed to collaborating with these partners thoughtfully, learning from our deployments, engaging with researchers and policymakers, and updating our approach as we learn more.

