
Gabo Arora is an immersive artist, filmmaker and founder working at the intersection of storytelling and emerging technology. A pioneer of virtual reality and new documentary forms, he is best known for Clouds Over Sidra, one of the first VR films used at scale for humanitarian advocacy, and The Last Goodbye, now in the permanent collection of the Museum of Modern Art.
Arora was the first Creative Director at the United Nations, and later founded Lightshed, a creative technology studio focused on building new formats for storytelling across VR, AI and spatial computing. In this conversation, Gabo discusses his most recent project, “The Great Dictator,” which premiered at SXSW 2026.

Can you tell us more about yourself and your background?
I grew up in Queens, New York, in a working-class immigrant family where a career in the arts felt out of reach. I initially told my parents I would study economics at NYU, but eventually transferred into the film program.
After graduating, I found it difficult to pursue the kinds of stories I cared about, so I moved into international human rights work and became a UN diplomat. Over time, I grew frustrated with the limits of traditional reporting and returned to storytelling through emerging media. That led to Clouds Over Sidra, one of the first VR documentaries, which helped catalyze a new field.
I later founded Lightshed to explore how technologies like VR, AI and spatial computing could be used to create new forms of narrative. Across my work, I’m interested in using technology not for its own sake, but to create meaning and connection.
"The Great Dictator" is a really interesting project. Can you tell us more about the concept?
Most AI-generated films still reproduce the same passive experience of watching a screen. With The Great Dictator, I wanted to explore a more personal and participatory form.
The project invites people to step into historical moments and deliver speeches that shaped the world. It combines performance, archival material and AI to create a short film in which participants see themselves inside history.
At its core, the work is a question: if you see yourself in history, do you feel more connected to it? And if you momentarily inhabit a position of power, does it change how you think about your own role in the present?
Tell us about your creative process – how did you come up with your ideas, and how did Runway’s tools play a role in shaping your workflows?
My ideas usually emerge through conversation and exposure. I spend a lot of time attending events, demoing work and talking with other artists and technologists. The pace of change in AI makes that exchange essential.
Runway’s tools made it possible to move from concept to execution quickly. The workflow involved selecting archival clips, preparing frames in advance and then inserting participants into those scenes in real time using image and video models. The release of the Gen-4.5 video model significantly improved the quality and coherence of the final output.
More broadly, tools like Runway allow for a new kind of iterative, experimental process where the boundaries between prototyping and production are increasingly fluid.
You’re a longtime filmmaker, and really a historian of video. What drove you to use AI, and what did the technology unlock for you when it comes to bringing to life what’s in your head?
I’ve always worked with emerging technologies as a medium for storytelling. AI felt like a natural extension of that trajectory.
What interested me was not just what AI could generate, but what it could enable experientially. In this case, it allowed for a form of storytelling where the audience becomes the subject. That shift opens up new possibilities for engagement, especially with topics like history and civic identity.

How did you work with Runway’s tools to get the specific look you wanted, especially the consistency?
After extensive testing, we developed a workflow that balanced quality and speed.
We began by selecting archival footage and preparing key frames in advance, including creating space for participant insertion. During the live experience, we captured the participant’s image and inserted them into the scene using image models with strong prompt adherence. We then used Runway’s video model to animate the sequence, producing a short film within minutes.
Each clip required tailored prompts and adjustments, and while the animations are not exact recreations, they preserve the emotional and visual logic of the original footage.
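The live pipeline described above can be sketched in broad strokes. To be clear, everything below is a hypothetical illustration: the function names are placeholders standing in for model calls, not Runway's actual API, and the data flow is inferred from the description in this interview.

```python
# Hypothetical sketch of the live experience pipeline described above.
# insert_participant() and animate() are placeholders for an image-model
# call and a video-model call; they are NOT Runway API functions.

from dataclasses import dataclass

@dataclass
class Scene:
    archival_clip: str   # pre-selected archival footage
    key_frame: str       # frame prepared in advance, with space for the participant
    prompt: str          # tailored prompt for this specific clip

def insert_participant(key_frame: str, participant_photo: str, prompt: str) -> str:
    # Placeholder for an image model with strong prompt adherence that
    # composites the captured participant into the prepared key frame.
    return f"composited({key_frame}+{participant_photo})"

def animate(composited_frame: str, archival_clip: str, prompt: str) -> str:
    # Placeholder for a video model that animates the composite,
    # guided by the original archival clip.
    return f"video({composited_frame}, ref={archival_clip})"

def run_experience(scenes: list[Scene], participant_photo: str) -> list[str]:
    # The live loop: for each prepared scene, insert the participant
    # and animate, producing the segments of the final short film.
    films = []
    for s in scenes:
        frame = insert_participant(s.key_frame, participant_photo, s.prompt)
        films.append(animate(frame, s.archival_clip, s.prompt))
    return films
```

The design point the interview emphasizes is that the scene selection, key-frame preparation, and per-clip prompts all happen ahead of time; only the two model calls run during the live experience, which is what keeps the turnaround to minutes.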
You premiered "The Great Dictator" at South by Southwest – what was the reaction like from audiences?
The response at SXSW was strong. The installation ran at capacity, often standing room only. Many participants had emotional reactions, and there was a consistent sense of surprise at seeing themselves placed inside historical moments.
What stood out most was how quickly people grasped the idea and wanted to try multiple speeches. There was a clear sense of connection and ownership over the experience.
How has AI changed your approach and style when it comes to making digital art?
AI has expanded my approach by allowing me to combine formats more fluidly. I’m less interested in fully synthetic work and more in how AI can augment existing forms – film, performance, installation.
The most compelling work, to me, sits in that tension between the real and the generated.

What’s next for you – what other projects do you want to create?
I’m increasingly interested in the unconscious and how it shapes behavior and perception.
AI offers a potential way to externalize aspects of our inner lives, not in a literal or diagnostic sense, but as a creative and reflective tool. I don’t yet know what form that will take, but I see it as a direction worth exploring – using technology not just to represent the world, but to help us better understand ourselves.


