Policy Model

Control your robot with Runway's General World Model. GWM-1 predicts actions from multimodal sensory input (cameras, proprioception, and additional modalities specific to your setup) and is fine-tuned for your hardware, environment, and tasks. Every deployment is custom-scoped with the Runway team.

How it works

From your data to a deployed policy.

Fine-tune on your data

Work with Runway to fine-tune GWM-1 on your robot's demonstration data — specific to your hardware, grippers, cameras, and environment.
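
As a rough sketch (every field name below is hypothetical; the actual schema is agreed with the Runway team during scoping), a demonstration episode is a time-aligned log of what the robot saw, its state, and the actions it took:

# Hypothetical per-timestep demonstration record. Field names and
# shapes are illustrative, not a documented Runway schema.
demo_step = {
    "timestamp": 0.033,                # seconds since episode start
    "base_view": "frames/000001.jpg",  # camera frame captured this step
    "proprio": [0.00, -0.52, 0.00, -1.96, 0.00, 1.44, 0.79, 0.04],
    "action": [0.01, 0.00, -0.02, 0.00, 0.00, 0.00, 0.00, 1.00],
}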

Get a custom policy model

Receive a policy model optimized for your embodiment, task configuration, and control frequency. The proprioceptive state format is defined per-deployment to match your robot's degrees of freedom.
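
For illustration only (the layout, units, and values below are hypothetical; the real format is defined during deployment scoping), a 7-DoF arm with a parallel-jaw gripper might encode its proprioceptive state as a flat vector with one entry per joint plus the gripper:

import numpy as np

# Hypothetical proprioceptive state for a 7-DoF arm with a parallel-jaw
# gripper. Ordering and units are illustrative, not a documented format.
proprio = np.array(
    [0.00, -0.52, 0.00, -1.96, 0.00, 1.44, 0.79,  # joint positions (rad)
     0.04],                                        # gripper width (m)
    dtype=np.float32,
)
assert proprio.shape == (8,)  # one value per degree of freedom, plus gripper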

Deploy via SDK

Integrate with the Python SDK and run inference via cloud API. Feed camera frames and proprioceptive state, get predicted actions back in real time.

SDK preview

A few lines to get started.

Integration is straightforward — initialize a policy session with an observation and task description, then step through frames in a loop to get action predictions.

from PIL import Image
from runway_robotics_sdk import RunwayRobotics

client = RunwayRobotics()

policy = client.policy_model.create(
    base_view=Image.open("data/base_view.jpg"),
    task_description="Pick up the red cup and place it on the shelf",
)

# Run inference loop — feed observations + proprioceptive state, get actions
for _ in range(10):
    obs = capture_camera_frame()
    proprio = get_proprioceptive_state()  # np.ndarray, format varies by embodiment
    result = policy.step(obs, proprio)
    execute_actions(result.actions)  # predicted action chunk
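
The preview leaves three helpers undefined. Here is a minimal sketch of what they might look like, assuming a single PIL camera frame, a hypothetical 8-value proprioceptive vector, and a placeholder control interface; these stubs are illustrative, not part of the SDK:

import numpy as np
from PIL import Image

def capture_camera_frame() -> Image.Image:
    # Replace with your camera driver; the preview assumes a PIL image.
    return Image.open("data/base_view.jpg")

def get_proprioceptive_state() -> np.ndarray:
    # Hypothetical 8-value layout (7 joints + gripper). The real format
    # is fixed per-deployment to match your robot's degrees of freedom.
    return np.zeros(8, dtype=np.float32)

def execute_actions(actions) -> None:
    # Send each action in the predicted chunk to your controller.
    for action in actions:
        print("would execute:", action)  # replace with your control interface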

Every deployment is scoped with our team — from fine-tuning on your data to integration with your specific hardware and environment. Get in touch to discuss your use case.

Get Access