What is Daydream Scope?

Daydream Scope is an open-source platform for running real-time, interactive generative AI video pipelines on your own hardware or in the cloud. Feed it any source (webcam, video file, screen capture, or a feed from another creative tool) and transform it live with state-of-the-art video diffusion models. There is no render queue, no progress bar, no waiting: generation runs frame by frame at interactive frame rates, and you steer it as it happens with text prompts, reference images, control videos, or live parameter changes. Scope is built for creative technologists, developers, researchers, and AI artists working on live performances, installations, real-time AI video workflows, and experimental generative systems.

Quick Start

Install Scope locally or in the cloud and run your first generation in minutes

What can you do with Scope?

Real-time video generation

Stream AI-generated video via WebRTC at interactive frame rates. Steer the generation live with prompts, controls, and reference images, frame by frame.

Visual graph editor

Build workflows visually with a node-based editor. Connect pipelines, inputs, outputs, and control nodes to design interactive AI video systems without writing code.

Connect to your tools

Send and receive video with Spout (Windows), Syphon (macOS), and NDI (network). Integrate with TouchDesigner, Resolume, Unity, Unreal, and existing live production setups.

Build with nodes

Extend Scope with custom nodes written in Python. Ship them via pip or Git, with auto-generated UI and live hot reload.
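To give a feel for what a custom node might look like, here is a minimal sketch in plain Python. It is purely illustrative: the class name, the `process` method, and the idea of typed parameters driving an auto-generated UI are assumptions, not Scope's actual node API, and the nested-list "frame" stands in for whatever image type the real API uses.

```python
# Hypothetical sketch only -- the real Scope node API may differ.
# Shows the general shape of a frame-processing node whose typed,
# defaulted parameters could be surfaced as auto-generated UI controls.
from dataclasses import dataclass

@dataclass
class BrightnessNode:
    """Hypothetical node that scales pixel brightness frame by frame."""
    gain: float = 1.5  # a typed parameter a UI could expose as a slider

    def process(self, frame):
        # frame: nested list of 0-255 pixel values (stand-in for a tensor)
        return [[min(255, int(px * self.gain)) for px in row] for row in frame]

node = BrightnessNode(gain=2.0)
out = node.process([[10, 200], [128, 0]])
print(out)  # [[20, 255], [255, 0]]
```

Because each parameter carries a type and a default, a host application can introspect the class to build its control panel, and hot reload amounts to re-importing the module and reinstantiating the node.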

Supported pipelines

Scope ships with five autoregressive video diffusion pipelines. Each has different strengths for text-to-video and video-to-video generation; the right choice depends on your hardware and your creative goals.

StreamDiffusion V2

Low-latency real-time generation on 24GB GPUs

LongLive

Extended coherent generation with scene continuity

Krea Realtime

Higher-quality generation on 32GB+ GPUs (14B model)

RewardForcing

Reward-guided control for precise output direction

MemFlow

Memory-bank pipeline for long-form consistency

Compare all pipelines

Detailed capabilities, VRAM requirements, and use cases at a glance

Get involved

GitHub Repository

Contribute code, report issues, or explore the source

Join Discord Community

Connect with the community, get help, and share your creations in our #scope channel

Current status

Scope is currently in alpha. Expect some rough edges as we iterate in public with the open-source AI community. We’re actively developing new features and improving stability.

Next steps

Quick Start

Install Scope and run your first generation

Visual graph editor

Learn how to build AI video workflows with nodes

Guides

Explore guides for LoRAs, VACE, and live integrations

API Reference

Programmatic control via WebRTC and REST