Get Started with Scope
Daydream Scope is an open-source tool for running and customizing real-time interactive generative AI pipelines and models. A 10-minute video walkthrough covers the main Scope features and how to get your first generation running.
Follow the steps below to install Scope and create your first generation.
Prerequisites
For Desktop App or Local Installation:
- NVIDIA GPU with ≥24GB VRAM (RTX 4090/5090 or similar)
- CUDA 12.8+ drivers
- Windows or Linux

For Cloud (RunPod):
- RunPod account with credits
- Similar GPU requirements apply to your instance selection
Full System Requirements
View detailed hardware specs, pipeline-specific VRAM needs, and software dependencies
Krea Realtime requires ≥32GB VRAM and uses the fp8 quantized model at that tier (e.g. RTX 5090). For higher resolutions without quantization, ≥40GB VRAM is recommended (e.g. H100, RTX 6000 Pro). A 24GB GPU like the RTX 4090 cannot run Krea Realtime - use LongLive or StreamDiffusion V2 instead.
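The VRAM tiers above can be sketched as a quick shell check. This is a hypothetical helper, not part of Scope - the function name and output strings are made up here, but the thresholds and pipeline names come from this guide:

```shell
# Hypothetical helper (not part of Scope): map total VRAM in GB to the
# pipelines this guide says will fit. Thresholds are taken from the text above.
pipelines_for_vram() {
  vram_gb=$1
  if [ "$vram_gb" -ge 40 ]; then
    echo "All pipelines; Krea Realtime runs without quantization"
  elif [ "$vram_gb" -ge 32 ]; then
    echo "All pipelines; Krea Realtime runs with fp8 quantization"
  elif [ "$vram_gb" -ge 24 ]; then
    echo "StreamDiffusion V2, LongLive, RewardForcing, MemFlow"
  else
    echo "Below minimum: 24GB+ VRAM required"
  fi
}

# On a machine with NVIDIA drivers you could feed in the real value, e.g.:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
pipelines_for_vram 24   # prints: StreamDiffusion V2, LongLive, RewardForcing, MemFlow
```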
Step 1: Install Scope
Choose your installation method:
- Desktop App
- Local Install
- Cloud (RunPod)
The Daydream Scope desktop app is an Electron-based application that provides the simplest way to get Scope running on your Windows machine.
Download Daydream Scope for Windows
Download the latest Windows installer (.exe). This link always points to the most recent release.
Install the application
Run the downloaded .exe file and follow the standard Windows installation prompts.
Looking for a specific version?
Visit the Daydream Scope releases page on GitHub. Select the release you want, expand the Assets section at the bottom, and download the file ending in .exe.
Step 2: Your First Generation
Once Scope is running, open the interface at localhost:8000 (or your RunPod URL).
Text-to-Video
The LongLive pipeline is pre-selected with a prompt describing a 3D animated panda walking through a park. Just hit play - you’ll see the generation running in real-time, frame by frame, with no render queue or progress bar. Stop the generation, try changing the prompt to something completely different, and hit play again:
- “a dragon flying through clouds over a volcano”
- “a robot walking on mars”
- “an astronaut floating through a neon city”
Video-to-Video
Now try switching the input mode from Text to Video. A looping cat test video is loaded by default. Hit play and watch the model transform the video based on your prompt while preserving its structure and motion. Experiment with different prompts to see how the same source video gets reinterpreted:
- “a cow sitting in the grass”
- “a fish sitting in the grass”
- “a dragon sitting in the grass”
You can also use your webcam as a live input, load your own video file, or receive video from other applications via Spout (Windows only).
Explore Community Projects
See what others are creating with Scope:
Realtime Camera Restyle
Live camera feed restyled in real-time using VACE on an iPad
Flower Transformation
A seeded case study exploring the transition from StreamDiffusion to Scope
Browse all community projects
Explore more creations, download timelines, and share your own work on the Daydream Community Hub
Step 3: Next Steps
Now that you’re generating, here are some things to try:
Using LoRAs
Add style adapters to transform your generations - from photorealistic to Pixar with a single file
Using VACE
Guide generation with reference images and control videos for character consistency
Using Spout
Share real-time video with TouchDesigner, Unity, and other Windows applications
Go Deeper
Ready to build programmatically? Scope exposes a powerful API for integration into your own applications.
API Reference
Set up the server, connect via WebRTC, and control generation in real-time
Pipelines
Explore each pipeline’s capabilities, parameters, and hardware requirements
Supported Pipelines
Scope currently ships with five autoregressive video diffusion pipelines: StreamDiffusion V2, LongLive, Krea Realtime, RewardForcing, and MemFlow. Four run on 24GB GPUs, while Krea Realtime needs 32GB+ for its larger 14B model.
Pipelines Overview
Compare all pipelines - features, VRAM requirements, and use cases at a glance
Step 4: Connect, Share & Contribute
Community Hub
Browse creations, download timelines, and share your work
Ask on Discord
Have questions or want to connect with others? Join our friendly community
Contribute on GitHub
Report issues, suggest features, or contribute code
Troubleshooting
CUDA version mismatch
Run nvidia-smi and verify the CUDA version is ≥ 12.8. Update your NVIDIA drivers if needed.
Build fails or dependencies won't install
- Ensure UV, Node.js, and npm are properly installed
- Try clearing the cache:
uv cache clean && uv run build
Python.h: No such file or directory
Install the Python development package:
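The exact package name depends on your distribution; a sketch for the common ones (these are the standard distro packages for Python headers, not anything Scope-specific):

```shell
# Debian/Ubuntu: provides Python.h for building native extensions
sudo apt-get install -y python3-dev

# Fedora/RHEL equivalent:
#   sudo dnf install python3-devel
```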
Models won't download
- Check your internet connection
- Verify disk space in ~/.daydream-scope/models
- Model downloads can be large - be patient on first run
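To check free space and the current size of the model cache, something like the following works (assuming the default models path shown above; the directory won't exist before the first run, so this falls back to your home filesystem):

```shell
# Free space on the filesystem holding the Scope model cache
MODELS_DIR="$HOME/.daydream-scope/models"
if [ -d "$MODELS_DIR" ]; then
  df -h "$MODELS_DIR"
  du -sh "$MODELS_DIR"   # current size of downloaded models
else
  # Cache not created yet - check the home filesystem instead
  df -h "$HOME"
fi
```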
Can't connect to RunPod UI
- Verify the instance is fully deployed in RunPod dashboard
- Ensure you’re accessing port 8000
- Check that HF_TOKEN is correctly set
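HF_TOKEN is just an environment variable; one way to set it for a shell session is shown below (the value here is a placeholder, not a real token - generate your own at huggingface.co/settings/tokens):

```shell
# Placeholder value - replace with your own Hugging Face access token
export HF_TOKEN="hf_your_token_here"

# Quick sanity check that it is set and non-empty
[ -n "$HF_TOKEN" ] && echo "HF_TOKEN is set"
```

On RunPod you would typically set this in the template's environment variables rather than in a shell.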
WebRTC connection fails
- Verify your HF_TOKEN is valid with read permissions
- Try redeploying the instance with the correct token