Get Started with Scope
Daydream Scope is an open-source tool for running and customizing real-time, interactive generative AI pipelines and models. Follow these steps to install Scope and create your first generation.
Supported Pipelines
Scope supports five autoregressive video diffusion models:
StreamDiffusion V2
Real-time video generation with streaming capabilities for immediate visual feedback
LongLive
Extended generation capabilities for longer video sequences with consistent quality
Krea Realtime
14B model for higher quality generation (requires ≥32GB VRAM)
RewardForcing
Trained with Rewarded Distribution Matching for improved output quality
MemFlow
Memory bank architecture for better long-context consistency
Prerequisites
For Desktop App or Local Installation:
- NVIDIA GPU with ≥24GB VRAM (RTX 4090/5090 or similar)
- CUDA 12.8+ drivers
- Windows or Linux
For Cloud (RunPod):
- RunPod account with credits
- Similar GPU requirements apply to your instance selection
Full System Requirements
View detailed hardware specs, pipeline-specific VRAM needs, and software dependencies
Krea Realtime requires ≥32GB VRAM (≥40GB recommended for higher resolutions).
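Before installing, you can confirm your GPU and driver meet these requirements. A quick check with nvidia-smi (assumes NVIDIA drivers are already installed):

```shell
# Show GPU model and total VRAM (should report >= 24 GB, e.g. "24576 MiB")
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
# The banner of plain nvidia-smi also reports the driver's supported
# CUDA version, which should be 12.8 or newer
nvidia-smi | grep "CUDA Version"
```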
Step 1: Install Scope
Choose your installation method:
- Desktop App
- Local Install
- Cloud (RunPod)
The Daydream Scope desktop app is an Electron-based application that provides the simplest way to get Scope running on your Windows machine.
1
Download the installer
Visit the Daydream Scope releases page on GitHub.
Select the latest release (or the version you want to install).
2
Find the Windows installer
Expand the Assets section at the bottom of the release.
Download the file ending in .exe.
3
Install the application
Run the downloaded .exe file and follow the standard Windows installation prompts.
4
Launch Scope
Once installed, launch Daydream Scope from your Start menu or desktop shortcut.
Step 2: Your First Generation
Once Scope is running, open the interface at localhost:8000 (or your RunPod URL).
What You’ll See
- Default mode: Video mode with a looping cat test video
- Default prompt: “a dog walking in grass”
- Expected speed: ~8 FPS (varies by hardware)
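If the page doesn't load, you can quickly confirm the server is actually listening before digging further (a minimal check, assuming the default port 8000):

```shell
# Ask the Scope server for its HTTP response headers;
# a 200-series status line means the UI is being served
curl -sI http://localhost:8000 | head -n 1
```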
Try Updating the Prompt
Change the prompt in real time and watch the video transform:
- “a cow walking in grass”
- “a dragon flying through clouds”
- “a robot walking on mars”
Try Community Examples
Want to see what’s possible? Import timeline files from these community projects:
Factory Fire to Rain
A dramatic real-time transition from industrial calm to explosion to storm — showcasing narrative arcs with LongLive
Urban Battlefield
AAA game-style cinematic of a soldier in a war zone — powered by Krea Realtime 14B
Origami Christmas Vibes
A cozy paper-craft holiday scene with gentle evolution — demonstrating atmosphere and stability with LongLive
Success! You just generated your first real-time AI video.
Step 3: Connect, Share & Contribute
Community Hub
Browse creations, download timelines, and share your work
Join Discord
Get help, share creations, and connect in our #scope channel
Contribute on GitHub
Report issues, suggest features, or contribute code
Troubleshooting
CUDA version mismatch
Run nvidia-smi and verify the CUDA version is ≥ 12.8. Update your NVIDIA drivers if needed.
Build fails or dependencies won't install
- Ensure UV, Node.js, and npm are properly installed
- Try clearing the cache:
uv cache clean && uv run build
Python.h: No such file or directory
Install the Python development package:
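The exact package name depends on your distribution; for example, on Debian/Ubuntu it is typically python3-dev:

```shell
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y python3-dev
# Fedora/RHEL equivalent: sudo dnf install -y python3-devel
```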
Models won't download
- Check your internet connection
- Verify disk space in ~/.daydream-scope/models
- Model downloads can be large — be patient on first run
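For example, you can check both free space and current model usage from a shell (the cache path below is the one named above; it may not exist before the first download):

```shell
MODEL_DIR="$HOME/.daydream-scope/models"
# Free space on the filesystem holding the model cache
df -h "$HOME"
# Size of already-downloaded models, if any
if [ -d "$MODEL_DIR" ]; then
  du -sh "$MODEL_DIR"
else
  echo "No models downloaded yet at $MODEL_DIR"
fi
```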
Can't connect to RunPod UI
- Verify the instance is fully deployed in RunPod dashboard
- Ensure you’re accessing port 8000
- Check that HF_TOKEN is correctly set
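To check from a shell inside the pod, you can print the variable and, optionally, validate the token against the standard Hugging Face whoami endpoint (the second command assumes outbound network access):

```shell
# Is the variable set in this environment?
if [ -n "$HF_TOKEN" ]; then echo "HF_TOKEN is set"; else echo "HF_TOKEN is missing"; fi
# Does Hugging Face accept it? (prints your account info on success)
curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2
```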
WebRTC connection fails
- Verify your HF_TOKEN is valid with read permissions
- Try redeploying the instance with the correct token