What is Daydream Scope?
Daydream Scope is an open-source tool for running and customizing real-time interactive generative AI pipelines and models. It’s currently in alpha, so expect some rough edges, but we’re excited to iterate in public with the open-source AI community. Scope enables you to:
- Stream real-time AI-generated video via WebRTC with low latency
- Use an interactive timeline editor to modify generation parameters on the fly
- Work with multi-modal inputs including text prompts, videos, camera feeds, and more
- Experiment with state-of-the-art video diffusion models
Supported Pipelines
Scope currently supports three autoregressive video diffusion models:
StreamDiffusion
Real-time video generation with streaming capabilities for immediate visual feedback
LongLive
Extended generation capabilities for longer video sequences
Krea Realtime
Text-to-video generation with real-time streaming (requires ≥32GB VRAM)
Krea Realtime requirements: NVIDIA GPU with ≥32GB VRAM (≥40GB for higher resolutions). Can run on 32GB with fp8 quantization at lower resolutions.
Choose Your Deployment Path
Pick the option that best fits your setup:
Local Installation
Best performance with NVIDIA RTX 4090/5090 or similar GPU
RunPod Cloud
No GPU required - deploy in minutes using our template
Local Installation
Best for: Users with high-end NVIDIA GPUs who want maximum performance and full control
System Requirements
- OS: Linux or Windows
- GPU: NVIDIA RTX 4090/5090 or similar with ≥24GB VRAM
- Drivers: CUDA 12.8+
Newer GPU generations provide higher FPS throughput and lower latency. Make sure your GPU has at least 24GB of VRAM and supports CUDA 12.8 or higher.
Krea Realtime requirements: If you plan to use the Krea Realtime pipeline, you’ll need ≥32GB VRAM (≥40GB for higher resolutions). You can run it on 32GB with fp8 quantization at lower resolutions.
Installation Steps
1
Check your GPU drivers
First, verify that your NVIDIA drivers are properly installed and support CUDA 12.8 or higher by running nvidia-smi. The output should show your GPU and a CUDA version of at least 12.8.
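A minimal sketch of the check, guarded so it degrades gracefully on machines without an NVIDIA GPU:

```shell
# Print GPU and driver info; the header of the nvidia-smi output also
# reports the supported CUDA version (it should be 12.8 or higher)
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi
else
  echo "nvidia-smi not found: install or update your NVIDIA drivers"
fi
```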
2
Install dependencies
You’ll need the following installed on your system:
- UV (Python package manager used to run the server)
- Node.js and npm
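As a quick sanity check, you can confirm each required tool is on your PATH (install commands vary by platform, so follow each project's official instructions to install anything missing):

```shell
# Report the installed version of each required tool, or flag it as missing
for tool in uv node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
  else
    printf '%s: NOT FOUND\n' "$tool"
  fi
done
```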
3
Clone the repository
Clone the Scope repository to your local machine:
4
Build frontend and install Python dependencies
Run the build command to set up both the frontend and backend. This will install all required Python packages, including Torch and FlashAttention. The first install may take a while as dependencies are downloaded and compiled.
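The build is invoked through UV (this is the same command the troubleshooting section below uses for a clean rebuild):

```shell
# From the repository root: build the frontend and install Python dependencies
uv run build
```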
5
Start the Scope server
Launch the Scope server. On the first run, model weights will download automatically to ~/.daydream-scope/models. This may take some time depending on your internet connection.
6
Access the UI
Once the server is running, open your browser and navigate to http://localhost:8000. You should see the Scope interface ready to use!
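If the page does not load, a quick way to confirm the server is listening (assumes curl is installed):

```shell
# Print the HTTP status code from the local Scope server,
# or a hint if nothing is listening on port 8000
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000 \
  || echo "server not reachable on port 8000"
```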
Cloud Deployment (RunPod)
Best for: Researchers and developers without access to local high-end GPUs
Deployment Steps
1
Access the Daydream Scope template
Click the link below to access our pre-configured RunPod template:
RunPod Template: Deploy Daydream Scope on RunPod
This will take you directly to the RunPod deployment console with our template loaded.
2
Create a HuggingFace token
RunPod deployment requires a HuggingFace token to enable TURN server functionality. This helps establish WebRTC connections in cloud environments with restrictive firewall settings.
- Create a free account at huggingface.co (if you don’t have one)
- Navigate to Settings → Access Tokens
- Click New token and create a token with read permissions
- Copy the token - you’ll need it in the next step
The HuggingFace integration provides 10GB of free streaming per month via Cloudflare TURN servers.
3
Select your GPU
Choose a GPU that meets Scope’s requirements:
- Minimum: ≥24GB VRAM
- Recommended: NVIDIA RTX 4090/5090 or similar
- Drivers: CUDA 12.8+ support
4
Configure environment variables
Now you’ll add your HuggingFace token to the deployment:
- Click “Edit Template” in the RunPod interface
- Find the environment variables section
- Add a new variable named HF_TOKEN
- Paste your HuggingFace token as the value
- Click “Save” to save your changes
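For reference, HF_TOKEN is a plain environment variable; if you were launching the server by hand rather than through the RunPod template UI, the equivalent would be (placeholder value shown):

```shell
# Placeholder value; substitute your real HuggingFace read token
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"
echo "HF_TOKEN is set"
```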
5
Deploy your instance
Click “Deploy On-Demand” to start your RunPod instance. Wait for the deployment to complete; this usually takes a few minutes as the container initializes and downloads the necessary model weights.
6
Access your Scope instance
Once deployment is complete, RunPod will provide you with a URL. Open the app at port 8000. Your URL will look something like:
https://your-instance-id.runpod.io:8000
The Scope interface should now be ready to use!
Using Scope
Now that you have Scope running (either locally or on RunPod), here’s what to expect and how to get started.
Your First Run
When you first open Scope, you’ll see:
- Default mode: Video mode with a looping cat test video
- Default prompt: “a dog walking in grass”
- Expected speed: ~8 FPS (varies depending on hardware)
Try swapping in a different prompt, for example:
- “a cow walking in grass”
- “a dragon flying through clouds”
- “a robot walking on mars”
Key Features
Video Mode
Apply your prompts to static test videos. This is perfect for experimenting and understanding how Scope transforms video content based on your text descriptions.
Camera Mode
Connect a camera to use live camera feeds as input. This enables real-time interactive experiences where you can see AI transformations happening live.
Interactive Timeline Editor
One of Scope’s most powerful features: modify generation parameters over time. You can:
- Replay example generations
- Modify prompts at different points in the timeline
- Steer the generation in different directions
- Import and export timeline files for reproducible workflows
Custom Prompts
Swap in different characters, scenes, styles, or entirely new concepts. Scope supports rich text-based controls for fine-tuning your generations.
Model Parameter Controls
Scope gives you access to various model parameters for fine-tuning generation behavior. Experiment with these to achieve different effects and optimize for your specific use case.
Troubleshooting
Local Installation Issues
CUDA version mismatch
- Run nvidia-smi and verify the CUDA version is ≥ 12.8
- Update your NVIDIA drivers if needed
Build fails or dependencies won't install
- Ensure UV, Node.js, and npm are properly installed
- Try clearing the cache and rebuilding:
uv cache clean && uv run build
Python.h: No such file or directory
This error has been encountered on certain Linux machines when the Python header files are missing.
Solution: On Debian/Ubuntu-based systems, install the python3-dev package. On other Linux distributions, install the equivalent Python development package for your system.
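On Debian/Ubuntu-based systems, that would look like the following (assumes sudo is available):

```shell
# Install the Python development headers that provide Python.h
sudo apt-get update && sudo apt-get install -y python3-dev
```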
Models won't download
- Check your internet connection
- Verify you have sufficient disk space in ~/.daydream-scope/models
- Model downloads can be large, so be patient on first run
RunPod Deployment Issues
Can't connect to the UI
- Verify the instance is fully deployed (check RunPod dashboard)
- Make sure you’re accessing port 8000
- Check that your HF_TOKEN is correctly set in environment variables
Poor streaming performance
- Try selecting a more powerful GPU
- Check your internet connection speed
- The TURN server should help, but network conditions vary
WebRTC connection fails
- Verify your HF_TOKEN is valid and has read permissions
- Check that the token is properly set in the environment variables
- Try redeploying the instance with the correct token
Next Steps
Now that you have Scope running, here’s how to dive deeper and connect with the community:
Join our Discord
Connect with the community, get help, and share your creations in our #scope channel
Contribute on GitHub
Scope is open-source - contribute code, report issues, or suggest features