Get Started with Scope

Daydream Scope is an open-source tool for running and customizing real-time interactive generative AI pipelines and models. Follow these steps to install Scope and create your first generation.

Supported Pipelines

Scope supports five autoregressive video diffusion models:

StreamDiffusion V2

Real-time video generation with streaming capabilities for immediate visual feedback

LongLive

Extended generation capabilities for longer video sequences with consistent quality

Krea Realtime

14B model for higher quality generation (requires ≥32GB VRAM)

RewardForcing

Trained with Rewarded Distribution Matching for improved output quality

MemFlow

Memory bank architecture for better long-context consistency

Prerequisites

For Desktop App or Local Installation:
  • NVIDIA GPU with ≥24GB VRAM (RTX 4090/5090 or similar)
  • CUDA 12.8+ drivers
  • Windows or Linux
For Cloud (RunPod):
  • RunPod account with credits
  • Similar GPU requirements apply to your instance selection

Full System Requirements

View detailed hardware specs, pipeline-specific VRAM needs, and software dependencies.
Krea Realtime requires ≥32GB VRAM (≥40GB recommended for higher resolutions).
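Before installing, you can confirm from a terminal that your driver meets the CUDA 12.8 minimum. A minimal sketch: it parses the CUDA version that nvidia-smi reports and compares it against 12.8 (the cuda_ok helper is a name chosen here, not part of Scope):

```shell
# Sketch: check the driver's reported CUDA version against the 12.8 minimum.
# cuda_ok is a hypothetical helper, not part of Scope.
cuda_ok() {
  major=${1%%.*}
  minor=${1#*.}
  [ "$major" -gt 12 ] || { [ "$major" -eq 12 ] && [ "$minor" -ge 8 ]; }
}

if command -v nvidia-smi >/dev/null 2>&1; then
  ver=$(nvidia-smi | sed -n 's/.*CUDA Version: *\([0-9.]*\).*/\1/p')
  if cuda_ok "$ver"; then
    echo "CUDA $ver OK"
  else
    echo "CUDA $ver is below 12.8; update your NVIDIA drivers"
  fi
else
  echo "nvidia-smi not found; install NVIDIA drivers first"
fi
```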

Step 1: Install Scope

Choose your installation method:
Best for: Windows users who want the easiest installation experience
The Daydream Scope desktop app is an Electron-based application that provides the simplest way to get Scope running on your Windows machine.
1. Download the installer

Visit the Daydream Scope releases page on GitHub. Select the latest release (or the version you want to install).
2. Find the Windows installer

Expand the Assets section at the bottom of the release. Download the file ending in .exe.
3. Install the application

Run the downloaded .exe file and follow the standard Windows installation prompts.
4. Launch Scope

Once installed, launch Daydream Scope from your Start menu or desktop shortcut.

Step 2: Your First Generation

Once Scope is running, open the interface at localhost:8000 (or your RunPod URL).

What You’ll See

  • Default mode: Video mode with a looping cat test video
  • Default prompt: “a dog walking in grass”
  • Expected speed: ~8 FPS (varies by hardware)
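If the page doesn't load, you can confirm from a terminal that the server is answering. A minimal sketch, assuming Scope's default port 8000 as described above (swap in your RunPod URL for cloud deployments); scope_up is a name chosen here, not Scope's:

```shell
# Sketch: check whether the Scope UI answers on its default port (8000).
# scope_up is a hypothetical helper; replace the URL for RunPod deployments.
scope_up() { curl -fsS --max-time 5 "$1" >/dev/null 2>&1; }

if scope_up http://localhost:8000; then
  echo "Scope UI is reachable"
else
  echo "Scope UI not reachable - is Scope running?"
fi
```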

Try Updating the Prompt

Change the prompt in real-time and watch the video transform:
  • “a cow walking in grass”
  • “a dragon flying through clouds”
  • “a robot walking on mars”
The video transforms based on your prompt while maintaining the structure and motion of the original.

Try Community Examples

Want to see what's possible? Browse community projects: each project page includes downloadable timeline files you can import into Scope to replay and remix.
Success! You just generated your first real-time AI video.

Step 3: Connect, Share & Contribute


Troubleshooting

CUDA version errors
  • Run nvidia-smi and verify the reported CUDA version is ≥ 12.8. Update your NVIDIA drivers if needed.
Build failures
  • Ensure UV, Node.js, and npm are properly installed
  • Try clearing the cache: uv cache clean && uv run build
Missing Python headers during build
Install the Python development package:
# Debian/Ubuntu
sudo apt-get install python3-dev
Model download problems
  • Check your internet connection
  • Verify disk space in ~/.daydream-scope/models
  • Model downloads can be large, so be patient on the first run
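To see both the free space available and how much the downloaded weights currently occupy, something like this works (assuming the default ~/.daydream-scope/models location named above):

```shell
# Sketch: inspect the model cache directory named above.
MODEL_DIR="$HOME/.daydream-scope/models"
mkdir -p "$MODEL_DIR"   # no-op if Scope has already created it
df -h "$MODEL_DIR"      # free space on the containing filesystem
du -sh "$MODEL_DIR"     # total size of downloaded model weights
```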
Can't access Scope on RunPod
  • Verify the instance is fully deployed in the RunPod dashboard
  • Ensure you're accessing port 8000
HF_TOKEN issues
  • Check that HF_TOKEN is correctly set
  • Verify your HF_TOKEN is valid with read permissions
  • Try redeploying the instance with the correct token
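One way to test the token outside Scope is to call Hugging Face's whoami-v2 endpoint directly; this sketch (check_hf_token is a name chosen here, not part of Scope) fails fast if the variable is unset or the token is rejected:

```shell
# Sketch: validate HF_TOKEN against Hugging Face's whoami-v2 endpoint.
# check_hf_token is a hypothetical helper, not part of Scope.
check_hf_token() {
  if [ -z "$HF_TOKEN" ]; then
    echo "HF_TOKEN is not set"
    return 1
  fi
  if curl -fsS --max-time 10 \
       -H "Authorization: Bearer $HF_TOKEN" \
       https://huggingface.co/api/whoami-v2 >/dev/null 2>&1; then
    echo "HF_TOKEN is valid"
  else
    echo "HF_TOKEN was rejected (check read permissions)"
    return 1
  fi
}

check_hf_token || true   # report the result without aborting the shell
```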