
What is Daydream Scope?

Daydream Scope is an open-source tool for running and customizing real-time interactive generative AI pipelines and models. It’s currently in alpha, so expect some rough edges, but we’re excited to iterate in public with the open-source AI community. Scope enables you to:
  • Stream real-time AI-generated video via WebRTC with low latency
  • Use an interactive timeline editor to modify generation parameters on the fly
  • Work with multi-modal inputs including text prompts, videos, camera feeds, and more
  • Experiment with state-of-the-art video diffusion models

Supported Pipelines

Scope currently supports three real-time video diffusion pipelines:

StreamDiffusion

Real-time video generation with streaming capabilities for immediate visual feedback

LongLive

Extended generation capabilities for longer video sequences

Krea Realtime

Text-to-video generation with real-time streaming.
Krea Realtime requirements: an NVIDIA GPU with ≥32GB VRAM (≥40GB for higher resolutions). It can run on 32GB with fp8 quantization at lower resolutions.

Choose Your Deployment Path

Pick the option that best fits your setup:

Local Installation

Best for: Users with high-end NVIDIA GPUs who want maximum performance and full control

System Requirements

  • OS: Linux or Windows
  • GPU: NVIDIA RTX 4090/5090 or similar with ≥24GB VRAM
  • Drivers: CUDA 12.8+
Newer GPU generations provide higher FPS throughput and lower latency. Make sure your GPU has at least 24GB of VRAM and supports CUDA 12.8 or higher.
Krea Realtime requirements: If you plan to use the Krea Realtime pipeline, you’ll need ≥32GB VRAM (≥40GB for higher resolutions). You can run it on 32GB with fp8 quantization at lower resolutions.

Installation Steps

1. Check your GPU drivers

First, verify that your NVIDIA drivers are properly installed and support CUDA 12.8 or higher:
nvidia-smi
The output should show your GPU and a CUDA version of at least 12.8.
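For a more compact check, nvidia-smi can also report just the GPU name, driver version, and total memory (these are standard nvidia-smi query fields):
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv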
2. Install dependencies

You’ll need the following installed on your system:
  • UV (Python package manager used to run the server)
  • Node.js and npm
If you don’t have these installed, visit their respective websites for installation instructions.
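For reference, one common way to install both on a Debian/Ubuntu-based system (installers change, so double-check the official UV and Node.js docs for your platform):
curl -LsSf https://astral.sh/uv/install.sh | sh
sudo apt-get install nodejs npm
Note that distro repositories may ship older Node.js versions; grab a current release from nodejs.org if the build complains.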
3. Clone the repository

Clone the Scope repository to your local machine:
git clone git@github.com:daydreamlive/scope.git
cd scope
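The command above clones over SSH; if you don't have SSH keys set up with GitHub, cloning over HTTPS works just as well:
git clone https://github.com/daydreamlive/scope.git
cd scope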
4. Build the frontend and install Python dependencies

Run the build command to set up both the frontend and backend:
uv run build
This will install all required Python packages including Torch and FlashAttention. The first-time install may take a while as dependencies are downloaded and compiled.
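Once the build finishes, a quick sanity check that Torch was installed and can see your GPU (this assumes the build created the project's Python environment, as described above):
uv run python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
It should print the Torch version followed by True.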
5. Start the Scope server

Launch the Scope server:
uv run daydream-scope
On the first run, model weights will download automatically to ~/.daydream-scope/models. This may take some time depending on your internet connection.
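If you want to watch the download progress, you can check the size of the weights directory from a second terminal:
du -sh ~/.daydream-scope/models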
6. Access the UI

Once the server is running, open your browser and navigate to http://localhost:8000. You should see the Scope interface ready to use!
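You can also confirm the server is responding from a terminal with plain curl (nothing Scope-specific here):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000
A 200 status code means the UI is being served.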

Cloud Deployment (RunPod)

Best for: Researchers and developers without access to local high-end GPUs
RunPod is a third-party cloud GPU service that lets you run Scope without any local hardware requirements. We've created a template to make deployment as simple as possible.
You'll need a RunPod account with credits added: create an account at runpod.io and add funds to get started. Pricing varies based on GPU selection, so check RunPod's pricing page for current rates.

Deployment Steps

1. Access the Daydream Scope template

Click the link below to access our pre-configured RunPod template:
RunPod Template: Deploy Daydream Scope on RunPod
This will take you directly to the RunPod deployment console with our template loaded.
2. Create a HuggingFace token

RunPod deployment requires a HuggingFace token to enable TURN server functionality. This helps establish WebRTC connections in cloud environments with restrictive firewall settings.
  1. Create a free account at huggingface.co (if you don’t have one)
  2. Navigate to Settings → Access Tokens
  3. Click New token and create a token with read permissions
  4. Copy the token - you’ll need it in the next step
The HuggingFace integration provides 10GB of free streaming per month via Cloudflare TURN servers.
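To sanity-check the token before deploying, you can call HuggingFace's whoami endpoint with curl; this is a generic HuggingFace API call, not a Scope-specific step, and the token below is a placeholder:
curl -H "Authorization: Bearer hf_your_token_here" https://huggingface.co/api/whoami-v2
A valid token returns JSON describing your account; an invalid one returns an authentication error.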
3. Select your GPU

Choose a GPU that meets Scope’s requirements:
  • Minimum: ≥24GB VRAM
  • Recommended: NVIDIA RTX 4090/5090 or similar
  • Drivers: CUDA 12.8+ support
Newer GPU generations will give you better FPS and lower latency for a smoother experience.
4. Configure environment variables

Now you’ll add your HuggingFace token to the deployment:
  1. Click “Edit Template” in the RunPod interface
  2. Find the environment variables section
  3. Add a new variable named HF_TOKEN
  4. Paste your HuggingFace token as the value
  5. Click “Save” to save your changes
This token enables the TURN server for reliable WebRTC streaming in cloud environments.
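If you later customize the template or launch the container manually, setting the same variable in a shell before starting the server should be equivalent; this sketch assumes Scope reads HF_TOKEN from the environment, as the template configuration above implies:
export HF_TOKEN=hf_your_token_here
uv run daydream-scope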
5. Deploy your instance

Click “Deploy On-Demand” to start your RunPod instance. Wait for the deployment to complete; this usually takes a few minutes as the container initializes and downloads the necessary model weights.
6. Access your Scope instance

Once deployment is complete, RunPod will provide you with a URL; open the app at port 8000. Your URL will look something like https://your-instance-id.runpod.io:8000. The Scope interface should now be ready to use!

Using Scope

Now that you have Scope running (either locally or on RunPod), here’s what to expect and how to get started.

Your First Run

When you first open Scope, you’ll see:
  • Default mode: Video mode with a looping cat test video
  • Default prompt: “a dog walking in grass”
  • Expected speed: ~8 FPS (varies depending on hardware)
Try updating the prompt in real-time! For example:
  • “a cow walking in grass”
  • “a dragon flying through clouds”
  • “a robot walking on mars”
You should see the video transform based on your prompt while maintaining the structure and motion of the original video.

Key Features

Video Mode

Apply your prompts to static test videos. This is perfect for experimenting and understanding how Scope transforms video content based on your text descriptions.

Camera Mode

Connect a camera to use live camera feeds as input. This enables real-time interactive experiences where you can see AI transformations happening live.

Interactive Timeline Editor

One of Scope's most powerful features: the timeline editor lets you modify generation parameters over time. You can:
  • Replay example generations
  • Modify prompts at different points in the timeline
  • Steer the generation in different directions
  • Import and export timeline files for reproducible workflows

Custom Prompts

Swap in different characters, scenes, styles, or entirely new concepts. Scope supports rich text-based controls for fine-tuning your generations.

Model Parameter Controls

Scope gives you access to various model parameters for fine-tuning generation behavior. Experiment with these to achieve different effects and optimize for your specific use case.

Troubleshooting

Local Installation Issues

Build or startup errors

  • Run nvidia-smi and verify the CUDA version is ≥ 12.8
  • Update your NVIDIA drivers if needed
  • Ensure UV, Node.js, and npm are properly installed
  • Try clearing the cache and rebuilding: uv cache clean && uv run build
Missing Python.h header

This error has been encountered on certain Linux machines when the Python header file (Python.h) is missing at build time. Solution: install the python3-dev package. On Debian/Ubuntu-based systems:
sudo apt-get install python3-dev
On other Linux distributions, install the equivalent Python development package for your system.
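For example, on Fedora/RHEL-based systems the equivalent package is python3-devel:
sudo dnf install python3-devel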
Slow or failed model downloads

  • Check your internet connection
  • Verify you have sufficient disk space for ~/.daydream-scope/models
  • Model downloads can be large, so be patient on first run
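To check free disk space in your home directory, where the models directory lives by default:
df -h ~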

RunPod Deployment Issues

Can't access the UI

  • Verify the instance is fully deployed (check the RunPod dashboard)
  • Make sure you're accessing port 8000
  • Check that your HF_TOKEN is correctly set in the environment variables
Poor performance or laggy streaming

  • Try selecting a more powerful GPU
  • Check your internet connection speed
  • The TURN server should help, but network conditions vary
WebRTC connection failures

  • Verify your HF_TOKEN is valid and has read permissions
  • Check that the token is properly set in the environment variables
  • Try redeploying the instance with the correct token

Next Steps

Now that you have Scope running, dive deeper and connect with the open-source community: explore the different pipelines, experiment with the timeline editor, and see what creative applications you can build with real-time AI video generation!