
Overview

This guide walks you through sending video input to our StreamDiffusion pipeline. You will learn how to adjust parameters to create a variety of visual effects, use the live streaming and audio interactivity features, generate real-time visuals, and view the resulting output video. By the end, you will have an effect that transforms a user's webcam feed into an anime character.

API Auth

Get your API key from the Daydream Dashboard. Keep it secure and never commit it to source control.
API usage is currently subsidized for a limited time; we will provide an update on pricing in the future.
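For example, with a Node or Next.js backend you can keep the key in a local .env file that is excluded from version control and read it through process.env, as the examples below do:
# .env (add this file to .gitignore so the key stays out of your repository)
DAYDREAM_API_KEY=your_api_key_here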

Creating Your First App

Building on top of our StreamDiffusion pipeline consists of three parts:
  1. Creating a Stream object (backend)
  2. Sending in video and playing the output (frontend)
  3. Setting StreamDiffusion parameters

Using the SDKs

The easiest way to integrate Daydream is with our SDKs. Here’s a full-stack example using the TypeScript SDK (backend) and Browser SDK (frontend).

Backend: Create a Stream

Install the TypeScript SDK:
npm install @daydreamlive/sdk
Create a stream with your API key:
// server.ts or Next.js Server Action
import { Daydream } from "@daydreamlive/sdk";

const daydream = new Daydream({
  bearer: process.env.DAYDREAM_API_KEY,
});

export async function createStream() {
  const stream = await daydream.streams.create({
    pipeline: "streamdiffusion",
    params: {
      modelId: "stabilityai/sdxl-turbo",
      prompt: "anime character",
    },
  });

  return {
    id: stream.id,
    whipUrl: stream.whipUrl,
    playbackId: stream.outputPlaybackId,
  };
}
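Because the stream is created server-side, your API key never reaches the browser; the frontend only receives the stream ID, WHIP ingest URL, and playback ID.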

Frontend: Broadcast & Play

Install the Browser SDK:
npm install @daydreamlive/browser
Broadcast your webcam and play the AI output:
import { createBroadcast, createPlayer } from "@daydreamlive/browser";

// Get whipUrl from your backend
const { whipUrl } = await createStream();

// Get user's webcam
const stream = await navigator.mediaDevices.getUserMedia({
  video: { width: 512, height: 512 },
});

// Start broadcasting
const broadcast = createBroadcast({ whipUrl, stream });
await broadcast.connect();

// The WHEP URL is available from the WHIP response header 'livepeer-playback-url'
// or access it from broadcast.whepUrl after connecting
const player = createPlayer(broadcast.whepUrl);
await player.connect();
player.attachTo(document.querySelector("video#output"));
The WHEP playback URL is returned in the WHIP response header livepeer-playback-url. The Browser SDK automatically captures this for you in broadcast.whepUrl.
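If you are not using the Browser SDK, a minimal sketch of reading that header from a raw WHIP request looks like the following (the offer object is assumed to come from your own RTCPeerConnection setup, which is omitted here):
// Sketch: POST the SDP offer to the WHIP endpoint, then read the WHEP
// playback URL from the 'livepeer-playback-url' response header.
const response = await fetch(whipUrl, {
  method: "POST",
  headers: { "Content-Type": "application/sdp" },
  body: offer.sdp, // SDP offer from your RTCPeerConnection (assumed)
});
const whepUrl = response.headers.get("livepeer-playback-url");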
For React apps, see the React Hooks guide for a complete example with useBroadcast and usePlayer.

Alternative: Using cURL + OBS

If you prefer to use cURL and OBS instead of the SDKs, here’s how:

1. Create a Stream

Available models:
  • stabilityai/sdxl-turbo - SDXL, high quality (recommended)
  • stabilityai/sd-turbo - SD2.1, fastest
  • Lykon/dreamshaper-8 - SD1.5, great for stylized effects
  • prompthero/openjourney-v4 - SD1.5, artistic style
DAYDREAM_API_KEY="<YOUR_API_KEY>"

curl -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "stabilityai/sdxl-turbo",
      "prompt": "anime character"
    }
  }'
Response:
{
  "id": "str_gERGnGZE4331XBxW",
  "output_playback_id": "0d1crgzijlcsxpw4",
  "whip_url": "https://ai.livepeer.com/live/video-to-video/stk_abc123/whip"
}

2. Stream with OBS

  1. Install OBS
  2. Go to Settings → Stream
  3. Set Service to WHIP and paste the whip_url
  4. Add a video source and click Start Streaming
  5. Watch at: https://lvpr.tv/?v=<output_playback_id>
Streaming into Daydream via OBS
Alternatively, use the Daydream OBS Plugin for built-in AI effects without needing to create streams manually.

Update Parameters

Change the prompt or other settings in real-time:
await daydream.streams.update(streamId, {
  pipeline: "streamdiffusion",
  params: {
    modelId: "stabilityai/sdxl-turbo",
    prompt: "cyberpunk portrait, neon lights",
    guidanceScale: 1.2,
  },
});
You only need to include the parameters you want to change.
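For example, a prompt-only update is enough to restyle the stream on the fly (sketch; streamId is the id returned by streams.create):
// Only the prompt is sent; every other parameter keeps its current value.
await daydream.streams.update(streamId, {
  pipeline: "streamdiffusion",
  params: {
    prompt: "watercolor portrait, soft pastel colors",
  },
});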

Add ControlNets

ControlNets preserve structure from your input video:
{
  "pipeline": "streamdiffusion",
  "params": {
    "model_id": "stabilityai/sdxl-turbo",
    "prompt": "oil painting portrait",
    "controlnets": [
      {
        "enabled": true,
        "model_id": "xinsir/controlnet-depth-sdxl-1.0",
        "preprocessor": "depth_tensorrt",
        "conditioning_scale": 0.5
      }
    ]
  }
}
Set conditioning_scale to 0 to disable a ControlNet without triggering a pipeline reload.
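With the TypeScript SDK, toggling a ControlNet off mid-stream might look like the sketch below (camelCase field names are assumed here, matching the SDK's other parameters):
// conditioningScale: 0 mutes the ControlNet's influence while keeping the
// same model loaded, so the pipeline does not reload.
await daydream.streams.update(streamId, {
  pipeline: "streamdiffusion",
  params: {
    controlnets: [
      {
        enabled: true,
        modelId: "xinsir/controlnet-depth-sdxl-1.0",
        preprocessor: "depth_tensorrt",
        conditioningScale: 0,
      },
    ],
  },
});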

Available ControlNets

SDXL Models (stabilityai/sdxl-turbo):
  • xinsir/controlnet-depth-sdxl-1.0 - Depth guidance
  • xinsir/controlnet-canny-sdxl-1.0 - Edge detection
  • xinsir/controlnet-tile-sdxl-1.0 - Texture preservation
SD1.5 Models (Lykon/dreamshaper-8, prompthero/openjourney-v4):
  • lllyasviel/control_v11f1p_sd15_depth - Depth
  • lllyasviel/control_v11f1e_sd15_tile - Tile
  • lllyasviel/control_v11p_sd15_canny - Canny edges
SD2.1 Models (stabilityai/sd-turbo):
  • thibaud/controlnet-sd21-depth-diffusers - Depth
  • thibaud/controlnet-sd21-canny-diffusers - Canny edges
  • thibaud/controlnet-sd21-openpose-diffusers - Body poses
  • thibaud/controlnet-sd21-hed-diffusers - Soft edges
  • thibaud/controlnet-sd21-color-diffusers - Color composition
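Because controlnets is an array, you can combine several entries to layer different kinds of guidance. The sketch below pairs the SDXL depth and canny models with stabilityai/sdxl-turbo; the conditioning scales are illustrative starting points, and the canny preprocessor name is an assumption (check the parameters reference for the exact value):
// Depth preserves overall structure while Canny keeps sharp outlines.
const stream = await daydream.streams.create({
  pipeline: "streamdiffusion",
  params: {
    modelId: "stabilityai/sdxl-turbo",
    prompt: "anime character",
    controlnets: [
      {
        enabled: true,
        modelId: "xinsir/controlnet-depth-sdxl-1.0",
        preprocessor: "depth_tensorrt",
        conditioningScale: 0.5,
      },
      {
        enabled: true,
        modelId: "xinsir/controlnet-canny-sdxl-1.0",
        preprocessor: "canny", // assumed preprocessor name
        conditioningScale: 0.3,
      },
    ],
  },
});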

What’s Next?