Overview
This guide walks you through sending video input to our StreamDiffusion pipeline. You will learn how to adjust parameters to create a variety of visual effects, use live streaming and audio interactivity features, generate real-time visuals, and view the resulting output video.

API Auth
Use of the API is currently subsidized for a limited time; we will provide an update on pricing in the future.
Include your API key in the Authorization header on every request:
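For example (a sketch only: the Bearer scheme is an assumption here, and the endpoint is just a placeholder; the API reference documents the exact header format and routes):

```bash
# Sketch: Bearer scheme and endpoint are placeholders; see the API reference for exact usage.
curl "https://api.daydream.live/v1/streams/<YOUR_STREAM_ID>" \
  -H "Authorization: Bearer <YOUR_API_KEY>"
```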
API Documentation
The full API reference is available in the sidebar and contains more in-depth descriptions of all parameters.

Creating Your First App
Building on top of our StreamDiffusion pipeline consists of three parts:
- Creating a Stream object
- Sending in video and playing the output
- Setting StreamDiffusion parameters
1. Create a Stream
First, we need to create a ‘Stream’ object. This will provide us with an input URL (to send in video) and an output URL (to play back the modified video).

All examples use Bash so you can copy/paste into a terminal.
Create Stream Request
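The full request is documented in the API reference. As a rough sketch, assuming the stream is created with a POST to https://api.daydream.live/v1/streams (the same base path used by the PATCH example later in this guide) and a Bearer token, with only the pipeline_id shown:

```bash
# Sketch only: the POST route, Bearer scheme, and body shape are assumptions;
# see the API reference for the authoritative request.
curl -X POST "https://api.daydream.live/v1/streams" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "pipeline_id": "pip_SD-turbo"
  }'
```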
When specifying the pipeline_id, it needs to match the right model_id. For example, pip_SD-turbo corresponds to stabilityai/sd-turbo.

Create Stream Response
The response includes the stream ID (used in the PATCH URL later in this guide), the whip_url you will stream video into, and the output_playback_id you will use to watch the result; keep these handy for the next steps.
2. Send video and play the output
Now we’ll start sending in video and view the processed output.
- Install OBS.
- Copy the whip_url from the Create Stream response.
- In OBS → Settings → Stream: choose WHIP as the Service and paste the whip_url as the Server. Leave the Bearer Token blank and save the settings.
- Under the Sources section, add a video source for the stream (e.g., Video Capture Device).
- Under the Controls section, select Start Streaming to start the stream.
- Copy the output_playback_id from the Create Stream response and open https://lvpr.tv/?v=<your output_playback_id>.
- You should now see your output video playing.
- Fetch playback URLs from Livepeer Studio’s Playback endpoint: curl "https://livepeer.studio/api/playback/<your playback id>"
- Choose either the HLS or WebRTC endpoint.
- Configure your chosen player with that URL.
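If you want to script that last step, the sketch below filters the playback-info response for an HLS source with jq. The response shape used here (a meta.source array with type and url fields) is an assumption, so inspect the actual response you get back before relying on it:

```bash
# Sketch: the meta.source[].type/url shape is an assumption; check the real response first.
curl -s "https://livepeer.studio/api/playback/<your playback id>" \
  | jq -r '.meta.source[] | select(.type | test("mpegurl")) | .url'
```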
3. Set StreamDiffusion parameters
Send a PATCH to https://api.daydream.live/v1/streams/<YOUR_STREAM_ID> to control the effect. You only need to include the parameters you want to change.
Example PATCH Request
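A sketch of such a request. The Bearer scheme and the exact body shape are assumptions (see the API reference for the authoritative format); the parameter names come from the table below:

```bash
# Sketch: auth scheme and body shape are assumptions; see the API reference.
# Only the parameters included in the body are changed.
curl -X PATCH "https://api.daydream.live/v1/streams/<YOUR_STREAM_ID>" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "an impressionist oil painting, vivid colors",
    "negative_prompt": "blurry, low quality, flat",
    "seed": 42
  }'
```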
IP Adapter Availability: IP adapters for style conditioning are only available for SD1.5 (prompthero/openjourney-v4), SDXL (stabilityai/sdxl-turbo), and SDXL-faceid models. They are not supported by SD2.1 models like stabilityai/sd-turbo. Base64 for IP Adapter image input is also supported.

Parameter change examples
Experiment with any of the parameters. A few useful ones:

| Parameter | Description |
|---|---|
| prompt | Guides the model toward a desired visual style or subject. |
| negative_prompt | Tells the model what not to produce (e.g., discourages low quality, flat, blurry results). |
| num_inference_steps | Higher values improve quality at the cost of speed/FPS. |
| seed | Ensures reproducibility across runs. Change it to introduce variation. |
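For instance, to introduce variation while leaving everything else unchanged, you could send only a new seed (same caveats as the sketch above about the exact body shape):

```bash
# Sketch: only the seed is included, so no other parameters change.
curl -X PATCH "https://api.daydream.live/v1/streams/<YOUR_STREAM_ID>" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"seed": 1234}'
```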
ControlNets
ControlNets guide image generation by providing extra structural inputs (poses, edges, depth maps, colors) that help the model interpret the input video. They impact performance differently, so experiment with which ones to enable for your use case. To enable a ControlNet, increase its conditioning_scale; a request sketch follows the model lists below.
SD2.1 (stabilityai/sd-turbo):
- thibaud/controlnet-sd21-openpose-diffusers: Body and hand pose tracking to maintain human poses in the output
- thibaud/controlnet-sd21-hed-diffusers: Soft edge detection preserving smooth edges and contours
- thibaud/controlnet-sd21-canny-diffusers: Sharp edge preservation with crisp outlines and details
- thibaud/controlnet-sd21-depth-diffusers: Preserves spatial depth and 3D structure of objects and faces
- thibaud/controlnet-sd21-color-diffusers: Color composition passthrough to maintain palette and composition

SD1.5 (prompthero/openjourney-v4):
- lllyasviel/control_v11f1p_sd15_depth: Depth-based guidance for spatial structure preservation
- lllyasviel/control_v11f1e_sd15_tile: Tile-based pattern control for texture preservation
- lllyasviel/control_v11p_sd15_canny: Canny edge detection for detailed outline preservation

SDXL (stabilityai/sdxl-turbo):
- xinsir/controlnet-depth-sdxl-1.0: High-resolution depth guidance for SDXL models
- xinsir/controlnet-canny-sdxl-1.0: SDXL-optimized canny edge detection
- xinsir/controlnet-tile-sdxl-1.0: Tile-based control for SDXL texture generation
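As noted above, a ControlNet takes effect once its conditioning_scale is raised. The request below is a sketch: the controlnets field name and per-entry shape (model_id plus conditioning_scale) are assumptions for illustration, so check the API reference for the actual parameter layout.

```bash
# Sketch: the "controlnets" field and entry shape are assumptions; see the API reference.
# Per the note above, a ControlNet is enabled by increasing its conditioning_scale.
curl -X PATCH "https://api.daydream.live/v1/streams/<YOUR_STREAM_ID>" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "controlnets": [
      {
        "model_id": "thibaud/controlnet-sd21-depth-diffusers",
        "conditioning_scale": 0.6
      }
    ]
  }'
```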