POST /v1/streams
Create a new stream
curl --request POST \
  --url https://api.daydream.live/v1/streams \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "pipeline": "streamdiffusion",
  "params": {
    "model_id": "stabilityai/sd-turbo",
    "prompt": "<string>",
    "prompt_interpolation_method": "linear",
    "normalize_prompt_weights": true,
    "normalize_seed_weights": true,
    "negative_prompt": "<string>",
    "guidance_scale": 123,
    "delta": 123,
    "num_inference_steps": 50,
    "t_index_list": [
      0
    ],
    "use_safety_checker": true,
    "width": 704,
    "height": 704,
    "lora_dict": {},
    "use_lcm_lora": true,
    "lcm_lora_id": "<string>",
    "acceleration": "none",
    "use_denoising_batch": true,
    "do_add_noise": true,
    "seed": 0,
    "seed_interpolation_method": "linear",
    "enable_similar_image_filter": true,
    "similar_image_filter_threshold": 0.5,
    "similar_image_filter_max_skip_frame": 4503599627370495,
    "skip_diffusion": true,
    "image_preprocessing": {
      "processors": [
        {
          "type": "blur",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "image_postprocessing": {
      "processors": [
        {
          "type": "blur",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "latent_preprocessing": {
      "processors": [
        {
          "type": "latent_feedback",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "latent_postprocessing": {
      "processors": [
        {
          "type": "latent_feedback",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "controlnets": [
      {
        "model_id": "thibaud/controlnet-sd21-openpose-diffusers",
        "conditioning_scale": 0.5,
        "preprocessor": "blur",
        "enabled": true,
        "preprocessor_params": {},
        "control_guidance_start": 0.5,
        "control_guidance_end": 0.5
      }
    ]
  },
  "name": "<string>",
  "output_rtmp_url": "<string>"
}
'
Example response:

{
  "pipeline": "streamdiffusion",
  "params": {
    "model_id": "stabilityai/sd-turbo",
    "prompt": "<string>",
    "prompt_interpolation_method": "linear",
    "normalize_prompt_weights": true,
    "normalize_seed_weights": true,
    "negative_prompt": "<string>",
    "guidance_scale": 123,
    "delta": 123,
    "num_inference_steps": 50,
    "t_index_list": [
      0
    ],
    "use_safety_checker": true,
    "width": 704,
    "height": 704,
    "lora_dict": {},
    "use_lcm_lora": true,
    "lcm_lora_id": "<string>",
    "acceleration": "none",
    "use_denoising_batch": true,
    "do_add_noise": true,
    "seed": 0,
    "seed_interpolation_method": "linear",
    "enable_similar_image_filter": true,
    "similar_image_filter_threshold": 0.5,
    "similar_image_filter_max_skip_frame": 4503599627370495,
    "skip_diffusion": true,
    "image_preprocessing": {
      "processors": [
        {
          "type": "blur",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "image_postprocessing": {
      "processors": [
        {
          "type": "blur",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "latent_preprocessing": {
      "processors": [
        {
          "type": "latent_feedback",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "latent_postprocessing": {
      "processors": [
        {
          "type": "latent_feedback",
          "enabled": true,
          "params": {}
        }
      ],
      "enabled": true
    },
    "controlnets": [
      {
        "model_id": "thibaud/controlnet-sd21-openpose-diffusers",
        "conditioning_scale": 0.5,
        "preprocessor": "blur",
        "enabled": true,
        "preprocessor_params": {},
        "control_guidance_start": 0.5,
        "control_guidance_end": 0.5
      }
    ]
  },
  "id": "<string>",
  "stream_key": "<string>",
  "created_at": "<string>",
  "output_playback_id": "<string>",
  "name": "<string>",
  "author": "<string>",
  "from_playground": true,
  "gateway_host": "<string>",
  "is_smoke_test": true,
  "whip_url": "<string>",
  "output_stream_url": "<string>"
}
This endpoint creates a new video processing stream using the Daydream StreamDiffusion pipeline. You’ll specify the pipeline type and model configuration in the request body.

Request Body Structure

{
  "pipeline": "streamdiffusion",
  "params": {
    "model_id": "stabilityai/sdxl-turbo",
    // ... additional parameters
  }
}
  • pipeline: The processing pipeline to use. Currently "streamdiffusion" is the only public option.
  • params: Configuration object containing the model and generation parameters.
  • params.model_id: The specific model to use (see table below).

Available Models

model_id                   | Family | ControlNets | IP Adapter | Cached Attention
stabilityai/sd-turbo       | SD2.1  | 6 types     | No         | No
stabilityai/sdxl-turbo     | SDXL   | 3 types     | Yes        | Yes
Lykon/dreamshaper-8        | SD1.5  | 4 types     | Yes        | Yes
prompthero/openjourney-v4  | SD1.5  | 4 types     | Yes        | Yes
Which model should I use?
  • SD2.1 (sd-turbo): Fastest, good for real-time effects
  • SDXL (sdxl-turbo): Highest quality, supports IP Adapter for style transfer
  • SD1.5 (dreamshaper-8, openjourney-v4): Great for stylized/cartoon effects, supports IP Adapter and Cached Attention

Example: Create an SDXL Turbo Stream

curl -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "stabilityai/sdxl-turbo",
      "prompt": "anime character"
    }
  }'

Example: Create a Stream with ControlNet

curl -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "stabilityai/sdxl-turbo",
      "prompt": "oil painting portrait",
      "controlnets": [
        {
          "model_id": "xinsir/controlnet-depth-sdxl-1.0",
          "preprocessor": "depth_tensorrt",
          "conditioning_scale": 0.5,
          "enabled": true
        }
      ]
    }
  }'
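
Example: Create a Dreamshaper (SD1.5) Stream

The SD1.5 models in the table above use the same request shape as the examples above; only model_id changes. A minimal sketch (the prompt value is just an illustrative placeholder):

curl -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "Lykon/dreamshaper-8",
      "prompt": "comic book illustration"
    }
  }'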
For full parameter documentation, see the Parameters section.
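
When creating streams from a script, you will typically want the id, whip_url, and output_playback_id fields from the response. A minimal sketch, assuming jq is installed and DAYDREAM_API_KEY is set in your shell (the model and prompt are illustrative):

# Create a stream and capture the response
RESPONSE=$(curl -s -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "stabilityai/sd-turbo",
      "prompt": "watercolor painting"
    }
  }')

# Pull out the fields documented in the Response section below
STREAM_ID=$(echo "$RESPONSE" | jq -r '.id')
WHIP_URL=$(echo "$RESPONSE" | jq -r '.whip_url')               # WebRTC WHIP ingest URL
PLAYBACK_ID=$(echo "$RESPONSE" | jq -r '.output_playback_id')  # playback ID for the output

echo "Created stream ${STREAM_ID}; send WHIP ingest to ${WHIP_URL}"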

Legacy Support

The pipeline_id field (e.g., "pip_SD15") is deprecated but still supported for backward compatibility. New integrations should use pipeline + params.model_id instead.
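As a rough illustration only (the exact legacy body shape may differ from what is shown here, and the model_id below is not necessarily what "pip_SD15" mapped to), the contrast looks like this:

// Legacy (deprecated): model selected via a pipeline_id field
// (top-level placement is an assumption; check your existing integration)
{
  "pipeline_id": "pip_SD15"
}

// Current: pipeline plus params.model_id
{
  "pipeline": "streamdiffusion",
  "params": {
    "model_id": "Lykon/dreamshaper-8"
  }
}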

Authorizations

Authorization (string, header, required)
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

pipeline (enum<string>, required)
Available options: streamdiffusion

params (SDTurbo · object, required)

name (string)
Human-readable name for the stream

output_rtmp_url (string)
Custom RTMP URL for stream output destination
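
If you want the stream to have a readable name or push its output to your own RTMP endpoint, set these optional top-level fields alongside params. A sketch (the name and RTMP URL are placeholders):

curl -X POST \
  "https://api.daydream.live/v1/streams" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DAYDREAM_API_KEY}" \
  -d '{
    "pipeline": "streamdiffusion",
    "params": {
      "model_id": "stabilityai/sd-turbo",
      "prompt": "vaporwave cityscape"
    },
    "name": "my-first-stream",
    "output_rtmp_url": "rtmp://example.com/live/my-stream-key"
  }'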

Response

Default Response

pipeline (enum<string>, required)
Available options: streamdiffusion

params (SDTurbo · object, required)

id (string, required)
Unique identifier for the stream

stream_key (string, required)
Unique key used for streaming to this endpoint

created_at (string, required)
ISO timestamp when the stream was created

output_playback_id (string, required)
Playback ID for accessing the stream output

name (string, required)
Human-readable name of the stream

author (string, required)
ID of the user who created this stream

from_playground (boolean, required)
Whether this stream was created from the playground interface

gateway_host (string, required)
Gateway server hostname handling this stream

is_smoke_test (boolean, required)
Whether this is a smoke test stream

whip_url (string, required)
WebRTC WHIP URL for stream ingestion

output_stream_url (string)
URL where the processed stream output can be accessed