TouchDesigner Features

The Daydream TouchDesigner plugin exposes all the power of StreamDiffusion through an intuitive parameter interface.

Parameters Reference

| Parameter | Description |
| --- | --- |
| Prompt Schedule | Weighted prompts: `[("anime style", 1.0), ("oil painting", 0.5)]` |
| Prompt Interpolation | How to blend between prompts: `linear` or `slerp` |
| Negative Prompt | What to avoid: "blurry, low quality, flat" |
| Seed Schedule | Weighted seeds: `[(42, 1.0), (123, 0.5)]` |
| Seed Interpolation | How to blend between seeds: `linear` or `slerp` |
| Randomize Seeds | Button to set all seeds to random values |
| Guidance | How closely to follow the prompt (1.0-3.0) |
| Delta | Strength of the diffusion effect (0.0-1.0) |
| Steps | Number of inference steps (1-4) |
| ControlNet Scales | Strength of each conditioning type (0.0-1.0) |
| IP Adapter | Enable/disable style transfer |
| IP Adapter Scale | Strength of style influence |
| IP Adapter Type | `regular` or `faceid` |

Prompt Scheduling

The killer feature for live performances: smoothly transition between prompts over time.
```python
# Single prompt
prompt = "cyberpunk cityscape, neon lights"

# Weighted blend of prompts
prompt_schedule = [
    ("anime style portrait", 0.7),
    ("oil painting", 0.3),
]
```
With slerp interpolation, transitions are smooth and organic, which makes them a good fit for VJ sets.
Combine prompt scheduling with TouchDesigner's animation channels (CHOPs) to create evolving visuals that respond to music or time.
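The difference between the two interpolation modes can be sketched in plain NumPy. The vectors below stand in for real prompt embeddings, and the function names are illustrative, not the plugin's API:

```python
import numpy as np

def lerp(a, b, t):
    # Linear blend: simple, but the result shrinks toward the origin
    # mid-transition when a and b point in different directions.
    return (1.0 - t) * a + t * b

def slerp(a, b, t, eps=1e-8):
    # Spherical blend: rotates from a toward b at constant angular
    # velocity and preserves magnitude, which tends to look smoother
    # when crossfading embeddings.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = float(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < eps:
        return lerp(a, b, t)  # nearly parallel: lerp is fine
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

Halfway through a linear blend of two dissimilar embeddings the vector's magnitude dips, which can read as a washed-out frame; slerp avoids that dip.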

Seed Scheduling

Control randomness while maintaining smooth transitions:
```python
# Single seed for consistency
seed = 42

# Weighted blend of seeds
seed_schedule = [
    (42, 0.8),
    (123, 0.2),
]
```
Click Randomize Seeds to generate fresh random values for experimentation.
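Conceptually, a weighted seed schedule mixes the noise fields that each seed would generate on its own. A minimal NumPy sketch of that idea (not StreamDiffusion's actual implementation):

```python
import numpy as np

def blended_noise(seed_schedule, shape):
    # Each seed deterministically produces its own Gaussian noise field;
    # the fields are mixed according to normalized weights.
    total = sum(weight for _, weight in seed_schedule)
    mixed = np.zeros(shape)
    for seed, weight in seed_schedule:
        rng = np.random.default_rng(seed)  # reproducible per-seed stream
        mixed += (weight / total) * rng.standard_normal(shape)
    return mixed
```

Because each seed's stream is deterministic, easing a weight from 0.0 to 1.0 moves the output smoothly between two reproducible noise fields instead of jumping.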

ControlNets

ControlNets preserve structure from your input video. Available types depend on the model:

SDXL Models

| ControlNet | What It Preserves | Best For |
| --- | --- | --- |
| Depth | 3D structure, distance | Faces, scenes with depth |
| Canny | Sharp edges, outlines | Line art, detailed edges |
| Tile | Texture patterns | Upscaling, detail preservation |

SD1.5 Models

| ControlNet | What It Preserves | Best For |
| --- | --- | --- |
| Depth | 3D structure | Faces, objects |
| Canny | Sharp edges | Linework, silhouettes |
| Tile | Texture | Detail preservation |
| TemporalNet | Motion consistency | Reducing flicker |

SD2.1 Models

| ControlNet | What It Preserves | Best For |
| --- | --- | --- |
| OpenPose | Body poses | Dance, movement |
| HED | Soft edges | Organic shapes |
| Canny | Sharp edges | Architecture |
| Depth | Distance | Spatial scenes |
| Color | Color composition | Palette preservation |
| TemporalNet | Motion | Video stability |
Adjust ControlNet scales to balance between following the prompt and preserving input structure. Start with 0.5 and adjust from there.
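One way to picture what the scale parameters do: each ControlNet contributes a residual that is multiplied by its scale before it reaches the denoiser. A toy sketch with floats standing in for the real tensors (names are illustrative, not the plugin's API):

```python
def apply_scales(residuals, scales):
    # Multiply each ControlNet's contribution by its scale; a scale of
    # 0.0 silences that ControlNet entirely.
    return {name: scales.get(name, 0.0) * r for name, r in residuals.items()}

# Starting point suggested above: 0.5 per ControlNet, then adjust.
scaled = apply_scales(
    {"depth": 1.0, "canny": 1.0},  # raw residuals (stand-ins)
    {"depth": 0.6, "canny": 0.3},  # per-ControlNet scales
)
```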
Learn more about ControlNets in the ControlNets Tutorial.

IP Adapter (Style Transfer)

Apply the style of a reference image to your video:
  1. Enable IP Adapter in the parameters
  2. Load a style reference image
  3. Adjust Scale (higher = stronger style influence)
  4. Choose Type:
    • regular - General style transfer
    • faceid - Face-specific (SDXL only)
IP Adapter works best with SDXL and SD1.5 models. SD2.1 doesn’t support it.
Learn more in the IP Adapters Tutorial.
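As a mental model, the steps above amount to setting a handful of parameters. This dict is purely illustrative; the key names and path are hypothetical, not the plugin's actual parameter names:

```python
# Hypothetical parameter snapshot mirroring the steps above.
ip_adapter = {
    "enabled": True,            # step 1: enable IP Adapter
    "image": "style_ref.png",   # step 2: style reference image (example path)
    "scale": 0.7,               # step 3: higher = stronger style influence
    "type": "regular",          # step 4: "regular" or "faceid" (SDXL only)
}
```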

Model Selection

Choose the right model for your use case:
| Model | Speed | Quality | ControlNets | IP Adapter |
| --- | --- | --- | --- | --- |
| SD Turbo | Fastest | Good | 6 types | No |
| SDXL Turbo | Fast | Best | 3 types | Yes |
| Dreamshaper 8 | Medium | Great | 4 types | Yes |
| Openjourney v4 | Medium | Great | 4 types | Yes |

Performance Tips

  1. Resolution: Start with 512x512, increase only if needed
  2. Steps: 2-3 steps is usually enough for real-time
  3. ControlNets: Enable only what you need; each adds latency
  4. Delta: Lower values = more stable, higher = more responsive
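As a rough mental model of why these tips matter, per-frame latency grows with both the step count and the number of active ControlNets. The constants below are invented purely for illustration:

```python
def estimate_frame_ms(steps, active_controlnets, step_ms=12.0, cn_ms=4.0):
    # Toy latency model: every inference step and every enabled
    # ControlNet adds a roughly constant per-frame cost.
    # The millisecond constants are made up, not benchmarks.
    return steps * step_ms + active_controlnets * cn_ms

# Under this model, dropping from 4 steps to 2 roughly halves the
# diffusion cost: fps = 1000.0 / estimate_frame_ms(steps, controlnets)
```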

Example: VJ Setup

Here’s a typical setup for a live VJ performance:
  1. Input: Live camera or video loop
  2. Model: SDXL Turbo for quality, SD Turbo for speed
  3. ControlNets:
    • Depth at 0.6 (preserve structure)
    • Canny at 0.3 (keep edges)
  4. Prompt Schedule: Animate between themes with CHOPs
  5. IP Adapter: Cycle through style images
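The prompt-schedule part of this setup can be sketched as a crossfade whose weights are driven by a 0-to-1 phase, such as an LFO or audio-level CHOP. The function and its easing are illustrative, not the plugin's API:

```python
import math

def prompt_weights(phase, prompts):
    # Crossfade between two prompts, driven by a 0..1 phase. Cosine
    # easing slows the blend near the endpoints so transitions read
    # as organic rather than mechanically linear.
    t = 0.5 - 0.5 * math.cos(math.pi * phase)
    first, second = prompts
    return [(first, 1.0 - t), (second, t)]
```

Feeding the result into the Prompt Schedule parameter each frame yields a continuous sweep between the two themes as the phase channel moves.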

Next Steps