# Unity Plugin Features
The Daydream Unity plugin brings the full power of StreamDiffusion to your Unity projects.
Transform your camera output in real time using Stable Diffusion models:
- Turn your 3D scene into an anime world
- Apply painterly effects (oil painting, watercolor)
- Create cyberpunk or sci-fi aesthetics
- Generate abstract visualizations
Just enter a prompt and watch your scene transform!
## Available Models
| Model | Speed | Quality | Best For |
|---|---|---|---|
| SD Turbo | Fastest | Good | Real-time streaming |
| SDXL Turbo | Fast | Best | High quality effects |
| Dreamshaper 8 | Medium | Great | Stylized/cartoon |
| Openjourney v4 | Medium | Great | Artistic styles |
Start with SDXL Turbo (stabilityai/sdxl-turbo) for the best balance of quality and speed.
## Prompt Tips
Good prompts make great effects. Here are some examples:
**Style-Based:**
- “anime character, vibrant colors”
- “oil painting, impressionist, brush strokes”
- “watercolor illustration, soft colors”
- “pixel art, retro game style”
**Theme-Based:**
- “cyberpunk, neon lights, futuristic”
- “fantasy, magical, ethereal glow”
- “vintage photograph, sepia tones”
- “minecraft screenshot, blocky voxel terrain”
**Negative Prompts:**
Add things to avoid: “blurry, low quality, distorted, ugly”
## Parameters Reference
| Parameter | Description |
|---|---|
| Prompt | Text describing the desired visual effect |
| Negative Prompt | What to avoid: “blurry, low quality, flat” |
| Model ID | Diffusion model to use (stabilityai/sdxl-turbo, etc.) |
| Resolution | Output size in pixels (384-1024, rounded to a multiple of 64) |
| Guidance Scale | How closely to follow the prompt (0.1-20.0) |
| Delta | Strength of diffusion effect (0.0-1.0) |
| Seed | RNG seed for reproducibility (-1 for random) |
| Inference Steps | Number of diffusion steps (1-100) |
| Step Schedule | Intermediate timesteps to apply (default: [11]) |
| Add Noise | Add noise for next frame (default: true) |
## Prompt Scheduling
Smoothly transition between prompts using weighted entries:
In the Inspector, add entries to the Prompt Schedule array:
- Entry 1: "anime style portrait" - Weight: 0.7
- Entry 2: "oil painting" - Weight: 0.3
This blends 70% anime with 30% oil painting.
Choose between linear and slerp interpolation. With slerp, transitions are smooth and organic.
When using prompt scheduling, the simple Prompt field is ignored. Leave the schedule array empty to use the simple prompt.
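Under the hood, weighted prompt entries are blended in embedding space. The following Python sketch illustrates the two interpolation modes on toy vectors; it is a conceptual illustration, not the plugin's actual API:

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two embedding vectors, element-wise."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def slerp(a, b, t, eps=1e-8):
    """Spherical interpolation: follows the arc between the two vectors,
    which tends to give smoother, more organic transitions."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / max(na * nb, eps)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < eps:               # nearly parallel vectors: lerp is fine
        return lerp(a, b, t)
    s = math.sin(theta)
    return [(math.sin((1 - t) * theta) * x + math.sin(t * theta) * y) / s
            for x, y in zip(a, b)]

# Blend 70% "anime" with 30% "oil painting" (toy 3-D embeddings;
# t is the weight of the second entry)
anime = [1.0, 0.0, 0.0]
oil = [0.0, 1.0, 0.0]
blended = slerp(anime, oil, 0.3)
```

Linear blending can shrink the vector's magnitude when the two embeddings point in different directions; slerp preserves it, which is why slerp transitions look less "washed out" mid-blend.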
## Seed Scheduling
Control randomness while maintaining smooth transitions:
In the Inspector, add entries to the Seed Schedule array:
- Entry 1: Seed 42 - Weight: 0.8
- Entry 2: Seed 123 - Weight: 0.2
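Conceptually, each seed produces a deterministic noise tensor, and the weighted entries are combined into one. A rough Python sketch of that idea (the real pipeline blends latent noise tensors on the GPU; names here are illustrative):

```python
import random

def seeded_noise(seed, n):
    """Deterministic pseudo-noise vector for a given seed."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def blend_noise(entries, n):
    """Normalize the weights and combine per-seed noise into one vector."""
    total = sum(w for _, w in entries)
    out = [0.0] * n
    for seed, w in entries:
        for i, v in enumerate(seeded_noise(seed, n)):
            out[i] += (w / total) * v
    return out

# Entry 1: Seed 42, Weight 0.8; Entry 2: Seed 123, Weight 0.2
latent = blend_noise([(42, 0.8), (123, 0.2)], n=4)
```

Because every seed is deterministic, the same schedule always produces the same blended noise, which is what keeps transitions reproducible.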
## ControlNets
ControlNets preserve structure from your input video while applying AI effects.
### Available ControlNets
| Type | What It Does | When to Use |
|---|---|---|
| Depth | Preserves 3D structure | Faces, scenes with depth |
| Canny | Preserves edges | Detailed outlines, architecture |
| Tile | Preserves textures | Detail preservation |
| OpenPose | Preserves body pose | Full body shots |
| HED | Preserves soft edges | Organic shapes |
### Default Configuration
The plugin ships with three ControlNets pre-configured for SDXL:
| ControlNet | Model | Default Scale |
|---|---|---|
| Depth | xinsir/controlnet-depth-sdxl-1.0 | 0.45 |
| Canny | xinsir/controlnet-canny-sdxl-1.0 | 0.0 (disabled) |
| Tile | xinsir/controlnet-tile-sdxl-1.0 | 0.21 |
### Using ControlNets
Adjust the Conditioning Scale slider for each ControlNet:
- Lower (0.2-0.4): More creative freedom
- Medium (0.4-0.6): Balanced (recommended)
- Higher (0.6-0.8): Strong structure preservation
Each ControlNet also supports Guidance Start/End to control which timesteps it applies to.
Enable only the ControlNets you need. Each one adds some processing overhead.
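The Guidance Start/End window can be pictured as a simple predicate over normalized denoising progress. A sketch, assuming start/end are expressed as fractions of the run (0.0 = first step, 1.0 = last); this is an illustration, not the plugin's actual code:

```python
def controlnet_active(step, total_steps, guidance_start, guidance_end):
    """A ControlNet contributes only while the current denoising progress
    falls inside its [guidance_start, guidance_end] window."""
    progress = step / max(total_steps - 1, 1)
    return guidance_start <= progress <= guidance_end
```

For example, a window of (0.0, 0.5) applies the ControlNet only during the first half of denoising, locking in structure early while leaving the final steps free to refine style.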
### Recommended Combinations
| Use Case | ControlNets | Strength |
|---|---|---|
| 3D scene | Depth | 0.5-0.6 |
| Detailed environments | Depth + Tile | 0.4 each |
| Architecture/interiors | Canny + Depth | 0.3-0.4 each |
## IP Adapter (Style Transfer)
Apply the style of a reference image to your video:
- Enable IP Adapter in the Inspector
- Set a style image URL — this sets the visual style
- Adjust scale (0.0-1.0) — higher = stronger style
- Choose type:
  - regular — General style transfer
  - faceid — Face-specific (SDXL only)
### Weight Types
The IP Adapter supports multiple weight interpolation types:
| Type | Effect |
|---|---|
| linear | Default, even distribution |
| style transfer | Emphasize style over content |
| composition | Preserve composition from reference |
| strong style transfer | Maximum style influence |
| style and composition | Balance both style and composition |
IP Adapter works best with SDXL models. Use the style transfer weight type for the strongest visual style influence.
## Similar Image Filter
To save compute, skip processing when consecutive frames are nearly identical:
| Parameter | Description |
|---|---|
| Enable | Turn the filter on/off |
| Threshold | Similarity cutoff (0-1, default: 0.98) |
| Max Skip Frame | Maximum consecutive frames to skip (default: 10) |
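The skip decision reduces to a small predicate. An illustrative Python sketch (parameter names mirror the table above; not the plugin's actual code):

```python
def should_skip(similarity, threshold=0.98, skipped_so_far=0, max_skip=10):
    """Skip the frame when it is at least `threshold` similar to the
    previous one, unless `max_skip` frames have already been skipped
    in a row (which would freeze the output)."""
    return similarity >= threshold and skipped_so_far < max_skip
```

The `max_skip` cap guarantees the stream keeps refreshing even on a completely static scene.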
## Display Options
| Setting | Description |
|---|---|
| Show Overlay | Show AI output as fullscreen overlay (default: on) |
| Show Original PIP | Show camera feed in bottom-right corner (default: on) |
| PIP Size | Picture-in-picture size ratio (0.15-0.4) |
Set `showOverlay` to false to disable the built-in overlay and render the AI output on any surface using `OutputTexture`.
## Resolution
Start with 512x512 for best performance. You can increase to 768 or 1024 if your connection allows.
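The resolution constraint from the Parameters Reference (384-1024, multiples of 64) amounts to a clamp-and-snap. An illustrative sketch; the plugin's exact rounding rule may differ:

```python
def snap_resolution(value, lo=384, hi=1024, step=64):
    """Clamp to the supported range, then round to the nearest multiple
    of 64, as diffusion latent dimensions require."""
    value = max(lo, min(hi, value))
    return int(round(value / step) * step)
```

So a requested 500 becomes 512, and out-of-range values are pulled back to the 384-1024 window.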
## Network
- Stable internet connection required
- 5+ Mbps upload recommended
- Wired connection preferred over WiFi
## Render Pipeline
The plugin uses a ScreenSpaceOverlay Canvas, which works identically across URP, HDRP, and Built-in pipelines. No extra configuration needed.
## Troubleshooting
### Choppy Output
- Reduce resolution
- Use SD Turbo instead of SDXL
- Disable unused ControlNets
- Check network stability
### Washed Out Colors
- Reduce guidance scale
- Add “vivid colors” to your prompt
- Enable Depth ControlNet at 0.4-0.5
### Too Much Flickering
- Enable Depth ControlNet
- Reduce delta value
- Add “consistent” to your prompt
## Next Steps