Developing plugins for Scope
Create your own Scope plugins to add custom pipelines, preprocessors, or postprocessors. This guide walks through building a plugin from scratch with working examples.
Prerequisites
- Python 3.12 or newer
- uv package manager
- Scope installed locally for testing
Project Setup
Create a new directory with the following structure:
```
my-scope-plugin/
├── pyproject.toml
└── my_scope_plugin/
    ├── __init__.py
    ├── plugin.py
    └── pipelines/
        ├── __init__.py
        ├── schema.py
        └── pipeline.py
```
pyproject.toml
```toml
[project]
name = "my-scope-plugin"
version = "0.1.0"
requires-python = ">=3.12"

[project.entry-points."scope"]
my_scope_plugin = "my_scope_plugin.plugin"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```
The `[project.entry-points."scope"]` section registers your plugin with Scope. The key (`my_scope_plugin`) is your plugin name, and the value points to the module containing your hook implementation.
If your plugin needs additional third-party packages, add them to `[project.dependencies]` in `pyproject.toml`; Scope installs them automatically. You don't need to declare packages that Scope already provides (e.g. `torch`, `pydantic`), since they're available from the host environment.
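For example, a plugin that depended on an extra package might declare it like this (the package name and version pin here are purely illustrative):

```toml
[project]
dependencies = [
    "opencv-python>=4.9",
]
```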
plugin.py
```python
from scope.core.plugins.hookspecs import hookimpl


@hookimpl
def register_pipelines(register):
    from .pipelines.pipeline import MyPipeline

    register(MyPipeline)
```
The `register_pipelines` hook is called when Scope loads your plugin. Call `register()` for each pipeline class you want to make available.
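The hook mechanism can be illustrated without Scope at all: Scope invokes your hook with a `register` callable and collects whatever pipeline classes you pass to it. A minimal stand-alone sketch of that pattern (plain Python, not Scope's actual internals):

```python
# Stand-in for the registry Scope builds when loading plugins
registered = []


def register(pipeline_cls):
    """Collect a pipeline class, as Scope's real register callback does."""
    registered.append(pipeline_cls)


class MyPipeline:  # stand-in for a real Pipeline subclass
    pass


# Your hook implementation does the equivalent of:
def register_pipelines(register):
    register(MyPipeline)


register_pipelines(register)
# `registered` now holds MyPipeline, ready for Scope to instantiate on demand
```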
Creating a Text-Only Pipeline
A text-only pipeline generates video without requiring input video. This is the simplest type of pipeline.
Example: Color Generator
This pipeline generates solid color frames based on configurable RGB values.
pipelines/schema.py:
```python
from pydantic import Field

from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults


class ColorGeneratorConfig(BasePipelineConfig):
    """Configuration for the Color Generator pipeline."""

    pipeline_id = "color-generator"
    pipeline_name = "Color Generator"
    pipeline_description = "Generates solid color frames"

    # No prompts needed
    supports_prompts = False

    # Text mode only (no video input required)
    modes = {"text": ModeDefaults(default=True)}

    # Custom parameters: RGB values 0-255
    color_r: int = Field(default=128, ge=0, le=255, description="Red component")
    color_g: int = Field(default=128, ge=0, le=255, description="Green component")
    color_b: int = Field(default=128, ge=0, le=255, description="Blue component")
```
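The `ge`/`le` bounds on each `Field` are enforced by pydantic validation. A stand-alone sketch using plain pydantic (not Scope's `BasePipelineConfig`) shows how out-of-range values are rejected:

```python
from pydantic import BaseModel, Field, ValidationError


class ColorParams(BaseModel):
    # Same bound style as the config above: an int constrained to 0-255
    color_r: int = Field(default=128, ge=0, le=255)


ColorParams(color_r=200)  # accepted

try:
    ColorParams(color_r=300)  # outside le=255
except ValidationError:
    print("rejected: color_r out of range")
```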
pipelines/pipeline.py:
```python
from typing import TYPE_CHECKING

import torch

from scope.core.pipelines.interface import Pipeline

from .schema import ColorGeneratorConfig

if TYPE_CHECKING:
    from scope.core.pipelines.base_schema import BasePipelineConfig


class ColorGeneratorPipeline(Pipeline):
    """Generates solid color frames."""

    @classmethod
    def get_config_class(cls) -> type["BasePipelineConfig"]:
        return ColorGeneratorConfig

    def __init__(
        self,
        height: int = 512,
        width: int = 512,
        **kwargs,
    ):
        self.height = height
        self.width = width

    def __call__(self, **kwargs) -> dict:
        """Generate a solid color frame.

        Returns:
            Dict with "video" key containing a tensor of shape (1, H, W, 3)
            in [0, 1] range.
        """
        # Read runtime parameters from kwargs (with defaults)
        color_r = kwargs.get("color_r", 128)
        color_g = kwargs.get("color_g", 128)
        color_b = kwargs.get("color_b", 128)

        # Create color tensor from current values
        color = torch.tensor([color_r / 255.0, color_g / 255.0, color_b / 255.0])

        # Create a single frame filled with our color
        frame = color.view(1, 1, 1, 3).expand(1, self.height, self.width, 3)

        return {"video": frame.clone()}
```
Key points:
- No `prepare()` method: text-only pipelines don't need to request input frames.
- `modes = {"text": ModeDefaults(default=True)}`: declares that this pipeline only supports text mode.
- `__call__()` returns `{"video": tensor}`: the tensor must be in THWC format with values in the [0, 1] range.
- Runtime parameters are read from `kwargs`: parameters like `color_r` are passed to `__call__()` and should be read with `kwargs.get()`.
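A quick way to convince yourself that the `view(...).expand(...)` trick produces the right layout is to run it stand-alone:

```python
import torch

# Build a 64x64 solid-color frame in THWC layout with values in [0, 1],
# using the same expand trick as the pipeline above.
color = torch.tensor([200 / 255.0, 50 / 255.0, 50 / 255.0])
frame = color.view(1, 1, 1, 3).expand(1, 64, 64, 3).clone()

print(frame.shape)  # (1, 64, 64, 3): T, H, W, C
```

`expand` broadcasts the color without copying memory; the `clone()` materializes a contiguous tensor so downstream code can mutate it safely.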
Creating a Video Input Pipeline
A video input pipeline processes incoming video frames. It must implement `prepare()` to tell Scope how many input frames it needs.
Example: Invert Colors
This pipeline inverts the colors of input video frames.
pipelines/schema.py:
```python
from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults


class InvertConfig(BasePipelineConfig):
    """Configuration for the Invert Colors pipeline."""

    pipeline_id = "invert"
    pipeline_name = "Invert Colors"
    pipeline_description = "Inverts the colors of input video frames"
    supports_prompts = False

    # Video mode only (requires video input)
    modes = {"video": ModeDefaults(default=True)}
```
pipelines/pipeline.py:
```python
from typing import TYPE_CHECKING

import torch

from scope.core.pipelines.interface import Pipeline, Requirements

from .schema import InvertConfig

if TYPE_CHECKING:
    from scope.core.pipelines.base_schema import BasePipelineConfig


class InvertPipeline(Pipeline):
    """Inverts the colors of input video frames."""

    @classmethod
    def get_config_class(cls) -> type["BasePipelineConfig"]:
        return InvertConfig

    def __init__(
        self,
        device: torch.device | None = None,
        **kwargs,
    ):
        self.device = (
            device
            if device is not None
            else torch.device("cuda" if torch.cuda.is_available() else "cpu")
        )

    def prepare(self, **kwargs) -> Requirements:
        """Declare that we need 1 input frame."""
        return Requirements(input_size=1)

    def __call__(self, **kwargs) -> dict:
        """Invert the colors of input frames.

        Args:
            video: List of input frame tensors, each (1, H, W, C) in [0, 255] range.

        Returns:
            Dict with "video" key containing inverted frames in [0, 1] range.
        """
        video = kwargs.get("video")
        if video is None:
            raise ValueError("Input video cannot be None for InvertPipeline")

        # Stack frames into a single tensor: (T, H, W, C)
        frames = torch.stack([frame.squeeze(0) for frame in video], dim=0)
        frames = frames.to(device=self.device, dtype=torch.float32) / 255.0

        # Invert: white becomes black, black becomes white
        inverted = 1.0 - frames

        return {"video": inverted.clamp(0, 1)}
```
Key points:
- `prepare()` returns `Requirements(input_size=N)`: tells Scope to collect N frames before calling `__call__()`.
- `modes = {"video": ModeDefaults(default=True)}`: declares that this pipeline only supports video mode.
- `video` parameter: a list of tensors, one per frame, each with shape (1, H, W, C) in the [0, 255] range.
- Output normalization: input is [0, 255], output must be [0, 1].
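The input/output conventions above can be checked in isolation. This sketch feeds two synthetic frames (one white, one black) through the same stack-normalize-invert steps as the pipeline body:

```python
import torch

# `video` arrives as a list of (1, H, W, C) tensors in [0, 255]
video = [
    torch.full((1, 4, 4, 3), 255.0),  # all-white frame
    torch.zeros(1, 4, 4, 3),          # all-black frame
]

# Stack to (T, H, W, C), normalize to [0, 1], then invert
frames = torch.stack([f.squeeze(0) for f in video], dim=0) / 255.0
inverted = (1.0 - frames).clamp(0, 1)

# The white frame inverts to black, and the black frame to white.
```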
Adding UI Parameters
Expose pipeline parameters in the Scope UI by adding fields to your config with ui_field_config().
Example: Adding an Intensity Slider
```python
from pydantic import Field

from scope.core.pipelines.base_schema import (
    BasePipelineConfig,
    ModeDefaults,
    ui_field_config,
)


class InvertConfig(BasePipelineConfig):
    pipeline_id = "invert"
    pipeline_name = "Invert Colors"
    pipeline_description = "Inverts the colors of input video frames"
    supports_prompts = False
    modes = {"video": ModeDefaults(default=True)}

    # Add a slider that appears in the Settings panel
    intensity: float = Field(
        default=1.0,
        ge=0.0,
        le=1.0,
        description="How strongly to invert the colors (0 = original, 1 = fully inverted)",
        json_schema_extra=ui_field_config(order=1, label="Intensity"),
    )
```
Then read the parameter in `__call__()`:

```python
def __call__(self, **kwargs) -> dict:
    video = kwargs.get("video")
    intensity = kwargs.get("intensity", 1.0)

    # ... process with intensity blending
    inverted = 1.0 - frames
    result = frames * (1 - intensity) + inverted * intensity
    return {"video": result.clamp(0, 1)}
```
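A quick stand-alone check of the blending formula confirms its endpoints: `intensity = 0` returns the original frames and `intensity = 1` the fully inverted ones.

```python
import torch

frames = torch.rand(2, 4, 4, 3)  # synthetic frames already in [0, 1]
inverted = 1.0 - frames


def blend(intensity):
    # Same linear blend as in __call__ above
    return (frames * (1 - intensity) + inverted * intensity).clamp(0, 1)


# blend(0.0) == frames, blend(1.0) == inverted; values in between interpolate
```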
ui_field_config Options
| Option | Type | Description |
|---|---|---|
| `order` | int | Display order (lower values appear first) |
| `modes` | list[str] | Restrict to specific modes, e.g., `["video"]` |
| `is_load_param` | bool | If `True`, parameter is set at load time and disabled during streaming |
| `label` | str | Short display label (description becomes the tooltip) |
| `category` | str | `"configuration"` for Settings panel (default), `"input"` for Input & Controls |
Load-time vs Runtime Parameters
Parameters behave differently depending on `is_load_param`:
| Type | `is_load_param` | Editable During Streaming | Where to Read |
|---|---|---|---|
| Load-time | `True` | No | `__init__()` |
| Runtime | `False` (default) | Yes | `__call__()` kwargs |
Load-time parameters are passed when the pipeline loads and require a restart to change. Use for resolution, model selection, device configuration.
Runtime parameters are passed to __call__() on every frame. Use for effects, strengths, colors.
Runtime parameters must be read from kwargs in `__call__()`, not stored in `__init__()`:

```python
# Correct: read runtime params from kwargs in __call__()
def __call__(self, **kwargs) -> dict:
    intensity = kwargs.get("intensity", 1.0)


# Incorrect: runtime params are NOT passed to __init__()
def __init__(self, intensity: float = 1.0, **kwargs):
    self.intensity = intensity  # Always gets the default value!
```
Creating Preprocessors
Preprocessors transform input video before the main pipeline processes it. Useful for generating control signals (depth maps, edges) for VACE V2V workflows.
```python
from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults, UsageType


class MyPreprocessorConfig(BasePipelineConfig):
    pipeline_id = "my-preprocessor"
    pipeline_name = "My Preprocessor"
    usage = [UsageType.PREPROCESSOR]  # Makes it appear in the Preprocessor dropdown
    modes = {"video": ModeDefaults(default=True)}
```
Preprocessors must:
- Set `usage = [UsageType.PREPROCESSOR]`
- Use `modes = {"video": ModeDefaults(default=True)}` (video input is required)
- Implement `prepare()` returning `Requirements(input_size=N)`
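As a concrete illustration, the body of a preprocessor's `__call__()` might compute a grayscale control signal from the input frames. This stand-alone sketch follows the frame conventions described above; the luminance weights are the standard Rec. 601 coefficients, a common choice rather than anything Scope mandates:

```python
import torch

# Hypothetical input: a list of (1, H, W, 3) frames in [0, 255],
# as a preprocessor's __call__ would receive via kwargs["video"]
video = [torch.full((1, 8, 8, 3), 255.0)]

# Stack to (T, H, W, C) and normalize to [0, 1]
frames = torch.stack([f.squeeze(0) for f in video], dim=0) / 255.0

# Luminance-weighted grayscale, replicated back to 3 channels so the
# output keeps the same THWC shape the main pipeline expects
weights = torch.tensor([0.299, 0.587, 0.114])
gray = (frames * weights).sum(dim=-1, keepdim=True).expand(-1, -1, -1, 3)
```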
Testing Your Plugin
1. Install locally: in the Scope desktop app or UI, install your plugin using the local path to your plugin directory.
2. Make changes: edit your plugin source code as needed.
3. Reload: click the reload button next to your plugin in the Settings dialog.
4. Test: select your pipeline and verify it works as expected.
In the desktop app, you can use the Browse button to select your local plugin directory. Without the desktop app, run `pwd` in the plugin directory to get the full path to paste into the install field; this also works if you are running the server on a remote machine (e.g. RunPod).
See Also