Create your own Scope nodes to add custom pipelines, preprocessors, or postprocessors. This guide walks through building a node from scratch with working examples.
The `[project.entry-points."scope"]` section registers your node with Scope. The key (`my_scope_node`) is your node name, and the value points to the module containing your hook implementation.
If your node needs additional third-party packages, add them to `[project.dependencies]` in `pyproject.toml`; Scope installs them automatically. You don't need to declare packages that Scope already provides (e.g. `torch`, `pydantic`), since they're available from the host environment.
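Putting those pieces together, a minimal `pyproject.toml` might look like the following sketch. The package name, entry-point module path, and the `opencv-python` dependency are placeholders for illustration, not required values:

```toml
[project]
name = "my-scope-node"
version = "0.1.0"
# Only packages Scope does not already provide; torch, pydantic, etc.
# come from the host environment.
dependencies = ["opencv-python"]

[project.entry-points."scope"]
my_scope_node = "my_scope_node.plugin"
```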
```python
from scope.core.plugins.hookspecs import hookimpl


@hookimpl
def register_pipelines(register):
    from .pipelines.pipeline import MyPipeline

    register(MyPipeline)
```
The register_pipelines hook is called when Scope loads your node. Call register() for each pipeline class you want to make available.
This pipeline inverts the colors of input video frames.

`pipelines/schema.py`:
```python
from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults


class InvertConfig(BasePipelineConfig):
    """Configuration for the Invert Colors pipeline."""

    pipeline_id = "invert"
    pipeline_name = "Invert Colors"
    pipeline_description = "Inverts the colors of input video frames"
    supports_prompts = False

    # Video mode only (requires video input)
    modes = {"video": ModeDefaults(default=True)}
```
`pipelines/pipeline.py`:
```python
from typing import TYPE_CHECKING

import torch

from scope.core.pipelines.interface import Pipeline, Requirements

from .schema import InvertConfig

if TYPE_CHECKING:
    from scope.core.pipelines.base_schema import BasePipelineConfig


class InvertPipeline(Pipeline):
    """Inverts the colors of input video frames."""

    @classmethod
    def get_config_class(cls) -> type["BasePipelineConfig"]:
        return InvertConfig

    def __init__(
        self,
        device: torch.device | None = None,
        **kwargs,
    ):
        self.device = (
            device
            if device is not None
            else torch.device("cuda" if torch.cuda.is_available() else "cpu")
        )

    def prepare(self, **kwargs) -> Requirements:
        """Declare that we need 1 input frame."""
        return Requirements(input_size=1)

    def __call__(self, **kwargs) -> dict:
        """Invert the colors of input frames.

        Args:
            video: List of input frame tensors, each (1, H, W, C) in [0, 255] range.

        Returns:
            Dict with "video" key containing inverted frames in [0, 1] range.
        """
        video = kwargs.get("video")
        if video is None:
            raise ValueError("Input video cannot be None for InvertPipeline")

        # Stack frames into a single tensor: (T, H, W, C)
        frames = torch.stack([frame.squeeze(0) for frame in video], dim=0)
        frames = frames.to(device=self.device, dtype=torch.float32) / 255.0

        # Invert: white becomes black, black becomes white
        inverted = 1.0 - frames

        return {"video": inverted.clamp(0, 1)}
```
Key points:
- `prepare()` returns `Requirements(input_size=N)`: tells Scope to collect N frames before calling `__call__()`
- `modes = {"video": ModeDefaults(default=True)}`: declares that this pipeline only supports video mode
- `video` parameter: a list of tensors, one per frame, each with shape (1, H, W, C) in the [0, 255] range
- Output normalization: input is in [0, 255]; output must be in [0, 1]
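The normalization and inversion steps reduce to simple per-pixel arithmetic: an input value v in [0, 255] maps to 1.0 - v/255.0 in the output, so pure white (255) becomes 0.0 and pure black (0) becomes 1.0. A torch-free sketch of that math:

```python
def invert_pixel(value: float) -> float:
    """Mirror of the pipeline's math: a [0, 255] input becomes an inverted [0, 1] output."""
    normalized = value / 255.0           # [0, 255] -> [0, 1]
    inverted = 1.0 - normalized          # white <-> black
    return min(max(inverted, 0.0), 1.0)  # clamp to [0, 1]


print(invert_pixel(255.0))  # 0.0 -- white becomes black
print(invert_pixel(0.0))    # 1.0 -- black becomes white
```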
Parameters behave differently depending on is_load_param:
| Type      | `is_load_param`   | Editable During Streaming | Where to Read       |
| --------- | ----------------- | ------------------------- | ------------------- |
| Load-time | `True`            | No                        | `__init__()`        |
| Runtime   | `False` (default) | Yes                       | `__call__()` kwargs |
Load-time parameters are passed when the pipeline loads and require a restart to change. Use them for resolution, model selection, and device configuration.

Runtime parameters are passed to `__call__()` on every frame. Use them for effect settings such as strengths and colors.
Runtime parameters must be read from kwargs in `__call__()`, not stored in `__init__()`:
```python
# Correct: Read runtime params from kwargs in __call__()
def __call__(self, **kwargs) -> dict:
    intensity = kwargs.get("intensity", 1.0)


# Incorrect: Runtime params are NOT passed to __init__()
def __init__(self, intensity: float = 1.0, **kwargs):
    self.intensity = intensity  # Always gets the default value!
```
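To see why the distinction matters, here is a self-contained sketch (a plain class standing in for a Scope pipeline, not the real base class) where the value read from kwargs tracks each call while the value stored in `__init__` stays frozen at its default:

```python
class SketchPipeline:
    def __init__(self, intensity: float = 1.0, **kwargs):
        # Frozen at load time: never updated during streaming
        self.intensity = intensity

    def __call__(self, **kwargs) -> float:
        # Read fresh on every frame: picks up changes immediately
        return kwargs.get("intensity", 1.0)


pipe = SketchPipeline()
print(pipe(intensity=0.25))  # 0.25 -- follows the per-call value
print(pipe.intensity)        # 1.0  -- still the load-time default
```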
Preprocessors transform input video before the main pipeline processes it. They are useful for generating control signals (depth maps, edges) for VACE V2V workflows.
```python
from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults, UsageType


class MyPreprocessorConfig(BasePipelineConfig):
    pipeline_id = "my-preprocessor"
    pipeline_name = "My Preprocessor"

    # Makes it appear in the Preprocessor dropdown
    usage = [UsageType.PREPROCESSOR]

    modes = {"video": ModeDefaults(default=True)}
```
Preprocessors must:
- Set `usage = [UsageType.PREPROCESSOR]`
- Use `modes = {"video": ModeDefaults(default=True)}` (video input is required)
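As an illustration of the kind of transform a preprocessor's `__call__` might perform, here is a torch-free sketch that collapses one RGB pixel to a single-channel luminance value, a simplistic stand-in for a real control signal such as a depth or edge map (the weights are the standard Rec. 601 luma coefficients; this helper is not part of the Scope API):

```python
def luminance(r: float, g: float, b: float) -> float:
    """Collapse an RGB pixel (each channel in [0, 1]) to one control-signal channel."""
    # Rec. 601 luma weights
    return 0.299 * r + 0.587 * g + 0.114 * b


print(luminance(0.0, 0.0, 0.0))  # 0.0 -- black stays zero
```

A real preprocessor would apply a transform like this (or a model such as a depth estimator) across every pixel of the incoming frames and return the result under the `"video"` key, just like a regular pipeline.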
1. **Install**: In the Scope desktop app or UI, install your node using the local path to your node directory.
2. **Make changes**: Edit your node source code as needed.
3. **Reload**: Click the reload button next to your node in the Settings dialog.
4. **Test**: Select your pipeline and verify it works as expected.
In the desktop app, you can use the Browse button to select your local node directory. Without the desktop app, run `pwd` in the node directory to get the full path to paste into the install field; this also works if you are running the server on a remote machine (e.g. RunPod).