Build a Real-Time Video Effects Plugin

In this tutorial you will create scope-vfx - a Scope plugin that applies GPU-accelerated visual effects to any video input. You will ship two effects (chromatic aberration and VHS/retro CRT) and set the project up so adding more effects later is as simple as dropping in a new file. By the end you will have a working plugin installed in Scope with live UI controls, and you will understand the full plugin development workflow. A full 13-minute build walkthrough video accompanies this tutorial.

scope-vfx source code

The complete plugin built in this tutorial

What is a Scope plugin?

Daydream Scope is an open-source tool for running real-time interactive AI video pipelines. It supports models like StreamDiffusion V2, LongLive, and Krea Realtime - and its plugin system lets anyone add new pipelines without touching the core codebase. A plugin is a Python package that registers one or more pipelines. A pipeline is a class that:
  1. Declares a configuration schema (what parameters appear in the UI)
  2. Accepts video frames and/or text prompts as input
  3. Returns processed video frames as output
That is the whole contract. Scope handles discovery, installation, UI rendering, and streaming. You write the frame processing logic.
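To make that contract concrete, here is a minimal do-nothing pipeline sketch (hypothetical - it simply echoes its input frames; the rest of this tutorial builds a real one and explains every piece):

import torch

from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults
from scope.core.pipelines.interface import Pipeline, Requirements


class PassthroughConfig(BasePipelineConfig):
    pipeline_id = "passthrough"
    pipeline_name = "Passthrough"
    pipeline_description = "Echoes input frames unchanged"
    modes = {"video": ModeDefaults(default=True)}


class PassthroughPipeline(Pipeline):
    @classmethod
    def get_config_class(cls):
        return PassthroughConfig

    def prepare(self, **kwargs) -> Requirements:
        return Requirements(input_size=1)  # buffer one input frame per call

    def __call__(self, **kwargs) -> dict:
        # Stack incoming (1, H, W, C) frames, normalise to the [0, 1] THWC output format
        frames = torch.stack([f.squeeze(0) for f in kwargs["video"]]).float() / 255.0
        return {"video": frames}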

Prerequisites

  • Python 3.12 or newer
  • uv package manager
  • Daydream Scope installed and running (desktop app or CLI)
  • Basic Python and PyTorch knowledge

Scaffold the project

Step 1: Create the directory structure

Create a new directory with the following layout:
scope-vfx/
├── pyproject.toml
└── src/
    └── scope_vfx/
        ├── __init__.py
        ├── schema.py
        ├── pipeline.py
        └── effects/
            ├── __init__.py
            ├── chromatic.py
            └── vhs.py
The plugin entry point lives in __init__.py, the configuration schema in schema.py, the pipeline logic in pipeline.py, and each effect gets its own file under effects/.

Step 2: Configure pyproject.toml

[project]
name = "scope-vfx"
version = "0.1.0"
description = "GPU-accelerated visual effects pack for Daydream Scope"
requires-python = ">=3.12"

[project.entry-points."scope"]
scope_vfx = "scope_vfx"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/scope_vfx"]
The key line is [project.entry-points."scope"]. This is how Scope discovers your plugin - it scans all installed packages for entry points in the "scope" group and loads whatever module they point to. No configuration files, no manual registration - just a standard Python entry point.
There are no dependencies listed. Scope’s environment already includes PyTorch, Pydantic, and everything else this plugin needs. Only add [project.dependencies] if your plugin uses third-party packages that Scope does not already provide.
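Under the hood this is standard Python packaging machinery. A rough sketch of what the discovery scan amounts to (for intuition only - the real loading logic lives inside Scope’s plugin manager):

from importlib.metadata import entry_points

# Every installed package that declares an entry point in the "scope" group
for ep in entry_points(group="scope"):
    plugin_module = ep.load()  # imports the module the entry point names, e.g. scope_vfx
    print(f"Discovered plugin: {ep.name} -> {ep.value}")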

Register the plugin hook

Create src/scope_vfx/__init__.py:
from scope.core.plugins.hookspecs import hookimpl


@hookimpl
def register_pipelines(register):
    from .pipeline import VFXPipeline

    register(VFXPipeline)
This is the entire plugin registration. The @hookimpl decorator (from pluggy) marks this function as a hook implementation. When Scope loads your plugin, it calls register_pipelines() and passes a register callback. You call it once per pipeline class you want to make available.
The lazy import (from .pipeline import VFXPipeline inside the function body) is intentional - it delays importing PyTorch and your heavy pipeline code until Scope actually needs it.

Define the configuration schema

Create src/scope_vfx/schema.py:
from pydantic import Field

from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults, ui_field_config


class VFXConfig(BasePipelineConfig):
    """Configuration for the VFX Pack pipeline."""

    pipeline_id = "vfx-pack"
    pipeline_name = "VFX Pack"
    pipeline_description = (
        "GPU-accelerated visual effects: chromatic aberration, VHS/retro CRT, and more"
    )

    supports_prompts = False

    modes = {"video": ModeDefaults(default=True)}

    # --- Chromatic Aberration ---

    chromatic_enabled: bool = Field(
        default=True,
        description="Enable chromatic aberration (RGB channel displacement)",
        json_schema_extra=ui_field_config(order=1, label="Chromatic Aberration"),
    )

    chromatic_intensity: float = Field(
        default=0.3,
        ge=0.0,
        le=1.0,
        description="Strength of the RGB channel displacement (0 = none, 1 = maximum)",
        json_schema_extra=ui_field_config(order=2, label="Intensity"),
    )

    chromatic_angle: float = Field(
        default=0.0,
        ge=0.0,
        le=360.0,
        description="Direction of the channel displacement in degrees",
        json_schema_extra=ui_field_config(order=3, label="Angle"),
    )

    # --- VHS / Retro CRT ---

    vhs_enabled: bool = Field(
        default=False,
        description="Enable VHS / retro CRT effect (scan lines, noise, tracking)",
        json_schema_extra=ui_field_config(order=10, label="VHS / Retro CRT"),
    )

    scan_line_intensity: float = Field(
        default=0.3,
        ge=0.0,
        le=1.0,
        description="Darkness of the scan lines (0 = invisible, 1 = fully black)",
        json_schema_extra=ui_field_config(order=11, label="Scan Lines"),
    )

    scan_line_count: int = Field(
        default=100,
        ge=10,
        le=500,
        description="Number of scan lines across the frame height",
        json_schema_extra=ui_field_config(order=12, label="Line Count"),
    )

    vhs_noise: float = Field(
        default=0.1,
        ge=0.0,
        le=1.0,
        description="Amount of analog noise / film grain",
        json_schema_extra=ui_field_config(order=13, label="Noise"),
    )

    tracking_distortion: float = Field(
        default=0.2,
        ge=0.0,
        le=1.0,
        description="Horizontal tracking distortion (wavy displacement)",
        json_schema_extra=ui_field_config(order=14, label="Tracking"),
    )

Understanding the schema

Pipeline metadata - pipeline_id, pipeline_name, and pipeline_description are class variables that tell Scope how to display your pipeline in the UI.

modes = {"video": ModeDefaults(default=True)} declares that the pipeline requires video input (camera feed or video file), so it will not appear in text-to-video mode. For a text-only pipeline (one that generates frames from nothing), you would use "text" instead.

supports_prompts = False hides the prompt input, since these effects do not use text prompts.

Each field becomes a UI control. Scope’s frontend reads the JSON Schema that Pydantic generates from this class and automatically renders the right widget:
Field type        | UI widget
------------------|--------------
bool              | Toggle switch
float with ge/le  | Slider
int with ge/le    | Slider
enum              | Dropdown
The ui_field_config() helper sets display order, labels, and other UI hints. The order values control the vertical position in the settings panel - we use 1-3 for chromatic params and 10-14 for VHS params to keep them grouped with room for future effects in between.
All parameters here are runtime parameters (the default). They are editable while the pipeline is streaming - move a slider and see the result instantly. If you need a parameter that requires a restart (like model selection), add is_load_param=True to its ui_field_config().
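For example, a hypothetical load-time field might look like this (the field name and default are invented for illustration; is_load_param=True is the part that matters):

model_file: str = Field(
    default="effects_lut.bin",  # hypothetical - this plugin loads no files
    description="Lookup table loaded at pipeline start (requires restart)",
    json_schema_extra=ui_field_config(order=30, label="LUT File", is_load_param=True),
)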

Build the first effect - Chromatic Aberration

Create src/scope_vfx/effects/chromatic.py:
import math

import torch


def chromatic_aberration(
    frames: torch.Tensor,
    intensity: float = 0.3,
    angle: float = 0.0,
) -> torch.Tensor:
    """Displace RGB channels in opposite directions for a chromatic aberration look."""
    if intensity <= 0:
        return frames

    max_shift = int(intensity * 20)
    if max_shift == 0:
        return frames

    rad = math.radians(angle)
    dx = int(round(max_shift * math.cos(rad)))
    dy = int(round(max_shift * math.sin(rad)))

    if dx == 0 and dy == 0:
        return frames

    result = frames.clone()

    # Red channel shifts one direction
    result[..., 0] = torch.roll(frames[..., 0], shifts=(dy, dx), dims=(1, 2))
    # Blue channel shifts the opposite direction
    result[..., 2] = torch.roll(frames[..., 2], shifts=(-dy, -dx), dims=(1, 2))
    # Green channel stays centred

    return result
The effect takes the red and blue color channels and shifts them in opposite directions. The green channel stays put. This mimics the optical imperfection in real camera lenses where different wavelengths of light focus at slightly different positions.

torch.roll() does the heavy lifting - it shifts a tensor along specified dimensions, wrapping pixels that fall off one edge back onto the other. Since this operates on GPU tensors, it runs in microseconds even at high resolutions.

The intensity parameter maps to a 0-20 pixel displacement range, and angle controls the direction. At intensity 0.3 (the default), you get about 6 pixels of shift - enough to notice without being overwhelming.
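You can sanity-check the effect outside Scope before wiring it into a pipeline. A quick standalone test, assuming the scope_vfx package is importable (e.g. after an editable install):

import torch

from scope_vfx.effects.chromatic import chromatic_aberration

# One 480x640 RGB frame in THWC layout, values in [0, 1]
frames = torch.rand(1, 480, 640, 3)
out = chromatic_aberration(frames, intensity=0.5, angle=45.0)

assert out.shape == frames.shape
# Green stays put; red and blue have been displaced
assert torch.equal(out[..., 1], frames[..., 1])
assert not torch.equal(out[..., 0], frames[..., 0])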

Build the second effect - VHS / Retro CRT

Create src/scope_vfx/effects/vhs.py:
import math

import torch


def vhs_retro(
    frames: torch.Tensor,
    scan_line_intensity: float = 0.3,
    scan_line_count: int = 100,
    noise: float = 0.1,
    tracking: float = 0.2,
) -> torch.Tensor:
    """Apply a VHS / retro CRT look."""
    _T, H, W, _C = frames.shape
    result = frames.clone()

    # --- Scan lines ---
    if scan_line_intensity > 0 and scan_line_count > 0:
        rows = torch.arange(H, device=frames.device, dtype=torch.float32)
        wave = torch.sin(rows * (scan_line_count * math.pi / H))
        mask = 1.0 - scan_line_intensity * 0.5 * (1.0 - wave)
        result = result * mask.view(1, H, 1, 1)

    # --- Analog noise / film grain ---
    if noise > 0:
        grain = torch.randn_like(result) * (noise * 0.15)
        result = result + grain

    # --- Tracking distortion ---
    if tracking > 0:
        max_shift = tracking * 0.05
        rows_norm = torch.linspace(-1.0, 1.0, H, device=frames.device)
        offsets = max_shift * torch.sin(rows_norm * 2.0 * math.pi * 3.0)  # three full waves

        grid_y = torch.linspace(-1.0, 1.0, H, device=frames.device)
        grid_x = torch.linspace(-1.0, 1.0, W, device=frames.device)
        gy, gx = torch.meshgrid(grid_y, grid_x, indexing="ij")

        gx = gx + offsets.view(H, 1)

        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(result.shape[0], -1, -1, -1)

        result_nchw = result.permute(0, 3, 1, 2)
        result_nchw = torch.nn.functional.grid_sample(
            result_nchw, grid, mode="bilinear", padding_mode="border", align_corners=True
        )
        result = result_nchw.permute(0, 2, 3, 1)

    return result.clamp(0, 1)
This effect combines three sub-effects that together create the unmistakable VHS aesthetic:

Scan lines use a sine wave across the frame height to create alternating dark bands, just like a CRT monitor. The scan_line_count parameter controls how many lines you see, and scan_line_intensity controls how dark they are.

Analog noise adds random Gaussian noise to simulate the grain you would see on a VHS tape. The multiplier is kept conservative (noise * 0.15) so even at maximum the image is not obliterated.

Tracking distortion is the most visually interesting part. It shifts each row of pixels horizontally by a different amount, following a sine curve. This creates the classic “wobbly VHS” look where the image drifts sideways. We use torch.nn.functional.grid_sample() instead of a per-row loop - this is the GPU-friendly way to apply spatially-varying displacements. It runs a single kernel on the GPU regardless of image resolution.
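If grid_sample is unfamiliar: it resamples an image at the (x, y) positions given by a sampling grid in [-1, 1] coordinates. An identity grid reproduces the input exactly, and nudging the grid’s x values (as the tracking code does) pulls each row from a shifted position. A tiny sketch of that idea:

import torch
import torch.nn.functional as F

H, W = 4, 6
img = torch.arange(H * W, dtype=torch.float32).reshape(1, 1, H, W)

# Identity sampling grid: x and y both span [-1, 1]
gy, gx = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)  # (1, H, W, 2)

# An unmodified grid returns the image unchanged
same = F.grid_sample(img, grid, mode="bilinear", padding_mode="border", align_corners=True)
assert torch.allclose(same, img)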

Wire it all together

Create src/scope_vfx/pipeline.py:
from typing import TYPE_CHECKING

import torch

from scope.core.pipelines.interface import Pipeline, Requirements

from .effects import chromatic_aberration, vhs_retro
from .schema import VFXConfig

if TYPE_CHECKING:
    from scope.core.pipelines.base_schema import BasePipelineConfig


class VFXPipeline(Pipeline):
    """GPU-accelerated visual effects pipeline."""

    @classmethod
    def get_config_class(cls) -> type["BasePipelineConfig"]:
        return VFXConfig

    def __init__(self, device: torch.device | None = None, **kwargs):
        self.device = (
            device
            if device is not None
            else torch.device("cuda" if torch.cuda.is_available() else "cpu")
        )

    def prepare(self, **kwargs) -> Requirements:
        """We need exactly one input frame per call."""
        return Requirements(input_size=1)

    def __call__(self, **kwargs) -> dict:
        """Apply enabled effects to input video frames."""
        video = kwargs.get("video")
        if video is None:
            raise ValueError("VFXPipeline requires video input")

        # Stack input frames -> (T, H, W, C) and normalise to [0, 1]
        frames = torch.stack([frame.squeeze(0) for frame in video], dim=0)
        frames = frames.to(device=self.device, dtype=torch.float32) / 255.0

        # --- Effect chain ---

        if kwargs.get("chromatic_enabled", True):
            frames = chromatic_aberration(
                frames,
                intensity=kwargs.get("chromatic_intensity", 0.3),
                angle=kwargs.get("chromatic_angle", 0.0),
            )

        if kwargs.get("vhs_enabled", False):
            frames = vhs_retro(
                frames,
                scan_line_intensity=kwargs.get("scan_line_intensity", 0.3),
                scan_line_count=kwargs.get("scan_line_count", 100),
                noise=kwargs.get("vhs_noise", 0.1),
                tracking=kwargs.get("tracking_distortion", 0.2),
            )

        return {"video": frames.clamp(0, 1)}

Understanding the pipeline class

get_config_class() tells Scope which schema to use for this pipeline.

__init__() receives load-time parameters. We only need the device. The **kwargs catch-all is important - Scope may pass additional parameters that we do not use.

prepare() tells Scope’s frame processor how many input frames to buffer before calling __call__(). We need exactly 1 frame since our effects are per-frame (no temporal dependencies).

__call__() is where the action happens. It extracts the video frames from kwargs (a list of tensors, each (1, H, W, C) in [0, 255] range), stacks and normalises them to [0, 1], runs each enabled effect in sequence, and returns the result in the required [0, 1] THWC format.
Every runtime parameter must be read from kwargs in __call__(), not stored in __init__(). Scope passes the current slider values on every frame. If you read them in __init__(), you would always get the defaults.
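To make that concrete, the wrong and right patterns side by side (using chromatic_intensity purely as an example):

# Wrong: the value is frozen at whatever it was when the pipeline loaded
def __init__(self, **kwargs):
    self.intensity = kwargs.get("chromatic_intensity", 0.3)

# Right: read the live slider value on every call
def __call__(self, **kwargs):
    intensity = kwargs.get("chromatic_intensity", 0.3)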
Finally, create effects/__init__.py to re-export the effect functions for clean imports:
from .chromatic import chromatic_aberration
from .vhs import vhs_retro

__all__ = ["chromatic_aberration", "vhs_retro"]

Install and test

Step 1: Open the Plugins panel

In Scope, open Settings > Plugins.

Step 2: Install the plugin

If you are using the desktop app, click Browse and select the scope-vfx folder. If you are running the server directly, enter the full path to the plugin directory or a Git URL:
git+https://github.com/viborc/scope-vfx.git
Click Install. Scope will install the plugin and restart the server.

Step 3: Select the pipeline

After restart, select VFX Pack from the pipeline selector. Connect a camera or video source and you should see your feed with chromatic aberration applied.

Step 4: Adjust the controls

Open the Settings panel to see your sliders. Try cranking up the intensity, changing the angle, and toggling on the VHS effect.

Development workflow

When you are iterating on effects, the cycle is:
  1. Edit the effect code
  2. Click Reload next to the plugin in Settings
  3. Scope restarts and picks up your changes
No reinstall needed. The reload triggers a full server restart which clears Python’s module cache, so your latest code is always loaded.

Use it as a post-processor

So far we have been running VFX Pack as a main pipeline, meaning it processes raw camera or video input directly. But what if you want to apply these effects on top of AI-generated video? For example, run LongLive to generate video from a prompt and then add chromatic aberration and VHS effects on top of that output. That is what post-processors are for. In Scope, every pipeline sits in a chain:
Input -> [Preprocessor] -> [Main Pipeline] -> [Post-processor] -> Output
The main pipeline is usually the generative AI model. A post-processor runs after it, transforming the model’s output before it reaches your screen. The pipeline code stays exactly the same. The only change is a single metadata field in the schema. Add UsageType to your import and set usage in your config class:
from scope.core.pipelines.base_schema import BasePipelineConfig, ModeDefaults, UsageType, ui_field_config


class VFXConfig(BasePipelineConfig):
    # ... same as before ...

    usage = [UsageType.POSTPROCESSOR]
    modes = {"video": ModeDefaults(default=True)}
With this one line, VFX Pack moves from the main pipeline dropdown to the post-processor slot in the UI. You can pick any generative model as your main pipeline and VFX Pack will process its output. Your __call__() method receives the exact same tensor format either way. The only difference is who produced those frames - the webcam, or the AI model.
At the time of writing, Scope’s UI renders parameter sliders for the main pipeline but not yet for pre/post-processors. Your effects will still apply with whatever values are set, but you will not see the sliders when VFX Pack is in the post-processor slot. A workaround: select VFX Pack as the main pipeline first, adjust your sliders, then switch back to your generative model with VFX Pack as post-processor. The values persist.

Adding more effects

The architecture makes extending trivial. Here is how you would add a pixelation / mosaic effect:

Step 1: Create the effect function

Create src/scope_vfx/effects/pixelate.py:
import torch
import torch.nn.functional as F


def pixelate(frames: torch.Tensor, block_size: int = 8) -> torch.Tensor:
    """Pixelate by downscaling then upscaling with nearest-neighbour."""
    if block_size <= 1:
        return frames

    _T, H, W, _C = frames.shape
    small_h, small_w = max(1, H // block_size), max(1, W // block_size)

    nchw = frames.permute(0, 3, 1, 2)
    small = F.interpolate(nchw, size=(small_h, small_w), mode="area")
    big = F.interpolate(small, size=(H, W), mode="nearest")
    return big.permute(0, 2, 3, 1)

Step 2: Add parameters to the schema

In schema.py, add:
pixelate_enabled: bool = Field(
    default=False,
    description="Enable pixelation / mosaic effect",
    json_schema_extra=ui_field_config(order=20, label="Pixelate"),
)

pixelate_block_size: int = Field(
    default=8,
    ge=1,
    le=64,
    description="Size of each pixel block",
    json_schema_extra=ui_field_config(order=21, label="Block Size"),
)

Step 3: Wire it into the effect chain

In pipeline.py, add to the effect chain inside __call__():
from .effects import chromatic_aberration, vhs_retro, pixelate

# ... inside __call__():
if kwargs.get("pixelate_enabled", False):
    frames = pixelate(
        frames,
        block_size=kwargs.get("pixelate_block_size", 8),
    )

Step 4: Re-export the function

In effects/__init__.py, add:
from .pixelate import pixelate
Reload the plugin in Scope and the new Pixelate section appears in the UI.
Same pattern every time: a standalone function, some schema fields, and a few lines in the effect chain.

What’s next

If this tutorial has inspired you, here are some effects you could add to your own VFX Pack using the exact same pattern:
  • Glitch blocks - random rectangular displacement for a digital corruption look
  • Film grain - more realistic than simple noise, with luminance-dependent grain
  • Vignette - darken the edges for a cinematic frame
  • Color grading - lift/gamma/gain per channel for full color control
  • Kaleidoscope - radial symmetry for trippy visuals
  • Edge glow - Sobel edge detection with additive glow
Each one follows the same pattern: a standalone function in effects/, some schema fields, and a few lines in the effect chain. The plugin grows but the architecture stays simple.
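For instance, a minimal vignette sketch under the same conventions (THWC float frames in [0, 1]; the falloff constant is an arbitrary starting point to tune by eye):

import torch


def vignette(frames: torch.Tensor, strength: float = 0.5) -> torch.Tensor:
    """Darken the frame edges with a radial falloff."""
    if strength <= 0:
        return frames

    _T, H, W, _C = frames.shape
    ys = torch.linspace(-1.0, 1.0, H, device=frames.device)
    xs = torch.linspace(-1.0, 1.0, W, device=frames.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")

    # Distance from the frame centre: 0 in the middle, ~1.41 in the corners
    dist = torch.sqrt(gx * gx + gy * gy)
    mask = (1.0 - strength * 0.5 * dist).clamp(0.0, 1.0)

    return frames * mask.view(1, H, W, 1)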

AI-assisted plugin development

We have prepared a set of Claude Code skills and detailed instructions that let you scaffold an entire Scope plugin through an interactive AI-assisted workflow. A dedicated video tutorial showcasing this approach is coming soon.

scope-vfx source code

Browse the complete plugin source code, including the Claude Code skill in .claude/skills/
