# Understanding pipeline architecture

Pipelines are the core abstraction for handling streaming video in Scope. A pipeline encapsulates model loading, inference logic, configuration schemas, and metadata.

## Pipeline Definition

### The Pipeline Base Class

All pipelines inherit from the abstract `Pipeline` class:
| Method | Purpose |
|---|---|
| `get_config_class()` | Returns the Pydantic config class that defines parameters and metadata |
| `__call__()` | Processes input frames and returns generated video |
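The contract in the table above can be sketched roughly as follows. This is an illustrative stand-in, not Scope's actual base class; the real signatures may differ:

```python
from abc import ABC, abstractmethod

class Pipeline(ABC):
    """Minimal sketch of the base-class contract described above."""

    @classmethod
    @abstractmethod
    def get_config_class(cls):
        """Return the Pydantic config class for this pipeline."""

    @abstractmethod
    def __call__(self, frames, **params):
        """Process input frames and return generated video."""

class EchoPipeline(Pipeline):
    """Toy subclass that passes frames through unchanged."""

    @classmethod
    def get_config_class(cls):
        return dict  # a real pipeline returns its BasePipelineConfig subclass

    def __call__(self, frames, **params):
        return frames
```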
### Configuration Schema

Every pipeline defines a Pydantic configuration class that inherits from `BasePipelineConfig`. This class serves as the single source of truth for:
- Pipeline metadata (ID, name, description, version)
- Feature flags (LoRA support, VACE support, quantization)
- Parameter definitions with validation constraints
- UI rendering hints for the frontend
### Example Configuration
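A configuration class along these lines covers all four roles listed above. The `BasePipelineConfig` stand-in and the field names are illustrative, not Scope's actual definitions:

```python
from typing import ClassVar
from pydantic import BaseModel, Field

class BasePipelineConfig(BaseModel):
    """Stand-in for Scope's BasePipelineConfig; for illustration only."""

class MyPipelineConfig(BasePipelineConfig):
    # Metadata as class variables (ClassVar keeps them out of the schema)
    pipeline_id: ClassVar[str] = "my-pipeline"
    pipeline_name: ClassVar[str] = "My Pipeline"
    pipeline_description: ClassVar[str] = "Example text-to-video pipeline"
    pipeline_version: ClassVar[str] = "0.1.0"

    # Feature flags
    supports_lora: ClassVar[bool] = False

    # Parameters with validation constraints; descriptions become tooltips
    prompt: str = Field("", description="Text prompt for generation")
    noise_scale: float = Field(0.7, ge=0.0, le=1.0, description="Noise strength")
```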
### Pipeline Metadata

Configuration classes declare metadata as class variables:

| Metadata | Type | Description |
|---|---|---|
| `pipeline_id` | `str` | Unique identifier used for registry lookup |
| `pipeline_name` | `str` | Human-readable display name |
| `pipeline_description` | `str` | Description of capabilities |
| `pipeline_version` | `str` | Semantic version string |
| `docs_url` | `str \| None` | Link to pipeline documentation |
| `estimated_vram_gb` | `float \| None` | Estimated VRAM requirement in GB |
| `artifacts` | `list[Artifact]` | Model files required by the pipeline |
### Feature Flags

Feature flags control which UI controls are shown:

| Flag | Effect |
|---|---|
| `supports_lora` | Enables LoRA management UI |
| `supports_vace` | Enables VACE reference image UI |
| `supports_cache_management` | Enables cache controls |
| `supports_quantization` | Enables quantization selector |
| `supports_kv_cache_bias` | Enables KV cache bias slider |
## Artifacts

Artifacts declare model files and resources that a pipeline requires. The system downloads these automatically before the pipeline loads.

### Available Artifact Types

| Type | Source | Attributes |
|---|---|---|
| `HuggingfaceRepoArtifact` | HuggingFace Hub | `repo_id`, `files` |
| `GoogleDriveArtifact` | Google Drive | `file_id`, `files` (optional), `name` (optional) |
### Example
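An artifact declaration might look like the following. The constructor signature is inferred from the attribute table above, and the repository and file names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class HuggingfaceRepoArtifact:
    """Stand-in mirroring the attribute table above."""
    repo_id: str
    files: list[str]

# Declared on a config class as the `artifacts` class variable
artifacts = [
    HuggingfaceRepoArtifact(
        repo_id="some-org/some-model",   # hypothetical repository
        files=["model.safetensors", "config.json"],
    ),
]
```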
## Input Requirements

The `prepare()` method declares input frame requirements before processing. Pipelines that accept video input must implement it.

| Return Value | Meaning |
|---|---|
| `Requirements(input_size=N)` | Pipeline needs N input frames before `__call__()` |
| `None` | Pipeline operates in text-only mode (no video input needed) |
### Example
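Both return values can be sketched as follows, using a stand-in `Requirements` type (Scope's actual class is not reproduced here):

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    """Stand-in for Scope's Requirements return type."""
    input_size: int

class VideoInputPipeline:
    def prepare(self, **kwargs):
        # Needs 4 buffered input frames before __call__() runs
        return Requirements(input_size=4)

class TextOnlyPipeline:
    def prepare(self, **kwargs):
        # Text-only mode: no video input needed
        return None
```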
### Multi-mode Pipeline Example

Pipelines that support both text-to-video and video-to-video modes use the `prepare_for_mode()` helper from `defaults.py`. It returns `Requirements` when video mode is active (indicated by `video=True` in kwargs) and `None` for text mode. The `input_size` is calculated from the pipeline's configuration.
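The described behavior amounts to roughly the following sketch. The real helper's signature and how it reads the configuration are assumptions here:

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    """Stand-in for Scope's Requirements return type."""
    input_size: int

def prepare_for_mode(config: dict, **kwargs):
    """Hedged sketch: Requirements in video mode, None in text mode."""
    if kwargs.get("video"):
        # input_size derived from the pipeline's configuration
        return Requirements(input_size=config["num_input_frames"])
    return None  # text mode
```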
Why implement `prepare()`:

- Without it, the frame processor cannot know how many frames to buffer before calling `__call__()`
- Enables efficient queue management: the processor sizes queues based on requirements
- Allows multi-mode pipelines to dynamically switch between text and video input modes
## Mode System

Pipelines can support multiple input modes with different default parameters:

| Mode | Description |
|---|---|
| `text` | Text-to-video generation from prompts only |
| `video` | Video-to-video with input conditioning |

The `default=True` flag marks which mode is selected initially.
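A mode declaration with the `default=True` flag could look like this. The `Mode` type is a hypothetical stand-in for whatever structure Scope actually uses:

```python
from dataclasses import dataclass

@dataclass
class Mode:
    """Hypothetical mode declaration; Scope's actual type may differ."""
    description: str
    default: bool = False

modes = {
    "text": Mode("Text-to-video generation from prompts only", default=True),
    "video": Mode("Video-to-video with input conditioning"),
}

# The default=True flag picks the initially selected mode
initial_mode = next(name for name, m in modes.items() if m.default)
```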
## Preprocessors and Postprocessors

Pipelines can be declared as preprocessors or postprocessors using the `usage` class variable:

| Type | Purpose | UI Location |
|---|---|---|
| `UsageType.PREPROCESSOR` | Process input video before main pipeline | Preprocessor dropdown |
| `UsageType.POSTPROCESSOR` | Process output video after main pipeline | Postprocessor dropdown |
| (empty list) | Standard pipeline | Main pipeline selector |
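The declaration is a one-liner on the pipeline class. The enum values and the example pipeline below are illustrative:

```python
from enum import Enum

class UsageType(Enum):
    """Mirrors the usage types named in the table above."""
    PREPROCESSOR = "preprocessor"
    POSTPROCESSOR = "postprocessor"

class UpscalePostprocessor:
    # Appears in the postprocessor dropdown; an empty list would make
    # this a standard pipeline shown in the main selector.
    usage = [UsageType.POSTPROCESSOR]
```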
## Dynamic UI Rendering

### JSON Schema Generation

Pydantic models automatically generate JSON Schema via `model_json_schema()`. The backend exposes this schema through the `/pipelines` endpoint, which the frontend consumes for dynamic UI rendering.
The schema includes:

- Field types and validation constraints (`minimum`, `maximum`, `enum`)
- Default values
- Descriptions (used as tooltips)
- Custom UI metadata (via `json_schema_extra`)
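All four pieces show up in the generated schema. This is plain Pydantic behavior; only the field name and metadata values are invented for illustration:

```python
from pydantic import BaseModel, Field

class DemoConfig(BaseModel):
    noise_scale: float = Field(
        0.7,
        ge=0.0,
        le=1.0,
        description="Noise strength",        # surfaced as a tooltip
        json_schema_extra={"order": 1},      # custom UI metadata
    )

schema = DemoConfig.model_json_schema()
props = schema["properties"]["noise_scale"]
# props carries the minimum/maximum constraints, the default value,
# the description, and the extra "order" key
```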
### Schema-to-UI Flow
### Field Type Inference

For fields without a `component` specified, the frontend automatically renders an appropriate widget based on the JSON Schema type. The frontend (`schemaSettings.ts`) infers widget types from schema properties:

| Schema Property | Inferred Type | Widget |
|---|---|---|
| `type: "boolean"` | toggle | Toggle switch |
| `type: "string"` | text | Text input |
| `type: "number"` with `minimum`/`maximum` | slider | Slider with input |
| `type: "number"` without bounds | number | Number input |
| `enum` or `$ref` | enum | Select dropdown |
### Two-Tier Component System

The UI uses a two-tier approach: primitive fields render as individual widgets based on inferred type, while complex components group related fields into unified UI blocks.

### UI Metadata

The `ui_field_config()` helper attaches rendering hints to schema fields:

| Field | Type | Description |
|---|---|---|
| `order` | `int` | Sort order for display (lower values appear first) |
| `component` | `str` | Groups fields into complex widgets (e.g., "resolution", "noise") |
| `modes` | `list[str]` | Restrict visibility to specific input modes |
| `is_load_param` | `bool` | If `True`, the field is disabled during streaming |
| `label` | `str` | Short display label; the description becomes the tooltip |
| `category` | `str` | `"configuration"` for the Settings panel, `"input"` for Input & Controls |
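A plausible sketch of the helper: it presumably just bundles the hint keys from the table above into a dict suitable for `json_schema_extra`. The validation shown here is an assumption, not Scope's implementation:

```python
# Hint keys taken from the table above
_ALLOWED = {"order", "component", "modes", "is_load_param", "label", "category"}

def ui_field_config(**hints):
    """Hedged sketch of ui_field_config(); rejects unknown hint keys."""
    unknown = set(hints) - _ALLOWED
    if unknown:
        raise ValueError(f"unknown UI hints: {sorted(unknown)}")
    return hints

extra = ui_field_config(order=2, label="Noise", is_load_param=False,
                        category="configuration")
# Attached to a config field via Field(..., json_schema_extra=extra)
```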
### Complex Components

Fields with the same `component` value are grouped and rendered once as a single widget:

| Component | Fields | Rendered As |
|---|---|---|
| `resolution` | `height`, `width` | Linked dimension inputs |
| `noise` | `noise_scale`, `noise_controller` | Slider + toggle |
| `vace` | `vace_context_scale` | VACE configuration panel |
| `lora` | `lora_merge_strategy` | LoRA manager |
| `denoising_steps` | `denoising_steps` | Multi-step slider |
### Mode-Aware Filtering

Fields specify which modes they appear in via the `modes` key in their UI metadata:
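The filtering rule can be illustrated like this: a field with a `modes` hint is shown only in those modes, while a field without one is shown everywhere. The field names are hypothetical:

```python
field_hints = {
    "noise_scale": {"modes": ["video"]},  # video mode only
    "prompt": {},                         # no modes key: visible in every mode
}

def visible_fields(hints: dict, active_mode: str) -> list[str]:
    """Return the fields the UI should show for the active mode."""
    return [name for name, h in hints.items()
            if "modes" not in h or active_mode in h["modes"]]
```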
### Load-time vs Runtime Parameters

| Type | `is_load_param` | Editable During Streaming | Example |
|---|---|---|---|
| Load parameter | `True` | No | Resolution, quantization, seed |
| Runtime parameter | `False` | Yes | Prompt strength, noise scale |
## Pipeline Registry

Pipelines register with the central `PipelineRegistry` at startup:
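A minimal sketch of such a registry, keying pipelines by the `pipeline_id` metadata on their config class. Scope's real `PipelineRegistry` likely differs in detail:

```python
class PipelineRegistry:
    """Minimal registry sketch; illustrative only."""

    def __init__(self):
        self._pipelines = {}

    def register(self, pipeline_cls):
        # Key the pipeline by its config's pipeline_id metadata
        config_cls = pipeline_cls.get_config_class()
        self._pipelines[config_cls.pipeline_id] = pipeline_cls

    def get(self, pipeline_id):
        return self._pipelines[pipeline_id]

# Usage with a dummy pipeline/config pair:
class _DemoConfig:
    pipeline_id = "demo"

class DemoPipeline:
    @classmethod
    def get_config_class(cls):
        return _DemoConfig

registry = PipelineRegistry()
registry.register(DemoPipeline)
```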
### Built-in vs Plugin Pipelines

Built-in pipelines are registered automatically when the registry module is imported. Plugin pipelines register through the pluggy hook system:
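pluggy is a general-purpose plugin framework; the project name `"scope"` and the `register_pipelines` hook below are assumptions for illustration, not Scope's actual hookspec. The end-to-end flow looks like:

```python
import pluggy

PROJECT = "scope"  # hypothetical project name for the hook markers
hookspec = pluggy.HookspecMarker(PROJECT)
hookimpl = pluggy.HookimplMarker(PROJECT)

class PipelineHookSpec:
    @hookspec
    def register_pipelines(self, registry):
        """Called at startup so plugins can contribute pipelines."""

class MyPlugin:
    @hookimpl
    def register_pipelines(self, registry):
        # A real plugin would register pipeline classes here
        registry.append("my-plugin-pipeline")

pm = pluggy.PluginManager(PROJECT)
pm.add_hookspecs(PipelineHookSpec)
pm.register(MyPlugin())

registry = []
pm.hook.register_pipelines(registry=registry)
```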
### GPU-Based Filtering

Pipelines with `estimated_vram_gb` set are only registered if a compatible GPU is detected. This prevents showing pipelines that cannot run on the current hardware.