Node Taxonomy

AMLP categorizes nodes into six fundamental types, each with distinct characteristics and runtime requirements.

DSP Nodes

Traditional Signal Processing

Deterministic computation — same input always produces same output
Low latency — typically under 1 ms, and often zero algorithmic latency for sample-by-sample IIR filters
Minimal memory — state size proportional to filter order or delay length
CPU-only processing — usually sufficient, GPU acceleration rarely needed
Mathematical provability — frequency response and phase characteristics computable analytically

Examples: dsp.biquad, dsp.compressor, dsp.beamformer, dsp.fir, dsp.reverb
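
As a concrete illustration of these properties, the sketch below implements a low-pass filter in the spirit of dsp.biquad. The class name and the RBJ-cookbook coefficient formulas are illustrative choices, not part of the AMLP specification: state is two values (matching the filter order), each sample is processed without lookahead, and the response is fully determined by the coefficients.

```python
import math

class BiquadLowpass:
    """Sketch of a dsp.biquad-style node: deterministic, zero-latency, two state values."""

    def __init__(self, sample_rate: float, cutoff_hz: float, q: float = 0.707):
        # RBJ cookbook low-pass coefficients, derived analytically from cutoff and Q.
        w0 = 2.0 * math.pi * cutoff_hz / sample_rate
        alpha = math.sin(w0) / (2.0 * q)
        cos_w0 = math.cos(w0)
        a0 = 1.0 + alpha
        self.b0 = (1.0 - cos_w0) / 2.0 / a0
        self.b1 = (1.0 - cos_w0) / a0
        self.b2 = (1.0 - cos_w0) / 2.0 / a0
        self.a1 = (-2.0 * cos_w0) / a0
        self.a2 = (1.0 - alpha) / a0
        self.z1 = 0.0  # state size is fixed by filter order: a 2nd-order section keeps 2 values
        self.z2 = 0.0

    def process(self, x: float) -> float:
        # Transposed direct form II: one output per input sample, no lookahead buffering.
        y = self.b0 * x + self.z1
        self.z1 = self.b1 * x - self.a1 * y + self.z2
        self.z2 = self.b2 * x - self.a2 * y
        return y
```

Two instances given the same coefficients and the same input always produce identical output, which is what makes DSP nodes analytically verifiable.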

ML Nodes

Neural Network Inference

Learned behavior — model responses are learned from training data rather than specified analytically
Higher latency — typically 5-50ms depending on model size and hardware
Significant memory — model weights, activation buffers, quantization tables
Hardware acceleration — inference often benefits from GPU/NPU/DSP offload
Quality-compute tradeoffs — larger models generally perform better but cost more latency and memory

Examples: ml.model, ml.ensemble, ml.conditional, ml.transformer, ml.conformer
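
The sketch below is a hypothetical ml.model-style wrapper (not an AMLP API) that makes these runtime costs visible: memory is dominated by weights and preallocated activation buffers, latency has a floor of one block, and compute grows with model size. Random weights stand in for a trained model.

```python
import numpy as np

class DenoiserNode:
    """Hypothetical ml.model-style node: a toy two-layer network over fixed blocks.

    Real ML nodes load trained weights from disk; random weights stand in here,
    since the point is the runtime shape: block-based latency, preallocated
    buffers, and memory dominated by weights and activations.
    """

    def __init__(self, block_size: int = 480, hidden: int = 256, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.block_size = block_size  # 10 ms at 48 kHz, a lower bound on latency
        self.w1 = rng.standard_normal((block_size, hidden)).astype(np.float32) * 0.01
        self.w2 = rng.standard_normal((hidden, block_size)).astype(np.float32) * 0.01
        self.hidden_buf = np.zeros(hidden, dtype=np.float32)  # preallocated activation buffer

    def process_block(self, block: np.ndarray) -> np.ndarray:
        # Inference cost (and thus latency) scales with model size, unlike a fixed-cost biquad.
        np.maximum(block @ self.w1, 0.0, out=self.hidden_buf)  # ReLU, written in place
        return self.hidden_buf @ self.w2
```

Doubling `hidden` roughly doubles both the weight memory and the per-block compute, which is the quality-compute tradeoff in miniature.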

Hybrid Nodes

DSP-ML Integration

Best of both worlds — DSP reliability with ML adaptability
Lower latency than pure ML — DSP stays in the critical signal path while ML adapts parameters out of band
Interpretable parameters — DSP parameters have physical meaning
Graceful degradation — falls back to the pure-DSP path if ML inference fails (see the sketch below)

Examples: hybrid.adaptive_filter, hybrid.neural_effect, hybrid.ml_compressor
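
A minimal sketch of the graceful-degradation pattern, assuming a hypothetical hybrid.ml_compressor-style node: the gain computation is plain DSP on the critical path, while an ML predictor is consulted out of band and can fail without affecting audio. The class and parameter names are illustrative.

```python
class HybridCompressor:
    """Hypothetical hybrid.ml_compressor-style node.

    The DSP gain computer runs on every sample (critical path); an ML model is
    polled from a non-real-time thread for new threshold/ratio suggestions and
    may fail or arrive late without interrupting audio.
    """

    def __init__(self, threshold_db: float = -18.0, ratio: float = 4.0):
        self.threshold_db = threshold_db  # interpretable DSP parameters
        self.ratio = ratio

    def update_from_ml(self, predict_params):
        # Called outside the audio thread; any failure leaves DSP parameters untouched.
        try:
            threshold_db, ratio = predict_params()
            if -60.0 <= threshold_db <= 0.0 and 1.0 <= ratio <= 20.0:  # sanity-check ML output
                self.threshold_db, self.ratio = threshold_db, ratio
        except Exception:
            pass  # graceful degradation: keep the last known-good DSP settings

    def gain_db(self, level_db: float) -> float:
        # Static compression curve: pure DSP, deterministic, sample-accurate.
        over = level_db - self.threshold_db
        return 0.0 if over <= 0.0 else -over * (1.0 - 1.0 / self.ratio)
```

Because threshold and ratio remain ordinary compressor parameters, the ML contribution stays inspectable and can be clamped or disabled at any time.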

Spatial Audio Nodes

Immersive Audio Processing

Object-based processing — audio objects with spatial metadata (position, size, divergence)
Scene-based encoding — Ambisonics (first-order and higher-order)
Binaural rendering — HRTF convolution for headphone reproduction
Format conversion — translation between spatial audio formats
Downmix strategies — adaptive conversion to various speaker configurations

Examples: spatial.object_renderer, spatial.ambisonics_encoder, spatial.binaural_renderer, spatial.iamf_encoder
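
At its core, a spatial.ambisonics_encoder-style node is a small gain computation. The sketch below encodes a mono object to first-order Ambisonics using ACN channel ordering and SN3D normalization; the function name is illustrative, and a real encoder would interpolate the gains per block as objects move to avoid zipper noise.

```python
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_rad: float, elevation_rad: float) -> np.ndarray:
    """Encode a mono signal to first-order Ambisonics (ACN order W, Y, Z, X; SN3D gains)."""
    gains = np.array([
        1.0,                                          # W: omnidirectional component
        np.sin(azimuth_rad) * np.cos(elevation_rad),  # Y: left-right
        np.sin(elevation_rad),                        # Z: up-down
        np.cos(azimuth_rad) * np.cos(elevation_rad),  # X: front-back
    ])
    return gains[:, None] * mono[None, :]  # shape: (4 channels, num_samples)

# Example: a source 90 degrees to the left at ear height.
foa = encode_foa(np.ones(480, dtype=np.float32), np.pi / 2, 0.0)
```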

Network Audio Nodes

Audio-Over-IP Integration

AES67 compliance — open standard for audio-over-IP interoperability
SMPTE ST 2110-30 — the PCM audio part of the broadcast standard suite for professional media over IP
PTP synchronization — IEEE 1588 Precision Time Protocol for sample-accurate timing
QoS management — network quality-of-service prioritization for predictable, low-jitter delivery
Distributed processing — ML inference across networked systems

Examples: network.aes67_input, network.aes67_output, network.st2110_interface, network.ptp_sync
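
AES67 streams carry PCM in RTP packets (48 samples per packet at 48 kHz for the default 1 ms packet time), and both ends derive the media clock from PTP. The sketch below shows the timestamp arithmetic in simplified form, ignoring the clock offsets negotiated via SDP; the function names are illustrative.

```python
SAMPLE_RATE = 48_000      # AES67 baseline sample rate
SAMPLES_PER_PACKET = 48   # 1 ms packet time, the AES67 default

def rtp_timestamp_from_ptp(ptp_seconds: int, ptp_nanoseconds: int) -> int:
    """Simplified RTP media-clock timestamp derived from PTP time.

    Because the media clock is locked to the PTP timescale, a sender can compute
    the RTP timestamp of a packet directly from the PTP time of its first sample.
    """
    total_samples = ptp_seconds * SAMPLE_RATE + (ptp_nanoseconds * SAMPLE_RATE) // 1_000_000_000
    return total_samples % (1 << 32)  # RTP timestamps are 32-bit and wrap

def packet_schedule(start_ptp_seconds: int, num_packets: int):
    """Yield (packet_index, rtp_timestamp) for consecutive 1 ms packets."""
    base = rtp_timestamp_from_ptp(start_ptp_seconds, 0)
    for i in range(num_packets):
        yield i, (base + i * SAMPLES_PER_PACKET) % (1 << 32)
```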

MCP Integration Nodes

AI Ecosystem Connectivity

Resource exposure — expose audio models and pipelines as MCP resources
Tool invocation — allow AI systems to trigger audio processing via MCP tools
Context provision — provide audio analysis results to AI assistants
Bidirectional communication — enable AI-driven parameter control

Examples: mcp.resource_server, mcp.tool_provider, mcp.context_bridge, mcp.ai_assistant_interface
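
A minimal sketch of an mcp.tool_provider / mcp.resource_server pairing, assuming the official MCP Python SDK's FastMCP interface; the server name, tool name, resource URI scheme, and returned values are placeholders rather than AMLP definitions.

```python
from mcp.server.fastmcp import FastMCP  # assumes the official MCP Python SDK

server = FastMCP("amlp-audio-analysis")  # server name is illustrative

@server.tool()
def measure_loudness(path: str) -> dict:
    """Hypothetical tool: report loudness analysis for an audio file.

    A real node would run an AMLP analysis pipeline here; the fixed values are
    placeholders showing the shape of the result an AI assistant would receive.
    """
    return {"file": path, "integrated_lufs": -16.2, "true_peak_dbfs": -1.3}

@server.resource("amlp://pipelines/{name}")
def pipeline_description(name: str) -> str:
    """Hypothetical resource: expose a pipeline definition to MCP clients."""
    return f"AMLP pipeline '{name}' (definition would be returned here)"

if __name__ == "__main__":
    server.run()  # serves over stdio by default in the SDK's quickstart
```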