
Omni-Dromenon Engine (Metasystem Master)

Collective audience input shaping live art in real time


Omni-Dromenon Engine (Metasystem Master) concept sketch: algorithmic visualization representing the underlying logic of the engine. Source: dynamic generation.

The Problem

Live performance has always been a negotiation between performers and audiences — but the feedback channels are coarse. An audience member can clap or not clap. They cannot communicate "I want the texture to thin out while the harmonic tension increases." Existing tools for interactive performance are either too simple (binary voting, applause meters) or too complex (custom Max/MSP patches that take months per piece).[1] There's nothing in between: a general-purpose engine that works across genres while remaining configurable enough for each. The challenge of designing spectator experiences that move beyond passive consumption has been well documented in HCI research, yet few systems bridge the gap between audience agency and artistic coherence.[5]

The Design Decision

The critical insight: the audience is a co-performer operating a collective instrument, not a data source.[2] And the performer is never subordinate to the crowd. Three override modes (absolute, blend, lock) give the performer graduated control — they can fully override a parameter, blend their intent with the audience's at any ratio, or lock it entirely. This approach reflects the principle that human-centered systems must keep humans in command of consequential decisions, even when the system aggregates collective intelligence.[3] The resulting performances are negotiated in real time, at sub-second latency, across every parameter the performer exposes.
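The three override modes can be sketched as a single resolution step. This is an illustrative sketch, not the engine's actual API: the `Override` shape, `blendRatio` field, and the assumption that `lock` holds the performer's value are all inferred from the description above.

```typescript
type OverrideMode = 'absolute' | 'blend' | 'lock' | 'none';

interface Override {
  mode: OverrideMode;
  performerValue: number; // performer's intended value, normalized 0..1
  blendRatio?: number;    // blend mode: 0 = all audience, 1 = all performer
}

// Resolve one parameter: audience consensus filtered through performer intent.
function resolveParameter(consensusValue: number, override: Override): number {
  switch (override.mode) {
    case 'absolute':
      return override.performerValue;       // performer fully overrides
    case 'blend': {
      const r = override.blendRatio ?? 0.5; // default to an even mix
      return r * override.performerValue + (1 - r) * consensusValue;
    }
    case 'lock':
      return override.performerValue;       // held at the locked value
    default:
      return consensusValue;                // 'none': pure audience consensus
  }
}
```

A 50/50 blend of a performer value of 0.8 and a consensus of 0.2 thus yields 0.5: neither party wins outright, which is the negotiated dynamic the text describes.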

graph TD
  A[Audience Phones] -->|WebSocket /audience| B[Parameter Bus]
  B --> C[Spatial Weighting]
  B --> D[Temporal Weighting]
  B --> E[Cluster Detection]
  C --> F[Consensus Engine]
  D --> F
  E --> F
  F --> G[Outlier Rejection]
  G --> H[Exponential Smoothing]
  H --> I{Performer Override?}
  I -->|absolute| J[Performer Value Only]
  I -->|blend| K[Weighted Mix]
  I -->|lock| L[Locked Parameter]
  I -->|none| M[Consensus Value]
  J --> N[Output Bus]
  K --> N
  L --> N
  M --> N
  N --> O[Audience UI]
  N --> P[Performer Dashboard]
  N --> Q[OSC Bridge → Audio]
Data flow through the Omni-Dromenon Engine: audience input is aggregated through weighted consensus, checked against performer overrides, and distributed to all output channels.

Architecture

┌──────────────────────────────────────────────┐
│           NGINX REVERSE PROXY                │
├──────────────────────────────────────────────┤
│                                              │
│  ┌──────────────────────────────────┐        │
│  │     CORE ENGINE (Port 3000)      │        │
│  │  Express + Socket.io             │  Redis │
│  │  ┌──────────┐ ┌──────────────┐  │◄──┘    │
│  │  │ REST API │ │ WebSocket    │  │        │
│  │  │          │ │ /audience ns │  │ Chroma │
│  │  │          │ │ /performer ns│  │◄──┘    │
│  │  └────┬─────┘ └──────┬───────┘  │        │
│  │       └───────┬──────┘          │        │
│  │         Parameter Bus           │        │
│  │         Consensus Engine        │        │
│  │         OSC Bridge              │        │
│  └──────────────────────────────────┘        │
│                                              │
│  ┌──────────────────────────────────┐        │
│  │     PERFORMANCE SDK (Port 3001)  │        │
│  │  React 18 + Vite                 │        │
│  │  Audience UI  ·  Performer Dash  │        │
│  └──────────────────────────────────┘        │
│                                              │
│  ┌──────────────────────────────────┐        │
│  │     AUDIO SYNTHESIS BRIDGE       │        │
│  │  OSC Server + WebAudio Engine    │        │
│  └──────────────────────────────────┘        │
└──────────────────────────────────────────────┘

Data flow:
Phone → WebSocket /audience → Parameter Bus
→ Consensus (spatial × temporal × cluster)
→ Outlier rejection → Smoothing
→ Performer override check
→ Audience UI + Performer Dashboard + OSC
Figure 1. System architecture of the Omni-Dromenon Engine, showing the three-tier design: reverse proxy, core engine with consensus processing, and frontend/audio output layers.

The architecture reflects distributed systems principles where message-passing between isolated namespaces ensures fault tolerance. I directed the implementation of the core engine to handle high-concurrency WebSocket traffic with sub-50ms latency invariants. The Redis adapter enables horizontal scaling across multiple Node.js processes.[10]

The Consensus Algorithm

Audience inputs are batched, never processed individually. The consensus loop runs every 50ms, computing weighted averages across three axes whose weights sum to 1.0. This approach draws on research into social creativity, where collective input must be structured to avoid both tyranny-of-the-majority and cacophony.[4]
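The batch-and-flush tick can be sketched as follows. Only the 50ms period and the batching behavior come from the text; the `InputBatcher` class and its field names are illustrative assumptions.

```typescript
type AudienceInput = { paramId: string; value: number; receivedAt: number };

class InputBatcher {
  private queue: AudienceInput[] = [];

  // Inputs are queued on arrival, never processed one at a time.
  push(input: AudienceInput): void {
    this.queue.push(input);
  }

  // Drain everything accumulated since the last tick.
  flush(): AudienceInput[] {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }

  // Hand each non-empty batch to the consensus engine every tickMs.
  start(
    onBatch: (batch: AudienceInput[]) => void,
    tickMs = 50
  ): ReturnType<typeof setInterval> {
    return setInterval(() => {
      const batch = this.flush();
      if (batch.length > 0) onBatch(batch);
    }, tickMs);
  }
}
```

Batching at a fixed tick keeps consensus cost proportional to the tick rate rather than the input rate, which is what makes the per-client rate limit and the 50ms loop compose cleanly.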

Genre Preset        Spatial   Temporal   Consensus   Rationale
Electronic Music    0.3       0.5        0.2         Rhythmic immediacy
Ballet              0.5       0.2        0.3         Spatial proximity to dancer
Opera               0.2       0.3        0.5         Collective dramatic coherence
Installation        0.7       0.1        0.2         Location is almost everything
Theatre             0.4       0.3        0.3         Balanced narrative needs

Spatial weighting uses exponential decay from the stage — closer audience members have more influence, reflecting the qualitative difference of proximity. Temporal weighting ensures the system responds to the audience's current state (5s decay window), not their historical average. Consensus weighting detects clusters: converging inputs amplify each other, producing decisive group movements rather than perpetual averages. The cluster detection mechanism resonates with Csikszentmihalyi's observations on how group flow states emerge when individual contributions align toward a shared creative target.[7]
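The two decay weightings above can be sketched as scalar helpers. The 5s window and the exponential falloff come from the text; the spatial decay rate and the helper names are simplified illustrative assumptions (the engine's own helpers operate on full input objects).

```typescript
// Spatial: influence falls off exponentially with distance from the stage.
// The decay rate of 0.1 per meter is an assumed constant for illustration.
function spatialWeight(distanceMeters: number, decayRate = 0.1): number {
  return Math.exp(-decayRate * distanceMeters);
}

// Temporal: inputs older than the ~5s window contribute almost nothing,
// so consensus tracks the audience's current state, not its history.
function temporalWeight(ageMs: number, windowMs = 5000): number {
  return Math.exp(-ageMs / windowMs);
}
```

At the window boundary (age = 5000ms) the temporal weight has already dropped to e^-1 ≈ 0.37, and it keeps shrinking smoothly rather than cutting off abruptly.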

consensus-engine.ts
interface ConsensusConfig {
  spatial: number;   // proximity weight
  temporal: number;  // recency weight
  cluster: number;   // convergence weight
  outlierThreshold: number; // z-score cutoff
  smoothingFactor: number;  // EMA alpha
}

function computeConsensus(
  batch: AudienceInput[],
  config: ConsensusConfig,
  performerOverrides: Map<string, Override>,
  previousState: ParameterState // prior tick's output, needed by step 6
): ParameterState {
  // 1. Weight each input by proximity to stage
  const spatialWeighted = batch.map(input =>
    applyExponentialDecay(input, input.distance, config.spatial)
  );

  // 2. Apply temporal decay (5s window)
  const temporalWeighted = spatialWeighted.map(input =>
    applyTemporalDecay(input, Date.now(), config.temporal)
  );

  // 3. Detect clusters via DBSCAN, amplify convergence
  const clusters = detectClusters(temporalWeighted);
  const clusterWeighted = amplifyConvergence(
    temporalWeighted, clusters, config.cluster
  );

  // 4. Reject outliers beyond z-score threshold
  const filtered = rejectOutliers(
    clusterWeighted, config.outlierThreshold
  );

  // 5. Compute weighted average per parameter
  const consensus = weightedAverage(filtered);

  // 6. Exponential smoothing to prevent jitter
  const smoothed = exponentialSmooth(
    consensus, previousState, config.smoothingFactor
  );

  // 7. Apply performer overrides
  return applyOverrides(smoothed, performerOverrides);
}
Core consensus loop: batched audience inputs are weighted across spatial, temporal, and cluster axes, with outlier rejection and performer override applied before output.

Implementation

Built as a pnpm monorepo (TypeScript + Python) with five packages: core-engine (Express + Socket.io + Redis), performance-sdk (React 18 + Vite), audio-synthesis-bridge (OSC + WebAudio), client-sdk (lightweight embeddable client), and an orchestrate CLI (Python, multi-AI pipeline). The frontend SDK leverages patterns from the Processing community's tradition of making creative coding accessible through well-designed abstractions.[8] The core engine handles two strictly separated Socket.io namespaces — /audience for many concurrent clients (target: 1,000+) and /performer for authenticated controllers. Z-score outlier rejection (threshold: 2.5 SD) and exponential smoothing (factor: 0.3) prevent individual inputs from dominating.
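The two defensive mechanisms named above can be sketched as follows. These versions are simplified to plain numbers for illustration; the engine's versions operate on weighted parameter inputs, and the function bodies here are assumptions consistent with the stated threshold (2.5 SD) and smoothing factor (0.3).

```typescript
// Drop any value whose z-score exceeds the threshold, so a single
// extreme input cannot drag the consensus.
function rejectOutliers(values: number[], zThreshold = 2.5): number[] {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sd = Math.sqrt(
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length
  );
  if (sd === 0) return values; // all inputs agree; nothing to reject
  return values.filter((v) => Math.abs(v - mean) / sd <= zThreshold);
}

// Exponential moving average: each new consensus value nudges the old
// one rather than replacing it, preventing audible jitter.
function exponentialSmooth(current: number, previous: number, alpha = 0.3): number {
  return alpha * current + (1 - alpha) * previous;
}
```

With nine inputs at 0.5 and one at 10, the outlier sits 3 standard deviations from the mean and is rejected, while the agreeing majority passes through untouched.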

src/bus/parameter-bus.ts (excerpt)
enum BusEvent {
  AUDIENCE_INPUT       = 'audience:input',
  AUDIENCE_INPUT_BATCH = 'audience:input:batch',
  CONSENSUS_UPDATE     = 'consensus:update',
  CONSENSUS_SNAPSHOT   = 'consensus:snapshot',
  PERFORMER_OVERRIDE   = 'performer:override',
  PERFORMER_COMMAND    = 'performer:command',
  SESSION_START        = 'session:start',
  SESSION_END          = 'session:end',
  PARTICIPANT_JOIN     = 'participant:join',
  PARTICIPANT_LEAVE    = 'participant:leave',
  ERROR                = 'error',
  STATS                = 'stats',
}
// All payloads strongly typed via BusEventPayloads interface map.
// Bus emits throughput stats every second: inputs/sec, consensus/sec,
// active subscribers, latency.
Parameter Bus — typed event emitter driving all internal communication (12 of its 16 event types shown in this excerpt)
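One way the strongly typed payload map can be enforced at compile time is sketched below. Only the event names come from the excerpt; the payload shapes and the `ParameterBus` wrapper are illustrative assumptions.

```typescript
import { EventEmitter } from 'events';

// Map each event name to its payload type; illustrative shapes only.
interface BusEventPayloads {
  'audience:input:batch': { inputs: { paramId: string; value: number }[] };
  'consensus:update': { paramId: string; value: number };
  'performer:override': { paramId: string; mode: 'absolute' | 'blend' | 'lock' };
}

// Generic wrapper: emitting or subscribing with a mismatched payload
// type is a compile error, not a runtime surprise.
class ParameterBus {
  private emitter = new EventEmitter();

  emit<E extends keyof BusEventPayloads>(event: E, payload: BusEventPayloads[E]): void {
    this.emitter.emit(event, payload);
  }

  on<E extends keyof BusEventPayloads>(
    event: E,
    handler: (payload: BusEventPayloads[E]) => void
  ): void {
    this.emitter.on(event, handler);
  }
}
```

Usage: `bus.on('consensus:update', (p) => p.value)` type-checks, while passing a `consensus:update` handler a performer payload does not.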

Package Architecture

Package                                 Stack                                Responsibility
@omni-dromenon/core-engine              Express + Socket.io + Redis + Zod    Consensus algorithm, parameter bus, OSC bridge, session lifecycle
@omni-dromenon/performance-sdk          React 18 + Vite + Socket.io-client   Audience sliders, voting panels, performer override dashboard
@omni-dromenon/audio-synthesis-bridge   OSC + WebAudio API                   Consensus-to-audio translation for SuperCollider, Max/MSP, Ableton
@omni-dromenon/client-sdk               WebSocket client                     Lightweight embeddable audience participation for third-party sites
packages/orchestrate                    Python 3.11+                         Multi-AI orchestration CLI with gated research-to-implementation pipeline

Four genre-specific example applications exercise the full stack: generative music (SuperCollider via OSC), generative visual (WebGL shaders driven by consensus uniforms), choreographic interface (pose detection + consensus-weighted movement mapping), and theatre dialogue (branching narrative where audience consensus selects paths). Each example is a self-contained pnpm package demonstrating how the engine adapts to different performance contexts.

Performance Targets

P95 Latency              <2ms
Consensus/100 Users      <1ms
Memory/1K Connections    <500MB
Input Rate Limit         10/sec
Consensus Broadcast      20/sec
Audience Capacity        1K+
Figure 3. Quantitative targets reflecting real-time performance requirements for live deployment

Venue Geometry and Deployment

Performance spaces are modeled as 2D coordinate systems with named zones. The default venue defines front, middle, and back sections with spatial weight multipliers (1.0, 0.8, 0.6) applied before distance-decay calculation. Zone-based weights are configurable per deployment: a black box theatre (50-200 seats) uses high spatial resolution with intimate proximity weighting, while an outdoor festival (1,000+ participants) relaxes latency requirements and emphasizes consensus clustering.
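The zone-multiplier lookup described above can be sketched as follows. The front/middle/back multipliers (1.0 / 0.8 / 0.6) come from the text; the zone-boundary distances are illustrative assumptions, configurable per deployment.

```typescript
interface Zone {
  name: string;
  maxDistance: number; // meters from the stage; boundary is inclusive
  multiplier: number;  // spatial weight applied before distance decay
}

// Default venue model: boundary distances are assumed for illustration.
const defaultVenue: Zone[] = [
  { name: 'front',  maxDistance: 5,        multiplier: 1.0 },
  { name: 'middle', maxDistance: 15,       multiplier: 0.8 },
  { name: 'back',   maxDistance: Infinity, multiplier: 0.6 },
];

// First zone whose boundary contains the participant wins.
function zoneMultiplier(distance: number, zones: Zone[] = defaultVenue): number {
  const zone = zones.find((z) => distance <= z.maxDistance);
  return zone ? zone.multiplier : 0;
}
```

Swapping in a different `zones` array is all a black box theatre or festival deployment would need to change; the decay calculation downstream stays the same.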

The system ships with Docker Compose configuration for full-stack deployment (core engine, performance SDK, Redis, ChromaDB, Nginx reverse proxy) and includes production-ready documentation — a festival technical rider, indie venue setup guide, and grant budget line-item template — in the repository's docs/community/ directory. Target deployment contexts include Ars Electronica (Linz), NIME (the spatial weighting model is a novel HCI contribution), and Transmediale (Berlin), with draft grant narratives prepared for each venue.

Why This Is Art

The consensus algorithms aren't backstage plumbing — they're the medium. Who gets weighted more? What happens when the crowd and the performer disagree? These are artistic questions answered by system design. This positions the engine within the lineage of generative art, where the system's rules constitute the artwork itself.[6] The engine consumes theoretical foundations from ORGAN-I (recursive-engine's identity models inform how performers and audience maintain coherence across transformations) and produces a framework that could become a commercial product in ORGAN-III. Bourriaud's concept of relational aesthetics — art defined by the human relations it produces rather than the objects it creates — finds its most literal technical expression here: the engine's entire purpose is to structure the relationship between performer and audience.[12] This is ORGAN-II at its most ambitious: art that treats its own governance as part of the aesthetic.

Tradeoffs & Lessons

  • Generality vs. genre-specific optimization — A general-purpose engine across ballet, electronic music, theatre, and installation means no genre gets perfectly tailored behavior. The preset system mitigates this, but custom Max/MSP patches will always outperform a generalized solution for a single piece. The tradeoff is worth it for rapid deployment across genres.
  • Performer override as compositional tool — Initially designed as a safety valve, performer override became the most interesting artistic element. The creative tension between crowd desire and performer resistance produces dynamics impossible in either autocratic or purely democratic systems. Bishop's analysis of participatory art's political dimensions — who holds power, who concedes it, and under what conditions — proved unexpectedly relevant to the override system's design.[11]
  • Monorepo complexity — Five packages in a pnpm workspace means more build configuration, more dependency management, more CI complexity. The alternative (five separate repos) would be worse for a system where packages share types and build in lockstep.
  • WebSocket at scale — Targeting 1,000+ concurrent audience connections pushes Socket.io's single-process limits. Redis adapter handles horizontal scaling but adds operational complexity. The input rate limit (10/sec per client) and z-score outlier rejection provide defense-in-depth against both accidental flooding and adversarial audience behavior.
  • Theoretical integration — The engine consumes recursive identity models from ORGAN-I and implements them as applied consensus feedback loops. This dependency is genuine — the spatial weighting model operationalizes phenomenal proximity from the organon-noumenon framework — but it means the theoretical foundations must remain stable. The unidirectional dependency rule (I to II, never II to I) ensures the theory can evolve without breaking the engine.

By the Numbers

Packages          5
Weighting Axes    3
Genre Presets     5
Consensus Loop    50ms
Target Audience   1K+
Hybrid Stack      TS+Py
Figure 2. Summary statistics for the Omni-Dromenon Engine, reflecting the scope and performance targets of the system.

References

  1. Machover, Tod. Hyperinstruments: A Progress Report. MIT Media Lab, 1992.
  2. Weinberg, Gil. Interconnected Musical Networks. MIT Press, 2005.
  3. Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
  4. Fischer, Gerhard. Social Creativity: Turning Barriers into Opportunities for Design. ACM, 2004.
  5. Reeves, Stuart, et al. Designing the Spectator Experience. CHI, 2005.
  6. Galanter, Philip. What Is Generative Art? Digital Creativity, 2003.
  7. Csikszentmihalyi, Mihaly. Creativity: Flow and the Psychology of Discovery and Invention. Harper Perennial, 1996.
  8. Reas, Casey and Ben Fry. Processing: A Programming Handbook. MIT Press, 2007.
  9. Nygard, Michael T. Release It! Design and Deploy Production-Ready Software. Pragmatic Bookshelf, 2018.
  10. Tanenbaum, Andrew S. and Maarten van Steen. Distributed Systems: Principles and Paradigms. Pearson, 2007.
  11. Bishop, Claire. Artificial Hells: Participatory Art and the Politics of Spectatorship. Verso, 2012.
  12. Bourriaud, Nicolas. Relational Aesthetics. Les Presses du Réel, 2002.