Art · Audio · Performance

Generative Music System

From recursive theory to real-time sound

The Translation Problem

How do you get from a formal system to something people actually experience? That's the core design problem of ORGAN-II. This project translates recursive narrative principles from RE:GE into a real-time generative music system. The music doesn't illustrate the narrative — it is the narrative, in a different medium.[1] Eno's concept of generative music — systems that produce ever-different and changing results through rules rather than fixed compositions — provides the philosophical foundation. The choices made during translation are themselves artistic decisions, and that's where the interesting work lives.[2]

graph TD
  SE[Layer 1: Symbolic Engine] -->|typed timestamped events| SB[Layer 2: Sonification Bridge]
  SB -->|musical parameters| PS[Layer 3: Performance System]
  SE -->|entity state changes| SB
  SE -->|ritual phase transitions| SB
  SE -->|myth function activations| SB
  SE -->|recursive depth changes| SB
  SB -->|tonal center| PS
  SB -->|harmonic complexity| PS
  SB -->|rhythmic pattern| PS
  SB -->|timbral layering| PS
  SB -->|melodic contour| PS
  PS -->|audio synthesis| OUT[Live Sound]
  PS -->|spatialization| OUT
  PS -->|performer interaction| OUT
Three-layer data flow — symbolic events from the recursive engine are translated through a sonification bridge into real-time performance output

Three-Layer Architecture

Layer 1: The Symbolic Engine

RE:GE provides the structural backbone — a stream of typed, timestamped symbolic events: entity state changes, ritual phase transitions, myth function activations, recursive depth changes. These events are abstract and carry no inherent sonic representation.[3] Hofstadter's insight that formal systems can generate meaning through structural relationships — not through any intrinsic semantic content — is precisely what makes this translation possible. The symbolic events are meaningful because of how they relate to each other, not because of what they "sound like."
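To make the event stream concrete, here is a hypothetical excerpt of what the engine might emit, using the SymbolicEvent shape defined in the bridge code below; the specific values and timings are illustrative, not taken from a real run.

// Hypothetical excerpt of the symbolic event stream (illustrative values only)
const eventStream: SymbolicEvent[] = [
  { type: 'identity',       timestamp: 0.0,  intensity: 0.8, depth: 0 },
  { type: 'ritual',         timestamp: 4.2,  intensity: 0.5, depth: 0, phase: 'invocation' },
  { type: 'recursion',      timestamp: 9.7,  intensity: 0.6, depth: 2 },
  { type: 'transformation', timestamp: 12.1, intensity: 0.9, depth: 2 },
  { type: 'myth',           timestamp: 15.4, intensity: 0.7, depth: 1, function: 'hero-departure' },
];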

Layer 2: The Sonification Bridge

This is where the artistic decisions live. The bridge maps symbolic events to musical parameters.[4] Hermann et al. establish that effective sonification requires a principled mapping between data dimensions and auditory parameters — arbitrary mappings produce noise, while structurally motivated mappings produce comprehensible sound. Each row in the mapping table below represents a deliberate choice grounded in music-theoretic reasoning.

Symbolic Event           | Musical Parameter      | Rationale
Identity stability       | Tonal center strength  | Stable identity = clear tonic
Transformation intensity | Harmonic complexity    | Greater change = more tension
Ritual phase             | Rhythmic pattern       | Ceremony = structured time
Recursive depth          | Timbral layering       | Self-reference = voices within voices
Myth function type       | Melodic contour        | Hero ascends, villain descends
Figure 1. Sonification bridge mapping — each symbolic event type is translated to a musical parameter through a music-theoretically motivated rationale
sonification-bridge.ts
interface SymbolicEvent {
  type: 'identity' | 'transformation' | 'ritual' | 'recursion' | 'myth';
  timestamp: number;
  intensity: number;    // 0.0 – 1.0
  depth: number;        // recursive nesting level
  phase?: string;       // ritual phase identifier
  function?: string;    // myth function name
}

interface MusicalParams {
  tonalCenter: number;       // scale degree 0–11
  harmonicComplexity: number; // 0.0 – 1.0
  rhythmPattern: number[];    // onset pattern
  timbralLayers: number;      // voice count
  melodicContour: number[];   // pitch direction sequence
}

function sonify(event: SymbolicEvent): MusicalParams {
  return {
    tonalCenter: mapIdentityToTonic(event.intensity),
    harmonicComplexity: mapTransformToTension(event.intensity),
    rhythmPattern: mapRitualToRhythm(event.phase),
    timbralLayers: mapRecursionToVoices(event.depth),
    melodicContour: mapMythToContour(event.function),
  };
}
Sonification bridge core — maps symbolic events from RE:GE to musical parameter values through weighted transformation functions
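The individual map* helpers are where each row of the mapping table turns into arithmetic. Below is a minimal sketch of two of them, assuming simple monotonic mappings; the bodies are illustrative, and the actual curves and weightings in ORGAN-II are tuned by ear.

// Hypothetical sketches of two mapping helpers (illustrative, not the shipped code)

// Recursive depth -> timbral layering: each level of self-reference adds a voice,
// capped so deep recursion stays musically legible.
function mapRecursionToVoices(depth: number): number {
  const MAX_VOICES = 8;                   // assumed ceiling on simultaneous voices
  return Math.min(1 + depth, MAX_VOICES);
}

// Transformation intensity -> harmonic complexity: greater change means more tension.
// A gentle curve keeps low-intensity events consonant and reserves dissonance for peaks.
function mapTransformToTension(intensity: number): number {
  const clamped = Math.max(0, Math.min(1, intensity));
  return Math.pow(clamped, 1.5);          // assumed exponent shaping the tension curve
}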

Layer 3: The Performance System

The performance system handles real-time audio synthesis, spatialization, and interaction, and is designed for live contexts — gallery installations, concert performances, interactive exhibits.[5] Rowe's taxonomy of interactive music systems — score-driven versus performance-driven, instrument versus player paradigms — informed the decision to build a system that operates across all three performance contexts described below. The system listens to the symbolic engine and responds in real time, functioning as both instrument and autonomous performer depending on context.[6]
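As a rough sketch of that dual role, reusing the MusicalParams interface from the bridge code above: the parameters pass through a mode-dependent step where a performer's gesture can reshape them in concert, while installation and network modes leave them untouched. The names PerformanceMode, Gesture, and performanceStep are assumptions for illustration, not the actual ORGAN-II API.

// Hypothetical sketch of the performance layer's mode-dependent step
type PerformanceMode = 'installation' | 'concert' | 'network';

interface Gesture {
  brightness: number;   // assumed performer control, 0.0 - 1.0
  density: number;      // assumed performer control, 0.0 - 1.0
}

function performanceStep(
  params: MusicalParams,
  mode: PerformanceMode,
  gesture?: Gesture,
): MusicalParams {
  // In concert mode a human performer bends the bridge's output;
  // in installation and network modes the system runs autonomously.
  if (mode === 'concert' && gesture) {
    return {
      ...params,
      harmonicComplexity: Math.min(1, params.harmonicComplexity * (0.5 + gesture.density)),
      timbralLayers: Math.max(1, Math.round(params.timbralLayers * gesture.brightness)),
    };
  }
  return params;
}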

The Discovery: Recursion Sounds Like Counterpoint

This was the project's defining moment, and it wasn't planned. When the recursive engine entered self-referential processing — an entity examining itself, a system modifying its own rules — our first attempt mapped recursive depth to reverb. It sounded terrible.[7] Fux codified the rules of counterpoint as a pedagogical system — species counterpoint, where each "species" adds a layer of rhythmic and melodic complexity atop a cantus firmus. What we discovered is that recursion already is counterpoint: each level of self-reference is a new voice commenting on the voices below it, following rules that derive from but are not identical to the original.

Counterpoint emerged from experimentation: each recursive level gets its own melodic voice, related to but distinct from its parent. The result is Bach-like clarity where you can follow each level of self-reference. Voices commenting on voices. The formal system created the conditions for a musical insight that pure intuition wouldn't have found.[8] Lerdahl and Jackendoff's generative theory demonstrates that musical understanding is hierarchical — listeners parse music into nested grouping structures and metrical structures, precisely the kind of recursive nesting that the engine's symbolic events already encode.
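A minimal sketch of the voice-per-level idea, assuming pitches are MIDI note numbers: each recursive level derives a new line from its parent by a simple rule, transposition on even levels and inversion plus transposition on odd ones. The derivation rule and the function name voicesForDepth are illustrative only.

// Hypothetical sketch: one derived melodic voice per level of self-reference
function voicesForDepth(cantus: number[], depth: number): number[][] {
  const voices: number[][] = [cantus];            // level 0: the original line
  for (let level = 1; level <= depth; level++) {
    const parent = voices[level - 1];
    const derived = parent.map((pitch) =>
      level % 2 === 0
        ? pitch + 7                               // even levels: transpose up a perfect fifth
        : 2 * parent[0] - pitch + 7               // odd levels: invert around the opening note, then transpose
    );
    voices.push(derived);
  }
  return voices;                                  // each voice comments on the one below it
}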

Time Is the Hardest Translation

Narrative time and musical time operate on different scales. We tried linear compression (boring), event-driven (sparse), and finally landed on continuous with event punctuation — an ongoing musical texture driven by entity state, punctuated by significant events. This works because it mirrors how we experience narrative: continuous consciousness punctuated by significant moments.[9] Meadows' distinction between stocks (accumulations) and flows (rates of change) maps directly onto the time problem: entity state is a stock that changes continuously, while symbolic events are discrete flows that perturb the system. The "continuous with event punctuation" approach emerged from treating narrative time as a stock-and-flow system rather than a sequence of discrete steps.

graph LR
  A[Linear Compression] -->|too uniform| X1[Rejected]
  B[Event-Driven Only] -->|too sparse| X2[Rejected]
  C[Continuous + Punctuation] -->|mirrors experience| Y[Adopted]
  C --> S[Entity State = Continuous Texture]
  C --> E[Symbolic Events = Punctuation]
  S -->|harmonic drift| T[Tonal Movement]
  S -->|timbral evolution| T
  E -->|rhythmic accent| T
  E -->|melodic gesture| T
  T --> O[Output: Musical Time]
Time-mapping approaches attempted — from linear compression through event-driven to the final continuous-with-punctuation model
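Read as code, the adopted model splits into two functions: a drift step that advances the continuous texture every frame (the stock), and a punctuation step that fires only when a symbolic event arrives (the flow). TextureState, driftTexture, and punctuate are hypothetical names, and the drift rates are placeholders.

// Hypothetical sketch of the continuous-with-punctuation time model
interface TextureState {
  tonalDrift: number;        // slow harmonic movement
  timbreEvolution: number;   // gradual spectral change
}

// Continuous part: called every frame whether or not events arrived.
function driftTexture(state: TextureState, dt: number): TextureState {
  return {
    tonalDrift: state.tonalDrift + 0.01 * dt,             // placeholder drift rate
    timbreEvolution: state.timbreEvolution + 0.005 * dt,  // placeholder evolution rate
  };
}

// Punctuation part: called only when a symbolic event arrives,
// injecting an accent proportional to the event's intensity and depth.
function punctuate(state: TextureState, event: SymbolicEvent): TextureState {
  return {
    tonalDrift: state.tonalDrift + 0.5 * event.intensity,
    timbreEvolution: state.timbreEvolution + 0.1 * event.depth,
  };
}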

Performance Contexts

  • Gallery installation — continuous 6-12 hour operation, spatial audio creates distinct narrative zones
  • Live concert — performer shapes narrative via gestural control, 20-45 minutes, one complete mythic cycle
  • Network performance — multiple instances contributing to a shared narrative space

Each context demands a different relationship between system autonomy and human control.[10] Murray identifies the tension between authorial control and procedural generation as the central design challenge of interactive narrative — in the gallery, the system is fully autonomous; in concert, a human performer guides narrative direction through gestural input; in network performance, multiple human and machine agents negotiate a shared mythic space. The system architecture must accommodate all three without compromising any.[11]
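One way to express those relationships is a small configuration object per context, sketched below with hypothetical field names; the durations come from the list above, and the network context's duration is left unspecified.

// Hypothetical per-context configuration (field names are illustrative)
interface ContextConfig {
  maxDurationMinutes?: number;                    // omitted where no duration is fixed
  autonomy: 'full' | 'shared' | 'negotiated';     // who drives the narrative
  humanInput: 'none' | 'gestural' | 'networked';
}

const contexts: Record<'gallery' | 'concert' | 'network', ContextConfig> = {
  gallery: { maxDurationMinutes: 12 * 60, autonomy: 'full', humanInput: 'none' },
  concert: { maxDurationMinutes: 45, autonomy: 'shared', humanInput: 'gestural' },
  network: { autonomy: 'negotiated', humanInput: 'networked' },
};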

3 architecture layers · 5 symbolic event types · 5 musical parameters · 3 performance modes · 12h max continuous run · TypeScript core
Figure 2. System metrics — three architectural layers translating five symbolic event types across three performance modes

References

  1. Eno, Brian. Generative Music. In Motion Magazine, 1996.
  2. Roads, Curtis. The Computer Music Tutorial. MIT Press, 1996.
  3. Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.
  4. Hermann, Thomas, Andy Hunt, and John G. Neuhoff. The Sonification Handbook. Logos Publishing House, 2011.
  5. Rowe, Robert. Interactive Music Systems: Machine Listening and Composing. MIT Press, 1993.
  6. Small, Christopher. Musicking: The Meanings of Performing and Listening. Wesleyan University Press, 1998.
  7. Fux, Johann Joseph. Gradus ad Parnassum. Vienna, 1725 (English trans. Norton, 1965).
  8. Lerdahl, Fred and Ray Jackendoff. A Generative Theory of Tonal Music. MIT Press, 1983.
  9. Meadows, Donella H. Thinking in Systems: A Primer. Chelsea Green Publishing, 2008.
  10. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. MIT Press, 1997.
  11. Galanter, Philip. What is Generative Art? Complexity Theory as a Context for Art Theory. International Conference on Generative Art, 2003.