Generative Music System
From recursive theory to real-time sound
The Translation Problem
How do you get from a formal system to something people actually experience? That's the core design problem of ORGAN-II. This project translates recursive narrative principles from RE:GE into a real-time generative music system. The music doesn't illustrate the narrative — it is the narrative, in a different medium.[1] Eno's concept of generative music — systems that produce ever-different and changing results through rules rather than fixed compositions — provides the philosophical foundation. The choices made during translation are themselves artistic decisions, and that's where the interesting work lives.[2]
Three-Layer Architecture
Layer 1: The Symbolic Engine
RE:GE provides the structural backbone — a stream of typed, timestamped symbolic events: entity state changes, ritual phase transitions, myth function activations, recursive depth changes. These events are abstract and carry no inherent sonic representation.[3] Hofstadter's insight that formal systems can generate meaning through structural relationships — not through any intrinsic semantic content — is precisely what makes this translation possible. The symbolic events are meaningful because of how they relate to each other, not because of what they "sound like."
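Concretely, a slice of the stream might look like the following. The event values here are invented for illustration, and the snippet uses the SymbolicEvent interface defined under Layer 2 below:

```typescript
// Hypothetical excerpt of the symbolic event stream (values are illustrative).
// The SymbolicEvent interface is defined under Layer 2 below.
const eventStream: SymbolicEvent[] = [
  { type: 'identity', timestamp: 0.0, intensity: 0.9, depth: 0 },
  { type: 'ritual', timestamp: 2.5, intensity: 0.4, depth: 0, phase: 'invocation' },
  { type: 'recursion', timestamp: 4.0, intensity: 0.6, depth: 1 },
  { type: 'myth', timestamp: 6.2, intensity: 0.8, depth: 1, function: 'departure' },
];
```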
Layer 2: The Sonification Bridge
This is where the artistic decisions live. The bridge maps symbolic events to musical parameters.[4] Hermann et al. establish that effective sonification requires a principled mapping between data dimensions and auditory parameters — arbitrary mappings produce noise, while structurally motivated mappings produce comprehensible sound. Each row in the mapping table below represents a deliberate choice grounded in music-theoretic reasoning.
| Symbolic Event | Musical Parameter | Rationale |
|---|---|---|
| Identity stability | Tonal center strength | Stable identity = clear tonic |
| Transformation intensity | Harmonic complexity | Greater change = more tension |
| Ritual phase | Rhythmic pattern | Ceremony = structured time |
| Recursive depth | Timbral layering | Self-reference = voices within voices |
| Myth function type | Melodic contour | Hero ascends, villain descends |
```typescript
interface SymbolicEvent {
  type: 'identity' | 'transformation' | 'ritual' | 'recursion' | 'myth';
  timestamp: number;
  intensity: number;   // 0.0 – 1.0
  depth: number;       // recursive nesting level
  phase?: string;      // ritual phase identifier
  function?: string;   // myth function name
}

interface MusicalParams {
  tonalCenter: number;        // pitch class 0–11
  harmonicComplexity: number; // 0.0 – 1.0
  rhythmPattern: number[];    // onset pattern
  timbralLayers: number;      // voice count
  melodicContour: number[];   // pitch direction sequence
}

function sonify(event: SymbolicEvent): MusicalParams {
  return {
    tonalCenter: mapIdentityToTonic(event.intensity),
    harmonicComplexity: mapTransformToTension(event.intensity),
    rhythmPattern: mapRitualToRhythm(event.phase),
    timbralLayers: mapRecursionToVoices(event.depth),
    melodicContour: mapMythToContour(event.function),
  };
}
```
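The mapping helpers referenced in sonify are where the music-theoretic choices from the table actually land. The sketch below shows one plausible shape for them; the specific scale ordering, rhythm patterns, and contour vocabularies are invented for illustration, not the project's actual mappings.

```typescript
// Illustrative implementations of the mapping helpers.
// The musical choices (scale ordering, rhythm table, contours) are placeholders.

// Stable identity snaps to the tonic; instability drifts outward along the
// circle of fifths toward more distant pitch classes.
function mapIdentityToTonic(stability: number): number {
  const circleOfFifths = [0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5];
  const index = Math.floor((1 - stability) * (circleOfFifths.length - 1));
  return circleOfFifths[index];
}

// Transformation intensity passes through directly as harmonic tension.
function mapTransformToTension(intensity: number): number {
  return intensity;
}

// Each ritual phase selects a fixed onset pattern; unknown phases get a pulse.
function mapRitualToRhythm(phase: string | undefined): number[] {
  const patterns: Record<string, number[]> = {
    invocation: [1, 0, 0, 1, 0, 0, 1, 0],
    offering:   [1, 0, 1, 0, 1, 0, 1, 0],
    closing:    [1, 0, 0, 0, 1, 0, 0, 0],
  };
  return (phase && patterns[phase]) || [1, 0, 0, 0];
}

// One voice per recursive level, capped to keep the texture legible.
function mapRecursionToVoices(depth: number): number {
  return Math.min(depth + 1, 8);
}

// Myth functions map to pitch-direction sequences (+1 up, -1 down, 0 hold).
function mapMythToContour(fn: string | undefined): number[] {
  const contours: Record<string, number[]> = {
    departure: [1, 1, 0, 1],    // hero ascends
    betrayal:  [-1, -1, 0, -1], // villain descends
    return:    [1, -1, 1, 0],
  };
  return (fn && contours[fn]) || [0, 0, 0, 0];
}
```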
Layer 3: The Performance System
Real-time audio synthesis, spatialization, and interaction. Designed for live contexts — gallery installations, concert performances, interactive exhibits.[5] Rowe's taxonomy of interactive music systems — score-driven versus performance-driven, instrument versus player paradigms — informed the decision to build a system that operates in all three performance contexts. The system listens to the symbolic engine and responds in real time, functioning as both instrument and autonomous performer depending on context.[6]
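The text doesn't name a synthesis backend, but as one possible realization, here is a minimal sketch using the Web Audio API. The octave-stacking scheme for timbral layers is an assumption for illustration:

```typescript
// Sketch: realizing MusicalParams in the browser via the Web Audio API.
// The synthesis backend and voicing scheme are assumptions, not the
// project's documented implementation.
function play(params: MusicalParams, ctx: AudioContext): void {
  const baseFreq = 220 * Math.pow(2, params.tonalCenter / 12); // pitch class -> Hz
  for (let voice = 0; voice < params.timbralLayers; voice++) {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    // Stack voices in octaves: deeper recursion adds higher layers.
    osc.frequency.value = baseFreq * Math.pow(2, voice);
    // Quieter upper voices keep the recursive layers legible.
    gain.gain.value = 0.3 / (voice + 1);
    osc.connect(gain).connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 2); // short illustrative note
  }
}
```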
The Discovery: Recursion Sounds Like Counterpoint
This was the project's defining moment, and it wasn't planned. For the moments when the recursive engine enters self-referential processing — an entity examining itself, a system modifying its own rules — we initially tried mapping recursive depth to reverb. It sounded terrible: deeper recursion just smeared into undifferentiated wash, erasing the level-by-level structure the mapping was meant to expose.[7] Fux codified the rules of counterpoint as a pedagogical system — species counterpoint, where each "species" adds a layer of rhythmic and melodic complexity atop a cantus firmus. What we discovered is that recursion already is counterpoint: each level of self-reference is a new voice commenting on the voices below it, following rules that derive from but are not identical to the original.
Counterpoint emerged from experimentation: each recursive level gets its own melodic voice, related to but distinct from its parent. The result is Bach-like clarity where you can follow each level of self-reference. Voices commenting on voices. The formal system created the conditions for a musical insight that pure intuition wouldn't have found.[8] Lerdahl and Jackendoff's generative theory demonstrates that musical understanding is hierarchical — listeners parse music into nested grouping structures and metrical structures, precisely the kind of recursive nesting that the engine's symbolic events already encode.
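One way to picture the voice-per-level idea in code: derive each recursive level's voice from its parent by a counterpoint-style transformation. The specific rule below (invert the parent's contour, then rotate it one step) is an assumed stand-in for the system's actual voice-derivation logic:

```typescript
// Illustrative sketch: one melodic voice per recursive level. Each child
// voice inverts its parent's contour and rotates it one step, so deeper
// self-reference reads as a new voice commenting on the voice below it.
function deriveVoices(rootContour: number[], depth: number): number[][] {
  const voices: number[][] = [rootContour];
  for (let level = 1; level <= depth; level++) {
    const parent = voices[level - 1];
    // Inversion: a classic contrapuntal relation (related, not identical).
    const inverted = parent.map((step) => -step);
    // Rotation: each level is also displaced in time relative to its parent.
    voices.push([...inverted.slice(1), inverted[0]]);
  }
  return voices;
}

// Depth 2 yields three voices: the root line plus two derived commentaries.
const counterpoint = deriveVoices([1, 1, -1, 0], 2);
// -> [[1, 1, -1, 0], [-1, 1, 0, -1], [-1, 0, 1, 1]]
```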
Time Is the Hardest Translation
Narrative time and musical time operate on different scales. We tried linear compression (boring), event-driven triggering (too sparse), and finally landed on continuous with event punctuation — an ongoing musical texture driven by entity state, punctuated by significant events. This works because it mirrors how we experience narrative: continuous consciousness punctuated by significant moments.[9] Meadows' distinction between stocks (accumulations) and flows (rates of change) maps directly onto the time problem: entity state is a stock that changes continuously, while symbolic events are discrete flows that perturb the system. The "continuous with event punctuation" approach emerged from treating narrative time as a stock-and-flow system rather than a sequence of discrete steps.
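A minimal sketch of that stock-and-flow framing, reusing the SymbolicEvent interface from Layer 2; the class name, tracking rate, and impulse scaling are assumptions for illustration:

```typescript
// Sketch of "continuous with event punctuation" as a stock-and-flow update.
// `texture` is the stock: it evolves smoothly toward the entity's state.
// Events are flows: discrete impulses that perturb the stock, then fade.
class NarrativeClock {
  private texture = 0; // current musical intensity, 0..1

  // Called every control tick (e.g., ~60 Hz): the continuous layer.
  tick(entityState: number, dt: number): number {
    const rate = 0.5; // how quickly texture tracks entity state (assumed)
    this.texture += (entityState - this.texture) * rate * dt;
    return this.texture;
  }

  // Called when a symbolic event arrives: the punctuating impulse.
  punctuate(event: SymbolicEvent): void {
    this.texture = Math.min(1, this.texture + event.intensity * 0.5);
  }
}
```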
Performance Contexts
- Gallery installation — continuous 6-12 hour operation, spatial audio creates distinct narrative zones
- Live concert — performer shapes narrative via gestural control, 20-45 minutes, one complete mythic cycle
- Network performance — multiple instances contributing to a shared narrative space
Each context demands a different relationship between system autonomy and human control.[10] Murray identifies the tension between authorial control and procedural generation as the central design challenge of interactive narrative — in the gallery, the system is fully autonomous; in concert, a human performer guides narrative direction through gestural input; in network performance, multiple human and machine agents negotiate a shared mythic space. The system architecture must accommodate all three without compromising any.[11]
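A small sketch of how that relationship might be expressed as configuration; the field names and numeric values are assumptions, not the project's actual settings:

```typescript
// Sketch: the three contexts as points on an autonomy/control axis.
type PerformanceContext = 'gallery' | 'concert' | 'network';

interface ContextConfig {
  autonomy: number;        // 0 = fully human-driven, 1 = fully autonomous
  gesturalInput: boolean;  // does a performer shape narrative direction?
  sharedState: boolean;    // do multiple instances negotiate one narrative?
}

const contexts: Record<PerformanceContext, ContextConfig> = {
  gallery: { autonomy: 1.0, gesturalInput: false, sharedState: false },
  concert: { autonomy: 0.5, gesturalInput: true, sharedState: false },
  network: { autonomy: 0.7, gesturalInput: true, sharedState: true },
};
```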
References
- Eno, Brian. Generative Music. In Motion Magazine, 1996.
- Roads, Curtis. The Computer Music Tutorial. MIT Press, 1996.
- Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.
- Hermann, Thomas, Andy Hunt, and John G. Neuhoff. The Sonification Handbook. Logos Publishing House, 2011.
- Rowe, Robert. Interactive Music Systems: Machine Listening and Composing. MIT Press, 1993.
- Small, Christopher. Musicking: The Meanings of Performing and Listening. Wesleyan University Press, 1998.
- Fux, Johann Joseph. Gradus ad Parnassum. Vienna, 1725 (English trans., Norton, 1965).
- Lerdahl, Fred, and Ray Jackendoff. A Generative Theory of Tonal Music. MIT Press, 1983.
- Meadows, Donella H. Thinking in Systems: A Primer. Chelsea Green Publishing, 2008.
- Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. MIT Press, 1997.
- Galanter, Philip. What is Generative Art? Complexity Theory as a Context for Art Theory. International Conference on Generative Art, 2003.