Systems · Governance · Architecture

The Eight-Organ System

Governance as creative infrastructure


The Eight-Organ System Concept Sketch

Algorithmic visualization representing the underlying logic of The Eight-Organ System. Source: Dynamic Generation

The Insight

I started from a conviction: the way you organize creative work is creative work. Most technologists treat governance as overhead — project boards, CI/CD, dependency rules.[1] Conway's Law observes that system structure mirrors organizational structure; this project inverts that insight — the organizational structure was designed as an aesthetic artifact. I wanted to find out what happens when you give those decisions the same care as the art they coordinate. The result surprised me: the governance model became the most interesting artifact in the system.[2]

How It Works

Eight GitHub organizations, each representing a distinct organ with a Greek ontological suffix.[3] Alexander's "pattern language" proposes that environments are shaped by named, interrelating patterns — here, each organ is a named pattern governing a domain of creative practice. The naming is not decorative: Greek ontological terms encode the organ's epistemological function.[4]

| Organ | Domain | What It Proves |
| --- | --- | --- |
| I — Theoria | Theory & epistemology | Intellectual depth, recursive systems |
| II — Poiesis | Generative art | Creative systems design |
| III — Ergon | Commerce & products | Product-market thinking, revenue |
| IV — Taxis | Orchestration | Governance design, architecture |
| V — Logos | Public process | Transparency, building in public |
| VI — Koinonia | Community | Collaborative infrastructure |
| VII — Kerygma | Marketing | POSSE distribution, content strategy |
| VIII — Meta | Umbrella | Cross-system integration |

The organ model prevents three pathologies that commonly destroy creative systems: art corrupted by commercial pressure (when every project needs revenue justification), theory compromised by the need to scale (when ideas must be production-ready before they are explored), and community colonized by engagement metrics (when relationships are valued only for their conversion rates). Each organ has its own GitHub organization, governance rules, documentation standards, and definition of success. What counts as excellent work in ORGAN-I (intellectual depth, novel frameworks) is deliberately different from what counts in ORGAN-III (revenue potential, user experience).

The Registry

A machine-readable JSON registry serves as the single source of truth for all 116 repositories.[5] Fowler's principle of a "single authoritative data source" applies here at the organizational level — every repo entry carries status, dependencies, documentation tier, and promotion state. Automated scripts validate the registry against the live state of every repo, checking CI/CD, documentation, dependency integrity, and constitutional compliance.[6]

registry-v2.json (excerpt)
{
  "recursive-engine": {
    "organ": "I",
    "status": "ACTIVE",
    "tier": "flagship",
    "promotion_status": "GRADUATED",
    "implementation_status": "PRODUCTION",
    "dependencies": ["organvm-i-theoria/myth-engine"],
    "portfolio_relevance": "CRITICAL"
  }
}
Registry entry structure — each of 116 repos carries this metadata
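The kind of schema check the validation scripts perform can be sketched in a few lines. The field names follow the excerpt above; the required-field set, the `VALID_ORGANS` set, and the `validate_entry` function are illustrative assumptions, not the actual validation script.

```python
import json

# Assumed constraints modeled on the registry excerpt above.
REQUIRED_FIELDS = {"organ", "status", "tier", "promotion_status",
                   "implementation_status", "dependencies"}
VALID_ORGANS = {"I", "II", "III", "IV", "V", "VI", "VII", "VIII"}

def validate_entry(name, entry):
    """Return a list of human-readable problems for one registry entry."""
    errors = []
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        errors.append(f"{name}: missing fields {sorted(missing)}")
    elif entry["organ"] not in VALID_ORGANS:
        errors.append(f"{name}: invalid organ {entry['organ']!r}")
    return errors

# The excerpt above, inlined for illustration.
excerpt = json.loads("""{
  "recursive-engine": {
    "organ": "I",
    "status": "ACTIVE",
    "tier": "flagship",
    "promotion_status": "GRADUATED",
    "implementation_status": "PRODUCTION",
    "dependencies": ["organvm-i-theoria/myth-engine"],
    "portfolio_relevance": "CRITICAL"
  }
}""")

problems = [e for name, entry in excerpt.items()
            for e in validate_entry(name, entry)]
# the excerpt entry is complete, so no problems are reported
```

In the real system this check runs over all 116 entries of `registry-v2.json` rather than a single inlined excerpt.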

The State Machine

Work moves through a formal promotion pipeline.[7] The State pattern from the Gang of Four formalizes how an object alters its behavior when its internal state changes — here, repositories are the objects, and each promotion level unlocks new capabilities and expectations.

stateDiagram-v2
    [*] --> LOCAL
    LOCAL --> CANDIDATE : passes initial review
    CANDIDATE --> PUBLIC_PROCESS : documentation complete
    PUBLIC_PROCESS --> GRADUATED : validation suite passes
    GRADUATED --> ARCHIVED : lifecycle complete
    CANDIDATE --> LOCAL : fails review
    PUBLIC_PROCESS --> CANDIDATE : documentation gaps
Repository promotion state machine — each transition has documented criteria and automated validation

Each transition has documented criteria and automated validation. The state machine prevents premature claims and ensures quality at each stage — a project cannot call itself "art" until it produces interesting output.[8] The Veritas Sprint exemplified this principle in practice: the implementation_status value PRODUCTION was renamed to ACTIVE across 82 repositories, acknowledging that "production" overstated reality for documented-but-not-deployed repos. Honest state machines require honest state names.

No state can be skipped. A repository in LOCAL must pass through CANDIDATE and PUBLIC_PROCESS before reaching GRADUATED. Back-transitions are permitted — a CANDIDATE that fails review returns to LOCAL — but forward-skipping is blocked by the promote-repo workflow's pre-validation checks. This creates genuine narrative arcs for each project: every GRADUATED repository has a documented history of the gates it passed through.
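The no-skipping rule amounts to an explicit transition table: a move is legal only if it appears as an edge. The state names and criteria come from the diagram above; the `promote` function is a hypothetical stand-in for the promote-repo workflow's pre-validation, not its real implementation.

```python
# Promotion pipeline as an explicit transition table. Only listed edges
# are legal, so forward-skipping (e.g. LOCAL -> GRADUATED) is blocked.
ALLOWED_TRANSITIONS = {
    ("LOCAL", "CANDIDATE"): "passes initial review",
    ("CANDIDATE", "PUBLIC_PROCESS"): "documentation complete",
    ("PUBLIC_PROCESS", "GRADUATED"): "validation suite passes",
    ("GRADUATED", "ARCHIVED"): "lifecycle complete",
    ("CANDIDATE", "LOCAL"): "fails review",                  # back-transition
    ("PUBLIC_PROCESS", "CANDIDATE"): "documentation gaps",   # back-transition
}

def promote(current: str, target: str) -> str:
    """Return the criterion this transition must satisfy,
    or raise if the move is not a legal edge."""
    try:
        return ALLOWED_TRANSITIONS[(current, target)]
    except KeyError:
        raise ValueError(f"blocked: {current} -> {target}") from None

promote("LOCAL", "CANDIDATE")   # legal: requires "passes initial review"
```

Because illegality is the default and legality must be declared edge by edge, adding a new state cannot accidentally open a shortcut.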

The Dependency Rule

The most important constraint, and the one that generated the most creative tension: no back-edges. Theory feeds art feeds commerce — I→II→III — and that flow is enforced automatically.[9] Martin's Dependency Rule — that source code dependencies must point only inward — is applied here at the organizational level. This forced clean API boundaries and created genuine stakes: theory must commit to art without knowing if it will become commerce. 31 dependency edges, zero circular dependencies, validated on every change.[10]

┌──────────┐    ┌──────────┐    ┌──────────┐
│ I Theoria│───▶│II Poiesis│───▶│III Ergon │
│ Theory   │    │ Art      │    │ Commerce │
└──────────┘    └──────────┘    └──────────┘
      │                               │
      ▼                               ▼
┌──────────┐    ┌──────────┐    ┌──────────┐
│ IV Taxis │◀───│ V Logos  │◀───│VI Koino- │
│Governance│    │ Essays   │    │  nia     │
└──────────┘    └──────────┘    └──────────┘
      │                               ▲
      ▼                               │
┌──────────┐    ┌──────────┐          │
│VII Keryg-│───▶│  META    │──────────┘
│   ma     │    │ Umbrella │
└──────────┘    └──────────┘

No back-edges: theory → art → commerce only (I → II → III). Meta oversees all. 31 edges, 0 violations.
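The enforcement of the I → II → III flow can be sketched as two graph checks: no repo may depend on a downstream organ, and the dependency graph must be acyclic. The organ-rank table, `find_violations`, and `has_cycle` below are illustrative assumptions; the real `validate-deps.py` is not shown here.

```python
# Back-edge and cycle detection over the dependency graph (sketch).
# Theory feeds art feeds commerce, so a repo may depend on its own organ
# or an upstream one, never a downstream one.
ORGAN_RANK = {"I": 1, "II": 2, "III": 3}  # I = theory ... III = commerce

def find_violations(edges, organ_of):
    """edges: (dependent_repo, dependency_repo) pairs.
    Return edges where a repo depends on a downstream organ."""
    violations = []
    for repo, dep in edges:
        r, d = organ_of[repo], organ_of[dep]
        if r in ORGAN_RANK and d in ORGAN_RANK and ORGAN_RANK[d] > ORGAN_RANK[r]:
            violations.append((repo, dep))
    return violations

def has_cycle(edges):
    """Detect cycles with a three-color depth-first search."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:            # gray-to-gray edge closes a cycle
                return True
            if c == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(graph))
```

An art repo depending on a theory repo passes; the reverse edge is flagged, which is exactly the "theory must commit to art without knowing if it will become commerce" stake described above.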

Automation

Five GitHub Actions workflows govern the system autonomously, embodying the principle that governance should be executable rather than advisory.[11] Scott warns against "high modernist" schemes that impose legibility from above without local knowledge — these workflows encode both top-down rules and bottom-up validation.

  • validate-dependencies — checks the dependency graph on every registry change
  • monthly-organ-audit — full system health check with Markdown report + JSON metrics
  • promote-repo — handles state machine transitions with pre/post validation
  • publish-process — syncs ORGAN-V content to public channels
  • distribute-content — POSSE distribution to Mastodon, Discord, newsletter
.github/workflows/validate-dependencies.yml
on:
  push:
    paths: ['registry-v2.json']

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/validate-deps.py
        # Checks: cycle detection, transitive depth,
        # back-edge blocking across 31 edges
Dependency validation workflow — runs on every registry change, blocks merges on violations

Implementation Velocity

The system was designed, documented, and deployed in a concentrated sequence of sprints — each with defined success criteria and measurable outputs. Phase -1 established the eight GitHub organizations with Greek ontological suffixes. Phase 0 refined the corpus through cross-AI validation — Claude, Gemini, and GPT-4 each reviewed the planning documents independently, catching inconsistencies no single model would have found. The Bronze Sprint produced 7 flagship READMEs (3,000+ words each). The Silver Sprint generated 58 standard READMEs totaling ~202K words. The Gold Sprint completed community health files, workflow specifications, and the first meta-system essays. All 9/9 launch criteria were met on 2026-02-11. Post-launch, 20+ targeted sprints drove the system from launch-ready to portfolio-standard — including the Platinum Sprint (228 validation checks), the Autonomy Sprint (seed.yaml contracts across all repos), and the Veritas Sprint (an honesty audit that renamed overstated status fields and corrected future-dated essays).

Why This Is Art

The choice of eight organs (not seven, not ten) is a design decision. The Greek naming scheme is an aesthetic decision. The no-back-edges rule creates dramatic tension.[12] Csikszentmihalyi's framework situates creativity at the intersection of domain, field, and individual — this system is simultaneously the domain (the eight-organ architecture), the field (the governance rules that evaluate work), and the individual practice (the daily decisions about what to build). The promotion state machine creates narrative arcs for each project. This isn't an engineer who also makes art, or an artist who codes on the side: the governance model, the registry schema, and the dependency graph are the work.

Results

The system launched on 2026-02-11 with 9/9 launch criteria met. All 8 organs are operational.[13] Deming's quality management principles — measure, validate, improve — are embedded in the monthly audit cycle. 228 validation checks pass across the Platinum validation suite, covering CI workflow presence, CHANGELOG files, architectural decision records, badge rows, and implementation status fields for every repo. POSSE distribution is live (Mastodon + Discord). Jekyll/GitHub Pages publishes 36 meta-system essays (~132K words) with an Atom RSS feed.

116 Repositories · 8 Organizations · 739K Words · 31 Dependency Edges · 228 Validation Checks · 104 CI Workflows · 130+ Decision Records · 115 Contract Edges
Platinum validation suite: 228/228 checks passing across all 8 organs

104 repositories carry CI workflows. 130+ architectural decision records document key design choices. Every repo has a CHANGELOG and documentation tier assignment (8 flagship, 70 standard, 5 archive, 8 infrastructure).[14]

The seed.yaml contract schema (v1.0) establishes 115 declared produces/consumes/subscriptions edges across the system, enabling autonomous orchestration. Cross-org event routing operates through dispatch-receiver.yml workflows deployed to all 8 organization .github repos, connected by a shared CROSS_ORG_TOKEN secret. The metrics variable system — three scripts forming a calculate-store-propagate pipeline — ensures metric consistency across all documents without manual cross-referencing. The monthly organ audit workflow performs a full system health check and generates both Markdown reports and JSON metrics suitable for automated dashboards.
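The consistency implied by the contract edges can be sketched as a set check: every name a repo consumes should be produced somewhere (or name a known repo). The function and data shape below are assumptions modeled on the seed.yaml schema; the real orchestration tooling is not shown.

```python
# Consumes/produces consistency check over seed.yaml contracts (sketch).
def unresolved_consumers(contracts):
    """contracts: repo name -> parsed seed.yaml dict.
    Return (repo, name) pairs whose consumed name is neither produced
    by any contract nor the name of a known repo."""
    produced = {artifact
                for c in contracts.values()
                for artifact in c.get("produces", [])}
    known = produced | set(contracts)
    return [(repo, name)
            for repo, c in contracts.items()
            for name in c.get("consumes", [])
            if name not in known]

# Illustrative data, not the real registry.
contracts = {
    "myth-engine": {"produces": ["myth-model"], "consumes": []},
    "recursive-engine": {"produces": ["recursive-identity-model"],
                         "consumes": ["myth-model"]},
}
unresolved_consumers(contracts)   # every consumed name resolves: []
```

A dangling `consumes` entry would surface here before any cross-org event routing tried to dispatch against it.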

Validation Infrastructure

Five validation scripts enforce system integrity at every level, embodying the principle that governance claims must be machine-verifiable rather than self-reported.[15] Brooks observed that adding documentation does not reduce errors unless the documentation is itself validated — the same applies to governance.

| Validator | Scope | What It Catches |
| --- | --- | --- |
| Registry validation | Schema compliance for all 116 entries | Missing fields, invalid status values, ORGAN-III entries without revenue metadata |
| Dependency graph | All 31 declared edges | Cycles, back-edges, cross-organ violations of the I → II → III constraint |
| Documentation completeness | Every repository | Missing READMEs, word count below threshold, absent badge rows |
| Link integrity | 1,267+ cross-references | Broken URLs, stale references between corpus documents |
| Constitutional compliance | Articles I through VI plus amendments A through D | Violations of the system constitution and post-cross-validation amendments |
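The link-integrity validator's job can be sketched as a scan over relative links in Markdown files. The regex, file layout, and `broken_relative_links` name below are assumptions, not the real script, which also checks external URLs.

```python
import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+)\)")  # markdown [text](target)

def broken_relative_links(root):
    """Scan .md files under root; return (file, target) pairs whose
    relative target does not exist on disk. External URLs and
    same-page anchors are skipped."""
    root = Path(root)
    broken = []
    for md in root.rglob("*.md"):
        for target in LINK_RE.findall(md.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "#", "mailto:")):
                continue
            path = (md.parent / target.split("#")[0]).resolve()
            if not path.exists():
                broken.append((str(md), target))
    return broken
```

Running this over a corpus with 1,267+ cross-references turns stale links from a manual audit task into a mergeable/not-mergeable signal.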

Repository Standards

Every repository meets a defined quality floor that functions as a contract between the system and its audiences. READMEs require 2,000+ words minimum (flagships: 3,000+), written for dual audiences of grant reviewers and hiring managers. 104 repositories carry CI/CD workflows. 130+ architectural decision records document key design choices — technology selections, dependency declarations, naming decisions, and tradeoff analyses.[16] The Platinum validation suite adds checks for CI workflow presence, CHANGELOG files, ADRs, badge rows, and implementation status fields — 228 checks passing across the full suite.

seed.yaml (excerpt)
organ: I
tier: flagship
implementation_status: ACTIVE
promotion_status: GRADUATED
produces:
  - recursive-identity-model
  - generative-entity-framework
consumes:
  - organvm-i-theoria/myth-engine
subscriptions:
  - event: registry.updated
    action: validate-deps
  - event: monthly.audit
    action: report-health
seed.yaml contract schema — every repository declares its organ membership, produces/consumes edges, and event subscriptions

Tradeoffs & Lessons

  • Documentation volume vs. maintenance burden — ~739K words is powerful as proof-of-work, but each README is a liability if the underlying repo changes. The registry-as-source-of-truth pattern helps: stats flow from the registry, not from hand-editing.[17]
  • Strict dependency rules vs. convenience — The no-back-edges constraint is the system's best design decision and its most annoying one. It forced clean interfaces but also meant rewriting code that wanted to "just import" from a downstream organ. Two back-edge violations were detected and corrected during the Convergence Sprint, confirming that automated enforcement catches real mistakes, not just theoretical ones.
  • AI-conductor workflow — ~739K words were produced using a human-AI co-creation model: AI generates volume, human directs and refines. This is honest and documented. The risk is perception ("AI wrote your portfolio"). The mitigation is transparency: every essay in ORGAN-V explains the process.[18]
  • Naming scheme lock-in — Greek ontological naming (Theoria, Poiesis, Ergon) is distinctive but means renaming requires 40+ file edits. The naming is not decorative — each Greek term encodes the organ's epistemological function — but the coupling is real. This was worth the tradeoff for identity coherence.
  • Implementation velocity vs. soak time — The system launched in nine days through concentrated sprint execution. This velocity is itself evidence of the AI-conductor methodology's effectiveness, but it also means the system has less operational history than a conventionally-paced project. The 30-day soak test monitor and operational runbooks address this gap explicitly.

References

  1. Conway, Melvin E. How Do Committees Invent? Datamation, 1968. https://www.melconway.com/Home/Committees_Paper.html
  2. Schön, Donald A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, 1983.
  3. Alexander, Christopher. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, 1977.
  4. Meadows, Donella H. Thinking in Systems: A Primer. Chelsea Green Publishing, 2008.
  5. Fowler, Martin. Patterns of Enterprise Application Architecture. Addison-Wesley, 2002.
  6. Humble, Jez, and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deploy Automation. Addison-Wesley, 2010.
  7. Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
  8. McConnell, Steve. Code Complete: A Practical Handbook of Software Construction. Microsoft Press, 2004.
  9. Martin, Robert C. Clean Architecture: A Craftsman's Guide to Software Structure and Design. Prentice Hall, 2017.
  10. Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
  11. Scott, James C. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press, 1998.
  12. Csikszentmihalyi, Mihaly. Creativity: Flow and the Psychology of Discovery and Invention. Harper Perennial, 1996.
  13. Deming, W. Edwards. Out of the Crisis. MIT Press, 1986.
  14. Nygard, Michael T. Release It! Design and Deploy Production-Ready Software. Pragmatic Bookshelf, 2018.
  15. Brooks, Frederick P. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, 1975.
  16. Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
  17. Humble, Jez, and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deploy Automation. Addison-Wesley, 2010.
  18. Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.