Agentic Titan
Directed architecture for multi-agent orchestration
Agentic Titan Concept Sketch
Problem
Multi-agent AI systems face a scaling problem: frameworks that work for two agents on a laptop collapse when you need 100+ agents with safety constraints, topology awareness, and production-grade observability.[1] Russell and Norvig's comprehensive treatment of intelligent agents defines the theoretical foundation, but most existing frameworks implement only the simplest case: single-agent loops with tool access. They are either too opinionated (locked to one LLM provider, one communication pattern) or too thin (no safety layer, no topology abstractions, no way to reason about agent interactions at scale).[2]
The eight-organ system needed an orchestration framework that could handle diverse workloads — from simple pipeline chains to complex mesh networks — while maintaining safety invariants and supporting any LLM backend. Wooldridge's formalization of multi-agent coordination protocols identified the core challenge: agents must negotiate communication, delegation, and conflict resolution through shared protocols rather than ad hoc messaging.[3] Minsky's "society of mind" thesis — that intelligence emerges from the interaction of many simple agents — provided the philosophical grounding. Nothing on the market fit all three requirements simultaneously.
Approach
Build a polymorphic, model-agnostic framework that separates topology (how agents communicate) from archetype (what agents do) from safety (what agents can't do).[4] This three-axis design draws directly from the Gang of Four's principle of separating interface from implementation — here applied at the system level rather than the class level. You can independently swap topologies, archetypes, and safety policies without rewriting orchestration logic.[5] Martin's Dependency Rule — that source code dependencies must point inward toward policies, not outward toward mechanisms — governs the layering: safety wraps archetype wraps topology, and no inner layer knows about the outer ones.
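The three-axis layering can be sketched in plain Python. Everything below (the Topology, Archetype, and SafetyPolicy classes and their methods) is an illustrative stand-in, not the framework's actual API; it only shows safety wrapping archetype wrapping topology, with each inner layer ignorant of the outer ones:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Topology:
    """How agents communicate: a routing predicate over agent ids."""
    route: Callable[[str, str], bool]   # (sender, receiver) -> allowed?

@dataclass
class Archetype:
    """What an agent does. Behavior is written against Topology only;
    it knows nothing about the safety layer (the Dependency Rule)."""
    name: str

    def act(self, topology: Topology, sender: str, receiver: str, msg: str) -> str:
        if not topology.route(sender, receiver):
            raise RuntimeError(f"route {sender}->{receiver} not allowed")
        return f"{self.name}: {msg}"

@dataclass
class SafetyPolicy:
    """What agents can't do: wraps any archetype without modifying it."""
    budget: int
    spent: int = 0

    def guard(self, archetype: Archetype, topology: Topology,
              sender: str, receiver: str, msg: str) -> str:
        if self.spent >= self.budget:
            raise RuntimeError("BUDGET_EXHAUSTED")
        self.spent += 1
        return archetype.act(topology, sender, receiver, msg)

# The axes swap independently: change the topology, keep archetype and policy.
pipeline = Topology(route=lambda s, r: (s, r) in {("a", "b"), ("b", "c")})
researcher = Archetype(name="researcher")
policy = SafetyPolicy(budget=2)
print(policy.guard(researcher, pipeline, "a", "b", "find sources"))
# researcher: find sources
```

Swapping `pipeline` for a mesh predicate (or `policy` for a stricter budget) changes one axis without touching the other two, which is the point of the design.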
Architecture
┌─────────────────────────────────────────────────┐
│ SAFETY & GOVERNANCE │
│ HITL Gates | RBAC | Budget | Audit │
├─────────────────────────────────────────────────┤
│ HIVE MIND LAYER │
│ Redis State | ChromaDB Vectors | Events │
├──────────┬──────────────┬───────────────────────┤
│ Topology │ Archetypes │ LLM Adapters │
│ Engine │ (22 types) │ │
├──────────┤ ├───────────────────────┤
│Pipeline │ Researcher │ Anthropic (Claude) │
│Fan-out │ Synthesizer │ OpenAI (GPT-4) │
│Fan-in │ Critic │ Ollama (local) │
│Mesh │ Orchestrator │ Groq (fast cloud) │
│Hierarchy │ Specialist │ │
│Ring │ Guardian │ Routing Strategies: │
│Star │ JuryAgent │ Cost-optimized │
│Tree │ CellAgent │ Quality-first │
│Rhizomatic│ ... +14 more │ Cognitive-aware │
│Fission-F.│ │ Speed-first │
│Stigmergic│ │ Round-robin │
├──────────┴──────────────┴───────────────────────┤
│ RUNTIME FABRIC │
│ Local Python | Docker | OpenFaaS | Firecracker │
└─────────────────────────────────────────────────┘

Nine Topology Patterns
Each topology encodes a different communication contract between agents. The topology engine enforces routing rules at the framework level, and — critically — supports runtime switching between topologies as task demands shift.[7] Tanenbaum and van Steen's classification of distributed system architectures maps directly onto the classical patterns. Three advanced topologies extend the vocabulary into biological and philosophical territory.[11]
| Topology | Pattern | Use Case |
|---|---|---|
| Pipeline | A → B → C | Sequential processing chains |
| Fan-out | A → [B, C, D] | Parallel task distribution |
| Fan-in | [B, C, D] → A | Result aggregation |
| Mesh | All ↔ All | Collaborative problem-solving |
| Hierarchical | Manager → Workers | Delegated orchestration |
| Ring | A → B → C → A | Iterative refinement and voting |
| Star | Hub ↔ Spokes | Centralized coordination |
| Tree | Root → Branches → Leaves | Divide-and-conquer |
| Custom | User-defined graph | Domain-specific patterns |
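The fan-out and fan-in contracts from the table can be sketched with plain asyncio in place of the framework's topology engine; the `worker` coroutine below is a hypothetical stand-in for real agent work:

```python
import asyncio

async def worker(name: str, task: str) -> str:
    # Stand-in for real agent work (an LLM call, a tool invocation, ...)
    await asyncio.sleep(0)
    return f"{name}:{task}"

async def fan_out_fan_in(task: str) -> str:
    # Fan-out: A -> [B, C, D] -- distribute the task in parallel
    results = await asyncio.gather(
        worker("B", task), worker("C", task), worker("D", task)
    )
    # Fan-in: [B, C, D] -> A -- aggregate the results in order
    return " | ".join(results)

print(asyncio.run(fan_out_fan_in("summarize")))
# B:summarize | C:summarize | D:summarize
```

`asyncio.gather` preserves argument order, which is what makes the fan-in aggregation deterministic here.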
The three advanced topologies extend multi-agent coordination beyond the classical distributed systems literature into biological and philosophical frameworks:[13]

- Rhizomatic — lateral, non-hierarchical connections, inspired by Deleuze and Guattari's philosophical model.
- Fission-fusion — dynamic clustering in which swarms split into independent exploration clusters and reconverge for collective decision-making, modeled on crow roost dynamics.
- Stigmergic — environment-mediated coordination in which agents communicate through shared traces rather than direct messaging, modeled on insect pheromone systems.

Brooks's subsumption architecture demonstrated that complex behavior can emerge from layered reactive systems without central planning — the stigmergic topology implements this principle at the multi-agent level.
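Stigmergic coordination can be sketched in a few lines: agents deposit traces into a shared environment and follow the strongest trail, with no direct messaging. The decay and deposit constants below are illustrative, not framework defaults:

```python
import random

random.seed(0)
trails: dict[str, float] = {"path_a": 0.0, "path_b": 0.0}

def step() -> str:
    # Exploit the strongest trail most of the time, otherwise explore.
    best = max(trails, key=trails.get)
    choice = best if random.random() < 0.8 else random.choice(list(trails))
    trails[choice] += 1.0            # deposit a trace ("pheromone")
    for k in trails:                 # traces decay over time
        trails[k] *= 0.95
    return choice

choices = [step() for _ in range(50)]
# The swarm converges on one dominant path without a single direct message.
print(max(trails, key=trails.get))
```

Decay is what keeps the system adaptive: if the dominant path stops being reinforced, its trace fades and exploration takes over again.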
22 Agent Archetypes
Rather than requiring developers to implement agent behavior from scratch, Agentic Titan ships 22 pre-built agent archetypes organized into four categories that span operational, governance, biological, and philosophical models of coordination.[2] Wooldridge's taxonomy of agent types — reactive, deliberative, hybrid — maps onto the archetype categories, with core archetypes being primarily deliberative, biological archetypes primarily reactive, and governance archetypes operating as hybrid systems.
| Category | Count | Archetypes | Model |
|---|---|---|---|
| Core | 10 | Orchestrator, Researcher, Coder, Reviewer, Paper2Code, CFO, DevOps, SecurityAnalyst, DataEngineer, ProductManager | Software development and knowledge work |
| Governance | 5 | JuryAgent, ExecutiveAgent, LegislativeAgent, JudicialAgent, BureaucracyAgent | Institutional decision-making |
| Biological | 2 | EusocialColonyAgent, CellAgent | Living-systems coordination |
| Philosophical | 5 | AssemblageAgent, ActorNetworkAgent, SwarmIntelligenceAgent, DAOAgent, Custom | Theoretical coordination frameworks |
Every archetype extends BaseAgent, which provides lifecycle management (initialize, work, shutdown), hive mind integration, topology-aware communication, resilience patterns (circuit breaker, retry with backoff), PostgreSQL audit logging, and ten explicit stopping conditions: success, failure, max turns, timeout, budget exhaustion, user cancellation, checkpoint required, stuck detection, error threshold, and external kill switch. Archetypes are composable: a workflow can deploy a JuryAgent for deliberation, route its verdict to an ExecutiveAgent for implementation, and have a JudicialAgent review compliance — modeling a complete institutional process as an executable agent pipeline.
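A stopping-condition loop of this kind can be sketched as follows; the `StopReason` enum, thresholds, and per-turn token cost are illustrative assumptions, not BaseAgent's actual names:

```python
import time
from enum import Enum, auto

class StopReason(Enum):
    SUCCESS = auto()
    MAX_TURNS = auto()
    TIMEOUT = auto()
    BUDGET_EXHAUSTED = auto()

def run_agent(max_turns: int, timeout_s: float, token_budget: int) -> StopReason:
    start, tokens = time.monotonic(), 0
    for turn in range(max_turns):
        if time.monotonic() - start > timeout_s:
            return StopReason.TIMEOUT
        tokens += 300                  # stand-in for per-turn LLM token usage
        if tokens > token_budget:
            return StopReason.BUDGET_EXHAUSTED
        if turn == 2:                  # stand-in for the task completing
            return StopReason.SUCCESS
    return StopReason.MAX_TURNS

print(run_agent(max_turns=10, timeout_s=5.0, token_budget=10_000))
# StopReason.SUCCESS
```

Making every exit path an explicit enum member is what lets the framework log why an agent stopped rather than just that it did.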
Agents are declared in a Kubernetes-style YAML spec:

```yaml
apiVersion: titan/v1
kind: Agent
metadata:
  name: researcher
  labels:
    tier: cognitive
spec:
  capabilities:
    - web_search
    - summarization
  personality:
    traits: [thorough, curious, skeptical]
    communication_style: academic
  llm:
    preferred: claude-sonnet
    fallback: [gpt-4o, llama3.2]
  tools:
    - name: web_search
      protocol: native
  memory:
    short_term: 10
    long_term: hive_mind
```

Production Safety Layer
Agentic Titan implements safety not as guardrails added to a finished system but as structural constraints woven into the execution path. Every agent action flows through the safety chain before it executes.[8] Nygard's stability patterns — circuit breakers, bulkheads, timeouts — are implemented as first-class safety primitives rather than afterthoughts.
| Mechanism | Scope | Behavior |
|---|---|---|
| HITL Approval Gates | Per-action | Risk-classified actions routed to human approval via WebSocket or Redis; low-risk auto-approved, high-risk blocks execution |
| Circuit Breakers | Per-agent | Prevents cascade failures in mesh and ring topologies; trips after configurable failure threshold |
| Rate Limiting | Per-agent, per-session | Token-bucket rate limiting on API calls and agent actions |
| Token Budgets | Per-agent, per-session, per-workflow | Hard spending limits; triggers BUDGET_EXHAUSTED stopping condition on breach |
| RBAC | Per-role | Declarative role assignments restrict capabilities; enforced at framework level |
| Dead-Letter Queues | Per-topology | Failed messages captured for post-hoc analysis rather than silently dropped |
| Kill Switches | Global, per-agent | Immediate agent termination; non-negotiable shutdown path |
| Content Filtering | Per-output | Catches unsafe outputs before they reach users or downstream systems |
| Audit Logging | Global | PostgreSQL with Alembic migrations; records every decision, action, and approval/denial |
The mandatory safety overhead is 5-10% — intentional. Unsafe agent systems are worse than slow ones. Every safety violation is logged with full context: which agent, which topology, which policy was breached, and what the agent attempted to do. The combination ensures that production deployments are inspectable, controllable, and auditable — requirements that separate production AI infrastructure from research demonstrations.[13] Brooks observed that conceptual integrity is the most important consideration in system design — the safety layer preserves conceptual integrity by making security non-optional rather than aspirational.
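The circuit-breaker primitive can be sketched in a few lines, in the spirit of Nygard's pattern; the class below is an illustration with made-up thresholds, not the framework's implementation:

```python
from typing import Callable

class CircuitBreaker:
    """Trips after N consecutive failures; open breakers fail fast."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn: Callable[[], str]) -> str:
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True    # trip: stop hammering a failing agent
            raise
        self.failures = 0           # any success resets the count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky() -> str:
    raise ConnectionError("agent unreachable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.open)   # True: further calls fail fast, never reaching the agent
```

Failing fast is what stops one dead agent from dragging down a mesh: callers get an immediate error instead of queueing behind timeouts.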
Model Agnosticism
The framework treats LLM providers as interchangeable execution substrates with measurable characteristics, not as frameworks to build around.[4] The Adapter pattern from the Gang of Four formalizes the core tradeoff: indirection cost versus decoupling benefit. A uniform LLMAdapter interface is implemented by four provider-specific adapters (Anthropic, OpenAI, Ollama, Groq), each exposing identical methods for text generation, tool use, and streaming.
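The adapter side can be sketched as one uniform interface with provider-specific implementations behind it; class and method names here are hypothetical, and the stub bodies stand in for real API calls:

```python
from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    """Uniform interface: orchestration code sees only this."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class AnthropicAdapter(LLMAdapter):
    def generate(self, prompt: str) -> str:
        return f"[claude] {prompt}"   # a real adapter would call the Anthropic API

class OllamaAdapter(LLMAdapter):
    def generate(self, prompt: str) -> str:
        return f"[ollama] {prompt}"   # a real adapter would call a local model

def run(adapter: LLMAdapter, prompt: str) -> str:
    # Depends only on the interface, never on a concrete provider.
    return adapter.generate(prompt)

print(run(OllamaAdapter(), "plan the next step"))
# [ollama] plan the next step
```

Because `run` accepts any `LLMAdapter`, swapping providers is a constructor change, not an orchestration change.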
The LLM Router selects providers based on five configurable strategies: cost-optimized (prefers local Ollama, then Groq, then premium cloud), quality-first (highest-quality available model), speed-first (routes to Groq), round-robin (load distribution), and cognitive-task-aware (routes based on eight cognitive task types — structured reasoning, creative synthesis, mathematical analysis, cross-domain connection, meta-analysis, pattern recognition, code generation, and research synthesis — using empirical cognitive-strength profiles per model). Switching from cloud development to air-gapped deployment requires changing environment variables, not rewriting agent definitions.
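Three of the routing strategies can be sketched over a table of provider characteristics; the cost, quality, and latency numbers below are invented for illustration, not the framework's empirical profiles:

```python
# Hypothetical provider profiles (illustrative values only).
providers = [
    {"name": "ollama",    "cost": 0.0, "quality": 0.60, "latency_ms": 900},
    {"name": "groq",      "cost": 0.1, "quality": 0.70, "latency_ms": 120},
    {"name": "anthropic", "cost": 1.0, "quality": 0.95, "latency_ms": 800},
    {"name": "openai",    "cost": 0.9, "quality": 0.90, "latency_ms": 700},
]

def route(strategy: str) -> str:
    # Each strategy is just a different objective over the same profiles.
    if strategy == "cost-optimized":
        return min(providers, key=lambda p: p["cost"])["name"]
    if strategy == "quality-first":
        return max(providers, key=lambda p: p["quality"])["name"]
    if strategy == "speed-first":
        return min(providers, key=lambda p: p["latency_ms"])["name"]
    raise ValueError(f"unknown strategy: {strategy}")

print(route("cost-optimized"), route("quality-first"), route("speed-first"))
# ollama anthropic groq
```

The cognitive-task-aware strategy would extend the same idea with per-task quality scores per model rather than a single scalar.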
Testing Architecture
The test suite spans 1,312 tests across 108 test files organized into 18 categories, covering every layer from unit-level archetype behavior to end-to-end topology switching under chaos conditions.[6] Fowler's discipline of continuous refactoring kept the test suite stable across 18 development phases while the architecture evolved underneath.
| Category | Files | What It Validates |
|---|---|---|
| Agent Archetypes | 9 | All 22 archetypes — lifecycle, capabilities, governance patterns |
| Hive Mind & Topology | 9 | Criticality detection, fission-fusion dynamics, information centers |
| Workflows | 9 | DAG execution, conversational flows, temporal patterns, narrative synthesis |
| Integration | 8 | Cross-component interaction, topology transitions under load |
| Batch Processing | 7 | Celery integration, worker lifecycle, stall detection, recovery |
| Adversarial | 5 | Prompt injection resistance, boundary testing, malformed input handling |
| Chaos | 4 | Fission recovery, resilience under random failure injection |
| E2E | 4 | Full workflow execution: swarm, topology switching, budget enforcement |
| Performance | 3 | Load testing, throughput benchmarking at scale |
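A chaos-style test of the kind in the table can be sketched as deterministic fault injection: an agent that fails a fixed number of times, wrapped in retry logic. The names and retry budget are illustrative, not taken from the actual suite:

```python
class FlakyAgent:
    """Injected fault: fail the first n calls, then recover."""

    def __init__(self, failures_before_success: int):
        self.remaining = failures_before_success

    def __call__(self) -> str:
        if self.remaining > 0:
            self.remaining -= 1
            raise ConnectionError("injected failure")
        return "ok"

def call_with_retry(fn, retries: int = 5) -> str:
    for _ in range(retries):
        try:
            return fn()
        except ConnectionError:
            pass                     # a real client would back off here
    raise RuntimeError("retries exhausted")

print(call_with_retry(FlakyAgent(3)))
# ok
```

Deterministic injection (fail exactly n times) makes the recovery assertion reproducible, unlike purely random failure rates.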
The repository completed a seven-tranche quality program (Omega Closure): environment reproducibility, baseline snapshot, full-lint blocking (1,270 initial errors reduced to zero), full-typecheck blocking (28 mypy errors reduced to zero), runtime test completion, security and deploy validation, and documentation closure.[9] McConnell's emphasis on construction quality — naming conventions, defensive programming, self-documenting code — shaped the implementation discipline across all 18 phases.
Comparison with Existing Frameworks
Three frameworks dominate the multi-agent landscape: LangGraph (graph-based orchestration within the LangChain ecosystem), CrewAI (role-based agent teams), and AutoGen (Microsoft's conversational agent framework). Each solves part of the problem.[12]
| Capability | LangGraph | CrewAI | AutoGen | Agentic Titan |
|---|---|---|---|---|
| Topology patterns | DAG only | Pipeline, hierarchy | Conversation loops | 9 patterns + runtime switching |
| Model agnosticism | LangChain adapters | Partial | OpenAI-centric | 4 providers + cognitive routing |
| Safety layer | Basic | Minimal | Guardrails add-on | HITL, RBAC, budgets, circuit breakers, audit log |
| Runtime isolation | None | None | Docker optional | Docker, OpenFaaS, Firecracker microVMs |
| Agent archetypes | None (build your own) | Role strings | AssistantAgent/UserProxy | 22 typed archetypes across 4 categories |
| Declarative spec | Python code | Python code | Python code | YAML DSL (Kubernetes-style) |
Agentic Titan's differentiator is the separation of concerns: topology, archetype, and safety are independently composable. LangGraph couples topology to its graph DSL. CrewAI couples archetype to role strings without behavioral profiles. AutoGen couples conversation patterns to its agent class hierarchy. None support runtime topology switching, and none ship with production safety infrastructure as a first-class layer.
Implementation
Built in Python 3.11+ across 18 development phases with FastAPI for the dashboard API, Celery for batch processing, Redis for state and events, ChromaDB for vector memory, and PostgreSQL for audit logging. The framework provides a unified interface where you define agents by archetype, connect them via topology, wrap them in safety policies, and deploy.[5] Agent archetypes carry behavioral profiles (communication style, decision patterns, error handling strategies) that combine with topology constraints to produce emergent system behavior.
```python
from agentic_titan import Agent, Topology, SafetyPolicy
from hive.topology import TopologyEngine, TopologyType

topology = TopologyEngine()  # assumed construction; manages the live agent graph

# Phase 1: Research (swarm — all-to-all exploration)
await topology.set_topology(TopologyType.SWARM)

# Phase 2: Synthesis (pipeline — sequential processing)
await topology.set_topology(TopologyType.PIPELINE)

# Phase 3: Review (ring — token-passing, each reviews prior)
await topology.set_topology(TopologyType.RING)

# Agents unchanged — only the communication contract shifts
```

Results
The framework is consumed downstream by ORGAN-III products: the Hunter Protocol in in-midst-my-life uses hierarchical topology for resume analysis, and the verification pipeline in The Actual News uses fan-out topology for parallel source checking. The governance archetypes inform how ORGAN-IV manages the broader eight-organ system. The base agent lifecycle was originally extracted from metasystem-core (ORGAN-II) and generalized — creative performance contexts stress-tested the framework with sub-second topology switching demands and stigmergic coordination requirements.
Tradeoffs & Lessons
- Topology abstraction vs. performance — The topology engine adds a routing layer that introduces latency (~2ms per hop). For latency-critical pipelines, direct agent-to-agent calls are faster. The tradeoff is worth it for complex topologies where reasoning about message flow matters more than raw speed.[13] Brooks observed that conceptual integrity is the most important consideration in system design — the topology abstraction preserves conceptual integrity at the cost of raw throughput.
- 22 archetypes — too many? — Started with 6, grew to 22 as real use cases demanded finer behavioral distinctions. The governance and biological archetypes emerged from downstream organ requirements, not upfront design. Some could be consolidated, but clarity over minimalism was the deliberate choice.
- Model-agnostic tax — Supporting every LLM backend means maintaining adapter code for each. The unified interface hides provider differences but the adapters need updating when APIs change. Worth it for avoiding vendor lock-in.[4]
- Safety overhead — Mandatory safety layer adds 5-10% overhead. This is intentional — unsafe agent systems are worse than slow ones.[8]
- Firecracker isolation — MicroVM-level isolation (each agent in its own lightweight VM with VSOCK communication and sub-second boot) provides hardware-level security but adds operational complexity. Reserved for production deployments where agents execute untrusted code.
References
1. Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson, 2020.
2. Wooldridge, Michael. An Introduction to MultiAgent Systems. Wiley, 2009.
3. Minsky, Marvin. The Society of Mind. Simon & Schuster, 1986.
4. Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
5. Martin, Robert C. Clean Architecture: A Craftsman's Guide to Software Structure and Design. Prentice Hall, 2017.
6. Fowler, Martin. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 2018.
7. Tanenbaum, Andrew S., and Maarten van Steen. Distributed Systems: Principles and Paradigms. Pearson, 2007.
8. Nygard, Michael T. Release It! Design and Deploy Production-Ready Software. Pragmatic Bookshelf, 2018.
9. McConnell, Steve. Code Complete. Microsoft Press, 2004.
10. Hewitt, Carl. Actor Model of Computation. arXiv, 2010.
11. Shoham, Yoav, and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2008.
12. Brooks, Rodney A. Intelligence Without Representation. Artificial Intelligence, 47(1-3), 1991.
13. Brooks, Frederick P. The Mythical Man-Month. Addison-Wesley, 1975.