# Multi-Agent Systems
Digitorn supports multi-agent applications where a coordinator agent spawns isolated sub-agents that run in true parallelism. Each sub-agent has its own context window, memory, tools, and optionally its own LLM provider.
## Architecture

```
Coordinator (context window A)
 |
 |-- spawn_agent(specialist="analyst", task="Analyze auth.py")  --> Agent B (context B)
 |-- spawn_agent(specialist="analyst", task="Analyze db.py")    --> Agent C (context C)
 |-- spawn_agent(task="Count all classes", system_prompt="...") --> Agent D (context D)
 |
 |   (coordinator continues working while agents run)
 |
 |-- [NOTIFICATION] Agent B completed (12s, 8 turns)
 |-- [NOTIFICATION] Agent C completed (18s, 12 turns)
 |-- [NOTIFICATION] Agent D completed (5s, 3 turns)
 |
 '-- Coordinator aggregates results and produces final report
```
Key properties:
- True parallelism — sub-agents run as concurrent asyncio tasks
- Total isolation — each agent has its own context window, messages, and module instances
- No shared memory — agents can't see each other's state during execution
- Structured results — each agent returns findings, facts, errors, and todo state
- Auto-notification — the coordinator is notified when agents complete or fail
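The parallelism and isolation properties above can be sketched with plain asyncio. This is an illustrative model, not Digitorn's actual implementation: each sub-agent runs as its own task with a private state dict, and the coordinator collects structured results when tasks finish.

```python
import asyncio

async def run_agent(agent_id: str, task: str) -> dict:
    # Private to this agent -- no shared memory with the coordinator or siblings
    state = {"facts": [], "errors": []}
    state["facts"].append(f"analyzed: {task}")
    await asyncio.sleep(0)  # stand-in for the agent's LLM turns
    return {"agent_id": agent_id, "status": "completed", "memory": state}

async def coordinator() -> list[dict]:
    # Sub-agents run concurrently as asyncio tasks
    tasks = [
        asyncio.create_task(run_agent("agent_001", "auth.py")),
        asyncio.create_task(run_agent("agent_002", "db.py")),
    ]
    # The coordinator could keep doing other work here while agents run
    return [await t for t in tasks]

results = asyncio.run(coordinator())
```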
## YAML Configuration

### Minimal Example

```yaml
app:
  app_id: code-review
  mode: conversation

agents:
  - id: coordinator
    role: coordinator
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    system_prompt: "You are a code review coordinator."
    pool:
      max_workers: 5

  - id: security_analyst
    role: specialist
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    specialty: "Security analysis -- finds vulnerabilities in code"
    system_prompt: "You are a security expert. Analyze code for vulnerabilities."
    modules: [filesystem, memory]

modules:
  filesystem:
    config:
      allowed_read: ["./"]
  memory:
    config:
      working_memory: true
      todo_list: true
```
### Full Configuration

```yaml
agents:
  - id: coordinator
    role: coordinator
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    system_prompt: "You coordinate the analysis."
    pool:
      max_workers: 5     # max concurrent agents (default: 3)
      progress: false    # relay sub-agent progress to coordinator (default: false)
      auto_retry: 0      # auto-retry failed agents (default: 0 = disabled)

  - id: code_analyst
    role: specialist
    brain:               # can use a DIFFERENT model than coordinator
      provider: openrouter
      model: qwen/qwen3-235b-a22b
      config:
        api_key: "{{env.OPENROUTER_API_KEY}}"
        base_url: "https://openrouter.ai/api/v1"
    specialty: "Code analysis -- architecture, patterns, quality"
    skills: "./skills/code_analysis.md"  # methodology file injected into system prompt
    system_prompt: "You are a code analyst."
    modules: [filesystem, memory]        # only these modules are available

  - id: security_analyst
    role: specialist
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    specialty: "Security analysis -- vulnerabilities, credentials, injection risks"
    skills: "./skills/security_audit.md"
    system_prompt: "You are a security expert."
    modules: [filesystem, memory]
```
### Agent Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | Unique agent identifier |
| `role` | enum | Yes | `coordinator` or `specialist` |
| `brain` | object | Yes | LLM provider configuration |
| `system_prompt` | string | No | Agent instructions |
| `specialty` | string | No | One-line description of expertise (shown to coordinator) |
| `skills` | string | No | Path to a `.md` file with detailed methodology |
| `modules` | list | No | Module IDs the specialist can access (default: all) |
| `pool` | object | No | Coordinator-only: pool configuration |
### Pool Configuration (coordinator only)

| Field | Type | Default | Description |
|---|---|---|---|
| `pool.max_workers` | int | 3 | Maximum concurrent sub-agents |
| `pool.progress` | bool | false | Relay sub-agent progress events to coordinator |
| `pool.auto_retry` | int | 0 | Auto-retry failed/timed-out agents (0 = disabled) |
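One plausible way to enforce `max_workers` is an asyncio semaphore gating how many agent tasks run at once. This is an assumed mechanism for illustration, not Digitorn's actual pool code:

```python
import asyncio

MAX_WORKERS = 3  # mirrors pool.max_workers

async def gated_agent(sem: asyncio.Semaphore, counts: dict) -> None:
    async with sem:  # block until a worker slot is free
        counts["running"] += 1
        counts["peak"] = max(counts["peak"], counts["running"])
        await asyncio.sleep(0.01)  # simulated agent work
        counts["running"] -= 1

async def main() -> dict:
    sem = asyncio.Semaphore(MAX_WORKERS)
    counts = {"running": 0, "peak": 0}
    # Ten agents requested, but at most MAX_WORKERS run concurrently
    await asyncio.gather(*(gated_agent(sem, counts) for _ in range(10)))
    return counts

counts = asyncio.run(main())
```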
## Skills Files

A skills file is a Markdown document containing methodology, checklists, or domain knowledge. It is injected into the specialist's system prompt automatically.

Example `./skills/security_audit.md`:

```markdown
# Security Audit Methodology

When analyzing a Python file for security:
1. Check for hardcoded credentials (API keys, passwords, tokens)
2. Check for SQL injection (string concatenation in queries)
3. Check for command injection (subprocess with shell=True)
4. Check for path traversal (user input in file paths)
5. Check for insecure deserialization (pickle.loads, yaml.load)
6. Check environment variable handling (secrets in env)
7. Check error handling (stack traces exposed to users)
8. Check input validation and sanitization

Rate each finding: critical / high / medium / low / info
```
## Actions

The `agent_spawn` module provides seven actions, available directly to the coordinator (no discovery needed).
### spawn_agent

Spawn an isolated sub-agent. Returns immediately with an `agent_id`.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `task` | string | Yes | — | Task description for the sub-agent |
| `specialist` | string | No | null | ID of a specialist to use |
| `system_prompt` | string | No | null | Custom prompt for ad-hoc agents |
| `max_turns` | int | No | 30 | Maximum agent turns |
| `timeout` | float | No | 300 | Timeout in seconds |

Two types of agents:

```
# Specialist -- uses pre-configured brain, skills, modules
spawn_agent(specialist="security_analyst", task="Analyze oauth.py for vulnerabilities")

# Ad-hoc -- uses coordinator's brain, custom prompt
spawn_agent(task="Count all Python classes", system_prompt="Be concise. List class names only.")
```
### agent_status

Check agent progress.

```
agent_status(agent_id="agent_abc123")
--> {status: "running", elapsed_seconds: 12.5, task: "Analyze oauth.py"}
```
### agent_result

Get the structured result of a completed agent.

```
agent_result(agent_id="agent_abc123")
--> {
  status: "completed",
  content: "Found 2 vulnerabilities...",
  turns_used: 8,
  duration_seconds: 15.2,
  memory: {
    goal: "Analyze oauth.py",
    facts: ["Uses PKCE", "No hardcoded tokens"],
    todos: [{id: "t1", status: "done", content: "Read file"}],
    notes: [],
    entities: ["oauth.py"]
  },
  errors: []
}
```
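On the coordinator side, a result payload with this shape can be consumed as an ordinary dict. The sketch below mirrors the example above; the field names follow that example, and the `summarize` helper is hypothetical:

```python
# Example payload, shaped like the agent_result output above
result = {
    "status": "completed",
    "content": "Found 2 vulnerabilities...",
    "turns_used": 8,
    "memory": {
        "facts": ["Uses PKCE", "No hardcoded tokens"],
        "todos": [{"id": "t1", "status": "done", "content": "Read file"}],
    },
    "errors": [],
}

def summarize(result: dict) -> str:
    """Decide whether a result is usable and condense its memory state."""
    if result["status"] != "completed" or result["errors"]:
        return "needs reassignment"
    facts = result["memory"]["facts"]
    open_todos = [t for t in result["memory"]["todos"] if t["status"] != "done"]
    return f"{len(facts)} facts, {len(open_todos)} open todos"

summary = summarize(result)
```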
### agent_list

List all spawned agents with status.

```
agent_list()
--> {agents: [{agent_id: "agent_abc", status: "completed"}, ...], running: 2, total: 5}
```
### agent_wait

Block until an agent finishes. Use when the coordinator has nothing else to do.

```
agent_wait(agent_id="agent_abc123", timeout=120)
--> (returns the structured result when done)
```
### agent_cancel

Cancel a running agent.

```
agent_cancel(agent_id="agent_abc123")
--> {status: "cancelled"}
```
### reassign_agent

Reassign a failed or cancelled agent with a new task.

```
reassign_agent(agent_id="agent_abc123", new_task="Analyze oauth.py, read in sections of 200 lines")
--> {agent_id: "agent_def456", status: "running"}
```
## Execution Patterns

### Pattern 1: Full Parallel

Spawn all agents, continue working, collect results as notifications arrive.

```
Coordinator:
  spawn_agent(specialist="analyst", task="Analyze file1.py") --> agent_001
  spawn_agent(specialist="analyst", task="Analyze file2.py") --> agent_002
  spawn_agent(specialist="analyst", task="Analyze file3.py") --> agent_003
  (continues working on other tasks...)
  [NOTIFICATION] agent_001 completed
  [NOTIFICATION] agent_002 completed
  [NOTIFICATION] agent_003 failed --> reassign_agent(agent_003, "Retry file3.py")
```
### Pattern 2: Sequential

Spawn one agent, wait for result, use it to spawn the next.

```
Coordinator:
  spawn_agent(task="Research the topic") --> agent_001
  agent_wait(agent_001) --> result with facts
  spawn_agent(task="Write report using these facts: ...") --> agent_002
  agent_wait(agent_002) --> final report
```
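In plain asyncio terms, the sequential pattern is just awaiting one coroutine before launching the next with its output. An illustrative sketch (the `research` and `write_report` stand-ins are hypothetical, not framework code):

```python
import asyncio

async def research(topic: str) -> dict:
    # Stand-in for spawn_agent + agent_wait on a research agent
    return {"facts": [f"{topic}: fact A", f"{topic}: fact B"]}

async def write_report(facts: list[str]) -> str:
    # Stand-in for a second agent whose task embeds the first agent's facts
    return "Report based on: " + "; ".join(facts)

async def main() -> str:
    result = await research("caching")          # wait for the first agent
    return await write_report(result["facts"])  # feed its facts into the next

report = asyncio.run(main())
```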
### Pattern 3: Mixed

Spawn agents in parallel, process results as they arrive, spawn follow-ups.

```
Coordinator:
  spawn_agent(specialist="analyst", task="file1.py") --> agent_001
  spawn_agent(specialist="analyst", task="file2.py") --> agent_002
  [NOTIFICATION] agent_001 completed
    --> Extract findings, spawn follow-up:
        spawn_agent(task="Deep dive into the auth bug found in file1.py")
  [NOTIFICATION] agent_002 completed
    --> Aggregate all findings into report
```
## Isolation Model
Each sub-agent is fully isolated:
| Resource | Coordinator | Sub-Agent A | Sub-Agent B |
|---|---|---|---|
| Context window | Own | Own | Own |
| Messages | Own | Own | Own |
| Memory (goal, todos, facts) | Own | Own | Own |
| Module instances | Own | Own (fresh) | Own (fresh) |
| LLM provider | Own | Own or shared | Own or shared |
Sub-agents cannot:
- Access the coordinator's memory or context
- Communicate with other sub-agents
- Spawn sub-sub-agents
- See tools outside their `modules` list
This isolation guarantees:
- No race conditions on module state
- No context window pollution
- True parallel execution with no locks or mutexes
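The "no race conditions on module state" guarantee follows from constructing module instances per agent rather than sharing them. A minimal sketch of that factory pattern (class and function names here are illustrative, not Digitorn's):

```python
class MemoryModule:
    """Toy stand-in for a stateful module instance."""
    def __init__(self) -> None:
        self.facts: list[str] = []

def build_modules() -> dict:
    # Fresh instances for every spawned agent -- nothing is shared
    return {"memory": MemoryModule()}

agent_a = build_modules()
agent_b = build_modules()
agent_a["memory"].facts.append("seen only by A")
```

Because each agent mutates only its own instances, no locks are needed even though agents run concurrently.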
## Notifications

The coordinator receives automatic notifications via the background notification system (same as watchers and scheduled jobs).

Agent completed:

```
[AGENT COMPLETED] agent_abc123 (security_analyst)
Task: "Analyze oauth.py for vulnerabilities"
Duration: 15.2s, 8 turns
Findings: 3 facts stored
Status: completed
```

Agent failed:

```
[AGENT FAILED] agent_def456
Task: "Analyze module.py"
Error: "Context overflow after reading 66KB file"
Turns used: 5
--> Use reassign_agent to retry with adjusted params
```

Agent retrying (when `auto_retry` > 0):

```
[AGENT RETRYING] agent_def456
Attempt 2/2
Reason: timeout
```
## Auto-Retry

When `pool.auto_retry` is set, failed or timed-out agents are automatically retried:

```yaml
pool:
  auto_retry: 1  # retry once on failure/timeout
```

- Only `timeout` and `failed` statuses trigger a retry
- `cancelled` agents are NOT retried
- The coordinator receives an `agent_retrying` notification
- After all retries are exhausted, the final status is reported
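These rules can be condensed into a small retry loop. A sketch of the assumed semantics (the `run_with_retry` helper is hypothetical, not Digitorn's code):

```python
# Only these statuses trigger a retry; "cancelled" never does
RETRYABLE = {"failed", "timeout"}

def run_with_retry(run, auto_retry: int) -> tuple[str, int]:
    """Run an agent, retrying up to auto_retry extra attempts on retryable failure."""
    attempts = 0
    while True:
        attempts += 1
        status = run()
        if status not in RETRYABLE or attempts > auto_retry:
            return status, attempts

# Simulate an agent that times out once, then completes
outcomes = iter(["timeout", "completed"])
status, attempts = run_with_retry(lambda: next(outcomes), auto_retry=1)
```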
## Context Builder Integration

When specialists are defined, the context builder automatically injects information about the available agent pool into the coordinator's system prompt:

```
### Agent Pool
You have specialized agents at your disposal. They run in parallel
with their own context -- they don't consume your context window.

Available specialists:
- security_analyst: Security analysis -- finds vulnerabilities in code
- code_analyst: Code analysis -- architecture, patterns, quality

After spawning agents, continue working -- you are notified automatically
when they complete or fail. No need to poll with agent_status.
Use agent_wait only if you have nothing else to do.

Max parallel agents: 5
```

The coordinator is never forced to delegate -- it decides naturally based on the task.
## Complete Example

```yaml
app:
  app_id: multi-agent-audit
  mode: conversation

agents:
  - id: coordinator
    role: coordinator
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
        max_tokens: 4096
    context:
      max_tokens: 90000
      strategy: summarize
      keep_recent: 8
    system_prompt: |
      You are a senior software architect. You coordinate code audits
      by delegating file analysis to specialists and producing reports.
    pool:
      max_workers: 3
      auto_retry: 1

  - id: code_analyst
    role: specialist
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    specialty: "Code analysis -- architecture, patterns, quality assessment"
    skills: "./skills/code_analysis.md"
    system_prompt: |
      You analyze Python source code for architecture patterns,
      code quality, and design issues. Be thorough and specific.
    modules: [filesystem, memory]

  - id: security_analyst
    role: specialist
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
        base_url: "https://api.deepseek.com/v1"
    specialty: "Security analysis -- vulnerabilities and risk assessment"
    skills: "./skills/security_audit.md"
    system_prompt: |
      You analyze Python source code for security vulnerabilities.
      Rate findings as critical/high/medium/low/info.
    modules: [filesystem, memory]

modules:
  filesystem:
    config:
      allowed_read: ["./packages/"]
  memory:
    config:
      working_memory: true
      todo_list: true
      checkpoint: true

execution:
  workspace: "./packages/"
```