# Digitorn
A declarative framework for building AI agent applications. Define what your agents do, how they think, and what tools they use -- entirely in YAML.
## What is Digitorn?
Digitorn turns a YAML file into a production-ready AI agent application. You describe the agent's capabilities. The framework handles LLM routing, tool discovery, memory management, security enforcement, multi-agent orchestration, and context window optimization.
```yaml
app:
  app_id: code-assistant
  name: "Code Assistant"

  modules:
    filesystem: {}
    git: {}
    web: {}
    memory:
      config:
        working_memory: true
        todo_list: true

  agents:
    - id: assistant
      role: assistant
      brain:
        provider: deepseek
        model: deepseek-chat
        config:
          api_key: "{{env.DEEPSEEK_API_KEY}}"
      system_prompt: "You are a senior software engineer."
```
This agent can read and edit files, make git commits, search the web, track its tasks, and maintain context across compactions. All declared, not coded.
## Why Digitorn?
Building an AI agent today means writing the same infrastructure over and over: prompt engineering, tool routing, context window handling, memory persistence, error recovery, security policies. Every project starts from scratch.
Digitorn provides this infrastructure as a declarative layer. You describe what your agent should do. The framework handles how it runs.
| What you declare | What Digitorn handles |
|---|---|
| `brain: deepseek` | Provider auto-configuration, connection pooling, retry logic |
| `modules: [filesystem, git]` | Tool discovery, routing, parameter validation, result normalization |
| `memory: working_memory: true` | Cognitive state that survives context compaction |
| `agents: role: coordinator` | Parallel sub-agent orchestration with isolated contexts |
| `capabilities: grant/deny` | Security policies with risk-based approval workflows |
## Architecture

### The Agent Loop
The core runtime follows a simple cycle: the agent receives a message, reasons with its LLM, calls module actions, folds the results back into its context, and repeats until the task is complete.
## Core Concepts

### Modules
Modules provide agent capabilities. Each module exposes a set of actions that agents discover and execute at runtime.
See the Module Reference for the complete list.
### Tool Discovery
Agents discover tools through a two-mode system, chosen automatically based on the number of tools and the context window size:
| Mode | When | How it works |
|---|---|---|
| Direct | Few tools, large context | All tools injected as native function schemas |
| Discovery | Many tools, smaller context | Agent uses meta-tools backed by semantic search |
In discovery mode, the agent uses five meta-tools (`search_tools`, `get_tool`, `execute_tool`, `list_categories`, `browse_category`) to find and execute any action from any module. Semantic search is powered by FastEmbed (multilingual embeddings) and Qdrant (in-memory HNSW index).
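As an illustration (not the actual wire format), a discovery-mode exchange might look roughly like the following. The meta-tool names come from the list above; the argument names and the `git.commit` action id are hypothetical.

```yaml
# Rough sketch of a discovery-mode flow; argument names and the action id
# are assumptions, not the documented schema.
- meta_tool: search_tools
  arguments:
    query: "commit staged changes"   # semantic search over all module actions
- meta_tool: get_tool
  arguments:
    tool_name: git.commit            # hypothetical action id returned by the search
- meta_tool: execute_tool
  arguments:
    tool_name: git.commit
    parameters:
      message: "Fix failing tests"
```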
### Memory
The cognitive memory system gives agents persistent awareness across turns and compactions.
Every layer is opt-in. Enable only what you need. See Cognitive Memory.
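As a minimal sketch, the snippet below opts in to the two layers used in the examples above; further layer keys follow the same pattern and are listed in the Cognitive Memory guide.

```yaml
# Opt-in memory layers, declared per application (inside the app: block)
modules:
  memory:
    config:
      working_memory: true   # goal, tasks, notes, facts kept visible to the agent
      todo_list: true        # persistent task tracking across turns and compactions
```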
### Multi-Agent
A coordinator agent spawns specialist sub-agents that run in parallel with fully isolated context windows.
Each sub-agent has its own LLM provider, tools, memory, and context window. The coordinator receives structured results (findings, facts, errors) and is notified automatically on completion. See Multi-Agent.
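A hedged sketch of what such a setup could look like in YAML: `role: coordinator` and the `agent_spawn` module appear earlier in this README, while the specialist fields shown here are assumptions to verify against the Multi-Agent guide.

```yaml
# Illustrative sketch only: exact specialist wiring may differ.
app:
  modules:
    agent_spawn: {}            # multi-agent orchestration actions
  agents:
    - id: lead
      role: coordinator        # spawns sub-agents and collects their results
      brain:
        provider: deepseek
        model: deepseek-chat
    - id: researcher           # hypothetical specialist definition
      role: specialist
      brain:
        provider: ollama
        model: llama3.1:8b
      system_prompt: "You research library APIs and report structured findings."
```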
### Security
Applications declare what agents can and cannot do. Actions are classified by risk level. The security profile maps risks to policies.
| Risk Level | Examples | Default Behavior |
|---|---|---|
| Low | Read file, list directory, search | Auto-approved |
| Medium | Write file, HTTP POST, git commit | Depends on policy |
| High | Delete file, git push, shell execute | Requires explicit grant |
See Security.
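For illustration, a profile along these lines might grant low-risk actions outright, route medium-risk actions through approval, and deny high-risk ones. The `capabilities: grant/deny` shape comes from the table in Why Digitorn, but the key names and policy values below are assumptions; check the Security guide for the real schema.

```yaml
# Hypothetical security profile; key and policy names are assumed,
# not confirmed against the Digitorn schema.
security:
  capabilities:
    grant:
      - filesystem.read_file     # low risk: auto-approved
      - web.search
    deny:
      - shell.execute            # high risk: blocked unless explicitly granted
  policies:
    medium_risk: require_approval  # e.g. write file, HTTP POST, git commit
```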
## Getting Started

### Install

```bash
pip install digitorn
```
### Create an application

```yaml
# my-app.yaml
app:
  app_id: my-app
  name: "My First Agent"

  modules:
    filesystem: {}
    memory:
      config:
        working_memory: true

  agents:
    - id: assistant
      brain:
        provider: ollama
        model: llama3.1:8b
      system_prompt: "You are a helpful coding assistant."

  execution:
    mode: conversation
    greeting: "Hello! How can I help you today?"
```
### Run

```bash
# Standalone (no daemon)
digitorn run my-app.yaml

# Or deploy to the daemon
digitorn start
digitorn app deploy my-app.yaml
digitorn run my-app
```
## Documentation

### Guides
| Guide | Description |
|---|---|
| Getting Started | Installation, first app, running |
| App Configuration | YAML structure reference |
| Agents | Agent definition, brain, providers |
| Tools | Tool discovery, meta-tools, semantic search |
| Cognitive Memory | Goals, tasks, notes, facts, compaction survival |
| Context Management | Compaction strategies, hooks |
| Multi-Agent | Coordinator, specialists, parallel execution |
| Security | Capabilities, policies, approval workflows |
| API Integration | REST API, SSE streaming |
### Module Reference
| Module | Actions | Description |
|---|---|---|
| filesystem | 15 | File operations, surgical edits, fast grep |
| database | 29 | SQL databases with introspection |
| git | 17 | Native git via pygit2 |
| shell | 12 | Shell commands and scripts |
| http | 16 | HTTP client |
| web | 4 | Web search and content extraction |
| notebook | 4 | Jupyter notebooks |
| memory | 16 | Cognitive memory system |
| agent_spawn | 7 | Multi-agent orchestration |
| mcp | 11 | External MCP servers |
### Advanced
| Guide | Description |
|---|---|
| MCP Servers | Connect external tools via Model Context Protocol |
| Middleware | Request/response pipeline at app, module, and MCP levels |
| Skills | Reusable workflow commands |
| Output Channels | Email, webhook, Slack notifications |
| Examples | Complete real-world applications |
## CLI Reference

```bash
# Run applications
digitorn run <app.yaml> [message]   # Standalone mode
digitorn run <app-id>               # Daemon mode (interactive)

# Application management
digitorn app deploy <app.yaml>      # Deploy to daemon
digitorn app list                   # List deployed apps
digitorn app undeploy <app-id>      # Remove from daemon
digitorn app validate <app.yaml>    # Validate YAML syntax
digitorn app schema <module-id>     # Show module config schema

# Daemon lifecycle
digitorn start                      # Start daemon
digitorn stop                       # Stop daemon

# MCP servers
digitorn mcp install <name>         # Install an MCP server
digitorn mcp list                   # List installed servers
digitorn mcp test <name>            # Test server connection
```
## Glossary
| Term | Definition |
|---|---|
| Action | A function exposed by a module that an agent can call. Defined with the @action decorator. |
| Agent | An LLM-powered entity that receives messages, reasons, and calls actions to accomplish tasks. |
| Brain | The LLM configuration for an agent: provider, model, temperature, context settings. |
| Compaction | Automatic summarization of old messages when the context window fills up. |
| Context window | The maximum number of tokens an LLM can process in a single request. |
| Coordinator | An agent that spawns and manages sub-agents for parallel work. |
| Module | A self-contained package of actions. Modules are declared in YAML and auto-discovered. |
| Provider | An LLM service backend (DeepSeek, OpenAI, Anthropic, Ollama, etc.). |
| Skill | A reusable workflow file (.md) that an agent loads on demand via use_skill. |
| Specialist | A pre-configured sub-agent with a specific role, brain, and skill set. |
| Working memory | Cognitive state (goal, tasks, notes, facts) that is always visible to the agent. |