Digitorn

A declarative framework for building AI agent applications. Define what your agents do, how they think, and what tools they use -- entirely in YAML.


What is Digitorn?

Digitorn turns a YAML file into a production-ready AI agent application. You describe the agent's capabilities. The framework handles LLM routing, tool discovery, memory management, security enforcement, multi-agent orchestration, and context window optimization.

```yaml
app:
  app_id: code-assistant
  name: "Code Assistant"

modules:
  filesystem: {}
  git: {}
  web: {}
  memory:
    config:
      working_memory: true
      todo_list: true

agents:
  - id: assistant
    role: assistant
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
    system_prompt: "You are a senior software engineer."
```

This agent can read and edit files, make git commits, search the web, track its tasks, and maintain context across compactions. All declared, not coded.


Why Digitorn?

Building an AI agent today means writing the same infrastructure over and over: prompt engineering, tool routing, context window handling, memory persistence, error recovery, security policies. Every project starts from scratch.

Digitorn provides this infrastructure as a declarative layer. You describe what your agent should do. The framework handles how it runs.

| What you declare | What Digitorn handles |
|---|---|
| `brain: deepseek` | Provider auto-configuration, connection pooling, retry logic |
| `modules: [filesystem, git]` | Tool discovery, routing, parameter validation, result normalization |
| `memory: working_memory: true` | Cognitive state that survives context compaction |
| `agents: role: coordinator` | Parallel sub-agent orchestration with isolated contexts |
| `capabilities: grant/deny` | Security policies with risk-based approval workflows |

Architecture

The Agent Loop

The core runtime follows a simple cycle: the agent receives a message, the brain reasons over it and selects actions, the framework executes those actions and feeds the results back, and the loop repeats until the agent produces a response. When the context window fills, compaction summarizes older messages so the cycle can continue.


Core Concepts

Modules

Modules provide agent capabilities. Each module exposes a set of actions that agents discover and execute at runtime.

See the Module Reference for the complete list.
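
Each module is declared under the top-level `modules:` key: an empty mapping enables the module with its defaults, while a nested `config:` block overrides them, as the memory example above shows. The sketch below follows that pattern; the `allowed_paths` key is an illustrative assumption, so check the real options with `digitorn app schema filesystem`.

```yaml
modules:
  git: {}                 # empty mapping: enable the module with default settings
  filesystem:
    config:
      # Hypothetical key, shown only to illustrate per-module overrides.
      # Run `digitorn app schema filesystem` for the actual schema.
      allowed_paths:
        - ./src
  memory:
    config:
      working_memory: true
      todo_list: true
```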

Tool Discovery

Agents discover tools through a two-mode system, chosen automatically based on the number of tools and the context window size:

| Mode | When | How it works |
|---|---|---|
| Direct | Few tools, large context | All tools injected as native function schemas |
| Discovery | Many tools, smaller context | Agent uses meta-tools backed by semantic search |

In discovery mode, the agent uses five meta-tools (search_tools, get_tool, execute_tool, list_categories, browse_category) to find and execute any action from any module. Semantic search is powered by FastEmbed (multilingual embeddings) and Qdrant (in-memory HNSW index).
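
As a rough illustration, a single discovery-mode step might chain three of these meta-tools: search for a matching action, inspect its signature, then execute it. Only the meta-tool names below come from the framework; the argument names, the `git.commit` action id, and the call format are assumptions.

```yaml
# Illustrative only: meta-tool names are real, everything else is assumed
- call: search_tools
  arguments:
    query: "create a git commit"    # semantic search across all module actions
- call: get_tool
  arguments:
    tool: git.commit                # inspect the matched action's parameters
- call: execute_tool
  arguments:
    tool: git.commit
    parameters:
      message: "Fix typo in README"
```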

Memory

The cognitive memory system gives agents persistent awareness across turns and compactions.

Every layer is opt-in. Enable only what you need. See Cognitive Memory.
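
For example, the working memory layer keeps a small structured state (goal, tasks, notes, facts) that stays visible to the agent even after compaction. The snapshot below is purely illustrative; the actual serialization is defined by the memory module, not by this sketch.

```yaml
# Illustrative working-memory snapshot, not a real wire format
goal: "Migrate the HTTP client code to async"
tasks:
  - "[done] Inventory synchronous call sites"
  - "[in progress] Convert request handlers to async/await"
notes:
  - "Integration tests assume a blocking client; update fixtures last"
facts:
  - "The project pins Python 3.11 in pyproject.toml"
```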

Multi-Agent

A coordinator agent spawns specialist sub-agents that run in parallel with fully isolated context windows.

Each sub-agent has its own LLM provider, tools, memory, and context window. The coordinator receives structured results (findings, facts, errors) and is notified automatically on completion. See Multi-Agent.
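
A minimal sketch of such a setup, assuming the schema mirrors the agent definitions shown earlier: the `role: coordinator` agent and the `agent_spawn` module appear elsewhere in this document, while the `specialists` key, the `skills` list, and the specialist fields are assumptions based on the glossary; see the Multi-Agent guide for the real structure.

```yaml
modules:
  agent_spawn: {}           # multi-agent orchestration actions

agents:
  - id: lead
    role: coordinator       # spawns sub-agents and collects their results
    brain:
      provider: deepseek
      model: deepseek-chat

# Assumed key: pre-configured sub-agents with their own role, brain, and skills
specialists:
  - id: researcher
    role: researcher
    brain:
      provider: ollama
      model: llama3.1:8b
    skills:
      - research.md
```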

Security

Applications declare what agents can and cannot do. Actions are classified by risk level. The security profile maps risks to policies.

| Risk Level | Examples | Default Behavior |
|---|---|---|
| Low | Read file, list directory, search | Auto-approved |
| Medium | Write file, HTTP POST, git commit | Depends on policy |
| High | Delete file, git push, shell execute | Requires explicit grant |

See Security.
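
As a hedged sketch of what such a declaration could look like (the `security` block, its key names, and the action ids below are assumptions used for illustration; the Security guide documents the real schema):

```yaml
security:
  capabilities:
    grant:
      - filesystem.write    # medium risk, explicitly allowed for this app
      - git.commit
    deny:
      - shell.execute       # high risk, never allowed here
  profile:
    low: auto_approve       # read file, list directory, search
    medium: ask_policy      # write file, HTTP POST, git commit
    high: require_grant     # delete file, git push, shell execute
```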


Getting Started

Install

```bash
pip install digitorn
```

Create an application

```yaml
# my-app.yaml
app:
  app_id: my-app
  name: "My First Agent"

modules:
  filesystem: {}
  memory:
    config:
      working_memory: true

agents:
  - id: assistant
    brain:
      provider: ollama
      model: llama3.1:8b
    system_prompt: "You are a helpful coding assistant."

execution:
  mode: conversation
  greeting: "Hello! How can I help you today?"
```

Run

```bash
# Standalone (no daemon)
digitorn run my-app.yaml

# Or deploy to the daemon
digitorn start
digitorn app deploy my-app.yaml
digitorn run my-app
```

Documentation

Guides

| Guide | Description |
|---|---|
| Getting Started | Installation, first app, running |
| App Configuration | YAML structure reference |
| Agents | Agent definition, brain, providers |
| Tools | Tool discovery, meta-tools, semantic search |
| Cognitive Memory | Goals, tasks, notes, facts, compaction survival |
| Context Management | Compaction strategies, hooks |
| Multi-Agent | Coordinator, specialists, parallel execution |
| Security | Capabilities, policies, approval workflows |
| API Integration | REST API, SSE streaming |

Module Reference

| Module | Actions | Description |
|---|---|---|
| filesystem | 15 | File operations, surgical edits, fast grep |
| database | 29 | SQL databases with introspection |
| git | 17 | Native git via pygit2 |
| shell | 12 | Shell commands and scripts |
| http | 16 | HTTP client |
| web | 4 | Web search and content extraction |
| notebook | 4 | Jupyter notebooks |
| memory | 16 | Cognitive memory system |
| agent_spawn | 7 | Multi-agent orchestration |
| mcp | 11 | External MCP servers |

Advanced

| Guide | Description |
|---|---|
| MCP Servers | Connect external tools via Model Context Protocol |
| Middleware | Request/response pipeline at app, module, and MCP levels |
| Skills | Reusable workflow commands |
| Output Channels | Email, webhook, Slack notifications |
| Examples | Complete real-world applications |

CLI Reference

```bash
# Run applications
digitorn run <app.yaml> [message]   # Standalone mode
digitorn run <app-id>               # Daemon mode (interactive)

# Application management
digitorn app deploy <app.yaml>      # Deploy to daemon
digitorn app list                   # List deployed apps
digitorn app undeploy <app-id>      # Remove from daemon
digitorn app validate <app.yaml>    # Validate YAML syntax
digitorn app schema <module-id>     # Show module config schema

# Daemon lifecycle
digitorn start                      # Start daemon
digitorn stop                       # Stop daemon

# MCP servers
digitorn mcp install <name>         # Install an MCP server
digitorn mcp list                   # List installed servers
digitorn mcp test <name>            # Test server connection
```

Glossary

| Term | Definition |
|---|---|
| Action | A function exposed by a module that an agent can call. Defined with the `@action` decorator. |
| Agent | An LLM-powered entity that receives messages, reasons, and calls actions to accomplish tasks. |
| Brain | The LLM configuration for an agent: provider, model, temperature, context settings. |
| Compaction | Automatic summarization of old messages when the context window fills up. |
| Context window | The maximum number of tokens an LLM can process in a single request. |
| Coordinator | An agent that spawns and manages sub-agents for parallel work. |
| Module | A self-contained package of actions. Modules are declared in YAML and auto-discovered. |
| Provider | An LLM service backend (DeepSeek, OpenAI, Anthropic, Ollama, etc.). |
| Skill | A reusable workflow file (`.md`) that an agent loads on demand via `use_skill`. |
| Specialist | A pre-configured sub-agent with a specific role, brain, and skill set. |
| Working memory | Cognitive state (goal, tasks, notes, facts) that is always visible to the agent. |