Explore the Documentation
Getting Started
Install Digitorn, create your first agent app, and run it in minutes.
Agents and Tools
Configure agents, brains, tool discovery, built-in primitives, and multi-agent orchestration.
Cognitive Memory
Goals, plans, tasks, facts, notes, and checkpoints that survive context compaction.
Security Architecture
7 enforcement gates, approval workflows, audit log, rate limiting, and data classification.
Module Reference
All 11 modules: filesystem, git, shell, web, database, notebook, memory, MCP, and more.
REST API
36 endpoints, SSE streaming, session management, and deployment workflows.
The problem we solve
Every AI agent project rebuilds the same infrastructure. Digitorn provides it as a declarative layer.
Infrastructure overhead
Every AI agent project rebuilds prompt routing, tool dispatching, context management, and error recovery from scratch.
Declare your agent in YAML. The framework handles routing, discovery, context, and recovery automatically.
Memory loss
When the context window fills up, the agent forgets everything. Goals, progress, findings: all gone after compaction.
Cognitive memory survives every compaction. The agent always knows its goal, its tasks, and what it found.
Single-threaded agents
One agent, one context window, one task at a time. Complex work is slow and sequential.
Spawn specialist sub-agents in parallel. Each has its own context, tools, and memory. True concurrency.
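A coordinator with parallel specialists might be declared in the same YAML style as a single agent. This is a hedged sketch only: the `sub_agents` key, the `modules` shorthand, and the agent ids shown here are illustrative assumptions, not confirmed Digitorn schema.

```yaml
# Hypothetical sketch of a coordinator that spawns two specialists in parallel.
# The `sub_agents` key and per-agent `modules` lists are assumptions for illustration.
agents:
  - id: coordinator
    brain:
      provider: deepseek
      model: deepseek-chat
    sub_agents:
      - id: researcher            # own context, tools, and memory
        modules: [web, memory]
      - id: coder                 # runs concurrently with researcher
        modules: [filesystem, git, shell]
```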
Tool integration pain
MCP servers return raw, inconsistent output. Every integration requires custom parsing and error handling.
60+ pre-configured servers. Results are normalized automatically. Smart caching, middleware, and auto-reconnect.
20 lines of YAML.
A complete agent.
This agent reads files, makes git commits, searches the web, tracks its tasks with a checklist, and remembers what it found even after the context window is compacted.
- No Python code to write
- No prompt engineering required
- No infrastructure to manage
- Switch LLM providers in one line
app:
  app_id: code-assistant
  name: "Code Assistant"
  modules:
    filesystem: {}
    git: {}
    web: {}
    memory:
      config:
        working_memory: true
        todo_list: true
        checkpoint: true
  agents:
    - id: assistant
      brain:
        provider: deepseek
        model: deepseek-chat
        config:
          api_key: "{{env.DEEPSEEK_API_KEY}}"
      system_prompt: |
        You are a senior software engineer.
      execution:
        mode: conversation
        greeting: "Hello! What are we building today?"

Everything an agent needs
11 modules, 135+ actions, cognitive memory, multi-agent orchestration, security policies, and a production API. All opt-in.
Agent Modules
- filesystem: Read, write, edit with line numbers. Surgical edits. Fast grep.
- git: Native via pygit2. Status in 3ms. 60x faster than MCP servers.
- web: Search (DuckDuckGo, free), fetch pages, extract content.
- database: SQLite, PostgreSQL, MySQL. Schema introspection. 29 actions.
- shell: Commands, scripts, background tasks with output capture.
- http: Full HTTP client. JSON APIs, forms, file uploads, downloads.
- notebook: Read and edit Jupyter notebooks. No kernel needed.
- mcp: Connect 60+ external MCP servers. Plug and play.

Intelligence
- memory: Goals, plans, tasks, sticky notes, key facts, checkpoints.
- multi-agent: Coordinator + specialists. Parallel execution. Structured results.
- skills: Reusable workflow commands. /commit, /review, /audit.
- context: Auto-compaction, summary brain, memory re-injection.

Infrastructure
- security: Risk levels, grant/deny/approve policies, approval queue.
- middleware: App, module, and MCP levels. Secret masking, content filtering, RAG.
- API: REST API, SSE streaming, multi-worker daemon, rate limiting.
- channels: Email, Slack, Telegram, webhook, SMS. Pluggable.

Any LLM. One config change.
Switch from a cloud API to a local model by changing one line. No code changes. No redeployment.
Plus any OpenAI-compatible API. Text-based tool calling recovery for models with imperfect function calling support.
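For example, the brain from the sample config above could target a local OpenAI-compatible server instead of DeepSeek. This is a sketch under assumptions: the `openai_compatible` provider name, the `base_url` key, and the local model name are illustrative, not confirmed Digitorn schema.

```yaml
# Before: cloud API
brain:
  provider: deepseek
  model: deepseek-chat

# After: local OpenAI-compatible server (provider name, base_url key,
# and model are illustrative assumptions, not confirmed schema)
brain:
  provider: openai_compatible
  model: llama3
  config:
    base_url: "http://localhost:11434/v1"
```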
Ready to build?
Install Digitorn, create a YAML file, and run your first agent in under 5 minutes.