Open Source AI Agent Framework

Stop coding agent infrastructure.
Start building agents.

Digitorn is a declarative framework that turns a YAML file into a production-ready AI agent application. Tool discovery, cognitive memory, multi-agent orchestration, security policies -- all handled. You just define what your agent does.

Works with any LLM. Deploy anywhere. No vendor lock-in.

The problem we solve

Every AI agent project rebuilds the same infrastructure. Digitorn provides it as a declarative layer.

01

Infrastructure overhead

Every AI agent project rebuilds prompt routing, tool dispatching, context management, and error recovery from scratch.

Declare your agent in YAML. The framework handles routing, discovery, context, and recovery automatically.

02

Memory loss

When the context window fills up, the agent forgets everything. Goals, progress, findings -- gone after compaction.

Cognitive memory survives every compaction. The agent always knows its goal, its tasks, and what it found.

03

Single-threaded agents

One agent, one context window, one task at a time. Complex work is slow and sequential.

Spawn specialist sub-agents in parallel. Each has its own context, tools, and memory. True concurrency.
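In the same declarative style as the main example, a multi-agent setup could look something like this. This is a sketch only: the coordinator/specialist field names shown here are assumptions, not the framework's confirmed schema.

```yaml
# Hypothetical multi-agent sketch -- key names beyond `agents`,
# `id`, and `brain` are illustrative assumptions.
agents:
  - id: coordinator        # dispatches work to the specialists below
    brain:
      provider: deepseek
      model: deepseek-chat
  - id: researcher         # specialist: own context, tools, and memory
    brain:
      provider: deepseek
      model: deepseek-chat
  - id: reviewer           # runs in parallel with the researcher
    brain:
      provider: deepseek
      model: deepseek-chat
```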

04

Tool integration pain

MCP servers return raw, inconsistent output. Every integration requires custom parsing and error handling.

60+ pre-configured servers. Results are normalized automatically. Smart cache, middleware, auto-reconnect.
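Following the same YAML conventions, enabling one of the pre-configured MCP servers might look like the sketch below. The exact keys under `mcp` are assumptions; only the `modules` / `config` structure mirrors the example later on this page.

```yaml
# Hypothetical sketch: attaching a pre-configured MCP server.
# The `servers` key and server name are illustrative assumptions.
modules:
  mcp:
    config:
      servers:
        - github           # one of the 60+ pre-configured servers
```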

20 lines of YAML.

A complete agent.

This agent reads files, makes git commits, searches the web, tracks its tasks with a checklist, and remembers what it found even after the context window is compacted.

  • No Python code to write
  • No prompt engineering required
  • No infrastructure to manage
  • Switch LLM providers in one line
Build your first agent
app:
  app_id: code-assistant
  name: "Code Assistant"

modules:
  filesystem: {}
  git: {}
  web: {}
  memory:
    config:
      working_memory: true
      todo_list: true
      checkpoint: true

agents:
  - id: assistant
    brain:
      provider: deepseek
      model: deepseek-chat
      config:
        api_key: "{{env.DEEPSEEK_API_KEY}}"
    system_prompt: |
      You are a senior software engineer.

execution:
  mode: conversation
  greeting: "Hello! What are we building today?"

Everything an agent needs

11 modules, 135+ actions, cognitive memory, multi-agent orchestration, security policies, and a production API. All opt-in.

Agent Modules

filesystem -- Read, write, edit with line numbers. Surgical edits. Fast grep.
git -- Native via pygit2. Status in 3ms. 60x faster than MCP servers.
web -- Search (DuckDuckGo, free), fetch pages, extract content.
database -- SQLite, PostgreSQL, MySQL. Schema introspection. 29 actions.
shell -- Commands, scripts, background tasks with output capture.
http -- Full HTTP client. JSON APIs, forms, file uploads, downloads.
notebook -- Read and edit Jupyter notebooks. No kernel needed.
mcp -- Connect 60+ external MCP servers. Plug and play.

Intelligence

memory -- Goals, plans, tasks, sticky notes, key facts, checkpoints.
multi-agent -- Coordinator + specialists. Parallel execution. Structured results.
skills -- Reusable workflow commands: /commit, /review, /audit.
context -- Auto-compaction, summary brain, memory re-injection.

Infrastructure

security -- Risk levels, grant/deny/approve policies, approval queue.
middleware -- App, module, and MCP levels. Secret masking, content filtering, RAG.
API -- REST API, SSE streaming, multi-worker daemon, rate limiting.
channels -- Email, Slack, Telegram, webhook, SMS. Pluggable.

Any LLM. One config change.

Switch from a cloud API to a local model by changing one line. No code changes. No redeployment.
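For example, moving the Code Assistant from DeepSeek's cloud API to a local Ollama model would only touch the `brain` block of the earlier example (the local model name below is illustrative):

```yaml
agents:
  - id: assistant
    brain:
      provider: ollama       # was: deepseek
      model: llama3.1        # illustrative local model name
```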

DeepSeek
OpenAI
Anthropic
Groq
Mistral
Ollama
vLLM
LM Studio
Together
OpenRouter

Plus any OpenAI-compatible API. Text-based tool-call recovery handles models with imperfect function-calling support.

Ready to build?

Install Digitorn, create a YAML file, and run your first agent in under 5 minutes.