A multi-agent orchestration terminal user interface (TUI) with dynamic task routing. Built with Rust, ratatui, and powered by LLMs.
- Multi-Agent System: Pre-configured specialized agents for different tasks:
  - Planner: Analyzes and breaks down complex tasks
  - Coder: Writes and modifies code
  - Reviewer: Reviews code for quality and issues
  - Tester: Generates and runs tests
  - Explorer: Explores codebase structure
  - Integrator: Synthesizes results from multiple agents
- Dynamic Routing: Automatic task analysis and routing to the most appropriate agent based on confidence scores
- Two Operation Modes:
  - Auto Mode: Automatically routes tasks to the best agent
  - Manual Mode: Manually select which agent to use
- Task Decomposition: Complex tasks are automatically broken into subtasks with dependency tracking and parallel execution support
- Session Persistence: Auto-saves conversations and session state
- Shared Memory: Agents can share context and information across tasks
- Streaming Responses: Real-time streaming output from agents
- Markdown Rendering: Rich markdown display with syntax highlighting
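The confidence-based routing described above can be sketched as follows. This is purely illustrative: the `route` function and `Route` type are assumptions, not the crate's actual API, though the threshold mirrors the `routing_confidence_threshold` setting shown in the configuration below.

```rust
// Illustrative sketch of confidence-based routing (not the actual implementation).
// Each agent reports a confidence score for a task; the router picks the best
// match, falling back to manual selection when no score clears the threshold.

#[derive(Debug, PartialEq)]
enum Route {
    Agent(&'static str), // auto-route to this agent
    AskUser,             // confidence too low: defer to manual selection
}

fn route(scores: &[(&'static str, f64)], threshold: f64) -> Route {
    scores
        .iter()
        .cloned()
        .max_by(|a, b| a.1.total_cmp(&b.1))
        .filter(|(_, score)| *score >= threshold)
        .map_or(Route::AskUser, |(name, _)| Route::Agent(name))
}

fn main() {
    let scores = [("planner", 0.42), ("coder", 0.91), ("reviewer", 0.30)];
    // With a threshold of 0.8, "coder" wins outright.
    assert_eq!(route(&scores, 0.8), Route::Agent("coder"));
    // If no agent clears the threshold, the router defers to the user.
    assert_eq!(route(&scores, 0.95), Route::AskUser);
    println!("routing ok");
}
```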
- Rust 1.70+ (edition 2021)
- An OpenAI API key (or configure alternative LLM provider)
```sh
git clone https://github.com/falloficarus22/axon.git
cd axon
cargo build --release
```

The binary will be available at `target/release/agent-tui`.

```sh
cargo run
```

On first run, Agent TUI creates a default configuration file at:

- Linux: `~/.config/agent-tui/config.toml`
- macOS: `~/Library/Application Support/agent-tui/config.toml`

Set your API key via an environment variable:

```sh
export OPENAI_API_KEY="your-api-key-here"
```

Edit `~/.config/agent-tui/config.toml`:
```toml
# LLM Configuration
[llm]
provider = "openai"
api_key = "$OPENAI_API_KEY"  # Use $ prefix for env vars
model = "gpt-4o"
max_tokens = 4096
temperature = 0.7

# Orchestration Settings
[orchestration]
mode = "auto"  # or "manual"
max_concurrent_agents = 5
routing_confidence_threshold = 0.8
auto_confirm_threshold = 0.95

# Persistence Settings
[persistence]
session_dir = "~/.agent-tui/sessions"
memory_dir = "~/.agent-tui/memory"
auto_save_interval = 30  # seconds
max_sessions = 100

# UI Settings
[ui]
theme = "dark"
show_agent_flow = true
show_timestamps = true
show_confidence_scores = true
datetime_format = "%H:%M:%S"

# Keybindings
[keybindings]
quit = "ctrl+c"
submit = "enter"
new_line = "shift+enter"
history_up = "up"
history_down = "down"
autocomplete = "tab"
command_palette = "ctrl+k"
agent_selector = "ctrl+a"
sidebar_toggle = "ctrl+b"
memory_manager = "ctrl+m"
```

Define custom agents in the `[agents]` section:
```toml
[agents.my_custom_agent]
enabled = true
role = "coder"
description = "My custom coding agent"
model = "gpt-4o"
system_prompt = """
You are a specialized coding assistant.
Your job is to write clean, efficient code.
"""
capabilities = ["code", "refactor", "debug"]
```

Start the application with `agent-tui`, then:

1. Type your request in the input field at the bottom
2. Press `Enter` to submit
3. The system analyzes your request and routes it to the appropriate agent
4. View the agent's response in the chat area
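The `capabilities` list in the agent definition suggests a simple gating rule for routing. As a hypothetical sketch (the `can_handle` helper is illustrative, not from the codebase): an agent qualifies for a task only if it covers every capability the task requires.

```rust
// Hypothetical capability gate: an agent can take a task only when its
// declared capabilities cover all of the task's required capabilities.

fn can_handle(capabilities: &[&str], required: &[&str]) -> bool {
    required.iter().all(|r| capabilities.contains(r))
}

fn main() {
    // Mirrors the example config: capabilities = ["code", "refactor", "debug"]
    let caps = ["code", "refactor", "debug"];
    assert!(can_handle(&caps, &["refactor"]));
    assert!(can_handle(&caps, &["code", "debug"]));
    assert!(!can_handle(&caps, &["test"])); // not declared, so not eligible
    println!("capability gate ok");
}
```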
| Command | Description |
|---|---|
| `/help` | Show help message |
| `/mode auto` | Enable automatic agent routing |
| `/mode manual` | Enable manual agent selection |
| `/agent <name>` | Select specific agent (manual mode) |
| `/agents` | List all available agents |
| `/clear` | Clear current session |
| `/new` | Start a new session |
| `/save <name>` | Save current session to file |
| `/load <id>` | Load a session by ID |
| `/sessions` | List all saved sessions |
| `/delete <id>` | Delete a session by ID |
| `/remember <key> <value>` | Store a value in session memory |
| `/recall <key>` | Retrieve a value from session memory |
| `/forget <key>` | Delete a value from session memory |
| `/cancel` | Cancel the currently running task |
| `/quit` | Exit the application |
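A dispatcher for slash commands like these can be sketched as follows. This is illustrative only; the actual parser lives in the crate's `tui/` module and may differ, and the `Command` enum below covers just a few variants.

```rust
// Illustrative slash-command parser: split the input into at most three
// whitespace-separated parts and match on the command word.

#[derive(Debug, PartialEq)]
enum Command {
    Help,
    Mode(String),             // /mode auto | /mode manual
    Remember(String, String), // /remember <key> <value>
    Unknown(String),
}

fn parse(input: &str) -> Command {
    let mut parts = input.trim().splitn(3, ' ');
    match (parts.next(), parts.next(), parts.next()) {
        (Some("/help"), _, _) => Command::Help,
        (Some("/mode"), Some(m), _) => Command::Mode(m.to_string()),
        (Some("/remember"), Some(k), Some(v)) => {
            Command::Remember(k.to_string(), v.to_string())
        }
        _ => Command::Unknown(input.to_string()),
    }
}

fn main() {
    assert_eq!(parse("/mode auto"), Command::Mode("auto".into()));
    assert_eq!(
        parse("/remember branch feature/x"),
        Command::Remember("branch".into(), "feature/x".into())
    );
    assert_eq!(parse("/help"), Command::Help);
    println!("parse ok");
}
```

Note that `splitn(3, ' ')` keeps everything after the second space as a single value, so `/remember` values may contain spaces.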
| Key | Action |
|---|---|
| `Enter` | Submit input |
| `Ctrl+C` | Quit application |
| `Ctrl+B` | Toggle sidebar |
| `Ctrl+M` | Open memory manager |
| `Ctrl+A` | Open agent selector |
| `Ctrl+X` | Cancel running task |
| `Ctrl+K` | Open command palette |
| `Up`/`Down` | Navigate history |
| `Tab` | Autocomplete |
| `/` | Enter command mode (when input is empty) |
| `Shift+Enter` | New line in input |
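The `[keybindings]` strings in the config (e.g. `"ctrl+k"`, `"shift+enter"`) imply a small parser behind these bindings. A hypothetical sketch (`Binding` and `parse_binding` are assumptions; the real app presumably maps these onto terminal key events from its backend):

```rust
// Hypothetical parser for keybinding strings such as "ctrl+k": split on '+',
// collect modifiers, and treat the remaining part as the key name.

#[derive(Debug, PartialEq)]
struct Binding {
    ctrl: bool,
    shift: bool,
    key: String,
}

fn parse_binding(s: &str) -> Binding {
    let mut b = Binding { ctrl: false, shift: false, key: String::new() };
    for part in s.split('+') {
        match part {
            "ctrl" => b.ctrl = true,
            "shift" => b.shift = true,
            k => b.key = k.to_string(),
        }
    }
    b
}

fn main() {
    assert_eq!(
        parse_binding("ctrl+k"),
        Binding { ctrl: true, shift: false, key: "k".into() }
    );
    assert_eq!(
        parse_binding("shift+enter"),
        Binding { ctrl: false, shift: true, key: "enter".into() }
    );
    println!("binding parse ok");
}
```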
```
┌─────────────────────────────────────────────────────────────┐
│                        TUI Layer                            │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐    │
│  │    Chat     │ │    Input    │ │   Sidebar/Status    │    │
│  └─────────────┘ └─────────────┘ └─────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────▼──────────────────────────────┐
│                       Orchestrator                         │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐    │
│  │    Router    │ │   Planner    │ │     Executor     │    │
│  │  (Analysis)  │ │ (Decompose)  │ │   (Agent Pool)   │    │
│  └──────────────┘ └──────────────┘ └──────────────────┘    │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────▼──────────────────────────────┐
│                        Agent Layer                         │
│  ┌────────┐ ┌────────┐ ┌─────────┐ ┌────────┐ ┌─────────┐  │
│  │Planner │ │ Coder  │ │Reviewer │ │Tester  │ │Explorer │  │
│  └────────┘ └────────┘ └─────────┘ └────────┘ └─────────┘  │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────▼──────────────────────────────┐
│                        LLM Client                          │
│                   (OpenAI API / Other)                     │
└─────────────────────────────────────────────────────────────┘
```
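The layering above can be sketched in Rust (trait and type names are illustrative, not the crate's actual API): agents hold a system prompt and delegate to an LLM client behind a trait object, which is also what makes a backend like the `mock-llm` feature possible.

```rust
// Illustrative layering: Agent -> dyn LlmClient. The real crate's types
// (in agent/ and llm/) may differ; this just shows the indirection.

trait LlmClient {
    fn complete(&self, prompt: &str) -> String;
}

// Stand-in backend; the real one would call the OpenAI API over HTTP.
struct MockLlm;

impl LlmClient for MockLlm {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

struct Agent<'a> {
    name: &'static str,
    system_prompt: &'static str,
    llm: &'a dyn LlmClient,
}

impl<'a> Agent<'a> {
    fn run(&self, task: &str) -> String {
        // Each agent prepends its identity and system prompt before delegating.
        self.llm
            .complete(&format!("[{}] {} :: {}", self.name, self.system_prompt, task))
    }
}

fn main() {
    let llm = MockLlm;
    let coder = Agent { name: "coder", system_prompt: "Write clean code.", llm: &llm };
    let out = coder.run("add a README");
    assert!(out.contains("coder"));
    println!("{out}");
}
```

Swapping `MockLlm` for a real client changes nothing on the agent side, since agents only see the `LlmClient` trait.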
- `tui/`: Terminal UI components using `ratatui`
- `orchestrator/`: Task routing, planning, and execution coordination
- `agent/`: Agent definitions, registry, and runtime
- `llm/`: LLM client abstraction
- `persistence/`: Session and memory storage
- `types/`: Core data types and structures
- `shared/`: Shared memory for inter-agent communication
```sh
cargo run
cargo test

# Enable mock LLM for testing without API key
cargo run --features mock-llm

cargo fmt
cargo clippy
```

```
axon/
├── README.md             # This file
├── .github/              # GitHub workflows and templates
│   └── workflows/
│       └── opencode.yml  # AI-powered coding workflow
├── agent-tui/            # Main Rust crate
│   ├── Cargo.toml        # Project dependencies and metadata
│   ├── src/
│   │   ├── main.rs       # Application entry point
│   │   ├── config.rs     # Configuration management
│   │   ├── agent/        # Agent system
│   │   │   ├── mod.rs    # Agent registry
│   │   │   ├── runtime.rs # Agent runtime
│   │   │   └── agents/   # Default agent definitions
│   │   ├── llm/          # LLM client
│   │   ├── orchestrator/ # Task orchestration
│   │   │   ├── mod.rs    # Router, Planner, Executor
│   │   │   └── pool.rs   # Agent pool management
│   │   ├── persistence/  # Session/memory storage
│   │   ├── tui/          # Terminal UI
│   │   │   ├── mod.rs    # Main app loop
│   │   │   ├── components/ # UI components
│   │   │   └── markdown.rs # Markdown rendering
│   │   ├── types/        # Core types
│   │   └── shared/       # Shared memory
│   └── target/           # Build artifacts
└── .gitignore            # Git ignore patterns
```
MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request