Full documentation lives in docs/: introduction, tutorials, and API reference as a flat set of markdown files.
OmegaClaw is a hybrid agentic AI framework implemented in MeTTa on OpenCog Hyperon. A large language model (LLM) works together with formal logic engines — NAL and PLN — to reason about the world, track uncertainty, combine evidence, and produce conclusions that are mathematically grounded rather than just plausible-sounding.
The core agent loop is approximately 200 lines of MeTTa.
Most AI assistants generate answers that sound right. OmegaClaw-hosted agents generate answers that come with a mathematical receipt showing exactly how confident each conclusion is and what evidence supports it. When the agent says it is 72% confident, that number comes from formal inference — not a feeling.
- Runs a token-efficient agentic loop that receives messages, selects skills, and acts.
- Delegates reasoning to one of two formal engines, orchestrated by the LLM:
- NAL — Non-Axiomatic Logic, symbolic inference under uncertainty.
- PLN — Probabilistic Logic Networks, probabilistic higher-order reasoning.
- ONA (OpenNARS for Applications) is a planned third engine but is not installed by default — see reference-lib-ona.md for the current experimental status.
- Maintains a three-tier memory architecture (working, long-term, AtomSpace).
- Exposes an extensible skill system covering memory, shell and file I/O, communication channels, web search, remote agents, and formal reasoning.
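The receive/select/act loop described above can be sketched as follows. This is an illustrative Python sketch only — the real loop is roughly 200 lines of MeTTa, and every name here (`agent_loop`, `select_skill`, `summarize`) is hypothetical:

```python
# Hypothetical sketch of the agentic loop: receive a message, let the LLM
# pick a skill, execute it, and reply. Not OmegaClaw's actual implementation,
# which is written in MeTTa; all names are illustrative.

def agent_loop(inbox, skills, llm):
    """Process messages one at a time, yielding natural-language replies."""
    for message in inbox:
        # The LLM maps the message plus the available skill names to a decision.
        skill_name, args = llm.select_skill(message, list(skills))
        # Act: the skill might touch memory, the shell, the web, or a reasoner.
        result = skills[skill_name](*args)
        # Respond: the LLM turns the raw result back into natural language.
        yield llm.summarize(message, result)
```

The point of the sketch is the division of labor: the LLM decides *which* skill to run, while each skill (including the formal-reasoning skills) does the actual work.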
| Aspect | LLM (neural) | Formal engine (symbolic) |
|---|---|---|
| Natural language understanding | ✅ | ❌ |
| Premise formulation from text | ✅ | ❌ |
| Inference orchestration (which rule when) | ✅ | ❌ |
| Truth-value propagation | ❌ | ✅ |
| Confidence decay through chains | ❌ | ✅ |
| Formal contradiction detection | ❌ | ✅ |
| Auditable conclusion path | ❌ | ✅ |
The LLM turns ambiguous natural language into structured atoms with explicit truth values. The formal engine takes those atoms and applies rules whose truth-value arithmetic is deterministic and auditable.
When the agent outputs a conclusion, you can trace it back through every step: which premises fed into which rule, what truth value each premise carried, and what the math produced.
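To make the truth-value arithmetic concrete, here is a small Python sketch of a NAL-style deduction step. The frequency/confidence formula shown is the standard NAL deduction truth function; OmegaClaw's engines may implement additional rules, and the premise values below are invented for illustration:

```python
# Illustrative sketch of NAL-style truth-value propagation, not OmegaClaw's
# actual engine code. Each premise carries a (frequency, confidence) pair.

def deduction(premise1, premise2):
    """NAL deduction: from (A -> B) and (B -> C), derive (A -> C).

    Frequencies multiply; confidence decays with both premises' confidences
    AND frequencies, so long inference chains grow less certain -- this is
    the "confidence decay through chains" row in the table above.
    """
    f1, c1 = premise1
    f2, c2 = premise2
    f = f1 * f2
    c = c1 * c2 * f1 * f2
    return (f, c)

# Hypothetical premises: "ravens are birds" and "birds fly", each <0.9, 0.9>.
f, c = deduction((0.9, 0.9), (0.9, 0.9))
print(f"ravens fly <{f:.2f}, {c:.2f}>")  # prints: ravens fly <0.81, 0.66>
```

Because the arithmetic is deterministic, rerunning the same premises through the same rule always yields the same truth value — that is what makes the conclusion path auditable.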
Requirement: Docker
OmegaClaw can be installed, started, and subsequently restarted with this single command:
```shell
curl -fsSL https://raw.githubusercontent.com/asi-alliance/OmegaClaw-Core/refs/heads/main/scripts/omegaclaw | bash -s -- singularitynet/omegaclaw:latest
```

OmegaClaw requires an LLM to run. When prompted, select one of the supported models, enter an API key and a unique IRC channel name, then interact with your OmegaClaw at webchat.quakenet.org.
At startup, the setup script prints a one-time secret.
To activate message handling, send this command in your channel exactly once:
auth <one-time-secret>
The first user who sends the correct secret becomes the authenticated user. All messages from other users are silently ignored.
To manage your OmegaClaw container, use these commands as needed:
| Action | Command |
|---|---|
| Stop OmegaClaw | docker stop omegaclaw |
| View Logs | docker logs -f omegaclaw |
| Remove Containers | docker rm -f omegaclaw |
| Clear Memory | docker volume rm omegaclaw-memory |
To restart OmegaClaw, simply rerun the single curl command shown above. If an updated version of OmegaClaw is available, it will be installed automatically. Each restart prints a new one-time secret, which you must paste into your IRC channel to re-authenticate.
Your OmegaClaw retains its memory across restarts unless you clear it explicitly. To reset OmegaClaw to its initial state, stop OmegaClaw, remove the containers, and clear memory (all three commands are shown above), then rerun the single curl command.
