Some ideas, based on patham9's fork.
**Add containerization support**
Introduce a `Dockerfile` (based on Arch Linux) along with a `run.sh` script that handles building, running, shell access, cleaning, and volume mounting. This makes the entire environment reproducible and isolated while allowing host-mounted source code and persistent memory.

**Create a flexible environment configuration system**
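A rough Python sketch of how this provider auto-detection could work; the environment-variable names, model strings, and the `provider_init.metta` line format here are illustrative assumptions, not the actual implementation:

```python
import os

# Ordered list of (api-key env var, default model) pairs to probe.
# Both the variable names and model identifiers are assumptions.
PROVIDERS = [
    ("ANTHROPIC_API_KEY", "anthropic/claude-3-5-sonnet-20240620"),
    ("OPENAI_API_KEY", "gpt-4o"),
    ("GROQ_API_KEY", "groq/llama3-70b-8192"),
]

def detect_model(env=None):
    """Return the first model whose API key is present, else a local fallback."""
    env = os.environ if env is None else env
    for key, model in PROVIDERS:
        if env.get(key):
            return model
    return "ollama/llama3"  # local Ollama needs no API key

def write_provider_init(path="provider_init.metta", env=None):
    """Generate the provider init file consumed at startup (format assumed)."""
    model = detect_model(env)
    with open(path, "w") as f:
        f.write(f'(= (llm-model) "{model}")\n')
    return model
```

The idea is that the container entrypoint calls something like `write_provider_init()` once, so the MeTTa side never has to know which backend is configured.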
Add `.env.example` (and support for `.env`) to easily configure multiple LLM providers via environment variables. Pair this with `container_run.sh` to auto-detect the provider and generate the appropriate `provider_init.metta` file at startup.

**Integrate LiteLLM for unified LLM access**
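A minimal sketch of what a LiteLLM-backed helper could look like; the function names are hypothetical, and the import is kept lazy so the message-building logic stays usable without `litellm` installed:

```python
def build_messages(user_text, system_prompt=None):
    """Assemble the OpenAI-style message list that LiteLLM expects."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return messages

def chat(model, user_text, system_prompt=None, **kwargs):
    """One chat turn through LiteLLM; the same call works for any provider
    using LiteLLM's "provider/model" naming (e.g. "ollama/llama3")."""
    from litellm import completion  # lazy import: only needed for real calls

    response = completion(model=model, messages=build_messages(user_text, system_prompt), **kwargs)
    return response.choices[0].message.content
```

Because LiteLLM normalizes responses to the OpenAI shape, swapping providers reduces to changing the `model` string.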
Update the Python helper (`lib_llm_ext.py`) to use LiteLLM. This removes hardcoded model logic and adds seamless support for OpenAI, Anthropic (Claude), Ollama, Groq, OpenRouter, and any other LiteLLM-compatible backend.

**Enhance observability and logging**
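The logging side of such a wrapper might be sketched like this; the noise patterns and field names are assumptions for illustration:

```python
import json
import time

# Hypothetical prefixes of PeTTa interpreter chatter worth hiding.
NOISY_PREFIXES = ("[INFO]", "Loading")

def filter_output(lines):
    """Drop noisy interpreter lines, keep everything meaningful."""
    return [ln for ln in lines if not ln.startswith(NOISY_PREFIXES)]

def log_event(event_type, payload, path="agent.log"):
    """Append one structured JSON line with a timestamp and event type."""
    entry = {"ts": time.time(), "event": event_type, **payload}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

One JSON object per line keeps `agent.log` trivially greppable and parseable with `json.loads` line by line.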
Introduce `agent_run.py` as a wrapper that filters noisy PeTTa output, supports a "dry-run" mode (for testing without calling LLMs), and writes structured JSON-line logs to `agent.log` with timestamps and event types.

**Strengthen memory management**
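The actual limits live in MeTTa, but the intended policy can be sketched in Python; the limit values and episode shape below are illustrative assumptions:

```python
import time

MAX_HISTORY = 50       # mirrors maxHistory (value illustrative)
MAX_RECALL_ITEMS = 5   # mirrors maxRecallItems (value illustrative)

def trim_history(history, max_history=MAX_HISTORY):
    """Keep only the most recent entries; oldest are dropped first."""
    return history[-max_history:]

def recall_recent(episodes, window_seconds, now=None, max_items=MAX_RECALL_ITEMS):
    """Time-based recall: newest episodes inside the window, capped at max_items.

    Each episode is assumed to be a dict with a "ts" Unix timestamp.
    """
    now = time.time() if now is None else now
    recent = [e for e in episodes if now - e["ts"] <= window_seconds]
    recent.sort(key=lambda e: e["ts"], reverse=True)
    return recent[:max_items]
```

Bounding both the stored history and the recalled slice keeps prompt size (and cost) predictable as the agent accumulates episodes.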
Update `src/memory.metta` to include configurable limits (`maxHistory`, `maxRecallItems`), time-based episode recall, and better response cleaning.

**Revise the system prompt**
Also enhance the system prompt in `memory/prompt.txt` with clearer instructions for goal pursuit, memory usage, and error handling.

**Refine the core agent loop**
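Two of those helpers could be sketched as follows (Python stand-ins for logic that would live in MeTTa; names hypothetical):

```python
def paren_balance(text):
    """Net open-parenthesis count: 0 means balanced, positive means unclosed
    opens, negative means a stray close appeared before its open."""
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return depth  # early exit: close with no matching open
    return depth

def clean_response(text):
    """Strip markdown code fences and surrounding whitespace from an LLM reply."""
    text = text.strip()
    if text.startswith("```"):
        lines = text.splitlines()[1:]          # drop the opening fence line
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]                 # drop a trailing fence, if any
        text = "\n".join(lines).strip()
    return text
```

Checking `paren_balance` before evaluating a reply lets the loop reject or repair malformed expressions instead of crashing the interpreter on them.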
Modify `src/loop.metta` to use helper functions for parenthesis balancing, response cleaning, and smoother state transitions. This improves command parsing and the overall stability of the interaction loop.

**Add proper project hygiene files**
Include a solid `.gitignore` (covering Python artifacts, `.env`, memory files, etc.) and a `.dockerignore` to keep Docker builds lean and prevent accidental leakage of secrets or cache files.

**Introduce version tracking and a documentation overhaul**
Add a `VERSION` file and significantly expand `README.md` with quick-start guides, provider setup examples, command references, architecture notes, and local development instructions.

**Enable persistent memory with volume mounts**
Configure Docker volumes so the `memory/` directory (including `history.metta`) is read-write mounted from the host. This preserves agent memory and learned skills across container restarts.