
This repository contains patterns and implementations for building your own agents and having those agents build tools and other agents for various use cases.


madhurprash/meta-tools-and-agents


Meta-Agentic System with Dynamic Tool Creation

Use Case

This project demonstrates the construction of a highly autonomous agent system that possesses meta-tooling and meta-agentic capabilities. The system is designed to go beyond traditional static agent implementations by creating agents that can dynamically understand, adapt, and evolve their own capabilities over time. The core use case revolves around building agents that can not only perform tasks but also create and manage other agents, modify their own tool sets, and adapt to new challenges by generating custom tools at runtime.

The system addresses the fundamental limitation of conventional AI agents that rely on predefined, static tool sets. Instead, this implementation creates an agent ecosystem where individual agents can assess their current capabilities, identify gaps in their functionality, and dynamically create new tools or even spawn new specialized agents to handle specific tasks. This meta-level reasoning and self-modification capability represents a significant advancement toward truly autonomous AI systems that can operate effectively in unpredictable environments without human intervention for capability expansion.

The practical applications of this system extend across numerous domains where adaptive problem-solving is crucial. For instance, in financial analysis, the system can dynamically create specialized agents for different market analysis tasks, generate custom data processing tools for new financial instruments, or adapt to changing regulatory requirements by modifying its analytical capabilities. In software development environments, the system can create specialized debugging agents, generate custom testing tools, or spawn agents dedicated to specific programming languages or frameworks as needed.

(Architecture flow diagram)

Implementation Overview

The implementation leverages the Strands Agents SDK as its foundation, which provides a robust framework for building agents that combine foundation models with comprehensive tool suites. The architecture treats each agent as a combination of a foundation model plus a suite of tools, where agents can define their own tools and capabilities, enabling them to create new agents or modify existing ones dynamically.

Use LangGraph with Strands [BigTool Implementation]

At the core of the system is a sophisticated tool management layer built on top of LangGraph BigTool, which enables scalable access to large numbers of tools through semantic search capabilities. Rather than overwhelming agents with hundreds of tools simultaneously, the system employs a vector database approach where tools are stored with their metadata, descriptions, and categorizations. When an agent encounters a task, it performs semantic searches to identify the most relevant tools for the specific context, significantly reducing cognitive load while maintaining access to extensive capabilities.
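The retrieval flow above can be sketched in a few lines. This is a minimal, runnable stand-in: a real deployment would store embedding vectors (e.g. from a Bedrock Titan model) in a vector database, while here a simple bag-of-words cosine similarity plays that role so the flow is self-contained.

```python
# Minimal sketch of a semantic tool store: tools are stored with their
# descriptions, and a query is matched against them by similarity. The
# bag-of-words cosine below is a stand-in for real embeddings.
import math
from collections import Counter

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToolStore:
    def __init__(self):
        self._tools = {}  # name -> (description, vector)

    def add(self, name: str, description: str):
        self._tools[name] = (description, _vec(description))

    def search(self, query: str, k: int = 3) -> list[str]:
        qv = _vec(query)
        ranked = sorted(self._tools,
                        key=lambda n: _cosine(qv, self._tools[n][1]),
                        reverse=True)
        return ranked[:k]

store = ToolStore()
store.add("current_time", "get the current time in any timezone")
store.add("shell", "execute shell commands on the host")
store.add("editor", "create and edit source code files")

print(store.search("what time is it in India?", k=1))  # → ['current_time']
```

Only the top-k matches are registered with the agent, which is what keeps the per-task tool set small even when the store holds hundreds of tools.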

The meta-tooling capabilities are implemented through three primary tools that work in concert to enable dynamic tool creation and management:

  - load_tool: dynamically loads Python tools at runtime, registers new tools with the agent's registry, enables hot-reloading of capabilities, and validates tool specifications before loading.
  - editor: creates and modifies tool code files with syntax highlighting, makes precise string replacements in existing tools, inserts code at specific locations, finds and navigates to specific sections of code, and creates backups with undo capability before modifications.
  - shell: executes shell commands to debug tool creation and execution problems, supports sequential or parallel command execution, and manages working-directory context for proper execution.
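The write-then-load loop behind these tools can be illustrated with plain `importlib`. This is a hedged sketch, not the Strands implementation: writing the file stands in for the `editor` tool, and the dynamic import stands in for `load_tool`.

```python
# Sketch of the meta-tooling loop: write a new tool's source to disk
# ("editor" step), then hot-load it into the running process ("load_tool"
# step). Plain importlib is used as a stand-in for the Strands tools.
import importlib.util
import tempfile
from pathlib import Path

TOOL_SOURCE = '''
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())
'''

def create_and_load_tool(name: str, source: str, directory: Path):
    path = directory / f"{name}.py"
    path.write_text(source)            # "editor": write the tool file
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)    # "load_tool": hot-load it
    return module

with tempfile.TemporaryDirectory() as d:
    tool = create_and_load_tool("word_count_tool", TOOL_SOURCE, Path(d))
    result = tool.word_count("dynamic tool creation at runtime")
print(result)  # → 5
```

In the real system the generated source would come from the model itself, and the `shell` tool would be available to debug a tool that fails to load.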

The system architecture employs a LangGraph-based workflow that orchestrates the interaction between semantic tool retrieval and dynamic tool creation. When a user presents a task, the workflow first performs semantic search across the stored tool repository to identify relevant existing tools. If suitable tools are found, they are dynamically registered with the agent and used to complete the task. However, if no appropriate tools exist, the system enters a meta-tooling mode where it analyzes the requirements and dynamically creates new tools using the load_tool, editor, and shell capabilities.
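The retrieve-or-create decision at the heart of this workflow can be sketched in plain Python. In the actual system these branches would be LangGraph nodes joined by a conditional edge; the names below are illustrative, not the repo's API.

```python
# Sketch of the workflow's routing logic: search the store first, and only
# enter meta-tooling mode when no suitable tool exists.
class StubStore:
    """Trivial stand-in for the semantic tool store."""
    def __init__(self, tools):
        self.tools = tools

    def search(self, task):
        # Crude match: a tool is "relevant" if its name appears in the task.
        return [n for n in self.tools if n in task]

def run_task(task: str, store, create_tool):
    matches = store.search(task)          # semantic search over existing tools
    if matches:
        return ("use_existing", matches)  # register and use retrieved tools
    # No suitable tool found: meta-tooling mode builds one on the fly
    # (via load_tool / editor / shell in the real system).
    return ("created", [create_tool(task)])

store = StubStore(["current_time", "shell"])
route, tools = run_task("use current_time to answer", store, lambda t: "custom_tool")
route2, tools2 = run_task("parse this pdf", store, lambda t: "pdf_parser")
print(route, tools)    # → use_existing ['current_time']
print(route2, tools2)  # → created ['pdf_parser']
```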

Memory management & tool store

Memory management is implemented through LangGraph's checkpointing system, which enables the graph to persist its state across executions. This allows agents to maintain context about previously created tools, learned capabilities, and successful problem-solving patterns. The memory system supports both short-term operational memory for individual task execution and long-term strategic memory for capability evolution and optimization.
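The effect of checkpointing can be shown with a minimal stand-in: state keyed by a thread id that survives across runs. LangGraph ships real checkpointers for this (in-memory, SQLite, and others); the dict-based version below is only meant to make the idea concrete.

```python
# Minimal sketch of checkpointed agent state: each thread id maps to a
# persisted state dict, so a later run on the same thread sees what an
# earlier run created. A stand-in for LangGraph's checkpointer.
class Checkpointer:
    def __init__(self):
        self._store = {}

    def load(self, thread_id: str) -> dict:
        return self._store.get(thread_id, {"created_tools": [], "history": []})

    def save(self, thread_id: str, state: dict):
        self._store[thread_id] = state

ckpt = Checkpointer()

# First run: the agent creates a tool and the fact is checkpointed.
state = ckpt.load("session-1")
state["created_tools"].append("pdf_parser")
ckpt.save("session-1", state)

# A later run on the same thread recovers the earlier capability.
print(ckpt.load("session-1")["created_tools"])  # → ['pdf_parser']
```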

The tool storage and retrieval system utilizes Amazon Bedrock's Titan embedding model to create semantic representations of tools, enabling sophisticated similarity-based searches. Tools are categorized into logical groups such as file operations, system integration, memory and storage, network communication, code execution, mathematical operations, cloud services, media processing, documentation, AI reasoning, task management, system control, and agent coordination. This categorization, combined with semantic search, ensures that agents can quickly identify the most appropriate tools for any given task.
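Embedding a tool description with Titan goes through the Bedrock runtime `invoke_model` API. The request/response shape below matches the Titan text embedding models as I understand them (verify against the current Bedrock documentation); a stub client is injected so the sketch runs without AWS credentials — for real use, pass `boto3.client("bedrock-runtime")` instead.

```python
# Sketch of embedding a tool description with an Amazon Titan embedding
# model via the Bedrock runtime API. The stub client mimics the response
# shape so the example is runnable offline.
import json

MODEL_ID = "amazon.titan-embed-text-v2:0"

def embed(client, text: str) -> list[float]:
    response = client.invoke_model(
        modelId=MODEL_ID,
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

class _StubBody:
    def read(self):
        return json.dumps({"embedding": [0.1, 0.2, 0.3]})

class _StubClient:  # stand-in for boto3.client("bedrock-runtime")
    def invoke_model(self, **kwargs):
        assert "inputText" in json.loads(kwargs["body"])
        return {"body": _StubBody()}

vector = embed(_StubClient(), "current_time: get the current time in any timezone")
print(len(vector))  # → 3
```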

Getting Started

Follow the steps below to get started with running this meta-agentic system in your environment:

  1. Install uv and add it to your PATH:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"
```

  2. Create the Python virtual environment from the pyproject.toml file:

```bash
uv venv && source .venv/bin/activate && uv pip sync pyproject.toml
```

  3. Create a Jupyter kernel. This is needed to run the notebook on your EC2 instance:

```bash
uv add zmq
python -m ipykernel install --user --name=.venv --display-name="Python (uv env)"
```

  4. Open the notebook in JupyterLab and select the "Python (uv env)" kernel if it is not selected automatically (if you do not see it, refresh the page or close and reopen the notebook, then pick it from the kernel dropdown).

Configurations

All information to run this sample is fetched from a comprehensive configuration file. This configuration file contains detailed information about model identifiers, inference parameters, agent specifications, tool configurations, and other critical settings for each agent in this multi-agent system.

```yaml
general:
  name: "Strands meta tooling and meta agentic capabilities"
  description: |
    This agentic system is designed to provide meta-tooling and meta-agentic capabilities
    for the Strands Agents SDK. It includes a finance agent that can generate code for
    financial calculations and a code generation agent that can generate code for various tasks.

# This config file describes the agent to be created. The agent can create
# tools on the fly and has access to an in-memory store containing information
# about all pre-built tools that Strands has to offer. When the user asks a
# question, the agent first checks the tool registry (LangGraph BigTool) for
# relevant tools; those tools are registered with the agent at runtime so it
# can use them to answer the question. If the agent needs a new tool, it can
# use the pre-existing load_tool, editor, and shell tools to create one on
# the fly. The agent is also prompted to create other agents if needed.
autonomous_agent_information:
  # Directory holding the prompt template that contains the system prompt
  # for the autonomous agent
  prompt_template_dir: prompt_templates
  # Path to the agent's system prompt. System prompts provide high-level
  # instructions to the model about its role, capabilities, and constraints,
  # setting the foundation for how the model should behave throughout the
  # conversation. You can specify the system prompt when initializing an Agent.
  system_prompt_fpath: autonomous_agent_system_prompt.txt
  # Model ID the agent will use
  agent_model_id: us.anthropic.claude-sonnet-4-20250514-v1:0
```

Example usage:

  1. Agent configuration:

```python
from strands import Agent
from strands.models import BedrockModel
from strands_tools import load_tool, shell, editor

# prompt_template and fetch_docs are defined earlier in the notebook
print("Going to create an autonomous agent with the tools: load_tool, shell, editor, fetch_docs")

# First, create the Strands agent that will be used as a highly autonomous agent
autonomous_strands_agent = Agent(
    system_prompt=prompt_template,
    tools=[
        load_tool,
        shell,
        editor,
        fetch_docs,  # fetches documents from the FAISS index
    ],
    model=BedrockModel(
        model_id="us.anthropic.claude-3-5-haiku-20241022-v1:0",
        max_tokens=2048,
    ),
)
print(f"Agent created: {autonomous_strands_agent}")
```

This agent is autonomous: the LangGraph node fetches the most relevant tools from the tool directory by semantic similarity, then the top matches are hot-reloaded and registered with the agent's tool registry at runtime so the agent can use them immediately.
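Runtime registration of retrieved tools can be sketched with `importlib`: each retrieved tool name is imported and added to the agent's registry. This mirrors the "Added tool module" / "Successfully registered tools" log output shown later; stdlib submodules stand in for `strands_tools` modules here so the sketch runs anywhere.

```python
# Hedged sketch of hot-registering retrieved tool modules at runtime.
# In the real system, package would be "strands_tools" and names would be
# tool names like ["current_time", "load_tool"].
import importlib

def register_tools(registry: dict, package: str, names: list[str]) -> list[str]:
    registered = []
    for name in names:
        module = importlib.import_module(f"{package}.{name}")
        registry[name] = module          # make the tool available to the agent
        registered.append(name)
    return registered

registry = {}
# Stdlib submodules of `email` are used purely as importable stand-ins.
registered = register_tools(registry, "email", ["parser", "message"])
print(registered)  # → ['parser', 'message']
```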

For example, ask about the current time — a capability the agent does not start with, but which exists as a pre-built Strands tool stored in the tool registry:

```
Current User Request: What is the current time in India?

User message: What is the current time in India?
Going to be retrieving the relevant tools from the tool registry based on the user message.
Retrieved 5 tools for query: 'What is the current time in India?'
  - current_time: current_time: ...
  - load_tool: load_tool: ...
  - stop: stop: ...
  - use_aws: use_aws: ...
  - python_repl: python_repl: ...
Relevant tools retrieved from the tool registry: ['current_time', 'load_tool', 'stop', 'use_aws', 'python_repl', 'use_aws', 'python_repl']
Going to register the following tools: ['current_time', 'load_tool', 'stop', 'use_aws', 'python_repl', 'use_aws', 'python_repl']
/opt/homebrew/Caskroom/miniconda/base/lib/python3.12/asyncio/base_events.py:732: RuntimeWarning: coroutine 'get_user_input_async' was never awaited
  def time(self):
RuntimeWarning: Enable tracemalloc to get the object allocation traceback

Executing: from strands_tools import stop, current_time, use_aws, python_repl, load_tool
tool_instance: <module 'strands_tools.stop' from '/Users/madhurpt/Desktop/meta-tools-and-agents/.venv/lib/python3.12/site-packages/strands_tools/stop.py'>
Added tool module: stop
tool_instance: <module 'strands_tools.current_time' from '/Users/madhurpt/Desktop/meta-tools-and-agents/.venv/lib/python3.12/site-packages/strands_tools/current_time.py'>
Added tool module: current_time
tool_instance: <module 'strands_tools.use_aws' from '/Users/madhurpt/Desktop/meta-tools-and-agents/.venv/lib/python3.12/site-packages/strands_tools/use_aws.py'>
Added tool module: use_aws
tool_instance: <module 'strands_tools.python_repl' from '/Users/madhurpt/Desktop/meta-tools-and-agents/.venv/lib/python3.12/site-packages/strands_tools/python_repl.py'>
Added tool module: python_repl
tool_instance: <module 'strands_tools.load_tool' from '/Users/madhurpt/Desktop/meta-tools-and-agents/.venv/lib/python3.12/site-packages/strands_tools/load_tool.py'>
Added tool module: load_tool
Successfully registered tools: ['stop', 'current_time', 'use_aws', 'python_repl', 'load_tool']
Now registered: ['stop', 'current_time', 'use_aws', 'python_repl', 'load_tool']
I'll help you get the current time in India right away.
Tool #5: current_time

The current time in India is 2025-06-20 at 02:25:57 AM, with a timezone offset of +05:30 from UTC.

A few additional details:
- Timezone: Asia/Kolkata (Indian Standard Time)
- Date: June 20, 2025
- Time: 02:25:57 AM
- Offset: +05:30 hours from Coordinated Universal Time (UTC)

Is there anything else you would like to know about the current time?
```

