Description
Hi, dear developer, thank you very much for your work!
I want to use the memU memory unit for development within LangGraph. While testing the code in langgraph_integration.md, I ran into an error. The code is as follows:
```python
import asyncio
import os

from memu.app.service import MemoryService
from memu.integrations.langgraph import MemULangGraphTools

# Ensure you have your configuration set (e.g., env vars for DB connection)
# os.environ["MEMU_DATABASE_URL"] = "..."
sqlite_dsn = "sqlite:///memu.db"


async def main():
    # 1. Initialize MemoryService
    memory_service = MemoryService(
        llm_profiles={
            "default": {
                "base_url": "http://localhost:11434/v1",
                "api_key": "ollama",
                "chat_model": "llama3.2:latest",
                "embed_model": "nomic-embed-text",
                "client_backend": "httpx",
                "timeout": 1200,  # 20 minutes
            }
        },
        database_config={
            "metadata_store": {
                "provider": "sqlite",
                "dsn": sqlite_dsn,
            },
            # SQLite uses brute-force vector search
            "vector_index": {"provider": "bruteforce"},
        },
        retrieve_config={"method": "rag"},
    )

    # If your service requires async init (check your specific implementation):
    # await memory_service.initialize()

    # 2. Instantiate MemULangGraphTools
    memu_tools = MemULangGraphTools(memory_service)

    # Get the list of tools (BaseTool compatible)
    tools = memu_tools.tools()

    # 3. Example Usage: Manually invoking a tool
    # In a real app, you would pass 'tools' to your LangGraph agent or StateGraph.

    # Save a memory
    save_tool = memu_tools.save_memory_tool()
    print("Saving memory...")
    result = await save_tool.ainvoke({
        "content": "The user prefers dark mode.",
        "user_id": "user_123",
        "metadata": {"category": "preferences"},
    })
    print(f"Save Result: {result}")

    # Search for a memory
    search_tool = memu_tools.search_memory_tool()
    print("\nSearching memory...")
    search_result = await search_tool.ainvoke({
        "query": "What are the user's preferences?",
        "user_id": "user_123",
    })
    print(f"Search Result:\n{search_result}")


if __name__ == "__main__":
    asyncio.run(main())
```
When I execute the code, the following error occurs:
```
Failed to parse XML
Traceback (most recent call last):
  File "/usr/lib/python3.13/xml/etree/ElementTree.py", line 1719, in feed
    self.parser.Parse(data, False)
    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
xml.parsers.expat.ExpatError: junk after document element: line 10, column 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/amsr/linchao/memU/src/memu/app/memorize.py", line 1406, in _parse_memory_type_response_xml
    root = ET.fromstring(xml_content)
  File "/home/amsr/linchao/memU/venv/lib/python3.13/site-packages/defusedxml/common.py", line 126, in fromstring
    parser.feed(text)
    ~~~~~~~~~~~^^^^^^
  File "/usr/lib/python3.13/xml/etree/ElementTree.py", line 1721, in feed
    self._raiseerror(v)
    ~~~~~~~~~~~~~~~~^^^
  File "/usr/lib/python3.13/xml/etree/ElementTree.py", line 1628, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: junk after document element: line 10, column 0
```
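The root cause is easy to demonstrate in isolation: `xml.etree.ElementTree.fromstring` (and defusedxml's wrapper around it) accepts exactly one root element, so any trailing prose or second element in the LLM response triggers the same error. A minimal sketch:

```python
import xml.etree.ElementTree as ET

# A single root element parses fine:
root = ET.fromstring("<item>The user prefers dark mode</item>")
print(root.text)

# Trailing text after the root element (as in the LLM output below)
# raises the same "junk after document element" ParseError:
err = None
try:
    ET.fromstring("<item>dark mode</item>\nThese items comply with the rules.")
except ET.ParseError as exc:
    err = exc
print(f"ParseError: {err}")
```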
I then checked the XML input and found that it contains more than one root element. The XML input is as follows (tags not rendered):
```
I can help you with extracting user memories based on the provided conversation.

After reviewing the conversation, I have identified the following valuable user information items:

- The user works as a product manager at an internet company
  Basic Information
- The user is 30 years old
  Basic Information
- The user likes experimenting with cooking after work
  Basic Information
- The user prefers dark mode
  Preferences

These items are extracted based on the conversation, and they comply with the extraction rules provided.
```
In fact, the XML input should only include "The user prefers dark mode", since the saved content was "The user prefers dark mode."
I asked an AI assistant, and it suggested the problem lies in the prompt used for the LLM. Could you please help fix this issue?
Thank you very much in advance.
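For now I am considering a client-side workaround: extracting only the first well-formed XML element from the LLM output before handing it to `ET.fromstring`. This is just a sketch, not a fix for the prompt itself, and the `items` tag name here is a placeholder for whatever root tag memU's prompt actually expects:

```python
import re
import xml.etree.ElementTree as ET


def extract_first_xml_element(text: str, tag: str) -> ET.Element:
    """Pull the first <tag>...</tag> span out of mixed LLM output and parse it."""
    match = re.search(rf"<{tag}\b[^>]*>.*?</{tag}>", text, re.DOTALL)
    if match is None:
        raise ValueError(f"no <{tag}> element found in LLM output")
    return ET.fromstring(match.group(0))


# Prose before and after the XML would normally break ET.fromstring:
raw = "Sure, here are the items:\n<items><item>dark mode</item></items>\nHope this helps!"
root = extract_first_xml_element(raw, "items")
print(root[0].text)  # -> dark mode
```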
Environment
OS: Ubuntu 24.04; backend: ollama; chat_model: llama3.2:latest; embed_model: nomic-embed-text
Steps to reproduce
- Execute the code shown in the Description above.
Expected behavior
Expected output:
The user prefers dark mode.
Version
v1.5.1
Severity
Critical
Additional Information
No response