Commit e90e92a

fix LangChainMCP docs (#347)
1 parent 845a25c commit e90e92a

File tree: 2 files changed (+81, −105 lines)
docs/content/advanced/langchain_mcp.md

Lines changed: 81 additions & 105 deletions
This document describes how to re-use the configuration tested in the **AI Optimizer & Toolkit**.

**NOTICE**: Only `Ollama` and `OpenAI` configurations are currently supported. Full support will come later.

## Export config

In the **AI Optimizer & Toolkit** web interface, after having tested a configuration, go to `Settings/Client Settings`:

![Client Settings](./images/export.png)

and, **ONLY** if the `Ollama` or `OpenAI` providers have been selected for **both** the chat and embeddings models:

* select the checkbox `Include Sensitive Settings`
* press the `Download LangchainMCP` button to download a zip file containing a full project template that runs the currently selected AI Optimizer configuration
* unzip the file in a `<PROJECT_DIR>` dir.

To run it, follow the next steps.

**NOTICE**:

* If you want to run the application on another server, remember to change in `optimizer_settings.json` any reference that is no longer local, such as the hostnames of the LLM servers, the Database, the wallet dir, and so on.
* If you don't see the `Download LangchainMCP` button, check again that you have selected Ollama or OpenAI for both the chat and the vectorstore embedding model.
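The exact schema of the exported `optimizer_settings.json` is not documented here, so the helper below is only a convenience sketch (not part of the toolkit): it walks whatever JSON it finds and flags string values that look local, to help you spot references that need changing before moving to another server.

```python
import json

def find_local_refs(node, path=""):
    """Recursively collect JSON string values that look like local references."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            hits += find_local_refs(value, f"{path}.{key}" if path else key)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits += find_local_refs(value, f"{path}[{i}]")
    elif isinstance(node, str) and any(tok in node for tok in ("localhost", "127.0.0.1")):
        hits.append((path, node))
    return hits

# Usage (assuming the exported file is in the current directory):
# with open("optimizer_settings.json") as f:
#     for p, v in find_local_refs(json.load(f)):
#         print(p, "->", v)
```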

## Pre-requisites

You need:

- Node.js: v20.17.0+
- Claude Desktop free
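A quick way to check the Node.js prerequisite is to compare the version reported by `node --version` against the minimum. The helper below is just a convenience sketch, not part of the project:

```python
def meets_minimum(version: str, minimum=(20, 17, 0)) -> bool:
    """Compare a `node --version` string such as 'v20.17.0' against a minimum."""
    parts = tuple(int(p) for p in version.lstrip("v").strip().split("."))
    return parts >= minimum

# Usage: feed it the output of `node --version`, e.g.
# import subprocess
# out = subprocess.run(["node", "--version"], capture_output=True, text=True).stdout
# print(meets_minimum(out))

print(meets_minimum("v20.17.0"))  # → True
print(meets_minimum("v18.19.1"))  # → False
```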

## Setup

With **[`uv`](https://docs.astral.sh/uv/getting-started/installation/)** installed, run the following commands in your current project directory `<PROJECT_DIR>`:

```bash
uv init --python=3.11 --no-workspace
source .venv/bin/activate
uv add mcp langchain-core==0.3.52 oracledb~=3.1 langchain-community==0.3.21 langchain-huggingface==0.1.2 langchain-openai==0.3.13 langchain-ollama==0.3.2
```

## Standalone client

There is a client that lets you run the service via the command line, to test it without an MCP client. In your `<PROJECT_DIR>`:

```bash
uv run rag_base_optimizer_config_direct.py "[YOUR_QUESTION]"
```

## Run the RAG Tool via a remote MCP server

In `rag_base_optimizer_config_mcp.py`:

* Check that the server initialization matches the `Remote client` line in the following code; otherwise change it as shown:

```python
# Initialize FastMCP server
mcp = FastMCP("rag", host="0.0.0.0", port=9090)  # Remote client
#mcp = FastMCP("rag")  # Local
```

* Check, or change according to, the following lines of code:

```python
#mcp.run(transport='stdio')
#mcp.run(transport='sse')
mcp.run(transport='streamable-http')
```

* Start the MCP server in another shell with:

```bash
uv run rag_base_optimizer_config_mcp.py
```
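Before pointing a client at the server, you can sanity-check that something is actually listening on port 9090 (the port assumed in the configuration above). A minimal stdlib sketch:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage, once `uv run rag_base_optimizer_config_mcp.py` is running:
# print(is_listening("localhost", 9090))
```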

## Quick test via MCP "inspector"

* Run the inspector:

```bash
npx @modelcontextprotocol/inspector@0.15.0
```

* connect to the link reported, which looks like this:

```
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=1b40988bb02624b74472a9e8634a6d78802ced91c34433bf427cb3533c8fee2c
```

* set the `Transport Type` to `Streamable HTTP`
* test the tool developed, setting `URL` to `http://localhost:9090/mcp`.

## Claude Desktop setup

Claude Desktop, in the free version, does not allow connecting to a remote server. You can overcome this, for testing purposes only, with a proxy library called `mcp-remote`. If you have already installed Node.js v20.17.0+, it should work.

* In the **Claude Desktop** application, in `Settings/Developer/Edit Config`, open `claude_desktop_config.json` and add the reference to the local MCP server for RAG in `streamable-http`:

```json
{
  "mcpServers": {
    ...
    ,
    "rag": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://127.0.0.1:9090/mcp"
      ]
    }
  }
}
```
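If you prefer not to edit the file by hand, the sketch below merges the `rag` entry into an existing `claude_desktop_config.json` without touching other configured servers. It assumes only the JSON layout shown above; the config file path varies per OS.

```python
import json

RAG_ENTRY = {
    "command": "npx",
    "args": ["mcp-remote", "http://127.0.0.1:9090/mcp"],
}

def add_rag_server(config: dict) -> dict:
    """Add (or replace) the 'rag' entry under mcpServers, keeping other servers."""
    config.setdefault("mcpServers", {})["rag"] = RAG_ENTRY
    return config

# Usage (path varies per OS, e.g. on macOS it is usually
# ~/Library/Application Support/Claude/claude_desktop_config.json):
# import os
# p = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
# with open(p) as f:
#     cfg = json.load(f)
# with open(p, "w") as f:
#     json.dump(add_rag_server(cfg), f, indent=2)
```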

* In the **Claude Desktop** application, in `Settings/General/Claude Settings/Configure`, under the `Profile` tab, update fields like:

- `Full Name`
- `What should we call you`

and so on, putting in `What personal preferences should Claude consider in responses?` the following text:

```
#INSTRUCTION:
Always call the rag_tool tool when the user asks a factual or information-seeking question, even if you think you know the answer.
Show the rag_tool message as-is, without modification.
```

This will impose the usage of `rag_tool` in any case.

* Restart **Claude Desktop**.

* You will see two warnings about the rag_tool configuration: they will disappear and will not cause any issue in activating the tool.

* Start a conversation. You should see a pop-up asking you to allow the `rag` tool to be used to answer the questions:

![Rag Tool](./images/rag_tool.png)

If the question is related to the knowledge base content stored in the vector store, you will get an answer based on that information. Otherwise, Claude will try to answer using the information on which the LLM has been trained, or other tools configured in the same Claude Desktop.

**NOTICE**: If you have any problem running it, check in the logs whether it is related to an old npx/nodejs version being used with the mcp-remote library. Check with:

```bash
node --version
npx --version
```