
Commit 4b8127f

oss: nits (#1828)
1 parent c7e2914 commit 4b8127f

3 files changed: +22 -11 lines changed

src/oss/javascript/integrations/middleware/anthropic.mdx

Lines changed: 5 additions & 3 deletions
@@ -10,7 +10,9 @@ Middleware specifically designed for Anthropic's Claude models. Learn more about
 
 ## Prompt caching
 
-Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
+Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls.
+
+Prompt caching is useful for the following:
 
 - Applications with long, static system prompts that don't change between requests
 - Agents with many tool definitions that remain constant across invocations
@@ -44,8 +46,8 @@ const agent = createAgent({
 The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency.
 
 **How it works:**
-1. First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
-2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request
+1. First request: System prompt, tools, and the user message *"Hi, my name is Bob"* are sent to the API and cached
+2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message *"What's my name?"* needs to be processed, plus the model's response from the first request
 3. This pattern continues for each turn, with each request reusing the cached conversation history
 
 ```typescript

src/oss/python/integrations/middleware/anthropic.mdx

Lines changed: 17 additions & 7 deletions
@@ -14,7 +14,9 @@ Middleware specifically designed for Anthropic's Claude models. Learn more about
 
 ## Prompt caching
 
-Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
+Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls.
+
+Prompt caching is useful for the following:
 
 - Applications with long, static system prompts that don't change between requests
 - Agents with many tool definitions that remain constant across invocations
@@ -64,8 +66,8 @@ agent = create_agent(
 The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency.
 
 **How it works:**
-1. First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
-2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request
+1. First request: System prompt, tools, and the user message *"Hi, my name is Bob"* are sent to the API and cached
+2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message *"What's my name?"* needs to be processed, plus the model's response from the first request
 3. This pattern continues for each turn, with each request reusing the cached conversation history
 
 ```python
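(For context on the flow described above, a minimal sketch of how the caching middleware might be wired into an agent. The class name `AnthropicPromptCachingMiddleware`, its import path, the `ttl` value, and the model name are assumptions for illustration, not taken from this diff; only the two invocations mirror the docs' own example.)

```python
# Sketch only: middleware class name, import paths, and ttl value are assumptions.
from langchain.agents import create_agent
from langchain.agents.middleware import AnthropicPromptCachingMiddleware  # assumed path
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5"),  # illustrative model name
    system_prompt="A long, static system prompt that is worth caching...",
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],  # 5-minute cache window
)

# Turn 1: system prompt, tools, and this message are written to the cache.
agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]})

# Turn 2 (within the TTL): previously seen content is read back from the cache,
# so only the new message and the prior model response are processed.
agent.invoke({"messages": [HumanMessage("What's my name?")]})
```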
@@ -99,7 +101,9 @@ agent.invoke({"messages": [HumanMessage("What's my name?")]})
 
 ## Bash tool
 
-Execute Claude's native `bash_20250124` tool with local command execution. The bash tool middleware is useful for the following:
+Execute Claude's native `bash_20250124` tool with local command execution.
+
+The bash tool middleware is useful for the following:
 
 - Using Claude's built-in bash tool with local execution
 - Leveraging Claude's optimized bash tool interface
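(A hypothetical sketch of attaching this middleware; the class name `ClaudeBashToolMiddleware` and its module path are placeholders, since the diff does not show them.)

```python
# Hypothetical: the middleware class name and module are placeholders.
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware  # assumed path

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5"),
    middleware=[ClaudeBashToolMiddleware()],  # exposes bash_20250124, runs commands locally
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "List the files in the current directory"}]
})
```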
@@ -185,7 +189,9 @@ result = agent.invoke({
 
 ## Text editor
 
-Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing. The text editor middleware is useful for the following:
+Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing.
+
+The text editor middleware is useful for the following:
 
 - File-based agent workflows
 - Code editing and refactoring tasks
@@ -196,7 +202,9 @@ Provide Claude's text editor tool (`text_editor_20250728`) for file creation and
 Available in two variants: **State-based** (files in LangGraph state) and **Filesystem-based** (files on disk).
 </Note>
 
-**API reference:** @[`StateClaudeTextEditorMiddleware`], @[`FilesystemClaudeTextEditorMiddleware`]
+**API references:**
+- @[`StateClaudeTextEditorMiddleware`]
+- @[`FilesystemClaudeTextEditorMiddleware`]
 
 ```python
 from langchain_anthropic import ChatAnthropic
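(A sketch contrasting the two variants listed in the new API references. Only the two class names come from the docs above; the import module and everything else shown here are assumptions.)

```python
# Only the class names are from the docs; the module path is assumed.
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (  # assumed module
    FilesystemClaudeTextEditorMiddleware,
    StateClaudeTextEditorMiddleware,
)

model = ChatAnthropic(model="claude-sonnet-4-5")

# State-based variant: edited files live in LangGraph state.
agent_state = create_agent(model=model, middleware=[StateClaudeTextEditorMiddleware()])

# Filesystem-based variant: edits are applied to files on disk.
agent_fs = create_agent(model=model, middleware=[FilesystemClaudeTextEditorMiddleware()])
```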
@@ -286,7 +294,9 @@ agent_fs = create_agent(
 
 ## Memory
 
-Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns. The memory middleware is useful for the following:
+Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns.
+
+The memory middleware is useful for the following:
 
 - Long-running agent conversations
 - Maintaining context across interruptions
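(Hypothetical sketch; the class name `ClaudeMemoryMiddleware` and its module are placeholders not shown in this diff.)

```python
# Hypothetical: middleware class name and module are placeholders.
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeMemoryMiddleware  # assumed path

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5"),
    middleware=[ClaudeMemoryMiddleware()],  # exposes memory_20250818 to the model
)

# Facts the model stores via the memory tool persist across turns, so a later
# invocation can recall them even after an interruption.
agent.invoke({"messages": [{"role": "user", "content": "Remember that my favorite color is green"}]})
```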

src/oss/python/integrations/providers/elasticsearch.mdx

Lines changed: 0 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -86,7 +86,6 @@ from langchain_community.retrievers import ElasticSearchBM25Retriever
 
 ## LLM cache
 
-
 ```python
 from langchain_elasticsearch import ElasticsearchCache
 ```
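(The hunk above only shows the import; as a reminder of how that class is typically wired up, a minimal sketch follows. The index name and URL are placeholders, and the exact constructor arguments should be checked against `langchain_elasticsearch`.)

```python
# Placeholders for the index name and connection; verify arguments against the package.
from langchain_core.globals import set_llm_cache
from langchain_elasticsearch import ElasticsearchCache

# Route LLM response caching to an Elasticsearch index.
set_llm_cache(
    ElasticsearchCache(
        index_name="llm-cache",
        es_url="http://localhost:9200",
    )
)
```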
