Merged
8 changes: 5 additions & 3 deletions src/oss/javascript/integrations/middleware/anthropic.mdx
@@ -10,7 +10,9 @@ Middleware specifically designed for Anthropic's Claude models. Learn more about

## Prompt caching

Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls.

Prompt caching is useful for the following:

- Applications with long, static system prompts that don't change between requests
- Agents with many tool definitions that remain constant across invocations
@@ -44,8 +46,8 @@ const agent = createAgent({
The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency.

**How it works:**
1. First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request
1. First request: System prompt, tools, and the user message *"Hi, my name is Bob"* are sent to the API and cached
2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message *"What's my name?"* needs to be processed, plus the model's response from the first request
3. This pattern continues for each turn, with each request reusing the cached conversation history

24 changes: 17 additions & 7 deletions src/oss/python/integrations/middleware/anthropic.mdx
@@ -14,7 +14,9 @@ Middleware specifically designed for Anthropic's Claude models. Learn more about

## Prompt caching

Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls. Prompt caching is useful for the following:
Reduce costs and latency by caching static or repetitive prompt content (like system prompts, tool definitions, and conversation history) on Anthropic's servers. This middleware implements a **conversational caching strategy** that places cache breakpoints after the most recent message, allowing the entire conversation history (including the latest user message) to be cached and reused in subsequent API calls.

Prompt caching is useful for the following:

- Applications with long, static system prompts that don't change between requests
- Agents with many tool definitions that remain constant across invocations
@@ -64,8 +66,8 @@ agent = create_agent(
The middleware caches content up to and including the latest message in each request. On subsequent requests within the TTL window (5 minutes or 1 hour), previously seen content is retrieved from cache rather than reprocessed, significantly reducing costs and latency.

**How it works:**
1. First request: System prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message "What's my name?" needs to be processed, plus the model's response from the first request
1. First request: System prompt, tools, and the user message *"Hi, my name is Bob"* are sent to the API and cached
2. Second request: The cached content (system prompt, tools, and first message) is retrieved from cache. Only the new message *"What's my name?"* needs to be processed, plus the model's response from the first request
3. This pattern continues for each turn, with each request reusing the cached conversation history
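The turn-by-turn pattern above can be sketched with a small standalone simulation. This is only an illustration of the breakpoint arithmetic (counting messages as a stand-in for tokens), not the middleware's actual implementation:

```python
# Illustrative simulation: placing the cache breakpoint after the latest
# message means each request reuses all previously seen conversation content.

def processed_vs_cached(history: list[str], cached_prefix: int) -> tuple[int, int]:
    """Return (newly_processed, read_from_cache) message counts for one request."""
    cached = min(cached_prefix, len(history))
    return len(history) - cached, cached

conversation: list[str] = []
cached_prefix = 0  # messages covered by the breakpoint from the previous request

report = []
for user_msg, reply in [("Hi, my name is Bob", "Hello Bob!"),
                        ("What's my name?", "Your name is Bob.")]:
    conversation.append(user_msg)
    report.append(processed_vs_cached(conversation, cached_prefix))
    # The breakpoint sits after the latest message, so the whole conversation
    # (including the model's reply) is cached for the next request.
    conversation.append(reply)
    cached_prefix = len(conversation)

print(report)  # [(1, 0), (1, 2)] — after turn 1, only the new message is uncached
```

The first request processes everything cold; every later request within the TTL window pays only for the newest message.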

@@ -99,7 +101,9 @@ agent.invoke({"messages": [HumanMessage("What's my name?")]})

## Bash tool

Execute Claude's native `bash_20250124` tool with local command execution. The bash tool middleware is useful for the following:
Execute Claude's native `bash_20250124` tool with local command execution.

The bash tool middleware is useful for the following:

- Using Claude's built-in bash tool with local execution
- Leveraging Claude's optimized bash tool interface
@@ -185,7 +189,9 @@ result = agent.invoke({
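The "local command execution" half of this pairing can be sketched as follows. This is a hypothetical executor built on `subprocess`; the wiring that routes Claude's `bash_20250124` tool calls to it is not shown:

```python
# Illustrative local executor: runs a command the model requested and
# returns its output in a shape the model can read back.
import subprocess

def run_bash(command: str, timeout: float = 30.0) -> dict:
    """Execute a shell command locally and return stdout/stderr/exit code."""
    result = subprocess.run(
        ["bash", "-c", command],
        capture_output=True, text=True, timeout=timeout,
    )
    return {"stdout": result.stdout, "stderr": result.stderr,
            "exit_code": result.returncode}

out = run_bash("echo hello")
```

A real deployment would add sandboxing and command filtering before executing anything the model asks for.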

## Text editor

Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing. The text editor middleware is useful for the following:
Provide Claude's text editor tool (`text_editor_20250728`) for file creation and editing.

The text editor middleware is useful for the following:

- File-based agent workflows
- Code editing and refactoring tasks
@@ -196,7 +202,9 @@ Provide Claude's text editor tool (`text_editor_20250728`) for file creation and
Available in two variants: **State-based** (files in LangGraph state) and **Filesystem-based** (files on disk).
</Note>

**API reference:** @[`StateClaudeTextEditorMiddleware`], @[`FilesystemClaudeTextEditorMiddleware`]
**API references:**
- @[`StateClaudeTextEditorMiddleware`]
- @[`FilesystemClaudeTextEditorMiddleware`]

```python
from langchain_anthropic import ChatAnthropic
```
@@ -286,7 +294,9 @@ agent_fs = create_agent(
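The state-based variant can be sketched as a dict-backed store. The command names (`create`, `view`, `str_replace`) mirror Claude's text editor commands, but this class is an illustrative simplification, not the middleware's API:

```python
# Illustrative state-based editor: files live in a dict rather than on disk,
# which is what makes the variant checkpointable alongside graph state.
class InMemoryTextEditor:
    def __init__(self):
        self.files: dict[str, str] = {}

    def create(self, path: str, file_text: str) -> None:
        self.files[path] = file_text

    def view(self, path: str) -> str:
        return self.files[path]

    def str_replace(self, path: str, old_str: str, new_str: str) -> None:
        # Replace a unique occurrence; ambiguous edits are rejected.
        if self.files[path].count(old_str) != 1:
            raise ValueError("old_str must appear exactly once")
        self.files[path] = self.files[path].replace(old_str, new_str)

editor = InMemoryTextEditor()
editor.create("app.py", "print('hello')\n")
editor.str_replace("app.py", "hello", "world")
```

The filesystem-based variant swaps the dict for real file I/O while keeping the same command surface.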

## Memory

Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns. The memory middleware is useful for the following:
Provide Claude's memory tool (`memory_20250818`) for persistent agent memory across conversation turns.

The memory middleware is useful for the following:

- Long-running agent conversations
- Maintaining context across interruptions
1 change: 0 additions & 1 deletion src/oss/python/integrations/providers/elasticsearch.mdx
@@ -86,7 +86,6 @@ from langchain_community.retrievers import ElasticSearchBM25Retriever

## LLM cache


```python
from langchain_elasticsearch import ElasticsearchCache
```