Commit 09d6c71

Merge pull request #172 from MicrosoftDocs/main638681567818618109sync_temp
Repo sync for protected branch
2 parents dcc55c5 + 8709d3f commit 09d6c71

24 files changed: +1717 −187 lines

semantic-kernel/concepts/TOC.yml

Lines changed: 1 addition & 1 deletion
```
@@ -6,7 +6,7 @@
   href: enterprise-readiness/TOC.yml
 - name: Memory (Vector Stores)
   href: vector-store-connectors/TOC.yml
-- name: Prompt Engineering
+- name: Prompts
   href: prompts/TOC.yml
 - name: Plugins
   href: plugins/TOC.yml
```

semantic-kernel/concepts/ai-services/chat-completion/function-calling/function-choice-behaviors.md

Lines changed: 387 additions & 21 deletions
Large diffs are not rendered by default.

semantic-kernel/concepts/ai-services/chat-completion/function-calling/function-invocation.md

Lines changed: 144 additions & 16 deletions
@@ -7,21 +7,28 @@ ms.topic: conceptual
 ms.author: semenshi
 ms.service: semantic-kernel
 ---
-::: zone pivot="programming-language-csharp"
 # Function Invocation Modes
+
 When the AI model receives a prompt containing a list of functions, it may choose one or more of them for invocation to complete the prompt. When a function is chosen by the model, it needs to be **invoked** by Semantic Kernel.
 
 The function calling subsystem in Semantic Kernel has two modes of function invocation: **auto** and **manual**.
 
 Depending on the invocation mode, Semantic Kernel either does end-to-end function invocation or gives the caller control over the function invocation process.
 
 ## Auto Function Invocation
+
 Auto function invocation is the default mode of the Semantic Kernel function-calling subsystem. When the AI model chooses one or more functions, Semantic Kernel automatically invokes the chosen functions.
 The results of these function invocations are added to the chat history and sent to the model automatically in subsequent requests.
 The model then reasons about the chat history, chooses additional functions if needed, or generates the final response.
 This approach is fully automated and requires no manual intervention from the caller.
 
+> [!TIP]
+> Auto function invocation is different from the [auto function choice behavior](./function-choice-behaviors.md#using-auto-function-choice-behavior). The former dictates if functions should be invoked automatically by Semantic Kernel, while the latter determines if functions should be chosen automatically by the AI model.
+
 This example demonstrates how to use auto function invocation in Semantic Kernel. The AI model decides which functions to call to complete the prompt, and Semantic Kernel invokes them automatically.
+
+::: zone pivot="programming-language-csharp"
+
 ```csharp
 using Microsoft.SemanticKernel;
 
@@ -40,11 +47,55 @@ PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoi
 await kernel.InvokePromptAsync("Given the current time of day and weather, what is the likely color of the sky in Boston?", new(settings));
 ```
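For intuition, auto mode amounts to a loop that Semantic Kernel runs on the caller's behalf: send the history to the model, invoke whatever functions it chooses, append the results, and repeat until the model answers. The following plain-Python sketch simulates that loop with a stubbed model; `fake_model`, `auto_invoke`, and the plugin functions are illustrative names, not Semantic Kernel APIs.

```python
# Illustrative simulation of the auto function invocation loop.
# None of these names belong to the Semantic Kernel SDK.

def get_time():
    return "7:00 AM"

def get_weather():
    return "clear"

FUNCTIONS = {"get_time": get_time, "get_weather": get_weather}

def fake_model(history):
    """Stub model: first requests both functions, then answers."""
    called = {m["name"] for m in history if m["role"] == "tool"}
    if {"get_time", "get_weather"} - called:
        return {"function_calls": ["get_time", "get_weather"]}
    return {"answer": "The sky in Boston is likely light blue."}

def auto_invoke(prompt):
    history = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(history)
        calls = reply.get("function_calls")
        if not calls:  # no more function calls -> final response
            return reply["answer"], history
        for name in calls:  # the kernel invokes each chosen function...
            result = FUNCTIONS[name]()
            # ...and appends the result to the chat history automatically
            history.append({"role": "tool", "name": name, "content": result})

answer, history = auto_invoke("What color is the sky in Boston?")
print(answer)
# The sky in Boston is likely light blue.
```

The caller only sees the final answer; the intermediate function-call round trips stay inside the loop.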
 
+::: zone-end
+
+::: zone pivot="programming-language-python"
+
+```python
+from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
+from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
+from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings
+from semantic_kernel.functions.kernel_arguments import KernelArguments
+from semantic_kernel.kernel import Kernel
+
+kernel = Kernel()
+kernel.add_service(OpenAIChatCompletion())
+
+# Assuming that WeatherPlugin and DateTimePlugin are already implemented
+kernel.add_plugin(WeatherPlugin(), "WeatherPlugin")
+kernel.add_plugin(DateTimePlugin(), "DateTimePlugin")
+
+query = "What is the weather in Seattle today?"
+arguments = KernelArguments(
+    settings=PromptExecutionSettings(
+        # By default, functions are set to be automatically invoked.
+        # If you want to explicitly enable this behavior, you can do so with the following code:
+        # function_choice_behavior=FunctionChoiceBehavior.Auto(auto_invoke=True),
+        function_choice_behavior=FunctionChoiceBehavior.Auto(),
+    )
+)
+
+response = await kernel.invoke_prompt(query, arguments=arguments)
+```
+
+::: zone-end
+
+::: zone pivot="programming-language-java"
+
+> [!TIP]
+> More updates coming soon to the Java SDK.
+
+::: zone-end
+
+::: zone pivot="programming-language-csharp"
+
 Some AI models support parallel function calling, where the model chooses multiple functions for invocation. This can be useful in cases when invoking chosen functions takes a long time. For example, the AI may choose to retrieve the latest news and the current time simultaneously, rather than making a round trip per function.
 
 Semantic Kernel can invoke these functions in two different ways:
+
 - **Sequentially**: The functions are invoked one after another. This is the default behavior.
 - **Concurrently**: The functions are invoked at the same time. This can be enabled by setting the `FunctionChoiceBehaviorOptions.AllowConcurrentInvocation` property to `true`, as shown in the example below.
+
 ```csharp
 using Microsoft.SemanticKernel;
 
@@ -63,14 +114,39 @@ PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoi
 await kernel.InvokePromptAsync("Good morning! What is the current time and latest news headlines?", new(settings));
 ```
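The scheduling difference between the two ways can be sketched with plain `asyncio`; this illustrates the timing behavior only, not the Semantic Kernel implementation, and the plugin function names are made up.

```python
# Sketch: sequential vs concurrent invocation of two model-chosen functions.
# With two 0.2 s calls, sequential takes ~0.4 s and concurrent ~0.2 s.
import asyncio
import time

async def fetch_news():
    await asyncio.sleep(0.2)  # stand-in for a slow I/O-bound plugin call
    return "headlines"

async def fetch_time():
    await asyncio.sleep(0.2)
    return "7:00 AM"

async def invoke_sequentially(calls):
    # Default behavior: one function after another.
    return [await call() for call in calls]

async def invoke_concurrently(calls):
    # Opt-in behavior (AllowConcurrentInvocation in .NET): all at once.
    return list(await asyncio.gather(*(call() for call in calls)))

async def main():
    calls = [fetch_news, fetch_time]
    t0 = time.perf_counter()
    seq = await invoke_sequentially(calls)
    t_seq = time.perf_counter() - t0
    t0 = time.perf_counter()
    conc = await invoke_concurrently(calls)
    t_conc = time.perf_counter() - t0
    # Same results either way; only the wall-clock time differs.
    print(seq == conc, t_conc < t_seq)

asyncio.run(main())
```

Note that concurrent invocation only pays off when the functions are independent and safe to run side by side.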
 
+::: zone-end
+
+::: zone pivot="programming-language-python"
+
+Sometimes a model may choose multiple functions for invocation. This is often referred to as **parallel** function calling. When multiple functions are chosen by the AI model, Semantic Kernel will invoke them concurrently.
+
+> [!TIP]
+> With the OpenAI or Azure OpenAI connector, you can disable parallel function calling by doing the following:
+>
+> ```python
+> from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
+> from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
+>
+> settings = OpenAIChatPromptExecutionSettings(
+>     function_choice_behavior=FunctionChoiceBehavior.Auto(),
+>     parallel_tool_calls=False
+> )
+> ```
+
+::: zone-end
+
 ## Manual Function Invocation
 
 In cases when the caller wants to have more control over the function invocation process, manual function invocation can be used.
 
 When manual function invocation is enabled, Semantic Kernel does not automatically invoke the functions chosen by the AI model.
 Instead, it returns a list of chosen functions to the caller, who can then decide which functions to invoke, invoke them sequentially or in parallel, handle exceptions, and so on.
-The function invocation results need to be added to the chat history and returned to the model, which reasons about them and decides whether to choose additional functions or generate the final response.
+The function invocation results need to be added to the chat history and returned to the model, which will reason about them and decide whether to choose additional functions or generate a final response.
 
 The example below demonstrates how to use manual function invocation.
+
+::: zone pivot="programming-language-csharp"
+
 ```csharp
 using Microsoft.SemanticKernel;
 using Microsoft.SemanticKernel.ChatCompletion;
@@ -135,11 +211,12 @@ while (true)
 }
 
 ```
+
 > [!NOTE]
 > The `FunctionCallContent` and `FunctionResultContent` classes are used to represent AI model function calls and Semantic Kernel function invocation results, respectively.
 > They contain information about the chosen function, such as the function ID, name, and arguments, and function invocation results, such as the function call ID and result.
 
 The following example demonstrates how to use manual function invocation with the streaming chat completion API. Note the usage of the `FunctionCallContentBuilder` class to build function calls from the streaming content.
 Due to the streaming nature of the API, function calls are also streamed. This means that the caller must build the function calls from the streaming content before invoking them.
 
 ```csharp
@@ -210,11 +287,62 @@ while (true)
 ```
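To see why the function calls must be built before invocation: in a streaming response, each call's JSON argument string arrives in fragments, keyed by the call's index, and is only parseable once the stream ends. The sketch below simulates how a builder-style helper stitches those fragments together; the chunk shape and `build_function_calls` are illustrative, not the actual `FunctionCallContentBuilder` API.

```python
# Illustrative: assembling streamed function-call fragments into complete calls.
import json

def build_function_calls(chunks):
    """Merge streamed fragments into complete function calls."""
    calls = {}
    for chunk in chunks:
        call = calls.setdefault(chunk["index"], {"name": "", "arguments": ""})
        if chunk.get("name"):  # the function name arrives in an early chunk
            call["name"] = chunk["name"]
        # the JSON argument string arrives in pieces and must be concatenated
        call["arguments"] += chunk.get("arguments", "")
    # only after the stream ends is the argument string valid JSON
    return [
        {"name": c["name"], "arguments": json.loads(c["arguments"])}
        for c in calls.values()
    ]

chunks = [
    {"index": 0, "name": "GetWeather", "arguments": '{"ci'},
    {"index": 0, "arguments": 'ty": "Bos'},
    {"index": 0, "arguments": 'ton"}'},
]
print(build_function_calls(chunks))
# [{'name': 'GetWeather', 'arguments': {'city': 'Boston'}}]
```

Attempting `json.loads` on any single fragment above would fail, which is exactly why the caller accumulates first and invokes second.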
 
 ::: zone-end
+
 ::: zone pivot="programming-language-python"
-## Coming soon
-More info coming soon.
+
+```python
+from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
+from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
+from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings
+from semantic_kernel.contents.chat_history import ChatHistory
+from semantic_kernel.contents.function_call_content import FunctionCallContent
+from semantic_kernel.contents.function_result_content import FunctionResultContent
+from semantic_kernel.kernel import Kernel
+
+kernel = Kernel()
+chat_completion_service = OpenAIChatCompletion()
+
+# Assuming that WeatherPlugin is already implemented
+kernel.add_plugin(WeatherPlugin(), "WeatherPlugin")
+
+settings = PromptExecutionSettings(
+    function_choice_behavior=FunctionChoiceBehavior.Auto(auto_invoke=False),
+)
+
+chat_history = ChatHistory()
+chat_history.add_user_message("What is the weather in Seattle on 10th of September 2024 at 11:29 AM?")
+
+response = await chat_completion_service.get_chat_message_content(chat_history, settings, kernel=kernel)
+function_call_content = response.items[0]
+assert isinstance(function_call_content, FunctionCallContent)
+
+# Need to add the response to the chat history to preserve the context
+chat_history.add_message(response)
+
+function = kernel.get_function(function_call_content.plugin_name, function_call_content.function_name)
+function_result = await function(kernel, function_call_content.to_kernel_arguments())
+
+function_result_content = FunctionResultContent.from_function_call_content_and_result(
+    function_call_content, function_result
+)
+
+# Adding the function result to the chat history
+chat_history.add_message(function_result_content.to_chat_message_content())
+
+# Invoke the model again with the function result
+response = await chat_completion_service.get_chat_message_content(chat_history, settings, kernel=kernel)
+print(response)
+# The weather in Seattle on September 10th, 2024, is expected to be [weather condition].
+```
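The single round trip in the example above generalizes to a loop: keep invoking the chosen functions and feeding the results back until the model produces a final answer instead of more function calls. Below is a minimal plain-Python simulation of that loop with a stubbed model; none of these names are Semantic Kernel APIs.

```python
# Illustrative simulation of the general manual-invocation loop.

def get_weather(city):
    return f"sunny in {city}"

def fake_model(history):
    """Stub model: request the weather once, then answer from the tool result."""
    for message in history:
        if message["role"] == "tool":
            return {"answer": f"It is {message['content']}."}
    return {"function_calls": [{"name": "get_weather", "args": {"city": "Seattle"}}]}

def manual_invoke(prompt, functions):
    history = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(history)
        calls = reply.get("function_calls")
        if not calls:
            return reply["answer"]  # final response: stop looping
        for call in calls:
            # The caller decides how to invoke, how to handle errors, etc.
            result = functions[call["name"]](**call["args"])
            # ...and is responsible for feeding the result back into the history.
            history.append({"role": "tool", "name": call["name"], "content": result})

print(manual_invoke("What is the weather in Seattle?", {"get_weather": get_weather}))
# It is sunny in Seattle.
```

Compared with auto mode, the loop body is identical; the difference is that the caller, not Semantic Kernel, owns it.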
+
+> [!NOTE]
+> The `FunctionCallContent` and `FunctionResultContent` classes are used to represent AI model function calls and Semantic Kernel function invocation results, respectively. They contain information about the chosen function, such as the function ID, name, and arguments, and function invocation results, such as the function call ID and result.
+
 ::: zone-end
+
 ::: zone pivot="programming-language-java"
-## Coming soon
-More info coming soon.
+
+> [!TIP]
+> More updates coming soon to the Java SDK.
+
 ::: zone-end
