feat(agentic-ai): Add Azure AI Foundry provider to AI agent #7129
Status: Open. nikonovd wants to merge 39 commits.
Conversation
Force-pushed from f72a3ee to 25a500f
    private static String authType(AzureAuthentication auth) {
      return switch (auth) {
        case AzureApiKeyAuthentication ignored -> "apiKey";
nikonovd (Contributor, Author):
@copilot fix the open review comments

Contributor:
Fixed in commit
Force-pushed from 1f41236 to e9b4b6f
Contributor
Pull request overview
Adds Azure AI Foundry as a new AI Agent provider, introducing a custom Anthropic-on-Foundry runtime path while reusing the existing Azure OpenAI integration for OpenAI-family deployments. This fits into the agentic-ai provider layer by extending ProviderConfiguration, Spring registration, element templates, and end-to-end coverage for a new Azure-hosted model option.
Changes:
- Added a new `azureAiFoundry` provider configuration, Spring wiring, and element-template options for Azure AI Foundry.
- Implemented a custom Anthropic Foundry client stack (`anthropic-java` SDK + JDK HTTP transport + langchain4j adapter).
- Refactored Azure authentication into a shared type and added tests/docs around deserialization, architecture boundaries, and E2E behavior.
Reviewed changes
Copilot reviewed 42 out of 43 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| parent/pom.xml | Adds Anthropic SDK and ArchUnit dependency management. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryResponseMapperTest.java | Tests Anthropic response-to-langchain4j mapping. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryRequestMapperTest.java | Tests langchain4j request-to-Anthropic mapping. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryChatModelTest.java | Tests chat adapter behavior and exception translation. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/http/JdkAnthropicHttpClientTest.java | Tests custom JDK-backed Anthropic HTTP transport. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/ArchitectureTest.java | Enforces package boundary rules for Foundry code. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/azurefoundry/AnthropicOnFoundryClientFactoryTest.java | Tests Foundry Anthropic client construction and HTTP behavior. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/aiagent/model/request/ProviderConfigurationTest.java | Updates provider tests to shared Azure auth type. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/aiagent/model/request/provider/AzureFoundryProviderConfigurationDeserializationTest.java | Adds deserialization coverage for new provider config. |
| connectors/agentic-ai/src/test/java/io/camunda/connector/agenticai/aiagent/framework/langchain4j/provider/AzureOpenAiChatModelProviderTest.java | Updates Azure OpenAI tests after auth refactor. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryResponseMapper.java | Maps Anthropic SDK responses into langchain4j chat responses. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryRequestMapper.java | Maps langchain4j chat requests into Anthropic SDK requests. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/langchain4j/AnthropicOnFoundryChatModel.java | Adds custom langchain4j ChatModel for Anthropic on Foundry. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/http/JdkAnthropicHttpClient.java | Implements Anthropic SDK HTTP transport over JDK client. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/http/BackendAwareAnthropicHttpClient.java | Applies Foundry backend URL/auth hooks around transport. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/azurefoundry/AnthropicOnFoundryClientFactory.java | Builds Anthropic Foundry clients from endpoint/auth config. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/model/request/provider/shared/AzureAuthentication.java | Extracts shared Azure auth model and template metadata. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/model/request/provider/ProviderConfiguration.java | Registers Azure AI Foundry as a provider subtype. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/model/request/provider/AzureOpenAiProviderConfiguration.java | Removes nested Azure auth in favor of shared type. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/model/request/provider/AzureFoundryProviderConfiguration.java | Defines new Azure AI Foundry provider config and model-family split. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/framework/langchain4j/provider/AzureOpenAiChatModelProvider.java | Extracts reusable Azure OpenAI builder helper. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/framework/langchain4j/provider/AzureFoundryChatModelProvider.java | Dispatches Foundry config to Anthropic or OpenAI implementations. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/framework/langchain4j/configuration/AgenticAiLangchain4JChatModelConfiguration.java | Registers new Foundry provider and client factory beans. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/framework/langchain4j/ChatModelHttpProxySupport.java | Exposes JDK proxy configurator for Foundry transport wiring. |
| connectors/agentic-ai/src/main/java/io/camunda/connector/agenticai/aiagent/AiAgentFunction.java | Bumps AI Agent task element template version. |
| connectors/agentic-ai/pom.xml | Adds Anthropic SDK/ArchUnit dependencies for module. |
| connectors/agentic-ai/element-templates/README.md | Updates documented AI Agent template versions. |
| connectors/agentic-ai/element-templates/hybrid/agenticai-aiagent-outbound-connector-hybrid.json | Adds Azure AI Foundry provider fields to hybrid task template. |
| connectors/agentic-ai/element-templates/hybrid/agenticai-aiagent-job-worker-hybrid.json | Adds Azure AI Foundry provider fields to hybrid job-worker template. |
| connectors/agentic-ai/element-templates/agenticai-aiagent-outbound-connector.json | Adds Azure AI Foundry provider fields to task template. |
| connectors/agentic-ai/element-templates/agenticai-aiagent-job-worker.json | Adds Azure AI Foundry provider fields to job-worker template. |
| connectors/agentic-ai/docs/reference/ai-agent.md | Documents Azure AI Foundry provider architecture and routing. |
| connectors/agentic-ai/docs/adr/004-azure-ai-foundry-provider.md | Adds ADR for Foundry provider design decisions. |
| connectors/agentic-ai/AI_AGENT.md | Updates AI Agent version reference. |
| connectors/agentic-ai/AGENTS.md | Documents new Foundry package layout and provider behavior. |
| connectors-e2e-test/connectors-e2e-test-agentic-ai/src/test/java/io/camunda/connector/e2e/agenticai/aiagent/AzureOpenAiLegacyCompatibilityE2ETest.java | Adds regression coverage for legacy Azure OpenAI path. |
| connectors-e2e-test/connectors-e2e-test-agentic-ai/src/test/java/io/camunda/connector/e2e/agenticai/aiagent/AzureFoundryOpenAiAgentE2ETest.java | Adds mocked E2E coverage for Foundry OpenAI-family path. |
| connectors-e2e-test/connectors-e2e-test-agentic-ai/src/test/java/io/camunda/connector/e2e/agenticai/aiagent/AzureFoundryAnthropicAgentE2ETest.java | Adds WireMock E2E coverage for Foundry Anthropic tool-call flow. |
Force-pushed from 08b59b4 to 12766d3
Move the AzureAuthentication sealed type and its API-key/client-credentials records out of AzureOpenAiProviderConfiguration into the shared/ package so a future provider (Azure AI Foundry) can reference the same auth type without duplication. JSON subtype IDs and template annotations remain identical; generated element templates are byte-identical.
…only) Introduce a new "Azure AI Foundry" provider option in the AI Agent connector template, with a model-family dropdown for Anthropic (Claude) and OpenAI (GPT) deployments. Bumps the AI Agent Task and Sub-process template version to 11; previous v10 templates are preserved under versioned/. Generalises the shared AzureAuthentication discriminator description to no longer reference Azure OpenAI specifically, since both Foundry and Azure OpenAI now share it. Runtime dispatch is intentionally a stub that raises a ConnectorInputException — this commit lands the form so it can be demonstrated in the Modeler ahead of the runtime implementation in a follow-up. Picking the new provider in a process and executing it will fail at job invocation; this is by design until the runtime lands. Refs: #6993
Reflect the Azure AI Foundry provider addition in the element-templates version index. AI Agent Task and Sub-process top rows now point at template version 11; the example version-selection sentence is updated to match.
Apply demo feedback to the Foundry provider's model-family dropdown:
- Drop the parenthetical model-brand suffixes from the family labels
("Anthropic (Claude)" → "Anthropic", "OpenAI (GPT)" → "OpenAI"). The
suffixes were redundant alongside the provider name and risked dating
poorly. JSON family discriminator IDs ("anthropic", "openai") are
unchanged.
- Add a "gpt-4o" placeholder to the OpenAI deployment-name field for
consistency with the Anthropic variant (which already shows
"claude-sonnet-4-6"). "gpt-4o" matches the defaultValue used by the
existing direct-OpenAI provider.
Encodes the user-visible contract for the Milestone 2 Anthropic-on-Foundry runtime via the existing WireMock-based e2e harness: a two-turn agent loop with a tool_use → end_turn round-trip, mocked Anthropic Messages API wire-format responses on the Foundry endpoint path (/anthropic/v1/messages). Currently red — fails at invocation with the Milestone 1 stub ConnectorInputException. Milestone 2 implementation will make it pass. Refs: #6993
Encodes the Milestone 2 contract for OpenAI-on-Foundry via the OpenAI chat-completions wire format, mocked through WireMock on the Azure OpenAI deployment path. Verifies the delegation path through langchain4j-azure-open-ai works behind the unified AzureAiFoundry provider. Currently red — will pass once Phase 8 replaces the M1 stub with real factory dispatch. Refs: #6993
Protects the pre-existing Azure OpenAI provider against accidental breakage from the Milestone 1 shared AzureAuthentication extraction and the upcoming Milestone 2 OpenAI builder helper extraction. Test passes from day one. The ChatModelFactory is mocked at the Spring bean level (rather than using a real WireMock endpoint) because the Azure SDK KeyCredentialPolicy rejects API keys over plain HTTP. The mock still exercises the full connector stack: element-template deserialization, provider-type dispatch, AzureOpenAiProviderConfiguration binding, and agent-loop orchestration.
Adds com.anthropic:anthropic-java-core and com.anthropic:anthropic-java-foundry 2.26.0 as main-scope dependencies for the agentic-ai module (used by the upcoming Azure AI Foundry runtime), and com.tngtech.archunit:archunit-junit5 1.3.0 as a test-scope dependency (used by the Foundry package's architectural boundary tests). No OkHttp transitive — anthropic-java-foundry depends only on anthropic-java-core, and we'll implement the HttpClient SPI over the JDK's java.net.http.HttpClient for proxy-auth support.
Enforces two invariants on the io.camunda.connector.agenticai.azurefoundry
package tree:
1. Only the langchain4j/ subpackage may depend on dev.langchain4j.* —
the rest must survive a future langchain4j replacement.
2. No class under azurefoundry.* may depend on agent-framework internals
(aiagent.agent.., aiagent.memory.., adhoctoolsschema..) — the only
integration point is ChatModel.
Passes immediately — no code exists in the package tree yet; the rule
activates as implementation lands in later tasks.
Also extends pom.xml ignoredDependencies to include archunit transitive
artifacts (archunit-junit5-api, archunit) required by the new test class.
Validates the sealed AzureAiFoundryModel hierarchy roundtrips correctly for both Anthropic and OpenAI family variants (with API-key and client-credentials auth), and confirms the pre-existing legacy azureOpenAi JSON type still deserializes after the Milestone 1 AzureAuthentication extraction.
Defines the expected behavior of the custom com.anthropic.core.http.HttpClient SPI implementation backed by JDK java.net.http.HttpClient: POST with JSON body round-trips, non-2xx responses surface as HttpResponse (not exceptions), async execute works, GET without body works, close() is a no-op. WireMock stands in for the Foundry endpoint. Red — implementation is the next commit.
Provides an implementation of com.anthropic.core.http.HttpClient backed by java.net.http.HttpClient so the Azure AI Foundry provider can reuse the agentic-ai connector's existing proxy support (authenticated proxies via JdkHttpClientProxyConfigurator / JdkProxyAuthenticator). Avoids pulling in Anthropic's bundled OkHttp-based transport, which has no proxy-auth surface through its public builder. Sync and async execute paths share the same request/response conversion. close() is a no-op; the JDK client needs no lifecycle. Body buffering is fine for Messages API payloads (KB-MB); streaming isn't used.
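The proxy wiring this transport exists to preserve can be sketched with plain JDK APIs. A minimal sketch, assuming a host/port/credentials configuration; the method and parameter names here are illustrative, not the connector's actual JdkHttpClientProxyConfigurator API:

```java
import java.net.Authenticator;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.ProxySelector;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class JdkTransportSketch {

    // Build a JDK HttpClient with (optionally authenticated) proxy support —
    // the capability the bundled OkHttp transport does not expose publicly.
    static HttpClient buildClient(String proxyHost, int proxyPort, String user, String pass) {
        HttpClient.Builder builder = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30));
        if (proxyHost != null) {
            builder.proxy(ProxySelector.of(new InetSocketAddress(proxyHost, proxyPort)));
            if (user != null) {
                // Proxy authentication: the JDK client consults this Authenticator
                // when the proxy answers 407 Proxy Authentication Required.
                builder.authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(user, pass.toCharArray());
                    }
                });
            }
        }
        return builder.build();
    }

    // Sync POST that returns non-2xx responses as plain HttpResponse objects
    // rather than throwing, matching the transport contract described above.
    static HttpResponse<String> postJson(HttpClient client, String url, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    public static void main(String[] args) {
        HttpClient client = buildClient("proxy.example", 8080, "user", "secret");
        System.out.println("proxy configured: " + client.proxy().isPresent());
    }
}
```

Because the JDK client carries proxy and authenticator state itself, close() being a no-op falls out naturally; there is no connection pool to dispose of.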
…(red) Contract: the factory builds AnthropicOnFoundryChatModel from an endpoint URL, AzureAuthentication, timeouts, and AnthropicModel config. Verifies API-key and client-credentials auth paths, optional authorityHost, and graceful handling of a trailing slash on the endpoint. Detailed resource-name extraction and bearer-supplier correctness are covered by the e2e tests (mismatched resource would show up as a WireMock URL mismatch). Red — factory + adapter don't exist yet.
Constructs an anthropic-java AnthropicClient from the Azure AI Foundry provider config: wires a JdkAnthropicHttpClient (for proxy-auth support) wrapped in a BackendAwareAnthropicHttpClient that applies the FoundryBackend request-preparation (path-prefix rewrite) and authorization (API-key or Entra ID bearer-token) hooks before each HTTP call. The user-configured endpoint is passed as FoundryBackend.baseUrl(...) so it is authoritative over the default https://<resource>.services.ai.azure.com URL the SDK would compute — enables private endpoints and test WireMock stubs out of the box. Adds BackendAwareAnthropicHttpClient to apply Backend.prepareRequest() and Backend.authorizeRequest() in the HttpClient layer (the SDK core does not call these hooks automatically — they must be wired by the transport). Adds getJdkHttpClientProxyConfigurator() accessor to ChatModelHttpProxySupport so the factory can build a bare JDK HttpClient with proxy support. Also adds an empty stub AnthropicOnFoundryChatModel.java so this compiles; Task 7 fills in the langchain4j adapter.
Contract: adapter translates langchain4j ChatRequest to Anthropic MessageCreateParams, forwards to the mocked AnthropicClient.messages() service, and translates Anthropic Message responses back to ChatResponse — including AiMessage (text + tool-execution-requests), FinishReason mapping (end_turn → STOP, tool_use → TOOL_EXECUTION, max_tokens → LENGTH), and TokenUsage. Each AnthropicServiceException subtype maps to the correct ConnectorException / ConnectorInputException per Milestone 1 spec. Red — adapter still a stub from Task 6.
Langchain4j ChatModel adapter that wraps an anthropic-java AnthropicClient. Translates ChatRequest → MessageCreateParams (model, system, messages, tools, max_tokens, temperature, topP, topK) and Message → ChatResponse (AiMessage with text + tool-execution-requests; FinishReason from StopReason; TokenUsage from Usage counts). Exception translation per spec: client-side errors (400, 401, 403, 404, 422) → ConnectorInputException; retryable service errors (429, 5xx) → ConnectorException with status code; transport/unknown → ConnectorException with a stable "ANTHROPIC_ERROR" code. Scope limited to core chat + tool use per Milestone 2 spec (vision, prompt caching, extended thinking, streaming are deferred). Only imports dev.langchain4j.* in this subpackage; enforced by ArchUnit.
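The exception-translation rule above boils down to a status-code classification. A simplified, JDK-only stand-in (the enum and method names are hypothetical; the real adapter maps AnthropicServiceException subtypes to ConnectorInputException/ConnectorException):

```java
import java.io.IOException;

public class FoundryErrorMappingSketch {

    enum ErrorKind { INPUT_ERROR, RETRYABLE, TRANSPORT, UNKNOWN }

    // Client-side statuses are terminal configuration/input errors;
    // 429 and 5xx are retryable service errors; anything else falls
    // through to a stable generic bucket.
    static ErrorKind classify(int status) {
        if (status == 400 || status == 401 || status == 403 || status == 404 || status == 422) {
            return ErrorKind.INPUT_ERROR;
        }
        if (status == 429 || status >= 500) {
            return ErrorKind.RETRYABLE;
        }
        return ErrorKind.UNKNOWN;
    }

    // I/O failures are kept distinct from unknown errors so retry/incident
    // policies can route transport failures separately.
    static ErrorKind classifyThrowable(Throwable t) {
        return (t instanceof IOException) ? ErrorKind.TRANSPORT : ErrorKind.UNKNOWN;
    }

    public static void main(String[] args) {
        System.out.println(classify(422));  // INPUT_ERROR
        System.out.println(classify(503));  // RETRYABLE
    }
}
```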
…toryImpl Replace the M1 stub with real provider dispatch: AnthropicModel routes through AnthropicOnFoundryClientFactory, OpenAiModel reuses buildAzureOpenAiChatModel. Wire AnthropicOnFoundryClientFactory as a Spring bean in AgenticAiLangchain4JFrameworkConfiguration. Fix NPE in AnthropicOnFoundryChatModel when modelConfig.parameters() is null. AzureFoundryAnthropicAgentE2ETest is now green; OpenAI stays red (Azure SDK KeyCredentialPolicy requires HTTPS, incompatible with WireMock plain HTTP).
The Azure SDK's KeyCredentialPolicy rejects API keys sent over plain HTTP, which blocks the WireMock-based end-to-end approach used for Foundry Anthropic (the latter goes through our custom JdkAnthropicHttpClient which has no such guard). Mirror the legacy-compat test's pattern: mock the ChatModelFactory at the Spring bean level and assert the agent loop completes. The mocked test still exercises the full connector stack — element-template deserialization, provider-type dispatch, Foundry OpenAI-family binding, and delegation to the shared Azure OpenAI builder helper. The actual wire-format round-trip against Foundry is covered by live integration tests owned by QA.
Documents the Milestone 2 architectural decision: use Anthropic's official anthropic-java-foundry SDK (not a hand-rolled HTTP client, not a custom langchain4j fork) with a custom JDK-backed HttpClient SPI to preserve the connector's authenticated-proxy support. Covers context (enterprise Foundry blocker), options evaluated, consequences (dependency cost, Backend-integration gap workaround, API-shape discovery tradeoff), and the ArchUnit-enforced langchain4j decoupling.
Adds provider and architecture coverage for the Milestone 2 runtime:
- AGENTS.md: LLM Providers table with Azure AI Foundry entry;
azurefoundry/ tree in Core Components; dispatch summary for
AnthropicModel vs OpenAiModel; cross-links to ADR 004 and §22
- docs/reference/ai-agent.md: new §22 "Azure AI Foundry Provider"
describing dispatch by model family, the custom JDK HttpClient SPI
chain, package layout, ArchUnit boundary rules, and key classes;
§12 provider list updated; §18 code paths updated
- Keeping Documentation Up to Date: new bullet for §22 changes
Cross-links to ADR 004 for decision context.
The Foundry Anthropic path was silently ignoring TimeoutConfiguration — ClientOptions had no .timeout(...) call, so a stuck upstream would block indefinitely. Routes the configured timeout through deriveTimeoutSetting in ChatModelFactoryImpl (same fallback-to-default semantics as every other provider) and forwards a Duration to AnthropicOnFoundryClientFactory.create. The factory signature changes from (endpoint, auth, TimeoutConfiguration, model) to (endpoint, auth, Duration, model) — null-tolerant; when null, the SDK applies its own default. Adds test coverage for both null and non-null timeout paths in AnthropicOnFoundryClientFactoryTest, and updates the Foundry-Anthropic dispatch test in ChatModelFactoryTest to assert the resolved Duration is forwarded to the factory.
…cover all mapped exceptions AnthropicIoException previously fell into the generic ANTHROPIC_ERROR bucket in the adapter's translateException, conflating retryable transport errors with arbitrary unknown errors. A stable TRANSPORT_ERROR code lets connector-runtime / incident-error-code policies route transport failures distinctly from auth or input errors. Also adds tests for PermissionDeniedException, NotFoundException, and UnprocessableEntityException — these were already mapped to ConnectorInputException but had no test coverage; bringing them up to the TDD discipline applied to the rest of the branch.
… tests
Captures the MessageCreateParams the adapter constructs from a ChatRequest
via ArgumentCaptor and asserts the previously-untested request-side
translation:
- system messages flow into MessageCreateParams.system
- consecutive tool-result messages are grouped into a single user-role
MessageParam with one ToolResultBlockParam per result
- assistant messages with tool calls produce ToolUseBlockParam content
with the right id / name / parsed input
- maxTokens falls back to 1024 when neither modelConfig nor ChatRequest
sets one
- ChatRequest.maxOutputTokens overrides modelConfig.parameters().maxTokens
…und-trip
Adds two tests that verify the factory's nontrivial wiring by triggering a
full HTTP round-trip against a WireMock server and asserting on the actual
request received:
- Trailing-slash normalization: endpoint with a trailing '/' produces
requests at /anthropic/v1/messages (not //anthropic/v1/messages).
- API-key header injection: AzureApiKeyAuthentication results in an
x-api-key header on outgoing requests. Note: the Foundry SDK uses
"x-api-key" (same as direct Anthropic), not the "api-key" header
used by Azure OpenAI — this is confirmed by the FoundryBackend
HEADER_API_KEY constant in the SDK source.
The tests use reflection to extract the AnthropicClient from the model
rather than calling through the langchain4j ChatModel interface, which
would violate the ArchUnit rule restricting langchain4j imports to the
azurefoundry.langchain4j sub-package.
The bearer-token supplier path (AzureClientCredentialsAuthentication) keeps
its existing smoke-test coverage — verifying the constructed bearer supplier
and scope would require stubbing Azure's token endpoint, which is out of
scope for unit tests; that path is exercised by live QA against a real
Foundry resource.
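The trailing-slash normalization exercised by the first test above is a one-line plain-Java string operation (the later cleanup commit replaces the deprecated StringUtils.removeEnd with exactly this kind of code). A minimal sketch; the class and method names are illustrative:

```java
public class EndpointNormalizationSketch {

    // Strip a single trailing '/' from the configured endpoint so that
    // appending the Foundry path prefix yields /anthropic/v1/messages,
    // not //anthropic/v1/messages.
    static String stripTrailingSlash(String endpoint) {
        return endpoint.endsWith("/")
                ? endpoint.substring(0, endpoint.length() - 1)
                : endpoint;
    }

    public static void main(String[] args) {
        String base = stripTrailingSlash("https://my-resource.services.ai.azure.com/");
        System.out.println(base + "/anthropic/v1/messages");
    }
}
```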
AnthropicOnFoundryChatModel had grown to ~400 lines mixing three concerns:
orchestration / exception translation, langchain4j → Anthropic request
translation, and Anthropic → langchain4j response translation. Splits the
two translation halves into focused, package-private collaborator classes:
- AnthropicOnFoundryRequestMapper: ChatRequest → MessageCreateParams
(system extraction, tool-result grouping, tool-use block construction,
parameter resolution including the 1024 maxTokens fallback and
ChatRequest-overrides-config rule)
- AnthropicOnFoundryResponseMapper: Message → ChatResponse
(text aggregation, tool-use → ToolExecutionRequest, StopReason →
FinishReason, Usage → TokenUsage)
The adapter shrinks to its true responsibility: orchestrate the chat call
and translate the seven AnthropicException subtypes (plus AnthropicIoException
→ TRANSPORT_ERROR and the AnthropicException fallback → ANTHROPIC_ERROR).
Per-translation tests move to focused test classes
(AnthropicOnFoundryRequestMapperTest, AnthropicOnFoundryResponseMapperTest),
calling the mappers directly instead of going through Mockito ArgumentCaptor
on the AnthropicClient. The adapter test retains the eight exception-
translation tests + a smoke test.
No behavior change.
Adds SLF4J logging at appropriate levels:
- INFO once per AnthropicOnFoundryClientFactory.create — endpoint,
auth type, deployment name (no secrets)
- DEBUG per HTTP call in JdkAnthropicHttpClient — method, URL, status
code (no headers, no bodies)
- DEBUG on close in BackendAwareAnthropicHttpClient
- DEBUG per chat in AnthropicOnFoundryChatModel — model, message and
tool counts on send; stop reason and token usage on response (no
user content)
- WARN with exception detail in the request and response mappers when
a tool argument or schema fails to parse — closes the silent JSON
fallback gap flagged in the final review
Logs only metadata; never API keys, bearer tokens, prompts, tool
arguments, tool schemas, or response bodies.
The Foundry adapter's tool-schema translation previously called
spec.parameters().toString() and re-parsed the result with Jackson — relying
on JsonSchemaElement's toString contract (which isn't documented as JSON)
and duplicating serialization logic that already exists in
JsonSchemaConverter (registered Spring bean using the JsonSchemaElementModule
for proper JsonSchemaElement → Map serialization).
Plumbs the existing JsonSchemaConverter through:
AgenticAiLangchain4JFrameworkConfiguration (Spring bean wiring)
→ AnthropicOnFoundryClientFactory
→ AnthropicOnFoundryChatModel
→ AnthropicOnFoundryRequestMapper
The mapper's buildAnthropicTool now calls jsonSchemaConverter.schemaToMap
and walks the resulting Map<String, Object> to populate the Anthropic
Tool.InputSchema. The WARN-on-failure logging pattern is preserved (now
catching RuntimeException to cover any conversion error).
… tool mapping The previous version called JsonSchemaConverter.schemaToMap on the whole parameter schema and then walked the resulting Map<String, Object> to extract the "properties" and "required" fields — making a Map only to immediately destructure it. ToolSpecification.parameters() exposes those fields directly via the typed JsonObjectSchema API. Cast spec.parameters() to JsonObjectSchema and use:
- objectSchema.required() for the required list
- objectSchema.properties() for the property-name → JsonSchemaElement map
The converter is still needed for per-property serialization (Anthropic's JsonValue cannot consume langchain4j JsonSchemaElement directly), but serializing each element individually means a per-property WARN on conversion failure rather than dropping the whole schema on one bad property.
Wrap RuntimeException from JsonSchemaConverter.schemaToMap in ConnectorInputException so the connector runtime surfaces a terminal incident with a clear message instead of silently emitting a malformed tool schema (which would lead to confusing model behavior — the LLM would attempt tool calls with incomplete arguments). Tool-schema misconfiguration is a terminal config issue, not a transient retryable error. Also adds null-guard on spec.parameters() since langchain4j tools without arguments have null parameters (valid case — e.g. get_current_time()). Adds a test using a Mockito-stubbed converter that throws on schemaToMap, asserting ConnectorInputException is propagated with the tool name and property name in the message.
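The wrap-and-rethrow pattern described above can be sketched with JDK types only; IllegalArgumentException stands in for ConnectorInputException here, and the method and parameter names are hypothetical:

```java
import java.util.Map;
import java.util.function.Function;

public class ToolSchemaGuardSketch {

    // Conversion failures become a terminal, non-retryable error that names
    // the tool and property, rather than silently emitting a malformed schema.
    // IllegalArgumentException is a stand-in for ConnectorInputException.
    static Map<String, Object> convertOrFail(String toolName, String propertyName,
            Object schema, Function<Object, Map<String, Object>> converter) {
        if (schema == null) {
            // Tools without arguments legitimately have a null schema
            // (e.g. a hypothetical get_current_time()).
            return Map.of();
        }
        try {
            return converter.apply(schema);
        } catch (RuntimeException e) {
            throw new IllegalArgumentException(
                    "Failed to convert schema for tool '" + toolName
                            + "', property '" + propertyName + "'", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(convertOrFail("ping", "n/a", null, o -> Map.of()));
    }
}
```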
The Foundry test suite was using snake_case method names (e.g. 'permission_denied_becomes_connector_input_exception'), inconsistent with the module's established camelCase convention (e.g. 'handlesNullInitialAgentContext', 'shouldCreateJdkHttpClientBuilderWithProxyConfiguration'). Pure rename; no semantic change.
Adds five tests for previously-untested response paths:
- aggregatesMultipleTextBlocksInResponse: multiple text content blocks
are concatenated into a single AiMessage.text()
- translatesMixedTextAndToolUseResponse: text preceding tool_use
blocks (Claude's typical "let me check..." preamble pattern)
- translatesMultipleParallelToolUseBlocks: multiple parallel tool
calls in a single assistant response
- translatesStopSequenceStopReasonToStop: stop_sequence maps to
FinishReason.STOP (treated like end_turn)
- mapsNullStopReasonToOther / mapsUnknownStopReasonToOther: null and
unmapped enum values fall through to FinishReason.OTHER (logged DEBUG)
Cache-token surfacing is intentionally not covered.
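The stop-reason translation these tests pin down is a small total mapping. A simplified, JDK-only stand-in (string keys instead of the SDK's StopReason enum; the class name is illustrative):

```java
public class StopReasonMappingSketch {

    enum FinishReason { STOP, TOOL_EXECUTION, LENGTH, OTHER }

    // end_turn and stop_sequence both mean a normal stop; tool_use signals a
    // tool round-trip; max_tokens means the output was truncated; null or
    // unknown values fall through to OTHER.
    static FinishReason mapStopReason(String stopReason) {
        if (stopReason == null) {
            return FinishReason.OTHER;
        }
        return switch (stopReason) {
            case "end_turn", "stop_sequence" -> FinishReason.STOP;
            case "tool_use" -> FinishReason.TOOL_EXECUTION;
            case "max_tokens" -> FinishReason.LENGTH;
            default -> FinishReason.OTHER;
        };
    }

    public static void main(String[] args) {
        System.out.println(mapStopReason("tool_use"));  // TOOL_EXECUTION
    }
}
```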
The test previously only asserted that the agent loop completes, leaving
the request-side translation invisible to the e2e suite. Add JSON-path
assertions on the two request bodies WireMock receives:
- Turn 1: model is claude-sonnet-4-6, user prompt round-trips, and the
tools array contains the BPMN-defined tool GetDateAndTime (proves the
buildAnthropicTool path runs end-to-end).
- Turn 2: messages history includes a user-role tool_result block with
tool_use_id=toolu_01 (proves the request mapper's tool-result grouping
flows through to the SDK's wire format).
Locks in the request-side contract — future regressions where tool
definitions silently stop being sent or tool-result grouping breaks would
fail this test.
… rebase Resolving the Azure AI Foundry rebase against main's structural refactor made langchain4JAzureOpenAiChatModelProvider return the concrete AzureOpenAiChatModelProvider so AzureFoundryChatModelProvider could borrow its OpenAI builder. That broke @ConditionalOnMissingBean: a user-supplied ChatModelProvider<AzureOpenAiProviderConfiguration> no longer matched the concrete return type, so the default still got created and the registry rejected the duplicate "azureOpenAi" type (AgenticAiConnectorsAutoConfigurationTest case [2] failure). Decouple Foundry from the AzureOpenAi provider bean by making buildAzureOpenAiChatModel a static helper that takes its config and proxy support as parameters. Foundry now wires its own ChatModelHttpProxySupport and calls the static helper directly, which lets the AzureOpenAi @Bean return the interface again so user overrides match cleanly. Also update AzureOpenAiChatModelProviderTest imports to the extracted shared.AzureAuthentication (rebase miss — the new test file from main still pointed at the old nested path).
- Replace deprecated StringUtils.removeEnd with plain Java string operation
- Replace unread pattern variable 'ignored' with unnamed variable '_'
- Replace deprecated JsonNode.fields() with JsonNode.properties()

Agent-Logs-Url: https://github.com/camunda/connectors/sessions/913e6e4a-8392-4189-9c01-b52b28790a89
Co-authored-by: nikonovd <6192565+nikonovd@users.noreply.github.com>
Force-pushed from 12766d3 to 78b428c
Description
Adds an Azure AI Foundry model provider to the AI Agent connectors (task and sub-process), allowing both OpenAI and Anthropic model families to be used under a single provider configuration.
Related issues
closes #6993
Checklist
release, as this branch will be rebased onto main before the next release. Example backport labels:
- backport stable/8.8: for changes that should be included in the next 8.8.x release.
- backport release-8.8.7: for changes that should be included in the specific release 8.8.7, and this release has already been created. The release branch will be merged back into stable/8.8 later, so the change will be included in future 8.8.x releases as well.