
Add support for calling Anthropic models on Azure Foundry endpoints #58

Open
JoseCSantos wants to merge 8 commits into arcprize:main from JoseCSantos:jcsantos/add_support_for_anthropic_on_azure

Conversation

@JoseCSantos

This pull request enhances the AnthropicAdapter to support both direct Anthropic API access and Azure-hosted Anthropic endpoints, improving flexibility in how API credentials and endpoints are configured. It also introduces logic to adjust the model name for Azure deployments and ensures consistent usage of the correct model name throughout the adapter.

Provider configuration and model selection:

  • Added support for initializing the Anthropic client using either direct Anthropic API credentials (ANTHROPIC_API_KEY) or Azure-hosted credentials (AZURE_ANTHROPIC_API_KEY and AZURE_ANTHROPIC_ENDPOINT). The adapter now chooses the appropriate connection mode based on available environment variables, with error handling for missing configuration.
  • Introduced the _get_model_name method to strip date suffixes from the model name when using Azure endpoints, ensuring compatibility with Azure deployment naming conventions.
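The provider selection and model-name handling described above can be sketched roughly as follows. This is an illustrative reconstruction from the PR description, not the PR's actual code; the class name and structure are assumptions, though the environment variable names (ANTHROPIC_API_KEY, AZURE_ANTHROPIC_API_KEY, AZURE_ANTHROPIC_ENDPOINT) come from the discussion.

```python
import os
import re


class AnthropicAdapterSketch:
    """Hypothetical sketch of the adapter's provider-selection logic."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        if os.environ.get("ANTHROPIC_API_KEY"):
            # Direct Anthropic API access.
            self.uses_azure = False
        elif os.environ.get("AZURE_ANTHROPIC_API_KEY") and os.environ.get(
            "AZURE_ANTHROPIC_ENDPOINT"
        ):
            # Azure-hosted Anthropic endpoint.
            self.uses_azure = True
        else:
            raise ValueError(
                "Missing configuration: set ANTHROPIC_API_KEY, or both "
                "AZURE_ANTHROPIC_API_KEY and AZURE_ANTHROPIC_ENDPOINT"
            )

    def _get_model_name(self) -> str:
        # Azure deployment names typically drop the trailing date suffix,
        # e.g. "claude-3-5-sonnet-20241022" -> "claude-3-5-sonnet".
        if self.uses_azure:
            return re.sub(r"-\d{8}$", "", self.model_name)
        return self.model_name
```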

Consistent model usage:

  • Updated all API calls (chat_completion and chat_completion_stream) to use the new _get_model_name method, ensuring the correct model identifier is used for each deployment type.

Internal improvements:

  • Added import re to support model name manipulation.

@gkamradt
Collaborator

Hi, in general, I'm a fan of getting Azure support on here.

However, I would like to make it more explicit as to which Anthropic endpoint it goes to, rather than just relying on the environment variables to do this for us.

The adapter now chooses the appropriate connection mode based on available environment variables, with error handling for missing configuration.

The model config should hold the information on how the model will be tested, not the env variables. Can you update this to make it more explicit to make it part of the model config?
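One way to read this suggestion is an explicit provider field per model entry, so the test target is fully determined by the config rather than by which environment variables happen to be set. The field names below are purely illustrative, not the repository's actual models.yml schema:

```yaml
# Hypothetical models.yml entry; field names are illustrative only.
- name: claude-opus-4-5-azure
  provider: azure_anthropic   # explicit, instead of inferring from env vars
  model_name: claude-opus-4-5
```

Secrets such as the API key and endpoint URL would still live in the environment; the config only states which provider path to use.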

@JoseCSantos
Author

JoseCSantos commented Jan 8, 2026

Hello,

Thanks for the feedback @gkamradt and sorry for the delay in sending an update to this PR.
I have submitted a new PR addressing your comments: it creates a new adapter for the Azure provider and updates models.yml with Azure references.
The .env file still needs AZURE_ANTHROPIC_ENDPOINT and AZURE_ANTHROPIC_API_KEY.

In the coming days/weeks I may send follow-on PRs for other model vendors in Azure, namely OpenAI.

I have tested Opus 4.5 with a 64k reasoning budget and got:

Final Score: 34.58% (41.50/120)
Total Cost: $297.9787
Average Cost per Task: $2.4832

Looking at the official ARC-AGI-2 leaderboard we have for Opus-4.5-64k:
cost per task: $2.40
Score: 37.6%

While the cost is very similar ($2.40 vs $2.48), the final score is 3.0 percentage points lower, though the reported 37.6% is still within the 95% confidence interval [26.4%, 42.8%] of my run, so the benchmark setup seems correct.
It would be good to report the 95% CI on the official leaderboard, since some models' intervals may overlap.
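For intuition, a rough normal-approximation 95% CI for the score above can be computed as follows. This is only a sketch: the comment does not state how its interval was derived (and the 41.50/120 score involves partial credit, so a binomial approximation is approximate), which is why the endpoints differ slightly from the quoted [26.4%, 42.8%].

```python
import math

# Normal-approximation 95% CI for a proportion-style benchmark score.
score = 41.5 / 120                      # observed score from the run above
n = 120                                 # number of tasks
se = math.sqrt(score * (1 - score) / n)  # standard error of the proportion
lo, hi = score - 1.96 * se, score + 1.96 * se
print(f"95% CI: [{lo:.1%}, {hi:.1%}]")  # roughly [26.1%, 43.1%]
```

The leaderboard's 37.6% falls inside this interval, consistent with the conclusion that the two runs agree within noise.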

Thanks
