
feat: add MiniMax as LLM provider with M2.7 default #1496

Open

octo-patch wants to merge 1 commit into koala73:main from octo-patch:feat/add-minimax-llm-provider

Conversation


@octo-patch octo-patch commented Mar 12, 2026

Summary

  • Add MiniMax as a first-class LLM provider in the provider chain (between Groq and OpenRouter)
  • Default model: MiniMax-M2.7 (latest flagship with enhanced reasoning and coding capabilities)
  • Support MINIMAX_MODEL env var override (e.g. MiniMax-M2.7-highspeed for low-latency scenarios)
  • API endpoint: https://api.minimax.io/v1 (OpenAI-compatible)
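
The configuration described above might look roughly like this in .env.example (a sketch: the key names MINIMAX_API_KEY and MINIMAX_MODEL come from the PR, but the comments and placeholder value are illustrative):

```shell
# MiniMax (OpenAI-compatible API at https://api.minimax.io/v1)
MINIMAX_API_KEY=your-minimax-api-key
# Optional: override the default model (MiniMax-M2.7),
# e.g. MiniMax-M2.7-highspeed for low-latency scenarios
# MINIMAX_MODEL=MiniMax-M2.7-highspeed
```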

Changes

  • server/_shared/llm.ts: Add minimax provider with API key auth, model env var override, and provider chain integration
  • .env.example: Add MINIMAX_API_KEY and MINIMAX_MODEL configuration docs
  • tests/shared-llm.test.mts: Add 2 unit tests for default model and model override
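
The llm.ts change might look roughly like the following sketch. The `Provider` interface and `resolveModel` helper are invented for illustration; only the endpoint, env var names, and default model come from the PR description, and the actual shape of server/_shared/llm.ts may differ.

```typescript
// Hypothetical provider shape; the real llm.ts may structure this differently.
interface Provider {
  name: string;
  baseURL: string;
  apiKeyEnv: string; // env var holding the API key
  defaultModel: string;
  modelEnv?: string; // optional env var that overrides defaultModel
}

// MiniMax entry, slotted into the provider chain between Groq and OpenRouter.
const minimax: Provider = {
  name: "minimax",
  baseURL: "https://api.minimax.io/v1", // OpenAI-compatible endpoint
  apiKeyEnv: "MINIMAX_API_KEY",
  defaultModel: "MiniMax-M2.7",
  modelEnv: "MINIMAX_MODEL", // e.g. "MiniMax-M2.7-highspeed" for low latency
};

// Resolve the effective model, honoring the env var override when present.
function resolveModel(p: Provider, env: Record<string, string | undefined> = process.env): string {
  const override = p.modelEnv ? env[p.modelEnv] : undefined;
  return override && override.length > 0 ? override : p.defaultModel;
}
```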

Testing

  • All 5 unit tests passing (3 existing + 2 new MiniMax tests)
  • Default model verified as MiniMax-M2.7
  • MINIMAX_MODEL env var override verified with MiniMax-M2.7-highspeed


vercel bot commented Mar 12, 2026

@octo-patch is attempting to deploy a commit to the Elie Team on Vercel.

A member of the Team first needs to authorize it.

Repository owner deleted a comment from ashsolei Mar 14, 2026
octo-patch force-pushed the feat/add-minimax-llm-provider branch from 44f4593 to 3be623a on March 18, 2026 at 07:13, with commit message:

- Add MiniMax as a first-class LLM provider in the provider chain
- Default model: MiniMax-M2.7 (latest flagship with enhanced reasoning)
- Support MINIMAX_MODEL env var override (e.g. MiniMax-M2.7-highspeed)
- Add MiniMax API key configuration to .env.example
- Add unit tests for default model and model override

octo-patch changed the title from "feat: add MiniMax as LLM provider" to "feat: add MiniMax as LLM provider with M2.7 default" on Mar 18, 2026