
feat: add MiniMax as LLM provider (global + China) #1637

Open

kapelame wants to merge 1 commit into agent0ai:main from kapelame:feat/minimax-providers

Conversation

@kapelame

Summary

Adds MiniMax as two first-class chat providers in `conf/model_providers.yaml`:

  • `minimax` → https://api.minimax.io/v1 (international)
  • `minimax-cn` → https://api.minimaxi.com/v1 (China domestic)

Both use the OpenAI-compatible `litellm` provider with an `api_base` override, the same pattern as `a0_venice`, `nebius`, and `ollama_cloud` already in the file. `models_list.endpoint_url` is set so the Model Configuration plugin can list available models dynamically.
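For reference, a sketch of what the two entries could look like, modeled on the description above. The actual schema of `conf/model_providers.yaml` isn't shown in this PR body, so the key names and the `/models` listing path here are assumptions, not a copy of the diff:

```yaml
# Illustrative sketch only: key names (litellm_provider, api_base,
# models_list.endpoint_url) mirror the prose above and may not match
# the file's real schema.
minimax:
  name: MiniMax
  litellm_provider: openai              # OpenAI-compatible litellm provider
  api_base: https://api.minimax.io/v1   # international endpoint
  models_list:
    endpoint_url: https://api.minimax.io/v1/models   # assumed listing path

minimax-cn:
  name: MiniMax (China)
  litellm_provider: openai
  api_base: https://api.minimaxi.com/v1 # China domestic endpoint
  models_list:
    endpoint_url: https://api.minimaxi.com/v1/models
```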

Why two entries

MiniMax ships separate global (api.minimax.io) and China (api.minimaxi.com) endpoints with separate accounts and API keys; users on one side don't have credentials for the other. Two entries let each user pick the side they signed up for without manually editing `api_base`.

Temperature clamp

MiniMax's API rejects `temperature` values <= 0 or > 1 with HTTP 400. Without a clamp, callers passing the LiteLLM/OpenAI default of 0 (deterministic mode) hit a confusing error from the upstream API. Added a small clamp in `models.py:_adjust_call_args` that fires when:

  • the resolved provider name is `minimax` or `minimax-cn`, OR
  • the `api_base` URL contains "minimax" (catches users who set up MiniMax via the generic `openai` provider with a custom `api_base`), OR
  • the model name contains "minimax" (extra safety net)

In-range values (`0 < temperature <= 1`) are passed through unchanged.
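A minimal sketch of that logic, assuming the call arguments arrive as a mutable dict. The function name, dict keys, and the 0.01 floor are illustrative, since the PR body doesn't show the actual `_adjust_call_args` code:

```python
# A minimal sketch of the clamp described above. Names and the exact
# floor value (0.01) are assumptions, not the PR's actual code.
def _clamp_minimax_temperature(call_args: dict) -> None:
    """Clamp temperature into MiniMax's accepted (0, 1] range, in place."""
    provider = (call_args.get("provider") or "").lower()
    api_base = (call_args.get("api_base") or "").lower()
    model = (call_args.get("model") or "").lower()

    is_minimax = (
        provider in ("minimax", "minimax-cn")  # first-class provider entries
        or "minimax" in api_base               # generic openai + custom api_base
        or "minimax" in model                  # extra safety net
    )
    if not is_minimax or "temperature" not in call_args:
        return

    temp = call_args["temperature"]
    if temp <= 0:
        call_args["temperature"] = 0.01  # API rejects <= 0; assumed floor
    elif temp > 1:
        call_args["temperature"] = 1.0   # API rejects > 1
    # values in (0, 1] pass through unchanged
```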

Tests

`tests/test_minimax_provider.py` adds 10 tests:

  • 2 verify both providers are registered with the right `api_base` in the YAML
  • 7 cover the temperature clamp's parametrized positive and negative cases (provider-name match, `api_base` match, model-name match, in-range pass-through)
  • 1 verifies the clamp does NOT fire for non-MiniMax providers

All 10 pass locally against the pinned `requirements.txt` environment.
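For a sense of the parametrization, here is a condensed sketch written against the clamp sketch above rather than the PR's real test file; the import path is hypothetical:

```python
# Condensed sketch of the parametrized cases; it exercises the clamp
# sketch shown earlier, not the PR's real models.py code.
import pytest

from models_sketch import _clamp_minimax_temperature  # hypothetical module


@pytest.mark.parametrize(
    "call_args, expected_temp",
    [
        ({"provider": "minimax", "temperature": 0}, 0.01),      # name match, clamp up
        ({"provider": "minimax-cn", "temperature": 2.0}, 1.0),  # name match, clamp down
        ({"api_base": "https://api.minimax.io/v1", "temperature": 0}, 0.01),  # api_base match
        ({"model": "openai/MiniMax-M2.7", "temperature": 0}, 0.01),           # model-name match
        ({"provider": "minimax", "temperature": 0.7}, 0.7),     # in-range pass-through
        ({"provider": "openai", "temperature": 0}, 0),          # non-MiniMax: untouched
    ],
)
def test_temperature_clamp(call_args, expected_temp):
    _clamp_minimax_temperature(call_args)
    assert call_args["temperature"] == expected_temp
```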

Recommended models

`MiniMax-M2.7` (latest flagship) and `MiniMax-M2.7-highspeed` (lower-latency variant). Users select the provider in Settings, then enter the model name.

Notes

A previous bot-generated PR (#1275) added only the global minimax provider and went stale (it now has merge conflicts and no maintainer review). This PR supersedes it: it covers both regions, includes the temperature-clamp safety, and its test file uses the same minimal-stubbing style as `tests/test_model_config_api_keys.py`.

Commit message

Adds MiniMax as two first-class chat providers in
`conf/model_providers.yaml`:

- `minimax` → https://api.minimax.io/v1 (international)
- `minimax-cn` → https://api.minimaxi.com/v1 (China domestic)

Both use the OpenAI-compatible litellm provider with `api_base`
override (same pattern as `a0_venice`, `nebius`, `ollama_cloud`,
etc. in the same file). `models_list.endpoint_url` is set so the
Model Configuration plugin can list available models dynamically.

Adds a small temperature clamp in `models.py:_adjust_call_args`
because MiniMax's API rejects `temperature <= 0` and `> 1` with
HTTP 400. Without the clamp, callers passing the LiteLLM/OpenAI
default of 0 (deterministic mode) would hit a confusing error.
The clamp fires when:

- the resolved provider name is `minimax` or `minimax-cn`, OR
- the api_base URL contains "minimax" (covers users who set up
  MiniMax via the generic "openai" provider with custom api_base), OR
- the model name contains "minimax" (extra safety)

Adds `tests/test_minimax_provider.py` with 10 tests covering YAML
registration of both providers and the temperature clamp's
positive + negative cases.

Recommended models: `MiniMax-M2.7`, `MiniMax-M2.7-highspeed`.
