Closes #1855 /claim #1855
This PR adds complete support for MiniMax AI:
✅ Adapter (`backend/src/routes/proxy/adapterV2/minimax.ts`)
  - Token estimation for streaming chunks (MiniMax returns `usage: null`)
  - `reasoning_details` array handling

✅ Type Definitions (`backend/src/types/llm-providers/minimax/` - 4 files)
  - `reasoning_details` thinking content
  - `role: ""` (empty string) - MiniMax quirk

✅ Routes (`backend/src/routes/proxy/routesv2/minimax.ts`)

✅ Database Migration (`backend/src/database/migrations/0131_add_minimax_token_prices.sql`)

✅ Models Dev Client Integration (`backend/src/clients/models-dev-client.ts`)

✅ Interaction Handler (`frontend/src/lib/llmProviders/minimax.ts`)

✅ UI Components
MiniMax doesn’t provide a `/v1/models` endpoint, so we serve a hardcoded model list instead; see the sketch below.
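For illustration, a minimal sketch of what such a hardcoded list could look like — the `HardcodedModel` shape, model IDs, and context sizes below are assumptions for this example, not necessarily the exact list this PR ships:

```typescript
// Hypothetical shape for this sketch; the PR's real list lives in the
// MiniMax type definitions and adapter.
interface HardcodedModel {
  id: string;
  contextWindow: number; // approximate; verify against MiniMax docs
  supportsReasoning: boolean;
}

export const MINIMAX_MODELS: HardcodedModel[] = [
  { id: "MiniMax-M1", contextWindow: 1_000_000, supportsReasoning: true },
  { id: "MiniMax-Text-01", contextWindow: 1_000_000, supportsReasoning: false },
];
```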
Problem: the MiniMax streaming API returns `"usage": null` in every chunk, so no token counts are reported.
Solution: implemented token estimation using tiktoken; a sketch of the approach follows below.
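A minimal sketch of the idea, assuming the `tiktoken` npm package with the `cl100k_base` encoding as an approximation (MiniMax's actual tokenizer may differ, and the helper names here are illustrative, not the PR's exact code):

```typescript
import { get_encoding } from "tiktoken";

// Illustrative helper: approximate a token count for text when the
// provider reports `usage: null`. cl100k_base is a stand-in encoding;
// counts are estimates, not MiniMax's exact tokenization.
export function estimateTokens(text: string): number {
  const enc = get_encoding("cl100k_base");
  try {
    return enc.encode(text).length;
  } finally {
    enc.free(); // the WASM-backed encoder must be freed explicitly
  }
}

// During streaming, accumulate the completion text and estimate once at
// the end; the prompt side is estimated from the request messages.
export function estimateUsage(prompt: string, completion: string) {
  const promptTokens = estimateTokens(prompt);
  const completionTokens = estimateTokens(completion);
  return {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
  };
}
```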
MiniMax supports extended thinking via a `reasoning_details` array, enabled by sending `extra_body: { reasoning_split: true }` in the request.

MiniMax also sends `"role": ""` (an empty string) in some stream chunks instead of `"role": "assistant"`. The type schema was updated to allow both; see the sketch below.
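A sketch of how the permissive stream-delta schema could look, assuming a Zod-based type layer (an assumption for this example) and a simplified `reasoning_details` shape:

```typescript
import { z } from "zod";

// Sketch only: accept both the spec-compliant "assistant" role and
// MiniMax's empty-string quirk in streaming deltas.
export const miniMaxStreamDelta = z.object({
  role: z.enum(["assistant", ""]).optional(),
  content: z.string().nullable().optional(),
  // Simplified; the PR's four type-definition files model the full shape.
  reasoning_details: z
    .array(z.object({ type: z.string(), text: z.string() }))
    .optional(),
});
```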
Proxy feature support (backend):

| Feature | Status | Notes |
|---|---|---|
| Tool invocation | ✅ Supported | Full tool call format conversion |
| Tool persistence | ✅ Supported | Tool calls maintained across conversation |
| Token/cost limits | ✅ Supported | Enforced per-request limits |
| Model optimization | ✅ Supported | Can switch to cheaper models |
| Tool results compression | ✅ Supported | TOON compression implemented |
| Dual LLM verification | ✅ Supported | Works with optimization rules |
| Metrics and observability | ✅ Supported | Full Prometheus metrics integration |
| Token estimation | ✅ Supported | Automatic for streaming (usage is null) |
Chat feature support (frontend):

| Feature | Status | Notes |
|---|---|---|
| Chat conversations | ✅ Works | Full message history support |
| Model listing | ✅ Works | Hardcoded model list (no API endpoint) |
| Model selection | ✅ Works | Dropdown with all available models |
| Streaming responses | ✅ Works | Real-time SSE streaming |
| Reasoning content | ✅ Works | Displays thinking process separately |
| Error handling | ✅ Works | MiniMax-specific error codes handled |
| API key management | ✅ Works | Personal/team/org-wide key hierarchy |
| Conversation titles | ✅ Works | Auto-generation using fast model |
| Token tracking | ✅ Works | Estimated tokens logged to interactions |
E2E test coverage:
- Tool invocation (`tool-invocation.spec.ts`)
- Tool persistence (`tool-persistence.spec.ts`)
- Tool result compression (`tool-result-compression.spec.ts`)
- Model optimization (`model-optimization.spec.ts`)
- Token/cost limits (`token-cost-limits.spec.ts`)
- Mock mappings (`helm/e2e-tests/mappings/`)
- CI configuration (`.github/values-ci.yaml`)

To try this locally, visit the MiniMax Platform and generate an API key (a minimum $25 recharge is required to use the API).
https://github.com/user-attachments/assets/4df301c2-be8b-425c-991c-d883743c284e
✅ Tested with `curl` against the actual API
✅ `pnpm lint` passes
✅ `pnpm type-check` passes
✅ Updated `platform-supported-llm-providers.md`