Add MiniMax as a first-class LLM provider#7027

Closed
octo-patch wants to merge 1 commit into tensorzero:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Adds MiniMax as a first-class LLM provider in TensorZero, alongside existing providers like OpenAI, Anthropic, DeepSeek, etc.

MiniMax offers an OpenAI-compatible chat completions API with models like MiniMax-M2.7 (1M context window). This PR adds native provider support so users can configure MiniMax models directly:

```toml
# Short-hand usage
model = "minimax::MiniMax-M2.7"

# Advanced configuration
[models.minimax_m27.providers.minimax]
type = "minimax"
model_name = "MiniMax-M2.7"
```

Changes

Rust Implementation (7 files, ~1100 additions)

  • crates/tensorzero-types-providers/src/minimax.rs: Serde types for MiniMax API responses (reuses OpenAI-compatible types)
  • crates/tensorzero-core/src/providers/minimax.rs: Full provider implementation:
    • MiniMaxCredentials with Static/Dynamic/WithFallback/None variants
    • MiniMaxProvider implementing InferenceProvider trait
    • Non-streaming and streaming inference
    • Tool use, JSON mode (with strict→normal downgrade), stop sequences
    • Proper error handling via handle_openai_error
    • Unit tests for request building, response parsing, credentials
  • model.rs: ProviderConfig::MiniMax variant in all match arms (infer, infer_stream, batch, thought_block, supports_provider_tools, shorthand)
  • model_table.rs: ProviderType::MiniMax, MiniMaxKind with ProviderKind impl, default credentials
  • config/provider_types.rs: MiniMaxProviderTypeConfig with MINIMAX_API_KEY env default
  • providers/mod.rs and types-providers/src/lib.rs: Module declarations
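The four credential variants listed above can be sketched roughly as follows. This is a simplified illustration under assumed names: the real `MiniMaxCredentials` in minimax.rs plugs into TensorZero's credential traits and error types, which are not reproduced here.

```rust
use std::collections::HashMap;

// Illustrative sketch of the credential variants described in the PR;
// variant names mirror the description, not the actual TensorZero API.
#[derive(Debug, Clone)]
enum MiniMaxCredentials {
    // API key fixed in the gateway config
    Static(String),
    // Key looked up by name in the per-request dynamic credentials map
    Dynamic(String),
    // Dynamic lookup first, with a static key as fallback
    WithFallback { dynamic_name: String, fallback: String },
    // No credentials configured
    None,
}

impl MiniMaxCredentials {
    // Resolve to a concrete API key, if any, given the per-request
    // dynamic credentials supplied by the caller.
    fn get_api_key<'a>(&'a self, dynamic: &'a HashMap<String, String>) -> Option<&'a str> {
        match self {
            MiniMaxCredentials::Static(key) => Some(key.as_str()),
            MiniMaxCredentials::Dynamic(name) => dynamic.get(name).map(String::as_str),
            MiniMaxCredentials::WithFallback { dynamic_name, fallback } => dynamic
                .get(dynamic_name)
                .map(String::as_str)
                .or(Some(fallback.as_str())),
            MiniMaxCredentials::None => None,
        }
    }
}
```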

Documentation (10 files, ~260 additions)

  • New: docs/integrations/model-providers/minimax.mdx - Getting started guide with simple and advanced setup
  • Updated: README.md, docs.json, configuration-reference.mdx, API reference docs, gateway index, deployment guide, and credentials management, all of which now include MiniMax in their provider lists

Features

| Feature | Supported |
| --- | --- |
| Chat Completions | ✅ |
| Streaming | ✅ |
| Tool Use | ✅ |
| JSON Mode | ✅ |
| Shorthand Config | ✅ |
| Batch Inference | ❌ |
| Multimodal | ❌ |
| Embeddings | ❌ |
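As noted in the changes, JSON mode is implemented with a strict→normal downgrade. A minimal sketch of that policy follows; the type and function names are illustrative rather than the actual TensorZero API, and the assumption (taken from the PR description) is that MiniMax's API does not support strict structured outputs, so a strict request is downgraded rather than rejected:

```rust
// Illustrative JSON-mode enum; the real type lives in TensorZero core.
#[derive(Debug, Clone, Copy, PartialEq)]
enum JsonMode {
    Off,
    Normal,
    Strict,
}

// Downgrade strict JSON mode to normal for MiniMax, leaving other
// modes untouched, as described in the PR's change list.
fn effective_json_mode(requested: JsonMode) -> JsonMode {
    match requested {
        JsonMode::Strict => JsonMode::Normal,
        other => other,
    }
}
```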

Test Plan

  • Unit tests included for request construction, response parsing, credential handling, inference params
  • CI compilation and clippy checks
  • Integration tests with live MiniMax API (requires MINIMAX_API_KEY)

Add native MiniMax provider support with OpenAI-compatible API integration,
including chat completions (streaming and non-streaming), tool use, and JSON
mode. Supports MiniMax-M2.7 and other MiniMax models via MINIMAX_API_KEY.

- Provider implementation with credentials, request/response types
- Configuration, shorthand model names (minimax::model_name)
- Documentation: provider guide, configuration reference, API docs
- Unit tests for request building, response parsing, credentials
@github-actions
Contributor


Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the Contributor License Agreement (CLA) and hereby sign the CLA.


You can retrigger this bot by commenting `recheck` in this Pull Request. Posted by the CLA Assistant Lite bot.

@GabrielBianconi
Member

Hi @octo-patch - thank you for the PR. I believe you can already use this API via our OpenAI-compatible provider:

https://www.tensorzero.com/docs/gateway/api-reference/inference-openai-compatible

Do we need a custom provider for it?

@GabrielBianconi GabrielBianconi self-assigned this Mar 23, 2026
@octo-patch
Author

Thanks for the pointer, @GabrielBianconi! You're right — since MiniMax provides a fully OpenAI-compatible API at https://api.minimax.io/v1, the existing OpenAI-compatible provider should work seamlessly. No need for a custom provider. Closing this PR.
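For readers landing here later, the route suggested above would look roughly like the following config. This is a sketch based on TensorZero's generic OpenAI provider options; the `api_base` and `api_key_location` fields are assumed from the OpenAI provider's documented configuration, and the base URL is the one mentioned in this thread:

```toml
# Use MiniMax through TensorZero's existing OpenAI-compatible provider
# (no custom provider type needed).
[models.minimax_m27.providers.minimax]
type = "openai"
model_name = "MiniMax-M2.7"
api_base = "https://api.minimax.io/v1"
api_key_location = "env::MINIMAX_API_KEY"
```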

@octo-patch octo-patch closed this Mar 24, 2026