Commit f91e16c

Author: PR Bot
feat: add MiniMax LLM provider instrumentation
Add OpenTelemetry instrumentation for MiniMax (https://www.minimax.io/), which provides an OpenAI-compatible API. This integration supports:

- MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Sync and async chat completions
- Streaming responses
- Function/tool calling
- Token usage tracking

The implementation follows the same pattern as the existing DeepSeek integration, detecting MiniMax clients by their base_url (api.minimax.io) and wrapping OpenAI SDK calls accordingly.

Changes:

- New package: python/frameworks/minimax/ (traceai_minimax)
- Added MINIMAX to FiLLMProviderValues enum
- Updated README.md with MiniMax in supported frameworks
- Includes comprehensive tests (19 passing) and usage examples
1 parent 61da683 commit f91e16c
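The base_url detection the commit message describes might look roughly like this (a hypothetical helper for illustration; the actual traceai_minimax implementation may differ):

```python
from urllib.parse import urlparse

def is_minimax_client(client) -> bool:
    # Hypothetical helper: an OpenAI SDK client counts as a MiniMax client
    # when its base_url points at api.minimax.io.
    host = urlparse(str(client.base_url)).hostname or ""
    return host == "api.minimax.io"
```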

15 files changed

Lines changed: 1764 additions & 0 deletions

File tree

README.md

Lines changed: 2 additions & 0 deletions
@@ -232,6 +232,7 @@ var tracer = FITracer.Initialize(new FITracerOptions
 | [`traceAI-huggingface`](https://pypi.org/project/traceAI-huggingface/) | HuggingFace | [![PyPI](https://img.shields.io/pypi/v/traceAI-huggingface)](https://pypi.org/project/traceAI-huggingface/) |
 | [`traceAI-xai`](https://pypi.org/project/traceAI-xai/) | xAI (Grok) | [![PyPI](https://img.shields.io/pypi/v/traceAI-xai)](https://pypi.org/project/traceAI-xai/) |
 | [`traceAI-vllm`](https://pypi.org/project/traceAI-vllm/) | vLLM | [![PyPI](https://img.shields.io/pypi/v/traceAI-vllm)](https://pypi.org/project/traceAI-vllm/) |
+| [`traceAI-minimax`](https://pypi.org/project/traceAI-minimax/) | MiniMax | [![PyPI](https://img.shields.io/pypi/v/traceAI-minimax)](https://pypi.org/project/traceAI-minimax/) |

 #### Agent Frameworks

@@ -434,6 +435,7 @@ Available on [NuGet](https://www.nuget.org/packages/fi-instrumentation-otel).
 | | HuggingFace ||| | |
 | | xAI (Grok) ||| | |
 | | vLLM ||| | |
+| | MiniMax || | | |
 | | Azure OpenAI | | || |
 | | IBM Watsonx | | || |
 | **Agent Frameworks** | LangChain ||| | |

python/fi_instrumentation/fi_types.py

Lines changed: 1 addition & 0 deletions
@@ -931,6 +931,7 @@ class FiLLMProviderValues(Enum):
     VERTEXAI = "vertexai"
     XAI = "xai"
     DEEPSEEK = "deepseek"
+    MINIMAX = "minimax"


 class ProjectType(Enum):
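For illustration, a trimmed stand-in for the enum shows how the new member resolves (the real class, with its full member list, lives in python/fi_instrumentation/fi_types.py):

```python
from enum import Enum

class FiLLMProviderValues(Enum):
    # Minimal stand-in for the real FiLLMProviderValues enum,
    # trimmed to the members visible in the hunk above.
    VERTEXAI = "vertexai"
    XAI = "xai"
    DEEPSEEK = "deepseek"
    MINIMAX = "minimax"

print(FiLLMProviderValues.MINIMAX.value)  # → minimax
```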
Lines changed: 257 additions & 0 deletions
@@ -0,0 +1,257 @@
# TraceAI MiniMax Instrumentation

OpenTelemetry instrumentation for [MiniMax](https://www.minimax.io/) chat completions via the OpenAI-compatible API.

## Installation

```bash
pip install traceai-minimax
```

## Features

- Automatic tracing of MiniMax API calls made through the OpenAI SDK
- Support for the MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Streaming response support
- Token usage tracking
- Function/tool calling support
- Full OpenTelemetry semantic conventions compliance
## Usage

### Basic Setup

```python
from openai import OpenAI
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from traceai_minimax import MiniMaxInstrumentor

# Set up tracing
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument MiniMax
MiniMaxInstrumentor().instrument(tracer_provider=provider)

# Use MiniMax via the OpenAI SDK
client = OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1"
)

response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
### MiniMax Chat

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1"
)

# Simple chat
response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    temperature=0.7,
    max_tokens=1024
)
print(response.choices[0].message.content)
```
### Streaming Responses

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1"
)

# Streaming chat
stream = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```
### Function Calling / Tools

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1"
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto"
)

message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
```
### Async Usage

```python
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI(
        api_key="your-minimax-api-key",
        base_url="https://api.minimax.io/v1"
    )

    response = await client.chat.completions.create(
        model="MiniMax-M2.5",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
### JSON Mode

```python
import json

from openai import OpenAI

client = OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1"
)

response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "Output valid JSON only."},
        {"role": "user", "content": "List 3 programming languages with their main use cases"}
    ],
    response_format={"type": "json_object"}
)

data = json.loads(response.choices[0].message.content)
print(data)
```
## Configuration Options

### TraceConfig

```python
from fi_instrumentation import TraceConfig
from traceai_minimax import MiniMaxInstrumentor

config = TraceConfig(
    hide_inputs=False,
    hide_outputs=False,
)

MiniMaxInstrumentor().instrument(
    tracer_provider=provider,
    config=config
)
```
## Captured Attributes

### Common Attributes

| Attribute | Description |
|-----------|-------------|
| `fi.span.kind` | "LLM" |
| `llm.system` | "minimax" |
| `llm.provider` | "minimax" |
| `llm.model` | Model name (MiniMax-M2.5, MiniMax-M2.5-highspeed) |
| `llm.token_count.prompt` | Input token count |
| `llm.token_count.completion` | Output token count |
| `llm.token_count.total` | Total token count |

### MiniMax-Specific Attributes

| Attribute | Description |
|-----------|-------------|
| `minimax.response_id` | Unique response ID |
| `minimax.finish_reason` | Response finish reason (stop, tool_calls, length) |
| `minimax.tool_calls_count` | Number of tool calls |
| `minimax.tools_count` | Number of tools provided |
## Available Models

| Model | Description |
|-------|-------------|
| `MiniMax-M2.5` | General-purpose model with 204K context window |
| `MiniMax-M2.5-highspeed` | Faster inference variant with 204K context window |
## Important Notes

1. **OpenAI SDK Required**: MiniMax exposes an OpenAI-compatible API, so the `openai` package must be installed.

2. **Base URL**: Always set `base_url="https://api.minimax.io/v1"` when creating the client.

3. **API Key**: Get your API key from the [MiniMax Platform](https://platform.minimax.chat/).

4. **Selective Instrumentation**: The instrumentor only traces calls to MiniMax's API; regular OpenAI API calls are not affected.

5. **Temperature**: MiniMax requires temperature to be in the range (0.0, 1.0]; a value of exactly 0 is rejected.
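The temperature constraint in note 5 can be enforced with a small clamp before each request. This is a hypothetical helper, not part of traceai-minimax:

```python
def clamp_temperature(t: float, eps: float = 1e-6) -> float:
    # Guard for MiniMax's (0.0, 1.0] temperature range: non-positive
    # values are nudged up to a small epsilon, and values above 1.0
    # are capped at 1.0.
    return min(max(t, eps), 1.0)
```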
## License

Apache-2.0

0 commit comments

Comments
 (0)