## Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to `~/.pi/agent/models.json` so Pi can talk to the local llama.cpp server:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "MiniMax-M2.5-GGUF"
        }
      ]
    }
  }
}
```

## Run Pi
```bash
# Start Pi in your project directory:
pi
```
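If Pi can't reach the model, it can help to sanity-check the llama.cpp endpoint directly; a minimal check, assuming llama-server is running on its default port of 8080 (as in the config above):

```bash
# List models exposed by the local server; a JSON response
# confirms the OpenAI-compatible endpoint is up:
curl http://localhost:8080/v1/models

# Minimal chat completion against the same endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "MiniMax-M2.5-GGUF", "messages": [{"role": "user", "content": "Hello"}]}'
```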
## About these quants

This repo contains specialized MoE quants for MiniMax-M2.5. The idea: given the huge size of the FFN tensors compared to the rest of the tensors in the model, it should be possible to achieve better quality at a smaller overall model size than a comparable naive quantization. To that end, the default quantization type is kept at high quality, and the ffn_up and ffn_gate tensors are quantized down, along with the ffn_down tensors.
| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | Mean PPL(Q)/PPL(base) - 1 | KLD |
|---|---|---|---|---|---|
| Q5_K_M | 157.23 GiB (5.91 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 7.126261 Β± 0.115850 | +0.5877% | 0.023465 Β± 0.001079 |
| Q4_K_M | 130.52 GiB (4.90 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 7.173459 Β± 0.116673 | +1.2462% | 0.041269 Β± 0.001426 |
| IQ4_XS | 101.10 GiB (3.80 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 7.513587 Β± 0.122746 | +6.0549% | 0.095077 Β± 0.002168 |
| IQ3_S | 78.76 GiB (2.96 BPW) | Q8_0 / IQ2_S / IQ2_S / IQ3_S | 8.284882 Β± 0.135705 | +16.9418% | 0.244096 Β± 0.004148 |
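As a point of reference, recent llama.cpp builds support per-tensor type overrides in llama-quantize, which is one way to produce a mixture like the rows above. A rough sketch for the Q4_K_M row (the input file name is a placeholder, and the exact override flags may differ on your build; check `llama-quantize --help`):

```bash
# Sketch only: Q8_0 as the default type, with the FFN tensors
# quantized down to mirror the Q4_K_M mixture in the table
# (Q8_0 / Q4_K / Q4_K / Q5_K). The input GGUF name is hypothetical.
llama-quantize \
  --tensor-type ffn_up=q4_k \
  --tensor-type ffn_gate=q4_k \
  --tensor-type ffn_down=q5_k \
  MiniMax-M2.5-F16.gguf MiniMax-M2.5-Q4_K_M-mix.gguf Q8_0
```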
Also provided are a couple of graphs showing the Pareto frontier for KLD and PPL of these quants vs. Unsloth's quants.
Full graphs of all of the quants are available in the kld_data directory, along with the raw data broken down per quant and a CSV of the collated data.
While the PPL between the quant methods is similar, I feel the KLD of the quants provided here is slightly better, and these quants should offer better long-context performance due to keeping the default type at Q8_0. This comes with a slight performance penalty in PP / TG (prompt processing / token generation) because of the higher-quality quantization, but I think the tradeoff is worthwhile.
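For anyone wanting to reproduce numbers like these, llama.cpp's llama-perplexity tool can compute PPL and KLD against a base model in a two-step workflow; a sketch with placeholder file names (the evaluation corpus used for the table above isn't specified here):

```bash
# Step 1: run the unquantized base model and save its logits.
llama-perplexity -m MiniMax-M2.5-F16.gguf -f eval.txt \
  --kl-divergence-base eval-logits.kld

# Step 2: score a quant against the saved logits; this reports
# PPL, the mean PPL ratio vs. base, and KLD statistics.
llama-perplexity -m MiniMax-M2.5-Q4_K_M.gguf \
  --kl-divergence-base eval-logits.kld --kl-divergence
```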

## Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server; pick a quant tag
# from the table above, e.g. Q4_K_M:
llama-server -hf AesSedai/MiniMax-M2.5-GGUF:Q4_K_M
```
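Depending on your hardware, you will likely want to set a few common llama-server flags explicitly; the values below are illustrative placeholders, not tuned recommendations:

```bash
# -c: context window in tokens; -ngl: number of layers to offload
# to the GPU; --port 8080 matches the baseUrl in the Pi config above.
# (Values are illustrative, not recommendations.)
llama-server -hf AesSedai/MiniMax-M2.5-GGUF:Q4_K_M \
  -c 32768 -ngl 99 --port 8080
```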