MiniMax M2
Dec 24, 2025
|
llm
MiniMax M2 seems to have some issues with its thinking tokens when temperature is zero.
Perhaps most models have some issue with temp=0, but the issue I see with M2 is the most pronounced. Without some temperature, it can't even pull itself out of a loop when I ask it trivia questions.
My test scripts use zero temperature for consistency, so that's that.
(More background: llama.cpp, Unsloth K6 quants, I think.)
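For reference, this is roughly the kind of request my test scripts make; a minimal sketch, assuming a local llama.cpp `llama-server` exposing its OpenAI-compatible endpoint and the `openai` Python client. The model name, port, and trivia question are placeholders, not my actual setup.

```python
from openai import OpenAI

# llama-server ignores the API key, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def ask(question: str, temperature: float) -> str:
    """Send one chat turn to the local server at the given temperature."""
    resp = client.chat.completions.create(
        model="minimax-m2",  # placeholder; llama-server serves whatever GGUF it was started with
        messages=[{"role": "user", "content": question}],
        temperature=temperature,
        max_tokens=512,
    )
    return resp.choices[0].message.content

question = "What is the capital of Australia?"
print(ask(question, temperature=0.0))  # where I see the thinking tokens loop
print(ask(question, temperature=0.7))  # a little temperature gets it unstuck
```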