I like this idea. This might be one of the more effective social pressures available for getting inference providers to fix long-standing issues. AWS Bedrock, for example, has crippling defects in its serving stack for Kimi’s K2 and K2.5 models that cause 20%-30% of attempts to emit tool calls to instead silently end the conversation (with no token output). That makes AWS effectively irrelevant as a serious inference provider for Kimi, and conveniently pushes users onto Bedrock’s significantly more expensive Anthropic models for comparable performance on agentic tasks.
bobbiechen 11 hours ago [-]
If I understand correctly, the threat model here seems to be protecting against accidental issues that would impact performance, but it doesn't cover a malicious actor.
For example, Sketchy Provider tells you they are running the latest and greatest, but actually is knowingly running some cheaper (and worse) model and pocketing the difference. These tests wouldn't help since Sketchy Provider could detect when they're being tested and do the right thing (like the Volkswagen emissions scandal). Right?
nulltrace 9 hours ago [-]
Catching accidental drift is still worth a lot. It's basically the same idea as performance regression tests in CI, nobody writes those because they expect sabotage. It's for the boring stuff, like "oops, we bumped a dep and throughput dropped 15%".
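To make the analogy concrete, a CI throughput guard can be as small as this. Everything here is illustrative: the function name, the 10% threshold, and the baseline numbers are made up.

```python
# Toy performance-regression check of the kind a CI job might run after a
# dependency bump. Thresholds and baseline values are purely illustrative.

def check_throughput(current_tps: float, baseline_tps: float,
                     max_drop: float = 0.10) -> bool:
    """Pass only if throughput dropped no more than max_drop vs. baseline."""
    if baseline_tps <= 0:
        raise ValueError("baseline must be positive")
    drop = (baseline_tps - current_tps) / baseline_tps
    return drop <= max_drop

# "oops, we bumped a dep and throughput dropped 15%" -> caught
assert check_throughput(85.0, 100.0) is False
# normal run-to-run noise stays green
assert check_throughput(97.0, 100.0) is True
```

Nobody writes this expecting sabotage either; it just makes the boring regressions impossible to miss.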
If someone actually goes out of their way to bypass the check, that's a pretty different situation legally compared to just quietly shipping a cheaper quant anyway.
frogperson 7 hours ago [-]
Routers like OpenRouter default to the cheapest provider. They are often cheap because they're ridiculously quantized and tuned for throughput, not quality.
This is probably Kimi trying to protect their brand from bargain-basement providers that don't properly represent what the models are capable of.
stingraycharles 2 hours ago [-]
OpenRouter has "exacto" verified models trying to combat this, but it seems it's not available for most models.
latchkey 7 hours ago [-]
> This is probably Kimi trying to protect their brand from bargain-basement providers that don't properly represent what the models are capable of.
I'm curious what exactly they mean by this...
"because we learned the hard way that open-sourcing a model is only half the battle."
gpm 10 hours ago [-]
Yes and no.
For a truly malicious actor, you're right. But it shifts it from "well we aren't obviously committing fraud by quantizing this model and not telling people" to "we're deliberately committing fraud by verifying our deployment with one model and then serving customer requests with another".
I suspect there's a lot of semi-malicious actors who are only happy to do the former.
j-bos 11 hours ago [-]
Seems like a great challenge for all these systems; see frontier labs serving quants when under heavy load.
gertlabs 9 hours ago [-]
This is a real issue in our benchmarks. Beware of OpenRouter providers that don't specify quantization or serve lower precision than you might expect. OpenRouter does provide configuration options for this, though filtering often limits your options significantly. That said, even with the best providers, Kimi-K2-thinking was underwhelming and slow on our benchmarks, albeit interesting and useful for temperature/variation.
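For what it's worth, the configuration in question is a per-request provider-routing object. A minimal sketch below; the model slug is an assumption and the exact option names should be double-checked against OpenRouter's routing docs.

```python
import json
import urllib.request

# Sketch of restricting quantization via OpenRouter's provider routing
# options. The model slug and field names are assumptions; verify them
# against the current routing documentation before relying on this.
payload = {
    "model": "moonshotai/kimi-k2-thinking",  # hypothetical slug
    "messages": [{"role": "user", "content": "ping"}],
    "provider": {
        "quantizations": ["fp8", "bf16"],  # only route to these quant levels
        "allow_fallbacks": False,          # fail instead of silently rerouting
    },
}

req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # not executed here; needs a real API key
```

The trade-off is exactly the one above: the tighter the quantization filter, the fewer providers are left to route to.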
Kimi K2.6, however, is the new open source leader, so far. Agentic evaluations still in progress, but one-shot coding reasoning benchmarks are ready at https://gertlabs.com/?mode=oneshot_coding
kristianp 6 hours ago [-]
Openrouter has an "exacto" [1] option to favour higher quality providers for a given model. Have you found any benefits to using that?
Edit: Kimi K2 uses int4 during its training as well as inference [2]. I wonder if that affects the quality if different gguf creators may not convert these correctly?
I did not know about this! We've put a lot of effort into probing providers and their offerings and auto-selecting the best options. I wonder how well their exacto option works.
Going to test it out, thanks!
seism 11 hours ago [-]
A test that runs for 15 hours on a high powered rig is going to be hard to reproduce or scale. But I think this addresses a widespread concern, which affects all kinds of cloud services. What you ping is not necessarily what you get.
Majromax 7 hours ago [-]
My reading of the article is that the first audience for this test is the vendors themselves. The test is long and comprehensive to give the vendor confidence in its own hosting.
Lalabadie 9 hours ago [-]
You can run the whole suite once at the start for each vendor, then roll through each part of it over a two- or four-week cycle, mimicking regular use. That keeps the evaluation up to date over time.
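A rolling schedule like that is just sharding by day. A toy sketch (shard count, cycle length, and test names are all made up):

```python
# Toy rolling scheduler: split the suite into N shards and run one shard per
# day, so the full suite is re-covered once per cycle. Names are illustrative.

def shard_for_day(test_ids: list[str], num_shards: int, day: int) -> list[str]:
    """Return the subset of tests to run on a given day of the cycle."""
    shard = day % num_shards
    return [t for i, t in enumerate(test_ids) if i % num_shards == shard]

tests = [f"case-{i}" for i in range(10)]

# Over a 5-day cycle, every test runs exactly once.
covered = set()
for day in range(5):
    covered.update(shard_for_day(tests, num_shards=5, day=day))
assert covered == set(tests)
```

Each day's shard looks like ordinary traffic rather than a single 15-hour burst, which also makes it harder to special-case.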
OsamaJaber 11 hours ago [-]
Good to see this exist. Inference providers quietly swap quant levels, and most users never check. A standard verifier from the model maker is the right move; would love to see other labs ship the same.
punkpeye 2 hours ago [-]
Now this is brilliant.
I run an AI gateway (Glama), and we had to delist all third-party providers because some of them are obviously lying about their quantization.
Being able to vet providers would be a major improvement to our ability to offer a more diverse set of providers.
m1keil 7 hours ago [-]
A related article from fireworks.ai about running open-weights models and why such a verifier needs to exist in the first place:
[1] https://openrouter.ai/docs/guides/routing/model-variants/exa...
[2] https://www.reddit.com/r/LocalLLaMA/comments/1pzfuqg/why_kim...
https://fireworks.ai/blog/quality-first-with-kimi-k2p5