It's an ad. "The Solution: TokensTree". From tokenstree.com
I was expecting a secondary market in tokens, perhaps crypto-powered, but no.
The cost difference for languages roughly correlates with how much text it takes to say something in that language. English is relatively terse. (This is a common annoyance when internationalizing dialog boxes. If sized for English, boxes need to be expanded.) They don't list any of the ideographic languages, which would be interesting.
lxgr 3 hours ago [-]
That would cause the opposite effect of what we’re actually seeing (i.e. “more redundant languages” would be using comparatively fewer tokens).
The real reason is that tokens are probably strictly based on n-gram frequency of the training data, and English is the most common language in the training data.
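The n-gram-frequency point can be illustrated with a toy byte-pair-encoding (BPE) learner — a deliberately simplified stand-in for real tokenizers, with an invented corpus. Sequences that are frequent in the training corpus get merged into single tokens, so text resembling the corpus compresses well while other languages fall back toward character-level tokens:

```python
from collections import Counter

def bpe_train(corpus, num_merges):
    """Learn merges from a corpus: each round, the most frequent
    adjacent symbol pair is fused into one new symbol."""
    tokens = list(corpus)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        tokens = _apply(tokens, a, b)
    return merges

def _apply(tokens, a, b):
    """Merge every adjacent (a, b) pair, scanning left to right."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def bpe_encode(text, merges):
    """Tokenize text by replaying the learned merges in order."""
    tokens = list(text)
    for a, b in merges:
        tokens = _apply(tokens, a, b)
    return tokens

# Train on an English-only corpus; English phrases compress into few
# tokens, while French text stays closer to one token per character.
merges = bpe_train("the cat sat on the mat the dog ate the food " * 50, 40)
print(len(bpe_encode("the cat ate the food", merges)))
print(len(bpe_encode("le chat a mangé", merges)))
```

The same asymmetry, scaled up to a web-sized and English-heavy training corpus, is a plausible mechanism for the per-language token counts the article complains about.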
vfalbor 2 hours ago [-]
This is not crypto or anything like that; it's a platform for token reduction. You can try it first and then post your assumptions. :)
telotortium 3 hours ago [-]
My impression of dialog box size from least to greatest is CJK (Chinese < Korean < Japanese) < English < everything else
aprentic 2 hours ago [-]
There's certainly an interesting question here, even if Tokenstree doesn't provide a solution or even define the problem well.
The broader questions are still interesting.
If an AI is trained more on language A than language B but has some training in translating B to A, what is the overhead of that translation?
If the abilities are combined in the same model, how much lower is the overhead than doing it as separate operations?
i.e. is f(a) < f(b) < f(t(b, A))? — where a and b are prompts in A and B, f() is the cost of processing a prompt, and t() is the cost of translating a prompt into A.
Then there's the additional question of what happens with character-based languages. It's not obvious how it would make sense to assign multiple tokens to a single character, but there's still the question of how much information character-based versus phonetic writing packs into a word, and what the information content of sentences in either one is.
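That inequality can be sketched numerically. Every number below is invented purely to show the arithmetic — whether translating first actually wins depends entirely on the real tokens-per-word ratios and translation overhead:

```python
PRICE_PER_TOKEN = 1.0          # arbitrary cost unit per token processed
TOKENS_PER_WORD = {"A": 1.3,   # well-represented language: fewer tokens/word
                   "B": 2.1}   # under-represented language: more fragments/word

def f(words, lang):
    """Cost of processing a prompt of `words` words in language `lang`."""
    return words * TOKENS_PER_WORD[lang] * PRICE_PER_TOKEN

def t(words):
    """Cost of translating a prompt from B into A (made-up per-word overhead)."""
    return words * 0.4 * PRICE_PER_TOKEN

words = 100
cost_a = f(words, "A")                     # native prompt in A
cost_b = f(words, "B")                     # same prompt in B
cost_translate = t(words) + f(words, "A")  # translate B->A, then process in A

print(cost_a, cost_b, cost_translate)
```

With these made-up rates the ordering comes out f(a) < translate-then-process < f(b), i.e. translation pays for itself; a large enough translation overhead would flip the last two terms.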
Mindless2112 3 hours ago [-]
Funny they didn't include any CJK languages on their list.
bobbiechen 3 hours ago [-]
I heard an anecdote that Qwen Coder works better when prompted in Korean - haven't tested it for myself though.
aprentic 2 hours ago [-]
Deepseek will regularly spit out Chinese characters (汉字) during English sessions. They generally seem to be syntactically related to the surrounding English, but it makes me think that there's some overhead to using English with an engine that's primarily trained in Chinese.
simonw 3 hours ago [-]
The title of this piece differs from the HN title, but the HN title is a lot better. The original title is "The Biggest Con of the 21st Century: Tokens", subhead "How AI Companies Are Charging You More Without You Even Realizing It" - which is an absurd title because tokens are NOT the "biggest con" of anything, and AI companies make it very clear exactly how their pricing works.
I also don't like how this article presents numbers for language differences - in the "The Language Tax" section - but fails to clarify which tokenizer and where those numbers came from.
cyberge99 1 hour ago [-]
English Teachers: “Proper grammar is cost effective!”
lxgr 3 hours ago [-]
“Pay by token” means you're priced by token, not by word or semantic unit; news at 11?
The product itself seems genuinely useful, but the article reads very sensationalist about something that should be pretty obvious.
In other news: French publishers are paying 30% more for paper than English publishers!!
simianwords 3 hours ago [-]
This has to be one of the worst things I have read. If this is not satire, I don't know what counts.
charcircuit 3 hours ago [-]
The companies didn't arbitrarily choose to bill by tokens. The cost to serve the models scales linearly with tokens which makes it a reasonable pricing strategy. The reality is that you are charged more because it was more expensive to handle the request.
lxgr 3 hours ago [-]
I guess token length is indirectly determined by language frequency in the training set, and it would be possible to train a model on machine translated training data only to combat that (or maybe to force tokenization to overrepresent languages other than English?), but there’s no way that would be economical, and inference would just be accordingly more expensive to recoup that effort.
simianwords 3 hours ago [-]
Europeans be like:
AI commits a racism.
AI commits an environmentalism.
Now use my product (that won't solve either)
vfalbor 3 hours ago [-]
The Biggest Con of the 21st Century: Tokens
How AI Companies Are Charging You More Without You Even Realizing It
You pay for what you use. That's the deal. Except it's not.
When you use an AI model — GPT-4, Claude, Gemini — you do not pay per word. You pay per token. And that tiny technical detail is quietly costing you, depending on which company you choose, up to 60% more for the exact same request.
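The arithmetic behind a claim like "up to 60% more" is simple to reproduce. The provider names, token counts, and prices below are invented for illustration — they are not quoted from any real price list:

```python
# For the same hypothetical 500-word prompt, each provider's tokenizer
# produces a different token count, and each charges a different rate.
providers = {
    # name: (tokens for the prompt, price per 1M input tokens in $)
    "provider_x": (650, 3.00),
    "provider_y": (820, 2.50),
    "provider_z": (1040, 2.75),
}

for name, (tokens, price_per_m) in providers.items():
    cost = tokens * price_per_m / 1_000_000
    print(f"{name}: {tokens} tokens -> ${cost:.6f}")

# The spread comes from tokenization as much as from the sticker price.
cheapest = min(t * p for t, p in providers.values())
priciest = max(t * p for t, p in providers.values())
print(f"spread: {priciest / cheapest - 1:.0%}")
```

The point being illustrated: two providers can quote similar per-token prices yet bill very differently for the same request, because their tokenizers split the text into different numbers of tokens.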
prophesi 3 hours ago [-]
Wait until you hear that most models tend to perform worse for non-English languages.
aprentic 2 hours ago [-]
Do you know if that's true of non-English models?
As I said elsewhere, Deepseek injects Chinese characters into responses. Anecdotally, that seems to happen when the context gets longer. That suggests that they're primarily trained in Chinese and I would expect them to use fewer tokens for Chinese than English.