Very good move. In my experience, for systems programming at least, GPT 5.4 xhigh is vastly superior to Claude Opus 4.6 at max effort. I ran many brutal tests, including reconstructing for QEMU the SCSI controller (no longer accessible) of a SVSY UNIX from the early 90s that ran on a 386. I ran them side by side, re-mirroring the source trees each time one of them made a breakthrough in the implementation. Well, GPT 5.4 single-handedly did it all, while Opus kept taking wrong paths. The same goes for my Redis bug tracking and development. But $200 is too much for many people (right now, at least: the reality is that if frontier LLMs are not democratized, we will end up paying something like house rent to a few providers). And while GPT 5.4 is much stronger, it is slower and less sharp when the task is simple, so many people went with Claude (also because of better marketing and ethical concerns, even if my POV is different on that side: both companies sell LLM models with similar capabilities, similar internal IP protection, and so forth; in practical terms they look very similar to me). This will surely change things, and I bet many people will end up with a Claude 5x account plus a Codex 5x account.
Tiberium 31 seconds ago [-]
Thanks for confirming my impressions; I've arrived at the same conclusions for a while now. GPT models are just better at any kind of low-level work: reverse engineering (with decompilation), C/C++, and much more reliable security research (Opus will find way more, but most of it turns out to be false positives). I've had GPT create non-trivial custom decompilers for me, and modify existing Java decompilers.
Regarding speed, I don't use xhigh that often, and surprisingly for me GPT 5.4 high is faster than Claude Opus 4.6 high (unless you enable fast mode for Opus).
patates 11 minutes ago [-]
5.4, in my own testing, was almost always ahead of Opus 4.6 for reviews and planning. I'm on the Plus plan with OpenAI, so I couldn't test it very deeply. Could anyone with more experience on both chime in? Pros/cons compared to Opus? I'm invested in the Claude ecosystem, but the recent drop in quality and session limits has me on the edge.
azuanrb 2 minutes ago [-]
Same for me. I'm on the $20 plan for both and use them interchangeably. Similar "intelligence", IMO; just different ways of doing things, that's all. But Claude is getting worse in terms of token usage, so I cancelled my plan last month.
giwook 8 minutes ago [-]
Do you mind elaborating on your experience here?
Just curious as I've often heard that Claude was superior for planning/architecture work while ChatGPT was superior for actual implementation and finding bugs.
whalesalad 9 minutes ago [-]
I've been paying out the ass via API tokens on Opus 4.6: $3k this year so far. In the last few weeks, Opus got noticeably dumber across the board. I still find it far superior to GPT 5.4 in most situations, but 5.4 is not bad.
2001zhaozhao 16 minutes ago [-]
The title is misleading. The only thing they seem to have done was add a $100 plan identical to Claude's, which gives 5x usage of ChatGPT Plus. There is still a $200 plan that gives 20x usage.
jstummbillig 12 minutes ago [-]
That is not the "only" thing: You get access to GPT-5.4 pro.
giwook 11 minutes ago [-]
Just to clarify, one does not get access to the pro model on the Plus plan?
carbocation 10 minutes ago [-]
The $20 Plus plan still exists, and does not give access to the pro model.
The $200 Pro plan still exists, and does give access to the pro model.
What is new is a $100 Pro plan that does give access to the pro model, with lower usage limits than the $200 Pro plan.
dimmke 1 minutes ago [-]
This is still worse than Anthropic's, right? Because with them you get access to their top model even at the $20 price point.
irishcoffee 7 minutes ago [-]
So, reading the tea leaves, they're either losing subscribers on the $200 plan, or they're not on the hockey-stick growth path they thought they were... maybe?
Edit: I wonder if being compute-bound is the actual impetus here.
alyxya 1 minutes ago [-]
Plenty of people wanted to spend more than $20 but less than $200 for a plan. It's long overdue IMO.
patates 9 minutes ago [-]
The Plus plan doesn't get the pro model, which is (AFAICT) the same 5.4 model but with a lot more thinking.
jgalt212 1 minutes ago [-]
You're trying to make words mean what we all think they mean. Stop foisting your Textualism upon us!
strongpigeon 9 minutes ago [-]
You’re right. I missed the “From $100”. Edited title.
selectively 15 minutes ago [-]
Oh. Yikes.
satvikpendem 13 minutes ago [-]
The era of subsidization is over, it seems.
For my money, on the code side at least, GitHub Copilot on VSCode is still the most cost effective option, 10 bucks for 300 requests gets me all I need, especially when I use OpenAI models which are counted as 1x vs Opus which is 3x. I've stopped using all other tools like Claude Code etc.
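Taking those numbers at face value, the multipliers work out like this (a toy sketch; the $10/300 figures and the 1x/3x multipliers come from this comment, not from official pricing, so treat them as assumptions):

```python
# Back-of-envelope math for the Copilot plan described above:
# $10/month for 300 premium requests, with per-model request multipliers.
PLAN_PRICE = 10.00       # USD per month (assumed from the comment)
INCLUDED_REQUESTS = 300  # premium requests per month (assumed)

# 1x for OpenAI models vs 3x for Opus, as stated above.
MULTIPLIERS = {"gpt": 1, "opus": 3}

def effective_requests(model: str) -> float:
    """How many actual calls the plan buys for a given model."""
    return INCLUDED_REQUESTS / MULTIPLIERS[model]

def cost_per_call(model: str) -> float:
    """Effective dollars per call once the multiplier is applied."""
    return PLAN_PRICE / effective_requests(model)

print(effective_requests("gpt"))        # 300.0 calls at 1x
print(effective_requests("opus"))       # 100.0 calls at 3x
print(round(cost_per_call("opus"), 4))  # 0.1 (ten cents per Opus call)
```

So under these assumptions an Opus call effectively costs 3x what a GPT call does, which is the whole argument for sticking to the 1x models.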
giwook 9 minutes ago [-]
I use both GH Copilot as well as CC extensively and it does seem more economical, though I wonder how long this will last as I imagine Github has also been subsidizing LLM usage extensively.
FWIW it feels like GH Copilot is a cheaper version of OpenRouter but with trade-offs like being locked into VSCode and the Microsoft ecosystem overall. I already use VSCode though and otherwise I don't see much downside to using GH Copilot outside of that.
satvikpendem 5 minutes ago [-]
I'm hopeful because Microsoft already has a partnership with, and owns much of, OpenAI, so they can get its models at cost to host on Azure (which they already do) and pass the savings on to the user. This is why Opus is 3x as expensive in Copilot: Microsoft has to buy API usage from Anthropic directly.
creddit 2 minutes ago [-]
Pro used to be $200.
dismalaf 4 minutes ago [-]
> The era of subsidization is over
Of course it is. Returns are diminishing, AGI isn't happening with current techniques but it is good enough to sell, so it's time to monetize. I just got an email from OpenAI as well about ads in their free tier (I signed up once out of curiosity).
sassymuffinz 8 minutes ago [-]
I tried Claude Code for a week straight recently to see what all the hype was about and while it pumped out a bunch of reasonable looking code and features I ended up feeling completely disconnected from my codebase and uncomfortable.
Cancelled the plan I had with them and happily went back to just coding like normal in VSCode with occasional dips into Copilot when a need arose or for rubber ducking and planning. Feels much better as I'm in full control and not trusting the magic black box to get it right or getting fatigue from reading thousands of lines of generated code.
Anyone who says they can effectively review the thousands of lines that Claude might slop out in a day is lying to themselves.
torben-friis 51 seconds ago [-]
>Anyone who says they're able to review thousands of lines effectively that Claude might slop out in a day are lying to themselves.
The amount you can review before burning out is now the reasonable limit, for the same reason that a car is supposed to stay at the speed you can handle and not the max speed of the engine.
Of course, many people are secretly skipping reviews and some dare to publicly advocate for getting rid of them entirely.
deadbabe 7 minutes ago [-]
Not over yet. More hikes will come. It will reach $1000.
satvikpendem 4 minutes ago [-]
That's what I meant by subsidization being over.
operatingthetan 14 seconds ago [-]
The subsidization being "over" would mean we are paying their actual cost or more.
pseudosavant 10 minutes ago [-]
That has me quite tempted. In general, I stay under the Plus limits, but I do watch my consumption. I could use `/fast` mode all of the time, with extra high reasoning, and use gpt-5.4-pro for especially complex tasks. It wasn't worth 10x the price to me before, but 5x is approachable.
xur17 17 minutes ago [-]
Any idea what "5x or 20x more usage" means?
josh_p 15 minutes ago [-]
From their FAQ:

> What's the difference between the two Pro plans?

> Both Pro plans include the same core capabilities. The main difference is usage allowance: Pro $100 unlocks 5x higher usage than Plus (and 10x Codex usage vs. Plus for a limited time), while Pro $200 unlocks 20x higher usage than Plus.
terramex 14 minutes ago [-]
5x more usage than Plus is $100.
20x more usage than Plus is $200.
I see this when I try to upgrade my Plus subscription.
recursive 16 minutes ago [-]
I assume it means 5x if they get to choose. They're the ones enforcing the limits.
reed1234 16 minutes ago [-]
If you pay 200 you get 20x
recursive 15 minutes ago [-]
The price is $100 according to this post. Where is there an option for $200?
orphea 12 minutes ago [-]
You choose at checkout. There it says:

Plan details
5x more usage than Plus: $120/month
20x more usage than Plus: $200/month
recursive 5 minutes ago [-]
So curious that the cost in the comparison is just a flat $100, not "$100 or $200" and yet the usage has the "or". Surely just a lapse in copy editing.
AstroBen 14 minutes ago [-]
seems like this $100 replaced the $200 plan
So.. cheaper?
readitalready 8 minutes ago [-]
No, the same $200 plan is still there; they hid it behind the $100 click-through.
This just adds a $100 plan with 1/4 the usage of the $200 plan.
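The per-unit math bears this out (a toy sketch; it takes "Nx more usage than Plus" at face value and measures usage in Plus-multiples, which is an assumption, not anything OpenAI publishes):

```python
# Rough cost-per-usage-unit comparison of the plans discussed in this thread.
# "usage" is expressed in multiples of the Plus allowance.
plans = {
    "Plus":     {"price": 20,  "usage": 1},
    "Pro $100": {"price": 100, "usage": 5},
    "Pro $200": {"price": 200, "usage": 20},
}

def dollars_per_unit(name: str) -> float:
    """Price divided by usage multiple: what one 'Plus-worth' of usage costs."""
    plan = plans[name]
    return plan["price"] / plan["usage"]

for name in plans:
    print(name, dollars_per_unit(name))
# Plus and Pro $100 both come out to 20.0 per unit;
# Pro $200 comes out to 10.0, i.e. half the unit price.
```

Under these assumptions, the new $100 tier doesn't improve the price per unit over Plus at all; only the $200 tier does.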
gib444 12 minutes ago [-]
Exactly! :)
laacz 7 minutes ago [-]
They are actively exploiting Anthropic's compute shortages. On our team we're pushing for keeping things more or less vanilla and portable, since the best harness today might not be the best one in 6 months.
hackable_sand 3 minutes ago [-]
Can you guys remind me again why you're doing this?
gmig 14 minutes ago [-]
This is an additional offering to the existing plan.
>Our existing $200 Pro tier still remains our highest usage option.
righthand 10 minutes ago [-]
This is like the 2010s hosting price wars.
varispeed 13 minutes ago [-]
What is the difference between Pro and normal mode, apart from the fact that Pro takes ages to finish? I don't see much difference in output quality.
Archerlm 17 minutes ago [-]
Just a rumor, but I heard Altman was adding a timer, which required the R&D dept. to triple.
throwatdem12311 12 minutes ago [-]
I heard it’ll take about a year. Timers are a hard problem to solve.
flextheruler 11 minutes ago [-]
Tell me you're losing market share to competitors without telling me you're losing market share to competitors
azuanrb 10 minutes ago [-]
[dead]
selectively 17 minutes ago [-]
Price drops are nice. Unfortunately, the quality differential versus the competitor is night and day.
And everyone serious uses the API rate billing anyway.
aerhardt 32 seconds ago [-]
> the quality differential versus the competitor is night and day.
This myth about the inferiority of ChatGPT and Codex is becoming a meme.
I have active subscriptions to both. I throw all kinds of data engineering, web development, and machine learning problems at Codex, and I had been working on non-tech tasks in the "Karpathy Obsidian Wiki" [1] style since before he posted about it.
Not only does it crush Claude on cost, it's also significantly better at adherence and overall quality. Claude is there on my Mac, gathering dust, to the point I am thinking of not renewing the sub.
There are plenty of fellow HNers here who feel the same from what I read in the flamewars. I suspect none of us really has a horse in this race and many are half competent (in other threads, they mention they do things like embedded programming, distributed DL systems, etc.)
I'm starting to suspect a vast majority of people pushing the narrative that Claude is vastly better haven't even tried the 5.3 / 5.4 models and are doing it out of sheer tribalism.
Disagree. I use Codex extensively. It just works so well with VS Code and Python. Claude, with its ridiculous limits? No thanks. For some, even xAI is a good fit.
nilkn 10 minutes ago [-]
This take is out-of-date by months (which is an eternity in this space). Codex today has caught up and is very much on par with CC.
satvikpendem 12 minutes ago [-]
I prefer and use 5.4 over Opus, it's simply better, faster, and doesn't glaze me like Claude models want to do for some reason.
[1] https://gist.github.com/karpathy/442a6bf555914893e9891c11519...