Qwen3.6-Plus: Towards Real World Agents (qwen.ai)
Aurornis 2 hours ago [-]
This is their hosted-only model, not an open weight model like they’ve become known for. They got a lot of good publicity for their open weight model releases, which was the goal. The hard part is pivoting from an open weight provider to being considered a competitor to Claude and ChatGPT. Initial reactions are mostly anger from everyone who didn’t realize that the play all along was to give away the smaller models as advertising, not because they were feeling generous.

Comparing to Opus 4.5 instead of the current 4.6 and other last-gen models is clearly an attempt to deceive, which isn’t winning them any points either.

I think there is a moderately large market for models like this that aren’t quite SOTA level but can be served up much cheaper. I don’t know how successful they’ll be in the race to the bottom in this market niche, though. Most users of cheap API tokens are not loyal to any brand and will change providers overnight each time someone releases a slightly better model.

zozbot234 2 hours ago [-]
> not an open weight model like they’ve become known for.

Right, they state that they'll release "smaller" variants openly at some point, with few details as to what that means. Will there be a ~300B variant as with Qwen 3.5? The blog post doesn't say.

true_religion 15 minutes ago [-]
Opus was released in Feb 2026. Even though it feels like a long 2 months have passed, it's not really clear that they were developing this as a competitor to that product.

There's nothing really strange about not competing directly with the best, but rather showing who you are as good as.

miki123211 9 minutes ago [-]
Ah, so that explains the recent wave of Qwen team-member departures.
jstummbillig 58 minutes ago [-]
> Initial reactions are mostly anger from everyone who didn’t realize that the play all along was to give away the smaller models as advertising, not because they were feeling generous.

The naivety around this has been staggering, quite frankly. All of a sudden people were thinking that Meta etc. are releasing free models because they believe in open access and the distribution of knowledge. No, they just suck comparatively. There is nothing to sell. Using it to recruit and generate attention is the best play for them.

miki123211 7 minutes ago [-]
I thought Qwen was releasing open-weight because China can't compete with America (because of people's privacy concerns), so the only thing they could do is salt the ground economically with open models, and make sure everybody loses.

Apparently that wasn't actually the play here.

Gracana 13 minutes ago [-]
I don't think there's so much naivety. People can be aware of the plan and still be frustrated and disappointed when it happens.
zozbot234 9 minutes ago [-]
I'm not frustrated or disappointed, we have lots of models from Qwen already. We haven't really lost anything. And plenty of players only release "smaller" models anyway, so it's hardly unprecedented.
cubefox 2 hours ago [-]
> I think there is a moderately large market for models like this that aren’t quite SOTA level but can be served up much cheaper.

There isn't, pretty much everyone wants the best of the best.

PhilippGille 2 hours ago [-]
The OpenRouter usage stats indicate the opposite: https://openrouter.ai/rankings?view=month
jjice 2 hours ago [-]
OpenRouter usage is likely skewed towards LLMs that are more niche and/or self-hostable on solid hardware that's available but that most consumers don't have on hand. I can imagine Anthropic and OpenAI LLMs often get called directly from their own APIs instead.

At least from my experience and friends of mine, we use OpenRouter for cases where we want to use smaller LLMs like Qwen, but when I've used ChatGPT and Claude, I use those APIs directly.

senordevnyc 42 minutes ago [-]
Same, and my little SaaS is pushing more than 0.1% of the TOTAL volume of tokens on OpenRouter, so the reality is they’re TINY.
vorticalbox 53 minutes ago [-]
What happened around Jan this year (26) that caused such a climb in usage?
Someone1234 2 hours ago [-]
> There isn't, pretty much everyone wants the best of the best.

For direct user interaction or coding problems, perhaps. But as API calls get cheaper, it becomes more realistic to use them for completely automated workflows against data-sets, or as sub-agents called from expensive SOTA models.

For example, in Claude, using Opus as an orchestrator to call Sonnet sub-agents is a popular usage "hack." That only gets more powerful as the Sonnet-equivalent model gets cheaper. Now you can spawn entire teams of small, specialized sub-agents with small context windows and limited scope.
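
A minimal sketch of that orchestrator/sub-agent pattern, assuming the Anthropic Python SDK; the model ids and subtasks are placeholders, not anyone's official recipe:

    # Hedged sketch of the orchestrator/sub-agent pattern described above.
    # Model ids and subtasks are placeholders.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def cheap_subagent(task: str) -> str:
        """Run one narrow, low-cost task on the cheaper model with a small context."""
        reply = client.messages.create(
            model="claude-sonnet-4-5",   # placeholder id for the cheaper model
            max_tokens=1024,
            messages=[{"role": "user", "content": task}],
        )
        return reply.content[0].text

    # The expensive orchestrator only ever sees the sub-agents' summaries,
    # never their full working context.
    subtasks = [
        "Summarize the failing test output pasted below: ...",
        "List every call site of parse_token() in the file snippets below: ...",
    ]
    summaries = [cheap_subagent(t) for t in subtasks]

    plan = client.messages.create(
        model="claude-opus-4-5",         # placeholder id for the orchestrator
        max_tokens=2048,
        messages=[{"role": "user",
                   "content": "Given these findings, propose a fix:\n\n" + "\n---\n".join(summaries)}],
    )
    print(plan.content[0].text)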

alexsmirnov 1 hours ago [-]
Exactly.

I did create my own MCP with custom agents that combine several tools into a single one. For example, WebSearch, WebFetch, and Context7 are all exposed as a single "web research" tool, backed by the cheapest model that passes evaluation. The same goes for codebase research.

Using it with both Claude and Opencode saves a lot of time and tokens.
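
For anyone curious, a rough sketch of that idea using the MCP Python SDK's FastMCP server; the search/fetch/summarize helpers are hypothetical stand-ins for whatever backends you actually wire in:

    # Rough sketch of combining several research tools behind one MCP tool.
    # The helper functions below are hypothetical stand-ins, not real backends.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("web-research")

    def search_web(query: str) -> list[str]:
        """Hypothetical helper: return candidate URLs for the query."""
        return []  # plug in your real search backend here

    def fetch_page(url: str) -> str:
        """Hypothetical helper: fetch a page and strip it to plain text."""
        return ""  # plug in your real fetcher here

    def cheap_summarize(text: str, question: str) -> str:
        """Hypothetical helper: call the cheapest model that passed your evals."""
        return ""  # plug in the cheap model call here

    @mcp.tool()
    def web_research(question: str) -> str:
        """The single tool the agent sees: search, fetch, and condense in one call."""
        pages = [fetch_page(u) for u in search_web(question)[:3]]
        return cheap_summarize("\n\n".join(pages), question)

    if __name__ == "__main__":
        mcp.run()  # Claude Code, Opencode, etc. connect to this as a single tool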

thinkcontext 1 hours ago [-]
> But as API calls get cheaper, it becomes more realistic to use them for completely automated workflows against data-sets

Seems like a huge waste of money and electricity for processes that can be implemented as a traditional deterministic program. One would hope that tools would identify recurrent jobs that can be turned into simple scripts.

thraxil 1 hours ago [-]
No. Right now I'm upset that Google has removed (or at least is in the process of removing) the Gemini 2.0 flash model. We use it for some pretty basic functionality because it's cheap and fast and honestly good enough for what we use it for in that part of our app. We're being forced to "upgrade" to models that are at least 2.5 times as expensive, are slower and, while I'm sure they're better for complex tasks, don't do measurably better than 2.0 flash for what we need. Yay. We've stuck with the GCP/Gemini ecosystem up until now, but this is kind of forcing us to consider other LLM providers.
wolvoleo 10 minutes ago [-]
Not really. It depends on the usecase. For private stuff I'm very happy to take what was SOTA a year or 2 ago if I can have it all running in my home and don't have to share any of my data with some sleazy big tech cloud.

The price is a concern too of course. But privacy is a bigger one for me. I absolutely don't trust any of their promises not to use data for training purposes.

wongarsu 1 hours ago [-]
For coding I want the best. Both I and $work do lots of things besides coding where smaller models like qwen3.5-27b work great, at much lower cost.
joefourier 2 hours ago [-]
Ever hit your daily limit on Claude Code and seen how expensive it is to pay per token?
sidrag22 2 hours ago [-]
Maybe there isn't, but as understanding grows, people will realize that having an orchestration agent delegate simple work to lesser agents is significant not only for cost savings, but also for preserving context window space.
scoopdewoop 2 hours ago [-]
That isn't true. In a Codex or Claude Code instance, sure... but those are not the main users of APIs. If you are using LLMs in a service for customers, costs matter.
Aurornis 2 hours ago [-]
The market for API tokens is bigger than people like you and me (who also want the best) using them for code.

There are a lot of data science problems that benefit from running the dataset through an LLM, which becomes bottlenecked on per-token costs. For these you take a sample subset, run it against multiple providers, and then do a cost-versus-accuracy tradeoff.
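
A hedged sketch of that workflow; call_model() and the prices are hypothetical placeholders for whatever SDKs and price sheets you actually use:

    # Sketch only: score a small labeled sample against each candidate provider,
    # then weigh accuracy against per-token cost.
    def call_model(provider: str, text: str) -> str:
        """Hypothetical: send `text` to `provider` and return its predicted label."""
        return ""  # wire up the real SDK call here

    sample = [  # a small labeled subset drawn from the full dataset
        {"text": "Refund not received after 30 days", "label": "billing"},
        {"text": "App crashes when I rotate my phone", "label": "bug"},
    ]

    assumed_price_per_m_tokens = {"cheap-model": 1.0, "sota-model": 20.0}  # made-up numbers

    for provider, price in assumed_price_per_m_tokens.items():
        correct = sum(call_model(provider, row["text"]) == row["label"] for row in sample)
        print(f"{provider}: {correct}/{len(sample)} correct at ~${price}/M tokens")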

The market for API tokens is not just people using OpenCode and similar tools.

regularfry 2 hours ago [-]
Everyone may want the best, but the amount of AI-addressable work outstrips the budget available for buying the best by quite a wide margin.
noman-land 2 hours ago [-]
OpenCode allows for free inference tho.
wolttam 1 hours ago [-]
Nope. I get very good results from GLM 5 and 5.1. I’m not working on anything so complex and groundbreaking that I need the best.

Coding is a rung on the ladder of model capability. Frontier models will grow to take on more capabilities, while smaller more focused models start becoming the economical choice for coding

esafak 1 hours ago [-]
That's only because current models don't saturate people's needs. Once they are fast and smart enough people will pick cheaper ones.
dev_l1x_be 1 hours ago [-]
How stupid does somebody have to be to mix up Opus with Qwen?
cieplok 22 minutes ago [-]
OP wasn't talking about confusing Opus with Qwen, but rather about people being confused that Qwen3.6-Plus isn't available as an "open weight" model for self-hosting.
Alifatisk 1 hours ago [-]
I understand people's reactions to the Qwen team comparing against Opus 4.5 instead of 4.6, and to them comparing against Gemini Pro 3.0 instead of 3.1. But calling it misleading is a bit of a stretch in my eyes; people here are acting like we immediately forgot how previous generations performed just because a new version is released.

This field is moving at an incredible pace, with the providers releasing a new model every quarter or so. The amount of criticism is a bit overblown in my opinion. The benchmarks still look very good to me. I’ve used GLM-5 (latest is GLM-5.1) and Kimi K2.5; they are decent and get the job done, so seeing how this Qwen model performs compared to them is kinda impressive.

Also, why are so many pointing out that this model is not open-weight as if this were the first time they've done that? Qwen-3.5-Plus and Qwen-3-Max are also closed source. This is not something new.

I think Qwen trying to catch up to the SOTA models is still healthy for us, the consumers. Sure, it’s sad news that this version is closed-weight, but I won’t downplay their progress.

nickvec 59 minutes ago [-]
I think it’s more the principle of deception that upsets people. Imagine if Apple released a new iPhone and publicly compared its specs to some previous gen Android. It’s not in good faith.
Alifatisk 50 minutes ago [-]
Why are we so quick to call it deception? Their figure is quite clear. They aren't fiddling with the graph or hiding the labels; they clearly state which models it compares against. But I agree with the sentiment that the standard practice should be to bench against the latest SOTA models.
patates 16 minutes ago [-]
Even if openly stated, why would they be comparing to a previous generation if not for deception?

Laziness? Lack of time? It's not like the latest generation of SOTA models was released yesterday.

simonw 54 minutes ago [-]
Pretty solid Pelican: https://gist.github.com/simonw/ca081b679734bc0e5997a43d29fad...

I used the https://modelstudio.alibabacloud.com/ API to generate that one, which required signing up for an account and attaching PayPal billing - but it looks like OpenRouter are offering it for free right now so I could have used that: https://openrouter.ai/qwen/qwen3.6-plus:free
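
For reference, the OpenRouter route speaks the usual OpenAI-compatible chat API, so a minimal call looks roughly like this (the model id comes from the URL above; everything else is standard boilerplate, not anything specific to this release):

    # Minimal sketch of calling the free OpenRouter route through its
    # OpenAI-compatible endpoint; the prompt mirrors the pelican test above.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="qwen/qwen3.6-plus:free",
        messages=[{"role": "user",
                   "content": "Generate an SVG of a pelican riding a bicycle"}],
    )
    print(resp.choices[0].message.content)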

jgbuddy 2 hours ago [-]
Worth noting that this model, unlike almost all Qwen models, is not open-weight, nor is the parameter count disclosed. Also odd that it is compared against Opus 4.5 even though 4.6 was released like 2 months ago.
pferdone 2 hours ago [-]
They said in the last paragraph[0]:

"[...] In the coming days, we will also open-source smaller-scale variants, reaffirming our commitment to accessibility and community-driven innovation. [...]"

[0] https://qwen.ai/blog?id=qwen3.6#summary--future-work

deaux 2 hours ago [-]
> we will also open-source smaller-scale variants

In other words, like GP said, this Qwen3.6-Plus model is not open-weight unlike the other Qwen models.

thepasch 2 hours ago [-]
Qwen3.5-Plus is the largest variant of the open weight Qwen3.5 model, expanded with a 1M context window and fine-tuned on the Qwen-native harness’ specific tools.
dgb23 2 hours ago [-]
In a practical sense, I'm primarily interested in small to medium sized models being open. I think that might be common sentiment.

However, my hope is that there will be at least somewhat competitive big and open models as well, from an ethical/ideological perspective. These things were trained on data that was provided by people without their consent, so they should at least be publicly accessible or even public domain.

pferdone 2 hours ago [-]
> unlike almost all qwen models

Almost all means there have been ones before that were not open. So, no contradiction there.

kennywinker 2 hours ago [-]
> unlike the other Qwen models

Please send the download link for qwen 3.5-plus.

Also, who cares? If you have the hardware to run a ~400b model I don’t think you count as a home user anymore.

cpburns2009 57 minutes ago [-]
If Opus 4.6 was only released two months ago, then it seems reasonable that Qwen hasn't finished fully comparing against the latest Opus.
zozbot234 2 hours ago [-]
I wouldn't say "almost all" seeing as -MAX and -Omni models were always closed.
furyofantares 1 hours ago [-]
I'll diverge from some of these comments: I don't find it misleading to compare to Opus 4.5.

I can remember how good Opus 4.5 was. If I'm considering using this, it's most informative to me to compare to the model it's closest to that I have familiarity with.

I'm obviously not switching to this if I want the best model. I'm switching if I'm hopeful that the smaller versions are close to it, or if I want to have more options for providers, or for any other reasons unrelated to getting the highest quality responses possible.

bensyverson 1 hours ago [-]
Exactly this. If you can get something close to Opus 4.5 for free, that's noteworthy. I may not use it for the most critical pieces of my app, but not everything I do is galaxy-brain coding.
srmatto 3 hours ago [-]
The benchmarks provided are for Opus 4.5, not for the latest Opus 4.6, and Qwen is still lagging in a lot of them.
Aurornis 2 hours ago [-]
There is no reason to benchmark against Opus 4.5 when Opus 4.6 has been out so long, other than to be misleading.
coldtea 60 minutes ago [-]
I can see reasons, among others that 4.5 was the established model while they were preparing this version. "So long" is merely 2 months, and Qwen 3.5 itself was released less than 2 months ago. They were likely already finalizing 3.6 before the 3.5 official launch, and as 4.6 came out.

In any case, Claude fanboyism aside, having other players inch closer to similar performance is always useful. Even if they are "6 months behind" as the pace slows down, this guarantees that there's no huge moat and they'll eventually either get to where the SOTA is, or the difference won't be that big.

I'd rather put fewer eggs in 2-3 big player baskets.

thegeomaster 2 hours ago [-]
And it seems they've decided to go closed-source for their largest, best models.
FuckButtons 2 hours ago [-]
3.5-Plus was also only available via API. I don’t know what the long-term business model for open weights is; I hope there is one, but it seems foolish to assume that companies will be willing to spend millions of dollars of compute on an asset worth zero in perpetuity.
coldtea 1 hours ago [-]
They always did that. Did they say anywhere they'd open all their models? They still have a business.
kgeist 2 hours ago [-]
They've always had closed-source variants:

- Qwen3.5-Plus

- Qwen3-Max

- Qwen2.5-Max

etc. Nothing really changed so far.

wolvoleo 11 minutes ago [-]
Nice, I hope there will also be a small open version of it.
linolevan 2 hours ago [-]
I’m surprised that people are surprised. Qwen has been hosting private plus and max variants for a while now.
woeirua 2 hours ago [-]
Just more evidence that the B tier models are six months behind. Ultimately that’s good. Opus 4.6 level intelligence will be cheap later this year!
shubhamgarg86 5 minutes ago [-]
The comparison is helpful, but I'd want to see how it handles edge cases.
karimf 2 hours ago [-]
> In the coming days, we will also open-source smaller-scale variants, reaffirming our commitment to accessibility and community-driven innovation.
wg0 54 minutes ago [-]
It hallucinates a lot more than Sonnet or even MiniMax M2.5. Especially in tool calls, it would end up duplicating the content in code files, then realising it later and getting stuck in a loop.
giancarlostoro 2 hours ago [-]
I hope their open source variants are just as good, having a 1 million token window for a fully offline model would be VERY interesting.
sosodev 2 hours ago [-]
I don't know how well it performs, but you can extend Qwen3.5 to 1 million token context using YaRN. Also, Nemotron 3 Super was recently released and scales up to 1 million token context natively.
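
If it follows the pattern Qwen documented for earlier releases, the YaRN extension is just a rope_scaling override before loading; the model id, factor, and native context length below are assumptions, not values checked against any real checkpoint:

    # Hedged sketch of the YaRN rope-scaling pattern Qwen has documented for
    # earlier releases; model id, factor, and base context are assumptions.
    from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3.5-27B"  # placeholder id borrowed from this thread
    config = AutoConfig.from_pretrained(model_id)
    config.rope_scaling = {
        "rope_type": "yarn",
        "factor": 4.0,                               # assumed: native context x 4
        "original_max_position_embeddings": 262144,  # assumed native context length
    }

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, config=config)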
zkmon 53 minutes ago [-]
It is no longer available on OpenRouter. They say "going away on 3-March", but it's already gone!
Caum 2 hours ago [-]
The agent benchmarks here are interesting but I'd love to see how Qwen3.6-Plus handles long-horizon tasks where it needs to recover from its own mistakes. Most agent evals test the happy path. The hard part is when the model takes a wrong action at step 3 and needs to recognize and backtrack at step 15. Has anyone stress-tested this in a real dev workflow?
throwaw12 2 hours ago [-]
I would love to hear from people using both (Claude Code OR Codex) AND (Qwen) and their experience with Qwen models, are they on par, or how far are they?
scottcha 1 hours ago [-]
I switch between Claude Code (Opus/Sonnet) and Qwen (OpenCode, OpenClaw) multiple times throughout the day, and Qwen 3.5 is really nice. I also use Kimi K2.5 and GLM-5 pretty often, and I'm starting to get the sense that the agent tool is becoming a little more important than the model at this level, as long as tool calling and prompt quality are all configured correctly by the provider.
Art9681 2 hours ago [-]
How convenient of them to compare themselves to the last generation Opus and GPT models to make their model look better than it really is.
MarsIronPI 2 hours ago [-]
It's not open weights so I'm not interested.
esafak 2 hours ago [-]
Does anyone have experience with Alibaba's coding plan? Not that I'm very tempted at $50/month...
eis 2 hours ago [-]
Quite strong results in the benchmarks, but why Gemini 3 Pro instead of 3.1? Why only for a few of the benchmarks? Why is OpenAI not there in the coding benchmarks? Why Opus 4.5 and not 4.6? It just jumps out at me as a bit strange.

As always, we'll have to try it and see how it performs in the real world, but the open weight models of Qwen were pretty decent for some tasks, so I'm still excited to see what this brings.

daft_pink 2 hours ago [-]
Not really interested in using models hosted on Alibaba Cloud.

I like Qwen local for its privacy, but I trust the privacy of Google/OpenAI/Anthropic more than Alibaba's.

the_pwner224 2 hours ago [-]
I had the exact opposite reaction. I stopped using OpenAI/Google a while ago due to privacy and moved to local Qwen, now I'm considering using Alibaba cloud. You know Google and OpenAI are going to share everything with the US government and Western ad networks. But with Alibaba, who cares if the CCP & Chinese ad networks have a comprehensive profile on me? From a pragmatic perspective it's much better for (outcomes related to) privacy.
zobzu 2 hours ago [-]
So if China has the data it's good, but if the US has the data it's bad, got it lol.

The US actually has laws around this and they aren't sharing very much with the US gov today. China shares 100% as required by law. And neither cares much about "how long do I cook eggs for", but they do care about code generation a lot.

wongarsu 1 hours ago [-]
From an espionage perspective your own government is the safest. But from a civil rights perspective your own government is your most immediate threat. China isn't going to arrest me for my opinions on Netanyahu; my own government could.

And the US government has repeatedly shown that it is very interested in collecting all the data available, just like China. In China this is simply done in the open, while the US has a veneer of protection for citizens. But where data collection is forbidden by law, they either ignore the law or ask another Five Eyes member to do the spying and share the results. Both are well documented.

thereitgoes456 2 hours ago [-]
> so if China has the data good, us has the data bad

It's not that, it's about relative risk to your own life. Asking questions about "DEI" for example is much more likely to have adverse effects on your life if you ask Grok or an OpenAI chatbot, though still not that likely.

CamperBob2 2 hours ago [-]
As with all arguments equivalent to "I have nothing to hide, so I have nothing to fear," it may be true now, but it may not be true later. The only certainty is that this will not be your call.
the_pwner224 2 hours ago [-]
Agreed
rvz 2 hours ago [-]
> I like Qwen local for its privacy, but I trust the privacy of Google/OpenAI/Anthropic more than Alibaba's.

None should be trusted, unless you are running them locally.
