How's the metaverse doing? It was the next big thing that we were all going to be working inside... was that like 3 months ago?
Maybe they need to mine more libra coin first? or is it diem now? is that even still part of meta?
I'm sure this new AI is super intelligent and super awesome and will be writing all the code, making all the blog posts, and generating all our youtube shorts in 6 months.
gallerdude 34 minutes ago [-]
This would have been an amazing release 6 months ago. But the industry moves so fast, this is a trite release. Maybe it’s best for Meta to sell their superintelligence division. I don’t think Zuck’s vision is particularly compelling.
dgellow 27 minutes ago [-]
I never understood why meta decided to join the race. They don’t sell compute like Google or Microsoft. Why not let others do the hard work and integrate their LLMs in your systems if needed?
I assume it’s because they have Instagram, Facebook, WhatsApp, and Threads data and feel they should be the ones using it for training, but it’s really not obvious how having a frontier AI lab benefits their business
observationist 2 minutes ago [-]
Adtech Money. They've got GPUs, they've got the infrastructure, and they've got the advertisement platform, and the point is getting AI that can exploit the adtech and create a flywheel effect, maximizing return from the data they collect from Insta, WhatsApp, Facebook, etc.
It's not just about LLMs, it's about being able to model consumers and markets and psychology and so on. Meta is also big on the manipulation side of things: any sort of cynical, technically legal technological exploitation of humans you can imagine, they're doing it for profit.
xnx 8 minutes ago [-]
Zuck is trying to convince himself he's good, and not just lucky.
swyx 3 minutes ago [-]
you don't understand why Zuck, who paid $1B for Instagram when it had no revenue and 7 employees because he is paranoid about platform shifts, decided to join the race for what may well be the biggest platform shift in human history?
gallerdude 25 minutes ago [-]
I’m sure there’s more to it than this, but it feels like Zuck has pet interests like VR and now AI.
alex1138 9 minutes ago [-]
But no account support, that's boring
Or any quality control (people missing posts)
Or banning the people who should be banned while leaving everyone else alone
Because Zuck has chronic FOMO, he's said as much himself
This is Zuck: https://news.ycombinator.com/item?id=4151433 or https://news.ycombinator.com/item?id=10791198
zeroonetwothree 19 minutes ago [-]
But then how will Zuck win the billionaire dick measuring contest?
throwaw12 19 minutes ago [-]
> I don’t think Zuck’s vision is particularly compelling.
But he has to do it anyways, otherwise Meta can be disrupted easily.
Google and Apple have hardware and distribution channels for their products
Amazon has the marketplace and cloud
Microsoft has enterprise and cloud
Meta is always looking for ways to stay afloat
xnx 9 minutes ago [-]
Meta has 3.5 billion daily active users
gordonhart 31 minutes ago [-]
A new model comparable (ish) to the Claude/Gemini/GPT flagships is a big deal for the industry and for Meta even if it doesn't set the new frontier.
blahblaher 12 minutes ago [-]
Why would you use this instead of the other more proven models? Unless it's significantly cheaper. The general population mostly wants it free, and the more professional users are willing to pay for good/better responses.
gallerdude 26 minutes ago [-]
I’m not sure. If it was open source, certainly. But 4th place doesn’t really matter if you have nothing different to add.
lairv 22 seconds ago [-]
If the model is truly on par with Opus 4.6/Gemini 3.1/GPT 5.4 this still puts MSL in the frontier lab category, which is no small feat given that they pretty much rebooted last year
Many labs aren't able to keep up with the frontier: xAI, Mistral, etc.
datadrivenangel 10 minutes ago [-]
Fourth place means you're not reliant on any of the external providers for internal AI use, which is important for organizational health and negotiating with those other providers.
zozbot234 26 minutes ago [-]
Their new Contemplating mode gives this model a Deep Research ability (akin to existing models from GPT and Gemini) that might make it quite comparable to the just-announced Mythos.
solenoid0937 13 minutes ago [-]
Mythos is a much bigger pre train, Contemplating is not the same thing.
zozbot234 9 minutes ago [-]
> Mythos is a much bigger pre train
Do we have data to substantiate that claim?
solenoid0937 2 minutes ago [-]
It's pretty common knowledge. Spud is the only other PT comparable with Mythos.
Both Spud and Mythos can also scale via inference time compute.
Meta simply did not have enough compute online, long enough ago, to have a similar PT.
throwaw12 21 minutes ago [-]
How is it that Meta spent so much money on talent and hardware, but the model barely matches Opus 4.6?
Especially looking at these numbers after Claude Mythos, it feels like either Anthropic has some secret sauce, or everyone else is dumber compared to the talent Anthropic has
strulovich 15 minutes ago [-]
Meta made a bunch of mistakes, and it looks like Zuckerberg spent a lot of money on talent and made big swings to change that (about a year ago)
I think it’s unrealistic to expect them to come back from that pit to the top in one year, but I wouldn’t rule them out getting there with more time. That’s a possible future. They have the money and Zuckerberg’s drive at the helm. It can go a long way.
coffeebeqn 2 minutes ago [-]
Matching Opus 4.6 would be pretty good? It’s the SOTA actually available model
impulser_ 14 minutes ago [-]
It's not even on par with Sonnet. It's on par with open source models, and it's not even open source; it sits behind a private preview API.
Might as well not release anything.
solenoid0937 15 minutes ago [-]
It's benchmaxxed.
If they actually matched Opus 4.6 on such a short timeline, it would have been mighty impressive. (Keep in mind this is a new lab and they are prohibited from doing distills.)
throwaw12 14 minutes ago [-]
how do you know it's benchmaxxed?
solenoid0937 48 seconds ago [-]
Friends at Meta with access to the model + personal experience at Meta.
Meta's performance process is essentially "show good numbers or you're out." So guess what people do when they don't have good numbers? They fudge them. Happens all across the company.
zozbot234 16 minutes ago [-]
> has some secret sauce
Yup, it's called test-time compute. Mythos is reportedly much slower than Opus, enough to seriously annoy users trying to use it for quick-feedback-loop agentic work. It is most properly compared with GPT Pro, Gemini DeepThink, or this latest model's "Contemplating" mode. Otherwise you're just not comparing like for like.
throwaw12 12 minutes ago [-]
> it's called test-time compute.
Why can't others easily replicate it?
coder68 2 minutes ago [-]
I have not delved into the theory yet, but it seems that the smaller open-source models already do this to an extent. They have fewer parameters, but spend much more time/tokens reasoning as a way to close the performance gap. If you look at "tokens per problem" on https://swe-rebench.com/ it seems to be the case, at least.
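The parameters-vs-tokens tradeoff above can be sketched with a toy back-of-the-envelope calculation (the model sizes and token counts below are made up for illustration, not real benchmark data):

```python
# Toy illustration: inference cost scales roughly with active
# parameters x generated tokens, so a smaller model can "spend"
# extra reasoning tokens and match a larger model's total compute.

def inference_flops(active_params: float, tokens: int) -> float:
    """Rough decoder-only estimate: ~2 FLOPs per active parameter per token."""
    return 2.0 * active_params * tokens

# Hypothetical numbers for illustration only.
big = inference_flops(active_params=300e9, tokens=2_000)    # large model, short answer
small = inference_flops(active_params=30e9, tokens=20_000)  # small model, long reasoning trace

print(f"large model: {big:.2e} FLOPs")   # -> 1.20e+15
print(f"small model: {small:.2e} FLOPs") # -> 1.20e+15
# 10x fewer parameters but 10x more tokens: equal total compute.
```

Whether equal compute buys equal quality is exactly what the swe-rebench "tokens per problem" column lets you eyeball.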
username223 6 minutes ago [-]
Facebook is working with the talent that can’t find a job at some other company. It doesn’t surprise me they ship mediocrity.
wotsdat 17 minutes ago [-]
[dead]
toddmorey 26 minutes ago [-]
Question: since they've rebooted their approach to AI... have they given up on open models? There's no mention of open source or open weights or access to the models beyond their hosted services.
thegeomaster 22 minutes ago [-]
Alexandr Wang on Twitter [0] mentioned open source plans:
"this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions. incredibly proud of the MSL team. excited for what’s to come!"
This may be too large to run locally anyway. Maybe they will distill down some smaller open versions later.
[0] https://x.com/alexandr_wang/status/2041909388852748717
creddit 15 minutes ago [-]
Ran some of my internal benchmarks against this and I'm very unimpressed. I don't think this moves them into the OAI v Anthropic v Gemini conversation at all.
Major analytical errors in their response to multiple of my technical questions.
creddit 3 minutes ago [-]
Playing with this some more, and it's actively not good. Just basic mathematical errors riddling responses. Did some basic adversarial testing where its responses are analyzed by Gemini, and Gemini is finding basic math errors in every relatively simple ask I make (relative to what Opus, Gemini, or GPT can handle). Yikes.
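The cross-model audit described above is easy to sketch. This is a minimal stub, not a real harness: both functions below stand in for actual API calls (the names, the hardcoded answer, and the direct arithmetic check are all hypothetical placeholders):

```python
# Minimal sketch of model-vs-model adversarial checking: one model
# answers, a second "auditor" verifies the arithmetic. Stubs only.

def model_under_test_answer(question: str) -> str:
    # Stand-in for a call to the model being evaluated.
    return "17 * 24 = 418"  # deliberately wrong, to show the audit firing

def auditor_check(answer: str) -> bool:
    # Stand-in for a second model auditing the claim; here we just
    # recompute the product directly instead of calling an API.
    lhs, rhs = answer.split("=")
    a, b = (int(x) for x in lhs.split("*"))
    return a * b == int(rhs)

answer = model_under_test_answer("What is 17 * 24?")
print("audit passed" if auditor_check(answer) else f"audit failed: {answer}")
# -> audit failed: 17 * 24 = 418
```

In a real setup the auditor would be a second frontier model prompted to find errors, which is roughly what the Gemini cross-check above amounts to.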
zurfer 37 minutes ago [-]
> Muse Spark is available today at meta.ai and the Meta AI app. We’re opening a private API preview to select users.
m4r1k 31 minutes ago [-]
So no open weights... why would one choose Muse Spark instead of Anthropic, OpenAI, or Google models, all of which come with good-to-amazing harnesses?
visioninmyblood 22 minutes ago [-]
https://meta.ai/ is where you can try it; it seems the API is not publicly accessible yet. I feel they are very late to the game and do not show value to customers over other models.
p_stuart82 4 minutes ago [-]
late isn't the problem. private preview api and no reason to switch. that's just another hosted model
oliver236 15 minutes ago [-]
so glad it's beating all the others on bioweapons refusal. this is what i most wanted out of the latest SOTA model
wmf 14 minutes ago [-]
Zuck has a lot more experience being summoned before Congress than you.
khalic 35 minutes ago [-]
Oh good, if they built a lab, I’m sure they took the time to precisely define what they mean by super intelligence? Right? …
52-6F-62 8 minutes ago [-]
If this is super intelligence, then it follows we must all be super-duper intelligent.
sidcool 30 minutes ago [-]
Will experiment with the model. But I am scared of sharing any information with the Zuck ecosystem.
Artgor 35 minutes ago [-]
I'm cautiously waiting for the feedback from the first users.
Meta has produced a lot of great models (Llama); maybe this is a comeback... but I'm cautious, as the jump in quality is almost too big.
Also, I think people aren't used to the fact that using these models requires meta.ai or the Meta AI app.
solenoid0937 28 minutes ago [-]
My Meta friends say it's benchmaxxed af
conradkay 25 minutes ago [-]
It doesn't seem benchmaxxed: the ARC-AGI-2 score is quite bad (42.5%, vs. 76.1% for GPT 5.4) and coding is okay. But maybe this is the best Meta can do even while benchmaxxing.
The impressive part is multimodality, very plausible since there's less focus there by other labs (especially Anthropic)
santiagobasulto 24 minutes ago [-]
This looks like a very interesting and promising model, especially after Llama lost so much ground recently. I hope they release the weights
ComputerGuru 11 minutes ago [-]
So does this confirm the end of llama?
chrsw 32 minutes ago [-]
So Meta is not releasing open source models anymore?
Until you actually try the model yourself, assume any benchmark presented to you is part of the model's marketing material, as it is not independently verified and is completely biased.
The same is true with any other model, unless otherwise stated.
In the next few days, we'll see who Meta has paid to promote this model on social media.
OsrsNeedsf2P 24 minutes ago [-]
The only benchmark they show against SOTA models is in bioweapons refusal. Lol. Lmao, even.