Really happy to see this and will give it a good spin. They seem to be doing things the right way in my subjective opinion:
""" To implement this filter, we begin by ranking URL domains according to the volume of
texts they contribute to the FineWeb (Penedo et al., 2024a) and FineWeb-2 (Penedo et al.,
2025) corpus, as an approximation of web-level English and multilingual data. From this
ranking, we select the top one million English domains and the top one million non-English
domains. Due to domain overlap and the fact that some sites are now offline, the total
number of accessible robots.txt files is smaller than two million. For each domain that
remains reachable, we retrieve its robots.txt file as of January 2025 and examine the
directives relevant to AI training. In particular, we focus on those targeting the AI-specific
user agents listed in Appendix A. Any contents blocked by the current robots.txt is
removed retroactively from the entire 2013-2024 range of the training dataset. We follow
an opt-out policy, that is, if the corresponding robots.txt files are not available, we
consider the data usable for training. The filtering process results in an estimated token
loss of approximately 8% in English data and 4% in multilingual data.
"""
mycall 13 hours ago [-]
> Any contents blocked by the current robots.txt is removed retroactively from the entire 2013-2024 range of the training dataset
Why not check historical versions of the robots.txt (e.g. archive.org) and limit the retroactive cutoff to a certain date range, parsing the corresponding robots.txt accordingly? That might increase the corpus size while staying within legal and fair use boundaries.
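A rough sketch of that idea against the public Wayback Machine CDX API, which can list historical robots.txt snapshots for a domain in a date range (error handling and snapshot parsing are left out):

    import requests

    def robots_snapshots(domain, start="20130101", end="20241231"):
        # Query the Wayback Machine CDX index for archived robots.txt captures.
        resp = requests.get(
            "http://web.archive.org/cdx/search/cdx",
            params={
                "url": f"{domain}/robots.txt",
                "from": start,
                "to": end,
                "output": "json",
                "filter": "statuscode:200",
                "fl": "timestamp,original",
            },
            timeout=30,
        )
        rows = resp.json() if resp.text.strip() else []
        # First row is a header; the rest are [timestamp, original_url] pairs.
        return [f"http://web.archive.org/web/{ts}/{url}" for ts, url in rows[1:]]

    print(robots_snapshots("example.com")[:3])

Each crawl period could then be checked against the robots.txt that was actually in force at the time, rather than today's file.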
lllllm 13 hours ago [-]
Common Crawl anyway respects the CCBot opt-out every time they do a crawl.
we went a step further because back then (2013 is our oldest training data) LLMs did not exist, so website owners opting out today of AI crawlers might like the option to also remove their past content.
arguments can be made either way but we tried to remain on the cautious side at this point.
we also wrote a paper on how this additional removal affects downstream performance of the LLM https://arxiv.org/abs/2504.06219 (it does so surprisingly little)
3np 3 hours ago [-]
I imagine coverage is sparse enough to not be worth it.
lllllm 13 hours ago [-]
martin here from the apertus team, happy to answer any questions if i can.
the full collection of models is here: https://huggingface.co/collections/swiss-ai/apertus-llm-68b6...
PS: you can run this locally on your mac with this one-liner:
pip install mlx-lm
mlx_lm.generate --model mlx-community/Apertus-8B-Instruct-2509-8bit --prompt "who are you?"
trickstra 2 hours ago [-]
Hi, your "truly open" model is "gated" on Huggingface, restricting downloads unless we agree to "hold you harmless" and share our contact info. Can you fix this please, either by removing the restriction, or removing the "truly open" claim?
lllllm 2 minutes ago [-]
We hear you. Nevertheless, this is one of the very few open-weights and open-data LLMs, and the license is still very permissive (compare, for example, to Llama). Personally, of course, I'd like to remove the additional click, but the universities also have a say in this.
trcf22 5 hours ago [-]
Great job!
Would it be possible to know what the cost of training such a model was?
menaerus 3 hours ago [-]
From their report:
> Once a production environment has been set up, we estimate that the model can be realistically trained in approximately 90 days on 4096 GPUs, accounting for overheads. If we assume 560 W power usage per Grace-Hopper module in this period, below the set power limit of 660 W, we can estimate 5 GWh power usage for the compute of the pretraining run.
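A quick back-of-the-envelope check of that figure, using only the numbers quoted above:

    # Back-of-the-envelope check of the quoted 5 GWh estimate.
    gpus = 4096
    watts_per_module = 560   # assumed average draw per Grace-Hopper module
    hours = 90 * 24          # ~90 days of training

    energy_gwh = gpus * watts_per_module * hours / 1e9
    print(round(energy_gwh, 2))   # ~4.95 GWh, consistent with the report's ~5 GWh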
Key features
Fully open model: open weights + open data + full training details including all data and training recipes
Massively Multilingual: 1811 natively supported languages
Compliant: Apertus is trained while respecting opt-out consent of data owners (even retrospectively), and avoiding memorization of training data
lyu07282 3 days ago [-]
Their struggle with the Nvidia driver bugs they had to work around was very relatable. You'd think that if someone buys 10,752 of their high-end GPUs, they'd get some support with it.
hodgehog11 12 hours ago [-]
Agreed, but the problem seems to be even worse with AMD from what I hear, or at least it was when I checked with some of my HPC buddies a little over a year ago. Constant driver bugs and crickets from upstream "support".
_zoltan_ 16 hours ago [-]
did I miss a blog on this?
lllllm 14 hours ago [-]
we haven't had time to write one yet, but there is the tech report, which already has a lot of details
menaerus 2 hours ago [-]
The report is packed with interesting details. The engineering challenges and solutions chapter especially shows how things that are supposed and expected to work break when put through massive scale. Really difficult bugs. Great writeup.
lllllm 1 minutes ago [-]
thank you!
Bromeo 4 days ago [-]
Looks like the performance is pretty decent, somewhere around Llama 3.1 for general knowledge (Table 17) but still a bit behind in Code and Reasoning (Table 18). Llama 3.1 was released about one year ago.
esafak 16 hours ago [-]
There's an interesting "Swiss AI Charter" on pg. 107.
nickpsecurity 4 days ago [-]
Upvoting to encourage discussion of these differentiators:
"Apertus is a 70B and 8B parameter language model designed to push the boundaries of fully-open multilingual and transparent models. The model supports over 1000 languages and long context, it uses only fully compliant and open training data, and achieves comparable performance to models trained behind closed doors."
"pretrained on 15T tokens with a staged curriculum of web, code and math data"
"open weights + open data + full training details including all data and training recipes"
"Apertus is trained while respecting opt-out consent of data owners (even retrospectivey), and avoiding memorization of training data"
Mars008 3 days ago [-]
At least not "open source"
> "open weights + open data + full training details including all data and training recipes"
Is it reproducible?
> respecting opt-out consent of data owners (even retrospectivey)
Were they notified and given an option to opt out? Owners and authors are not the same. Data owners aren't copyright owners either.
> avoiding memorization of training data
Not convincing.
ujjkel9938 3 days ago [-]
I saw some of the pretraining code on GitHub, but not the post-training.
Congratulations to the Apertus team! Love the name, which in addition to "open" in Latin reminds me of Pilatus.
cwillu 3 hours ago [-]
“The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model.”
I can't imagine that this actually complies with the law.
lastdong 4 days ago [-]
In my opinion, we need more models trained on fully traceable and clean data instead of closed models that we later find out were trained on Reddit and Facebook discussion threads.
johntash 10 hours ago [-]
I want to see something trained _only_ on stuff like encyclopedias, programming books, etc. I'm interested in how different it would be compared to something with a lot of social media in it.
ekianjo 7 hours ago [-]
Better to do a fine-tune or a LoRA than a full retraining from scratch.
dcreater 14 hours ago [-]
I want this to succeed and hope it does. But the tea leaves don't look good at the moment:
- model sizes that the industry was at 2-3 gens ago (llama 3.1 era)
- Conspicuous lack of benchmark results in announcements
- not on openrouter, no ggufs as yet
quantizations: available now in MLX https://github.com/ml-explore/mlx-lm (gguf coming soon, not trivial due to new architecture)
model sizes: still many good dense models today lie in the range between our small and large chosen sizes
dcreater 14 hours ago [-]
Thank you! Why are the comparisons to llama3.1 era models?
lllllm 13 hours ago [-]
we compared to GPT-OSS-20B, Llama 4, Qwen 3, among many others. Which models do you think are missing, among open weights and fully-open models?
Note that we have a specific focus on multilinguality (over 1000 languages supported), not only on English.
kamranjon 12 hours ago [-]
How did it compare with Gemma 3 models? I’ve been impressed with Gemma 27b - but I try out local models frequently and I’m excited to boot up your 70b model on my 128gb MacBook Pro when I get home!
dcreater 7 hours ago [-]
ah I'm sorry, I missed that - I'm not that blind usually..
Imagine regulators doing their job for once and creating a clean regulation that removes the uncertainty about the liability for such releases. Then they could just slap Apache or MIT on it and call it a day, and wouldn't need to collect personal data to comply with the "acceptable use policy".
WhitneyLand 14 hours ago [-]
This is an impressive milestone.
It’s easy to become jaded with so many huge models being released, but the reality is they are still from a relatively small group of countries.
For example, India has no indigenous models this big despite having a world-class talent pool.
porridgeraisin 4 hours ago [-]
> talent pool
Capital though ;)
[I am a grad student here in reinforcement learning]
Anyways, among all the VC/made-at-home driven snake oil, I'd say you should look at sarvam.ai; they are the most focused and no-nonsense group. They have a few good from-scratch models (I believe up to 7B or 14B), as well as a few Llama finetunes. Their API is pretty good.
The main thing folks here are attempting is to get LLMs good at local Indian languages (and I don't mean Hindi). I don't think people see value in creating an "indigenous Llama" that doesn't have that property. For this, the main bottleneck is data (relatively speaking, there is zero data in those languages on the internet), so there's a team, AI4Bharat, whose main job is curating datasets good enough to get stuff like _translation_ and other NLP benchmarks working well. LLMs too, for which they work with sarvam frequently.
coalteddy 10 hours ago [-]
Very cool. Love this. Was the training more heavily weighted towards Swiss languages, and how does the model perform on Swiss languages compared to others?
Are there any plans for further models after this one?
lllllm 1 hours ago [-]
The pretraining (so 99% of training) is fully global, in over 1000 languages without special weighting. The posttraining (see Section 4 of the paper) also had as many languages as we could get, and did upweight some languages. The posttraining can easily be customized to any other target languages.
tarruda 14 hours ago [-]
Is there any practical method to verify that the model was trained from the reported dataset?
lllllm 14 hours ago [-]
we released 81 intermediate checkpoints of the whole pretraining phase, plus the code and data to reproduce it, so a full audit is surely possible - still, it would depend on what you consider 'practical' here.
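For instance, loading any one of the intermediate checkpoints with standard tooling and scoring it on a sample of the released data is one cheap way to spot-check the reported training trajectory. The repo id and revision below are placeholders, not the team's actual naming scheme.

    # Sketch only: model_id and revision are hypothetical placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "swiss-ai/Apertus-8B"   # placeholder repo id
    revision = "checkpoint-100000"     # hypothetical intermediate-checkpoint revision

    tok = AutoTokenizer.from_pretrained(model_id, revision=revision)
    # trust_remote_code may be needed if the architecture is newer than the installed transformers
    model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)

    # Score a sample drawn from the released pretraining data; tracking this loss across
    # the 81 checkpoints is one way to sanity-check the published training curve.
    inputs = tok("some text taken from the released pretraining data", return_tensors="pt")
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(float(loss))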
xdennis 9 hours ago [-]
I don't get it. How is it open if you can't even access it without signing a contract?
>> "The "o" stands for "open", "openness", "open source" and is placed where a "TM" symbol (indicating patents, trademarks, protection) would normally reside. Instead openness is the apertus° trademark."
It's also a completely different kind of thing, so trademark probably wouldn't come into it even if they had one.
cmdrk 3 days ago [-]
Does their training corpus respect copyrights, or do you have to follow their opt-out procedure to keep them from consuming your data? Assuming it's the latter, it's open-er but still not quite there.
> Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting robots.txt exclusions and filtering for copyrighted, non-permissive, toxic, and personally identifiable content.
traspler 17 hours ago [-]
Afaik they respect robots.txt on crawl, and later when using the data they re-check the robots.txt and will exclude the data if the new robots.txt was updated to deny access. They have further data filtering, but for that you'd better check the technical report.