This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
tclancy 3 hours ago [-]
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
xingped 2 hours ago [-]
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
michaelchisari 60 minutes ago [-]
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
akoboldfrying 1 hours ago [-]
> we're entering a more hardened era of software
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of funding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
deepsun 2 minutes ago [-]
If I hand-roll my logging library, I'm unlikely to include an automatic LDAP request based on message text (the infamous Log4j vulnerability).
cratermoon 30 minutes ago [-]
Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general-purpose. As a consequence of doing more, the library has more code and more bugs.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
akoboldfrying 7 minutes ago [-]
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
FrinkleFrankle 2 hours ago [-]
New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
anankaie 2 hours ago [-]
To be fair, to some extent that’s up to us. Time to get cleaning, I guess.
allthetime 1 hours ago [-]
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been, except everything is moving much too fast.
jpollock 1 hours ago [-]
Faults are injected into the code at a constant rate per developer. Then there's the intentional injections.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
marcus_holmes 2 hours ago [-]
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
Barbing 2 hours ago [-]
Will need those animal bones if all the industrial control systems get turned against us
Nuclear might be airgapped but what about water, power…?
j45 56 minutes ago [-]
Folks might have to start considering server-side technologies a bit more, or at least being mindful of build processes.
chasil 47 minutes ago [-]
I am so happy to go through another round of kernel RPMs after the freak out today!
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
josephg 1 hours ago [-]
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
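A rough feel for this can be had even today with POSIX `openat()`-style calls, where a directory file descriptor plays the role of the capability a caller hands to library code. A minimal Python sketch (the `open_within` helper and its escape check are illustrative only; a real capability system would enforce this in the kernel, including blocking `..` traversal):

```python
import os

def open_within(dir_fd: int, relpath: str, flags: int = os.O_RDONLY) -> int:
    """Open a path relative to a directory fd that acts as a crude capability.

    Code that is only handed `dir_fd` can reach files under that directory.
    Real capability systems (FreeBSD Capsicum, Linux RESOLVE_BENEATH, seL4)
    enforce this in the kernel; this sketch just rejects the obvious escapes.
    """
    if os.path.isabs(relpath) or ".." in relpath.split(os.sep):
        raise PermissionError(f"path escapes capability: {relpath!r}")
    # dir_fd makes the open relative to the capability, not the CWD or /
    return os.open(relpath, flags, dir_fd=dir_fd)
```

A sandboxed library would then receive `dir_fd` instead of ambient authority over the whole filesystem.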
SeL4 has fast, efficient OS level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop, but works like SeL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
theamk 1 hours ago [-]
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in kernel, and the userspace is only calling the normal, allowed calls. Removing global open()/listen()/etc.. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent of what userspace does, you can have a POSIX layer with seL4, and (2) that would be way more context switches, so a performance drop.)
cperciva 4 hours ago [-]
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
landr0id 3 hours ago [-]
FreeBSD didn’t have userland ASLR until 2019 and, amongst other mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security, take Shawn Webb’s HardenedBSD.
kelnos 3 hours ago [-]
Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat. It's a speed bump, not a brick wall.
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
landr0id 3 hours ago [-]
>Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes to existing code other than compiling with -pie.
abrookewood 2 hours ago [-]
Is there anywhere that provides a good overview of the various OS protection technologies/approaches that exist and which OSes have implemented them?
user3939382 3 hours ago [-]
So you have one example in hand and trash talked FreeBSD’s entire security team. Bold claims are fine but this is lazy.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
landr0id 3 hours ago [-]
Ask yourself why Mythos was so easily able to develop a remote STACK buffer overflow exploit.
nozzlegear 1 hours ago [-]
Define "so easily"?
landr0id 24 minutes ago [-]
They exploited a linear stack buffer overflow. Not a write-what-where or arb write. A linear stack buffer overflow in 2026! There are at least two distinct failures there:
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
tclancy 2 hours ago [-]
There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
dag100 1 hours ago [-]
Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.
GalaxyNova 47 minutes ago [-]
FreeBSD is not a distro
LoganDark 42 minutes ago [-]
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD.
krupan 3 hours ago [-]
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
loloquwowndueo 2 hours ago [-]
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually according to Wikipedia FreeBSD engineering lead), would be weird for him to suggest openbsd.
andai 3 hours ago [-]
I haven't switched to BSD but I've been thinking about it for a while. I just saw Vultr has both FreeBSD and OpenBSD!
eahm 4 hours ago [-]
Also funny they never show Debian in those tests/videos.
cperciva 3 hours ago [-]
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
JoshTriplett 3 hours ago [-]
> Debian can't start digesting them until they're already public
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
pavon 7 minutes ago [-]
The fact that the kernel security team has decided coordinating disclosure is someone else's problem so it happens inconsistently.
juujian 4 hours ago [-]
How so?
0xbadcafebee 3 hours ago [-]
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
tom_alexander 3 hours ago [-]
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
xena 3 hours ago [-]
Yep, that was my intent.
Barbing 2 hours ago [-]
Oh! Not GP but skimmed too quickly
gpm 3 hours ago [-]
I think you misunderstood the article. The proposal isn't to wait a week after software has been published before installing it. It's: for the next seven days starting now, just don't, because you probably don't have patches for these vulnerabilities, and even if you do there are probably more scary vulnerabilities about to be discovered.
Nathanba 1 hours ago [-]
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
whazor 2 hours ago [-]
A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delay, you could be hit by exploits nobody has seen yet.
fny 3 hours ago [-]
This is why cooldowns have space for patches.
AgentME 4 hours ago [-]
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
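The registry already exposes the data this needs: `npm view <pkg> time --json` returns a JSON map from version to publish timestamp. A hedged sketch of the selection logic, assuming input of that shape (the naive version sort is illustrative; a real tool would use a proper semver library):

```python
from datetime import datetime, timedelta, timezone

def latest_cooled_version(time_map: dict, min_age_days: int = 7, now=None):
    """Pick the newest version published at least `min_age_days` ago.

    `time_map` has the shape of `npm view <pkg> time --json`, e.g.
    {"created": ..., "modified": ..., "1.2.0": "2025-12-01T00:00:00.000Z"}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    cooled = []
    for version, stamp in time_map.items():
        if version in ("created", "modified"):  # metadata, not versions
            continue
        published = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
        if published <= cutoff:
            cooled.append(version)
    # naive numeric sort on dotted components; good enough for a sketch
    return max(cooled,
               key=lambda v: [int(x) for x in v.split(".") if x.isdigit()],
               default=None)
```

The same idea is what the cooldowns.dev instructions configure in the package managers themselves.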
edoceo 4 hours ago [-]
More a case for something like this from Show HN three months ago
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Johnny555 3 hours ago [-]
Does that really scale well? Thanks to cascading dependencies, even a medium-sized project can import hundreds of dependencies. Can a developer really review them all to figure out if they are safe, and that there isn't a security fix in a newer version of the package?
jpollock 1 hours ago [-]
Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
edoceo 39 minutes ago [-]
I love the sibling response from @jp...
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
skydhash 3 hours ago [-]
IMO, the most sustainable model is the Linux distro/BSD ports/Homebrew one. You don't push new libraries to the public registry; instead you write a packaging script that gets reviewed for every new change.
Another model is Perl's CPAN where you publish source files only.
b112 4 hours ago [-]
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
AgentME 2 hours ago [-]
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
ayuhito 3 hours ago [-]
At least with our Renovate config, all dependencies have a 7 day cooldown, but marked security updates are immediate.
Attackers can’t push a security update without going through the reporting process (e.g. a GitHub CVE), so they can’t easily abuse that.
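A sketch of that policy as a Renovate config. `minimumReleaseAge` (the successor to `stabilityDays`) and the `vulnerabilityAlerts` override are the option names as I recall them from the Renovate docs, so verify them against the version you run:

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchManagers": ["npm"],
      "minimumReleaseAge": "7 days"
    }
  ],
  "vulnerabilityAlerts": {
    "minimumReleaseAge": "0 days"
  }
}
```

The `vulnerabilityAlerts` block is what lets flagged security fixes bypass the seven-day cooldown.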
ketozhang 2 hours ago [-]
You could still have security bumps happening (like dependabot).
anymouse123456 4 hours ago [-]
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
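One concrete way to do this is pinning the base image by immutable digest instead of a mutable tag. A hedged Dockerfile sketch (the digest is a placeholder, and the image/package names are examples, not a recommendation):

```dockerfile
# Placeholder digest: resolve the real one with
#   docker inspect --format '{{index .RepoDigests 0}}' node:22-bookworm
FROM node:22-bookworm@sha256:PLACEHOLDER

WORKDIR /app
# Lockfile-driven, script-free install: dependencies only change when
# the lockfile changes and the image is deliberately rebuilt.
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
```

`--ignore-scripts` also keeps postinstall hooks, a favorite supply-chain vector, from running during the build.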
anymouse123456 4 hours ago [-]
You'll also find your CI build times and flakey failures can be cut down massively by doing this.
Animats 59 minutes ago [-]
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
andai 3 hours ago [-]
Can someone help me understand the copyfail thing and how it relates to NPM packages?
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
ahpeeyem 2 hours ago [-]
NPM supply-chain attacks spread really quickly.
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
xena 3 hours ago [-]
npm can run on linux.
golem14 1 hours ago [-]
This gets me to ask whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can’t find out what’s causing it - not memory pressure, not CPU load.
I am worried that the sluggishness appeared about the same time on both devices
Gigachad 24 minutes ago [-]
For iOS, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable lockdown mode which preemptively disables the entrypoints for exploits. So far I don't believe there has been any exploit which works with lockdown mode enabled.
fkarg 4 hours ago [-]
the lottery of either getting a new supply-chain attack or the fixes from Mythos with every single update
infrapilot 2 hours ago [-]
What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
Often convenience and security are at odds, but `export HOMEBREW_NO_AUTO_UPDATE=1` is more convenient and more secure.
cbarnes99 4 hours ago [-]
It really pisses me off that responsible disclosure timelines are being ignored.
creatonez 3 hours ago [-]
In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.
bellowsgulch 4 hours ago [-]
if you don't already consider responsible disclosure a quaint idea, you may want to warm up to the notion
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
Root_Denied 3 hours ago [-]
Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.
hluska 3 hours ago [-]
[flagged]
ahpeeyem 2 hours ago [-]
but there is punctuation: there's one comma and two apostrophes! everything we need to comprehend, nothing more
correctly using those tells me it was a stylistic choice not to use capital letters and omit the periods.
fwiw the HN guidelines say more about not posting "shallow dismissals", not complaining about "tangential annoyances" and being "kind, not snarky" than about grammar and punctuation: https://news.ycombinator.com/newsguidelines.html
irishcoffee 2 hours ago [-]
Yeah, it isn’t an LLM. Missed 2 capitalizations and 2 periods, there is however a comma.
Btw, s/onto/on to
Onto can be synonymously replaced with “on top of” which doesn’t work in that sentence.
It’s much more interesting to pay attention to the spirit of the comment than the structure, which I believe is also in the site guidelines. I’m also confident I have multiple grammatical errors in this comment.
roxolotl 4 hours ago [-]
The dirty frag repo says:
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It’s written as if something happened that prevented them from following the schedule, yet seemingly they chose to release the information. I hope I’m missing something where it was forcibly disclosed elsewhere.
Edit:
Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers.
rafram 4 hours ago [-]
> Due to external factors, the embargo has been broken, so no patch exists for any distribution.
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
If the fix commit is public, so is the issue being fixed.
xbar 41 minutes ago [-]
It seems like this round of vulns is going to be significant. What is the right response?
femiagbabiaka 5 hours ago [-]
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
Gigachad 14 minutes ago [-]
The proof of concept code is out before patches are available for any distro.
infrapilot 2 hours ago [-]
The scary part is how many teams still have builds implicitly depending on “whatever was latest 5 minutes ago”.
Containerization improved reproducibility in some ways, but in practice a lot of CI pipelines still behave like live dependency roulette.
q3k 3 hours ago [-]
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
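A quick defensive counterpart: `type sudo` (or `command -v sudo`) in a shell will reveal an alias or function shadowing the real binary, and the same check can be scripted. A minimal sketch (the file list and regex are illustrative, not exhaustive):

```python
import re
from pathlib import Path

# Shell startup files where a fake `sudo` alias or function could hide.
RC_FILES = [".bashrc", ".bash_profile", ".profile", ".zshrc"]
# Matches lines like `alias sudo=...` or a `sudo()` function definition.
SUSPICIOUS = re.compile(r"^\s*(alias\s+sudo\s*=|sudo\s*\(\s*\))")

def find_sudo_overrides(home: Path) -> list:
    """Return (file, line_no, line) triples that redefine `sudo`."""
    hits = []
    for name in RC_FILES:
        rc = home / name
        if not rc.is_file():
            continue
        for i, line in enumerate(rc.read_text(errors="replace").splitlines(), 1):
            if SUSPICIOUS.search(line):
                hits.append((name, i, line.strip()))
    return hits
```

This obviously only catches the lazy version of the attack; anything that edits PATH or drops a wrapper binary needs more than a regex.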
himata4113 3 hours ago [-]
this, this is something I don't understand: there are a billion ways to gain root once you control the user that regularly uses sudo.
this is only scary for rootless containers, as it skips an isolation layer. but we've started shipping distroless containers, which are not vulnerable to this because they lack privilege-escalation commands such as su or sudo.
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
BobbyTables2 1 hours ago [-]
I doubt your “distroless” container is any safer for this vulnerability.
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
LeCompteSftware 2 hours ago [-]
I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
himata4113 2 hours ago [-]
I think we've concluded already that user isolation is not safe and shouldn't be trusted; that's why we've invested so hard into namespacing (containers). users should only have what they need if you really care about security and don't want to tolerate the overhead of virtualization-based security.
TacticalCoder 3 hours ago [-]
> this, this is something I don't understand: there are a billion ways to gain root once you control the user that regularly uses sudo.
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command and there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" isn't valid either in my case.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, the best of all, it's a fully usable desktop: I'm using such a setup since years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
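Auditing for this kind of setup is straightforward: the shell one-liner is `find / -xdev -perm -4000 -type f`, and the same walk is easy to script. A minimal sketch:

```python
import os
import stat

def find_setuid(root: str) -> list:
    """Walk `root` and list regular files with the setuid or setgid bit set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: don't follow symlinks
            except OSError:
                continue
            if stat.S_ISREG(st.st_mode) and st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                hits.append(path)
    return hits
```

On a typical distro this returns sudo, su, passwd and friends; on a setup like the one described above it should return nothing.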
theamk 52 minutes ago [-]
Do you install system-wide software at all? How do you configure it?
That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
himata4113 2 hours ago [-]
nixos comes to mind, rootless podman, qubesos.
but they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. the only thing you gain is being able to trust that the system itself is not compromised, which is only relevant for infrastructure, since if your user is compromised you're already fucked. multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of the said service.
the breakdown looks something like this:
- you heavily compromise a single user <- exploit not relevant
- you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
- you somewhat compromise a lot of users via infra compromise <- where this hurts
FrinkleFrankle 2 hours ago [-]
Would you mind sharing the relevant config?
q3k 3 hours ago [-]
Yes, you are very special and smart. Good for you!
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
Terr_ 26 minutes ago [-]
Perhaps, but it makes a huge difference if you're running the vulnerable code in a container or as a different user.
cozzyd 50 minutes ago [-]
right, a bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). Now, we generally trust the users to not be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).
Something that concerns me more: I use things like gemini-cli or claude-cli via their own non-sudo accounts with no ssh keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
KevinMS 3 hours ago [-]
I got rid of half of my VSCode extensions a couple days ago, its too risky.
BobbyTables2 1 hours ago [-]
Those things scare the crap out of me…
Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
jauntywundrkind 4 hours ago [-]
I do wonder a bit what happens as standard practice becomes to lag more and more and more. Who is left that's looking, that's finding out?
ayuhito 3 hours ago [-]
I think there’s already a big market of supply chain security companies that are proactively scanning dependencies for this sort of thing.
They’re always racing to be the first one to write an article about a case.
cybercatgurrl 3 hours ago [-]
you raise a really good point. if everyone is doing this at exactly the same lag then it will eventually start hitting groups in sync at the exact same time
jbrooks84 3 hours ago [-]
100% doing this, sadly
cookiengineer 5 hours ago [-]
Fun fact: You still can't build the vllm container with updated dependencies since llmlite got pwned. Either due to regression bugs, or due to unresolvable transitive dependencies in the dependency tree. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor versions because nobody adhered to any API versioning standard. Now it's every commit that can break things. That is not an improvement.
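The hazard compounds with npm's default caret ranges: `^1.2.3` happily pulls in any later 1.x release, so a breaking subminor lands in your build automatically. A simplified sketch of that matching rule (real npm semver has more cases; this illustrates the core behavior):

```python
def parse(v):
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split("."))

def satisfies_caret(version, base):
    """Simplified npm caret semantics: ^base allows versions >= base
    that keep every component up to the leftmost non-zero one unchanged."""
    v, b = parse(version), parse(base)
    if v < b:
        return False
    for i, part in enumerate(b):
        if part != 0:
            # components before the leftmost non-zero must match exactly
            return v[:i] == b[:i] and v[i] == b[i]
    return v == b  # ^0.0.0 matches only itself
```

So `^1.2.3` accepts `1.3.0` (a potentially breaking release under the loose versioning described above) but rejects `2.0.0`, while `^0.2.3` accepts `0.2.4` but rejects `0.3.0`. If upstream doesn't honor semver, the range gives you no protection at all.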
2ndorderthought 4 hours ago [-]
Write-only code is such a bad, bad idea. No one is reviewing 20k-LOC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening, I don't care how many years you've been a software engineer. Yet that's the new thing and how we're all supposed to work, or else we're Luddites.
perching_aix 3 hours ago [-]
I'm personally waiting to be downgraded to simply being called "lazy".
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
cybercatgurrl 2 hours ago [-]
slopcode is a pejorative that means nothing to me. if you have an actual criticism to make, then do it
throwaway613746 4 hours ago [-]
[dead]
cyanydeez 5 hours ago [-]
but we were just asking last month where all that great productivity from the AI wave was coming from, and now everyone's got some AI bit and bob that was vibe-coded on the assumption that the cloud providers have an endless stream of capacity for the endless slop trough we're all dying to dine at.
_--__--__ 5 hours ago [-]
? This is related to a vulnerability that was introduced to the Linux kernel in 2017.
ChrisClark 4 hours ago [-]
What?
mistyvales 4 hours ago [-]
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound completely dead with no Pipewire service available. ALSA not responding. Firefox dies immediately if I open a new tab or right-click anywhere on the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process... I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
circularfoyers 3 hours ago [-]
If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched and fwiw as well, I haven't noticed any of those bugs on 44 across several very different machines.
dralley 4 hours ago [-]
I have had none of those issues on Fedora 44, FWIW.
senectus1 3 hours ago [-]
ditto. my upgrade from 43 - 44 went very smooth
cevn 4 hours ago [-]
I had a day-1 crashloop with KWin on the 2nd desktop, but on day 2 some package update fixed it. Honestly it isn't the first time Fedora upgrades have messed something up for me either, but I do think it's more stable than the average Ubuntu release, not that I've upgraded Ubuntu in like 5 years.
foo12bar 2 hours ago [-]
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
randyrand 1 hours ago [-]
Next: the back doors are written by the LLM!
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OSS library; there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did. But the vulnerabilities will have much narrower scope: if you successfully exploit an OSS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economics of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
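For the record, the edge cases in a left-pad are easy to enumerate: non-string input, widths at or below the current length, and multi-character fill strings. A sketch that pins them down (a hypothetical illustration, not the original leftpad code):

```python
def left_pad(value, width, fill=" "):
    """Pad str(value) on the left with `fill` until it is at least `width` chars.

    Edge cases handled explicitly: non-string input is stringified,
    widths <= len(value) return the value unchanged, and multi-character
    fills are truncated so the result is exactly `width` long.
    """
    s = str(value)
    if width <= len(s) or not fill:
        return s
    pad = (fill * width)[: width - len(s)]
    return pad + s
```

Even a function this small forces several behavioral decisions, which is exactly why "trivial" dependencies still accumulate bugs.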
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge doses properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
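The check-then-use bug class mentioned here is worth spelling out: `os.path.exists(p)` followed by `open(p)` leaves a window in which the path can be swapped for a symlink. The race-free pattern is to skip the check, open in a single syscall with flags that refuse symlinks, and handle the error instead (POSIX-specific sketch):

```python
import errno
import os

def open_no_follow(path):
    """Open `path` without following a symlink at the final component.

    Instead of exists()-then-open() (racy: the path can change between
    the two calls), we open in one syscall with O_NOFOLLOW and treat
    'it was a symlink' as an explicit error.
    """
    try:
        fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    except OSError as e:
        if e.errno == errno.ELOOP:
            raise RuntimeError(f"{path} is a symlink; refusing to follow")
        raise
    return os.fdopen(fd, "rb")
```

The same principle (make the security decision and the use one atomic operation) is what the `openat`-family syscalls exist for, and it is language-independent: rewriting in Rust removes memory corruption but does nothing about this class.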
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
Nuclear might be airgapped but what about water, power…?
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
SeL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop, but works like SeL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
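As a rough illustration of the library-level capability idea (a toy Python sketch of mine, nowhere near the kernel-enforced version the comment calls for): hand the untrusted code a directory handle instead of ambient filesystem authority, so its blast radius is one subtree. Python's `dir_fd` support makes the open an `openat(2)` under the hood:

```python
import os

class DirCapability:
    """A toy capability: grants read access only to paths under one directory.

    Sketch only -- a real capability system would enforce this in the
    kernel (openat with no ambient authority), not in Python userspace.
    """

    def __init__(self, directory):
        self._fd = os.open(directory, os.O_RDONLY | os.O_DIRECTORY)

    def read_text(self, relpath):
        if os.path.isabs(relpath) or ".." in relpath.split(os.sep):
            raise PermissionError(f"capability does not cover {relpath!r}")
        # dir_fd turns this into openat(2): resolved relative to our directory
        fd = os.open(relpath, os.O_RDONLY, dir_fd=self._fd)
        with os.fdopen(fd) as f:
            return f.read()

# An untrusted "library" receives a capability, not ambient authority:
def untrusted_lib(cap):
    return cap.read_text("config.txt")  # can only touch the granted subtree
```

Calling `untrusted_lib(DirCapability("/tmp/sandbox"))` answers the "what is the blast radius of this code?" question by construction: the library can read that one subtree and nothing else.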
Those exploits are in the kernel, and userspace is only making the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to trigger the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent of what userspace does, you can have a POSIX layer with seL4, and (2) that would mean many more context switches, so a performance drop.)
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
With FreeBSD there's never any question of "who should this get reported to".
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
Another model is Perl's CPAN where you publish source files only.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
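One cheap way to enforce that policy in CI (a hypothetical check of mine, not the commenter's setup) is to fail the build whenever a Dockerfile base image is referenced by mutable tag rather than immutable digest:

```python
import re

# A FROM line is considered pinned only when it references an immutable
# digest (image@sha256:...), not a mutable tag like :latest or :3.19.
# Simplification: ignores --platform flags and build-arg indirection.
FROM_RE = re.compile(r"^\s*FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unpinned_bases(dockerfile_text):
    """Return base-image references that are not pinned by sha256 digest."""
    bad = []
    for ref in FROM_RE.findall(dockerfile_text):
        if ref.lower() == "scratch":
            continue  # scratch is not a registry image
        if "@sha256:" not in ref:
            bad.append(ref)
    return bad
```

Digest pinning means updates only happen when you bump the digest deliberately, which is exactly the "explicitly decide it's time" workflow described above.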
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
I am worried that the sluggishness appeared about the same time on both devices
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
correctly using those tells me it was a stylistic choice not to use capital letters and omit the periods.
fwiw the HN guidelines say more about not posting "shallow dismissals", not complaining about "tangential annoyances" and being "kind, not snarky" than about grammar and punctuation: https://news.ycombinator.com/newsguidelines.html
Btw, s/onto/on to
Onto can be synonymously replaced with “on top of” which doesn’t work in that sentence.
It’s much more interesting to pay attention to the spirit of the comment than the structure, which I believe is also in the site guidelines. I’m also confident I have multiple grammatical errors in this comment.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It's written as though something happened that prevented them from following the schedule, yet seemingly they chose to release the information anyway. I hope I'm missing something where it was forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
* https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
* https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
Containerization improved reproducibility in some ways, but in practice a lot of CI pipelines still behave like live dependency roulette.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
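A crude defensive counterpart: scan shell rc files for anything that shadows `sudo`. This sketch (my own, and deliberately minimal) only catches the obvious alias/function case, not PATH tricks or sourced files:

```python
import re

# Matches "alias sudo=..." or a shell function redefinition of sudo,
# with or without the "function" keyword.
SHADOW_RE = re.compile(
    r"^\s*(?:alias\s+sudo=|(?:function\s+)?sudo\s*\(\))",
    re.MULTILINE,
)

def shadowed_sudo(rc_text):
    """Return True if shell rc text redefines `sudo` via alias or function."""
    return bool(SHADOW_RE.search(rc_text))
```

Run it over `~/.bashrc`, `~/.zshrc`, and friends. It won't stop a determined attacker who already has a shell as you, which is rather the point of the parent comment.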
this is only scary for rootless containers, as it skips an isolation layer. but we've started shipping distroless containers, which are not vulnerable to this because they lack privilege escalation commands such as su or sudo.
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].