Though this outage may be more related to the copy.fail upgrade cycle, it reminds me of a thought I've had recently with respect to agents.
In the UK they have this issue called "TV pickup" (https://en.wikipedia.org/wiki/TV_pickup). TV pickup is where everyone in the UK watching a popular TV show gets up to boil a high-powered tea kettle at the same time on an ad break. This causes a temporary surge in electricity demand and leads to real outages. It was a mystery at first but now is accounted for.
I suspect the global internet is facing an "agent pickup" problem, where significant changes (e.g., releases of new frontier models or new package versions) put unpredictable pressure on arbitrary infrastructure as millions of distributed agents act to address the change simultaneously.
We're at the stage where we blame AI for anything as a first reaction?
(Love the tv pickup story. I also thought of that, in other situations)
piker 14 minutes ago [-]
I wasn't blaming this issue on that in particular, just making a more general observation in line with the post. I'll make that clearer.
Hnrobert42 37 minutes ago [-]
Indeed. It is far more likely to be the copy.fail issue.
TonyTrapp 2 hours ago [-]
While the timing with the copy.fail patches mentioned by a few comments here seems suspicious indeed, I have seen this repeating over the last few weeks: packages.ubuntu.com was hardly reachable on some days, causing apt-get to take forever to update the system. They have been struggling hard recently, it seems.
Best of luck to the people having to deal with this mess on a holiday!
Faaak 3 hours ago [-]
Tinfoil hat mode: a competitor wants to exploit copy.fail on some ubuntu servers, and is DDoSing canonical so that they can't update and thus patch the vuln
yallpendantools 2 hours ago [-]
Double tinfoil hat mode: an attacker learned of my plan to finally update my personal computer out of 20.04 today and is DDoSing canonical so I can't do that and I remain vulnerable to the backdoors they've found.
The plot thickens...
bjackman 1 hour ago [-]
If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly; it is not a special bug. If anyone gets compromised by that vuln, their node architecture was broken anyway, and patching copy.fail doesn't help.
loufe 56 minutes ago [-]
In what way is it "not a special bug"? It's a publicly known exploit giving root access from an RCE; those can't be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected and whose patching could be delayed. I could find places running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
mustardo 1 hour ago [-]
I thought copy.fail was a privilege escalation exploit: become root from a regular user? Am I missing something?
How would "node architecture" make people vulnerable to this?
You have to have shell access to a victim first right? Or am I missing something?
kubb 3 hours ago [-]
s/competitor/intelligence services/
bouncycastle 3 hours ago [-]
Seems reasonable to assume it has something to do with the recently publicized exploits. More likely, this could be an extortion attempt by criminals rather than a competitor.
touwer 1 hours ago [-]
Why a competitor? Criminals, secret services, country adversaries...
corvad 2 hours ago [-]
This seems pretty targeted, and with services like Livepatch affected, this could indeed be an actor DDoSing to stop patches for copy.fail from rolling out.
jollymonATX 2 hours ago [-]
We are so broken as a society that DDoSing Ubuntu is now a thing.
Has Ubuntu published patches yet?
This might be the incentive I need to finally purge snap.
I used to have to find a script to purge excess old snaps that would fill up my hard drive. Now Ubuntu only keeps two versions of each snap.
I was wondering why the script didn't have to ever clean more than one version, even when I took longer between running updates.
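The cleanup script described above typically looked something like this. A minimal sketch, assuming the default `snap list --all` column layout, where superseded revisions on disk carry a "disabled" note in the last column:

```shell
#!/bin/sh
# Remove old, disabled snap revisions that accumulate on disk.
# "snap list --all" shows every retained revision; superseded ones
# are marked "disabled". Column 1 is the snap name, column 3 the
# revision number (assumes the default snapd output layout).
snap list --all | awk '/disabled/ {print $1, $3}' |
while read -r name rev; do
  sudo snap remove "$name" --revision="$rev"
done

# On current snapd you can instead cap how many revisions are kept
# going forward (2 is the minimum allowed):
sudo snap set system refresh.retain=2
```

The `refresh.retain` setting is why newer Ubuntu installs no longer pile up old revisions: snapd garbage-collects down to the retained count on each refresh, so the manual loop above is rarely needed anymore.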