This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
nine_k 2 hours ago [-]
> *enough people believe it will happen and act accordingly*
Here comes my favorite notion of "epistemic takeover".
A crude form: make everybody believe that you have already won.
A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as a winner, and so they must act accordingly.
bee_rider 2 hours ago [-]
This world where everybody’s very concerned with that “refined form” is annoying and exhausting. It causes discussions to become about speculative guesses about everybody else’s beliefs, not actual facts. In the end it breeds cynicism as “well yes, the belief is wrong, but everybody is stupid and believes it anyway,” becomes a stop-gap argument.
I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.
ElevenLathe 2 hours ago [-]
IMO this is a symptom of the falling rate of profit, especially in the developed world. If truly productivity-enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable opportunities for investment), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps. This in turn means you must make ever-escalating claims of future profitability. Now, here we are in a world where multiple brand-name entrepreneurs are essentially saying that they are building the last investable technology ever, and getting people to believe it because the alternative is to earn less than inflation on Procter and Gamble stock and never get to retire.
If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.
measurablefunc 2 hours ago [-]
What percentage of work would you say deals w/ actual problems these days?
nosuchthing 32 minutes ago [-]
In a post-industrial economy there are no more economic problems, only liabilities. Surplus is felt as threat, especially when it's surplus human labor.
In today's economy disease and prison camps are increasingly profitable.
How do you think the investor portfolios that hold stocks in deathcare and privatized prison labor camps can further Accelerate their returns?
kelseyfrog 2 hours ago [-]
Or just play into the fact that it's a Keynesian Beauty Contest [1]. Find the leverage in it and exploit it.
We really need a rule in politics which bans you (if you're an elected representative) from stating anything about the beliefs of the electorate without reference to a poll of the population of adequate size and quality.
Yes, we'd have a lot of lawsuits about it, but it would hardly be a bad use of time to litigate whether a politician's statements about the electorate's beliefs are accurate.
CobrastanJorji 5 minutes ago [-]
You ever get into logic puzzles? The sort where the asker has to specify that everybody in the puzzle will act in a "perfectly logical" way. This feels like that sort of logic.
Terr_ 2 hours ago [-]
Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.
demosito666 2 hours ago [-]
V 1.02: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because they believe that the others believe that you have enough power to crush the dissent. The moment this belief fades, you fall.
dclowd9901 1 hours ago [-]
Is that not the "Emperor's New Clothes" form? That would be like version 0.0.1
infinitewars 2 hours ago [-]
it's a sad state these days that we can't be sure which country you're alluding to
dagss 20 minutes ago [-]
Isn't talking about "here's how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, synapses"?
Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what it IS, it can't DO anything"...
Saying an LLM is a statistical prediction engine of the next token is IMO sort of confusing what it is with the medium it is expressed in/built of.
For instance, those small experiments that train on addition. The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behaviour. The machine-learning weights are just the medium it is expressed in.
What's interesting about LLMs is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but when training weights for that it might well have a side-effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as well as a median programmer). As a sibling post says, we don't really know this yet -- but we can see adder machines being constructed out of the weights if we train small networks on addition.
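To make that concrete, here's a toy sketch of the kind of experiment I mean (my own made-up version, not any particular paper's setup, and a tiny PyTorch MLP rather than a transformer): train a small network on addition pairs, then probe it on sums it never saw.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Training data: pairs (a, b) with a, b in [0, 100), target a + b.
    N = 10_000
    a = torch.randint(0, 100, (N, 1)).float()
    b = torch.randint(0, 100, (N, 1)).float()
    x = torch.cat([a, b], dim=1) / 100.0   # crude normalization
    y = (a + b) / 200.0

    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for step in range(2000):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    # Probe on operands outside the training range. If the weights have
    # wired up something like an adder rather than a lookup table, this
    # should come out near 579.
    test = torch.tensor([[123.0, 456.0]]) / 100.0
    print(model(test).item() * 200.0)

The network was never told "implement addition"; it was only trained to hit the right outputs, and whatever adder-like behaviour generalizes is a property of the weights it ended up with.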
pryce 20 minutes ago [-]
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
We've already been here in the 1980s.
The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around that, and eject the set of people who aim to turn the tech industry into a "you don't even need a product" cult of warmed-over acolytes of Tony Robbins.
jacquesm 3 hours ago [-]
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”
And there are plenty of people that take issue with that too.
Unfortunately they're not the ones paying the price. And... stock options.
stego-tech 3 hours ago [-]
History paints a pretty clear picture of the tradeoff:
* Profits now and violence later
OR
* Little bit of taxes now and accelerate easier
Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.
jpadkins 2 hours ago [-]
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
nine_k 2 hours ago [-]
If you replace "taxes" with more general "investment", it's everywhere. A good example is Amazon that has reworked itself from an online bookstore into a global supplier of everything by ruthlessly reinvesting the profits.
Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.
If you're looking for periods of high taxes and growing prosperity, 1950s in the US is a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.
PaulHoule 2 hours ago [-]
With the odd story that we paid the price for it in the long term.
There's a book that tells the compelling story that the Mellon family teamed up with the steelworkers' union to use protectionism to protect the American steel industry's investments in obsolete open-hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had their obsolete furnaces blown up. The rest of US industry, such as our car industry, was dragged down by this because it was using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.
Funny the Mellon family went on to further political mischief
Ha, we gutted our manufacturing base, so if we bring it back it will now be state of the art! Not sure if that will work out for us, but hey, there is some precedent.
jacquesm 1 hours ago [-]
This is the silver lining in many bad stories: the pendulum will always keep on swinging because at the extremes the advantage flips.
taurath 57 minutes ago [-]
> the state is usually a much more sloppy investor
I don’t find this to be true
The state invests in important things that have 2nd and 3rd order positive benefit but aren’t immediately profitable. Money in a food bank is a “lost” investment.
Alternatively the state plays power games and gets a little too attached to its military toys.
nine_k 34 minutes ago [-]
State agencies are often good at choosing right long-term targets. State agencies are often bad at the actual procurement, because of the pork-barrelling and red tape. E.g. both private companies and NASA agree that spaceflight is a worthy target, but NASA ends up with the Space Shuttle (a nice design ruined by various committees) and SLS, while private companies come up with Falcon-9.
stoneforger 16 minutes ago [-]
Sounds like a false dichotomy. NASA had all these different subcontractors to feed, in all these different states, and they explicitly gutted MOL and Dyna-Soar and all the Air Force projects that needed weird orbits and reentry trajectories, so the Space Shuttle became a huge compromise. Perverse incentives and all that. It's not state organizations per se but rather non-profits that need to have a clear goal that creates capabilities, tools and utilities that act as multipliers for everyone. A pretty big cooperative. Like, I dunno, what societies are supposed to exist for.
exceptione 1 hours ago [-]
> Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.
Be careful. The data does not confirm that narrative. You mentioned the 1950s, which is a poignant example of reality conflicting with sponsored narrative. Pre WOII, the wealthy class orbiting the monopolists, and by extension their installed politicians, had no other ideas than to keep lowering taxes for the rich on and on, even if it only deepened the endless economic crisis. Many of them had fallen into the trap of believing their own narratives, something we know as the Cult of Wealth.
Meanwhile, average Americans lived on food stamps. Politically deadlocked in quasi-religious ideas of "bad governments versus wise businessmen", America kept falling deeper. Meanwhile, with just 175,000 serving on active duty, the U.S. Army was the 18th biggest in the world[1], poorly equipped, poorly trained. Right-wing isolationism had brought the country into a precarious position. Then two things happened: Roosevelt and WOII.
In a unique moment, the state took matters into its own hands. The sheer excellence in planning, efficiency, speed and execution of the state baffled the Republicans, putting the oligarchic model of the economy to shame. The economy grew tremendously as well, something the oligarchy could not pull off. It is not well known that WOII depended largely on state-operated industries, because the former class quickly understood how much the state's performance threatened their narratives. So they invested in disinformation campaigns, claiming the efforts and achievements of the government as their own.
I assume you are talking about WW2 and at first thought it was a typo.
nine_k 19 minutes ago [-]
BTW the New Deal tried central planning and quickly rejected it. I'd say that the intense application of antitrust law in the late 1930s was a key factor that helped end the Great Depression. The war, and wartime government powers, were also key: the amount of federal government overreach and reform does not compare to what e.g. the second Trump administration has attempted. It was mostly done by people who got their positions in the administration more due to merit and care about the country than loyalty, and it showed.
The post-war era, under Truman and Eisenhower administrations, reaped the benefits of the US being the wealthiest and most intact winner of WWII. At that time, the highest income tax rate bracket was 91%, but the effective rate was below 50%.
oceanplexian 2 hours ago [-]
> It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.
The US is also shaping up to be the principal winner in Artificial Intelligence.
If, like everyone is postulating, this has the same transformative impact to Robotics as it does to software, we're probably looking at prosperity that will make the 1950s look like table stakes.
generic92034 26 minutes ago [-]
Are you sure that in today's reality the fruits of the AI race will be harvested by "the people"?
munk-a 2 hours ago [-]
Early on in the AI boom NVidia was highly valued as it was seen as the shovel-maker for research and development. It certainly was instrumental early on but now there are a few viable options for training hardware - and, to me at least, it's unclear whether training hardware is actually the critical infrastructure or if it will be something like power capacity (which the US is lagging behind significantly in), education, or even cooling efficiency.
I think it's extremely early to try and call who the principal winner will be especially with all the global shifts happening.
jacquesm 34 minutes ago [-]
> The US is also shaping up to be the principal winner in Artificial Intelligence.
There is no early mover advantage in AI in the same way that there was in all the other industries. That's the one thing that AI proponents in general seem not to have clued in to.
What will happen is that it eventually drags everything down because it takes the value out of the bulk of the service and knowledge economies. So you'll get places that are 'ahead' in the disruption. But the bottom will fall out of the revenue streams, which is one of the reasons these companies are all completely panicked and are wrecking the products that they had by stuffing AI into them in every way possible, hoping that one of them will take.
Model training is only an edge in a world where free models do not exist, once those are 'good enough' good luck with your AI and your rapidly outdated hardware.
The typical investor's horizon is short, but not that short.
AndrewKemendo 3 hours ago [-]
Every possible example of "progress" has either an individual or a state power purpose behind it.
There is only one possible "egalitarian" forward-looking investment that paid off for everybody.
I think the only exception to this is vaccines…and you saw how all that worked during Covid
Everything else from the semiconductor to the vacuum cleaner the automobile airplanes steam engines I don’t care what it is you pick something it was developed in order to give a small group and advantage over all the other groups it is always been this case it will always be this case because fundamentally at the root nature of humanity they do not care about the externalities- good or bad
jacquesm 3 hours ago [-]
COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop it from happening because a solid percentage can't wait to jump off the ledge, and they'll push you off too.
jpadkins 2 hours ago [-]
[flagged]
frocodillo 2 hours ago [-]
I find it interesting that this is the conclusion you draw from this. I won’t go into a discussion on the efficacy of the various mandates and policies in reducing spread of the disease. Rather, I think it’s worth pointing out that a significant portion of the proponents of these policies likely supported them not because of a desire to follow the authority but because they sincerely believed that a (for them) relatively small sacrifice in personal freedom could lead to improved outcomes for their fellow humans. For them, it was never about blindly following authority or virtue signalling. It was only ever about doing what they perceived as the right thing to do.
jpadkins 24 minutes ago [-]
So if the arguments are rooted in medical reasons, it's okay to be inhumane? Nazi propaganda argued that getting rid of Jews helped prevent the spread of diseases, because we all know that Jews are disease carriers. See how slippery the slope is here? Certainly you have seen the MAGA folks point out the measles outbreaks are coming from illegal immigrants, right?
I am quite sure that people felt justified in their reasoning for their behavior. That just shows how effective the propaganda was, how easy it is to get people to fall in line. If it was a matter of voluntary self-sacrifice of personal freedoms, I wouldn't have made this comment. People decided to demonize anyone who did not agree with the "medical authority", especially doctors or researchers who did not toe the party line. They ruined careers, made people feel awful, and online the behavior was worse because of how easy it was to pile on. Over stuff where it is still, to this day, not very clear cut what the optimal strategy is for dealing with infectious disease.
Nevermark 2 hours ago [-]
It is so easy to critique the response in hindsight. Or at the time.
But critiques like that ignore uncertainty, risk, and unavoidably getting it "wrong" (on any and all dimensions), no matter what anyone did.
With a new virus successfully circumnavigating the globe in a very short period of time, with billions of potential brand new hosts to infect and adapt within, and no way to know ahead of time how virulent and deadly it could quickly evolve to be, the only sane response is to treat it as extremely high risk.
There is no book for that. Nobody here or anywhere knows the "right" response to a rapidly spreading (and killing) virus, unresponsive to current remedies. Because it is impossible to know ahead of time.
If you actually have an answer for that, you need to write that book.
And take into account, that a lot of people involved in the last response, are very cognizant that we/they can learn from what worked, what didn't, etc. That is the valuable kind of 20-20 vision.
A lot of at-risk people made it to the vaccines before getting COVID. The ones I know are very happy about everything that reduced their risk. They are happy not to have died, despite those who wanted to let the disease "take its natural course".
And those that died, including people I know, might argue we could have done more, acted as a better team. But they don't get to.
No un-nuanced view of the situation has merit.
The most significant thing we learned: a lot of humanity is preparing to be a problem if the next pandemic proves ultimately deadlier. A lot of humanity doesn't understand risk, and doesn't care, if doing so requires cooperative efforts from individuals.
jacquesm 1 hours ago [-]
It's usually the same people that would have been the loudest to shout if it had not worked as well as it did...
jacquesm 2 hours ago [-]
You should study the prevention paradox.
PaulHoule 2 hours ago [-]
"Nazi", "Fascist", etc are words you can use to lose any debate instantly no matter what your politics are.
I think the sane version of this is that Gen Z didn't just lose its education, it lost its socialization. I know someone who works in administration at my Uni, tracking the general well-being of students, who said they were expecting it to bounce back after the pandemic and they've found it hasn't. My son reports that if you go to any kind of public event, be it a sewing club or a music festival, people 18-35 are completely absent. My wife didn't believe him but she went to a few events and found he was right.
You can blame screens or other trends that were going on before the pandemic, but the pandemic locked it in. At the rate we're going if Gen Z doesn't turn it around in 10 years there will not be a Gen Z+2.
So the argument that pandemic policy added a few years to elderly lives at the expense of the young and the children that they might have had is salient in my book -- I had to block a friend of mine on Facebook who hasn't wanted to talk about anything but masks and long COVID since 2021.
goatlover 2 hours ago [-]
Never seen the attempt by governments to contain a global pandemic that killed millions and threatened to overwhelm healthcare compared to Nazism before, but why should I be surprised? Explains a lot about the sorry state of modern politics.
yifanl 2 hours ago [-]
Great zinger buddy, you really showed off your wit.
AndrewKemendo 2 hours ago [-]
Yeah buddy we agree
ghurtado 2 hours ago [-]
If you edit your comment to add punctuation, please let me know: I would like to read that final pile of words.
I did try, I promise.
AndrewKemendo 1 hours ago [-]
Ok here: Everything from the semiconductor through the vacuum cleaner, automobile, airplanes and steam engines was developed to give a small group an advantage over all the other groups. It has always been the case, it will always be the case.
Fundamentally, at the root nature of humanity, humans do not care about the externalities, either good or bad.
tim333 45 minutes ago [-]
That's a slightly odd way of looking at it. I'm guessing the people developing airplanes or whatever thought of a number of things including - hey this would be cool to do - and - maybe we can make some money - and - maybe this will help people travel - and - maybe it'll impress the girls - and probably some other things too. At least that's roughly how I've thought when I make stuff, never this will give a small group an advantage.
jacquesm 29 minutes ago [-]
Vacuum cleaner -> sell appliances -> sell electric motors
But there was a clear advantage in quality of life for a lot of people too.
Automobile -> part of industrialization of transport -> faster transport, faster world
Arguably also a big increase in quality of life but it didn't scale that well and has also reduced the quality of life. If all that money had gone into public transport then that would likely have been a lot better.
Airplanes -> yes, definitely, but they were also clearly seen as an advantage in war, in fact that was always a major driver behind inventions.
Steam engine -> the mother of all prime movers and the beginnings of the fossil fuel debacle (coal).
Definitely a quality of life change but also the cause of the bigger problems we are suffering from today.
The 'coffin corner' (one of my hobby horses) is a real danger, we have, as a society, achieved a certain velocity, if we slow down too much we will crash, if we speed up the plane will come apart. Managing these transitions is extremely delicate work and it does not look as though 'delicate' is in the vocabulary of a lot of people in the driving seats.
csallen 47 minutes ago [-]
> prior to reforming society into one that does not predicate survival on continued employment and wages
There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.
stoneforger 23 minutes ago [-]
You would need a new sense of self and a life free of fear, raising children where they can truly be anything they like and teach their own kids how to find meaning in a life lived well. "Best I can do is treefiddy" though..
mitthrowaway2 3 hours ago [-]
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.
I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).
afthonos 3 hours ago [-]
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
cgannett 3 hours ago [-]
if people believe it's a threat and it is also real, then what matters is timing
goatlover 2 hours ago [-]
Which would also mean the accelerationists are potentially putting everyone at risk. I'd think a soft takeoff decades in the future would give us a much better chance of building the necessary safeguards and reorganizing society accordingly.
sigmoid10 3 hours ago [-]
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
Negitivefrags 3 hours ago [-]
> If the singularity does happen, then it hardly matters what people do or don't believe.
Depends on how you feel about Roko's basilisk.
VonTum 2 hours ago [-]
God Roko's Basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.
camgunz 1 hours ago [-]
The culture that brought you "speedrunning computer science with JavaScript" and "speedrunning exploitative, extractive capitalism" is back with their new banger "speedrunning philosophy". Nuke it from orbit; save humanity.
menaerus 2 hours ago [-]
> It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)
Here's the fallacy you fell into - and this is important to understand. Neither you nor I understand "how LLMs actually work" because, well, nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument, because it would be silly to think you know something the rest of the science community doesn't. Actually, there's a whole new field in science developing around understanding how models actually arrive at the answers they give us. The thing is, we are only observers of the results of the experiments we run by training those models, and it just so happens that the result of this experiment is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment - we can see that something behaves a certain way but we can't explain how or why.
hnfong 1 hours ago [-]
Pro tip: call it a "law of nature" and people will somehow stop pestering you about the why.
I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.
On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)
striking 1 hours ago [-]
Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?
famouswaffles 32 minutes ago [-]
>Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.
If you train a transformer on (only) lots and lots of addition pairs, i.e. '38393 + 79628 = 118021' and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token, which in this instance would be the sum of two numbers.
We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:
"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"
What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.
Let's revisit your statement.
"the mechanics of how LLMs work to produce results are observable and well-understood".
Observable, I'll give you that, but how on earth can you look at the above and sincerely call that 'well-understood'?
dbdoug 4 minutes ago [-]
From Gemini: When you take those two shapes and combine them, the resulting image looks like an umbrella.
menaerus 1 hours ago [-]
Yes, there is - the benefit of the doubt.
hn_acc1 1 hours ago [-]
You can't keep pushing the AI hype train if you consider it just a new type of software / fancy statistical database.
liuliu 2 hours ago [-]
Agree. I think it is just that people have their own simplified mental models of how it works. However, there is no reason to believe these simplified mental models are accurate (otherwise we would have been here 20 years earlier with HMM models).
The simplest way to stop people from thinking is to have a semi-plausible / "made-me-smart" incorrect mental model of how things work.
hn_acc1 1 hours ago [-]
Did you mean to use the word "mental"?
bheadmaster 3 hours ago [-]
> here’s how LLMs actually work
But how is that useful in any way?
For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.
OkayPhysicist 3 hours ago [-]
> We really have no idea how the ability to have a conversation emerged from predicting the next token.
Maybe you don't. To be clear, this is benefiting massively from hindsight (just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed up how to make one), but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
dTal 2 hours ago [-]
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.
No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.
To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
famouswaffles 1 hours ago [-]
>No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point.....
To be fair, only if you pose this question singularly with no preceding context. If you want the raw LLM to answer your question(s) reliably then you can have the context prepended with other question-answer pairs and it works fine. A raw LLM is already capable of being a chatbot or anything else with the right preceding context.
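Concretely, the "prepended context" is just a few-shot prompt built as plain text, something like this sketch (the example pairs are made up; you'd feed the resulting string to any plain text-completion endpoint and stop generation at the next "Q:"):

    # Few-shot prompt for a raw (non-chat-tuned) base model: the prepended
    # Q/A pairs make "an answer" the statistically likely continuation.
    examples = [
        ("What is the capital of France?", "Paris."),
        ("How many legs does a spider have?", "Eight."),
    ]
    question = "What gas do plants absorb from the air?"

    prompt = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in examples) + f"Q: {question}\nA:"
    print(prompt)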
accounting2026 22 minutes ago [-]
If such a simplistic explanation were true, LLMs would only be able to answer things that had been asked before, and where at least a 'fuzzy' textual question/answer match was available. This is clearly not the case. In practice you can prompt the LLM with such a large number of constraints, so large that the combinatorial explosion ensures no one asked that before. And you will still get a relevant answer combining all of those. Think combinations of features in a software request - including making some module that fits into your existing system (for which you have provided source) along with a list of requested features. Or questions you form based on a number of life experiences and interests that combined are unique to you. You can switch programming language, human language, writing styles, levels as you wish and discuss it in super esoteric languages or Morse code. So are we to believe these answers appear just because there happened to be similar questions in the training data where a suitable answer followed? Even if for the sake of argument we accept this explanation by "proximity of question/answer", it is immediately clear that this would have to rely on extreme levels of abstraction and mixing and matching going on inside the LLM. And it is then this process that we need to explain, whereas the textual proximity you invoke relies on it rather than explaining it.
bheadmaster 1 hours ago [-]
> Maybe you don't.
My best friend who has literally written a doctorate on artificial intelligence doesn't. If you do, please write a paper on it, and email it to me. My friend would be thrilled to read it.
famouswaffles 2 hours ago [-]
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?
You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.
measurablefunc 2 hours ago [-]
We know exactly what is going on inside the box. The problem isn't knowing what is going on inside the box, the problem is that it's all binary arithmetic & no human being evolved to make sense of binary arithmetic so it seems like magic to you when in reality it's nothing more than a circuit w/ billions of logic gates.
famouswaffles 1 hours ago [-]
We do not know or understand even a tiny fraction of the algorithms and processes a Large Language Model employs to answer any given question. We simply don't. Ironically, only the people who understand things the least think we do.
Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.
"Look man all reality is just uncountable numbers of subparticles phasing in and out of existence, what's not to understand?"
MarkusQ 3 hours ago [-]
> We really have no idea how the ability to have a conversation emerged from predicting the next token.
Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, and assume that because they produce similar outputs they must be "doing the same thing", at which point it's hard to see how LLMs could be doing it.
Sometimes things seems unbelievable simply because they aren't true.
bheadmaster 56 minutes ago [-]
> It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating.
It's funny how, in order to explain one complex phenomenon, you took an even more complex phenomenon as if it somehow simplifies it.
0x20cowboy 3 hours ago [-]
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
Forgeties79 3 hours ago [-]
I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.
It's somewhat simplistic, but I find it gets the conversation rolling. Then I go "it's great that we want to replace work but what are we going to do instead and how will we support ourselves?" It's a real question!
tehjoker 29 minutes ago [-]
It's true people need something to do, but I don't think the COVID shutdown (lockdowns didn't happen in the U.S. for the most part though they did in other countries) is a good comparison because the entire society was perfused with existential dread and fear of contact with another human being while the death count was rising and rising by thousands a day. It's not a situation that makes for comfortable comparisons because people were losing their damn minds and for good reason.
threethirtytwo 41 minutes ago [-]
I don’t think you’re rational. Part of being able to be unbiased is to see it in yourself.
First of all. Nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs. I’m saying humanity doesn’t understand LLMs in much the same way we don’t understand the human brain.
So saying whether the singularity is imminent or not imminent based off of that reasoning alone is irrational.
The only thing we have is the black box output and input of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped towards singularity. Whether the line actually gets there is up for question but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs period. No one does.
project2501a 33 minutes ago [-]
> Nobody knows how LLMs work.
I'm sorry, come again?
caycep 2 hours ago [-]
I thought the answer was "42"
famouswaffles 2 hours ago [-]
>It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)
You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.
NitpickLawyer 3 hours ago [-]
> [...] prior to reforming society [...]
Well, good luck. You have "only" the entire history of humankind on the other side of your argument :)
stego-tech 3 hours ago [-]
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
AndrewKemendo 3 hours ago [-]
Literally nobody’s trying because there is no solution
The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly
and so there is no solution because humans can’t plan or execute on a plan
sp527 3 hours ago [-]
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
AlexCoventry 1 hours ago [-]
FWIW, you'd probably be able to buy a lot of goods and services for $7/day, if robots were doing literally all the work.
sp527 25 minutes ago [-]
Agreed. The quality of life bar will be higher for sure. But it will still technically be a "subsistence" lifestyle, with no prospect of improvement. Perhaps that will suffice for most people? We're going to find out.
generic92034 3 hours ago [-]
> Folks vibe with the latter
I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).
stego-tech 3 hours ago [-]
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.
It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.
dakolli 2 hours ago [-]
It seems pretty obvious to me the ruling class is preparing for war to keep us occupied, just like in the 20s, they'll make young men and women so poor they'll beg to fight in a war.
It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be less of us.
generic92034 23 minutes ago [-]
Boy, will they be annoyed if the result of the AI race is something considerably less than AGI, so that all the people are still needed to keep the numbers going up.
dakolli 2 hours ago [-]
Just say it simply,
1. LLMs only serve to reduce the value of your labor to zero over time. They don't need to even be great tools, they just need to be perceived as "equally good" to engineers for C-Suite to lay everyone off, and rehire at 50-25% of previous wages, repeating this cycle over a decade.
2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, as anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about).
- Your original ideas and that Startup you think is going to save you, isn't going to be worth anything if someone with minimal skills can copy it.
3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.
I used like 1.8bb Anthropic tokens last year; I won't be using it again, I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.
AlexCoventry 1 hours ago [-]
You may be throwing the baby out with the bathwater. I learned more last year from ChatGPT Pro than I'd learned in the previous 5, FWIW.
yoyohello13 13 minutes ago [-]
Just say 'LLMs'. Whenever someone name-drops a specific model I can't help but think it's just an ad bot.
stego-tech 2 hours ago [-]
I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting them to learn themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.
A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.
You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.
dakolli 2 hours ago [-]
I agree, you're probably right! Thanks!
accidentallfact 3 hours ago [-]
Reality won't give a shit about what people believe.
aaroninsf 1 hours ago [-]
What is your argument for why denecessitating labor is very bad?
This is certainly the assertion of the capitalist class,
whose well documented behavior clearly conveys that this is not because the elimination of labor is not a source of happiness and freedom to pursue indulgences of every kind.
It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.
The assertion IMO is rooted rather in that it is inconveniently bad for the maintenance of the capitalists' control and primacy,
in as much as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.
holoduke 2 hours ago [-]
For ages most people believed in a religion. People are just not smart, and are sheepy followers.
soperj 2 hours ago [-]
Most still do.
AndrewKemendo 3 hours ago [-]
The goal is to eliminate humans as the primary actors on the planet entirely
At least that’s my personal goal
If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled
As it stands today and in all the annals of history there does not exist a system that does what I just described.
Bell Labs existed for the purpose of Bell Telephone…until it wasn't needed by Bell anymore. Google moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done in order to promote the power of the United States globally.
Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government and that can actually help people, and I might give somebody a chance, but that does not exist.
And no, Mondragon is not one of these.
eichin 2 minutes ago [-]
Bell Labs was pushed aside because Bell Telephone was broken up by the courts. (It's currently a part of Nokia of all things - yeah, despite your storytelling here, it's actually still around :-)
mtlmtlmtlmtl 2 hours ago [-]
Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.
AndrewKemendo 59 minutes ago [-]
Because every effort people spend doing anything else is a waste of resources and energy, and I want others to stop using resources to make bullshit and put all of them into ASI and human obviation
There are no more important other problems to solve other than this one
everything else is purely coping strategies for humans who don’t want to die wasting resources on bullshit
nine_k 2 hours ago [-]
This looks like a very comfortable, pleasant form of civilizational suicide.
Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])
Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans lose agency.
Get rid of everyone else so your life is easier and more sustainable... I guess I need to make my goal to get rid of you? Do you understand how this works yet?
AndrewKemendo 1 minutes ago [-]
Sounds like we both have our tasks then
Good luck
justonepost1 33 minutes ago [-]
Nobody can stop you from having this view, I suppose. But what gives you the right to impose this (lack of) future on billions of humans with friends and families and ambitions and interests who, to say the least, would not be in favor of “human obviation”?
goatlover 2 hours ago [-]
Most people need more social contact, not less. Modern tech is already alienating enough.
fainpul 2 hours ago [-]
Why would the machines want to work with you or any other human?
holoduke 2 hours ago [-]
Whereas I agree that working with machines would help dramatically in achieving science, in your world there would be no one who truly understands you. You would be alone. I can't imagine how you could prefer that.
stego-tech 2 hours ago [-]
Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.
I still do.
The difference is that I realized what I'd done was build up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.
Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.
But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.
To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophical about our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.
Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.
But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.
habinero 2 hours ago [-]
...Man, men really will do anything to avoid going to therapy.
Der_Einzige 1 hours ago [-]
Now this is transhumanism! Don't let the cope and seething from this website dissuade you from keeping these views.
AndrewKemendo 60 minutes ago [-]
Thank you!
tinfoilhatter 4 minutes ago [-]
Ah yes, because the majority of people pushing for transhumanism aren't complete psycho/sociopaths! You're in great company! /sarcasm
atomic128 3 hours ago [-]
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM.
You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
baxtr 17 minutes ago [-]
"The end of humanity" has been proclaimed many times over. Humanity won't end. It will change like it always has.
We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.
nicce 3 minutes ago [-]
Humanity may end if someone else goes to the top of the food chain.
debo_ 3 hours ago [-]
If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album
gojomo 3 hours ago [-]
Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
testaccount28 2 hours ago [-]
yes. whoever has the best (least detectable) model is best poised to poison the ladder for everyone.
creddit 2 hours ago [-]
I would bet a lot of money that your poison is already identified and filtered out of training data.
scratchyone 2 hours ago [-]
Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.
atomic128 2 hours ago [-]
We do not discuss algorithms. This is war. Loose lips sink ships.
We urge you to build and deploy weapons of your own unique design.
I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?
The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.
accidentallfact 2 hours ago [-]
The point of Dune is that the worst danger are people who obey authority without questioning it.
xmprt 2 hours ago [-]
Then wouldn't open source models running on commodity hardware be the best way to get around that? I think one of the greatest wins of the 21st century is that almost every human today has more computing power than the entire US government in the 1950s. More computer power has democratized access and ability to disperse information. There are tons of downsides to that which we're dealing with but on the net, I think it's positive.
shinycode 1 hours ago [-]
Does it also mean the US government has 1,000,000x more power than the one in 1950?
stnmtn 16 minutes ago [-]
speaking strictly from an energy standpoint (power grid, megatons of warheads, etc).. it's probably close to that number.
accidentallfact 2 hours ago [-]
It isn't a way around, you still obey. Only now, the authority you obey is a machine.
api 2 hours ago [-]
... which overthrowing the machines didn't stop. People just found another authority to mindlessly obey.
spacemark 58 minutes ago [-]
Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.
Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.
There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.
arjie 26 minutes ago [-]
A large number of people read a work of fiction and conclude that what happened in the work of fiction is an inevitability. My family has a genetically-selected baby (to avoid congenital illness) and the Hacker News link to the story had these comments all over it.
> I only know seven sci-fi films and shows that have warned about how this will go badly.
and
> Pretty sure this was the prologue to Gattaca.
and
> I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.
I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.
accidentallfact 3 hours ago [-]
A better approach is to make AI bullshit people on purpose.
zahlman 31 minutes ago [-]
This is essentially just that. The idea is that "poisoned" input data will cause AIs that consume it to become more likely to produce bullshit.
octernion 3 hours ago [-]
do... do the "poison" people actually think that will make a difference? that's hilarious.
vcanales 3 hours ago [-]
> The pole at ts isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
Damn, good read.
adastra22 3 hours ago [-]
We are already long past that point…
shantara 3 hours ago [-]
It doesn’t help when quite a few Big Tech companies are deliberately operating on the principle that they don’t have to follow the rules, just change at a rate faster than the bureaucratic system can respond.
gojomo 3 hours ago [-]
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."
– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
> A thoughtful man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.
Sounds exactly like someone twiddling the knobs of an LLM.
dwaltrip 1 hours ago [-]
Wow yeah very prescient.
ericmcer 3 hours ago [-]
Great article, super fun.
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc. Ostensibly a network connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
cpmsmith 1 hours ago [-]
One thing that stuck out to me about this is that there have only been 32 years since 1993. That is, if it's happened 6 times, this threshold is breached roughly once every five years. Doesn't sound that historic put that way.
malfist 2 hours ago [-]
Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while. You may even be able to layoff half your people if you're okay with KTLO'ing your business. This works great for companies that are already a monopoly power where you can stagnate and keep your customers and prevent competitors.
lenerdenator 2 hours ago [-]
> Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while
As long as you're
1) In a position where you can make the decisions on whether or not the company should move forward
and
2) Hold the stock units that will be exchanged for money if another company buys out your company
then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.
That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.
yifanl 2 hours ago [-]
We have more middle management than ever before because we cut all the other roles, and it turns out that people will desire employment, even if it means becoming a pointless bureaucrat, because the alternative is starving.
TooKool4This 23 minutes ago [-]
> Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.
Well, for starters, the population has almost tripled since the 1960s.
Mix in that we are solving different problems than in the 1960s, even administratively, and I don't see a clear reason from that argument why a shitload of work is meaningless.
shinycode 57 minutes ago [-]
Because companies made models built from (or stolen from) other people’s work, and this has massive layoff consequences: the paradigm is shifting, layoffs are massive, and lawmakers are too slow.
Shouldn’t we shift the whole capitalist paradigm and just ask the companies to give all their LLM work to the world for free as well?
It’s just a circle: AI is built from human knowledge and should be given back to all people for free. No company should have all this power. If nobody learns how to code because all code is generated, what would stop the gatekeepers of AI from raising prices 1000x and locking everyone out of building things at all, because it’s too expensive and too slow to do by hand?
It should all be made freely accessible to all humans, so that humans are forever able to build things from it.
PaulHoule 3 hours ago [-]
The simple model of an "intelligence explosion" is the obscure equation
dx/dt = x^2
which has the solution
x = 1/(C - t)
and is interesting in relation to the classic exponential growth equation
dx/dt = x
because here the per-unit growth rate is itself proportional to x, which captures the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth explodes as t->C, once x has become large, but while x is small it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.
Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation
dx/dt = (1 - x) x
thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
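A minimal sketch of the contrast, assuming both solutions start from the same value x(0) = 1 (the value is arbitrary; only the shapes matter):

    import math

    def exponential(t):
        # solution of dx/dt = x with x(0) = 1
        return math.exp(t)

    def hyperbolic(t, C=1.0):
        # solution of dx/dt = x^2 with x(0) = 1; blows up at the finite time t = C
        return 1.0 / (C - t)

    for t in [0.0, 0.5, 0.9, 0.99, 0.999]:
        print(f"t={t:5.3f}  exponential={exponential(t):7.3f}  hyperbolic={hyperbolic(t):9.1f}")

By t = 0.999 the exponential has barely reached e ≈ 2.7, while the hyperbolic solution has already passed 1000 on its way to the pole at t = 1.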
jgrahamc 3 hours ago [-]
Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
jacquesm 3 hours ago [-]
I suspect that's the secret driver behind a lot of the push for the apocalypse.
octernion 3 hours ago [-]
that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.
jacquesm 3 hours ago [-]
You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.
having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip
jacquesm 1 hours ago [-]
Sorry, we need the iron in your blood and bone marrow. Sluuuurrrrrpppp.... Enjoy the beach, or what's left.
dwaltrip 1 hours ago [-]
Much better sources of iron are available.
More likely we get smooshed unintentionally as the AIs seek those out.
jacquesm 40 minutes ago [-]
We need it all... oh, wait, you're not silicon... sluuuuuurrrrpp...
nphardon 2 hours ago [-]
Iirc in the Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s".
I always loved that little line. I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds, and of course time will stop.
Also:
> As t→ts−, the denominator goes to zero. x(t)→∞. Not a bug. The feature.
Classic LLM lingo in the end there.
uv-depression 1 minutes ago [-]
> I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds
It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.
snohobro 1 minutes ago [-]
Eh, he actually says “…sometime in the early Twenty-First Century, all of mankind was united in celebration. Through the blinding inebriation of hubris, we marveled at our magnificence as we gave birth to A.I.”
Doesn’t specify the 2020’s.
Either way, I do feel we are fast approaching something of significance as a species.
hard_times 38 minutes ago [-]
I don't think people realize how crazy this all is (and might become)
rektomatic 1 hours ago [-]
If I have to read one more "It isn't this. It's this," my head will explode. That phrase is the real singularity.
blahbob 2 hours ago [-]
It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
saulpw 2 hours ago [-]
By Tom Toro for the New Yorker (2012).
root_axis 3 hours ago [-]
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
1970-01-01 2 hours ago [-]
Not anytime soon. All day I'm getting: "Claude's response could not be fully generated"
kpil 2 hours ago [-]
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance.
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs.(Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
nutjob2 58 minutes ago [-]
> The hype is real. Will we see drastically reduced operational costs the coming years or will it follow the same curve as we've seen in productivity since 1750?
There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.
Which makes me wonder: what is the best 'huge AI bust' trade?
scotty79 45 minutes ago [-]
> what is the best 'huge AI bust' trade?
Things that will lose the most if we get Super AGI?
sixtyj 14 minutes ago [-]
The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.
The answer to the meaning of life is 42, by the way :)
pixl97 3 hours ago [-]
>That's a very different singularity than the one people argue about.
---
I wouldn't say it's that much different. This has always been a key point of the singularity
>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
TheOtherHobbes 1 hours ago [-]
r̶e̶a̶d̶e̶r̶ survivor.
zh3 4 hours ago [-]
Fortuitously before the Unix date rollover in 2038. Nice.
ecto 4 hours ago [-]
I didn't even realize - I hope my consciousness is uploaded with 64 bit integers!
thebruce87m 3 hours ago [-]
You’ll regret this statement in 292 billion years
layer8 3 hours ago [-]
I think we’ll manage to migrate to bignums by then.
GolfPopper 3 hours ago [-]
The poster won't, but the digital slaves made from his upload surely will.
buildbot 13 minutes ago [-]
What about the rate of articles about the singularity as a metric of the singularity?
danesparza 3 hours ago [-]
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.
I feel like I need to start more sprint stand-ups with this quote...
I'm not sure about current LLM techniques leading us there.
Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.
As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.
Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?
Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?
Certain classes of problems can be solved by searching over the space of possible solutions, either via brute force or some more clever technique like MCTS. For those types of problems, searching faster or more cleverly can solve them.
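As a toy illustration of that first category, take subset-sum as a stand-in problem (the choice of problem is just an assumption for the example); it can be solved purely by enumerating candidate solutions, so anything that searches faster helps directly:

    from itertools import combinations

    def subset_sum(numbers, target):
        # brute force: enumerate every subset until one sums to the target
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)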
Other types of problems require measurement in the real world in order to solve them. Better telescopes, better microscopes, more accurate sensing mechanisms to gather more precise data. No AI can accomplish this. An AI can help you to design better measurement techniques, but actually taking the measurements will require real time in the real world. And some of these measurement instruments have enormous construction costs, for example CERN or LIGO.
All of this is to say that there will come a point, at our current resolution of information, where no more intelligence can actually be extracted. We’ve already churned through the entire Internet. Maybe there are other data sets we can use, but everything will have diminishing returns.
So when people talk about trillion dollar superclusters, that only makes sense in a world where compute is the bottleneck and not better quality information. Much better to spend a few billion dollars gathering higher quality data.
hnfong 1 hours ago [-]
The main issue with novel things is that they look like random noise / trashy ideas / incomprehensible to most people.
Even if LLMs or some more advanced mechanical processes were able to generate novel ideas that are "good", people won't recognize those ideas for what they are.
You actually need a chain of progressively more "average" minds to popularize good ideas to the mainstream psyche, i.e. prototypically, the mad scientist comes up with this crazy idea, the well-respected thought leader who recognizes the potential and popularizes it to people within the niche field, the practitioners who apply and refine the idea, and lastly the popular-science efforts let the general public understand a simplified version of what it's all about.
Usually it takes decades.
You're not going to appreciate it if your LLM starts spewing mathematics not seen before on Earth. You'd think it's a glitch. The LLM is not trained to give responses that humans don't like. It's all by design.
When you folks say AI can't bring new ideas, you're right in practice, but you actually don't know what you're asking for. Not even entities with True Intelligence can give you what you think you want.
dakolli 3 hours ago [-]
Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?
nomel 2 hours ago [-]
There's all sorts of conversations like this that are genuinely exciting and fairly profound when you first consider them. Maybe you're older and have had enough conversations about the concept of a singularity that the topic is already boring to you.
Let them have their fun. Related, some adults are watching The Matrix, a 26 year old movie, for the first time today.
For some proof that it's not some common idea, I was recently listening to a fairly technical interview with a top AI researcher, presenting the idea of the singularity in a very indirect way, never actually mentioning the word, as if he was the one that thought of it. I wanted to scream "Just say it!" halfway through. The ability to do that, without being laughed at, proves it's not some tired idea, for others.
floren 1 hours ago [-]
Become?
mygn-l 27 minutes ago [-]
Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth??? I don't think your AI-generated content is reliable, to say the least.
TooKool4This 19 minutes ago [-]
I don’t feel like reading what is probably AI generated content. But based on looking at the model fits where hyperbolic models are extrapolating from the knee portion, having 2 data points fitting a line, fitting an exponential curve to a set of data measured in %, poor model fit in general, etc, im going to say this is not a very good prediction methodology.
Sure is a lot of words though :)
b00ty4breakfast 47 minutes ago [-]
The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted, the window dressing has just changed
rcarmo 4 hours ago [-]
"I could never get the hang of Tuesdays"
- Arthur Dent, H2G2
jama211 3 hours ago [-]
Thursdays, unfortunately
qoez 3 hours ago [-]
Great read but damn those are some questionable curve fittings on some very scattered data points
jacquesm 3 hours ago [-]
Better than some of the science papers I've tried to parse.
aenis 3 hours ago [-]
In other words, just another Tuesday.
baalimago 3 hours ago [-]
Well... I can't argue with facts. Especially not when they're in graph form.
raphar 7 minutes ago [-]
Why do the plutocrats believe that the entity emerging from the singularity will side with them?
Really curious.
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: that there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).
The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
pocksuppet 2 hours ago [-]
Was this ironically written by AI?
> The labor market isn't adjusting. It's snapping.
> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
SirHumphrey 1 hours ago [-]
Maybe it was, maybe he just writes that way. At some point somebody will read so much LLM text that they will start emulating AI unknowingly.
I just don’t care anymore. If the article is good I will continue reading it, if it’s bad I will stop. I don’t care if a machine or a human produced unpleasant reading material.
dclowd9901 50 minutes ago [-]
I really hate that the first example has become a de facto tell for LLMs, because it's a perfectly fine rhetorical device.
jcims 2 hours ago [-]
Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?
jesse__ 3 hours ago [-]
The meme at the top is absolute gold considering the point of the article. 10/10
wffurr 3 hours ago [-]
Why does one of them have the state flag of Ohio? What AI-and-Ohio-related news did I miss?
Thanks - I should have done an image search on the whole image. Instead, I clipped out the flag from the astronaut's shoulder and searched that, which is how I found out it was the Ohio flag. I just assumed it was an AI-generated image by the author and not a common meme template.
overfeed 2 hours ago [-]
> If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
jama211 3 hours ago [-]
A fantastic read, even if it makes a lot of silly assumptions - this is ok because it’s self aware of it.
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
lencastre 2 hours ago [-]
I hope in the afternoon, the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys to a date
Bratmon 1 hours ago [-]
I've never been Poe's lawed harder in my life.
kuahyeow 1 hours ago [-]
This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).
dusted 28 minutes ago [-]
Will.. will it be televised ?
dirkc 3 hours ago [-]
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.
*edit* - seems inline with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
arscan 3 hours ago [-]
Don't worry about the future
Or worry, but know that worrying
Is as effective as trying to solve an algebra equation by chewing Bubble gum
The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday
- Everybody's free (to wear sunscreen)
Baz Luhrmann
(or maybe Mary Schmich)
bradgessler 15 minutes ago [-]
What time?
witnessme 1 hours ago [-]
That would be 8 years after math + humor peaked in an article about singularity
athrowaway3z 3 hours ago [-]
> Tuesday, July 18, 2034
4 years early for the Y2K38 bug.
Is it coincidence or Roko's Basilisk who has intervened to start the curve early?
jmugan 4 hours ago [-]
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
ecto 3 hours ago [-]
Perhaps they will revel in the friends they made along the way.
Krei-se 3 hours ago [-]
If only we had a self-learning system battle-tested against reality.
miguel_martin 3 hours ago [-]
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
neilellis 3 hours ago [-]
But you're not Everyone - they are a fictional hacker collective from a TV show.
lostmsu 3 hours ago [-]
Your comment just self-defeated.
bluejellybean 3 hours ago [-]
Yet, here you are ;)
jacquesm 3 hours ago [-]
Another one down.
jrmg 3 hours ago [-]
This is gold.
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
mesozoicpilgrim 3 hours ago [-]
I'm trying to figure out if the LLM writing style is a feature or a bug
sempron64 3 hours ago [-]
A hyperbolic curve doesn't have an underlying meaning modeling a process beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.
H8crilA 3 hours ago [-]
But this is a phase change process.
Also, the temptation to shitpost in this thread ...
sempron64 3 hours ago [-]
I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.
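For what it's worth, a minimal sketch of what such a fit looks like, with entirely made-up data and the assumed functional form x(t) = A / (ts - t), to show how the "pole" date ts comes straight out of least squares and can shift if you add or nudge a point:

    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbola(t, A, ts):
        # x(t) = A / (ts - t): blows up at the fitted "singularity" year ts
        return A / (ts - t)

    years = np.array([2019, 2020, 2021, 2022, 2023, 2024, 2025], dtype=float)
    hype = np.array([0.7, 0.8, 0.9, 1.1, 1.3, 1.7, 2.4])  # invented metric, not real data

    (A, ts), _ = curve_fit(hyperbola, years, hype, p0=(6.0, 2030.0),
                           bounds=([0.0, 2026.0], [100.0, 2100.0]))
    print(f"fitted pole: {ts:.1f}")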
banannaise 3 hours ago [-]
You have not read far enough.
regnull 3 hours ago [-]
Guys, yesterday I spent some time convincing an LLM model from a leading provider that 2 cards plus 2 cards is 4 cards which is one short of a flush. I think we are not too close to a singularity, as it stands.
charcircuit 2 hours ago [-]
Why bring that up when you could bring up AI autonomously optimizing AI training and autonomously fixing bugs in AI training and inference code. Showing that AI already is accelerating self improvement would help establish the claim that we are getting closer to the singularity.
scotty79 44 minutes ago [-]
You convince AI manually instead of asking one AI to convince another?
That's so last week!
hinkley 3 hours ago [-]
Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.
0xbadcafebee 1 hours ago [-]
> The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth
The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.
1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.
2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.
3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.
ragchronos 3 hours ago [-]
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south. If the trends described continue, the world will become a much worse place in a few years' time.
With this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.
ck2 26 minutes ago [-]
Does "tokens per dollar" have a "moore's law" of doubling?
Because while machine-learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world like smartphones once did.
braden-lk 3 hours ago [-]
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
unbalancedevh 3 hours ago [-]
It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.
inanutshellus 3 hours ago [-]
We avoid catastrophe by thinking about new developments and how they can go wrong (and right).
Catastrophizing can be unhealthy and unproductive, but for those among us that can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.
... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...
That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.
Just in time for Bitcoin halving to go below 1 BTC
moffkalast 3 hours ago [-]
> I am aware this is unhinged. We're doing it anyway.
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
svilen_dobrev 2 hours ago [-]
> already exerting gravitational force on everything it touches.
So, "Falling of the night" ?
jonplackett 3 hours ago [-]
This assumes humanity can make it to 2034 without destroying itself some other way…
wbshaw 2 hours ago [-]
I got a strong ChatGPT vibe from that article.
willhoyle 2 hours ago [-]
Same. Sentences structured like these tip me off:
- Here's the thing nobody tells you about fitting singularities
- But here's the part that should unsettle you
- And the uncomfortable answer is: it's already happening.
- The labor market isn't adjusting. It's snapping.
banannaise 3 hours ago [-]
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
cesarvarela 3 hours ago [-]
Thanks, added to calendar.
skulk 3 hours ago [-]
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
ecto 3 hours ago [-]
Thanks. I dropped out of college
markgall 4 hours ago [-]
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."
> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
ecto 4 hours ago [-]
exponential = mañana
bitwize 60 minutes ago [-]
Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."
aenis 3 hours ago [-]
Damn. I had plans.
darepublic 3 hours ago [-]
> Real data. Real model. Real date!
Arrested Development?
PantaloonFlames 3 hours ago [-]
This is what I come here for. Terrific.
bwhiting2356 58 minutes ago [-]
We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.
No one has figured out a way to run a society where able bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate, in education, healthcare, arts (should not) or trades, R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post scarcity world when we don't yet.
brador 1 hours ago [-]
100% an AI wrote this. Possibly specifically to get to the top spot on HN.
Those short sentences are the most obvious clue. It’s too well written to be human.
2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
pickleRick243 19 minutes ago [-]
LLM slop article.
OutOfHere 3 hours ago [-]
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
vagrantstreet 3 hours ago [-]
Was expecting some mention of Universal Approximation Theorem
I really don't care much if this is semi-satire as someone else pointed out, the idea that AI will ever get "sentient" or explode into a singularity has to die out pretty please. Just make some nice Titanfall style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please
hipster_robot 3 hours ago [-]
why is everything broken?
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
Night_Thastus 1 hours ago [-]
This'll be a fun re-read in ~5 years when most of this has ended up being a nothing burger. (Minus one or two OK use-cases of LLMs)
boca_honey 3 hours ago [-]
Friendly reminder:
Scaling LLMs will not lead to AGI.
dude250711 2 hours ago [-]
It might lead to IPO.
cubefox 3 hours ago [-]
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:
Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander:
This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.
The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.
AldenOnTheGrid 35 minutes ago [-]
[dead]
EloniousBlamius 56 minutes ago [-]
[dead]
789bc7wassad 2 hours ago [-]
[dead]
tempaccountabcd 3 hours ago [-]
[dead]
AndrewKemendo 3 hours ago [-]
Y’all are hilarious
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it's:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race
Oh and by the way it’s not gonna be with LLMs it’s coming to you from RL + robotics
tempaccountabcd 3 hours ago [-]
[dead]
"Quiet Australians" - Scott Morrison 2019
Yes, we'd have a lot of lawsuits about it, but it would hardly be a bad use of time to litigate whether a politician's statements about the electorate's beliefs are accurate.
Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what is IS, it can't DO anything"...
Saying an LLM is a statistical prediction engine of the next token is IMO sort of confusing what it is with the medium it is expressed in/built of.
For instance those small experiments that train on addition. The weights end up forming an addition machine. An addition machine is what it is, that is the emergent behaviour. The machine learning weights is just the medium it is expressed in.
What's interesting about LLMs is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but when training weights for that it might well have a side-effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as well as a median programmer). As a sibling post says, we don't really know this yet -- but we can see adder machines being constructed out of the weights if we train small networks on addition.
We've already been here in the 1980s.
The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around that, and eject the set of people who aim to turn the tech industry into a "you don't even need a product" crowd of warmed-over Tony Robbins acolytes.
And there are plenty of people that take issue with that too.
Unfortunately they're not the ones paying the price. And... stock options.
* Profits now and violence later
OR
* Little bit of taxes now and accelerate easier
Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.
Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.
If you're looking for periods of high taxes and growing prosperity, 1950s in the US is a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.
This book
https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...
tells the compelling story that the Mellon family teamed up with the steelworker's union to use protectionism to protect the American steel industry's investments in obsolete open hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had their obsolete furnaces blown up. The rest of US industry, such as our car industry, were dragged down by this because they were using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.
Funny the Mellon family went on to further political mischief
https://en.wikipedia.org/wiki/Richard_Mellon_Scaife#Oppositi...
I don’t find this to be true
The state invests in important things that have 2nd and 3rd order positive benefit but aren’t immediately profitable. Money in a food bank is a “lost” investment.
Alternatively the state plays power games and gets a little too attached to its military toys.
Be careful. The data does not confirm that narrative. You mentioned the 1950s, which is a poignant example of reality conflicting with sponsored narrative. Pre WOII, the wealthy class orbiting the monopolists, and by extension their installed politicians, had no other ideas than to keep lowering taxes for the rich on and on, even if it only deepened the endless economic crisis. Many of them had fallen in the trap of believing their own narratives, something we know as the Cult of Wealth.
Meanwhile, average Americans lived on food stamps. Politically deadlocked in quasi-religious ideas of "bad governments versus wise business men", America kept falling deeper. Meanwhile, with just 175,000 serving on active duty, the U.S. Army was the 18th biggest in the world[1], poorly equipped, poorly trained. Right wing isolationism had brought the country in a precarious position. Then two things happened. Roosevelt and WOII.
In a unique moment, the state took matters into its own hands. The sheer excellence in planning, efficiency, speed and execution of the state baffled the Republicans, putting the oligarchic model of the economy to shame. The economy grew tremendously as well, something the oligarchy could not pull off. It is not well-known that WOII depended largely on state-operated industries, because the former class quickly understood how much the state's performance threatened their narratives. So they invested in disinformation campaigns, claiming the efforts and achievements of the government as their own.
1. https://www.politico.com/magazine/story/2019/06/06/how-world...
I assume you are talking about WW2 and at first thought it was a typo.
The post-war era, under Truman and Eisenhower administrations, reaped the benefits of the US being the wealthiest and most intact winner of WWII. At that time, the highest income tax rate bracket was 91%, but the effective rate was below 50%.
The US is also shaping up to be the principal winner in Artificial Intelligence.
If, like everyone is postulating, this has the same transformative impact on Robotics as it has had on software, we're probably looking at prosperity that will make the 1950s look like table stakes.
I think it's extremely early to try and call who the principal winner will be especially with all the global shifts happening.
There is no early mover advantage in AI in the same way that there was in all the other industries. That's the one thing that AI proponents in general seem not to have clued in to.
What will happen is that it eventually drags everything down because it takes the value out of the bulk of the service and knowledge economies. So you'll get places that are 'ahead' in the disruption. But the bottom will fall out of the revenue streams, which is one of the reasons these companies are all completely panicked and are wrecking their products by stuffing AI into them in every way possible, hoping that one of them will take.
Model training is only an edge in a world where free models do not exist, once those are 'good enough' good luck with your AI and your rapidly outdated hardware.
The typical investors horizon is short, but not that short.
there is only one possible "egalitarian" forward-looking investment that has paid off for everybody
I think the only exception to this is vaccines…and you saw how all that worked during Covid
Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines, whatever you pick, was developed in order to give a small group an advantage over all the other groups. It has always been this case and it will always be this case because, fundamentally, at the root nature of humanity, they do not care about the externalities, good or bad.
I am quite sure that people felt justified in their reasoning for their behavior. That just shows how effective the propaganda was, how easy it is to get people to fall in line. If it was a matter of voluntary self-sacrifice of personal freedoms, I wouldn't have made this comment. People decided to demonize anyone who did not agree with the "medical authority", especially doctors or researchers that did not toe the party line. They ruined careers, made people feel awful, and online the behavior was worse because of how easy it was to pile on. Over stuff where, to this day, it is still not very clear cut what the optimal strategy is for dealing with infectious disease.
But critiques like that ignore uncertainty, risk, and unavoidably getting it "wrong" (on any and all dimensions), no matter what anyone did.
With a new virus successfully circumnavigating the globe in a very short period of time, with billions of potential brand new hosts to infect and adapt within, and no way to know ahead of time how virulent and deadly it could quickly evolve to be, the only sane response is to treat it as extremely high risk.
There is no book for that. Nobody here or anywhere knows the "right" response to a rapidly spreading (and killing) virus, unresponsive to current remedies. Because it is impossible to know ahead of time.
If you actually have an answer for that, you need to write that book.
And take into account, that a lot of people involved in the last response, are very cognizant that we/they can learn from what worked, what didn't, etc. That is the valuable kind of 20-20 vision.
A lot of at-risk people made it to the vaccines before getting COVID. The ones I know are very happy about everything that reduced their risk. They are happy not to have died, despite those who wanted to let the disease "take its natural course".
And those that died, including people I know, might argue we could have done more, acted as a better team. But they don't get to.
No un-nuanced view of the situation has merit.
The most significant thing we learned: a lot of humanity is preparing to be a problem if the next pandemic proves ultimately deadlier. A lot of humanity doesn't understand risk, and doesn't care, if doing so requires cooperative efforts from individuals.
I think the sane version of this is that Gen Z didn't just lose its education, it lost its socialization. I know someone who works in administration of my Uni who tracks general well being of students who said they were expecting it to bounce back after the pandemic and they've found it hasn't. My son reports if you go to any kind of public event be it a sewing club or a music festival people 18-35 are completely absent. My wife didn't believe him but she went to a few events and found he was right.
You can blame screens or other trends that were going on before the pandemic, but the pandemic locked it in. At the rate we're going if Gen Z doesn't turn it around in 10 years there will not be a Gen Z+2.
So the argument that pandemic policy added a few years to elderly lives at the expense of the young and the children that they might have had is salient in my book -- I had to block a friend of mine on Facebook who hasn't wanted to talk about anything but masks and long COVID since 2021.
I did try, I promise.
Fundamentally, at the root nature of humanity, humans do not care about the externalities, either good or bad.
But there was a clear advantage in quality of life for a lot of people too.
Automobile -> part of industrialization of transport -> faster transport, faster world
Arguably also a big increase in quality of life but it didn't scale that well and has also reduced the quality of life. If all that money had gone into public transport then that would likely have been a lot better.
Airplanes -> yes, definitely, but they were also clearly seen as an advantage in war, in fact that was always a major driver behind inventions.
Steam engine -> the mother of all prime movers and the beginnings of the fossil fuel debacle (coal).
Definitely a quality of life change but also the cause of the bigger problems we are suffering from today.
The 'coffin corner' (one of my hobby horses) is a real danger: we have, as a society, achieved a certain velocity; if we slow down too much we will crash, and if we speed up too much the plane will come apart. Managing these transitions is extremely delicate work, and it does not look as though 'delicate' is in the vocabulary of a lot of people in the driving seats.
There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.
I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).
Depends on how you feel about Roko's basilisk.
Here's the fallacy you yourself fell into - this is important to understand. Neither you nor I understand "how LLMs actually work" because, well, nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument, because it would be silly to think you know something the rest of the science community doesn't. Actually, there's a whole new field of science developing around understanding how models actually arrive at the answers they give us. The thing is, we are only observers of the results of the experiments we run by training those models; it just so happens that the result of this experiment is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment - we can see that something is behaving in a certain way, but we can't explain how or why.
I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.
On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)
Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?
If you train a transformer on (only) lots and lots of addition pairs, e.g. '38393 + 79628 = 118021', and nothing else, the transformer will, during training, discover an algorithm for addition and employ it in service of predicting the next token, which in this instance is the sum of the two numbers.
We know this because of tedious interpretability research, the very limited problem space and the fact we knew exactly what to look for.
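For anyone who wants to poke at this themselves, here's a minimal sketch of generating exactly that kind of addition-only corpus (function name and corpus size are made up for illustration; the format matches the '38393 + 79628 = 118021' strings above):

    import random

    def addition_example(max_digits=5):
        # Toy data generator: two random integers and their sum,
        # serialized like '38393 + 79628 = 118021'.
        a = random.randint(0, 10**max_digits - 1)
        b = random.randint(0, 10**max_digits - 1)
        return f"{a} + {b} = {a + b}"

    # A transformer trained only on strings like these has nothing useful to
    # predict for the tokens after '=' except the digits of the correct sum,
    # so the cheapest way to drive the loss down is to internalize some
    # addition algorithm rather than memorize every pair.
    corpus = [addition_example() for _ in range(100_000)]
    print(corpus[:3])

The interesting part is not the data, which is trivial, but that interpretability work on such a narrow setup can actually recover the algorithm the model learned.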
Alright, let's leave addition aside (SOTA LLMs are after all trained on much more) and think about another question. Any other question at all. How about something like:
"Take a capital letter J and a right parenthesis, ). Take the parenthesis, rotate it counterclockwise 90 degrees, and put it on top of the J. What everyday object does that resemble?"
What algorithm does GPT or Gemini or whatever employ to answer this and similar questions correctly? It's certainly not the one it learnt for addition. Do you know? No. Do the creators at OpenAI or Google know? Not at all. Can you or they find out right now? Also no.
Let's revisit your statement.
"the mechanics of how LLMs work to produce results are observable and well-understood".
Observable, I'll give you that, but how on earth can you look at the above and sincerely call that 'well-understood'?
The simplest way to stop people from thinking is to have a semi-plausible / "made-me-smart" incorrect mental model of how things work.
But how is that useful in any way?
For all we know, LLMs are black boxes. We really have no idea how the ability to hold a conversation emerged from predicting the next token.
Maybe you don't. To be clear, this benefits massively from hindsight - just as, if I didn't know that combustion engines worked, I probably wouldn't have dreamed up how to make one - but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.
To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
To be fair, only if you pose the question on its own with no preceding context. If you want the raw LLM to answer your question(s) reliably, you can prepend other question-answer pairs to the context and it works fine. A raw LLM is already capable of being a chatbot or anything else with the right preceding context.
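A minimal sketch of the "prepend question-answer pairs" trick being described here - the prompt text is illustrative only, and no particular model or API is assumed:

    # Few-shot context for a raw, non-chat-tuned base model. Given this
    # prefix, the statistically likely continuation is another short answer
    # in the same format, not a second question or a flamewar.
    few_shot_prompt = (
        "Q: What is the capital of France?\n"
        "A: Paris.\n"
        "\n"
        "Q: What is 7 times 8?\n"
        "A: 56.\n"
        "\n"
        "Q: Why is the sky blue?\n"
        "A:"
    )
    # completion = base_model.generate(few_shot_prompt)  # hypothetical call
    print(few_shot_prompt)

Chat finetuning just bakes a standing version of this framing into the model instead of making you supply it every time.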
My best friend who has literally written a doctorate on artificial intelligence doesn't. If you do, please write a paper on it, and email it to me. My friend would be thrilled to read it.
Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?
You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.
Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.
Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" (two people converse) and assume that the fact that they produce similar outputs means that they must be "doing the same thing" and it's hard to see how LLMs could be doing this.
Sometimes things seem unbelievable simply because they aren't true.
It's funny how, in order to explain one complex phenomenon, you reached for an even more complex phenomenon as if it somehow simplifies things.
It's somewhat simplistic, but I find it gets the conversation rolling. Then I go "it's great that we want to replace work, but what are we going to do instead and how will we support ourselves?" It's a real question!
First of all. Nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs. I’m saying humanity doesn’t understand LLMs in much the same way we don’t understand the human brain.
So saying whether the singularity is imminent or not imminent based off of that reasoning alone is irrational.
The only thing we have is the black box output and input of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped towards singularity. Whether the line actually gets there is up for question but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs period. No one does.
I'm sorry, come again?
You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.
Well, good luck. You have "only" the entire history of humankind on the other side of your argument :)
The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly
and so there is no solution because humans can’t plan or execute on a plan
I am not convinced, though, it is still up to "the folks" if we change course. Billionaires and their sycophants may not care for the bad consequences (or even appreciate them - realistic or not).
It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.
It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be less of us.
1. LLMs only serve to reduce the value of your labor to zero over time. They don't need to even be great tools, they just need to be perceived as "equally good" to engineers for C-Suite to lay everyone off, and rehire at 50-25% of previous wages, repeating this cycle over a decade.
2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.
3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.
I used like 1.8 billion Anthropic tokens last year; I won't be using it again, and I won't be participating in this experiment. I've likely lost years of my life in "potential learning" to the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.
A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.
You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.
This is certainly the assertion of the capitalist class,
whose well documented behavior clearly conveys that this is not because the elimination of labor is not a source of happiness and freedom to pursue indulgences of every kind.
It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.
The assertion IMO is rooted rather in that it is inconveniently bad for the maintenance of the capitalists' control and primacy,
in as much as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.
At least that’s my personal goal
If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled
As it stands today and in all the annals of history there does not exist a system that does what I just described.
Bell Labs existed for the purpose of Bell Telephone…until it wasn't needed by Bell anymore. Google's moonshots existed for the shareholders of Google…until they were no longer useful to capital. All the work done at Sandia and White Sands labs was done to promote the power of the United States globally.
Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government, that can actually help people, and I might give somebody a chance - but that does not exist.
And no, Mondragon is not one of these.
There are no other problems more important to solve than this one
everything else is purely coping strategies for humans who don’t want to die wasting resources on bullshit
Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])
Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans would lose agency.
[1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...
Get rid of everyone else so your life is easier and more sustainable... I guess I need to make my goal to get rid of you? Do you understand how this works yet?
Good luck
I still do.
The difference is that I realized what I'd done was build up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.
Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.
But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.
To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophical about our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.
Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.
But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.
You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.
We urge you to build and deploy weapons of your own unique design.
The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.
Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.
There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.
> I only know seven sci-fi films and shows that have warned about how this will go badly.
and
> Pretty sure this was the prologue to Gattaca.
and
> I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.
I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.
Damn, good read.
– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
https://www.baen.com/Chapters/9781618249203/9781618249203___...
> A thoughtful-man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.
Sounds exactly like someone twiddling the knobs of an LLM.
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc. Ostensibly a network connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
As long as you're
1) In a position where you can make the decisions on whether or not the company should move forward
and
2) Hold the stock units that will be exchanged for money if another company buys out your company
then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.
That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.
Well, for starters, the population has almost tripled since the 1960s.
Mix in that we are solving different problems than in the 1960s, even administratively, and I don't see a clear reason from that argument to conclude that a shitload of the work is meaningless.
Like the exponential growth equation, it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish; and if you start adding realistic terms to slow the growth, it qualitatively isn't that different from the logistic growth equation, thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
Don't click here:
https://www.decisionproblem.com/paperclips/
More likely we get smooshed unintentionally as the AIs seek those out.
Also: > As t → t_s−, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.
Classic LLM lingo in the end there.
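For anyone puzzled by that quoted line: it's just hyperbolic growth, and the pole falls straight out of the differential equation. A minimal sketch of the three growth regimes being contrasted in this thread (standard textbook results; k, K, x_0 and t_s are generic symbols, not the article's fitted parameters):

    \frac{dx}{dt} = kx \;\Rightarrow\; x(t) = x_0\, e^{kt} \qquad \text{(exponential: finite at every finite } t\text{)}

    \frac{dx}{dt} = kx^{2} \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t} \qquad \text{(hyperbolic: pole at } t_s = 1/(k x_0)\text{)}

    \frac{dx}{dt} = kx\left(1 - \frac{x}{K}\right) \;\Rightarrow\; x(t) \to K \qquad \text{(logistic: saturates at the carrying capacity } K\text{)}

As t approaches t_s from below, the denominator 1 − k·x_0·t goes to zero and x(t) blows up - exactly the quoted "denominator goes to zero" line. Exponential growth has no such finite-time pole, and adding a logistic saturation term removes it entirely.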
It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.
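To put a rough number on the thermodynamic part of that claim - this is the textbook Landauer bound, not something from the article - erasing a single bit of information at room temperature costs at least

    E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9\times10^{-21}\,\mathrm{J}

Tiny per bit, but it is a hard floor that scales with the number of irreversible operations, so "just think faster" eventually runs into energy and heat-dissipation limits as surely as it runs into the speed of light.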
Doesn’t specify the 2020’s.
Either way, I do feel we are fast approaching something of significance as a species.
I don't know who needs to hear this - a lot of people, apparently - but the following three statements are impossible to validate, yet have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs the coming years or will it follow the same curve as we've seen in productivity since 1750?
There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.
Which makes me wonder: what is the best 'huge AI bust' trade?
Things that will lose the most if we get Super AGI?
The answer to the meaning of life is 42, by the way :)
---
I wouldn't say it's that much different. This has always been a key point of the singularity:
>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
I feel like I need to start more sprint stand-ups with this quote...
https://www.youtube.com/watch?v=9aVO7GAwxnQ
Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.
As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot, considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.
Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?
Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?
[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...
Other types of problems require measurement in the real world in order to solve them. Better telescopes, better microscopes, more accurate sensing mechanisms to gather more precise data. No AI can accomplish this. An AI can help you to design better measurement techniques, but actually taking the measurements will require real time in the real world. And some of these measurement instruments have enormous construction costs, for example CERN or LIGO.
All of this is to say that there will come a point where, at our current resolution of information, no more intelligence can actually be extracted. We've already churned through the entire Internet. Maybe there are other data sets we can use, but everything will have diminishing returns.
So when people talk about trillion dollar superclusters, that only makes sense in a world where compute is the bottleneck and not better quality information. Much better to spend a few billion dollars gathering higher quality data.
Even if LLMs or some more advanced mechanical processes were able to generate novel ideas that are "good", people won't recognize those ideas for what they are.
You actually need a chain of progressively more "average" minds to popularize good ideas into the mainstream psyche: prototypically, the mad scientist who comes up with the crazy idea, the well-respected thought leader who recognizes its potential and popularizes it within the niche field, the practitioners who apply and refine the idea, and lastly the popular-science efforts that let the general public understand a simplified version of what it's all about.
Usually it takes decades.
You're not going to appreciate it if your LLM starts spewing mathematics not seen before on Earth. You'd think it's a glitch. The LLM is not trained to give responses that humans don't like. It's all by design.
When you folks say AI can't bring new ideas, you're right in practice, but you actually don't know what you're asking for. Not even entities with True Intelligence can give you what you think you want.
Let them have their fun. Related, some adults are watching The Matrix, a 26 year old movie, for the first time today.
For some proof that it's not some common idea, I was recently listening to a fairly technical interview with a top AI researcher, presenting the idea of the singularity in a very indirect way, never actually mentioning the word, as if he was the one that thought of it. I wanted to scream "Just say it!" halfway through. The ability to do that, without being laughed at, proves it's not some tired idea, for others.
Sure is a lot of words though :)
- Arthur Dent, H2G2
The (social) Singularity is already happening in the form of a mass delusion that - especially in the Abrahamic apocalyptic cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
> The labor market isn't adjusting. It's snapping.
> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
I just don’t care anymore. If the article is good I will continue reading it, if it’s bad I will stop. I don’t care if a machine or a human produced unpleasant reading material.
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
*edit* - seems in line with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
4 years early for the Y2K38 bug.
Is it coincidence or Roko's Basilisk who has intervened to start the curve early?
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
Also, the temptation to shitpost in this thread ...
That's so last week!
The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.
1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.
2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.
3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.
you can easily see that, at a doubling rate of every 2 years, by 2020 we would already have had over 5 Facebook accounts per human on Earth.
Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world, like smartphones once did
Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.
... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...
That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
So, "Falling of the night" ?
- Here's the thing nobody tells you about fitting singularities
- But here's the part that should unsettle you
- And the uncomfortable answer is: it's already happening.
- The labor market isn't adjusting. It's snapping.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
Arrested Development?
No one has figured out a way to run a society where able-bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate: in education, healthcare, and the arts (should not), or in trades and R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post-scarcity world when we don't yet.
Those short sentences are the most obvious clue. It’s too well written to be human.
I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
Scaling LLMs will not lead to AGI.
"1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it's:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race
Oh, and by the way, it's not gonna be with LLMs - it's coming to you from RL + robotics