AI hype is crashing into reality. Stay calm (businessinsider.com)
shermantanktop 4 days ago [-]
Assuming this prediction of a tepid future for AI pans out, where is the accountability for all those CEOs and managers who pushed for deep investment based on FOMO, bolstered by wishful thinking and a willing ignorance of the technical details?

IOW, who will be fired for getting this wrong? Answer: nobody.

Some small AI-based companies will tank, but all the leaders in F500 companies know how to survive. If the emperor has no clothes, they'll all have collective amnesia and say they knew it the whole time. A few quotes to the press here, a few emails there, and they will move on with their BSing and talk about "what we learned."

philjohn 4 days ago [-]
At this point you can't convince me that we couldn't just replace most CEOs and VPs with LLMs and get the same, or better, outcomes.
utyop22 4 days ago [-]
Sundar, Cook, etc. are living off the foundations laid by prior founders/CEOs.

I'll give Cook a lot of respect, since he knows supply chains and manufacturing in particular deeply, and that's really been his job post-Steve Jobs.

But what value does Sundar or Satya add, really? I've not heard a single super insightful comment come out of their mouths. You could replace them and not notice a difference in financial performance.

42lux 4 days ago [-]
They are whips for the boards.
utyop22 4 days ago [-]
There have been many men throughout history who were whipping boys; none of them possessed any important quality.
Macha 4 days ago [-]
Note that while none of the CEOs will be fired for getting it wrong, plenty of people will be fired because the CEOs got it wrong, as the CEOs cut headcount to reduce costs to have _something_ to present to investors in earnings reports.
danaris 4 days ago [-]
> where is the accountability for all those CEOs and managers who pushed for deep investment based on FOMO, bolstered by wishful thinking and a willing ignorance of the technical details?

There's never any accountability for the bad—sometimes destructively, catastrophically bad—decisions of managers and executives. And frankly, this is part of what is causing serious problems for our society.

It breeds a class of people who genuinely believe that either a) they are truly always right, always the smartest ones around, and any mistakes or failures are because of all the people around them, or b) it doesn't matter how often they're wrong; they're entitled to always be taken seriously and get their full bonuses, no matter how badly things are falling apart because of their decisions.

Now, part of this is because our corporate world is really really bad at assessing the outcomes of decisions like these (and this is not wholly unrelated to the fact that proper assessment would reveal the levels of incompetence in many C-suites). But part of it is simply because we have built a culture that says these people are never to be questioned.

And that's toxic to any attempts to actually build something better.

aleph_minus_one 4 days ago [-]
> Ahead of launch, OpenAI's Sam Altman said he'd felt "useless" compared to the model's intelligence, even drawing parallels with the Manhattan Project. When it arrived, users apparently felt less intimidated.

So Sam Altman admitted that he isn't so smart after all? :-)

chrsw 4 days ago [-]
AI has huge potential. LLMs, though? Not so much. And the timeline for AI to transform the way we live and work should be measured in decades, not months.
mettamage 4 days ago [-]
Why not? They've helped me get more stuff done in new ways than blockchain ever did. Both technologies were touted as innovative. I'm doing all kinds of things with AI all the time. It's pretty handy.
chrsw 2 days ago [-]
They're useful. I'm doubting that they will fulfill the AI hype: that they're on the road to AGI, or that they're going to replace human work at scale.
captain_coffee 4 days ago [-]
This is the correct answer!
jameslk 4 days ago [-]
The title is clickbaity compared to the content of the article, which has more nuance. Its claim is that recent AI advances were both overhyped and still have a lot of utility that hasn't fully propagated yet (a la the Internet). Not terribly revelatory at this point.
aldebran 4 days ago [-]
What does GPT-5 do that just wasn’t possible before or wasn’t possible as well as it is now?

In my daily use cases I only see regressions.

lostmsu 4 days ago [-]
GPT-5 did nothing new, but Claude 4 and Gemini 2.5 were released just a few months ago.
tim333 2 days ago [-]
>AI seems to have reached its iPhone 4 moment

implying development has levelled off from a functional point of view. Which was kind of true with iPhones: my 13 does basically the same as the 4 did, just with better images and speed. And in the future the iPhone 29 will probably still do apps, photos, and calls, just a bit better.

But I don't think that'll be true with AI. There are huge categories of stuff (being able to do human jobs, being self-improving, having robots that can build houses and factories, and so on) that don't work at the moment but may well in a decade or two. I think it may be more of a Sopwith Camel moment.

flowerthoughts 4 days ago [-]
It's funny how this "crash" is not being explained with headline examples, but by just stating "it's a crash." Normally, crashes are led by bankruptcies, broken promises, and other tangible issues. Right now it's just "everyone says it's crashing," which feels like Mr. Altman wants to buy back shares cheaply.
tiarafawn 4 days ago [-]
> if AGI (artificial general intelligence) or superintelligence do in fact one day arrive, it might not seem like much of a leap at all.

That does not seem like a valid conclusion to draw from the observations in the article.

walpurginacht 2 days ago [-]
To be honest, with the level of hype it's been sustaining, it's about damn time people got disappointed with the unrealistic expectations they set (or let others set) for themselves. Meanwhile, people who merely view it as a fancy tool with all of its shortcomings, and treat it as such when building something with it, will keep finding benefits from it.

With this, hopefully we can stop having people use AI as a term exclusively for LLMs.

We're about to hit, or have already hit, the "data" wall, as it gets increasingly hard to find more data to train on, exacerbated by more and more generated content that risks autophagy.

It's less and less about gathering more data now: either we're going to try to engineer more data into existence, or it's back to the deep-learning-boom approach of optimizing architectures and training methodologies to squeeze more performance out of the data we have.
