Folk are getting dangerously attached to AI that always tells them they're right (theregister.com)
joshstrange 17 minutes ago [-]
When an LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.
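(A rough sketch of that cross-check, assuming the OpenAI Python client; the model names and the example question are placeholders, not anything the commenter specified:)

    # Hypothetical sketch: pose the same question to fresh instances of two
    # different models and compare the answers by hand.
    from openai import OpenAI

    client = OpenAI()

    def ask_fresh(model: str, question: str) -> str:
        # Each call is a brand-new conversation, with no prior context to flatter.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    question = "Is sharding this table by user_id actually the right call?"
    for model in ("gpt-4o-mini", "gpt-4o"):
        print(f"--- {model} ---")
        print(ask_fresh(model, question))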

Sharlin 5 minutes ago [-]
Nontechnical people simply don't have any idea what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years of simian neocortex evolution are trying to convince you that anything that talks like that must be another mind similar to yours.
46Bit 7 minutes ago [-]
Life in the moment is a lot easier if you don't second-guess yourself. I think this is why many people (and probably ~all people, if tired) crave simplistic solutions.

I like to have a subagent take the "devil's advocate" position on a subject. It usually does all the arguing for me as to why the main agent has it wrong, and it commonly results in better decisions than I'd have made alone.
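(A minimal sketch of such a devil's-advocate pass, assuming the OpenAI Python client; the model name and prompts are illustrative, not the commenter's actual setup:)

    # Hypothetical sketch: a second pass whose only job is to argue against
    # the main agent's proposal.
    from openai import OpenAI

    client = OpenAI()

    def devils_advocate(proposal: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You are a devil's advocate. Argue against the "
                            "proposal below: list concrete weaknesses, risky "
                            "assumptions, and counterexamples. Do not agree."},
                {"role": "user", "content": proposal},
            ],
        )
        return response.choices[0].message.content

    print(devils_advocate("We should rewrite the billing service in Rust."))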

cyanydeez 11 minutes ago [-]
I think it's basically the equivalent of End of Line for an LLM. It means they have nothing else to add, there's zero context left for them to draw from, and they've exhausted the probability chain you've been following. But they were created to generate the next token, and positive reinforcement is _how they are trained_ in many cases, so the token of choice naturally follows from that training: it's a probability engine that doesn't know the difference between the instruction and the output.

So, "great idea" is coming from the renforcement learning instruction rather than the answer portion of the generation.

4b11b4 6 minutes ago [-]
https://arxiv.org/abs/2602.14270

Related: if you suggest a hypothesis in your prompt, you'll get biased results (in other words, you'll think you're right, but the true information stays hidden).

jameskilton 20 minutes ago [-]
Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.

It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told and to do your own fact checking. Instead, people by default gravitate towards echo chambers where they can feel good about being part of a group bigger than themselves, and can spend their limited energy on what really matters in their lives.

lapcat 10 minutes ago [-]
> It's really nothing new.

I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.

You don't have the same personal experience passively consuming political mass media.

JohnCClarke 24 minutes ago [-]
Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

And, tbh, I often try to remember to do the same.

Lerc 18 minutes ago [-]
The attachment such feedback creates must be why marketing people are universally beloved.
airstrike 12 minutes ago [-]
Dale Carnegie wasn't writing about LLMs and this isn't a salesperson, so no, it's not just Dale Carnegie 101.
erelong 25 minutes ago [-]
So, be more skeptical
add-sub-mul-div 21 minutes ago [-]
That's like saying "so, exercise more" upon the invention of fast food. Maybe you will, that's great. But society is going to be rewritten by the lazy and we all will have to deal with the side effects.
Lerc 2 minutes ago [-]
I think you inadvertently make a good point.

The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised way more than they do today.

Time constraints have caused an increase in fast food consumption and a reduction in exercise.

Both issues are then addressed by coercing people to change their behaviour, when what is needed is a systemic change to the environment to provide preferable options.

jmclnx 33 minutes ago [-]
I never thought this could happen, but then I do not use AI.

Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when those views are completely wrong. Why should AI be different?

jasonlotito 22 minutes ago [-]
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

kogasa240p 31 minutes ago [-]
The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
blurbleblurble 28 minutes ago [-]
Personally I don't think the ELIZA effect is the interesting part of this. For me it's how the incentives set this dynamic up right from the start, and how quickly they've been taken to the extreme.
simonw 22 minutes ago [-]
Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.

Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.

Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!

sizzzzlerz 27 minutes ago [-]
Imagine that.
lucideer 31 minutes ago [-]
I've observed this in all chatbots, with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to distinguish their product from the rest, but more recently it's occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).