Last year I used LLMs to solve AoC, to see how well they could keep up, to learn how to steer them, and to see how the open models would perform. When I talk about it, quite a few "programmers" get upset. Glad to see that Norvig is experimenting.
P.S.: anyone who gets upset that folks are experimenting with LLMs to generate code or solve AoC should have their programmer's card revoked.
foobarchu 17 hours ago [-]
Did you make it clear that you weren't submitting to the leaderboards? I of course assume you weren't.
Most of the hubbub I saw was because AI code making it into those leaderboards very clearly violates the spirit of competition.
mark_l_watson 1 day ago [-]
I enjoy reading Peter’s ‘Python studies’ and was surprised to see a comparison of different LLMs solving Advent of Code problems here, but the linked article is pretty cool.
Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading that article my opinion shifted a bit: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I apply Gemini and a lot of open-weight models to two things: 1. coding problems, and 2. after I read or watch material on philosophy, I almost always ask Gemini for a summary, references, and a short discussion based on what Gemini knows about me.
esafak 1 day ago [-]
> I started with the Gemini 3 Pro Fast model ...
Quiet product announcement.
az09mugen 1 day ago [-]
I'm sorry, but what's the point here? It's not for a job, it doesn't improve an LLM, and it isn't doing something useful per se; it's just "enjoying" how version X or Y of an LLM can solve problems.
I don't want to sound grumpy, but it doesn't achieve anything; this is just a showcase of how a "calculator with a small probability of failure can succeed".
Move on, do something useful, don't stop being amazed by AI, but please stop throwing it in my face.
mathgeek 1 day ago [-]
Author is Peter Norvig, who has definitely done “something useful” when it comes to AI. He’s earned some time for play.
grim_io 1 day ago [-]
> I don't want to sound grumpy
Well, you didn't try very hard :)
If you think that every model behaves the same way in terms of programming, you don't have a lot of experience with them.
I find it useful to see how other people use all kinds of tools. AI is no different.
It's like getting upset when someone compares what it's like using Bun vs Deno vs Node.
pbalau 1 day ago [-]
> Move on, do something useful, don't stop being amazed by AI but please stop throwing it at my face.
Do you see the irony in what you did?
So how about you move on, do something useful, don't stop being annoyed by AI, but please stop throwing your opinion in everyone's face.
az09mugen 4 hours ago [-]
I will stop when the AI bubble bursts and people stop throwing statistical models in my face every day. It is literally everywhere I look; I did not ask for it, even when I filter my inputs.
IMHO it would be nice to have an AI summarizer/filter (see the irony?) for tech news (HN maybe?) that filters out everything about AI, LLMs, and company.
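The filter wished for above is easy to sketch without any AI at all. Here's a minimal, hypothetical keyword filter over story titles; the keyword list and sample titles are made up for illustration (a real version could pull titles from a news feed instead):

```python
# Hypothetical sketch: drop stories whose titles mention AI-related terms.
# The keyword set and sample titles below are invented for this example.
AI_KEYWORDS = {"ai", "llm", "gpt", "gemini", "claude", "openai", "anthropic"}

def is_ai_story(title: str) -> bool:
    """Return True if any word in the title matches an AI keyword."""
    words = {w.strip(".,:!?()\"'").lower() for w in title.split()}
    return not AI_KEYWORDS.isdisjoint(words)

def filter_front_page(titles: list[str]) -> list[str]:
    """Keep only the stories that are not about AI/LLMs."""
    return [t for t in titles if not is_ai_story(t)]

titles = [
    "Show HN: I built an LLM agent for code review",
    "Postgres 17 performance notes",
    "Gemini 3 Pro solves Advent of Code",
    "Rust borrow checker deep dive",
]
print(filter_front_page(titles))
# → ['Postgres 17 performance notes', 'Rust borrow checker deep dive']
```

A plain keyword match like this has obvious false negatives ("foundation models", "agents"), which is presumably where the ironic AI-powered version would come in.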
s1mplicissimus 1 day ago [-]
One could argue that pointing out the pointlessness of LLM hype is in fact useful, while producing that same hype is not.
DangitBobby 1 day ago [-]
You are conflating "hype" with any positive outlook. It has some uses and some people are using it. That's not "hype". It is exhausting to see it everywhere so I sympathize.