The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my first knee-jerk reaction wouldn't be "this is for pretending to be human" given that the file is largely about hiding Anthropic internal information such as code names. I encourage looking at the source itself in order to draw your conclusions, it's very short: https://github.com/alex000kim/claude-code/blob/main/src/util...
christinetyip 24 minutes ago [-]
Not leaking codenames is one thing, but explicitly removing signals that something is AI-generated feels like a pretty meaningful shift.
eli 9 minutes ago [-]
Doesn't seem so crazy if the point is to avoid leaking new features, models, codenames, etc.
wnevets 37 minutes ago [-]
BAD (never write these):
- "Fix bug found while testing with Claude Capybara"
- "1-shotted by claude-opus-4-6"
- "Generated with Claude Code"
- "Co-Authored-By: Claude Opus 4.6 <…>"
This makes their intent with "UNDERCOVER" make sense to me.
dkenyser 1 hour ago [-]
> my first knee-jerk reaction wouldn't be "this is for pretending to be human"...
"Write commit messages as a human developer would — describe only what the code
change does."
LeifCarrotson 5 minutes ago [-]
The human developer would just write what the code does, because the commit also contains an email address that identifies who wrote the commit. There's no reason to write:
> Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:
> Fixed the foobar bug by adding a baz flag - dkenyser
Because it already identified you in the commit description. The reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.
amarant 55 minutes ago [-]
That seems desirable? Like that's what commit messages are for. Describing the change. Much rather that than the m$ way of putting ads in commit messages
evenhash 20 minutes ago [-]
Unfortunately GitHub Copilot’s commit message generation feature is very human. It’s picked up some awful habits from lazy human devs. I almost always get some pointless “… to improve clarity” or “… for enhanced usability” at the end of the message.
VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.
Ultimately I stopped using it, because editing the messages cost me more time than it saved.
/rant
fweimer 36 minutes ago [-]
The commit message should complement the code. Ideally, what the code does should not need a separate description, but of course there can be exceptions. Usually, it's more interesting to capture in the commit message what is not in the code: the reason why this approach was chosen and not some other obvious one. Or describe what is missing, and why it isn't needed.
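For example (made up), the kind of message that captures the why rather than restating the diff:
> Use a flat array instead of a hash map for the symbol table
> Profiling showed the table rarely exceeds 30 entries, so a linear scan beats the hashing overhead. A sorted structure was considered and rejected because insertion order matters for error reporting.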
somat 11 minutes ago [-]
It sounds like if you are vibe-coding, that is, can't even be arsed to write a simple commit message, your commit message should be your prompt.
ImPostingOnHN 30 minutes ago [-]
That sounds like design discussions best had in the issue/ticket itself, before you even start writing code. Then the commit message references the ticket and has a brief summary of the changes.
Writing and reading paragraphs of design discussion in a commit message is not something that seems common.
skydhash 24 minutes ago [-]
Not really about design, but the technical reasons why this solution came to be when it's not that obvious. It's not often needed. And when it is, it usually fits in a short paragraph.
giancarlostoro 31 minutes ago [-]
As opposed to outputting debugging information. I wouldn't be surprised if LLMs output "debug" blurbs that could include model-specific information.
peacebeard 1 hour ago [-]
~That line isn't in the file I linked, care to share the context? Seems pretty innocuous on its own.~
[edit] Never mind, find in page fail on my end.
stordoff 1 hour ago [-]
It's on lines 56-57.
peacebeard 1 hour ago [-]
Thanks! I must have had a typo when I searched the page.
andoando 1 hour ago [-]
I think the motivation is to let developers use it for work without making it obvious they're using AI
ryandrake 58 minutes ago [-]
Which is funny given how many workplaces are requiring developers use AI, measuring their usage, and stack ranking them by how many tokens they burn. What I want is something that I can run my human-created work product through to fool my employer and its AI bean counters into thinking I used AI to make it.
zos_kia 8 minutes ago [-]
I guess you could just code and have it author only the commit message
__blockcipher__ 1 hour ago [-]
Undercover mode seems like a way to make contributions to OSS when they detect issues, without accidentally leaking that it was claude-mythos-gigabrain-100000B that figured out the issue
stavros 1 hour ago [-]
What does non-undercover do? Where does CC leave metadata mainly? I haven't noticed anything.
sprobertson 39 minutes ago [-]
it likes mentioning itself in commit messages, though you can just tell it not to.
Ah, thanks, it hasn't done it for mine so I was wondering if there's something lower-level somehow.
mzajc 1 hour ago [-]
There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:
NEVER include in commit messages or PR descriptions:
- The phrase "Claude Code" or any mention that you are an AI
- Co-Authored-By lines or any other attribution
BAD (never write these):
- 1-shotted by claude-opus-4-6
- Generated with Claude Code
- Co-Authored-By: Claude Opus 4.6 <…>
This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.
[0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...
I would have expected people (maybe a small minority, but that includes myself) to have already instructed Claude to do this. It’s a trivial instruction to add to your CLAUDE.md file.
arcanemachiner 32 minutes ago [-]
It's a config setting (probably the same end result though):
You can already turn off "Co-Authored-By" via Claude Code config. This is what their docs show:
https://code.claude.com/docs/en/settings#attribution-setting...
~/.claude/settings.json
{
  "attribution": {
    "commit": "",
    "pr": ""
  }
}
The rest of the prompt is pretty clear that it's talking about internal use.
Claude Code users aren't the ones worried about leaking "internal model codenames" nor "unreleased model opus-4-8" nor Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.
Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.
andoando 1 hour ago [-]
I've seen it say co-authored by Claude Code on my PRs... and I agree, I don't want it to do that
dmd 17 minutes ago [-]
So turn it off.
"includeCoAuthoredBy": false,
in your settings.json.
petcat 1 hour ago [-]
It's less about pretending to be a human and more about not inviting scrutiny and ridicule toward Claude if the code quality is bad. They want the real human to appear to be responsible for accepting Claude's poor output.
otterley 58 minutes ago [-]
That’s ultimately the right answer, isn’t it? Bad code is bad code, whether a human wrote it all, or whether an agent assisted in the endeavor.
Reason077 59 minutes ago [-]
> "Anti-distillation: injecting fake tools to poison copycats"
Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.
WorldPeas 22 minutes ago [-]
More likely, they would parse them out using a simple regex; the whole point is they're there but not used. Distillation is becoming less common now, however.
ripbozo 2 hours ago [-]
I don't understand the part about undercover mode. How is this different from disabling claude attribution in commits (and optionally telling claude to act human?)
On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.
giancarlostoro 1 hour ago [-]
It's people overreacting. The purpose of it is simple: don't leak any codenames, project names, file names, etc. when touching external/public-facing code that you are maintaining using bleeding-edge versions of Claude Code. It does read weird in that they want it to write as if a developer wrote the commit, but that might be to avoid it outputting debug information in a commit message.
ramon156 1 hour ago [-]
Even some of these comments are obviously AI-assisted. I hate that I recognize it.
causal 1 hour ago [-]
I'm amazed at how much of what my past employers would call trade secrets are just being shipped in the source. Including comments that just plainly state the whole business backstory of certain decisions. It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself.
CharlieDigital 1 hour ago [-]
Comments are the ultimate agent coding hack. If you're not using comments, you're doing agent coding wrong.
Why? Agents may or may not read docs. They may or may not use skills or tools. They will always read comments "in the line of sight" of the task.
You get free long term agent memory with zero infrastructure.
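A made-up example of what that looks like in practice (all names invented):
// The constraint travels with the code, so any agent (or colleague)
// editing this function sees it without being pointed at a doc or ticket.
type Money = number;
interface Invoice { id: string; paid: boolean; total: Money; }
interface CreditNote { invoiceId: string; amount: Money; }
// Business rule: invoices are immutable once a payment is applied;
// corrections must be issued as credit notes, never in-place edits.
function correctInvoice(invoice: Invoice, delta: Money): CreditNote {
  if (!invoice.paid) throw new Error("unpaid invoices can be edited directly");
  return { invoiceId: invoice.id, amount: delta };
}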
perching_aix 44 minutes ago [-]
Agents and I apparently have a whole lot in common.
Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.
This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.
prepend 36 minutes ago [-]
Comments are great for developers. I like having as much design in the repo directly. If not in the code, then in a markdown in the repo.
hk__2 19 minutes ago [-]
This is also a great way to ensure the documentation is up to date. It’s easier to fix the comment while you’re in the code just below it than to remember “ah yes I have to update docs/something.md because I modified src/foo/bar.ts”.
CharlieDigital 3 minutes ago [-]
People moving docs out of code are absolutely foolish, because no one is going to remember to update them consistently, but the agent always updates comments in its line of sight.
The agent is not going to know to look for a file to update unless instructed. Now your file is out of sync. Keeping everything in code comments, in the line of sight, makes it easy and foolproof.
JambalayaJimbo 58 minutes ago [-]
I guess they weren't expecting a leak of the source code?
It's very handy to have as much as possible available in the codebase itself.
pixl97 1 hours ago [-]
Project trackers come and go, but code is forever, hopefully?
treexs 58 minutes ago [-]
Well yeah, since they tell Claude Code the business decisions and it creates the comments
evil-olive 23 minutes ago [-]
> So I spent my morning reading through the HN comments and leaked source.
> This was one of the first things people noticed in the HN thread.
> The obvious concern, raised repeatedly in the HN thread
> This was the most-discussed finding in the HN thread.
> Several people in the HN thread flagged this
> Some in the HN thread downplayed the leak
when the original HN post is already at the top of the front page...why do we need a separate blogpost that just summarizes the comments?
groby_b 20 minutes ago [-]
Because the original post was noisy and lacked a concise summary of findings.
Or, more simply: Because folks wanted it enough to upvote it.
simianwords 1 hour ago [-]
> The multi-agent coordinator mode in coordinatorMode.ts is also worth a look. The whole orchestration algorithm is a prompt, not code.
So much for langchain and langgraph!! I mean, if Anthropic themselves aren't using it and are using a prompt instead, then what's the big deal about langchain?
ossa-ma 46 minutes ago [-]
Langchain is for model-agnostic composition. Claude Code only uses one interface to hoist its own models, so there's zero need for an abstraction layer.
Langgraph is for multi-agent orchestration as state graphs. This isn't useful for Claude Code, as there is no multi-agent chaining. It uses a single coordinator agent that spawns subagents on demand. Basically too dynamic to constrain to state graphs.
simianwords 40 minutes ago [-]
You may have a point but to drive it further, can you give an example of a thing I can do with langgraph that I can't do with Claude Code?
ossa-ma 14 minutes ago [-]
I'm not a supporter of blindly adopting the "langs", but langgraph is useful for deterministically reproducible orchestration. Let's say you have a particular data flow that takes an email, sends it through an agent for keyword analysis, then another agent for embedding, then splits to two agents for sentiment analysis and translation - that's where you'd use langgraph in your service. Claude Code is a consumer tool, not production.
simianwords 5 minutes ago [-]
I see what you mean. Maybe in cases where the steps are deterministic, it might be worth moving the coordination to the code layer instead of the AI layer.
What's the value add over doing it with just Python code? I mean, you can represent any logic in terms of graphs and states.
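E.g. the email flow described above, as plain code (TypeScript here; every name is made up for illustration):
// Sketch: ossa-ma's email pipeline without a graph framework.
// Each Agent is a stand-in for an LLM call.
type Agent = (input: string) => Promise<string>;
async function processEmail(
  email: string,
  agents: { keyword: Agent; embed: Agent; sentiment: Agent; translate: Agent },
) {
  const keywords = await agents.keyword(email);
  const embedding = await agents.embed(keywords);
  // the split to two agents is just Promise.all
  const [sentiment, translation] = await Promise.all([
    agents.sentiment(email),
    agents.translate(email),
  ]);
  return { keywords, embedding, sentiment, translation };
}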
peab 26 minutes ago [-]
nobody serious uses langchain. The biggest agent products are coding tools, and I doubt any of them use langchain
rolymath 1 hour ago [-]
You didn't even use it yet.
space_fountain 1 hour ago [-]
I've tried to use langchain. It seemed to force code into their way of doing things and was deeply opinionated about things that didn't matter, like prompt templating. Maybe it's improved since then, but I've sort of used people who think langchain is good as a proxy for people who haven't used much AI.
simianwords 1 hour ago [-]
?
fatcullen 23 minutes ago [-]
The buddy feature the article mentions is planned for release tomorrow, as a sort of April Fools easter egg. It'll roll out gradually over the day for "sustained Twitter buzz" according to the source.
The pet you get is generated based off your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. Threw together a little app to help, not to brag but I got a legendary ghost https://claudebuddychecker.netlify.app/
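The general shape is just a stable hash of your UUID into a pet table, something like this (illustrative only, not the actual leaked code):
// Illustrative: deterministic UUID -> pet mapping.
const PETS = ["capybara", "cactus", "ghost"]; // real table is longer
function petForUuid(uuid: string): string {
  let h = 0;
  for (const ch of uuid) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return PETS[h % PETS.length];
}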
sync 13 minutes ago [-]
Cute! Cactus for me. Nice animations too - looks like there were multiple of us asking Claude to reverse engineer the system. I did a slightly deeper dive here if you're interested, plus you can see all the options available: https://variety.is/posts/claude-code-buddies/
(I didn't think to include a UUID checker though - nice touch)
I'd argue that in this case, it isn't. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970
Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).
ArvinJA 27 minutes ago [-]
Is this really the use-case? I imagine the regex is good for a dashboard. You can collect matches per 1000 prompts or something like that, and see if the number grows or declines over time. If you miss some negative sentiment it shouldn't matter unless the use of that specific word doesn't correlate over time with other negative words and is also popular enough to have an impact on the metric.
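Something like this sketch (word list made up) would be all the dashboard needs:
// Made-up sketch: matches per 1000 prompts, where the trend matters
// more than per-prompt accuracy.
const NEGATIVE = /\b(wtf|garbage|useless|broken)\b/i;
function frustrationPerThousand(prompts: string[]): number {
  if (prompts.length === 0) return 0;
  const hits = prompts.filter((p) => NEGATIVE.test(p)).length;
  return (hits / prompts.length) * 1000;
}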
internetter 21 minutes ago [-]
When you read the code, what you propose is actually its exclusive use... logging.
Why didn't they open the source themselves? What's the point of all this secrecy anyway?
hxugufjfjf 9 minutes ago [-]
Because they (apparently) keep a bunch of secret features and roadmap details in said source code.
stavros 1 hour ago [-]
Can someone clarify how the signing can't be spoofed (or can it)? If we have the source, can't we just use the key to now sign requests from other clients and pretend they're coming from CC itself?
MadsRC 41 minutes ago [-]
What signing?
Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?
That’s already possible, nothing prevents you from doing it.
They are detecting it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.
At least that's how things worked last week xD
https://alex000kim.com/posts/2026-03-31-claude-code-source-l...
Ah, it seems that Bun itself signs the code. I don't understand how this can't be spoofed.
MadsRC 14 minutes ago [-]
Ah yes, the API will accept requests that don't include the client attestation (or the fingerprint from src/utils/fingerprint.ts). At least it did a couple of weeks back.
They are most likely using these as post-fact indicators and have automation that kicks in after a threshold is reached.
Now that the indicators have leaked, they will most likely be rotated.
armanj 31 minutes ago [-]
> Anti-distillation: injecting fake tools to poison copycats
Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Has anyone seen fake tool calls working with this model?
> The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human.
I don’t get it. What does this mean? I can use Claude Code now without anyone knowing it is Claude Code.
it's written to _actively_ avoid any signs of AI generated code when "in a PUBLIC/OPEN-SOURCE repository".
Also, it's not about you. Undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds).
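i.e. roughly this shape (an illustrative sketch; only the USER_TYPE === 'ant' comparison and the public-repo condition come from the leak):
// Sketch of the gate as described, not the actual source.
declare const USER_TYPE: string; // baked into internal builds at build time
function undercoverModeEnabled(repoIsPublic: boolean): boolean {
  return USER_TYPE === "ant" && repoIsPublic;
}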
simianwords 2 hours ago [-]
I don’t know what you mean. It just tells it not to use internal code names.
robflynn 1 hour ago [-]
It also says don't announce that you are an AI in any way, including telling it not to say "Co-authored by Claude". I read the file myself.
I'm still inclined to think people might be overreacting to that bit since it seems to be for anthropic-only to prevent leaking internal info.
But I did read the prompt and it did say hide the fact that you are AI.
simianwords 1 hour ago [-]
Why does that matter though
giancarlostoro 1 hour ago [-]
I agree with you, I think people are overthinking this.
slopinthebag 2 hours ago [-]
I think it means OSS projects should start unilaterally banning submissions from people working for Anthropic.
simianwords 2 hours ago [-]
Why? What does this have to do with the leak
amelius 23 minutes ago [-]
A few weeks ago I was using Opus and Sonnet in OpenCode. Is this not possible anymore?
alasano 13 minutes ago [-]
It's still possible but if you do it using your Claude Max plan, it's technically no longer allowed.
They don't want you using your subscription outside of Claude Code. Only API key usage is allowed.
Google also doubled down on this and OpenAI are the only ones who explicitly allow you to do it.
seanwilson 2 hours ago [-]
Anyone else have CI checks that verify source map files are missing from the build folder? Another trick is to grep the build folder for several function/variable names that you expect to be minified away.
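A minimal version of both checks might look like this (TypeScript; the build dir and identifier list are placeholders for your own):
// Fails CI if source maps shipped or known-internal names survived minification.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";
const BUILD_DIR = "dist";
const MUST_NOT_APPEAR = ["internalDebugDump", "SECRET_FEATURE_FLAG"];
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const p = join(dir, name);
    return statSync(p).isDirectory() ? walk(p) : [p];
  });
}
for (const file of walk(BUILD_DIR)) {
  if (file.endsWith(".map")) throw new Error(`source map shipped: ${file}`);
  const text = readFileSync(file, "utf8");
  for (const name of MUST_NOT_APPEAR) {
    if (text.includes(name)) throw new Error(`${name} survived minification in ${file}`);
  }
}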
motbus3 1 hour ago [-]
I am curious about these fake tools.
They would need to lie about consuming the tokens at one point to use them in another, so that the token counting stayed precise.
But that does not make sense, because if someone counted the tokens by capturing the session, it would certainly not match what was charged.
Unless they charge for the fake tools anyway, so you never know they were there.
viccis 31 minutes ago [-]
>This was the most-discussed finding in the HN thread. The general reaction: an LLM company using regexes for sentiment analysis is peak irony.
>Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.
I'm reading an LLM written write up on an LLM tool that just summarizes HN comments.
I'm so tired man, what the hell are we doing here.
marcd35 34 minutes ago [-]
> 250,000 wasted API calls per day
How much approximate savings would this actually be?
dangus 16 minutes ago [-]
Something I’ve been thinking about, somewhat related but also tangential to this topic:
The more code gets generated by AI, won’t that mean taking source code from a company becomes legal? Isn’t it true that works created with generative AI can’t be copyrighted?
I wonder if large companies have thought of this risk. Once a company’s product source code reaches a certain percentage of AI generation, it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?
simianwords 2 hours ago [-]
Guys I’m somewhat suspicious of all the leaks from Anthropic and think it may be intentional. Remember the leaked blog about Mythos?
Analemma_ 55 minutes ago [-]
It's possible, but Anthropic employees regularly boast (!) that Claude Code is itself almost entirely vibe-coded (which certainly seems true, based on the generally low quality of the code in this leak), so it wouldn't at all surprise me to have that blow up twice in the same week. It'll probably happen with accelerating frequency as the codebase gets more and more unmanageable.
__blockcipher__ 1 hour ago [-]
I'm normally suspicious but honestly they've been so massively supply-constrained that I don't think it really benefits them much. They're not worried about getting enough demand for the new models; they're worrying about keeping up with it.
Granted, there's a small counterargument for mythos which is that it's probably going to be API-only not subscription
simianwords 1 hour ago [-]
Why would Claude Code mention Mythos then
hxugufjfjf 13 minutes ago [-]
You can still use Claude Code with API-only.
drewnick 24 minutes ago [-]
You can use Claude Code with API mode (not a sub)
simianwords 9 minutes ago [-]
Fair, but I'm guessing access would be limited to 20x Max users or something like that, not gated by API.
saadn92 1 hour ago [-]
The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, model codenames - those are product strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.
Undercover mode is the most concerning part here tbh.
anonymoushn 2 hours ago [-]
why
AnimalMuppet 1 hour ago [-]
Well, as a general rule, I don't do business with people who lie to me.
You've got a business, and you sent me junk mail, but you made it look like some official government thing to get me to open it? I'm done, just because you lied on the envelope. I don't care how badly I need your service. There's a dozen other places that can provide it; I'll pick one of them rather than you, because you've shown yourself to be dishonest right out of the gate.
Same thing with an AI (or a business that creates an AI). You're willing to lie about who you are (or have your tool do so)? What else are you willing to lie to me about? I don't have time in my life for that. I'm out right here.
otterley 55 minutes ago [-]
Out of curiosity, given two code submissions that are completely identical—one written solely by a human and one assisted by AI—why should its provenance make any difference to you? Is it like fine art, where it’s important that Picasso’s hand drew it? Or is it like an instruction manual, where the author is unimportant?
Similarly, would you consider it to be dishonest if my human colleague reviewed and made changes to my code, but I didn’t explicitly credit them?
feature20260213 17 minutes ago [-]
Yes, because you can be sued for copyright violation if you don't know the origin of one, but not the other.
AnimalMuppet 45 minutes ago [-]
Why does the provenance make any difference? Let me increase your options. Option 1: You completely hand-wrote it. Option 2: You were assisted by an AI, but you carefully reviewed it. Option 3: You were assisted by an AI (or the AI wrote the whole thing), and you just said, "looks good, YOLO".
Even if the code is line-for-line identical, the difference is in how much trust I am willing to give the code. If I have to work in the neighborhood of that code, I need to know what degree of skepticism I should be viewing it with.
otterley 37 minutes ago [-]
That's the thing. As someone evaluating pull requests, should you trust the code based on its provenance, or should you trust it based on its content? Automated testing can validate code, but it can't validate people.
ISTM the most efficient and objective solution is to invest in AI more on both sides of the fence.
simianwords 1 hour ago [-]
What’s the lie? It’s just asking to not reveal internal names
BoredPositron 42 minutes ago [-]
You are spamming the whole fucking thread with the same nonsense. It is instructed to hide that the PR was made via Claude Code. I don't know why people who are so AI forward like yourself have such a problem with telling people that they use AI for coding/writing, it's a weirdly insecure look.
simianwords 38 minutes ago [-]
I can do that right now with Claude Code without this undercover mode.. In fact I do it many times at work. What's the big deal in this?
Do you not think it is an overreaction to panic like this if I can do exactly what the undercover mode does by simply asking Claude?
BoredPositron 20 minutes ago [-]
It's different if it's an institutional decision rather than a personal one, like in your case. Which is, and I am repeating myself here, borderline insecure.
simianwords 18 minutes ago [-]
what's insecure about it? if it is up to the institution to make that decision - you can still do it. Claude is not stopping you from making that decision
BoredPositron 11 minutes ago [-]
You have to work on your reading comprehension or you are intentionally deceptive. Bye.