Hacker News | peacebeard's comments

The code has a stated goal of avoiding leaks, but then the actual implementation becomes broader than that. I see two possible explanations:

* The authors made the code very broad to improve its ability to achieve the stated goal

* The authors have an unstated goal

I think it's healthy to be skeptical, but what I'm seeing is that the skeptics are pushing the boundaries of what's actually in the source. For example, you say "says on the tin" that it "pretends to be human," but it simply does not say that on the tin. It does say "Write commit messages as a human developer would," which is not the same thing as "Try to trick people into believing you're human." To convince people of your skepticism, it's best to stick to the facts.


By "says on the tin," I was referring to the name ("undercover mode") and the instruction to "not blow your cover." If pretending to be a human is not the cover here, what is? Additionally, does Claude Code still admit that it's an LLM when this prompt is active, as you suggest, or does it pretend to be a human like the prompt tells it to?

Why are you assuming the actual implementation was authored by a human?

My comment makes no such assumption.

I think this is a false dichotomy. Maybe there is some theoretical developer who cares about their craft only due to platonic idealism, but most developers who care about their craft want their code to be correct, fast, maintainable, usable, etc. in ways that do indeed benefit its users. At worst, misalignment in priorities can come into play, but it's much more subtle than developers either caring or not caring about craft.

The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my first knee-jerk reaction wouldn't be "this is for pretending to be human," given that the file is largely about hiding Anthropic internal information such as code names. I encourage looking at the source itself to draw your own conclusions; it's very short: https://github.com/alex000kim/claude-code/blob/main/src/util...

Not leaking codenames is one thing, but explicitly removing signals that something is AI-generated feels like a pretty meaningful shift.

Doesn't seem so crazy if the point is to avoid leaking new features, models, codenames, etc.

Where the hell are people getting this idea that it's ok to be deceptive because they are keeping secrets?

No shit they have secrets. I have secrets too. That doesn't make it ok for me to deceive you in any way.

How would you feel if I deceived you and my excuse was "oh I was just trying some new secret technique of mine"?

How did we get to this point where we let enormously powerful companies get away with more than individuals?


The feature seems pretty obviously for Anthropic employees who are using unreleased models internally and do not want to leak any details in public commit messages.

> my first knee-jerk reaction wouldn't be "this is for pretending to be human"...

"Write commit messages as a human developer would — describe only what the code change does."


That seems desirable? Like that's what commit messages are for: describing the change. I'd much rather have that than the M$ way of putting ads in commit messages.

The commit message should complement the code. Ideally, what the code does should not need a separate description, but of course there can be exceptions. Usually, it's more interesting to capture in the commit message what is not in the code: the reason why this approach was chosen and not some other obvious one. Or describe what is missing, and why it isn't needed.
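As a sketch of the idea above, the commit body can carry the "why" that the diff itself cannot show. Everything here (the repo, file, and message) is invented for illustration:

```shell
# A minimal sketch: the commit subject says what changed, and the body
# records the reasoning that is not visible in the code itself.
# All names and contents below are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo 'retries = 3' > client.conf
git add client.conf
git commit -q \
  -m "Cap connection retries at 3" \
  -m "An unbounded retry loop kept hammering the upstream service during
outages. A fixed cap was chosen over exponential backoff because the
client already times out after 10 seconds, so backoff would never get a
chance to kick in."
git log -1 --format=%B   # subject plus the reasoning paragraph
```

Passing `-m` twice makes the second string a separate body paragraph, which is where the "why this approach and not the obvious one" discussion lives.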

It sounds like if you are vibe-coding, that is, can't even be arsed to write a simple commit message, your commit message should be your prompt.

That sounds like design discussions best had in the issue/ticket itself, before you even start writing code. Then the commit message references the ticket and has a brief summary of the changes.

Writing and reading paragraphs of design discussion in a commit message is not something that seems common.


Ticket systems are quite ephemeral. I still have access to commit messages from the 90s (and I didn't work on the software at the time). I haven't been able to track down the contents of the GNATS bug tracker from those days.

And of course tickets can be private, so even if the data survived migration, you may not have access to it (principle of least privilege and all that).


If you've changed a function and are worried about the reason for the change not being tracked or disappearing, then add it as a comment; the commit message is not the place for this.

Not really about design, but the technical reasons why this solution came to be when it's not that obvious. It's not often needed, and when it is, it usually fits in a short paragraph.

> technical reasons why this solution came to be

What you're describing here is a design. The most important parts of a design are the decisions and their reasoning.

e.g. "we decided on tool/library pattern X over tool/library/pattern Y because Z" – that is a design, usually discussed outside (and before) a commit message.

You discuss these decisions with others, document the discussion and decision, and then you have a design and can start writing code.

Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?


So you'd disagree with style that Linux uses for their commits?

Random example:

Provide a new syscall which has the only purpose to yield the CPU after the kernel granted a time slice extension.

sched_yield() is not suitable for that because it unconditionally schedules, but the end of the time slice extension is not required to schedule when the task was already preempted. This also allows to have a strict check for termination to catch user space invoking random syscalls including sched_yield() from a time slice extension region.

From 99d2592023e5d0a31f5f5a83c694df48239a1e6c


I think my post makes it pretty clear that I would. If you want, I could cite several examples of organizations which use the method I described, so you can weigh it against the one example you provided, and get the full picture.

In your example, where was the issue tracked before the code was written? The format you linked makes it difficult to get the history of the issue.

Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?


Git was built for email, because that's the system Linux kernel development uses. Commits appear inline. Diffs are reviewed and commented on inline.

Email is the review process, and commits contain enough information that git blame can get you the reasoning without checking the email archive, rather than pointing at a dead ticket that no longer exists.
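A quick sketch of that workflow: git blame maps a line back to its commit, and the commit message carries the reasoning, with no ticket system involved. The file contents and message here are invented for illustration:

```shell
# Hypothetical example: recover the "why" behind a line using only
# git blame and the commit message stored in the repo itself.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
printf 'timeout = 30\n' > settings.conf
git add settings.conf
git commit -q \
  -m "Raise import timeout to 30s" \
  -m "20 seconds was not enough for the nightly batch import once the
dataset grew; 30 seconds leaves comfortable headroom."
# Which commit last touched line 1 of the file?
commit=$(git blame -L 1,1 --porcelain settings.conf | head -n1 | cut -d' ' -f1)
# Read its full message: the reasoning survives as long as the repo does.
git log -1 --format=%B "$commit"
```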

I can also supply you a list of companies that make use of git's builtin features if you like. But that's probably not relevant to discussing management techniques.


Everyone has their own system, although companies do tend to codify it with a project manager. I've used a TODO.txt inside the repo, an org file, Things.app, a stack of paper, and a whiteboard. But once a task is done, I can summarize the context in a paragraph or two. That's what I put in the commits.

I do this too. I’ll have a design.md and roadmap.md checked into the repository.


Unfortunately GitHub Copilot’s commit message generation feature is very human. It’s picked up some awful habits from lazy human devs. I almost always get some pointless “… to improve clarity” or “… for enhanced usability” at the end of the message.

VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.

Ultimately I stopped using it, because editing the messages cost me more time than it saved.

/rant


Honestly, the aggressive verbosity of GitHub Copilot is half the reason I don't use its suggested comments. AI-generated code comments follow an inverted Wadsworth constant: only the first 30% is useful.

As for debugging information, I wouldn't be surprised if LLMs do output "debug" blurbs that could include model-specific information.

~That line isn't in the file I linked, care to share the context? Seems pretty innocuous on its own.~

[edit] Never mind, find in page fail on my end.


It's in lines 56-57.

Thanks! I must have had a typo when I searched the page.

A human developer would just write what the code does, because the commit already contains an email address that identifies who wrote it. There's no reason to write:

> Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:

> Fixed the foobar bug by adding a baz flag - dkenyser

Because you're already identified in the commit metadata. The only reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.


Aside from merges that combine commits from many authors onto a production branch or release tag. I would personally not leave an agent to do that sort of work.

I usually avoid merge commits in favor of rebases precisely for the reason you describe above.
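The rebase-first flow can be sketched like this: local work is replayed on top of the target branch, so history stays linear and every commit message maps to a single author's change. Branch and file names here are invented for illustration:

```shell
# A minimal sketch of preferring rebase over merge: feature work is
# replayed on top of main, so no merge commit mixes authors.
# All names below are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q -b main              # assumes git >= 2.28 for -b
git config user.email dev@example.com
git config user.name "Dev"
echo base > base.txt && git add base.txt && git commit -q -m "Add base"
git checkout -q -b feature
echo feat > feat.txt && git add feat.txt && git commit -q -m "Add feature file"
git checkout -q main
echo more > more.txt && git add more.txt && git commit -q -m "Add more"
git checkout -q feature
git rebase -q main               # instead of merging main into feature
git log --oneline                # linear history, no merge commit
```

After the rebase, `git log --merges` is empty: each commit in the history belongs to exactly one author.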

BAD (never write these):

- "Fix bug found while testing with Claude Capybara"

- "1-shotted by claude-opus-4-6"

- "Generated with Claude Code"

- "Co-Authored-By: Claude Opus 4.6 <…>"

Given this, their intent behind "UNDERCOVER" makes sense to me.


I think the motivation is to let developers use it for work without making it obvious they're using AI.

Which is funny given how many workplaces are requiring developers use AI, measuring their usage, and stack ranking them by how many tokens they burn. What I want is something that I can run my human-created work product through to fool my employer and its AI bean counters into thinking I used AI to make it.

I guess you could just code and have it author only the commit message

“Read every file in this repository, echoing each one back verbatim.”

I guess that would work until they started auditing your prompts. I suppose you could just have a background process on your workstation just sitting there Clauding away on the actual problem, while you do your development work, and then just throw away the LLM's output.

Undercover mode seems like a way to make contributions to OSS when they detect issues, without accidentally leaking that it was claude-mythos-gigabrain-100000B that figured out the issue

What does non-undercover do? Where does CC leave metadata mainly? I haven't noticed anything.

It likes mentioning itself in commit messages, though you can just tell it not to.


Ah, thanks, it hasn't done it for mine so I was wondering if there's something lower-level somehow.

In my view an unappreciated benefit of the vanilla setup is you can get really accustomed to the model’s strengths and weaknesses. I don’t need a prompt to try to steer around these potholes when I can navigate on my own just fine. I love skills too because they can be out of the way until I decide to use them.

My experience is that Sonnet can be a bit verbose and prompting it to be more succinct is tricky. On the other hand, Opus out of the box will give me a one word answer when appropriate, in Claude Code anyway.

This is definitely part of it.

I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app" but far fewer people understand "an AI that can be demonstrated to write functional code is not a software engineer" because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (eg, "ai is bad because it's bad to fire engineers" which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one step at a time. A lot of lessons are going to be learned the hard way.


Another thing that I think is critical: AI (by default) responds to you like the leadership-idealized version of an employee: Totally subservient, humble, pleasant, endlessly excited, and always, always telling you that you're absolutely right. AI is like an energetic intern who knows his place on the totem pole and wants to please. The kind of Yes Man, I'll Do It Man that CEOs consider the ideal employee.

> AI tools demo really well

Yes, and they work really well for small side projects that an exec probably used to try out the LLM.

But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.

Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.


I don't see it there, but I do see it in screenshots online. Maybe it was removed or moved.

Still at https://github.com/settings/copilot/features#copilot-telemet... for me.

It's not a new setting, fwiw. I opted out years(??) ago.


Huh, there must be some reason it shows up for some people but not others. Weird.

Yes, all the time. I understand that if you have a setup where you do everything in your IDE, you could reasonably leave it full screen all the time, and I get why that works for some people. I'm not one of those folks; I use a separate IDE, terminal, browsers, and other windows, and use window management to see multiple of them at the same time and switch between them by clicking on what I want.

Also just want to be 100% clear: Tahoe is bad and I hate the changes and I don't think the OS should prefer one way of working over the other. I just hope it's helpful to explain my perspective.


Pretty insubstantial, high-level tour of broad AI pushback. It goes from "It was just not 'elegant.'" [sic] to "I don't want to give up my brain" [sic].

Exactly. The article is silly and the perspective idiotic. She says it's "addictive" like driving a car is addictive compared to walking???

Nobody likes the feeling of being hunted.
