The claim is astonishing, given emacs' decades of continuous use and open-source scrutiny. Edit: it turns out to be a problem with git, not emacs.
OTOH it’s really just the core that has been used so widely and so continuously for so long. This integration with git will have been scrutinized far less.
As an emacs user I frequently find myself in territory where I’m seemingly the only person in the world with my use case. In fact that’s half of the value: I can make emacs do whatever I want. Which means there’s security consistent with a bus factor of 1.
Those kinds of bugs exist because no one is accountable for quality the way they are in other industries, except in high-integrity computing, or under the cybersecurity laws that are finally coming into force across several countries.
Experience doesn’t leave me with any confidence that the long term memory will be useful for long. Our agentic code bases are a few months old, wait a few years for those comments to get out of date and then see how much it helps.
The great thing about agentic coding is you can define one whose entire role is to read a diff, look in contextual files for comments, and verify whether they’re still accurate.
You don’t have to rely on humans doing it. The agent’s entire existence is built around doing this one mundane task that is annoying but super useful.
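A minimal sketch of what such an agent's harness might look like. The model call is a hypothetical `llm_complete` callable (prompt in, text out); everything here is illustrative, not the parent's actual workflow:

```python
import subprocess

def build_staleness_prompt(diff: str) -> str:
    # Ask the model to audit comments against the change, nothing else.
    return (
        "Below is a git diff. For each comment in or near the changed "
        "code, state whether the change makes the comment inaccurate, "
        "and why.\n\n" + diff
    )

def check_comment_staleness(llm_complete, revision: str = "HEAD~1") -> str:
    # `llm_complete` is a hypothetical callable wrapping whatever model
    # API you use; swap in your provider's client here.
    diff = subprocess.run(
        ["git", "diff", revision],
        capture_output=True, text=True, check=True,
    ).stdout
    return llm_complete(build_staleness_prompt(diff))
```

Run on every PR, the agent only ever sees the diff plus nearby comments, which keeps the task narrow enough to be reliable.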
Yes, let's blow another 5-10k per project per month on tokens to keep the comments up to date. The fact that AI still cannot consistently refactor without leaving dead code around, even after a self-review, does not give me confidence in its comments…
Comments in code are often a code smell. That's an industry standard for a reason, and it isn't because of staleness. If you are writing a comment, it means the code is bad or there is irreducible complexity. It is about good design. Comments everywhere are almost always a red flag.
Or how would you name methods and variables to explain why some payment reconciliation process skips matching for transactions under 0.50 EUR and just auto-approves them? The reason: the external payment processor rounds differently than the internal ledger at sub-euro amounts, creating mismatches that were flooding the finance team's exception queue in 2013; this is explained further under Jira issue ZXSV-12456, and more details are known by j.doe@myorg.com. The threshold was chosen after analyzing six months of false positives; any higher and someone being undercharged doesn't get caught. I don't think autoApproveThreshold = 0.50 or anything like that would get the full context across, even if the rules themselves are all code.
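To make the contrast concrete, here's a sketch of the comment version (the constant name and helper are hypothetical; all the details are just the scenario above):

```python
from decimal import Decimal

# Auto-approve reconciliation mismatches below this threshold: the external
# payment processor rounds sub-euro amounts differently than our internal
# ledger, which flooded the finance team's exception queue with false
# positives in 2013. 0.50 EUR was chosen after analyzing six months of
# false positives; any higher and genuine undercharges slip through.
# History: Jira ZXSV-12456; context: j.doe@myorg.com.
AUTO_APPROVE_THRESHOLD_EUR = Decimal("0.50")

def needs_manual_review(mismatch_eur: Decimal) -> bool:
    # Mismatches at or above the threshold still go to the exception queue.
    return abs(mismatch_eur) >= AUTO_APPROVE_THRESHOLD_EUR
```

The identifier carries the rule; only the comment carries the why, the history, and the contact.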
I think surely you can have both! Code should explain itself as often as possible, but when you hit a wall, because some counter-intuitive workaround is needed, or there are business rules or external considerations you need to keep track of, then comments also make sense. That's better than just putting everything in a Jira issue somewhere: it often won't be read by you or others, and it almost certainly will not be read by any AI agents (unless you have an MCP server or something, which is probably uncommon). It also beats spending hours trying to get the code to explain something it will never explain well. I've had people ask me about things that are covered in the README.md instead of reading it.
You’ve correctly identified that naming isn’t sufficient for all communication. Name the things that stay constant in the code and explain the things that vary with a particular implementation in version control messages. Version control as a medium communicates what context the message was written for, which is far more appropriate than comments.
> Name the things that stay constant in the code and explain the things that vary with a particular implementation in version control messages.
Then the question becomes how often we look in the version control history for the files that we want to touch.
Which of these is more likely:
A) someone digging into the full history of autoApproveThreshold and finding out that they need to contact j.doe@myorg.com or reference ZXSV-12456
B) or them just messing up the implementation because they didn't review the history of every file they touch
If someone is doing a refactor of 20 files, they probably won't review the histories of all of them, especially if the implementation is spread across several years, doubly so if there are a bunch of "fixes" commit messages in the middle, merge commits, and so on. I've seen people miss details that are in the commit log many, many times, to the point where I pretty much always reach for comments. The same goes for various AI tools and agents.
Furthermore, if you want to publish a bit of code somewhere (e.g. a Teams/Slack channel, or a blog), you'd need to go out of your way to pull in the relevant history and awkwardly copy it in, since you won't always be giving other people a Git repo to play around with.
It's not that I don't see your point, it's just that from where I stand with those assumptions a lot of people are using version control as a tool wrong and this approach neither works now, nor will work well for them in the future.
It's more or less the same issue as with docs on some wiki site or even in a separate Markdown file. The latter is better than nothing, and definitely closer than a wiki, especially if the audience is someone who wants an overview of a particular part of the codebase, or instructions for processes that don't just concern a few files; but it's still far removed from where any actual code changes would be made, which is sometimes a downside of ADRs too.
I wrote this an hour ago and it seems that Claude might not understand it as frustration:
> change the code!!!! The previous comment was NOT ABOUT THE DESCRIPTION!!!!!!! Add to the {implementation}!!!!! This IS controlled BY CODE. *YOU* _MUST_ CHANGE THE CODE!!!!!!!!!!!
That definition moves the goalposts almost by definition, people only stopped thinking that chess demonstrated intelligence when computers started doing it.
The term artificial intelligence has always been a buzzword designed to sell whatever it needed to sell. IMHO, it has no meaningful value beyond being a good marketing term. John McCarthy is usually credited with coining the name, and he admitted in interviews that it was chosen to attract eyeballs for funding.
Something something powers go to definition… is this an implementation of an LSP server? Or a subset of what’s needed to implement LSP? A formerly proprietary alternative to LSP?
In its simplest form, it's just a dump of the code intelligence information from a static copy of the code. This can power an LSP server, but without additional logic it can't handle a project under edit, since locations won't match between the indexed state and the edited state; so it lends itself well to something like Sourcegraph, which already displays a static copy of the codebase.
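A toy illustration of the idea (field names simplified and invented for this sketch; real SCIP is a protobuf schema with richer ranges and symbol roles):

```python
# A "SCIP-like" index: each occurrence records where a symbol appears in a
# *static snapshot* of a file, so lookups are just dictionary scans.
index = {
    "src/app.py": [
        {"range": (10, 4, 10, 11), "symbol": "app.handler().", "definition": True},
        {"range": (42, 8, 42, 15), "symbol": "app.handler().", "definition": False},
    ],
}

def goto_definition(path, line, col):
    # Resolve whatever occurrence sits at (line, col) to its definition site.
    for occ in index.get(path, []):
        start_line, start_col, _end_line, end_col = occ["range"]
        if start_line == line and start_col <= col < end_col:
            target = occ["symbol"]
            for doc, occs in index.items():
                for o in occs:
                    if o["symbol"] == target and o["definition"]:
                        return doc, o["range"][:2]
    return None
```

Because the ranges refer to the indexed snapshot, a single inserted line above line 10 shifts every position and the lookup misses, which is exactly why serving a project under edit needs extra logic on top.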
Uber uses SCIP as part of the LSP implementation for our Java monorepo (pieces of which we've [open-sourced](https://github.com/uber/scip-lsp)).
Standardizing on SCIP has helped us generalize tools so they're independent of the compiler/language ecosystem (e.g. we can do call-stack analysis on any project that exports valid SCIP, do feature-flag cleanup, and find refs/impls across a wider scope than most LSP servers can handle due to memory constraints).
Love the concept that I can glean from the opening paragraph. Very interested. Not going to upgrade to a Medium premium plan to read it.
Unfortunately using Medium seems to be a choice to compete with every other Medium article that month for the limited number of free articles that can be read without paywall.