
Hmm, the whole point of checkpoints seems to be to reduce token waste by saving repeated thinking work. But wouldn't pulling N checkpoints into the context of task N+1 be MUCH more expensive? It's at odds with the current practice of clearing context regularly to save on input tokens. Even subagents (which I think are the real superpower Claude Code has over Gemini CLI for now) by their nature get spawned with fresh, near-empty context.
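
A rough sketch of the arithmetic (every number here is an assumption, not a measurement): if each checkpoint contributes a fixed-size summary to every later task's context, cumulative input tokens grow quadratically with the number of tasks, versus linearly when context is cleared each time:

  # Toy model of cumulative input-token cost. All sizes are hypothetical.
  CHECKPOINT_TOKENS = 2_000  # assumed summary size of one checkpoint
  BASE_CONTEXT = 5_000       # assumed prompt/code context per task

  def total_input_tokens(n_tasks: int, carry_checkpoints: bool) -> int:
      total = 0
      for i in range(n_tasks):
          # Task i drags along all i earlier checkpoints, or none if cleared.
          carried = CHECKPOINT_TOKENS * i if carry_checkpoints else 0
          total += BASE_CONTEXT + carried
      return total

  print(total_input_tokens(50, True))   # 2,700,000
  print(total_input_tokens(50, False))  # 250,000

In this toy model, carrying every checkpoint costs over 10x the input tokens by task 50. That quadratic blow-up is exactly what regular context clearing avoids.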

Token costs aside, fresh context is arguably also better at problem solving. When it was just me coding by hand, I didn't save all my intermediate thinking anywhere: thinking afresh when a similar problem came up later helped me arrive at better solutions. I did occasionally save my thinking in design docs, but the equivalent of that is CLAUDE.md and similar human-reviewed markdown saved at explicit -umm- checkpoints.

Yes, I also don't see how persistent context would help. For one, I don't want to read the slop the AI produced while it wrote the code. It doesn't have intent or thinking; it's a completion machine. The code is the artifact that matters, not the thousands of lines of "You're absolutely right, I was wrong to ..."

Also, sometimes it gets something very wrong. I don't want to poison every subsequent session with the wrong thing it learned. This has been a major issue for me at $WORK with the AGENTS.md files my colleagues write: they make my agent coding much worse, so I often have to delete them manually.
