Hacker News | a_t48's comments

I love that the TEST PROGRAM at the end to verify the computer is working is basically to Scholar's Mate yourself.

This is very cool. Worth a submission by itself.

If anyone knows a similar solution for zstd, I'm very interested. I'm doing streaming decompression to disk, and I'd like to be able to do resumable downloads without _also_ storing the compressed file.

https://github.com/martinellimarco/indexed_zstd

https://github.com/martinellimarco/libzstd-seek

Note, however, that this can only seek to frame boundaries, and zstd still creates files containing a single frame by default. pzstd did create multi-frame files, but it is no longer being developed. Other tools for creating seekable zstd files are zeekstd, t2sz, and zstd-seekable-format-go.
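The seek-index idea behind these tools can be sketched in a few lines. Python's stdlib has no zstd bindings, so this illustration uses independently compressed zlib streams as stand-ins for zstd frames; the structure is the same either way: each chunk is its own complete compressed unit, and an offset index lets a reader decompress only the frame covering the byte range it wants.

```python
import zlib

def compress_chunked(data: bytes, chunk_size: int = 64 * 1024):
    """Compress data as independent streams ("frames") plus a seek index.

    Stand-in for a seekable zstd file: the index maps uncompressed
    offsets to compressed offsets so a reader can start mid-file
    without decompressing everything before it.
    """
    frames = bytearray()
    index = []  # (uncompressed_offset, compressed_offset, compressed_len)
    for off in range(0, len(data), chunk_size):
        frame = zlib.compress(data[off:off + chunk_size])
        index.append((off, len(frames), len(frame)))
        frames += frame
    return bytes(frames), index

def read_at(frames: bytes, index, uncompressed_offset: int) -> bytes:
    """Decompress only the frame containing the requested offset."""
    for u_off, c_off, c_len in reversed(index):
        if u_off <= uncompressed_offset:
            chunk = zlib.decompress(frames[c_off:c_off + c_len])
            return chunk[uncompressed_offset - u_off:]
    raise ValueError("offset out of range")
```

This is also the shape of the resumable-download case: persist the index alongside the partial file, and on resume you only need to re-fetch and decompress from the last complete frame.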


Thanks, this is helpful. I might just end up using content defined chunking in addition/instead, but it's good to know that there is a path forward if I stick with the current architecture.

Only if you want the slap to include a free trip to the hospital.

I've worked directly with "collaborative arms" before. They are supposed to be safe for humans to be around. The dents I put in the side of the arm's casing somewhat said otherwise.


I got exactly this warning message yesterday, saying that it could use up a significant amount of my token budget if I resumed the conversation without compaction.

Compaction won't save you; in fact, I've found that calling compaction eats about 3-5x the cold-cache cost in usage.

Wouldn't it help if the system did compaction before eviction happens? The problem is that Claude probably doesn't want to automatically compact every session that has been left idle for an hour (and has very likely been abandoned already); that would probably introduce even more additional cost.

Maybe the UI could do that for sessions that the user hasn't left yet, when the deadline comes near.


I saw that too, but that's actually even worse on cache - the entire conversation is then a cache miss and needs to be reloaded in order to do the compaction. Then the resulting compacted conversation is also a cache miss.

You ideally want to compact before the conversation is evicted from cache. If you knew you were going to use the conversation again later after cache expiry, you might do this deliberately before leaving a session.

Anthropic could do this automatically before cache expiry, though it would be hard to get right - they'd be wasting a lot of compute compacting conversations that were never going to be resumed anyway.
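A rough cost model shows why the timing matters. The prices below are illustrative assumptions, not quoted rates: the multipliers mirror commonly published prompt-caching pricing (cached reads around 0.1x of base input, cache writes around 1.25x), and actual numbers vary by model.

```python
# Back-of-envelope: compact while the prompt cache is warm vs. after
# eviction. All prices are assumed, in $ per million input tokens.
BASE = 3.00          # cold (uncached) input
CACHE_READ = 0.30    # cached input read (assumed ~0.1x of base)
CACHE_WRITE = 3.75   # writing tokens into the cache (assumed ~1.25x)

history = 0.150  # a 150k-token conversation, in MTok
summary = 0.005  # a 5k-token compacted summary, in MTok

# Compact before eviction: history is read from cache, summary is
# written back into the cache.
before = history * CACHE_READ + summary * CACHE_WRITE

# Compact after eviction: the whole history is a cold read, then the
# summary is written into the cache.
after = history * BASE + summary * CACHE_WRITE

print(f"warm compaction: ${before:.4f}, cold compaction: ${after:.4f}")
```

Under these assumptions the cold path costs several times the warm one, which matches the "3-5x" observation above; the exact ratio depends on the history/summary size split and the real cache multipliers.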


I'm glad they chose to do that as opposed to hidden behavior changes that only confuse users more.

Really good to know. That should have made it into their update letter in point (2). Empowering the user to choose is the right call.

I've started doing this with hashes in a CLI I'm working on. For slow prints it's somewhat helpful (https://asciinema.org/a/aD38Pk88CZgSZqtq), but for debug dumps with many, many hashes it really improves readability and makes it easier to track hashes across lines.
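The core trick can be sketched in a few lines (a hypothetical illustration, not the commenter's actual CLI): derive a stable ANSI 256-color from the hash value itself, so the same hash gets the same color everywhere it appears and your eye can match them across lines.

```python
import hashlib

def colorize_hash(hex_hash: str) -> str:
    """Wrap a hash in an ANSI 256-color escape chosen from the hash itself.

    Colors 16-231 form the 6x6x6 color cube; picking from that range
    skips the 16 basic colors and the grayscale ramp, and the choice is
    deterministic, so identical hashes always render identically.
    """
    color = 16 + int(hex_hash[:6], 16) % 216
    return f"\x1b[38;5;{color}m{hex_hash}\x1b[0m"

h = hashlib.sha256(b"example blob").hexdigest()
line = colorize_hash(h[:12])  # a short colored hash prefix for log output
```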

This is great, AI freeing us from bikeshedding.

The secret to not waiting at Ariscault is to live nearby and go on a weekday - you can walk right up.

Hey, please don't blindly paste/post from LLMs.

I appreciate the "please", but this comes across as presumptuous. First, you don't know the effort level I put in. Second, you haven't seen the end result. Third, why do you think I would "blindly paste" from an LLM? If you take a look at my profile or other comments, I hope that is clear.

I appreciate feedback in general, and I am glad when people care about making HN a nice place for discussion and community. Sometimes a well-meaning person goes a little too far, and I think it happened above. That's my charitable interpretation. It is also possible that in this age of AI, people are understandably pissed and sending that frustration out into the world. When that happens, just remember the people reading it matter too.

About me: I would not share something unless I think it has value to at least one other person on HN. I've done a lot of work about data and privacy in general (having worked at a differential privacy startup in the past), but I'm much newer to the idea of digging into ways of making telemetry gathering more transparent. I haven't found great resources on the Web about this yet, which is why I started doing the research. And I'm going to share it for others to read, criticize, build on top of, etc.


Where is the gist? I assumed LLM/bot because of the disconnect between "here's a gist" and "still cookin"

I ask everyone to be a bit more careful about the "assume LLM/bot" thing. That hair-trigger is often counterproductive.

Anyhow, the Claude research took 36 minutes to run, so I put it to the side and didn't link it originally. I'm still thinking through it -- there is a lot to cover: https://gist.github.com/xpe/654af2731d40a145e1d0b8b694fe8fd3


How was RGB tested, in that case?

This project requires physical on-device RGB testing, so testing RGB in CI/CD won't work.
