
So, it’s interesting. You know how with RAM, it’s a good idea for it to be “fully utilized”, in the sense that anything apps aren’t using should be used for file system cache? And then when apps do need it, the least-recently-used cache can be freed to make room? It’s actually similar for the file system itself!

If macOS is using 153GB for iCloud cache, that’s only a bad thing if it’s not giving it back automatically when your filesystem starts getting full. Otherwise it means you have local copies of things that live in iCloud, making the general experience faster. In that sense, you want your filesystem to be “fully utilized”. The disk viewer in macOS that shows you filesystem utilization should really differentiate this sort of cache from “real” utilization… this cache should (if everything is working right) logically be considered “free space”.

Now of course, if there are bugs where the OS isn’t giving that storage back when you need it, that all goes out the window. And yeah… bugs like these happen too damned often. But I still say, the idea is actually a good one, at least in theory.



This would be acceptable if solid state storage weren’t so susceptible to write wear, in a laptop where nothing is user serviceable.


What would the alternative be? Simply don't cache anything you get from icloud? Because even if you delete it more eagerly, that's a write cycle.

In fact, avoiding deleting it in case the user needs it again will put fewer write cycles on the SSD, assuming you're going to write it to the SSD at all. The only alternative I can think of is keeping everything from iCloud in RAM, but that is a pretty insane idea. (Also, then the first thing you'd get is people complaining that iCloud eats up all their 5G data caps, etc.)


Of course, but then iCloud might want to cache a reasonable amount of data, say, the 10% the user uses the most. Seeing iCloud caches in the 100+GB arena makes no sense to me, especially if the system isn’t rapidly releasing that storage when needed.


If the ability to release the storage on-demand works correctly (and this is a big if) there’s no reason to limit to 10%. What benefit will that have? If the system works well, deleting the data eagerly accomplishes nothing.

I think the actual system uses filesystem utilization as a form of “disk pressure”: once it’s above a certain threshold (say, 90% used), it starts evicting least-recently-used data. It doesn’t wait for 100%, because it takes some nonzero amount of time to free the cache. But limiting the cache size arbitrarily doesn’t seem useful.
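To make the idea concrete, here’s a minimal sketch of that kind of threshold-triggered LRU eviction. This is purely illustrative (the 90% threshold, the entry format, and the function name are my assumptions, not how macOS actually implements it):

```python
import heapq

def evict_under_pressure(cache_entries, used_bytes, capacity_bytes, threshold=0.90):
    """Evict least-recently-used cache entries once disk utilization
    crosses `threshold`, stopping as soon as we're back under it.

    cache_entries: list of (last_access_time, size_bytes, name) tuples.
    Returns (names_evicted, new_used_bytes).
    """
    target = threshold * capacity_bytes
    # A min-heap on last_access_time gives us the least-recently-used
    # entry first, without sorting the whole list up front.
    heapq.heapify(cache_entries)
    evicted = []
    while used_bytes > target and cache_entries:
        _, size, name = heapq.heappop(cache_entries)
        used_bytes -= size
        evicted.append(name)
    return evicted, used_bytes
```

Note that eviction stops at the threshold rather than emptying the cache: the point is to keep as much cached as the disk can comfortably hold.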

It gets more complicated when there are multiple caches (maybe some third-party apps have their own caches) and you need to prioritize who gets evicted, but it’s still the same thing in theory.

But yeah, if the system isn’t working right and cache isn’t seen as cache, or if it can’t evict it for some reason, then this all goes out the window. I’m only claiming it’s good in theory.



