> Roughly 111 GB of RAM. Which is like nothing to a search giant.
You are forgetting job replication. A global service can easily have hundreds of jobs across 10-20 datacenters.
Saving 111 TiB of RAM can probably pay your salary forever; I think I paid mine with smaller savings while I was there. During COVID there was also a RAM shortage, severe enough that there was a call to prefer trading CPU to save RAM, with changes to the rule-of-thumb resource costs.
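The jump from 111 GB to ~111 TiB follows from replication. A minimal sketch of the arithmetic, with illustrative replica counts assumed here (the comment only says "hundreds of jobs on 10-20 datacenters", not exact figures):

```python
# Back-of-envelope for the replicated RAM footprint.
# per_replica_gib comes from the parent comment; the replica
# breakdown (50 jobs x 20 datacenters) is an assumption.
per_replica_gib = 111
replicas = 50 * 20            # ~1000 serving jobs fleet-wide (assumed)
total_tib = per_replica_gib * replicas / 1024
print(f"~{total_tib:.0f} TiB of RAM across the fleet")  # ~108 TiB
```

With ~1000 replicas the 111 GB per-job map lands right around the 111 TiB figure quoted above.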
> A global service can easily have hundreds of jobs across 10-20 datacenters.
There's obviously something in between maintaining low latency with 20 datacenters, increasing latency a bit by shrinking hosting to a couple hundred dollars' worth of servers, and setting the latency to infinity, which was the original plan.
I'm guessing they ran out of leeway with small tweaks and found that breaking inactive links was the better way out. We don't know the hit rate of what they call inactive links, nor the real cost of keeping them around.
A service like this is probably in maintenance mode too, so simplifying it to use fewer resources makes sense, and I bet the PMs are happy about shorter links, since at some point, for fear of inconvenience and typos, you are better off not using a link shortener at all and just using a QR code.