> This is also a big part of why artists don’t earn shit.
The pie that Spotify divides up among the artists is a global one. It's not like you listen to one artist, so they get your 10 bucks every month. You're paying Taylor Swift, even though you never listen to her.
It's not bad by itself, but I'd argue the opaque structure of it is horrendous. Especially in financial matters, you should be able to estimate how much money Z you'll get if you put in X effort and achieve Y metrics. But even getting a proper Y isn't straightforward, let alone the Z payout.
So it went from parsing at 25MiB/s to 115MiB/s. I feel like 115MiB/s is very slow for a Rust program, I wonder what it's up to that makes it so slow now. No diss to the author, good speedup, and it might be good enough for them.
115 MiB/s is something like 20 to 30 cycles per byte on a laptop, 50 on a desktop. That’s definitely quite slow as far as a CPU’s capacity to ingest bytes, but unfortunately about as fast as it gets for scalar (machine) code that does meaningful work per byte. There may be another factor of 2 or 3 to be had somewhere, or there may not be. If you want to go meaningfully faster, as in at least at the speed of your disk[1], you need to stop doing work per byte and start vectorizing. For parsers, that is possible but hard.
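The cycles-per-byte arithmetic above can be made concrete with a tiny helper (the clock speeds are assumptions for illustration, roughly a laptop vs. desktop core):

```c
#include <assert.h>

/* Back-of-envelope: how many CPU cycles are available per input byte
 * at a given clock speed and parsing throughput. */
double cycles_per_byte(double clock_ghz, double mib_per_s) {
    double bytes_per_s = mib_per_s * 1024.0 * 1024.0;
    return (clock_ghz * 1e9) / bytes_per_s;
}
```

At 115 MiB/s this gives roughly 25 cycles/byte for a ~3 GHz core, which is where the "20 to 30 cycles per byte" figure comes from.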
A quick rule of thumb is that memory bandwidth bottoms out at roughly one or two bytes per peak clock cycle per core (not unlike an old 8-bit or 16-bit machine!) when running highly multithreaded workloads that heavily access main RAM outside cache. So there's a lot of gain to be had before memory bandwidth is truly saturated, and even then one can plausibly move to GPU-based compute and speed things up further. (Unified memory + HBM may add a 2x or 3x multiplier to this basic figure, but either way it's in the ballpark.)
The grammar matters also, of course. A pure Python program is going to be much slower than the equivalent Rust program, just because CPython is so slow.
I don't know if this does semantic analysis of the program as well.
This has to be AI slop text... It doesn't actually say what it does. If you've got a non-compacting GC, it's fragmenting memory. If you need to release memory, then you'd better make sure that the blocks the GC is using are empty, and munmap them.
I think they just swapped out LuaJIT's modified built-in dlmalloc[1] for some standard allocator, then set some tuning values of the allocator to make it more eager to return pages with no live allocations left to the OS.
LuaJIT has always had a pluggable allocator system you can set at state-construction time[2]. It did have a restriction that on 64-bit builds you could only use the built-in allocator unless you enabled the GC64 build option, but that's been enabled by default for a while now.
> This isn’t simply a code quality issue; rather, it stems from a “communication gap” between the runtime memory allocation mechanism and the operating system
> It is not merely a patch, but an enhanced runtime environment
> This is not merely a code quality issue, but a profound architectural challenge
What do you mean, at a practical level, when you set out your "priority list" above? Are you referring to the use of congestion charges to discourage private motor vehicle use?
Not OP, but I don't think congestion charges are the most important part here. It's more about what type of infrastructure to prioritize resources and work for. Basically, the idea is that the town or city should not spend money on building parking, for example, and should instead spend it on bike lanes, a couple more buses, or an extension to the metro line.
It’s entirely dependent on the situation. In some areas, additional charges work best. In others, it’s possible/necessary to redesign road and street layouts to prioritise higher-density modes of transport and physically discourage low-density modes like cars: priority lights for public transport, lower speed limits, narrowed streets. In some contexts, it’s necessary to disallow cars entirely, with things like bus lanes, bike/pedestrian-only areas, and separated tram/metro lines.
Most of this infrastructure, in practice, also aids emergency vehicle use as they can usually fit down bike lanes and are obviously able to fit in bus lanes.
I'm on an M4 MacBook, and I can see it. I'm inclined to totally accept the blog author's experience as true for them; I'd probably experience the same thing.
> chicken breast is more than twice as "good" ratio wise.
Yes, at more than twice the price for me.
> for the average person's protein intake, yes.
The average person doesn't need that ratio, reaching 60-90g of protein is trivial. That ratio is good for bodybuilding purposes. Now, eating that much tofu, that sucks. Generally, getting 200g of protein sucks, even when you eat protein powder.
Basically there is an entity between Anna's Archive and the torrents: hosters. AA has searchable metadata and a hash value. The hosters keep track of hash values, the cached files, and which torrents they are backed up in, and take on almost the entire legal liability. Users search on AA for what they are looking for but ultimately download it from a hoster.
^- This lets us pass arbitrary starting data to a new thread.
I don't know whether this counts as "very few use cases".
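A minimal sketch of the use case being discussed: the `void *` argument to `pthread_create` is the standard way to hand arbitrary starting data to a new thread (the struct and function names here are made up for illustration):

```c
#include <pthread.h>
#include <stddef.h>

/* Arbitrary starting data for the thread, smuggled through void*. */
typedef struct {
    int input;
    int output;   /* filled in by the thread */
} StartArgs;

static void *worker(void *arg) {
    StartArgs *a = arg;          /* cast back from void* inside the thread */
    a->output = a->input * 2;
    return NULL;
}

int run_worker(int input) {
    StartArgs args = { input, 0 };
    pthread_t t;
    if (pthread_create(&t, NULL, worker, &args) != 0)
        return -1;
    pthread_join(t, NULL);       /* args must stay alive until the thread is done */
    return args.output;
}
```

The join before returning matters: `args` lives on the caller's stack, so the thread must finish before the frame goes away.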
The Memory Ownership advice is maybe good, but why are you allocating in the copy routine if the caller is responsible for freeing it, anyway? This dependency on the global allocator creates an unnecessarily inflexible program design. I also don't get how the caller is supposed to know how to free the memory. What if the data structure is more complex, such as a binary tree?
It's preferable to have the caller allocate the memory.
void insert(BinTree *tree, int key, BinTreeNode *node);
^- this is preferable to the variant where it takes the value as the third parameter. Of course, an intrusive variant is probably the best.
If you need to allocate for your own needs, then allow the user to pass in an allocator pointer (I guessed on function pointer syntax):
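One possible shape for that allocator parameter, as a hedged sketch (the `Allocator` struct, `tree_new`, and `tree_destroy` names are invented here, not from the article):

```c
#include <stddef.h>
#include <stdlib.h>

/* Caller-supplied allocator instead of hard-wiring the global malloc. */
typedef struct {
    void *(*alloc)(void *ctx, size_t size);
    void  (*dealloc)(void *ctx, void *ptr, size_t size);
    void  *ctx;   /* allocator state, e.g. an arena */
} Allocator;

typedef struct BinTree {
    Allocator a;
    /* ... nodes ... */
} BinTree;

/* Default backing: plain libc malloc/free, ignoring ctx. */
static void *libc_alloc(void *ctx, size_t size) { (void)ctx; return malloc(size); }
static void libc_dealloc(void *ctx, void *ptr, size_t size) {
    (void)ctx; (void)size; free(ptr);
}

BinTree *tree_new(Allocator a) {
    BinTree *t = a.alloc(a.ctx, sizeof *t);
    if (t) t->a = a;   /* the tree remembers how to free itself */
    return t;
}

void tree_destroy(BinTree *t) {
    if (t) t->a.dealloc(t->a.ctx, t, sizeof *t);
}
```

Storing the allocator in the struct also answers the "how does the caller know how to free it" question: the `_free()`-style function uses the same allocator the caller passed in.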
void* is a problem because the caller and callee need to coordinate across the encapsulation boundary, thus breaking it. (Internally it would be fine to use - the author could carefully check that qsort casts to the right type inside the .c file)
> What if the data structure is more complex, such as a binary tree?
I think that's what the author was going with by exposing opaque structs with _new() and _free() methods.
But yeah, his good and bad versions of strclone look more or less the same to me.
If you don't pass the size, the allocation subsystem has to track the size somehow, typically by either storing the size in a header or partitioning space into fixed-size buckets and doing address arithmetic. This makes the runtime more complex, and often requires more runtime storage space.
If your API instead accepts a size parameter, you can ignore it and still use these approaches, but it also opens up other possibilities that require less complexity and runtime space by relying on the client to provide this information.
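A small sketch of one of those "other possibilities": a linear/bump allocator whose free takes the size back from the client, so no per-allocation header is needed (all names here are illustrative, not from the thread):

```c
#include <stddef.h>

/* Fixed-buffer bump allocator; freeing needs the size from the caller. */
typedef struct {
    unsigned char buf[1024];
    size_t used;
} Arena;

static size_t align16(size_t n) { return (n + 15) & ~(size_t)15; }

void *arena_alloc(Arena *a, size_t size) {
    size = align16(size);
    if (a->used + size > sizeof a->buf) return NULL;
    void *p = a->buf + a->used;
    a->used += size;
    return p;
}

/* Because the caller supplies the size, the most recent allocation can
 * be rolled back with pure address arithmetic -- no header, no buckets. */
void arena_free_last(Arena *a, void *ptr, size_t size) {
    size = align16(size);
    if ((unsigned char *)ptr + size == a->buf + a->used)
        a->used -= size;
}
```

With a header-based `free(ptr)` this rollback would still work, but the sized API makes the allocator itself carry zero bookkeeping.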
The way I've implemented it now is indeed to track the size in a small header above the allocation, but only in debug mode. I only deal with simple allocators like a linear, pool, and normal heap allocator. I haven't found the need for anything super complex yet.
You can prevent buffer overflows even when you don't use a VM. E.g. it's perfectly legal for your C compiler to insert bounds checks. And there are languages like Rust or Haskell that guarantee the absence of buffer overflows (outside explicitly unsafe code).
You can also design a VM that still allows buffer overflows. E.g. you can compile C via LLVM and still get buffer overflows.
Any combination of VM (Yes/No) and buffer-overflows (Yes/No) is possible.
I agree that using a VM is one possible way to prevent buffer overflows.
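A minimal sketch of the no-VM case: since out-of-bounds access is undefined behavior in C, a compiler (or a sanitizer) is allowed to insert a check like this around every access (`checked_get` is a made-up helper, not a real compiler intrinsic):

```c
#include <assert.h>
#include <stddef.h>

/* The kind of bounds check a conforming C compiler could legally insert:
 * an out-of-bounds read would be UB anyway, so trapping is allowed. */
int checked_get(const int *arr, size_t len, size_t i) {
    assert(i < len && "bounds check");
    return arr[i];
}
```

This is essentially what `-fsanitize=bounds`-style instrumentation does, with no VM anywhere in sight.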