Not since I asked him to stop dropping by the D forum to remind us that he wouldn't include D in the shootout.
I've personally lost all interest in it.
(The benchmarks are also small and too easily gamed by the author of the benchmark code, and the compiler developer. I encourage people to run timing tests on their own code, as that is what matters to them.)
My experience has been that I get better ideas of benchmarks by profiling my real application code, porting the hotspots to the new language or framework within a mostly bare skeleton and testing it. It’s not perfect, but gives me a better idea as to whether moving will actually help!
> I encourage people to run timing tests on their own code, as that is what matters to them.
This is the heart of the matter: benchmarking toy programs and benchmarking unrelated bits of code amount to the same thing.
I think what people really need is a straightforward, accessible understanding of where a software implementation is inefficient. Design inefficiency can sometimes be "hacked over" (read: "paved over"), a bit like evening out technical debt; implementation inefficiencies, perhaps not so much. Making a complex database query faster by fiddling with (very) obscure SQL options, because a given vendor's query planner is broken in a rare edge case, is one example that comes to mind.
I think it's especially sad when a software implementation becomes renowned for its inefficiency. It kinda takes the heat off in what I would argue is a very unfair way, as if it legitimizes slowness, as if speed weren't a worthy pursuit... and then we wonder why our computers are slow. (Yeah, I'm thinking about Python here... and to a small extent many interpreted languages.)
In the context of D (and now DMC++ :D), so specifically compilers, it would be interesting to know what areas of the language don't generate especially fast code, or what bits might produce code that uses a little more memory than it could, etc. Because that's what people really want to know before they take the time to write/port; and if they know about all the instances of "don't do XYZ in this very specific way" ahead of time, maybe they can write the best possible code on the first try! (Design and implementation are intertwined in practice.)
I suspect the list of such "avoid"s may not be long. It might make for a particularly efficient kind of developer user manual.
I don't really see a big problem with a little bit of gaming on the benchmark-code side; with a few implementations to compare, it might even give some hints for idiomatic efficient code.
Gaming from the compiler/runtime side would be uglier - but I guess it is somewhat mitigated by real languages running the "general release" version.
No harm in "fastest way to list first 1000 prime numbers" being "print static list of first 1000 prime numbers".
And there's some value in having a standard benchmark harness that works easily across languages - as a helpful tool for "running your own benchmarks". Assuming the harness is any good, that is.
> I don't really see a big problem with a little bit of gaming on the benchmark-code side
I just got tired of the vitriol leveled at me with no basis. Things like I must have "sabotaged" other compilers. Probably the absolute worst one was when the journalist decided that Datalight Optimum C ran the benchmarks so fast, it must be a compiler bug and removed the benchmark results from his compiler roundup, calling DC a buggy compiler.
The reality was Datalight C was the first C compiler on DOS to do data flow analysis, and it deleted dead code. (Benchmarks of that era did nothing useful, and dfa detected that.) No cheating at all. A couple years later, everyone did dfa.
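A sketch of the effect dataflow analysis has on such a benchmark (a hypothetical toy kernel in Rust, not the original benchmark code): if the result is never observed, a DFA-equipped optimizer sees the whole loop as dead and deletes it, and the "benchmark" measures nothing.

```rust
// A 1980s-style "benchmark" kernel: pure arithmetic with no side effects.
fn kernel() -> u64 {
    let mut x = 0u64;
    for i in 0..1_000u64 {
        x = x.wrapping_add(i * i);
    }
    x
}

fn main() {
    // Result discarded: dataflow analysis proves nothing depends on the
    // loop, so a dead-code-eliminating compiler removes it entirely.
    let _ = kernel();
    // Result observed: only here must the work actually be performed.
    println!("{}", kernel()); // prints 332833500
}
```

That's the whole story behind the "suspiciously fast" numbers: not cheating, just an optimizer correctly noticing the benchmark computed nothing anyone looked at.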
I did run my own prime number crunching benchmark for fun and D blew Rust away but lagged behind Go. I used mutable Vec and HashSet in Rust, associative arrays and arrays in D, maps and arrays in Go to store the sieves.
You see, that's the point - even if you do something as simple, there are many ways to do it (and different compiler versions, especially in the case of D), different optimizations at the code and compiler level and all that - it's practically impossible to have a reliable, comprehensive benchmark.
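To make "many ways to do it" concrete, here is the same sieve of Eratosthenes written two ways in Rust (a sketch, not the benchmark code from the comment above): a dense `Vec<bool>` of composite flags versus a `HashSet` of composites. Same algorithm, same output, very different constant factors, since the set variant pays for hashing on every insert and lookup.

```rust
use std::collections::HashSet;

// Sieve with a dense boolean vector: the idiomatic fast choice.
fn sieve_vec(n: usize) -> Vec<usize> {
    let mut composite = vec![false; n + 1];
    let mut primes = Vec::new();
    for p in 2..=n {
        if !composite[p] {
            primes.push(p);
            let mut m = p * p;
            while m <= n {
                composite[m] = true;
                m += p;
            }
        }
    }
    primes
}

// The same sieve using a HashSet of composites: identical results,
// but every membership test and insert goes through the hasher.
fn sieve_set(n: usize) -> Vec<usize> {
    let mut composite = HashSet::new();
    let mut primes = Vec::new();
    for p in 2..=n {
        if !composite.contains(&p) {
            primes.push(p);
            let mut m = p * p;
            while m <= n {
                composite.insert(m);
                m += p;
            }
        }
    }
    primes
}

fn main() {
    assert_eq!(sieve_vec(30), vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29]);
    assert_eq!(sieve_vec(100_000), sieve_set(100_000));
    println!("both variants agree");
}
```

Swap the data structure, the compiler version, or the language, and the "same" benchmark can land anywhere on the leaderboard.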
The Rust HashMap and HashSet implementations are generic over hasher: you can pick a faster one if you don't have to worry about getting ddos'd. Last I checked, the go-to fast hasher was fnv: https://doc.servo.org/fnv/
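As an illustration of that generic-hasher API, here's a minimal sketch using only the standard library: a hand-rolled FNV-1a hasher (the same algorithm the fnv crate implements; the `Fnv1a` type and constants here are written out by hand for the example) plugged into `HashMap` via `BuildHasherDefault`, replacing the default SipHash.

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Hand-rolled 64-bit FNV-1a, stdlib only (the fnv crate does the same).
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a 64-bit offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= u64::from(b);
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

// A HashMap that uses FNV-1a instead of the default SipHash.
type FnvMap<K, V> = HashMap<K, V, BuildHasherDefault<Fnv1a>>;

fn main() {
    let mut counts: FnvMap<&str, u32> = FnvMap::default();
    for word in ["d", "rust", "go", "d"] {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["d"], 2);
    println!("ok");
}
```

The trade-off is the usual one: FNV-1a is faster on short keys but offers no resistance to attacker-chosen collisions, which is exactly why it only makes sense when the keys aren't hostile.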
How often do I need to explain that this SipHash claim of DDoS protection is utter nonsense? SipHash can easily be brute-forced, like every insecure hash function (<256 bits); proper DDoS protection can only be provided by a proper collision strategy. Even DJB himself says so.
Is there any way to use it directly in the source w/o having to create a Cargo.toml and src dirs? Is there a getting-started document? Dub recently included such a feature.
https://doc.rust-lang.org/cargo/ has installation, getting started, a reference, all of that. It needs some work, but for the basics of doing this, it's got it all.