Who cares? What use is measuring the objective performance difference between two solutions if no one cares about the difference? That’s why I’m talking about the actual performance requirement.
But what about when people do care? I work in a field where individuals regularly spend four to five figures on a new CPU for a half-percent single-core performance improvement. Computers will literally never be fast enough for what we want to do with them, so our job is to leave exactly zero room for improvement on the software side.
If you care about it, then obviously go to town on optimization, use an FPGA, whatever. But I notice you said 'CPU', which means there is a performance/cost tradeoff even in your field. Otherwise wouldn't you be using specialized hardware?
When we are talking about a general-purpose language like OCaml and you come in talking about extreme HPC, you must realize that's not relevant to the discussion? Would you comment on threads about Golang to point out that it's not appropriate for HPC use cases?
I mean, yes, FPGAs and dedicated hardware are researched and used in that field.
The main reason CPUs are used "more" is convenience, and the fact that what people want to do is often not easily doable on an FPGA; for instance, one of the commonly used programming languages in that field (Faust) only got an experimental FPGA port after 20+ years of existence.
But when talking about performance, the point is to extract the absolute most performance possible out of the given hardware, because you are building a product, you can only afford e.g. some ARM chip, and you need to get the most out of it so your product's price fits its target demographic (to, you know, make money for your business).
In particular, the main "competition" in that field is analog hardware, which does not have "performance" issues (it has other issues instead: an analog guitar distortion has no latency, but it does raise the noise floor of the signal). Everyone wants the best of both worlds, and it is our job to make that happen.
That's an incredible arrogant and close minded way to think about software. There are plenty of domains where every extra % of performance matters. Video games, HFT, robotics etc.
Even in these domains, people still balance performance against the expressiveness of the language; otherwise, why isn't everyone writing their games/HFT/robotics code in assembly for that "bare metal" performance?
For example, many popular game engines provide C# scripting _even though_ using a garbage-collected language (even just for scripting) is slower and throws away a few % of performance, because the performance is _good enough_ for their use cases.
That's because scripts in a game are not usually a performance bottleneck if kept to best practices. And you can bet your ass both Unity and Unreal have some ASM in their guts.
Exactly. I mainly work on C++ software, and for my PhD I rewrote all the core algorithms in OCaml... Boy, was that slow. I'm never touching that language again for anything that needs peak performance.
I didn't realize we were exclusively talking about real-time systems? Do you think people you argue with on HN are so dumb that they would go around claiming high-level GC languages are suitable for real-time systems?
I feel like if your concern is real-time systems, you should probably say you don't think it's suitable for real-time systems, instead of talking about performance differences that no one really disputes but which don't matter in practice for most applications anyway.