
Strong typing and memory safety are compatible with his wants; it's just that Rust isn't it. Any of Clojure, F#/C#, Python, or Go might be good here. (OK, the nil-punning might make "strong" arguable in Clojure's case... but on the other hand you have spec.)


I agree. GC is a life saver, and I think if something can be written with GC, it should be.


Most of those languages don't have the performance of Rust or C++.


Most programs don't need C++ performance. Java is usually within a factor of two and usually fast enough.


> Most programs don't need C++ performance. Java is usually within a factor of two and usually fast enough.

Meshing software is a typical case where you need C++ and where Java is a terrible choice.

By nature, meshing makes you juggle millions to billions of very small objects with mutual interactions, something the Java GC absolutely hates. Sorry, Java fan-boys.


A billion vertex objects might make a nice torture-test benchmark for a GC or malloc implementation, but fortunately things aren't so grim in the real world:

Performant mesh manipulation code as a rule deals with largish float arrays (or other similar contiguous layouts, like AoS or SoA or indexed versions thereof). So there are no per-vertex objects to malloc/free or to track in GC.

In some algorithms, other data structures such as trees may be used as intermediate formats, but there too the choice between malloc/GC-allocated nodes and the alternatives is similar in a GC'd and a non-GC'd language.

In general, good GCs often outperform malloc/free, because GCs are amenable to parallelization in the background, and because malloc/free doesn't enjoy the bump-allocator happy case that generational GCs use for nursery objects.


> and because malloc/free doesn't enjoy the bump-allocator happy case that generational GCs use for nursery objects.

Performance-optimized use of malloc and free involves using custom suballocators (often called simply "allocators" within user code) which are tailored to the "happy cases" that you know about in your code. A "nursery" of objects that can be bump-allocated and then freed in a single operation is known in this context as an arena/region, and you absolutely can exploit this pattern as part of using malloc/free.
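A minimal arena sketch of that pattern (the class and method names here are hypothetical, not from any specific allocator library): allocation is a pointer bump into a pre-reserved buffer, and "freeing" the whole region is a single O(1) reset, which is essentially what a generational GC's nursery does.

```cpp
#include <cstddef>
#include <vector>

// Toy arena/region allocator: bump-allocate from a fixed buffer,
// release everything at once with reset(). Requires C++17 (std::byte).
class Arena {
    std::vector<std::byte> buf_;
    size_t used_ = 0;
public:
    explicit Arena(size_t capacity) : buf_(capacity) {}

    // align must be a power of two, at most alignof(std::max_align_t).
    void* allocate(size_t n, size_t align = alignof(std::max_align_t)) {
        size_t p = (used_ + align - 1) & ~(align - 1); // round up offset
        if (p + n > buf_.size()) return nullptr;        // arena exhausted
        used_ = p + n;
        return buf_.data() + p;
    }
    void reset() { used_ = 0; }       // "free" every allocation in O(1)
    size_t used() const { return used_; }
};
```

Per-allocation cost is a compare and an add, and there is no per-object free at all; the trade-off is that individual objects cannot be released early.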


I've never met a GC that can deal with millions of allocations per second while keeping several million objects in the young generation. Some mesh processing algorithms have a tendency to get you there very quickly if you do not employ custom allocators. At that point you have already defeated the GC provided by the language.


You would have to be dealing with a very large number of meshes at once if you were churning through millions of vertex and UV arrays per second. I can't say what you should use in that scenario, but it's a pretty niche one! And you would probably want to focus on parallelism, where C++ again is not the best.


> And you would probably want to focus on parallelism, where C++ again is not the best.

Here again, this is wrong.

99% of the highly scalable parallel code that runs on supercomputers is C++ or Fortran.

And there are reasons for that: if you want parallelism, you generally want performance, and if you want performance you want control over NUMA, SIMD, cache locality, etc. C++ gives you that. Java does not.


Yet Julia and Chapel are winning the hearts of HPC researchers.

Java is not the only language with a GC; there are others that offer control over NUMA, SIMD, and cache locality, like .NET Native with C# 7/8, or D, as two possible examples.


But the GC is heavily frowned upon in the D community. A lot of D programmers try to make everything they write @nogc.


A GC in a systems programming language is there as a productivity and safety mechanism, by no means as the only way of doing 100% of memory allocations.

A vocal sub-community of D programmers argues for @nogc out of GC phobia (many are ex-C++ devs) and because the GC implementation was found lacking. That situation might improve with the recently released pluggable GC API and a new precise tracing GC.

Many devs who prototype their apps in D without doing premature memory optimizations end up realizing that for their use case it's actually good enough.


No, I am talking about operations that alter the topology of single large meshes. Just creating a single tessellated sphere with full topological information gets you there easily.


Ah, we may be talking past each other then. So I reiterate my point from upthread:

> Performant mesh manipulation code as a rule deals with largish float arrays (or other similar contiguous layouts, like AoS or SoA or indexed versions thereof). So there are no per-vertex objects to malloc/free or to track in GC.


This is true only if you have the foresight to write custom allocators. Unless you limit the possible mesh topologies, you must deal with many tiny arrays for each vertex, edge, and face, and their lengths are not uniform.


Yes, depending on what you mean by custom allocator... but this is the same in C++ too, right?

To store custom vertex/face attributes, you can use a variety of sparse techniques to avoid memory sitting unused in attribute arrays. Basically you want an interface of get_attribute(type, vertex_id). This sparse matrix, trie, or whatever can be backed by int or float array storage instead of many, many tiny malloc'd/GC'd objects.


Yes, these backing constructs essentially become special-case memory managers for the problem domain. You may call them by different names, but the essential behaviour is the same.


I have to second this. I tried to write a 3D modelling tool in Java. I got decent, but not stellar, performance only once I consciously started fighting the GC in my code.

I converted that code to C++ and recently added a custom allocator to improve performance even further. The result is now functionally equivalent, but about 5 times faster.


> Java is usually within a factor of two

No. That's on synthetic benchmarks. Run a real server and you see Java shit its pants while a C++ service runs rings around it.

That's because memory/cache fragmentation and usage are vastly more important in 2019 than CPU cycles.


Need is relative. If you can get the extra speed, why waste it? Life’s too short to spend it waiting for your computer.

And in this case, the author does need C++ speed.


> Need is relative. If you can get the extra speed, why waste it?

Because you can be a lot more productive if you are not extremely constrained by speed.


That’s a false dichotomy. A language can be both productive and fast.


In theory, yes. In practice, Rust at the moment is less productive than GC languages. I am hoping for autoclone mode in the future (sort of like Swift ARC).


I think the article didn't say anything about performance requirements.

The app is Dust3D, an ease-of-use-focused mesh editor which deals with simple meshes.

Certainly from the blog post you get the impression that programmer productivity is a consideration in his choice of language.


3D modelling is a problem domain where performance is the most important feature. Without sufficient performance, the tool is not interactive. And direct interaction with immediate visual feedback is the whole point of these programs.


Having worked in 3D, including on tools, it's my experience that features and productivity (= the program is delivered on time) are the most important features :)

The code needs to be "fast enough"¹. Though this often means more attention to speed than in other kinds of software. To rephrase, speed is often necessary but not sufficient to make good tools.

¹ I try to avoid using "execution speed" and "performance" interchangeably, since a crashing, late or incorrect fast program is not a well performing program!


Fair point. Let me rephrase my point of view a little: you haven't made the users truly productive unless the tools provide responsive and accurate interactive feedback. Anything that makes feedback non-immediate makes it harder for the user to dial in the exact results they want. They end up spending more time and giving up once they've reached an inferior result that merely meets basic quality standards. Smooth and direct interaction makes them go the extra mile while enjoying the process.



