You would have to be dealing with a very large number of meshes at once to be churning through millions of vertex and UV arrays per second. I can't say what you should use in that scenario, but it is a pretty niche one! And you would probably want to focus on parallelism, where C++ again is not the best.
> And you would probably want to focus on parallelism, where C++ again is not the best.
Here again, this is wrong.
99% of highly scalable parallel code that runs on supercomputers is C++ or Fortran.
And there are reasons for that: if you want parallelism, you generally want performance. If you want performance, you want control over NUMA placement, SIMD, cache locality, etc., and C++ gives you that. Not Java.
Yet Julia and Chapel are winning the hearts of HPC researchers.
Java is not the only language with a GC; there are others that offer control over NUMA, SIMD, and cache locality, like .NET Native with C# 7/8, or D, as two possible examples.
A GC in a systems programming language is there as a productivity and safety mechanism, not as the only way of doing 100% of memory allocations.
A vocal sub-community of D programmers argues for @nogc, partly out of GC phobia as ex-C++ devs, and partly because the GC implementation was found lacking. That situation might improve with the recently released pluggable GC API and a new precise tracing GC.
Many devs that prototype their apps in D without doing premature memory optimizations end up realizing that, for their use case, the GC is actually good enough.
No, I am talking about operations that alter the topology of single large meshes. Just creating a single tessellated sphere with full topological information gets you there easily.
Ah, we may be talking past each other then. So I reiterate my point from upthread:
> Performant mesh manipulation code as a rule deals with largish float arrays (or other similar contiguous layouts, like AoS or SoA or indexed versions thereof). So there are no per-vertex objects to malloc/free or to track in GC.
This is true only if you have the foresight to write custom allocators. Unless you limit the possible mesh topologies, you must deal with many tiny arrays for each vertex, edge, and face, and their lengths are not uniform.
Yes, depending on what you mean by custom allocator... but this is the same in C++ too, right?
To store custom vertex / face attributes, you can use a variety of sparse techniques to keep memory from sitting unused in attribute arrays. Basically you want an interface like get_attribute(type, vertex_id). The sparse matrix, trie, or whatever can be backed by int or float array storage instead of many, many tiny malloc'd/GC'd objects.
Yes, these backing constructs essentially become special-case memory managers for the problem domain. You may call them by different names, but the essential behaviour is the same.