Decades of speculative science fiction, thought experiments, and discourse led to this. It's gratifying to see that concern has grown to the point where a major AI lab is willing to take this risk to rein in the potential for runaway AI disasters. Hopefully we see other labs follow.
Is there a point at which library maintainer feedback would meaningfully influence a by-default JVM change?
I keep a large production Java codebase and its deployments up-to-date. Short of upstreaming fixes to every major dependency, the only feasible way to continue upgrading JDK/JVM versions has often been to carry explicit exceptions to new defaults.
JPMS is a good example: --add-opens still remains valuable today for important infra like Hadoop, Spark, and Netty. If other, even more core projects (e.g. Arrow) hadn't modernized, the exceptions would be even more prolific.
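To sketch what those exceptions look like (illustrative only; the exact modules each library needs vary by version), a JVM @args file ends up accumulating entries like:

  # opens carried forward for libraries that still reflect into JDK internals (illustrative)
  --add-opens=java.base/java.lang=ALL-UNNAMED
  --add-opens=java.base/sun.nio.ch=ALL-UNNAMED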
If libraries as heavily depended upon as Mockito are unable to offer a viable alternative in response to JEP 451, my reaction would be to re-enable dynamic agent attachment rather than re-architect many years of test suites. I can't speak for others, but if this reaction holds broadly, it would seem to defeat the point of by-default changes.
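To be concrete about what that fallback looks like (a sketch; the dynamic-attach flag is the one JEP 451 itself documents, and loading the agent at startup is the supported alternative):

  # explicitly re-allow dynamic agent attachment (also silences the current warning)
  -XX:+EnableDynamicAgentLoading
  # vs. the supported route: attach the agent at startup
  -javaagent:<path-to-the-mocking-agent>.jar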
> Is there a point at which library maintainer feedback would meaningfully influence a by-default JVM change?
Of course, but keep in mind that all these changes were and are being done in response to feedback from other users, and we need to balance the requirements of mocking frameworks with those of people asking for better performance, better security, and better backward compatibility. When you have such a large ecosystem, users can have contradictory demands and sometimes it's impossible to satisfy everyone simultaneously. In those cases, we try to choose whatever we think will do the most good and the least harm over the entire ecosystem.
> JPMS is a good example: --add-opens still remains valuable today for important infra like Hadoop, Spark, and Netty. If other, even more core projects (e.g. Arrow) hadn't modernized, the exceptions would be even more prolific.
I think you have answered your own question. Make sure the libraries you rely on are well maintained, and if not - support them financially (actually, support them even if they are). BTW, I think that Netty is already in the process of abandoning its hacking of internals.
Anyone who has hacked internals agreed to a deal made in the notice we had in the internal files for many years prior to encapsulation [1], which was that the use of internals carries a commitment to added maintenance. Once they use the new supported mechanisms, that added burden is gone but they need to get there. I appreciate the fact that some open source projects are done by volunteers, and I think their users should compensate them, but they did enter into this deal voluntarily.
> If libraries as heavily depended upon as Mockito are unable to offer a viable alternative in response to JEP 451
The main "ergonomic" issue was lack of help from build tools like Gradle/Maven.
[1]: The notice was some version of:
WARNING: The contents of this source file are not part of any supported API.
Code that depends on them does so at its own risk: they are subject to change or removal without notice.
Of course, we did end up giving notice, usually a few years in advance, but no amount of time is sufficient for everyone. Note that JEP 451 is still in the warning period that started over two years ago (although probably not for long).
> Why on earth would you bet your money on some random tool you don't even understand? ... I built a tool for people who knew what harmonic patterns were.
The tool is for drawing "technical analysis indicators", one of the most convoluted ways to ascribe meaning to a random process and something that will only ever be true in the self-fulfilling sense. I don't think it's a surprise that some users are willing to blindly trust the tool, when all users of it are blindly trusting concepts that are built on sand.
Although I'm sure the author is burnt out from the experience now, I'd be interested in hearing how their next side project venture goes: is the experience more enjoyable when you're dealing with a user base that self-selects differently? Or do all users suck equally, just in different ways?
At least half of the interactions presented as terrible are, I feel, actually quite normal and potentially even pleasant. If you don't actually enjoy talking about your product with 'beginners' or even just normal people, then maybe reconsider the customer support role?
For me this reads as 'I don't enjoy voluntary customer support' rather than 'my customers suck'.
I see only one sentence - "others had very basic questions, answers to which were given in the description of each script" - that might refer to people seeking either clarification (including cases where the answer was in the documentation, but not obviously so) or advice on how to use the tool more effectively. (I exclude bald requests for 'hot tips' or source code from those categories.)
For all I know, the author might have both received and responded substantively (with more than RTFM) to many such requests, but has not mentioned them here because they were not part of the problem.
Agreed - I come from a Java/C++ shop where we tried to tackle this dichotomy with interop, but it ended up causing more problems than it solved. A lot of the work that Java has done with modern garbage collectors is impressive, but even they admit (indirectly, via Valhalla) that no/low-alloc code has its place.
I mostly agree with this, but I've been a big fan of having primitive types in config. Most of the time, if I have something I want to configure, it's one of the following (or a map/list-based structure consisting of them):
- scalar value
- feature toggle
- URI/enum option/human-readable display text
Having float/long/boolean is trivial to validate in the config language itself, and if they're useful and simple enough, isn't it nice to be able to validate your config as early as possible?
It's nice, but it comes at a cost. For example, every user of TOML forever will have to put strings in quotes. Why? Because having other types creates ambiguity, which is resolved by this one simple trick.
But if you don't quote them, then you have "the Norway problem", as in YAML.
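A two-line illustration (YAML 1.1 behavior; a YAML 1.2 parser treats the unquoted value as a plain string):

  country: NO     # a YAML 1.1 parser reads this as the boolean false
  country: "NO"   # quoted, it stays the string "NO"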
Nullability is a huge issue in Java, but annotation-based nullability frameworks are both effective and pervasive in the ecosystem (and almost mandatory, IMO).
I'm really excited about https://jspecify.dev/, which is an effort by Google, Meta, Microsoft, etc. to standardize annotations, starting with @Nullable.
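A rough sketch of what the annotations look like in code (the enforcement itself comes from whichever checker you pair them with, e.g. NullAway or IDE inspections):

  import org.jspecify.annotations.NullMarked;
  import org.jspecify.annotations.Nullable;

  // @NullMarked makes non-null the default for everything in scope;
  // only types explicitly annotated @Nullable may hold null.
  @NullMarked
  class UserStore {
      @Nullable String findEmail(String userId) { // may legitimately return null
          return null;
      }

      String displayName(String userId) {         // the checker treats this as never null
          return "anonymous";
      }
  }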
This can never be as effective as changing the default reference type to be non-nullable, which would break backwards compatibility, so you can never really relax.
I know Kotlin is basically supposed to be that, though it has a lot of other stuff too, and I haven't used it much.
That's basically what C# has done. But it's implemented as a warning which can be upgraded to an error. I think it might even be an error by default in new projects now.
They didn't. A proper fix would require getting rid of null altogether in favor of ADTs or something similar. I work with C# daily and nulls can still slip through the cracks, although it's definitely better than @NotNull and friends.
I haven't worked with Kotlin in a while, but IIRC their non-nullable references actually do include runtime checks, so you cannot simply assign null to a non-nullable and have it pass at runtime like you can (easily) do in C#.
They won't change the default reference type to non-null. It might take a few years, but you can see their planned syntax here: https://openjdk.org/jeps/401
Depends - one of the hardest parts of the 11-to-20 upgrade for us was that the CMS GC was removed.
If you run a bunch of different microservices with distinct allocation profiles, all with high allocation pressure and performance constraints, and you've accomplished this with the help of a very fine-tuned CMS setup, migrating that over to G1/ZGC is non-trivial.
Java sort of sucks for microservices (microservices suck on their own), as it has a relatively high bootstrap cost.
High allocation rate feels weird with microservices - I suppose that depends a lot on the coding style. G1GC is meant for generally large setups, with several cores at least. E.g. the default of ~2048 regions on a 2GB heap gives 1MB regions, so allocations of half a region or more (512KB+) are treated as humongous and require special care.
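If such allocations are unavoidable, one knob (a sketch; the region size must be a power of two) is to grow the regions so fewer objects cross the half-region threshold:

  java -Xmx2g -XX:+UseG1GC -XX:G1HeapRegionSize=8m -jar service.jar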
I can't help but think that if you're teetering on a knife's edge, only holding on thanks to hyper-tuned GC params, then you should take a step back and consider getting out of that predicament.
Yup. We've clearly benefitted (G1 and generational ZGC have large advantages over CMS), but it's a lot of experimentation and trial and error to get there, whereas other deprecations/removals are usually easier to resolve.
Isn't the default config correct for pretty much all workloads, unless you have very special requirements? Like, at most one should just change the target max pause time in case of G1, depending on whether they prefer better throughput at the price of worse latency, or the reverse.
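i.e. something along the lines of (illustrative value):

  java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar service.jar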
Even simpler: every time you sample random you get the same number of bits of entropy. Sampling once and then cubing spreads the same entropy bits over the interval. Sampling three times spreads 3x the entropy into the reachable area. Thus you know this guy is doing no net favor to the world.
There might be a market in HFT/trading firms (market makers). They're struggling to recruit right now (the industry is doing very well, hence high headcount growth), and they're usually pretty small, so joining with friends means working in closer proximity than you might get in big tech.
Their small class sizes and heavy new-grad hiring also mean that they're usually more flexible with recruiting, and some would probably welcome a new pipeline for talented new grads like this.
My experience is that HFT/trading firms are almost exclusively looking for seasoned professionals like low-latency engineering specialists.
There's typically the expectation of providing value and filling expertise gaps almost right off the bat. Fresh grads are incapable of doing that.
Is this a theoretical difference, or a difference in terms of an actual implementation? I have no knowledge of linear systems algorithms, but as far as I was aware, many w.h.p. algorithms are correct with probability 1 - (1/n)^c (with c effectively being a hyperparameter), which would seem like quite a strong result in itself.