Hacker News
Why Rust is not a mature programming language (multimedia.cx)
40 points by DarkCrusader2 on Sept 19, 2020 | 70 comments


The traits section seems to be more a problem of the author seeing objects and classes instead. Calling them traits and not classes should have been enough, IMO, to dispel that, but it is a common mistake, so fair enough. You should think of them, as the compiler devs do, as Prolog code interpreted at compile time.

Traits are sets. “Impl” means a type, or any type described by the constraints, is in that set.

    trait A {}
    struct S;
    impl A for S {} // S is in A
    impl<T> A for Vec<T> where T: A {} // Vecs of A are also in A
Type constraints are a way to use those sets to specify which types are allowed in which places. The compiler then tries to prove that the types you do use are in fact in the sets you say they have to be in.
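To make that concrete, here's a minimal sketch (all the names are invented for illustration): a bound like `T: A` is exactly the claim "T is in the set A", which the compiler must be able to prove.

```rust
trait A {
    // Default method so membership is observable.
    fn describe(&self) -> &'static str { "in A" }
}

struct S;
impl A for S {}                 // S is in A
impl<T: A> A for Vec<T> {}      // Vecs of things in A are also in A

// The bound `T: A` asks the compiler to prove "T is in the set A".
fn check<T: A>(t: &T) -> &'static str {
    t.describe()
}

fn main() {
    assert_eq!(check(&S), "in A");          // S: A holds directly
    assert_eq!(check(&vec![S, S]), "in A"); // Vec<S>: A holds because S: A
}
```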

That’s it.


On reflection I think they were referring to both traits and trait objects instead (“Call tables”). I don’t know what more you can ask for than the Rust Book’s entry on trait objects. Not sure how you get “immature language” from this page:

https://doc.rust-lang.org/book/ch17-02-trait-objects.html


Trait objects are kind of a pain and there are some improvements that could be made there.

Here's a random example: if you want a trait object with more than one trait, that only works today if one of the two is an auto trait: https://play.rust-lang.org/?version=stable&mode=debug&editio...
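For readers who don't want to click through, a sketch of the rule the playground link demonstrates: you can't combine two ordinary traits in a trait object, but adding an auto trait like `Send` is fine.

```rust
use std::fmt::Debug;

// `dyn Debug + Send` is accepted: `Send` is an auto trait.
fn takes_obj(x: Box<dyn Debug + Send>) -> String {
    format!("{:?}", x)
}

// By contrast, combining two non-auto traits, e.g.
// `Box<dyn Debug + std::fmt::Display>`, is rejected today
// ("only auto traits can be used as additional traits").

fn main() {
    assert_eq!(takes_obj(Box::new(7i32)), "7");
}
```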


Once I see a language regularly being bashed, I know that language is generally accepted and therefore mature.


Counterpoint: V-lang


Wow, first I've heard of this. Looks neat.

Let me save others 6s https://vlang.io


[flagged]


That's gotta be the most active vaporware project I've seen!

A quick glance at their repo tells me:

- 6497 commits from 310 contributors.

- Yesterday alone they merged 6 Pull Requests created by different authors.

- Installations provided for Windows, macOS and Linux.

It does look immature but very active. Not what I'd call vaporware.

https://github.com/vlang/v


What makes you say it’s vaporware? Doesn’t it like...exist?


Pretty much every touted feature is WIP, definitely all the important ones. So yeah, that's vaporware.


Does the Doom demo actually work? That'd be enough for me to call it "not vaporware" by a long shot.


IIUC they use chocolate-doom, which has approximately 600 .c files, and replace one of them with a .v file. I don't understand what they mean when they say

> DOOM translated from C to V. Builds in 0.7 seconds (x25 speed-up).


The magic porting of C-Doom to V-Doom is still WIP


Aha, I took their marketing at face value. I never learn... :(


Check out some of the discussions from its announcement and early release from last year. There were a lot of statements that were (or at least seemed) false, and several delays in releasing source code (which increased people's conviction that it wasn't entirely legit).


No real argument about this; it seems to be essentially true. I just have never cared about anything in this article while writing Rust. The maturity I care far more about is "if I want a library, is it available, documented, API-stable, and relatively bug-free", which Rust does okay at.


The lack of a language spec hurts, but it's gotten better. E.g. there used to be no formal rules about what was OK to do in an unsafe block. There was a general idea that you shouldn't use different mutable references to the same thing at the same time, but in practice it was hard to tell what that really meant. Then they came up with Stacked Borrows, which is a formal specification for exactly that problem. Miri can even interpret your program and dynamically check that you don't violate Stacked Borrows.
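A rough sketch of the kind of question Stacked Borrows answers (function name invented; run the program under Miri to have the rules checked dynamically):

```rust
// Under Stacked Borrows, a raw pointer derived from a `&mut` may be used
// as long as its uses stay properly nested within that borrow.
fn bump_through_raw(x: &mut i32) -> i32 {
    let p = x as *mut i32;
    unsafe {
        *p += 1;
        *p
    }
    // What Stacked Borrows rules out is, e.g., stashing `p`, creating a
    // fresh `&mut *x`, and then writing through `p` again: Miri flags that.
}

fn main() {
    let mut v = 41;
    assert_eq!(bump_through_raw(&mut v), 42);
}
```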

The Go language spec is really awesome, but the core language is so much simpler, so it was probably a lot easier to come up with. E.g. if the spec included every detail about escape analysis, pragmas, assembly, and cgo, it would be significantly longer and more difficult to maintain.


In spite of its currently growing adoption, my prediction is that Rust will be replaced in the foreseeable future.

Rust brought a novel idea to the landscape of programming languages. The borrow checker is a great invention, and a great contribution to making applications more secure.

The language itself is way too complicated. The learning curve is steep. Improvements are made (e.g. lifetime elision) but they don't flatten the curve.

Macros are very hard to debug and understand. Which is even more of an issue when they are used by dependencies and something goes wrong. Error messages don't help. Macros are powerful but can hide too much magic. Plus, they are a language of their own.

"Why doesn't this structure have any methods?" "Oh, they are hidden in traits, provided by other crates, themselves leveraging other traits from other crates." "Still, it doesn't compile. Why?" "Oh, because of that generic that requires traits from yet another crate that are only available when some Cargo feature has been enabled." Once again, this is difficult to follow and debug.

Zero-cost abstractions are great, but in Rust, they are overused. This makes the language less accessible, but also constantly causes dependency issues due to incompatible versions.

Rust is powerful but it is not optimized for developer happiness. Fighting the compiler brings anger, not joy. Sure, the compiler may have good reasons to complain. But as a developer, I'm happier and more productive when code runs immediately, even if that means having to fix bugs later. This is from a happiness perspective, not a security one. But eventually, happiness turns into productivity.

It's too late to fix this. But now that Rust has demonstrated that memory safety and speed are not incompatible, future systems languages will feel obligated to provide the same properties, maybe by using mechanisms similar to Rust's.

So, my prediction is that Rust will slowly be replaced by new safe languages that will be more accessible, more productive, more stable, and bring novel ideas of their own. But Rust will have made history no matter what.


You are, of course, right on many of your points, but we have seen time and time again that languages succeed for other reasons. It is usually a matter of opportunity (i.e. a language comes with a mature library that solves a hot problem), not design. However, I am also unsure whether Rust is in the right spot to gain wide traction, or whether it will stay in the second tier of enthusiast-loved but not commercially adopted programming languages, like many other great languages of their times.


> The language itself is way too complicated. The learning curve is steep. Improvements are made (e.g. lifetime elision) but they don't flatten the curve.

I disagree that the ergonomics improvements that have been made to the language haven't changed the slope of Rust's learning curve, and I'm not entirely convinced that the language features are the totality of the problem when it comes to experienced developers picking Rust up[2]. I also believe that the language itself is not the only thing that affects the learning curve: the available libraries, documentation, tooling, platform support, and the surrounding community all affect how easy a language is to learn, and, as importantly, how likely people are to stick around.

> Macros are very hard to debug and understand. Which is even more of an issue when they are used by dependencies and something goes wrong.

I agree, in their current incarnation. Long term, Rust will have other ways of declaring and using macros. proc_macros are one post-1.0 change, but multiple people on the team have their sights set beyond that feature, on compile-time code expansion mechanisms that are easier to learn, understand, declare, use, and debug than macro_rules and proc_macros.

> Error messages don't help.

Things got much better in the 2018-2019 period for macro error messages in particular, but they are still a long way off from what I'd personally like to see. It is a hard problem that I am intimately familiar with. There's one particular change I would like to implement in the next year (tracking the spans of macro_rules arguments through operations on the macro body, so that if you call foo!(bar) and the problem is with bar, the underline points at bar itself, and not at both foo!(bar) and the place in foo's definition where bar was used), but improvements in this area are non-trivial and take time.

> Macros are powerful but can hide too much magic.

True, this is a problem not only for crates that rely heavily on macros but also for ones that rely heavily on complex trait bounds. I would personally wish that crate writers exercised some restraint in how complex their APIs are, considering how difficult it is to understand what went wrong when they're misused, but I have no authority to force anyone to do things the way I'd like, so I cope by 1) leading by example and 2) slowly but steadily improving diagnostics for patterns used in the wild.

> Plus, they are a language of their own.

They are. As alluded to but not explicitly said earlier, macro_rules were a placeholder for the 1.0 release. They are in every way an MVP, scheduled to be replaced at some point[1]. The thing is that they work "well enough™️", so there hasn't been a quick push to replace them, and we wouldn't want to rush a new way of doing something you can already do and get stuck with two subpar features. And proc_macros have their own host of issues to deal with, from wrapping your head around writing Rust code that generates Rust code, to dealing with the AST, finding the right crates to use, learning how to use the available hooks to provide reasonable error messages, etc.
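To illustrate "a language of their own", a small sketch (the macro name is made up): macro_rules has its own grammar of fragment specifiers and repetition operators, distinct from Rust proper.

```rust
// Fragment specifiers ($a:expr) and repetitions ($(...),+) are part of the
// macro pattern language, not of Rust itself.
macro_rules! maxish {
    ($a:expr) => { $a };
    ($a:expr, $($rest:expr),+) => {{
        let a = $a;
        let b = maxish!($($rest),+);
        if a > b { a } else { b }
    }};
}

fn main() {
    assert_eq!(maxish!(9), 9);
    assert_eq!(maxish!(1, 5, 3), 5);
}
```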

> "Why doesn't this structure have any methods?" "Oh, they are hidden in traits, provided by other crates, themselves leveraging other traits from other crates". "Still, it doesn't compile. Why?"

Agree, and that can be exacerbated further by the use of Derive and proc_macros, which can generate the impl for the "hidden" trait in a way that rust-analyzer and rg won't find it.
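A minimal sketch of the "hidden methods" effect (module and names invented for illustration): the method only becomes callable once the defining trait is in scope.

```rust
mod some_crate {
    // An "extension trait" that bolts a method onto i32.
    pub trait DoubledExt {
        fn doubled(&self) -> i32;
    }
    impl DoubledExt for i32 {
        fn doubled(&self) -> i32 { self * 2 }
    }
}

fn main() {
    // Without this `use`, `21i32.doubled()` fails with
    // "no method named `doubled` found".
    use some_crate::DoubledExt;
    assert_eq!(21i32.doubled(), 42);
}
```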

> "Oh, because of that generic that requires traits from yet another crate that are only available when some Cargo feature has been enabled". Once again, this is difficult to follow and debug.

This is one of the mentioned "hard problems" that diagnostics could help with, but nailing the experience down to make it "enjoyable" (or even bearable) requires changes and coordination between rustc, cargo, and, in some cases (though likely not this one in particular), crate writers.

> Zero-cost abstractions are great, but in Rust, they are overused.

In parts of the ecosystem they absolutely are. I don't think there's anything about the language itself that encourages their overuse; I think it is more of the "new toy" effect: "Oh! Neat feature! And this lets me make this problem unrepresentable! NEAT! Let's use it! (Wait, what does it look like when someone misuses this? OH, NO!)".

> This makes the language less accessible, but also constantly causes dependency issues due to incompatible versions.

> Rust is powerful but it is not optimized for developer happiness.

Again, I disagree, but I can see your point of view. IMO Rust optimizes for debuggability: the behavior of the code is always laid bare, which leads to verbosity (it might be partially hidden behind Derives or macros, but there is no "implicit" behavior). I find that this brings me joy when working on production-ready projects.

> Fighting the compiler brings anger, not joy.

It saddens me to hear that and would love to take any extra feedback in order to mitigate the anger it brings you.

> Sure, the compiler may have good reasons to complain. But as a developer, I'm happier and more productive when code immediately runs, even if that means having to fix bugs later.

How would you even find out about the bugs later? There's always the possibility that a buggy branch is never run, until it is, months after the fact.

> This is from a happiness view, not from a security perspective. But eventually, happiness turns into productivity.

I might guess part of the issue is the old "make as many pots as possible in the allotted time" vs. "take the allotted time and make a single great pot" experiment: more, faster iterations yield better quality in the same amount of time. This is a perfectly valid complaint about how fussy rustc can be, but the target is for rustc to be more akin to a pair programmer: "Hey! You forgot this trait bound here, so that it matches what you're calling. Add it over there and that should do it."

> It's too late to fix this. But now that Rust demonstrated that memory safety and speed were not incompatible, future system languages will feel obligated to provide the same properties, maybe by using mechanisms similar to Rust.

And I've gotten to the part I actually wanted to answer to :)

What do you feel would get in the way of making Rust easier to use? I agree that there are ergonomics-related things that could be done but won't be, because they would affect some of the use cases Rust targets (like embedded, kernels, or databases), such as auto-cloning or auto-boxing, which would make the language feel way higher level than it currently does. But I think there are lots of other things that could be done (thinking of the match pattern ergonomics work from some time back, where you don't have to write & in patterns nearly as much now) that would make the experience of writing code nicer without sacrificing any of the project's stated goals.
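For readers who missed it, a sketch of the match ergonomics change referred to above (function name invented):

```rust
// With match ergonomics (stabilized in 2018), matching on a reference
// binds subpatterns by reference automatically.
fn greet_len(opt: &Option<String>) -> usize {
    // Before the change, this needed patterns like:
    //     match opt { &Some(ref s) => s.len(), &None => 0 }
    match opt {
        Some(s) => s.len(), // s: &String; no `ref` or `&` required
        None => 0,
    }
}

fn main() {
    assert_eq!(greet_len(&Some(String::from("hi"))), 2);
    assert_eq!(greet_len(&None), 0);
}
```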

> So, my prediction is that Rust will slowly be replaced by new safe languages that will be more accessible, more productive, more stable, and bring novel ideas of their own. But Rust will have made history no matter what.

If Rust's impact is nothing else but improving the larger ecosystem through the indirect impact of raising the bar for whatever comes after, then it will have been worth it.

[1]: https://github.com/rust-lang/rfcs/blob/master/text/1584-macr...

[2]: https://youtu.be/Z6X7Ada0ugE?t=705 Transcript of the relevant excerpt:

> I have the unsubstantiated theory that experienced developers have a harder time than less experienced developers when learning Rust. You need to forget a lot of constructs that work well enough in the languages you already know because they introduce things that go against the single owner enforcement that Rust has, whereas somebody with less experience will simultaneously accept restrictions as "just the way it is" and not seek out more performant constructs that can be much harder to understand or implement. Rust has a curse (it has many, but this one is critical): inefficient code is generally visible. Experienced developers hate to notice that their code is inefficient. They will recoil at seeing `Arc<RefCell<T>>`, but won't bat an eye at using Python. I know because I have the same instinct! This makes it much harder to learn Rust for experienced developers because they start with the "simple Rust code that will work but is slightly inefficient" and in an effort to improve it they land squarely in parts of the language they haven't yet developed a mental model for.
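As an illustration of the `Arc<RefCell<T>>`-style code the talk mentions (using the single-threaded `Rc<RefCell<T>>` cousin for brevity; function name invented), the "simple Rust that works but looks inefficient" is just shared, dynamically checked mutable state:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Works fine, but experienced developers recoil because the cost
// (reference counts, runtime borrow checks) is visible in the types.
fn push_through_alias() -> usize {
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared);     // second owner of the same Vec
    alias.borrow_mut().push(4);         // mutate through the alias
    let len = shared.borrow().len();    // observe through the original
    len
}

fn main() {
    assert_eq!(push_through_alias(), 4);
}
```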


[flagged]


> If you say Java is optimized for developer happiness, or worse, Haskell, I'll pass on this cheap trolling attempt.

Haskell with stack is a pretty happy place for professional software development in my experience.


Everyone is going to have a different definition of "mature", and that's fine :) Obviously lots of respect for Kostya. I do think that framing these issues as "maturity" is a good framing; fundamentally, he's right. A lot of this stuff has to do with Rust being so young, and in the future it will be taken care of. I would argue that this is a significantly higher maturity bar than most people actually need, and that Rust is more mature in other areas and so may be mature enough for other people, but that's a different thing.

My take on the state of these issues:

> Rust does not have a formal language specification... I understand that adding new features is more important than documenting them but this is lame.

Most languages do not. It also really depends on what you mean by "formal."

It's not about being more important, it's that we value stability very strongly, and don't have the ability to document things with the guarantees we'd prefer. You might call it... not mature enough yet :)

There's been a bunch of movement here, I'm excited to see it continue to develop!

> Function/method calling convention. ... I’m told that newer versions of the compiler handle it just fine but the question still stands

The objection here doesn't have to do with calling conventions, actually, this is about "two-phase borrowing," described in a series of blog posts ending here http://smallcultfollowing.com/babysteps/blog/2017/03/01/nest...

I believe this will get even better with polonius https://nikomatsakis.github.io/rust-belt-rust-2019/

Regarding argument evaluation order, technically it is not yet documented https://github.com/rust-lang/reference/issues/248 but has been left-to-right for basically forever https://internals.rust-lang.org/t/rust-expression-order-of-e... and I actually thought that it was documented as such. I would expect this to shake out the exact same way as struct field destruction order, that is, something that's been one way for a long time and so we wouldn't change it even if maybe it's a good idea to.
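A quick way to observe that long-standing left-to-right order (names invented; this is observed behavior on current rustc, not, per the linked issue, something the reference formally guarantees yet):

```rust
use std::cell::RefCell;

// Record the order in which the two arguments are evaluated.
fn observed_order() -> Vec<i32> {
    let log = RefCell::new(Vec::new());
    let tag = |v: i32| {
        log.borrow_mut().push(v); // note which argument ran
        v
    };
    let pair = |a: i32, b: i32| (a, b);

    let _ = pair(tag(1), tag(2));
    log.into_inner()
}

fn main() {
    assert_eq!(observed_order(), vec![1, 2]); // left-to-right on current rustc
}
```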

> Traits... the problem is not me being stupid but rather the lack of formal description on how it’s done and why what I want is so hard. Then I’d probably at least be able to realize how I should change my code to work around the limitations.

Upcasting/downcasting is rarely used, and so has less love, generally, it's true.

> First of all, bootstrapping process is laughably bad.

So, Kostya acknowledges

> Of course it’s a huge waste of resources for rather theoretical problem but it may prove beneficial for compiler design itself.

Which is I think the way most feel about it. However, there is some desire to improve this, for other, related reasons. https://matklad.github.io/2020/09/12/rust-in-2021.html talks about some of them.

> Then there’s LLVM dependency.

While this stuff is all true, we wouldn't be where we are without it. Everything has upsides and downsides.

> And finally there’s a thing related to the previous problem. Rust has poor support for assembly.

He mentions asm; we're almost there! It took some time because it is not a simple problem. At the end of this, we'll have better support than C or C++ by his metrics; these features are not part of either language's standard, so given his previous comments about maturity, I find this one a little weird, but it is what it is. :)

> There’s also one problem with Rust std library that I should mention too. It’s useless for interfacing OS.

Yes, the intention of the standard library is to be portable, so that's not really a goal.

> But the proper solution would be to support at least OS-specific syscall() in std crate

This may in fact be a good idea! I'm not sure how much use it would actually get.


> In C it’s undefined because it depends on how arguments are passed on the current platform (consider yourself lucky if you don’t remember __pascal or __stdcall). In Rust it’s undefined because there’s no format specification to tell you even that much.

>> Regarding argument evaluation order, technically it is not yet documented https://github.com/rust-lang/reference/issues/248 but has been left-to-right for basically forever https://internals.rust-lang.org/t/rust-expression-order-of-e... and I actually thought that it was documented as such.

I am aware the word "undefined" references the order of evaluation. However, I just want to clear up possible confusion on the matter. Code that depends on evaluation order doesn't produce undefined behaviour; it produces unspecified behaviour. (This is not directly a reply to Steve, who I am sure knows more about this than I ever will.)


And there is no guarantee that the order your compiler decides on has anything to do with the calling convention.


I agree in general, but I do feel like one counterargument needs to be brought up:

> Most languages do not (have a formal specification)

Most languages also aren't trying to replace languages that do. C and C++ are both languages that Rust, AFAIK rather officially, aims to replace in some areas. It can be a great language like so many others, but if it wants to replace these old giants, it needs a proper spec, maybe in the form of an ISO standard. Of course that will come in its own time, but that's a good indicator of when a language can compete as an answer to the question of "what tech will we use for our next big, important, and highly specialized project?".

This is the perfect indicator of it not being as mature as what it aims to replace.


I'd argue that when Rust is considered as an alternative in one of the domains where C and C++ are prevalent, the lack of a formalized spec actually does hurt it today. For example: Nvidia evaluated various languages to adopt for their "Safe Autonomous Driving" project. Rust was considered but didn't win. One of the reasons was literally:

"Does not have a formalized spec"

See [1] page 35.

[1] https://www.slideshare.net/AdaCore/securing-the-future-of-sa...


The Nvidia thing is part of why I wrote my comment, too, yeah. It was really eye-opening to see the requirements that some big players have.


Sure, though you can make an argument about the relative merits; it is possible that the heavyweight ISO process would have strangled Rust had we started there too early. I do agree that this is why "maturity" is a decent framing for this criticism; after all, C did not have a spec at this point in its life.

And also, about the invocation of "formal" there...


The difficulty in something like an ISO standard comes from conflicts between stake-holders, not really anything intrinsic to drafting a standard. In C's case, the problem is that many compilers are developed for C at cross-purposes to one-another; GCC and Clang want different things from C than embedded-cross-compiler toolchain authors do; than JIT authors do; than creators of "child" languages like Objective-C or OpenCL C do; etc. The "work" of C standardization is in getting these people to compromise.

Rust doesn't have that problem; there aren't yet any alternative Rust compilers that have any other purpose than to run as a batch-scheduled crate-at-a-time compile step at the command-line.

In such a case, where there's only one real stake-holder, "standardization" becomes less about declaring what should happen; and more about specifying what does happen, in exacting detail, such that someone could build an alternative conforming implementation from the spec without looking at the source of your reference implementation.

I don't feel like the existence of such a descriptive specification would have "strangled Rust" at any point. At most, this would have roughly doubled the work of any fix: writing the code, and then writing the change in the spec. But it wouldn't have actually been double the overall labor overhead, since the increased clarity-of-purpose of modifying the spec to declare a change in intention, would likely have mooted a lot of requesting-clarification and debating at code-review time.

But besides, software-engineering as a discipline now has tools like Behavior-Driven Development to minimize the costs of maintaining a parallel descriptive spec for a project. BDD tests are just regular tests that embed a lot of descriptive strings in them—those strings being words you are already mostly thinking at the time of writing the test. So they're only a little more costly than writing ordinary tests (which the Rust compiler already has), yet can also be compiled out into a descriptive spec. (And then you can diff the generated spec, between versions, and turn that diff into the spec errata for the "minor specification addendum" of that minor release.)


> The difficulty in something like an ISO standard comes from conflicts between stake-holders, not really anything intrinsic to drafting a standard.

Sort of; ISO has some rules that are antithetical to Rust's ethos, like requiring that conversations not be recorded. Rust's development chooses when to be public and when to be private, where it makes sense.

I don't actually know if the "meet in person" aspect is a formal ISO rule or a peculiarity of the C and C++ committees, but that would be another vast difference that matters a lot. Especially at this historical moment.

> Rust doesn't have that problem;

We do have this problem, it's just not driven by compiler authors, but by the relevant stakeholders directly. The language team and the compiler team, while sharing some people, are separate.

> less about declaring what should happen; and more about specifying what does happen

This is not how the process plays out in Rust, though you're right that it could, if the compiler team wanted to act in bad faith.


C was almost 20 years old by the time it got a standard. It's clearly not that critical.


Is an OS-specific `syscall` at all useful if your OS isn't called "Linux"?


Depends on the OS. We also do include some specific things, see https://doc.rust-lang.org/stable/std/os/index.html and https://doc.rust-lang.org/stable/core/arch/index.html
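For instance, the Unix-only extension traits under std::os; a sketch (function name invented; it only exists on Unix-like systems):

```rust
#[cfg(unix)]
fn root_inode() -> std::io::Result<u64> {
    // MetadataExt is a Unix-specific extension trait from std::os;
    // the `ino` method does not exist on other platforms.
    use std::os::unix::fs::MetadataExt;
    let meta = std::fs::metadata("/")?;
    Ok(meta.ino())
}

fn main() {
    #[cfg(unix)]
    assert!(root_inode().is_ok());
}
```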


Sure, platform-specific things can be useful. But that's exactly what `syscall()` is. As far as I'm aware, other platforms don't have any real equivalent; on them, syscalls must be made via libraries (as Go rather famously found out the hard way).


Yes, you're right that most platforms don't have something stable here. I was thinking of the problem more abstractly.


Don't most platforms have a reasonably stable interface for functions that are supposed to be reached from userspace? When does it actually matter whether that border coincides with where execution privileges are raised?


They do: the interface is (usually) libc (or some equivalent), not the actual details of how libc makes said call.

For an example of how this can play out, the parent is referring to things like https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/


On Windows, it can talk to ntdll (which is the blessed interface to the kernel from userspace code).

Ditto libsystem on macOS.

FreeBSD also provides a stable syscall interface, like Linux.


`syscall()` is a very thin wrapper around calling the kernel directly by number. ntdll and libsystem aren't equivalent; they are more similar to `libc`. Neither Windows nor macOS has a stable system call interface, so you have to dynamically link to the library and call through the stable library interface instead.

I can't find anything that guarantees FreeBSD's system call ABI. Do you have a source? It would be very interesting if, for example, FreeBSD 10 applications that use syscalls can run on modern FreeBSD without a compatibility layer. However, if the FreeBSD project does not provide guarantees it would be folly to rely on this behaviour in the future. If it does provide such a guarantee then I stand corrected.

EDIT: After some more research, it seems the FreeBSD kernel needs the `COMPAT_FREEBSD10` option enabled for my hypothetical example to work. The default options for amd64 include compatibility options back to FreeBSD 4. Defaults for other platforms seem to differ (perhaps depending on when the platform was first supported).

I can't find good documentation on whether these provide full compatibility or whether any `COMPAT_` options could be dropped in future versions.


> ntdll and libsystem aren't equivalent, they are more similar to `libc`

I disagree.

System calls on linux comprise the interface provided to applications to talk to the kernel.

On Windows, ntdll serves the same function: it is itself a 'very thin wrapper around calling the kernel directly by number'. (Especially important since a libc may be hard to come by on Windows.)

(Libsystem it seems I was mistaken about; it looks like that's just a bundle of libc, libm, libpthread, etc. Though possibly libsystem_kernel is nearer the mark? Difficult to find information on the subject, and I don't have a mac.)


Well, I feel like one should focus on the more stable POSIX stuff and then also implement a wrapper for Windows.

That will work for most OSes out there, plus Windows.


It's Linux-specific, not POSIX-specific.


Yeah, I get why e.g. GCC would want to bootstrap itself, but why would you want that in general? Also, writing the compiler in its own language would seem to me to just make debugging the compiler much harder...


You want to be able to have the trust chain all the way to a well known "good" version. As rustc uses nightly features, the only (reliable) way to get the current version without breaking the chain is to compile every version with the prior version. This can be thought of as an "academic" problem, but some OS vendors do insist on doing this and it is annoying and time consuming.

Writing a compiler in its own language has a bunch of benefits.

Early on, it lets both evolve in tandem, even before you know what the language itself might be. Having real-world experience with a complex enough codebase in the language will inform some design decisions. Things that are hard to do might get ergonomic work poured into them, sharp edges filed down. Things that are too hard to implement, or that might cause exponential evaluation, might be redesigned to allow for a linear algorithm.

Later, having the compiler written in its own language is beneficial for contributors: people who use the language can jump in and help with the development of the compiler. This has the caveats of any large codebase, but it certainly was my case. I would go as far as saying that I really learned Rust through my rustc contributions. (BTW, doing that has the nice benefit of fixing your mental models of the language to actually match reality, instead of some approximation based on the documentation and observed behavior.)

Finally, putting the debugging scaffolding in place will be made a priority in order to debug the compiler itself, so even early users of the language will benefit from some tooling in that area, however crude it might be at the start.


Self-hosting is a kind of rite of passage for programming languages. Another reason to aim for self-hosting is that it means it's now viable to only use that language, for instance when targeting a new hardware platform: via cross-compilation initially, and then self-hosting after. If you don't have a self-hosting language, you either always cross-compile or you port two languages.

That is: CRust (a hypothetical C-based Rust compiler) can be made to target XX99 hardware, but in order to run CRust on that hardware you also have to make the C compiler support it. Achieving self-hosting, especially for a language that's targeting low-level capabilities like Rust is, is rather important.


It's not loading for me, so here's an archive: https://web.archive.org/web/20200919120933/https://codecs.mu...


The first comment is amusingly prescient:

> Aw, dang it. This post is going to get out and cause this server to crash again, isn’t it?


Anyone interested in formal language specification should check out the K framework! This space is pretty mind-blowing.

http://www.kframework.org/index.php/Main_Page


Adding a few more things to Steve’s response:

> Rust does not have a formal language specification […] A proper mature language (with 1.0 in its version) should have a formal specification

Python doesn’t have a formal specification. It doesn’t even have a semi-formal specification. I would consider Python a proper mature language. (… even if it doesn’t have 1.0 in its version!)

Some recently-published work in the general direction of formally specifying Rust behaviour: https://people.mpi-sws.org/~jung/thesis.html

> In Rust it’s undefined because there’s no format specification to tell you even that much.

That seems to me a very unreasonable definition of “undefined”. By that token, nothing in Rust is defined, which renders the term “undefined” devoid of meaning and completely useless. Rather, the implementation defines the specification, which, yes, is weaker than a formal specification, but not that much weaker. (Rust’s stability guarantees do allow it to retcon, but only if safety is on the line, or sometimes if you can convince people that affected code probably doesn’t exist, or was wrong anyway. Changing argument order evaluation in this sort of case would break plenty of real-life code, and so will never happen.)
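For what it’s worth, rustc evaluates function arguments left to right, and the Rust Reference documents this for call expressions, even without a formal spec. A quick illustration (the `pair`/`eval_order` names are made up for the sketch):

```rust
use std::cell::Cell;

fn pair(a: i32, b: i32) -> (i32, i32) {
    (a, b)
}

fn eval_order() -> (i32, i32) {
    // A side-effecting "argument generator": each call returns the next count.
    let counter = Cell::new(0);
    let next = || {
        counter.set(counter.get() + 1);
        counter.get()
    };
    // Arguments are evaluated left to right, so `a` receives 1 and `b` receives 2.
    pair(next(), next())
}

fn main() {
    println!("{:?}", eval_order()); // (1, 2)
}
```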

> […] traits […] the problem is not me being stupid but rather the lack of formal description on how it’s done and why what I want is so hard.

I disagree that the lack of a formal description of how it’s done is the problem, though such a description would certainly help. I feel that the real problem here is that you’re trying to treat traits as something that they’re not, trying to do things that you’re used to doing in dynamic/GC languages that just don’t map to Rust because of it being a thin abstraction over what can be fastest. Upcasting and downcasting trait objects is really rare, because other constraints of the language make them just not very useful operations. The main reason you want those operations in other languages is inheritance, not interfaces, and Rust doesn’t do inheritance, and traits aren’t inheritance.
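To make that concrete: the downcasting the article wants is possible, it’s just opt-in via `std::any::Any` rather than implicit as in inheritance-based languages. A minimal sketch (the `Shape`/`Circle` names are invented for illustration):

```rust
use std::any::Any;

trait Shape: Any {
    fn area(&self) -> f64;
    // Downcasting has to be wired up explicitly through `Any`;
    // it is not built into trait objects.
    fn as_any(&self) -> &dyn Any;
}

struct Circle {
    r: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
    fn as_any(&self) -> &dyn Any {
        self
    }
}

// Recover the concrete type from a trait object, if it matches.
fn radius_of(shape: &dyn Shape) -> Option<f64> {
    shape.as_any().downcast_ref::<Circle>().map(|c| c.r)
}

fn main() {
    let shape: Box<dyn Shape> = Box::new(Circle { r: 1.0 });
    println!("area = {}, radius = {:?}", shape.area(), radius_of(&*shape));
}
```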

> rustc problems

The bootstrapping process is not laughably bad; the developers have merely prioritised their own ergonomics rather than pandering to making proving a theoretical problem false easier. What’s more, they’ve prioritised actually using new functionality, which helps make sure that it both works properly and is the right new functionality. So I find two or three good reasons for doing it the way they have, and one weak reason not to do it that way.

The LLVM dependency: I feel that this complaint is more about there being only a single Rust backend, rather than about the LLVM dependency. This point is then debatable. Yeah, Rust would be more mature if it had more backends (or better, more full implementations), but does having only a single implementation make it immature? Eh. Plenty of mature languages functionally have but a single implementation.

———

I definitely consider Rust not yet mature, I just don’t think that most of the reasons in this article are legitimate. I would focus in no small part on documentation, just not requiring formality of specification or documentation before I’d consider those parts mature. But note that I said that I’d consider Rust not yet mature, rather than the Rust programming language. Because in most regards I do actually consider the programming language itself mature, and a lot of its tooling (more than popular languages thrice its age), though there are definitely still gaps in both. But the ecosystem around the programming language, that’s the area where it’s not yet mature. Rust is much more complex than many languages (e.g. Go), so it takes a lot longer to fill in the spaces in library availability, and there are many very significant gaps that haven’t been filled yet. GUI, for example, is a very immature space. Audio is… eh, it’s getting somewhere, but I wouldn’t call it mature. Web is approaching maturity for some sorts of tasks, but is not generally mature.


> The bootstrapping process is not laughably bad, the developers have merely prioritised their own ergonomics

Ironically, I would disagree here; the language developers have not prioritized their own ergonomics, which is why building rustc is a big pain.

I do agree that making it easier to bootstrap in that way would not necessarily help them... there is a sense in which you're right, I just found the framing kind of funny, since I'd argue the same point but for completely opposite reasons.


Building rustc may be a bit of a pain (big pain? It’s more than five years since I last built rustc from source, but I don’t recall having any particular difficulty with it), but I was speaking specifically of the bootstrapping part, which I don’t feel is at all terrible. Specifically targeting version n − 1 means that you can adopt new language features, which you couldn’t if you were targeting version 1.0 forever, or writing it in another language. That’s what I was meaning by prioritising their own ergonomics—allowing them to use new features. With that clarification, I’m interested in whether you agree or disagree. You’ve been a lot closer to the action than I have.


Yes, I do think you're right about this generally, which is what I was getting at at the end.

This doesn't have to be all-or-nothing, though, if you check out matklad's post I linked, he points to a version where the standard library still gets to use unstable features, but the compiler does not, which is an interesting hybrid that may retain the best of both.


I find it odd that inline assembly is considered the sign of a mature systems language when not even all C/C++ compilers have it (for example, last I checked, Microsoft's x86-64 C++ compiler lacks it).


Well the server for this page seems to be dead. I wonder what language it's written in? ;)


Looks like Wordpress to me.


The problem with this article is that the first example is wrong, as in it compiles fine. I could even tell without running it that it would compile.

Therefore the article as a whole cannot be taken seriously, and reflecting that I stopped reading it immediately after.


It used to not work, and the article does acknowledge that it works now. You missed that since you bounced, which is why it's a good idea to fully digest a thing before deciding whether it's terrible or not.


Just to add some context: the code works since 1.36 for 2015 edition code, when NLL was deemed fully compatible and backported to the older edition[1][2] (released in July of 2019[3]), and since 1.31[4] with the stable introduction of the 2018 edition back in December of 2018[5].

[1]: https://godbolt.org/z/er3zT6

[2]: https://godbolt.org/z/zKPao4

[3]: https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html#nll-f...

[4]: https://godbolt.org/z/WKczbv

[5]: https://blog.rust-lang.org/2018/12/06/Rust-1.31-and-rust-201...
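For anyone curious what changed: the classic shape of code that the old lexical borrow checker rejected but NLL accepts is a shared borrow whose last use ends before a later mutation. A generic illustration of the pattern (not the article's exact example; `nll_demo` is a made-up name):

```rust
fn nll_demo() -> Vec<i32> {
    let mut scores = vec![1, 2, 3];
    let first = &scores[0];        // shared borrow begins...
    println!("first = {}", first); // ...and, under NLL, ends at its last use here
    scores.push(4);                // so this mutable borrow is now allowed
    scores
}

fn main() {
    // Pre-NLL, the shared borrow lasted to the end of the lexical scope,
    // making the `push` a compile error.
    println!("{:?}", nll_demo());
}
```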


Thank you! There seems to be a fair amount of people who, when disproving a single thing in a list, seem to take that as reason to invalidate the rest of the list and reaffirm whatever their belief (bias, desire, ??) is instead.


Like someone said above, who has time to wade through all the verbal feces before deciding that an article is worth the time?

In this day and age, you need to decide quickly, or you'll be wasting a lot of your time.

Now I don't know about you, but my time is valuable.


Someone who isn’t ignorant has time to do it. I don’t pass over reading just because I can disprove a single portion. I value my time, but also my ability to digest and understand all viewpoints regardless of my actual views on them. I suggest you try and do the same.


Ignorance and having time are 2 orthogonal things. Your argument is not even close to valid. And I haven't even mentioned the ad hominem, which is an automatic disqualifier for any argument.


You’re obviously too intelligent for me, what a zinger. This sort of mental gymnastics you play is really detrimental to long term happiness, but to each their own.


I mean, there's not taking it seriously, then there's taking to social media to bash it in the comments. If you're going to do that, then at least take the time to digest it. You don't have to inflict your willful ignorance on to others.


I'm not doing that at all. I'm merely saying it isn't worth the time.


Or the author should/could improve their writing style. Just a thought. One concrete improvement would be not to moan about things that aren't even true (anymore).


Let's be real, life is too short to read every article on the internet before deciding if it's worth your time.


It depends on your goals, and yes, I do agree that life is too short to read everything. But if you don't have the time to fully digest what's written, you also don't have the time to trash talk it, in my humble opinion. That's what I mean by "deciding if it's terrible or not," that is, I see a personal distinction between "did not pique my interest enough to bother" and "this is actively bad." You have to actually read the whole thing to make the latter determination, IMHO.


Well in all fairness, I might've kept reading had it not been for that blatant factual error.


Absolutely. But if you make that decision, why waste more time commenting on it? I don't mean this to be negative, just food for thought.


Generally I don't do this (go ahead, have yourself a google). But the error was so blatant that I just couldn't not say anything about it. In a sense the article baited me into it.



