
Ada was designed to solve different problems in harsher environments than other PLs at the time. Mostly, it was designed for the defense and aeronautics industries and had to compete against other PL designs to become a govt standard, similar to how weapons of war are developed and chosen. Think developing for hardcore code audits. There is no way the language could check all the boxes and remain compatible with, say, Pascal or Modula syntax.

Analytics with type-safe raw SQL (including DuckDb’s awesome extensions) is pure gold:

https://github.com/manifold-systems/manifold/blob/master/doc...


Precisely. 'member CUA?

Many will argue that Oracle is overreacting, and they may not be entirely wrong. But as someone who reviews PRs for open source languages and tooling, their interim actions strike me as both sound and measured.

The number and size of AI-assisted PRs have reached a tipping point. Reviewing them already consumes a significant amount of time, and even filtering out the obvious ones is a drag. More importantly, the risk/reward balance is shifting in the wrong direction. For now, placing constraints on AI-assisted contributions feels like a sensible way to manage that risk.

Will this policy reject or slow down otherwise beneficial PRs? Potentially. But that is the tradeoff. Until there is a better way to offset risk, this one is probably the least bad strategy.


In isolation, yes, I agree with you. But in the context of the cornucopia of other "carefully evaluated" features mixed into the melting pot, C# is a nightmare of language identities - a jack of all trades, master of none, choose your dialect language. No thanks.


> C# is a nightmare of language identities - a jack of all trades, master of none, choose your dialect language.

I honestly have no idea where you would get this idea from. C# is a pretty opinionated language, and its worst faults all come from version 1.0, when it was mostly a clone of Java. They've been very carefully undoing that for years now.

It's a far more comfortable and strict language now than before.


I can see where he's coming from. For example, `dynamic` was initially introduced to support COM interop when Office add-in functionality was introduced. Should I use it in my web API? I can, but I probably shouldn't.

`.ConfigureAwait(bool)` is another where it is relevant, but only in some contexts.

This is precisely because the language itself operates in many runtime scenarios.


I guess that's a good point. I admit I haven't used or seen `dynamic` in so long that I completely forgot about it.

But I'm not sure that's really a problem. Does the OP expect everyone to use an entirely different language in every single context? I have web applications and desktop applications that interact with Office that share common code.

Even `dynamic` is pretty nice as far as weird dynamic language features are concerned.

Interestingly enough `.ConfigureAwait(bool)` is entirely the opposite of `dynamic` -- it's not a language feature at all but instead a library call. I could argue that might instead be better as a keyword.


It is a library call, but one that is tied to the behavior of a language feature (async/await).

The reason I bring it up is that it is another one of those things where it matters in some cases depending on what you're doing.

Look at the lengths that Toub had to go to in order to explain when to use it: https://devblogs.microsoft.com/dotnet/configureawait-faq/

David Fowl concludes in the comments:

    > That’s correct, most of ASP.NET Core doesn’t use ConfigureAwait(false) and that was an explicit decision because it was deemed unnecessary. There are places where it is used though, like calls to bootstrap ASP.NET Core (using the host) so that scenarios you mention work. If you were to host ASP.NET Core in a WinForms or WPF application, you would end up calling StartAsync from the UI thread and that would do the right thing and use ConfigureAwait(false) internally. Request processing on the other hand is dispatching to the thread pool so unless some other component explicitly set a SynchronizationContext, requests are running on thread pool threads.
    > 
    > Blazor on the other hand does have a SynchronizationContext when running inside of a Blazor component.
So I bring this up as a case of how supporting multiple platforms and runtime scenarios does indeed add some layer of complexity.
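A rough Java analog of this continuation-context choice can be sketched with `CompletableFuture`: `thenApplyAsync(fn, executor)` pins the continuation to a specific context, much as awaiting without `ConfigureAwait(false)` resumes on a captured `SynchronizationContext` in C#. The class and thread names here are made up for illustration; this is a sketch of the idea, not C#'s actual mechanism.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ContinuationDemo {
    // Runs a background computation, then forces the continuation onto a
    // dedicated "ui" thread, mimicking what awaiting WITHOUT
    // ConfigureAwait(false) does when a SynchronizationContext is captured.
    public static String run() throws Exception {
        ExecutorService ui = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "ui-thread");
            t.setDaemon(true);
            return t;
        });
        try {
            return CompletableFuture
                    .supplyAsync(() -> "work done")
                    // thenApplyAsync(fn, executor) pins the continuation to a
                    // chosen context; plain thenApply would be closer in spirit
                    // to ConfigureAwait(false): run wherever is convenient.
                    .thenApplyAsync(
                        s -> s + " on " + Thread.currentThread().getName(), ui)
                    .get(5, TimeUnit.SECONDS);
        } finally {
            ui.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // work done on ui-thread
    }
}
```

The point is the same one the FAQ makes: whether the continuation runs on a special context or on a worker pool is a per-call-site decision, which is exactly the kind of context-dependent knob being discussed.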


> It is a library call, but one that is tied to the behavior of a language feature (async/await).

This is a good example of C#'s light touch on language design. Async/await creates a state machine out of your methods, but that's all it does. The language itself delegates entirely to the platform/framework for the implementation. You can swap in your own implementation (just as is possible with this union feature).

> So I bring this up as a case of how supporting multiple platforms and runtime scenarios does indeed add some layer of complexity.

I agree that's true. A language that doesn't support multiple platforms and runtime scenarios can, indeed, be simpler. However, that doesn't make the task simpler -- now you just have to use entirely different languages with potentially different semantics. If your task is just one platform and one runtime scenario, the mental cost here is still low. You don't actually need to know those other details.


> This is a good example of C# light-touch on language design.

Is it? F# code doesn't even need ConfigureAwait(false); one simply uses backgroundTask{} instead of task{} to ignore SynchronizationContext.Current. This didn't require any language design changes at all (both are computation expressions), but it would for C#, precisely because C# delegates this choice to the framework.


dynamic was also added as part of DLR, initially designed for IronPython and IronRuby support.

This inspired the invokedynamic bytecode in the JVM, which has brought many benefits and sees much more use than the original .NET features, e.g. in how lambdas get generated.
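To make the lambda connection concrete: every Java lambda compiles to an invokedynamic call site whose bootstrap method is `LambdaMetafactory.metafactory`. The sketch below does by hand what javac-generated bytecode does at run time (class and method names here are invented for the example):

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.Supplier;

public class IndyDemo {
    static String hello() {
        return "hello from a generated lambda";
    }

    // Builds a Supplier<String> the way javac-compiled lambdas do:
    // an invokedynamic call site bootstrapped by LambdaMetafactory.
    @SuppressWarnings("unchecked")
    public static Supplier<String> makeSupplier() throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle impl = lookup.findStatic(
                IndyDemo.class, "hello", MethodType.methodType(String.class));
        CallSite site = LambdaMetafactory.metafactory(
                lookup,
                "get",                                 // Supplier's SAM method
                MethodType.methodType(Supplier.class), // factory signature
                MethodType.methodType(Object.class),   // erased SAM signature
                impl,                                  // the lambda body
                MethodType.methodType(String.class));  // instantiated signature
        return (Supplier<String>) site.getTarget().invoke();
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(makeSupplier().get());
    }
}
```

Running `javap -c` on any class containing a lambda shows the same pattern: an invokedynamic instruction instead of an anonymous inner class.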


If it’s not for you, I guess that is ok. But from your comment I would also deduce that you never used it professionally. After so many different languages, it’s the only one I always come back to.

The only things I wish for are Rust's borrow checker and memory management, and a more natural AOT story.

Besides that, for me, it is the general purpose language.


General purpose != multiple dialects; that is the trouble with languages like this. C# is a Tower of Babel.


>a jack of all trades

Yes, C# is a jack of all trades and can be used for many things: web, desktop, mobile, microservices, CLI, embedded software, games. It's probably not fit for writing operating system kernels due to the GC, but most areas can be tackled with C#.


> Probably is not fitted for writing operating systems kernels

Midori would like to have a word with you:

https://en.wikipedia.org/wiki/Midori_(operating_system)

https://joeduffyblog.com/2015/11/03/blogging-about-midori/


Many systems programming languages with GC have existed since the 1970s; we don't see much adoption, mostly due to developer culture and monetary issues with management.


C# is a perfect example of feature envy, but because "Java sucks" C# must be the best thing ever in the world of computing. Orthogonality and coherence be damned.


F# units are handy, but nothing like Manifold units (Java):

https://github.com/manifold-systems/manifold/tree/master/man...
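For readers unfamiliar with the idea, here is a hand-rolled Java sketch of what unit-typed values buy you. This is not Manifold's API (Manifold provides far richer operator-based syntax via its compiler plugin); it just shows the core benefit both F# units and Manifold units share: mixing dimensions becomes a compile-time error.

```java
public class UnitsDemo {
    // Each dimension is its own type, so adding metres to seconds, or
    // passing a raw double where a length is expected, fails to compile
    // rather than producing a silently wrong number.
    record Metres(double value) {
        Metres plus(Metres other) { return new Metres(value + other.value); }
        MetresPerSecond per(Seconds s) {
            return new MetresPerSecond(value / s.value());
        }
    }
    record Seconds(double value) {}
    record MetresPerSecond(double value) {}

    public static void main(String[] args) {
        MetresPerSecond v = new Metres(100).per(new Seconds(10));
        System.out.println(v.value()); // 10.0
        // new Metres(100).plus(new Seconds(10)); // would not compile
    }
}
```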


> Pulling them all into C# just makes C# seem like a big bag of stuff, with no direction.

Agreed. Java is on the same trail.


Care to elaborate? I think Java is showing remarkable vision and cohesion in its roadmap. Its released features are forward compatible and integrate nicely into existing syntax.

I work a lot with C# these days and wish it had as cohesive a syntax story. It often feels like "islands of special syntax that make you fall off a cliff".


It's honestly hilarious, since the person you're replying to has heavily advocated for Manifold, which is a compiler extension to Java that adds every little feature to the language.

https://github.com/manifold-systems/manifold


> It doesn't cover ad-hoc unions

Yes and no. C# unions aren't sealed types; that's a separate feature. But they are strictly nominal - they must be formally declared:

    union Foo(Bar, Baz);
Which isn’t at all the same as saying:

    Bar | Baz
It is the same night-and-day difference as between tuples and nominal records.
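The nominal side of that distinction has a direct Java analog in sealed interfaces, which may help if the C# syntax above is unfamiliar (the type names below are invented for illustration):

```java
public class UnionDemo {
    // A nominal union: the set of alternatives is fixed at the declaration
    // site, rather than spelled out ad hoc at each use site as `Bar | Baz`
    // would be.
    sealed interface FooResult permits Bar, Baz {}
    record Bar(int code) implements FooResult {}
    record Baz(String message) implements FooResult {}

    static String describe(FooResult r) {
        if (r instanceof Bar b) return "Bar(" + b.code() + ")";
        if (r instanceof Baz b) return "Baz(" + b.message() + ")";
        throw new IllegalStateException("unreachable: sealed hierarchy");
    }

    public static void main(String[] args) {
        System.out.println(describe(new Bar(42))); // Bar(42)
    }
}
```

An ad-hoc (structural) union like `Bar | Baz` needs no such declaration, which is exactly the feature being distinguished here.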


Hi there! One of the C# language designers here, working on unions.

We're very interested in this space. And we're referring to it as, unsurprisingly, 'anonymous unions' (since the ones we're delivering in C# 15 are 'nominal' ones).

An unfortunate aspect of language design is that if you do something in one version and not another, people think you don't want the other (not saying you think that! but some do :)). That's definitely not the case. We just like to break things up over many versions so we can get the time to see how people feel about things and where our limited resources can best be spent next. We have wanted to explore the entire space of unions for a long time. Nominal unions. Anonymous unions. Discriminated unions. It's all of interest to us :)


Well, there is also the issue that some things get designed and then abandoned even though some improvements were expected: dynamic typing from the DLR and expression trees, for example.


Very good to hear that!


It’s not their code, and it’s not for them to understand. The endgame here is that code as we know it today is the “ASM” of tomorrow. The programming language of tomorrow is natural human-spoken language used carefully and methodically to articulate what the agent should build. At least this is the world we appear to be heading toward… quickly.


But the endgame is not here and likely never will be, because unlike ASM, LLMs are not deterministic. So what happens when you need to find the bug in the 100,000 LoC you generated in a few weeks that you've never read, and the agent can't help you? And it happens a lot. I am not doing this myself so I can't comment firsthand, but I've heard many vibe coders say that a lot of their commits are about fixing the slop they output a week prior.

Personally, I keep trying OpenCode + Opus 4.6 and I don't find it that good. I mean, it does an OK job, but the code is definitely of lower quality, and at the moment I care too much about my codebase to let it grow into slop.


There is a large and growing segment of executives in the software world that is pushing this model hard, betting their careers on it. To them the “dark factory” is an inevitability. As a consequence, not only are developers choosing this path, but the companies they work for are, in varying degrees, selecting this path for them.


Most, if not all of them, are shooting themselves in the foot. I've been saying this for a long time. The only thing LLMs actually are useful for is automating labor and reducing the amount a worker can demand for their work. Don't fall for this trap.

