I love that Java and C# (not sure about VB.NET and F# in the .NET ecosystem) are continuing to get handy language features instead of collecting dust. The array syntax stuff is a nice win.
Having other languages be the guinea pigs for language features is a good way to go.
C# designers have always made good choices about syntactic additions. Contrast this to modern C++ designers who jam in every new addition with the goal of total inscrutability.
In ISO languages, the features that get in are the ones that win election rounds; C++ currently has more than 300 people voting and submitting proposals, which is naturally a problem.
Yeah, it feels like Java (the language, not the JVM or ecosystem) was stuck in limbo from when generics were added in Java 5; during that time C# came along and overtook it in terms of features and developer ergonomics.
Yeah, I guess the company responsible for Java going bankrupt, and its team getting up to grips with the new employer, might have something to do with it.
Still, there are plenty of markets where even .NET doesn't have a presence, like real-time embedded systems, factory automation hardware, copiers, mainframes, Blu-ray players, M2M gateways, and an OS used by 80% of the planet, as Microsoft botched their own just as it was reaching a 10% share in Europe.
I love both platforms, however .NET's cross-platform story still needs a bit of improvement relative to where Java has been used over the last 28 years.
And DevTools' eagerness to hinder it as a means to sell Visual Studio licenses doesn't help.
They seem to have picked most of the low hanging fruit though. Most of this is nice to have but not earth shattering.
The only feature I've been hoping for is abstract data types. I'm not sure how they could make them work in .Net though. Presumably F# has crossed that hurdle.
Code interception strikes me as crude and poorly conceived. Given how hard it is to remove features from a language, why introduce things like this? Twenty years from now people will have to deal with codebases that use InterceptsLocation attributes, and for what?
I get why InterceptsLocation can be useful, but why not make a more generic version that doesn't work with magic line and character offsets? Something as innocuous as a code formatter will totally break this feature. So strange.
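For reference, a minimal sketch of the experimental shape (the attribute lives behind a preview flag, and the file path, line, and column below are made up for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

class Example
{
    public void Greet(string name) => Console.WriteLine($"Hello, {name}");
}

static class MyInterceptors
{
    // Redirects the one Greet call made at exactly this file/line/column.
    // If a formatter moves that call site, the location no longer matches.
    [InterceptsLocation(@"C:\src\Program.cs", line: 12, character: 9)]
    public static void InterceptGreet(this Example example, string name)
        => Console.WriteLine($"Intercepted: {name}");
}
```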
That's good to know. I wonder if VS and VS Code are set to at least produce warnings when the feature is used in human-written code. Any feature that can be abused will be abused. At least with warnings on by default the programmer can know this isn't a desired use case and then decide if their particular need is a worthwhile exception.
Source generation is used at some level to implement expressivity at minimal runtime cost. It's just operating at a different level of abstraction to make different tradeoffs.
Whether Foo.Serialize() uses reflection to enumerate the properties, carries tags around permanently, or calls some compile-time generated function has little to do with expressiveness.
If you design around a Foo<bar> does it matter if behind the scenes it generates a FooOfBar?
Should a language inherently care specifically about protobufs, flatbuffers, capnproto, etc because expressiveness or should it just have capable source generators for building strongly typed interfaces without the legwork? Are you sure an alternative implementation which is more expressive would be better or would it just be different?
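Concretely, with System.Text.Json as one example, the call site barely changes whether serialization is reflection-based or backed by a compile-time source generator:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public record Foo(int Id, string Name);

// The generator emits the serialization code for Foo at compile time.
[JsonSerializable(typeof(Foo))]
public partial class FooContext : JsonSerializerContext { }

class Demo
{
    static void Main()
    {
        var foo = new Foo(1, "bar");

        // Reflection-based: properties are enumerated at runtime.
        string viaReflection = JsonSerializer.Serialize(foo);

        // Source-generated: uses the compile-time metadata for Foo.
        string viaGenerator = JsonSerializer.Serialize(foo, FooContext.Default.Foo);
    }
}
```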
> Whether Foo.Serialize() uses reflection to enumerate the properties, carries tags around permanently, or calls some compile-time generated function has little to do with expressiveness.
Thinking those are the only options is exactly why the issue is expressiveness.
Code generators are used because run-time introspection tends to be slow(er), because debugging code generated at runtime is harder, and because IDEs don't know how to deal with code generated at runtime. But there's no reason why this should be so. No good reason to have a hard boundary between compile time and runtime either. These are just historical artifacts.
So how would you solve the AOT (ahead-of-time compilation) problem without code generation? An entire ecosystem (Unity) that uses C# requires that code be AOT-compiled to support IL2CPP (a low-level translation of IL to C++). Dynamics and reflection on non-AOT types are unavailable at runtime. IL2CPP came from Apple's requirement that no JIT be run in apps, and from the need for more performance; especially for features like Burst, which allows writing C# that translates directly to high-performance multithreaded C++.
Different usage, I think: if people are talking about "language not expressive enough", they're referring to mechanisms for generating C#, or IL, not things further down the toolchain for AOT.
System.Xml.Serialization, for example, relies on generating assemblies at runtime to work. That's "code generation", but of a kind that directly conflicts with the AOT meaning of "code generation".
You can't make a language fully general without it turning into a mess; things like the protobuf compiler are reasonable use cases for source generation.
I welcome these, but for each of 8, 9, 10, 11 and 12 I still feel that the elephant (missing) in the room is a proper sum type. How come I have to write some 100-line monstrosity of a utility class each time I want "Either a result or an error" or similar?
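A minimal sketch of the kind of hand-rolled type meant here (names are illustrative); a built-in sum type with exhaustive matching would make it unnecessary:

```csharp
using System;

// "Either a result or an error", written by hand because C# has no sum types.
public readonly struct Result<T, TError>
{
    private readonly T _value;
    private readonly TError _error;
    public bool IsOk { get; }

    private Result(T value, TError error, bool isOk)
        => (_value, _error, IsOk) = (value, error, isOk);

    public static Result<T, TError> Ok(T value) => new(value, default!, true);
    public static Result<T, TError> Err(TError error) => new(default!, error, false);

    // Forces the caller to handle both cases, which a built-in sum type
    // plus an exhaustive switch would give for free.
    public TOut Match<TOut>(Func<T, TOut> ok, Func<TError, TOut> err)
        => IsOk ? ok(_value) : err(_error);
}
```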
Fantastic. I note it has indeed been under design since 2017, so it's probably not as easy as I would hope. The design seems not to have progressed beyond discussion at all in six years.
Nice to see this so I can convince a fellow tech lead that the reason that we’re creating interfaces for every class has nothing to do with future extensibility and has everything to do with maintaining unit tests and that even official MS sources think that’s the case (he doesn’t believe in anything until an MS based source says it).
And Unit tests make especially little sense in the specific service I’ve been talking about because all the logic is in the Stored Procs it calls. The service itself simply forwards that data.
Yep. The issue with tests that are heavy on interfaces and mocks is that they are "too close" to the code - if refactoring is changing code while tests stay green, how can you refactor when any change to any method breaks a test?
Swapping interfaces out for code-gened interceptors and keeping everything else the same doesn't look like it would improve this underlying issue at all.
The real problem is that you are testing that the class behaves like the class, not that it meets the business-driven requirements. Refactoring then becomes fearful, since detecting bugs is no longer the job of the tests.
Yes, depending on what you mean by integration test.
It is a nuanced subject.
But say you have class A,B,C,D with mocked dependencies E,F,G,H.
There could be bugs in how A interacts with E and F that are hidden even though you have tests for A,B,C,D integrated and even unit tests on E too.
In addition, if someone comes along and refactors A,B,C,D into A',B',C',D' that have different dependencies ("Hey we are moving to microservices!") then the new integration test is different. You change test and code at the same time.
This is a problem because confidence comes from having a stable test, then changing the code and getting the green circles.
The solution (I think) is to make sure you are mocking at the level of well established boundaries. For example mock PostgreSQL. Or mock your ORM if there is a decent mock, with an in-memory version.
Then you can test end-to-end scenarios. Add a TODO, add another TODO, assert that getTodos() returns 2 results, and so on.
Instead of asserting that getTodos() returns 2 results when the mocked ITodoService.getTodos() returns 2 results and the outer getTodos just defers to it.
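A sketch of the first style, with EF Core's in-memory provider standing in for the database (TodoDbContext, TodoService, and the method names are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class TodoTests
{
    [Fact]
    public async Task Adding_two_todos_makes_both_retrievable()
    {
        // In-memory database at the boundary instead of mocked interfaces.
        var options = new DbContextOptionsBuilder<TodoDbContext>()
            .UseInMemoryDatabase("todos-test")
            .Options;

        await using var db = new TodoDbContext(options);
        var service = new TodoService(db);

        await service.AddTodo("buy milk");
        await service.AddTodo("walk dog");

        // Asserts on observable behaviour, not on which methods were called.
        Assert.Equal(2, (await service.GetTodos()).Count);
    }
}
```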
If the parent comment is also saying "If a test describes a business requirement rather than a class and method, then it must be an integration test"
Then I think that parent is wrong, or rather that "it depends on what you mean by integration test", and further, this definition of "integration test" is an extremely unhelpful one that I do not advise using. And it better fits what was originally intended by "unit test". The name "unit" is not a synonym for class or method; it is your choice what the "unit" is, and a lot of the time it is best taken as a small chunk of business requirement.
Mocking at the level of well-established boundaries, such as databases and HTTP services, is all that's needed.
What is the alternative specifically for unit tests?
You can create integration and e2e tests that aren't as sensitive but I don't really understand what people are suggesting when they say mocking is bad in unit tests.
It is irrelevant and driven by some testing evangelists.
Tests are either quick or slow and may touch external stuff; that's mostly it.
What's wrong with mocks? If they lead to scenarios where your tests are green but the app doesn't work, then it sucks. I've witnessed projects with all-green tests where the app wasn't even working.
I think there is an evolution developers go through. Right around the "design pattern" stage, where everything is decoupled with hinges for all parts, devs seem to focus on the test coverage value. Both of these attributes stem from the same failure to understand the underlying business. The model rarely reflects the actual business needs (usually because the dev spends more time reading about and thinking about tech as opposed to understanding the business), so they use flexibility in the codebase as a fallback because, gee, the business could be anything. Code coverage equally fails because they only think about their code. Sometimes the most useful thing is a business case that was never thought about. It's code that doesn't exist. Perfect code coverage won't tell you anything about code you haven't written.
I'd rather spend time trying to learn the business, where it's going, and why my product is useful, than mocking out some dumb part I can test a few times manually. Unit tests are a super valuable tool, but not everything needs a unit test.
I hate it as well, and seldom do it on my own projects.
At work, I have better things to do than waste time on this in a PR, so I create those interfaces; thankfully there are VS tools to create them automatically.
They're not required for MS's dependency injection, not sure about other libraries. 95% of the interfaces I've ever seen used for DI only have one implementation, and that 5% is generous. This makes maintaining the class and the interface tedious.
The reason this is so common is probably because code examples (including MS) often have it, so it gets followed the first time and repeated.
Absolutely my experience as well. I have been writing .NET for over a decade, and I think I have seen only a handful of cases, mainly when we were bored or when I remembered the Interface Segregation Principle and tried to shoehorn it into the code (though the principle does work well for having methods take the minimum number of props needed to do their job, but that's a whole other topic).
Just a reminder that in the C# world you don't really have to care about new language features. Visual Studio, ReSharper, and Rider (supports MacOS/Linux) will suggest useful new language features as a one-shortcut refactoring.
A C# developer who went into a coma in 2012 after learning about async/await and ASP.NET MVC could be awakened today and be instantly productive. Although he might go into a coma again if he finds out .NET is now open source and cross-platform, and that we deploy to "Linux containers".
Or that the beloved frameworks they use in-house only work on .NET Framework, or, even if ported to .NET Core, are still Windows-only because they are wrappers around COM.
We've recently started writing our new stuff in .Net, and I last used it back in .Net 1.1 days. I've been super productive from the first day of returning.
Like you say Visual Studio helps a lot with suggesting improvements, and of course all my old ways are still valid.
Unlikely. The new collection literal syntax exists as the dual of the pattern matching syntax, which itself exists primarily because you can't reference a generic type by name without specifying the type parameters.
If you want to write out Dictionary<Guid, ILookup<int, List<MyEntity>>> instead of having it inferred then go for it I guess.
The old collection initialisation syntax has always felt clunky compared to many other languages, especially with more complex Dictionary types.
At the same time it feels funny going back to having the type defined first since many have spent years converting explicit 'Type foo...' into 'var foo...' because the linter suggested it.
Then again, perhaps this is one step towards more type inference and being able to leave out the type in more places.
Personally, I'm still using var for most locals, but target-typed new for field and property initializers. Not every feature has to be used everywhere.
Collection expressions are somewhere between spooky and very cool. If you don't specify a concrete type for the collection, the compiler is free to choose the "best" option given your usage of the collection:
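For example, a sketch of the new forms (which backing representation gets picked is up to the compiler):

```csharp
using System;
using System.Collections.Generic;

int[] array = [1, 2, 3];
List<int> list = [4, 5];
Span<int> span = [1, 2, 3];               // may be stack-allocated
IEnumerable<int> seq = [1, 2, 3];         // compiler chooses the implementation type
int[] combined = [.. array, .. list, 6];  // spread elements from other collections
```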
Disappointed they didn't add partial properties[1] which would make it possible to have a source generator generate an INotifyPropertyChanged implementation. But it is good to see that they're still improving source generators, and supposedly they are likely to add partial properties in C# 13.
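For context, this is roughly what the field-based workaround looks like today with one such generator (the CommunityToolkit.Mvvm shape); partial properties would let the attribute sit on the property itself instead:

```csharp
using CommunityToolkit.Mvvm.ComponentModel;

public partial class PersonViewModel : ObservableObject
{
    // The generator emits a public Name property that raises PropertyChanged.
    [ObservableProperty]
    private string name = "";
}
```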
> int result = some_variable switch { a => callAFunctionReturningVoid(); 1, b => 2, };
What does it even mean? Multi-statement lambdas without a return statement, I get it, but what is the return value? And returning void to an int? How is that all supposed to work? What does the trailing comma mean? An inconsistent way of dealing with things that may confuse developers just results in the proposal being resisted.
It is not a lambda but a switch expression. The tricky bit here is separating a call and the returned expression with a semicolon. The trailing comma is just sugar to make it easy to add another case.
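For comparison, a switch expression that is valid today; each arm is a single expression, and the trailing comma after the last arm is already legal:

```csharp
string someVariable = "b";

int result = someVariable switch
{
    "a" => 1,
    "b" => 2,
    _ => 0,   // trailing comma after the last arm
};
```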
Understood. You want to escape curly braces and let the last expression be the returned one. F# does implicitly return the last expression.
> The expression is the body of the function, the last expression of which generates a return value. Examples of valid lambda expressions include the following:
The `this` parameter (well, the first one) is already used for extension methods. Constructors can't be extension methods, but it could still lead to confusion.
Never realized that, but yes. Seems like it would be better to have a different syntax for extension methods, I always thought they were clunky, but too late now.
I stopped paying _a lot_ of attention to new C# features a few versions ago. While I like other syntaxes too (Python's flexibility is really enjoyable), once my mind switches to "C# mode" I don't miss anything from more dynamic languages at all.
I feel that recent "features" fall more into the "things I could do with a preprocessor or a code generator" category than real language enhancements.
Of course, I can be (and probably am) extremely wrong, but I find the release pace exhausting, and I see a lot of great software written without resorting to the new "features".
> I feel that recent "features" fall more into the "things I could do with a preprocessor or a code generator" category than real language enhancements.
As someone who has spent far too much time in the JS world, any time I'm able to ditch a preprocessor or code generator is a great relief. I want my build processes to be as simple and straightforward as they can possibly be.
People very rarely rely on extra source code generation stages or preprocessors in C#. The Javascript ecosystem is dependent on them because Javascript is the only language that's a first class citizen in the browser. You can also do some pretty expressive things in C# with LINQ, expression trees, reflection, and so on. You can even emit IL at runtime if you really need to.
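For instance, runtime IL emit is only a few lines with Reflection.Emit (a sketch building a tiny add function):

```csharp
using System;
using System.Reflection.Emit;

// Build int Add(int, int) at runtime.
var method = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ldarg_1);
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);

var add = (Func<int, int, int>)method.CreateDelegate(typeof(Func<int, int, int>));
Console.WriteLine(add(2, 3));   // 5
```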
I didn't write that all additions to C# since 1.0 are useless.
We have had generics since .NET 2.0 (2005, IIRC), and at the time the language improvements had a reason to exist (framework support in some cases). It was a nice thing to add, if you ask me. The next big thing, for me, was LINQ.
A few years later, I feel it's not the case with recent additions. But my comment is very subjective, of course.
As I said, I can be very wrong. It's just that I find the language "very complete", if that's a thing, and it has been for a few years now.
LINQ to objects yes, but LINQ to SQL is a different beast where the C# code is translated to SQL. Expression trees are very much a language feature, I guess.
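The distinction in code (a sketch): the same lambda either compiles to IL or is captured as a data structure a LINQ provider can walk and translate:

```csharp
using System;
using System.Linq.Expressions;

Func<int, bool> compiled = x => x > 5;           // ordinary delegate, runs in-process
Expression<Func<int, bool>> tree = x => x > 5;   // expression tree, can be inspected

// A provider walks nodes like BinaryExpression(GreaterThan, x, 5) to build SQL.
Console.WriteLine(tree.Body);   // prints: (x > 5)
```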
It’s an extraordinary feature that is only intended for optimization of edge cases. Making the feature verbose and somewhat ugly to use seems like an intentional choice. This way if a beginner comes across this “confusing/weird looking” code they can look up the attribute and see what it does, whereas a more convenient/native syntax would be less recognizable and noticeable. This is similar to the idea that you should make ugly APIs ugly to use (to avoid giving users a false sense of security).
But we can already do `Span<int> ints = stackalloc int[5];` without using `unsafe`. The array size doesn't even have to be const in that example. They could have just made an optimization when a const is supplied for locals, and then use the existing fixed array syntax for members.
It seems to me the budget for C# lang dev is way down. This seems like it was implemented this way so they didn't have to change many internals.
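For reference, the shape of the feature being discussed (a sketch; Buffer8 is an illustrative name):

```csharp
using System;
using System.Runtime.CompilerServices;

// Declares a fixed-size value-type buffer of 8 ints without unsafe code.
[InlineArray(8)]
public struct Buffer8
{
    private int _element0;   // the single field the runtime replicates 8 times
}

class Demo
{
    static void Main()
    {
        var buffer = new Buffer8();
        for (int i = 0; i < 8; i++) buffer[i] = i;   // indexable like an array
        Span<int> asSpan = buffer;                   // converts to Span<int>
    }
}
```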
Unions, c3 and struct inheritance, private constructors in non-static classes, sealed hierarchies of records with no warnings in switch, operators and ref parameters integrated into the type system: then I may come back.