
> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs.

But none of these things are functional programming? This is more the tradition of 'expressive static types' than it is of FP.

What about Lisp, Racket, Scheme, Clojure, Erlang / Elixir...




I think the more relevant functional programming quote is

> The immutability of the log is the entire value proposition.


Agreed, but the article begins with the previous quote, and is titled "what functional programmers get wrong", so I feel like there are some preliminary assumptions being made about FP that warrant examining.

In practice, much of the article seems to be about the problems introduced by rigid typing. Your quote, for instance, is used in the context of reading old logs using a typed schema if that schema changes. But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data (maps, lists) and lambdas. Conversely, reading state from old, schema-incompatible logs might be an issue in something like Java or C++, which certainly are not FP languages as the term is usually understood.

So overall, not really an FP issue at all, and yet the article is called "what functional programmers get wrong". The author's points might be very valid for his version of FP, but his version of FP seems to be 'FP in the ML tradition, with types so rigid you might want to consult a doctor after four hours'.


> In practice, much of the article seems to be about the problems introduced by rigid typing … But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data

Haskell doesn’t have this problem. None of the “rigid typing” languages have this problem.

You are in complete control of how strictly or how leniently you parse values. If you want to derive a parser for a domain-specific type, you can. If you want to write one, you can. If you want to parse values into more generic types, you can.
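For instance, strict and lenient parsing are both a few lines apart in plain Haskell (a hand-rolled sketch with invented names, not any particular library's API):

```haskell
-- Hypothetical event type; `Unknown` is the lenient escape hatch.
data Event = Created | Deleted | Unknown String
  deriving (Show, Eq)

-- Strict: reject anything we don't recognise.
parseStrict :: String -> Maybe Event
parseStrict "created" = Just Created
parseStrict "deleted" = Just Deleted
parseStrict _         = Nothing

-- Lenient: keep unrecognised variants around instead of failing.
parseLenient :: String -> Event
parseLenient s = maybe (Unknown s) id (parseStrict s)
```

The choice between the two is made at the call site, not imposed by the type system.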

This is one of those fundamental misunderstandings that too many programmers have, and it seems like it’ll never die.


But naturally, the more typing you opt into, the more of the benefits and costs associated with typing you accrue. I'm not sure what the misunderstanding is. Of course you could write a C program composed entirely of void*, but why would you? Equally, of course you could write a Clojure program with rigid type enforcement, but again, why would you? You'd be fighting against the strengths of your tools.

You don't pick up Haskell just to spend all your time passing around unstructured data, any more than you opt into the overhead of TS just so you can declare everything as `any`.


Your understanding is 100% wrong.

You do not understand how typing works in Haskell. You are free to work with primitives as much as you like.


> Your understanding is 100% wrong.

> You do not understand how typing works in Haskell.

Possible! But a little hyperbolic, perhaps. I think we're more likely to be talking past one another.

> You are free to work with primitives as much as you like.

Sure, but what's the point? If all of your functions are annotated as `myFun :: a -> b`, where a and b are arbitrary type variables, why are you writing Haskell? You're effectively writing Ruby with extra steps. You're neither getting the benefits of your rigidly typed language, nor the convenience of a language designed around dynamism.

Yes, ML-esque type systems are quite neat and flexible. But the more granular the typing you opt into, the more of the usual cost of typing you incur. Typing has inherent cost (and benefit!). And if you're not into the typing, ML-esque typed languages are a curious tool choice.

So to return to the original point, if you're passing data around in Haskell, you have more likely than not opted into some level of typing for that data - else why use Haskell - and will run into the exact issues with rigid type systems mentioned above. Can you parse and type cast and convert and what not? Sure. But no-one ever said that you couldn't, and that's precisely the busywork that dynamic languages are generally designed to lead you away from.


> Possible! But a little hyperbolic, perhaps. I think we're more likely to be talking past one another.

Direct, certainly. More than would be typically expected socially. But I don't think it's hyperbolic — I think there is genuine fundamental misunderstanding here.

The best source which I believe dispels your misunderstanding is this one: https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...


I just don't think I have the misunderstanding that you think I do. I spent most of my programming career working in statically typed systems, including two years in Rust recently. Nothing in that article is new or surprising to me. Some of it is downright elementary.

If I may be so bold, I'd posit the misunderstanding is on your part. No one is saying things are impossible to model in rigidly typed systems - this is your key misapprehension about what is being said. What I'm saying is that different languages have different paths of desire, and the kinds of problems identified in the original article are more the kind of problems that tend to crop up with heavy use of types, than they are the kind of problem that has much of anything to do with functional programming.

You're thinking categorically, but I am not, so we're talking at cross-purposes. Perhaps too much static typing has crept into your thinking! (I jest, of course! :) )


If that's the case, then yes I think we're talking past each other. Although it's hard to square this with the argument you've been making — if you understood King's point, I don't understand how you can be arguing that Haskell idiomatically leads you into rigidity at version boundaries. The whole thrust of King's article is that this is a mischaracterization.

> What I'm saying is that different languages have different paths of desire, and the kinds of problems identified in the original article are more the kind of problems that tend to crop up with heavy use of types, than they are the kind of problem that has much of anything to do with functional programming.

I don't think this is correct at all. I don't think TFA has anything at all to do with types or FP (despite the clickbaity title), as numerous other people here have already pointed out. The article isn't attacking rigid types. The author's point is that no single-program analysis — typed or untyped — covers the version boundary (or system boundaries more generally).

A Haskell service that receives an unknown enum variant doesn't have to crash — you parse the cases you care about and ignore the rest. The "path of desire" you're describing isn't a property of the language.
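Concretely (an illustrative sketch with made-up names): with `mapMaybe`, inputs you don't recognise are simply dropped rather than crashing anything.

```haskell
import Data.Maybe (mapMaybe)

-- Hypothetical wire-level commands; we only act on two of them.
data Command = Start | Stop
  deriving (Show, Eq)

parseCommand :: String -> Maybe Command
parseCommand "start" = Just Start
parseCommand "stop"  = Just Stop
parseCommand _       = Nothing  -- an unknown variant is ignored, not fatal

-- A newer peer may send variants this version has never heard of.
handle :: [String] -> [Command]
handle = mapMaybe parseCommand
```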

I suppose "path of desire" here is a matter of opinion. In my experience, crashing on unknown inputs is not idiomatic Haskell, nor is it desirable.


> if you understood King's point, I don't understand how you can be arguing that Haskell idiomatically leads you into rigidity at version boundaries

Why would understanding his argument necessarily mean finding it persuasive or exhaustive?

> you parse the cases you care about and ignore the rest

What language would not allow this?

> The "path of desire" you're describing isn't a property of the language.

If a tool tends, more often than not, to lead to certain use, is that not a property of that tool? It is theoretically possible to use a hammer for interpretive dance, sure, but it doesn't seem to happen nearly as often as banging the hammer on things.

Equally, I think it's pretty easy to see how a language designed for robust typing is going to lead to, more often than not, robust typing.

I rather think you're engaging with a point that no one ever made - that typed languages are inherently incapable of dealing with uncertain, incomplete, or variable data - in lieu of the argument that was actually made - that languages with rich DX around rigid typing encourage an architecture that's rigidly typed, and that rigidly typed codebases tend to come up against predictable issues.

The original article identifies a series of such issues and misattributes them to FP, when they don't have much to do with FP at all. That's all I was saying.


> Why would understanding his argument necessarily mean finding it persuasive or exhaustive?

Alexis King is a woman.

> that languages with rich DX around rigid typing encourage an architecture that's rigidly typed, and that rigidly typed codebases tend to come up against predictable issues.

I agree with neither of these points.


> Your quote, for instance, is used in the context of reading old logs using a typed schema if that schema changes. But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data (maps, lists) and lambdas

The application that assumes that key "foo" is in the map, and crashes if it's not, is just as brittle[1] as the one that assumes that the data deserializes into a struct with the field foo present.

[1]: In practice, it's more brittle because the crashes are more unexpected and at runtime. Javascript has a legacy of this...
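To make the symmetry concrete (a sketch using `Data.Map` from `containers`; the key names are invented): the partial lookup crashes at runtime exactly the way a missing struct field would, while the total version forces the caller to decide what absence means.

```haskell
import qualified Data.Map as Map

payload :: Map.Map String String
payload = Map.fromList [("bar", "1")]

-- Brittle: assumes "foo" is present; throws at runtime when it isn't.
brittle :: String
brittle = payload Map.! "foo"

-- Explicit: absence is a value the caller must handle.
safe :: Maybe String
safe = Map.lookup "foo" payload
```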


> The application that assumes that key "foo" is in the map, and crashes if it's not, is just as brittle

And the program that assumes that foo is a string when it's an integer yadda yadda yadda.

What's better, that code containing erroneous baseline assumptions about the data it's handling stays live, or that it safely aborts before it corrupts any of that data? All software must deal with errors at some point or another.

Take a web endpoint. If it is expecting a foo, and it instead gets bar, what is it intelligently supposed to do with that bar? We're squarely in the realm of a fatal error (to that request), the only question is how much boilerplate is going to be required to abort.

In most languages, the answer is endless validation and error handling at boundaries. In Erlang, on the other hand, such a function would simply crash. All Erlang code runs in green threads, usually overseen by supervisor trees. The supervisor tree organises the quick return of an appropriate error to the request, while it safely restarts the worker thread.

This kind of architecture requires no boilerplate or external libraries in Erlang or Elixir. It's built in. The VM is optimised for remaining robust under load even while orchestrating millions of workers. Immutability prevents data races between all these threads and removes any need for locks at all, ever. The end result is that all business logic remains short and focussed on the happy path. The DX is out of this world.

Read up on the BEAM sometime, you might find it interesting. There's much more to FP than endless typing.



