Very cool but I am afraid this is geek candy that would not fly for mere mortals. When you say next-gen IDE do you mean for a certain class of developers?
The problem with these types of development tools is that they move your brain from thinking in human terms to thinking in a very structured way more attuned to machines.
This is a problem with functional programming in general: it is fundamentally anti-human. People don't think functionally but rather procedurally.
Full disclosure here, IDE maker so I have skin in this game :)
I think that's a broad mischaracterization of functional programming. It is not obvious a priori that people think procedurally, and it certainly does not match my experience teaching programming to complete beginners--even concepts like mutable variables and loops are not particularly intuitive.
People really like to think by analogy and think based on relations. Functional programming makes this much simpler by giving you simple abstractions and, crucially, letting you not worry about extraneous machine details. In a functional language, even the order your code gets evaluated is below your level of abstraction.
Ultimately, functional programming lets you talk about *what* where imperative languages force you to talk about *how*. That's pro-human. It's exposing the underlying machine and computation--imperative programming, in a word--that's anti-human!
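To make the *what* versus *how* contrast concrete, here's a tiny Haskell sketch of my own (the function name is made up for illustration): the total of the squares of the even numbers, stated as a composition rather than a loop.

```haskell
-- "What": the total of the squares of the even numbers.
-- No loop counter, no accumulator variable, no evaluation-order bookkeeping.
totalOfEvenSquares :: [Int] -> Int
totalOfEvenSquares = sum . map (^ 2) . filter even
```

For example, `totalOfEvenSquares [1..5]` picks out 2 and 4 and yields 20; the pipeline reads as a description of the result, not a recipe of machine steps.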
Actually, machines are attuned to flat, unstructured lists of instructions, the polar opposite of functional programming. But when people wanted to think clearly, they invented math, of which FP is nearly the computational manifestation. You have the human-vs-machine thing exactly backwards. The reason we tend to think in imperative terms is one of familiarity: most of our first programming experiences are in languages that are (sometimes high-level, but still) descendants of assembly, because at first that was the only programming that existed: flat lists of instructions designed for a machine.
Granted, if a human wants to describe a physical process, they will use procedural language. But computers are primarily about information, and we think about information in terms of relationships. That's exactly what FP is about: expressing computation in terms of relationships.
I agree that a Turing machine is imperative...when I refer to machines I am generally referring to an inflexible approach to solving problems, not so much how computers execute instructions.
As humans I think we need the ability to be messy and imprecise in order to be creative.
>The reason we tend to think in imperative terms is one of familiarity
That's debatable given that both approaches have been around for about the same length of time with different outcomes in terms of adoption.
I'm writing OCaml now. I'm being messy and creative all over the place, sketching out ideas, and simply not trying to compile them until I think it'll work. I do this precisely because OCaml maps naturally onto how I think about the problems I'm working on right now. It's pseudocode until I run make, at which point the computer is running very useful checks on my reasoning.
The "inflexible approach to solving problems" you refer to has, as far as I can tell, nothing at all to do with "machines", except perhaps in an exceptionally abstract sense, in the form of virtual machines designed for thinking about. On the other hand, problem solving doesn't get much more flexible than raw assembly. My point stands.
I think the early adoption of imperative languages was more due to pragmatism than elegance[0]; at first, you had to stay close to the machine to get anything done, at first because that's all that existed (Fortran beat Lisp into existence by a year) and then for performance. Remember, Fortran was still basically a shortcut for assembly, while Lisp started as a purely mathematical abstraction, designed expressly for thinking about, that some goofball wrote an interpreter for.
[0] This principle is still clearly in force today. We wouldn't bother with C++ and JVM languages and Unix so much if pragmatism wasn't paramount.
I'm using Fortran as my example of an early imperative language. There might have been earlier ones (maybe a version of COBOL?), but since they were imperative it doesn't materially affect my point.
> That's debatable given that both approaches have been around for about the same length of time with different outcomes in terms of adoption.
If Unix had been created in LISP or ML, I believe we might be in the opposite position. Though that hinges on a LISP Unix being as successful as the C one.
Was that because C was better than Lisp, though? Because of timing? Or because the goals of the people working on UNIX were more closely aligned with the industry as a whole?
There are of course many other things it could have been, but these are the first that come to my mind.
>But when people wanted to think clearly, they invented math,
No, they didn't. They invented a syntax two millennia ago, optimized for the use-cases of those ages, using the alphabets of those times. Thinking in Greek symbols is not helping anyone think clearly. It's just more confusion, because now suddenly I can't type or phrase a relevant question out loud until I get a Greek keyboard and learn the proper pronunciations of the Greek alphabet.
The notion that we should just stick to a notation and conventions optimized for a different era, a different culture, a different alphabet and a different writing tool (paper) is ridiculous. Do NASA scientists calculate speed using knots? Or power using horsepower? They do not.
Math is to computer science, what classical music is to pop music. A historical relic that stopped having economic and cultural value beyond being a mere status symbol.
>of which FP is nearly the computational manifestation.
No, it's not. One could argue the same thing for logic programming. Computation is not the manifestation of math. It's a strategy to answer a mathematical question. The original strategy was to just 'try things and explore', and when you found (guessed?) the answer, you would prove it to be sure.
Functional programming, in the religious Haskell sense, is just a term rewriter. That's not the manifestation of computation. It's just one way to specify a strategy. In the case of Haskell, which is a term rewriter with a bible full of fine print and exceptions, a very sadomasochistic one.
>But computers are primarily about information, and we think about information in terms of relationships
No, that would be Prolog. A different default strategy. Less fine print, but not the cure-all either.
>That's exactly what FP is about: expressing computation in terms of relationships.
Nope. A relationship doesn't have a computational direction. In math, all these statements express the same relationship:
square( x ) / x = x
square( x ) = x * x
x * x = square( x )
In Haskell, only one of them happens to be legal, because you are not writing a mathematical equation; you are specifying a computation. The fact that this isn't even obvious makes matters worse.
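A minimal Haskell sketch of that directionality (I'm writing the relationship as a function called `square`, i.e. the one relating x to x * x):

```haskell
-- Legal: an equation read left-to-right as a computation.
square :: Int -> Int
square x = x * x

-- Not legal as a definition, though it is the "same" relationship in math:
--   x * x = square x
-- A Haskell equation has a direction: the left-hand side is a pattern to be
-- rewritten into the right-hand side, not one side of a symmetric identity.
```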
> But computers are primarily about information, and we think about information in terms of relationships
Yes, and the biggest challenge, the pain everyone tries to lessen, is managing the coordination and standardisation of changes (mutations) to that information. Wrapping every computation with the same type of state-monad actually helps a lot with this, and has been the most popular strategy to deal with this problem in the last 20 years.
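A rough sketch of what that wrapping looks like (hand-rolled here to stay self-contained; real Haskell code would use `Control.Monad.State` from the mtl package, and the names below are mine):

```haskell
-- A stateful computation is just a function from a state to (result, new state).
type State s a = s -> (a, s)

get :: State s s
get = \s -> (s, s)

put :: s -> State s ()
put s' = \_ -> ((), s')

ret :: a -> State s a
ret a = \s -> (a, s)

-- Sequencing threads the state through explicitly, so every change
-- (every "mutation") is coordinated by construction.
andThen :: State s a -> (a -> State s b) -> State s b
andThen m k = \s -> let (a, s') = m s in k a s'

-- A counter: reads the state, bumps it, returns the old value.
tick :: State Int Int
tick = get `andThen` \n -> put (n + 1) `andThen` \_ -> ret n
```

Running `andThen tick (const tick) 0` yields `(1, 2)`: the second tick sees the first one's update, with no mutable cell in sight.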
Relationships can be directional if you want them to be. Also, I wish people would stop conflating "religious" functional programming with pragmatically expressing algorithms as compositions of functions. That said, I don't think you and I really disagree much on the big picture.
>Also, I wish people would stop conflating "religious" functional programming with pragmatically expressing algorithms as compositions of functions.
Yes, me too. And that type of functional programming is very popular and very successful. Every programmer uses it often, when they touch SQL, jQuery, LINQ, etc.
>I don't think you and I really disagree much on the big picture.
I love functional programming.
But I consider languages like Haskell and their derivatives to do a lot of harm to the reputation of functional programming: lazy evaluation, and the whole pretense that math == computation. It's borderline harmful to the development of a programmer to even be exposed to it. The last thing you want a programmer to believe is that there is some intrinsic order of execution that is magically correct and optimal, and can easily be derived. There isn't. The correct order of execution is not even objective (in a GUI one would trade throughput for lower latency, for example), so the notion that we can just skip that whole part and have the 'compiler take care of it' seems damaging to me. Languages that allow you to specify these things manually are considered ugly mutations of some kind of pure math. Sinners. That we need to return to the one true god, which is "pure" math, masquerading as a term rewriter with a bible full of fine print, and a zero tolerance on maintaining global state. Yuck.
I've never heard anyone refer to Haskell's lazy evaluation as The One True Evaluation Order. It's just one interesting way of doing things. I think there's a useful place for Haskell in the space of programming languages; I just don't want it to be the default example of a functional language.
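For what that "interesting way of doing things" actually buys you, a small sketch of my own: demand-driven evaluation separates defining a structure from deciding how much of it to consume.

```haskell
-- Conceptually infinite; laziness means nothing is computed until demanded.
evens :: [Int]
evens = map (* 2) [1 ..]

-- Only the first three elements are ever evaluated.
firstThree :: [Int]
firstThree = take 3 evens
```

Here `firstThree` is `[2, 4, 6]`; evaluation is driven by demand, not by the textual order of the definitions.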
> This is a problem with functional programming in general, it is fundamentally anti-human
I would argue that the "anything goes" dynamic procedural languages (like Ruby, Python) are far more anti-human, in that human ability to reason about large masses of code scales orders of magnitude worse than in languages that provide a strong theoretical framework for reasoning about code.
The reason straitjackets lead to poor usability is that human minds are fairly diverse in the way they solve problems, and rigid constraints imposed by a tool are likely to trip up your thinking.
There is a tension between flow and feedback, but it's not clear at all that one dominates the other as you've stated.
I find dynamic languages break my flow badly by eventually losing clarity on what things are supposed to do. Haskell is annoying as hell to begin with, but eventually those constraints give very tight, very fast feedback that's very flow-compatible.
Constraints are what prevent fallacious reasoning. It's better to have a tool/framework tell you you're wrong than to not know about problems or believe falsehoods.
Again, there is tension between flow and feedback. The ability to express and tolerate incomplete and incorrect solutions when trying to solve, and more importantly, understand a problem, is very important. Bondage and discipline languages typically lack that capability, which is why many programmers are still flocking to dynamic languages. For this reason, even Haskell is trying to get into the hybrid typing game (type inference also helps of course, without which Haskell would be unusable).
Ideally, we could achieve flow without sacrificing semantic feedback (or vice versa). It is definitely a worthy goal.
I've been fiddling with a System F language for a bit and find the types not as burdensome as I had originally thought. I couldn't imagine learning to use it from scratch though. I do think the loss of magic inference is a boon though.
You are repeating what appears to be a completely baseless claim all over this discussion. What evidence do you have to support the notion that languages you happen to dislike decrease flow?
>I would argue that the "anything goes" dynamic procedural languages ( like Ruby, Python ) are far more anti-human,
Is it anti-human to be able to interact with the code, and explore what it actually does? A run-time type error is one that has real example data.
The assumption that we can write perfect code immediately and easily, or that a type analysis can just guide us through, is not borne out. And it breaks down even worse when 99% of your code is interacting with systems outside the scope of the type system (database servers, client-side browsers, network connections, file systems). Dynamically typed languages are a good fit when the code is mostly glue code between multiple systems outside the scope of any type analysis.
But let's not equate functional programming with static typing.
>in languages that provide a strong theoretic framework for reasoning about code.
You act like many of us are writing complicated algorithms. We're not. We're writing simple algorithms that deal with complex structures of information. And in the few cases where the algorithms get so complicated that you want your invariants to be formally proven, anything less than a full-blown theorem prover will be insufficient anyway.
Sure, if it were easy for people to think functionally, then functional programming would without a doubt be more elegant. But elegance doesn't negate the fact that the reasoning model demanded by functional programming isn't friendly to the way we think. The elegance argument is something proponents always bring up, but it misses the point.
While dynamic procedural languages have their problems, I think they work for the most part, hence their success vis-à-vis functional languages. As for the issue of scale, that's why modular constructs are added to manage large code bases.
I don't know how you're coming to this result, but it /is/ easy for people to think functionally. You can teach intro-CS classes in both functional and imperative styles and generally they pick up both styles at the same rate. I've seen it year after year of teaching these courses. The myth that functional programming doesn't match human thought is just flat out FUD, there's no evidence to support it.
The adoption of programming languages is more influenced by economic factors than informed by solid engineering.
I won't argue with your experience in the classroom, but both styles of programming have been around for about the same length of time, and their adoption rates speak volumes.
We can debate this ad infinitum and probably won't come to an agreement.
> This is a problem with functional programming in general, it is fundamentally anti-human, people don't think functionally but rather procedurally.
There was a study a while back[1] that showed non-programmers a series of statements of the sort,
int a = 10;
int b = 20;
a = b;
and asked them the value of a. It seemed that some people could not grasp even very basic fundamentals.
The thing is, if you'd never programmed before but you HAD basic algebra knowledge, the third line above would have broken your brain. 10 =/= 20. You can't change the value of a, it's against everything you know!
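For contrast, the functional rendering of the same snippet keeps that algebra intuition intact (a minimal Haskell sketch of my own):

```haskell
a :: Int
a = 10

b :: Int
b = 20

-- a = b  -- illegal in Haskell: 'a' is an equation defining a value,
--           not a mutable cell, so it cannot be redefined.
```

Here the compiler agrees with the algebra student: a name means one thing, once.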
The point is, people don't think functionally or procedurally. Both of these programming styles are just learned behaviours, not basic human nature.
I agree with the argument against functional programming, though I would qualify that with pureness (doing everything functionally) rather than just the use of functions in general: functions are sometimes the most natural way to do something. Haskell is good for what it is, an experiment in pure functional programming, which has taught us a lot about programming in general.
Cool. I checked out your page, it seems your tool logo is very similar to ours (MS) :).
I'm not sure if it is appropriate to think about specific groups of developers in a general purpose IDE. You really can't guess how they will use your tool, which applies to PL design in general. Personally speaking, there are functional programming enthusiasts who think wildly different from the way I do, and I guess they would sort of select out given my lack of understanding of their psychology as reflected in my design.