Hacker News

Why is the idea that the human mind is beyond Turing machines not taken seriously? Seems to make the most sense from the data we do have. E.g. how could we possibly write functioning code so consistently if we are limited by the halting problem for Turing machines?


Heuristically.

There is no reason to suspect we solve the halting problem or even any restricted variant. In fact, there is no reason to suspect we solve any problem of even modest computational complexity for anything but the tiniest domains.

It is 'not taken seriously' because the only evidence seems to be handwaving and philosophical argument (perhaps with a dollop of quantum weirdness, a la Penrose).

There are a number of models of hypercomputation. Finding any with a physical basis that might credibly match neurobiology is highly dubious. It is not even clear that any of them have any physical basis.


Berkeley already uses electricity to find near-instant solutions to quadratic optimization problems[1], so why do we restrict ourselves to some primitive digital model for humans? In addition, we already know neurons do some form of protein-based computation, alongside what looks like spiking/timing-based electrical computation. Some neurons can be a meter long as well, interfacing with many parts of the brain. Can we rule out that some super clever electric-potential optimization, or something even better, is going on in the background? I am not saying Penrose is right that the brain can compute BQP and is using collapses of quantum gravity to solve the halting problem, but why can't we keep an open mind about it?

Hypercomputation is a cool theoretical device, but again, the models are way too primitive for now, and any finding in nature would likely have a very complicated mapping to the current theory. But hey, there might be some graduate student with an idea already...

[1] http://www.mpc.berkeley.edu/research/analog-optimization


Exactly. Turing himself left the door open to such ideas with his notion of the oracle machine. It seems odd that we have narrowed the range of possible explanations so dramatically. There are many more oracle machines and even higher order machines on the Turing hierarchy than there are Turing machines.


Humans can't solve the halting problem even for restricted cases where computers can, just imagine trying to analyze an otherwise trivial Turing machine that was larger than you could read in your lifetime. Turing machines are infinite and immortal, unlike us.


In every case where a computer can solve the halting problem, a human can as well; it may just take longer.

However, the converse is not necessarily true, which would imply that the human mind is beyond a Turing machine.


Turing machines are also beyond real computers for the same reason I pointed out for humans.


I'm not talking about the limits of finitism.

In an ideal setting a human can do anything a Turing machine can, because a human can do all the elementary operations. The converse is not necessarily true, and seems most likely to be false from the evidence we have.


> Why is the idea that the human mind is beyond Turing machines not taken seriously?

Because it doesn't appear to be true. Every day that goes by we are able to build larger and more complex neural networks (NN, DNN, FNN, RNN, CNN, BNN, etc, etc).

> Seems to make the most sense from the data we do have.

We haven't done it yet, but we haven't come across some massive barrier to stop us doing it. Although no attempt has been successful, no attempt has shown some reason to stop trying.

> E.g. how could we possibly write functioning code so consistently if we are limited by the halting problem for Turing machines?

Better testing tools are developed all of the time. You hear about the failures, but never the successes. When was the last time HN crashed? When was the last time you couldn't connect to the internet? We're getting much, much better at building complex software that is reliable (TM).

It will happen, we just don't know when or how.


The question is whether those larger NNs are reaching an asymptotic limit that is way below known human capabilities. Currently, NNs can only be said to surpass human ability within a very narrow definition. However, if we do an apples-to-apples comparison, humans are still vastly superior to any NN on any task in terms of computational operations, memory and energy usage. We also learn in an inherently different way, using principles that are widely applicable. NNs, on the other hand, "learn" more in the sense of memorizing answers to a test. There is no understanding that NNs achieve which can be generalized and applied widely.


It is my opinion that one approach to tackling this could be to impose structure, such as "this NN is responsible for this part". That way we could speed up overall training and build more complex NN architectures.

That said I think there is still a lot of research to be done, they are not perfect, but I think we are barking up the right tree.


Why do you think we are barking up the right tree? The assumption that mind = NN seems wrong on many levels.


> if we are limited by the halting problem for Turing machines?

That's not how it works. Yes, there's no algorithm for solving the halting problem: given an arbitrary program, decide whether it halts.

But we don't write programs by taking random source code and trying to prove that it will halt. We create programs, and creating a provably halting program is easy. Just add an executed-instruction counter and halt the program when it reaches some bound.

In other words: yes, there's at least one program for which you cannot prove whether it will halt or not, but that's irrelevant. You can always construct a program that halts.
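A minimal sketch of that construction in Python (`run_bounded` and `spin` are illustrative names, not from the thread): wrap an arbitrary step function in an explicit instruction counter, and the wrapper provably halts no matter what the wrapped logic does.

```python
def run_bounded(step, state, max_steps):
    # Drive an arbitrary step function, but halt unconditionally once
    # the counter reaches max_steps. Termination of this wrapper is
    # provable even if `step` alone would loop forever.
    for _ in range(max_steps):
        done, state = step(state)
        if done:
            return ("halted", state)
    return ("budget exhausted", state)

# A step function that would loop forever on its own:
spin = lambda s: (False, s + 1)
print(run_bounded(spin, 0, 1000))  # -> ('budget exhausted', 1000)
```

The bound turns an undecidable question ("does this halt?") into a trivial one ("does it halt within N steps?"), which is exactly the commenter's point.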


To some extent you are correct, but I think you are over-egging your point.

It is, of course, trivially possible to construct a program that halts. Or even to modify an arbitrary program to make it halt (a good debugging tactic when things seem to be going awry). But the programs we write on a daily basis are routinely much more complicated than the threshold where their halting status is computable.

I agree that we often intend to write programs that are guaranteed to halt, and we generally code with some informal, heuristic logic as to why our code probably will. But you don't have to do a lot of futzing around with formal methods and algorithm verification to know how loose that reasoning can be.


> the threshold where their halting-status is computable

It's not a threshold in a strict sense. If a program is constructed from a proof (the Curry–Howard correspondence), then it will provably halt regardless of its length or complexity.

Most programs aren't constructed from formal proofs, of course, but a programmer always has some informal idea of why the program should work. We never take an arbitrary program and try to prove that it solves our problem. That is impossible anyway, if we aren't super-Turing (Rice's theorem).

And if we want to prove that our program does what we need and fail to do that, we don't give up, but modify our program for it to be provable.
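As a small illustration of that point (a sketch, not from the thread): a function written in purely primitive-recursive style, where every loop is bounded in advance by its argument, provably terminates regardless of its length, with no halting-problem analysis needed.

```python
def factorial(n):
    # Only bounded for-loops, no unbounded while: termination is
    # provable by a simple induction on n.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(5))  # -> 120
```

Languages built around the Curry-Howard correspondence (e.g. proof assistants with totality checking) enforce this kind of structure for every definition, which is why their programs halt by construction.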


How could we possibly write functioning code? Massive trial and error, and still there are plenty of bugs.


Humans aren't perfect, but are way better than seems possible with a Turing machine.


Maybe we're not that great at programming Turing machines after all ;)


I mean theoretically, not in terms of human ability to replicate our programming ability with an algorithm.


Yeah I agree, and I'm not sure why you are being downvoted. The burden of proof is on people who claim that minds are like Turing machines, or that minds can be simulated by Turing machines. Too often I see that assumption simply being taken for granted.

Gary Marcus has brought up Moravec's paradox and I think that is pretty strong evidence against it. Maybe not in a precise mathematical sense, but there is no argument on the other side that's precise either. Again, the burden of proof is on the other side.

https://en.wikipedia.org/wiki/Moravec%27s_paradox

In other words, it may be that to achieve general human-like AI, we would need to invent a fundamentally different kind of machine. I don't see that idea being taken very seriously.

It could still be that lots of specialized AIs have a huge impact on the world. Although I do think that deep learning has bad engineering properties for things like self-driving cars, as Marcus points out. It could still have a lot of impact for less critical tasks.


Because an extreme (and obviously un-physical) version of the Church-Turing thesis is a sort of religious belief of people who work with computers.


Because we tackle problems that are very small in size and solvable with heuristics.

Humans have finite brain size and thus limited state (tape).

A Turing machine assumes an infinite tape. Strictly speaking, humans are not even computationally equivalent to Turing machines. Humans have limited memory. A single human can't solve the same class of problems that a Turing machine can.


If computers are Turing complete, then humans are, too. Computers do not have infinite memory, yet they are usually considered Turing complete because theoretically you could keep adding memory when needed. The same holds for humans: you can use paper as tape, or even an HDD (e.g. write down intermediate results in a digital file).


No actual real-world computer is Turing complete; only the abstract model is. A problem is computable if it can be expressed in such a way that a Turing machine can solve it. That's a theoretical result. The set of computable problems is not the set of problems that real-world computers (or humans) can solve in some finite time and space.

Any problem that any human in history has ever solved could have been solved with a finite state machine of sufficient size.


In practice, computers and many programming languages are considered Turing complete because arbitrary (yet finite, obviously) amounts of memory can be added if needed. E.g. Java is considered Turing complete even though no JVM instance will ever have access to infinite memory.

Theoretically, you are right. But does strict Turing completeness matter in practice? I don't have the required knowledge to counter-argue your last point.


If we consider the context of the argument, I feel that I was more on point and you were sidetracked.

This is the context that was set up by user yters

> E.g. how could we possibly write functioning code so consistently if we are limited by the halting problem for Turing machines?

You can't use known limits of Turing machines to argue that humans can do better because we can't.


Sorry, that's right.


If humans are not limited by the halting problem, it would be great if you could tell me if the following function f halts for all integers n: https://gist.github.com/leod/9b89af30cff21cb925d4522a68c990d...
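The gist itself is not reproduced here, but the canonical example of such a function (which may or may not be what the gist contains) is the Collatz iteration, whose termination for every positive integer is a famous open problem:

```python
def f(n):
    # Collatz iteration: nobody has proved whether this halts for
    # every positive integer n (the Collatz conjecture).
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return n

print(f(27))  # -> 1
```

Every value ever tested reaches 1, yet no proof is known, so a claimed human halting oracle would need to settle questions of exactly this kind.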


Not being limited by the halting problem does not entail that humans can solve every halting problem. They could be in between: solving more than a Turing machine, but less than a complete halting oracle.



