
This is the worst possible take. It dismisses an entire branch of science that has been studying the brain for decades. Biological brains exist, we study them, and no they are not like computers at all.

There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the ’60s, and it needs to be known that it was bunk then and continues to be bunk.

Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.




>Biological brains exist, we study them, and no they are not like computers at all.

You are confusing the way computation is done (neuroscience) with whether or not computation is being done (transforming inputs into outputs).

The brain is either a magical antenna channeling supernatural signals from higher planes, or it's doing computation.

I'm not aware of any neuroscientists in the former camp.


Neuroscience isn't a subset of computer science. It's a study of biological nervous systems, which can involve computational models, but it's not limited to that. You're mistaking a kind of map (computation) for the territory, probably based on a philosophical assumption about reality.

At any rate, biological organisms are not like LLMs. The nervous systems of humans may perform some LLM-like actions, but they are different kinds of things.


Who says it is a subset of computer science?

But computational models are possibly the most universal thing there is; they sit beneath even mathematics, and physical matter is no exception. There is simply no stronger computational model than a Turing machine, period. Whether you build it out of neurons or silicon is irrelevant in this respect.


Turing machines aren't quantum mechanical, and computation is based on logic. This discussion is philosophical, so I guess it's philosophy all the way down.

Quantum computers don't provide access to novel problems; they provide access to novel solutions.

You can use a classical transistor-based Turing machine to solve quantum problems; it's just gonna take way longer.
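For the curious, here's roughly what "simulating quantum on classical" looks like (a minimal sketch, assuming only NumPy; the helper name is mine). A classical machine can track the full quantum state exactly; the catch is that the state vector has 2**n amplitudes for n qubits, which is where "way longer" comes from:

  # Exact classical simulation of n qubits via the state vector.
  # Memory and time grow as 2**n -- exponentially slower, not impossible.
  import numpy as np

  n = 3                                    # number of qubits
  state = np.zeros(2**n, dtype=complex)    # 2**n amplitudes
  state[0] = 1.0                           # start in |000>

  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

  def apply_gate(state, gate, target, n):
      """Apply a 2x2 gate to one qubit by expanding it to 2**n x 2**n."""
      op = np.array([[1.0]])
      for q in range(n):
          op = np.kron(op, gate if q == target else np.eye(2))
      return op @ state

  for q in range(n):                       # put every qubit in superposition
      state = apply_gate(state, H, q, n)

  print(np.abs(state)**2)                  # uniform over all 8 outcomes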


Turing machines are deterministic. Quantum mechanics is not, unless you go with a deterministic interpretation, like Many Worlds. But even then, you won't be able to compute all the branches of the universal wave function. My guess is any deterministic interpretation of QM will have a computational bullet to bite.

As such, it doesn't look like reality can be fully simulated by a Turing machine.


Quantum mechanics and quantum computers are not interchangeable terms.

QM is a derived rule set; QC is the result of assembling a physical system that exploits QM's rules.

Aside from that, a quantum-scale assemblage [a QC] is a lot closer to the biological secret sauce than semiconductor gates are.


Brains provide access to novel problems and novel solutions.

The process is called imagination.


Giving a Turing machine access to a quantum RNG oracle is a trivial extension that doesn't meaningfully change anything. If quantum woo is necessary to make consciousness work (there is no empirical evidence for this, BTW), it can be built into computers.
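To make "trivial extension" concrete, a sketch (the table layout is hypothetical, and `random` stands in for actual quantum hardware):

  # A deterministic Turing-machine step vs. the same step with a
  # random oracle bolted on. The machine's structure barely changes.
  import random

  def step(state, symbol, table):
      """Ordinary deterministic step: pure table lookup."""
      return table[(state, symbol)]          # -> (new_state, write, move)

  def step_with_oracle(state, symbol, table):
      """Same machine, but the transition may consult one random bit."""
      bit = random.getrandbits(1)            # swap in a quantum RNG here
      return table[(state, symbol, bit)]     # -> (new_state, write, move)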

> The brain is either a magical antenna channeling supernatural signals

There’s the classic thought-terminating cliche of the computational interpretation of consciousness.

If it isn’t computation, you must believe in magic!

Brains are way more fascinating and interesting than transistors, memory caches, and storage media.


You would probably be surprised to learn that computational theory has little to no talk of "transistors, memory caches, and storage media".

You could run Crysis on an abacus and render it on a board of colored pegs if you had the patience for it.

It cannot be stressed enough that discovering computation (solving equations and making algorithms) is a different field than executing computation (building faster components and discovering new architectures).
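That separation is easy to make concrete (a toy sketch of my own): once any substrate gives you one reliable NAND -- pegs, transistors, neurons -- all of Boolean logic follows, and nothing above the first function knows or cares what it's made of:

  # All of Boolean logic from a single primitive. `nand` could be
  # transistors, colored pegs, or marbles; the layers above don't care.
  def nand(a, b):
      return 1 - (a & b)

  def not_(a):    return nand(a, a)
  def and_(a, b): return not_(nand(a, b))
  def or_(a, b):  return nand(not_(a), not_(b))
  def xor_(a, b): return and_(or_(a, b), nand(a, b))

  def half_adder(a, b):
      """Add two bits: the first rung on the ladder up to Crysis."""
      return xor_(a, b), and_(a, b)        # (sum, carry)

  print(half_adder(1, 1))                  # (0, 1): one plus one is binary 10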


Not surprised at all.

My point is that it takes more hand-waving and magical thinking to anthropomorphize LLM systems than it does to treat them as what they are.

You gain nothing from understanding them as if they were no different from people, or from philosophizing about whether a Turing machine can simulate a human brain. That's fine for a science fiction novel asking what it means to be a person, or questioning the morality of how we treat people we see as different from ourselves. It's not useful for understanding how an LLM works or what it does.

In fact, I say it's harmful, given the emerging studies on the cognitive decline that comes from relying on LLMs to replace skill use, and on the psychosis being observed in people who really do believe that chatbots are a superior form of intelligence.

As for brains, it might be that what we observe as “reasoning” and “intelligence” and “consciousness” is tied to the hardware, so to speak. Certainly what we’ve observed in the behaviour of bees and corvids has had a more dramatic effect on our understanding of these things than arguing about whether a Turing machine locked in a room could pass as human.

We certainly don’t simulate climate models in computers, call them “Earth,” and try to convince anyone that we’re about to create parallel dimensions.

I don’t read Church’s paper on Lambda Calculus and get the belief that we could simulate all life from it. Nor Turing’s machine.

I guess I'm just not easily awed by LLMs and neural networks. We know that they can approximate any function within some epsilon, given an unbounded network. But if you restate the theorem formally, it loses much of its power to convince anyone that this means we could simulate any function. Some useful ones, sure, and we know that we can optimize computation to perform particular tasks, but we also know what those limits are, and for most functions, I imagine, we simply do not have enough atoms in the universe to approximate them.
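For reference, the formal statement being gestured at (the classic one-hidden-layer version, written from memory, so treat it as a sketch; sigma is a fixed sigmoidal nonlinearity):

  \forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\, a_i, b_i \in \mathbb{R},\, w_i \in \mathbb{R}^d :\quad
  \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} a_i\, \sigma(w_i \cdot x + b_i) \Big| < \varepsilon

Note that it only asserts some finite width N exists for each f and epsilon; it says nothing about N being small, which is exactly the atoms-in-the-universe problem.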

LLMs and NNs and all of these things are neat tools. But there's no explanatory power gained by fooling ourselves into treating them like they are people, could be people, or behave like people. It's a system composed of data and algorithms to perform a particular task. Understanding it this way makes it easier, in my experience, to understand the outputs they generate.


I don't see where I mentioned LLMs or what they have to do with a discussion about compute substrates.

My point is that it is incredibly unlikely the brain has any kind of monopoly on the algorithms it executes. Contrary to your point, a brain is in fact a computer.


> Contrary to your point, a brain is in fact a computer.

Whether a brain is a computer is entirely resolved by your definition of computer. And being definitional in nature, this assertion is banal.


> philosophizing about whether a Turing machine can simulate a human brain

Existence proof:

  * DNA transcription (a Turing machine, as per (Turing 1936))
  * Leads to Alan Turing by means of morphogenesis (Turing 1952)
  * Alan Turing has a brain that writes the two papers
  * Thus proving he is at least a Turing machine (by writing Turing 1936)
  * And capable of simulating chemical processes (by writing Turing 1952)
Turing 1936: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

Turing 1952: https://www.dna.caltech.edu/courses/cs191/paperscs191/turing...


>This is the worst possible take. It dismisses an entire branch of science that has been studying the brain for decades. Biological brains exist, we study them, and no they are not like computers at all.

They're not like computers, but only in superficial ways that don't matter.

They're still computational apparatus, with a not-that-dissimilar (if way more advanced) architecture.

Same as how 0s and 1s aren't vibrating air molecules; they can still encode sound just fine.
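The sound analogy is literal, by the way. A toy sketch of PCM, the scheme WAV files use (parameters mine):

  # One second of a 440 Hz tone as 16-bit PCM: vibrating air,
  # encoded as nothing but a stream of integers, i.e. 0s and 1s.
  import math

  SAMPLE_RATE = 44100                      # samples per second (CD quality)
  FREQ = 440.0                             # A4

  samples = [
      int(32767 * math.sin(2 * math.pi * FREQ * t / SAMPLE_RATE))
      for t in range(SAMPLE_RATE)          # one second of audio
  ]
  print(samples[:5])                       # just numbers, all the way down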

>Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

Not begging the question matters even more.

This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said what the brain does can't be described by an algorithm?


> An algorithm is an algorithm. A computer is a computer. These things matter.

Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, then there's no reason to think they're restricted to humanity.

It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.

That said, nobody is confused that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in others. But pointing those differences out is not a logical argument about their ultimate abilities.


> Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation

Worth noting that a significant majority of the US population (though not necessarily of developers) does in fact believe that, or at least belongs to a religious group for which that belief is commonly promulgated.


I think computation is an abstraction, not the reality. Same with math. Reality just is, humans come up with maps and models of it, then mistake the maps for the reality, which often causes distortions and attribution errors across domains. One of those distortions is thinking consciousness has to be computable, when computation is an abstraction, and consciousness is experiential.

But it's a philosophical argument. Nothing supernatural about it either.


You can play that game with any argument. "Consciousness" is just an abstraction, not the reality, which makes people who desperately want humans to be special attribute it to something beyond the reach of any other part of reality. It's an emotional need, placated by a philosophical outlook. Consciousness is just a model or map for a particular part of reality, and ironically, focusing on it as somehow the most important thing makes you miss reality.

The reality is, we have devices in the real world that have demonstrable, factual capabilities. They're on the spectrum of what we'd call "intelligence". And therefore, it's natural that we compare them to other things that are also on that spectrum. That's every bit as much factual, as anything you've said.

It's just stupid to get so lost in philosophical terminology that we have to dismiss them as mistaken maps or models. The only people doing that are hyper-focused on how important humans are, and on what makes them identifiably different from other parts of reality. It's a mistake that the best philosophers of every age keep making.


I recommend starting here...

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

The argument you're attempting to have, and I believe failing at, is one of resolution of simulation.

Consciousness is 100% computable. Be that digitally (electrically), chemically, or quantumly. You don't have any other choices outside of that.

Moreover, consciousness/sentience is a continuum running from very basic animals to the complexity of the human inner mind. Consciousness didn't just spring up; it evolved over millions of years, and is therefore made up of parts that are divisible.


Reality is. Consciousness is... questionable. I have one. You? I don't know; I'm experiencing reality, and you seem to have one, but I can never know it.

Computations, on the other hand, describe reality. And unless human brains somehow escape physical reality, this description should surely apply to them as well. There are no stronger computational models than a Turing machine; ergo, whatever the human brain does (regardless of implementation) should be describable by one.


>Reality is.

Look into quantum mechanics much and you may even begin to doubt that. We're just a statistical outcome!


Worth noting that this is the thesis of Seeing Red: A Study in Consciousness. I think you'll find it a good read, even if I disagreed with some of the ideas.

Silicon is not a dynamic structure; silicon does not reengineer and reconfigure itself in response to success/failure or rules discovery.

The atoms of your body are not dynamic structures; they do not reengineer or reconfigure themselves in response to success/failure or rules discovery. So by your own logic, you cannot be intelligent, because your body is running on a non-dynamic structure. Your argument lacks an appreciation for higher-level abstractions built on non-dynamic structures. That's exactly what is happening in your body, and also with the software that runs on silicon. Unless you believe the atoms in your body are "magic" and fundamentally different from the atoms in silicon, there's really no merit in your argument.

>>The atoms of your body are not dynamic structures; they do not reengineer or reconfigure themselves in response to success/failure or rules discovery.<<

You should check out chemistry and nuclear physics; it will probably blow your mind.

It seems you have an inside scoop, so let's go through what is required to create a silicon logic gate that changes function according to past events and projected trends.


You're ignoring the point. The individual atoms of YOUR body do not learn. They do not respond to experience. You categorically stated that any system built on such components cannot demonstrate intelligence. You need to think long and hard before posting this argument again.

Once you admit that higher-level structures can be intelligent, even though they're built on non-dynamic, non-adaptive technology -- then there's as much reason to think that software running on silicon can do it too. Just like the higher-level chemistry, nuclear physics, and any other "biological software" running on top of the non-dynamic, non-learning atoms of your body.


>>The individual atoms of YOUR body do not learn. They do not respond to experience<<

You are quite wrong on that. That is where you are failing to understand; you can't get past that idea.

There is also a large difference in scale. Your silicon is going to need assembly/organization on the scale of individual molecules, and there will be self-assembly required, as that level of organization is constantly changing.

The barrier is mechanical-scale construction as the basic unit of function; that is why silicon and code can't adapt, can't exploit hysteresis, can't alter their own structure and function at an existentially fundamental level.

You are holding the wrong end of the stick. Biology is not magic; it is a product of reality.


No, you're failing to acknowledge that your own assertion, that intelligence can't be based on a non-dynamic, non-learning technology, is just wrong. And not only wrong: proof to the contrary is demonstrated by your very own existence. If you accept that you are, at the very base of your tech stack, just atoms, then you simply must acknowledge that intelligence can be built on top of a non-learning, non-dynamic base technology.

All the rest is just hand-waving that it's "different". You're either atoms, or you're somehow atoms + extra magic. I'm assuming you're not going to claim that you're extra magic, in which case your assertions are just demonstrably false, and predicated on unjustified claims about the nature of biology.


So you are a bot! I thought so. Not bad, you're getting better at acting human!

Atoms are not the base of the stack; you need to look at virtual annihilation and decoherence to get close to the base. There is no magic, biology just goes to the base of the stack.

You can't access that base with such coarse mechanisms as deposited silicon. That's because it never changes; it fails at times and starts over.

Biology is constantly changing; it's tied to the base of existence itself. It fails, and varies, until failure is an infeasible state.

Quantum "computers" are something close to where you need to be, and a self-assembling, self-replenishing, persistent ^patterning^ constraint is going to be of much greater utility than a silicon abacus.


Silicon is not dynamic, but code is.

The output of a silicon system that reprograms itself, and the output of a neural system that rearranges itself, are indistinguishable.


Sorry, but you are absolutely wrong on that one; you yourself are absolute proof.

Not only that, code is only as dynamic as the rules of the language will permit.

Silicon and code can't break the rules, or change the rules; biological, adaptive, hysteretic, out-of-band informatic neural systems do. And, to repeat: silicon and code can't.


Programming languages are Turing complete... the boundary is mathematics itself.

Unless you are going to take the position that neural systems transcend mathematics (i.e., they are magic), there is no theoretical reason that a brain can't run on silicon. It's all just numbers; no magic spirit energy.

We've had evolutionary algorithms and programs that train themselves for decades now.
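For anyone who hasn't seen one, a toy evolutionary algorithm fits in a dozen lines (the target and parameters are hypothetical):

  # Minimal evolutionary algorithm: mutate candidates, keep the fittest.
  # No step is hand-coded with knowledge of the answer.
  import random

  TARGET = 42.0                            # toy problem: find this number

  def fitness(x):
      return -abs(x - TARGET)              # higher is better

  population = [random.uniform(0, 100) for _ in range(20)]
  for generation in range(200):
      population.sort(key=fitness, reverse=True)
      survivors = population[:5]           # selection
      population = [x + random.gauss(0, 1) # mutation
                    for x in survivors for _ in range(4)]

  print(round(max(population, key=fitness), 2))   # typically ~42.0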


Mathematics has a problem with uncertainties, and that is why math, as structured, can't do it. Magic makes a cool strawman, but there is no magic; you need to refine your awareness of physical reality. Solid-state silicon won't get you where you want to go. You should look at colloidal systems [however, that leads to biology] or, if energetic constraints are not an issue, plasma-state quantum "computation".

Also, any such thing that is generated must be responsive to the consequences of its own activities, capable of meta-training rather than being locked into a training program: a system of aligned, emergent outcomes.


I don't know if you are a human or a micro LLM model asked to make smart-sounding big-word statements.

Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.

Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.

Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.
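A sketch of that split, with toy dimensions of my own choosing: the "recipe" below is identical for a random model and a trained one; the capability difference lives entirely in the numbers inside the weight matrices.

  # One transformer-style feed-forward block. This code is the whole
  # "algorithm"; everything the model "knows" is in W1 and W2, which
  # training fills in -- nothing here is a hand-coded rule.
  import numpy as np

  d_model, d_hidden = 8, 32                # tiny, illustrative sizes
  W1 = np.random.randn(d_model, d_hidden)  # trained weights would go here
  W2 = np.random.randn(d_hidden, d_model)

  def feed_forward(x):
      """Matrix multiply, nonlinearity, matrix multiply. That's it."""
      return np.maximum(x @ W1, 0) @ W2    # ReLU between two matmuls

  x = np.random.randn(d_model)             # stand-in token representation
  print(feed_forward(x).shape)             # (8,)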


> Biological brains exist, we study them, and no they are not like computers at all.

Technically correct? I think single bioneurons are potentially Turing complete all by themselves at the relevant emergence level. I've read papers describing how they are at least capable of something on the order of solving MNIST.

So a biological brain is closer to a data center (albeit perhaps one with low-complexity nodes).

But there's so much we don't know that I couldn't tell you in detail. It's weird how much people don't know.

* https://arxiv.org/abs/2009.01269 Can Single Neurons Solve MNIST? The Computational Power of Biological Dendritic Trees

* https://pubmed.ncbi.nlm.nih.gov/34380016/ Single cortical neurons as deep artificial neural networks (this one is new to me, I found it while searching!)
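Very roughly, the flavor of those papers (a toy of my own, not their actual models): treat each dendritic branch as a hidden unit with its own nonlinearity, and a single "neuron" is already structurally a small two-layer network rather than a weighted sum.

  # A toy "single neuron" with nonlinear dendritic subunits feeding a
  # nonlinear soma -- schematically a two-layer network in one cell.
  import numpy as np

  rng = np.random.default_rng(0)
  n_inputs, n_branches = 64, 8                    # hypothetical sizes
  W_dendrites = rng.standard_normal((n_branches, n_inputs))
  w_soma = rng.standard_normal(n_branches)

  def neuron(x):
      branch_out = np.tanh(W_dendrites @ x)       # per-branch nonlinearity
      return float(np.tanh(w_soma @ branch_out))  # somatic integration

  print(neuron(rng.standard_normal(n_inputs)))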


Obviously any kind of model is going to be a gross simplification of the actual biological systems at play in various behaviors that brains exhibit.

I'm just pointing out that not all models are created equal and this one is over used to create a lot of bullshit.

Especially in the tech industry, where we're presently seeing billionaires trying to peddle a new techno-feudalism wrapped up in the mystical hokum language of machines that can "reason."

I'm not saying the computational interpretation can't possibly lead to interesting results or insights, but I do hope that the neuroscientists in the room don't get too exhausted by the constant stream of papers and conference talks pushing out empirical studies.


> There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the ’60s, and it needs to be known that it was bunk then and continues to be bunk.

I do have to react to this particular wording.

RNA polymerase literally slides along a tape (DNA strand), reads symbols, and produces output based on what it reads. You've got start codons, stop codons, state-dependent behavior, error correction.

That's pretty much the physical implementation of a Turing machine in wetware, right there.

And then you've got ribosomes reading RNA as a tape. That's another case where Turing seems to have been very prescient.

And we haven't even gotten into what the proteins then get up to after that yet, let alone neurons.

So calling the 'computational interpretation' bunk while there are literal Turing machines running in every cell might be overstating your case slightly.
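A toy version of the tape-reading idea (illustrative only, with a three-entry codon table):

  # A toy ribosome: scan for the start codon, read the tape three
  # symbols at a time, halt on a stop codon. State-dependent behavior
  # over a linear tape -- the shape Turing described in 1936.
  CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly"}   # tiny excerpt
  STOP = {"UAA", "UAG", "UGA"}

  def translate(mrna):
      start = mrna.find("AUG")             # scan for the start codon
      protein = []
      for i in range(start, len(mrna) - 2, 3):
          codon = mrna[i:i+3]
          if codon in STOP:                # halting condition
              break
          protein.append(CODON_TABLE.get(codon, "?"))
      return protein

  print(translate("GCAUGUUUGGCUAA"))       # ['Met', 'Phe', 'Gly']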


To the best of our knowledge, we live in a physical reality with matter that abides by certain laws.

So personal beliefs aside, it's a safe starting assumption that human brains also operate with these primitives.

A Turing machine is a model of computation that was created in part so that "a human could trivially emulate one" (and I'm not talking about the Turing test here). We also know that there is no stronger model of computation than a Turing machine; ergo, anything a human brain could do should in theory be doable by any other machine capable of emulating a Turing machine, be it silicon, an intricate game of Life board, or PowerPoint.
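For concreteness, "emulating a Turing machine" is a genuinely small ask; here's a toy emulator (the machine and transition table are mine):

  # A complete Turing-machine emulator. Anything that can run this
  # loop -- silicon, Life gliders, PowerPoint -- is, per the
  # Church-Turing thesis, as computationally strong as anything else.
  def run(table, tape, state="start"):
      tape = dict(enumerate(tape))         # sparse, unbounded tape
      head = 0
      while state != "halt":
          symbol = tape.get(head, "_")     # blank cells read as "_"
          state, write, move = table[(state, symbol)]
          tape[head] = write
          head += 1 if move == "R" else -1
      return "".join(tape[i] for i in sorted(tape))

  # A machine that inverts a binary string, then halts on the blank.
  flip = {
      ("start", "0"): ("start", "1", "R"),
      ("start", "1"): ("start", "0", "R"),
      ("start", "_"): ("halt",  "_", "R"),
  }
  print(run(flip, "10110"))                # prints 01001_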


It's better to say we live in a reality where physics provides our best understanding of how that fundamental reality behaves consistently. Saying it's "physical" or follows laws (causation) is making an ontological statement about how reality is, instead of how we currently understand it.

Which is important when people make claims that brains are just computers and LLMs are doing what humans do when we think and feel, because reality is computational or things to that effect.


There are particular scales of reality you don't need to know about, because the statistical outcome is averaged along the principle of least action. A quantum particle could disappear, hell, maybe even an entire atom. But anything larger than that becomes horrifically improbable.

I don't know if you've read Permutation City by Greg Egan, but it's a really cool story.

Do I believe we can upload a human mind into a computing machine and simulate it by executing a step function and jump off into a parallel universe created by a mathematical simulation in another computer to escape this reality? No.

It's a neat thought experiment but that's all it is.

I don't doubt that one day we may figure out the physical process that encodes and recalls "memories" in our minds by following the science. But I don't think the computational model alone offers anything useful other than the observation that physical brains don't load and store data the way silicon can.

Could we simulate the process on silicon? Possibly, as long as the bounds of the neural net don't require us to burn this part of the known universe to compute it with some hypothetical machine.


That's a very superficial take. "Physical" and "reality" are two terms that must be put in the same sentence with _great_ care. The physical is a description of what appears on our screen of perception. Jumping all the way to "reality" is the same as inferring that your colleague is made of luminous RGB pixels because you just had a Zoom call with them.

The deepest laws of physics are immutable; the derivative, rules-based assemblages are not.

Human brains break the rules on a regular basis.

If you can't reach the banana, you break the constraints once you realize the crates about the room can be assembled into a staircase.




