
Serious question: how can consciousness be an illusion? I’m not even sure what that would mean. One could argue that many things are illusory (e.g. the external world). But it’s quite hard to dismiss the notion of actual experience occurring in the world. Our experiences could lead us to false conclusions, sure, but we cannot deny their existence.


The experiences are real, but the homunculus behind your eyes that you think of as yourself is an illusion. In reality there are a bunch of fairly independent processes talking to each other, sharing the perceptions, reacting to them; feedback from those reactions again becomes a perception, a "narrator" process often gives running commentary in your mother tongue, etc. The interaction of those processes feels like a unified thing, "you", and that's the illusion. This illusion has adaptive advantages, because it helps the organism take care of its needs, so evolution selects for it.

That's my take, maybe not exactly the same as Damasio and Seth's, but compatible with theirs.


If I understand correctly, you're referring to the "easy" problem of consciousness, i.e. the "mechanistic" explanation of how a self could be constructed by the brain. That's an interesting question and I think your take is a coherent and plausible one (from a materialistic perspective). However, I still think this doesn't get around the hard problem of why these interactions actually feel like anything. I've never heard a satisfying materialistic explanation of that. Do you believe the interactions could in principle be implemented on any Turing machine? Or are they substrate dependent?


Actually I do; but... 1) such a Turing machine may have to be a lot more powerful than anything we currently have. With or without invoking QM mechanisms, there is reason to believe that every single neuron does a lot more computation than our simplistic models in current ML neural nets. 2) it may not be possible to "program" a machine to be conscious in the way we feel conscious; we'd probably have to literally evolve it, i.e. in a rich simulated environment, starting with simple artificial "organisms" that "feel" this environment and then getting progressively more complex.

But I do believe in Wolfram's "principle of computational equivalence", and thus that anything that can implement a Turing machine can also implement any other complex system, including consciousness.


I guess this is where we differ. I don't see a sufficient reason to believe that increasing computational capacity/complexity alone gives rise to consciousness. Moreover, I think there are common sense reasons to believe that consciousness is not substrate independent. Therefore, I don't see it as obvious that Turing completeness is sufficient for consciousness. For example, as someone else on this post has pointed out, a sufficiently complex water pipeline can implement a Turing machine. However, I doubt it would ever be conscious, no matter how large we make it. I think representing and processing information is orthogonal to experiencing.


I think we do agree... "complexity alone" will certainly not give rise to consciousness. Consciousness begins with feeling and separating the "I" from the "other": I feel hunger; I feel that there is food. That's why I said we'd have to evolve it in a simulated environment, one in which there are things for a nascent consciousness to feel. So in that sense yes, it depends on the substrate, but the substrate could be virtual, simulated on a powerful enough Turing machine.


You haven't explained why you think they shouldn't "feel like anything". How do you distinguish "feeling like anything" from anything else you experience?


Well, as far as I'm aware notions of feeling or experiencing are not accounted for in our current physical models. Does an electron feel anything? On the one hand, if it does, it seems to me like physical models have to be extended to include some primitive form of consciousness. This would be something like panpsychism. On the other hand, if single electrons do not have consciousness, why do large collections of them in specific structures have it? Note that, to me, it seems insufficient to say that collections of electrons can be used to model or compute with. Namely because it raises the question of why this modeling has a feeling tone (qualia) to it.

Finally, I don't know if I can make a meaningful distinction between feeling and experiencing. I believe a feeling is an experience.


I think they were referring to Daniel Dennett, not Antonio Damasio or Anil Seth. Damasio is interested in how the self is constructed (so, similar to your idea) while Seth is saying that experience is a controlled hallucination. Dennett is the one claiming consciousness is an illusion. These are three very distinct research programs.


Except that they all agree that there is no "hard problem of consciousness"; that's the part that's illusory. I think Dennett takes the illusion position too far, so I think I'm somewhere between those three. But there are others who take it even further, such as Graziano in "Consciousness and the Social Brain", whose "consciousness is an attention schema" position seems so nonsensical to me that I can't even explain it despite having read the book.


Either it's inane wordplay or it's obviously nonsense for most definitions of consciousness. An illusion must be perceived, so in that sense the proposition is self-refuting. Also, you have empirical evidence against it: you know when you're conscious and that sometimes you aren't (and that it's a spectrum). Thirdly, you can define consciousness behaviorally. Fourthly, if I'm not mistaken, you can measure its presence neurologically. To say that all of this is an illusion is stupidity, or being very obtuse for attention (like clickbait).



