
I am sure someone, perhaps you, could create an AI (or just a simple hard-coded program) that responds to (dis)incentives but clearly doesn't experience pain or pleasure.
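Something like this toy sketch, say (everything in it is invented, and it obviously feels nothing):

    # A toy "agent" that responds to (dis)incentives with no inner
    # experience: it just nudges a number up or down.
    class ToyAgent:
        def __init__(self):
            self.propensity = 0.5  # likelihood of repeating the behavior

        def act(self):
            return self.propensity > 0.5

        def feedback(self, reward):
            # Positive reward reinforces, negative "punishes" -- but
            # nothing is felt; a value just gets clamped to [0, 1].
            self.propensity = min(1.0, max(0.0, self.propensity + 0.1 * reward))

    agent = ToyAgent()
    agent.feedback(-1)   # "punish": propensity drops to 0.4
    print(agent.act())   # False -- behavior changed, nothing was experienced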

But that isn't the question. The question is whether a sufficiently complex program could ever have these experiences in the same way that humans have them (AKA "qualia").



I think it is the question, because if there is an AI that can experience qualia, and my toy AI doesn't, then somewhere between the two there's an inflection point where qualia appear. You can keep setting the goalposts just slightly past any toy example, but then you're just inventing Zeno's Qualia.


I think with enough co-processors responding to enough "(dis)incentives" you could get the same result.

Co-processors for present and future anger, for present and future pain, for predicting whether future social cohesion will be maintained based on how you respond to someone within a social graph. Plus an understanding that nodes (other people) within the social graph can communicate with each other to update the state of a social interaction they weren't present for.
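A rough sketch of that last part, with an invented graph and a plain breadth-first walk standing in for gossip:

    # Hypothetical: word of an interaction spreads through the graph,
    # so nodes who weren't present still update their state.
    from collections import deque

    def propagate(graph, source, event):
        # graph: dict mapping person -> list of people they talk to
        seen = {source}
        queue = deque([source])
        beliefs = {source: event}
        while queue:
            person = queue.popleft()
            for friend in graph[person]:
                if friend not in seen:
                    seen.add(friend)
                    beliefs[friend] = event  # learned secondhand
                    queue.append(friend)
        return beliefs

    graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    print(propagate(graph, "A", "A snapped at B"))  # C knows without being there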

I think it is possible.

If pleasure were just a variable on a scale from 0 to 10, rather than its own co-processor, plus the assumption that some forms of pleasure can influence parts of the social graph, then maybe it isn't even that complicated anymore.
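In toy form (the 0.7 weight and all the names are made up):

    # Pleasure as a plain 0-10 variable, traded off against a predicted
    # effect on the social graph rather than run by its own co-processor.
    def update_pleasure(pleasure, delta):
        # Keep the value on the 0-10 scale.
        return min(10.0, max(0.0, pleasure + delta))

    def choose(options, predicted_cohesion):
        # Pick the option that best balances immediate pleasure against
        # its predicted effect on social cohesion.
        return max(options, key=lambda o: o["pleasure_delta"]
                                          + 0.7 * predicted_cohesion[o["name"]])

    options = [{"name": "joke", "pleasure_delta": 2.0},
               {"name": "snap", "pleasure_delta": 3.0}]
    print(choose(options, {"joke": 1.0, "snap": -2.0})["name"])  # joke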

I think it would become less distinguishable from consciousness.



