Can a simulation experience pleasure and pain?

Thoughts on simulating a human-like mind (part 1)

There have been many attempts at simulating the human mind. Most of them, like the Blue Brain Project, take the precise route: simulating the most basic elements, such as neurons, and wiring them together like in a real brain. This may yield equally precise results of particular interest to neurobiologists; however, such simulations are computationally expensive, hard to execute, and still lack the necessary precision in the input data.

On the other side of the AI spectrum we’ve got machine learning algorithms, often driven by neural networks, but typically characterized by narrow, rigidly mathematical outputs.

I’d like to propose a higher-level simulation of the human psyche, rather than the brain itself. I borrow and expand on ideas such as semantic computing and association rule learning to create a conceptual, self-expanding, autonomous simulation.

Mind: A highly associative database

Daniel Kahneman’s deconstruction of the mind into “automatic System 1” and “slow-thinking System 2” brought to my attention the associative nature of our minds, and inspired me to translate it into a simple system of connected nodes.

Imagine the value of each node representing a concept – it may hold any data, such as text, a number, an image, or an executable action. New nodes are added to the system when new “experiences” are recorded. A number of nodes are preprogrammed, which I’ll later call “instincts”. Each connection between nodes holds a strength value that determines the order of search and propagation. Cluster-nodes might also appear when a new concept emerges out of a combination of previously known ones. These would most commonly be specific instances of things vs. general concepts (e.g. a specific apple vs. apples in general) and compound concepts like [bun + sausage] = [hot-dog].
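
To make the structure concrete, here is a minimal sketch of such an associative database. All names (Node, Mind, connect, cluster) and the default strength values are illustrative assumptions, not a fixed design:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: object                               # any payload: text, a number, an image, an action
    instinct: bool = False                      # preprogrammed "instinct" nodes
    links: dict = field(default_factory=dict)   # neighbour key -> connection strength

class Mind:
    """A toy associative database of concept nodes."""

    def __init__(self):
        self.nodes = {}

    def add(self, key, value, instinct=False):
        self.nodes[key] = Node(value, instinct)
        return self.nodes[key]

    def connect(self, a, b, strength=0.1):
        # Symmetric connection; strength determines search and propagation order.
        self.nodes[a].links[b] = strength
        self.nodes[b].links[a] = strength

    def cluster(self, key, parts):
        # A cluster-node for a compound concept, e.g. [bun + sausage] = [hot-dog].
        self.add(key, list(parts))
        for part in parts:
            self.connect(key, part, strength=0.5)
        return self.nodes[key]
```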

Repetition and context

When we’re learning a poem by heart, repetition strengthens the neural connections responsible for keeping that poem in our memory. Likewise, every time a node connection is accessed (or simply traversed), it gets reinforced. But with each access, the node also makes new connections, and reinforces existing ones, with the nodes in the “buffer of context”. That “context” is a list of the most recently accessed nodes, the “short-term memory” of the simulation, so to speak. To make an analogy: a specific smell, or a long-unheard song, can remind us of a moment from our past, because, consciously or not, the concept of that moment got connected with the concept of that smell. This side-effect feature is one of the most defining and humanizing factors of the simulation, but also the source of the simulation’s potential to become “alive” by starting to do things “on its own”.
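
A sketch of how this might work in code, building on the Mind structure above: a fixed-size buffer of recently accessed nodes, reinforced on every access. The buffer size and the default intensity are arbitrary assumptions.

```python
from collections import deque

CONTEXT_SIZE = 7  # assumed capacity of the short-term "buffer of context"

class ContextualMind(Mind):
    def __init__(self):
        super().__init__()
        self.context = deque(maxlen=CONTEXT_SIZE)  # most recently accessed node keys

    def access(self, key, intensity=0.1):
        # Every access reinforces (or creates) connections between the accessed
        # node and everything currently in the context buffer: the mechanism
        # behind a smell reminding us of a moment from our past.
        for recent in self.context:
            if recent == key:
                continue
            strength = self.nodes[key].links.get(recent, 0.0) + intensity
            self.nodes[key].links[recent] = strength
            self.nodes[recent].links[key] = strength
        self.context.append(key)
        return self.nodes[key]
```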

Trauma, pain and pleasure

Since the context of our experiences is not always relevant or worth remembering, we are constantly polluting our minds – and in this case the simulation – with extraneous data.

People often develop irrational phobias because they experienced something traumatic in the past. The event made such a strong impression on the person that it created an equally strong connection with its context.

Such connections are the basis of the evolutionary feedback loop that kept our ancestors alive: if I put a finger into a fire and get negative feedback, I’m not likely to do that again. By the same mechanism, however, a person who is raped in a blue-painted room might develop an irrational fear of blue rooms as an unwanted side effect.
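
In terms of the sketch above, a single high-intensity access binds the event to whatever incidental context happens to sit in the buffer. The node names and the intensity of 1.0 (standing in for the strength of the impression) are illustrative assumptions:

```python
mind = ContextualMind()
mind.add("avoid", "instinct: avoid", instinct=True)
mind.add("blue-room", "a room painted blue")
mind.add("trauma", "the traumatic event")
mind.connect("trauma", "avoid", strength=1.0)  # the event itself is aversive

mind.access("blue-room")               # the incidental context enters the buffer
mind.access("trauma", intensity=1.0)   # the event makes a strong impression

# Side effect: the irrelevant context is now strongly bound to the event,
# and through it to "avoid" - an irrational fear of blue rooms.
print(mind.nodes["blue-room"].links["trauma"])  # 1.0
```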

It is interesting to note that such a simulation would be prone to trauma and even mental illness: preoccupation, distraction, perhaps even schizophrenia. Suppose we expose the simulation to certain inputs together with a rewarding stimulus, one that strengthens their connections with the instinct-concept of “want more of”. Could we call the simulation’s experiences an equivalent of pleasure? Would exposing the simulation to negative stimuli, and hence strengthening the correlation with the “avoid” instinct, make it into a “painful” experience?
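
One way to make the question concrete in code is to treat “pleasure” and “pain” as nothing more than reinforcement toward the two instinct nodes. The helpers below are a hypothetical sketch on top of the Mind structure above; the node keys "want-more" and "avoid" are assumed to exist as preprogrammed instincts.

```python
def stimulate(mind, key, instinct_key, amount):
    # Strengthen the connection between a concept and an instinct node.
    strength = mind.nodes[key].links.get(instinct_key, 0.0) + amount
    mind.nodes[key].links[instinct_key] = strength
    mind.nodes[instinct_key].links[key] = strength

def reward(mind, key, amount=0.5):
    # A candidate analogue of "pleasure": pull the concept toward "want-more".
    stimulate(mind, key, "want-more", amount)

def punish(mind, key, amount=0.5):
    # A candidate analogue of "pain": pull the concept toward "avoid".
    stimulate(mind, key, "avoid", amount)
```

Whether repeatedly calling reward() means the simulation experiences anything is precisely the open question; the code only makes the mechanics explicit.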

It might only be a philosophical question today, but it is a pressing one to discuss. It’s worth noting that consciousness is arguably not required to experience pain and pleasure, so how do we define those when we’re dealing with an AI?

What’s next?

If you’re interested in the topic, just drop me an email at b (at) invent.life, as I’d be very happy to chat with you and go deeper into the idea. I’m also open to research collaboration if you’re somebody who’d like to give such a project a shot.
Thanks for reading.