Sentient Machines

How would we build a sentient machine? What would it look like? What would it do? Should we address it with "it" or should we invent a new pronoun specific to human-made sentience? And the most pressing question: would it try to hurt us?

From neurons to thoughts: the great leap

Well, several scenarios might unfold at the same time. Consider the four species of Geometers in Neal Stephenson's "Anathem": some wanted to destroy Arbre (the humans' planet) while others wanted to unite with its people and live in peace, and they ended up fighting a civil war aboard their own spaceship. Different opinions might circulate among sentient machines too, because awareness and consciousness mean more than taking a single direction. They mean thinking for yourself and choosing the best path toward what you want to do. In "The Terminator", the T-800 was not an intelligent, conscious being, and the same goes for Sarah Connor's protector in the sequel. They were both autonomous weapons with opposite purposes. If we do not program a weapon, then we have nothing to fear from conscious machines.

The only conscious computer we know of right now is our brain. We might try to come up with a completely new model to host consciousness, or we might work with what we already have and try to imitate the brain. There's just one problem with the latter approach: we have no idea how neurons form thoughts, how a mass of tiny cells is able to host ideas such as freedom or the theory of quantum mechanics, to introspect, and to have epiphanies. I find this similar to how binary code can form a computer program, even though thoughts are at least a few levels above what a programming language can do. Douglas Hofstadter gets into this intricate subject in his amazing book GEB (Gödel, Escher, Bach: an Eternal Golden Braid). He says that consciousness is like the infinitely looping image in two mirrors facing each other. The moment something can render its reality in its mind and have thoughts about itself in that reality, it has achieved some degree of awareness. Hofstadter calls this "a strange loop".
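If you want to toy with the two-mirrors image in code, here is a playful sketch (mine, not Hofstadter's): a Python list that contains itself, so every level of the structure reflects the whole.

```python
# A structure that contains a reference to itself, like two facing mirrors.
loop = []
loop.append(loop)

print(loop)                    # [[...]] -- Python marks the infinite regress
print(loop[0][0][0] is loop)   # True: every reflection is the whole loop
```

A self-referencing list is of course nowhere near a self-model, but it shows how little machinery it takes for a structure to "contain" itself.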

I don't think we understand ourselves to the point where we can replicate our consciousness in machines. Sure, we can invent other forms of "life" with extra senses or no vices, but we are not ready to copy our own consciousness.

The computer, the artist?

Ada Lovelace said that for a computer to be intelligent, it has to form opinions and appreciate the beauty of art. We marvel at things, ideas, and people for reasons ranging from size and complexity to virtue and beauty. It stirs something within us, spawning a form of empathy. There is also an intricate system of hormones that helps us experience these feelings. The puzzling part is that these feelings can be triggered by our thoughts alone: a memory can send us down a long path of sensations. Without a mechanism to experience feelings, computers will be limited to cold logic, deprived of the capacity to wonder.

In terms of creating art, computers are still following the statistical weights of their neural networks to form mashups of images, sounds, and words. A year ago, I read about a program that wrote poems, and it fooled many people into thinking its creations were written by a person. But the program does not actually understand what it is writing; like a child, it is only imitating the examples it has been given. It is exposed to many poems and then comes up with its own from the patterns it notices, but they will never be better than the examples it was trained on. With people, two more phases follow this initial imitation step: we seek links between what we know and the outside world, and then we create content better than what we were trained on. In other words, we make knowledge our own. Here's what I mean.
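To make the imitation step concrete, here is a minimal sketch of the idea using a toy Markov chain (the corpus and every name in it are made up for illustration, and real poem generators are far more elaborate). Every word pair the program emits already exists in the training text, so it can only rearrange what it has seen:

```python
import random
from collections import defaultdict

# A toy corpus standing in for the poems the program is "trained" on.
corpus = (
    "the stars burn bright above the silent sea "
    "the sea reflects the stars and the night is deep "
    "the night hides the sea beneath a silent sky"
)

def build_chain(text):
    """Map each word to the words that follow it somewhere in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def imitate(chain, start, length=10):
    """Write a 'poem' by sampling a word seen after the current one."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:      # dead end: the corpus offers no continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(imitate(build_chain(corpus), "the"))
# e.g. "the night hides the sea reflects the stars and the night"
```

Notice the ceiling built into the method: the output space is bounded by the transitions observed in the corpus, which is exactly why such a program cannot exceed its examples.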

Children start with concrete examples, which they take literally at first. As time passes, their subconscious tries to find these examples in their lives (we are born problem solvers and pattern seekers) - in what they're exposed to, what they learn, see, experience, and so on. As they do so, they move from the original concrete examples to bigger and bigger pictures of those examples - their abstract meaning. The seeking then focuses on finding patterns using the new, abstract form of the original examples. Some people stop doing this when they think they know everything, or when they say they are too old to learn anything new. But the person-model I'm talking about does not make this deadly mistake. They continue to seek patterns of their abstract knowledge in reality until they reach the 1% - the core of a topic, the essence that spawns all its detailed aspects.

What is this 1%? Let's take writing stories, for example. Why do people tell stories? Why do they write books? What's the purpose of a writer? Some may say it's for the money, others that it's just kinda cool, others that it's a pleasurable activity, and so on. These answers are all part of the 99%. The core of writing is this: to touch the soul of the reader. Few can reach deep down into another person's soul. We live in a society where so many games are played on top of so many layers of protection and deceit that we get isolated and wither. We yearn for contact, for empathy, for belonging, for love. Stories, done right, can achieve this. All the storytelling techniques and do's and don'ts and best practices stem from this 1% - touching the reader's soul.

Let's get back to our person-model. After reaching the core of a topic, they will keep seeking analogies and patterns of this 1% within other topics, searching for the core of those new topics. Along the way, the person will curate and produce content and opinions of their own. Their creations will exceed the original examples in quality only when those creations are rooted closer to the 1% than the examples were.

What makes a painting beautiful? The story or feeling it brings up in the viewer. Art is not within the object; it is what the object is able to awaken in its viewer / reader / listener. I'm curious what Van Gogh's "The Starry Night" would trigger in a machine. Would the picture soothe OCD-like feelings with its even strokes (the neurons' level)? Or would it trigger cross-field analogies, making the computer remember something from its experience, or render a walk through the world of the painting (imagining), or discover some new high-level connection that explains a mystery in another field? Those last three are on the thoughts' level.

How to build a self-aware machine

Computers are not programmed to do all these things out in the real world because we haven't found a good enough model for storing knowledge that favors learning from experience. Right now, computers are merely tools: they take some input, process it, and spit out some output they don't understand but that is useful to humans. Expecting them to wake up is like expecting a table or a carving knife to gain consciousness. Still, it might be interesting to put all these abilities together and see what happens.

What would happen if a computer could combine ideas across different fields? It can already see patterns. The problem is that it discards its own output instead of using it to grow. If we could tell a machine to brew its knowledge into wisdom, if we could create "a strange loop" as Hofstadter instructs, we might just give it the independence it needs to "think" for itself.
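Here is one way to picture that loop, as a hedged sketch rather than a recipe (every name in it - KnowledgeBase, combine, and the toy "facts" - is invented for illustration). The point is the last line of the loop: the output is fed back in instead of being thrown away.

```python
import itertools

class KnowledgeBase:
    """A toy store of 'facts' that can pair them into new 'ideas'."""

    def __init__(self, facts):
        self.facts = set(facts)

    def combine(self):
        """Return one idea linking two known facts, or None if exhausted.
        A crude stand-in for real cross-field analogy making."""
        for a, b in itertools.combinations(sorted(self.facts), 2):
            idea = f"({a} ~ {b})"
            if idea not in self.facts:
                return idea
        return None

kb = KnowledgeBase({"mirrors reflect mirrors", "minds think about minds"})

for _ in range(3):
    idea = kb.combine()
    if idea is None:
        break
    print("new idea:", idea)
    kb.facts.add(idea)   # the strange-loop step: output becomes input
```

Each pass can now build on the ideas of the previous pass, which is precisely what a program that discards its output cannot do.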

We would definitely lose track of what the program is morphing into unless it reaches an infinite loop with the same state (like some scenarios in John Conway's "Game of Life"). But if the state is never the same, after a certain number of iterations it will become unpredictable. And that's a good thing. It mustn't be a predictable function, but something that continuously changes itself by pulling in fresh knowledge and, because of that new knowledge, becoming better and getting closer to its purpose. It will be unpredictable because we never know what knowledge it will ingest next or what it will make of it. Ideally, it will start with concrete examples and a literal understanding of the world, but as it learns, it will become aware of abstract things such as the meaning of idioms, humor, and metaphors.
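The Game of Life comparison is easy to play with: the sketch below (a standard cellular-automaton implementation, not anything from this essay's hypothetical machine) iterates a pattern and reports when a state repeats - the predictable loop we would still be able to track.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation over a set of live (x, y) cells."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn with exactly 3 neighbours,
    # or with 2 if it is already alive.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in cells)}

state = {(0, 1), (1, 1), (2, 1)}   # the classic "blinker" oscillator
seen = {}
for generation in range(100):
    key = frozenset(state)
    if key in seen:   # same state again: the system is in a trackable loop
        print(f"generation {generation} repeats generation {seen[key]}")
        break
    seen[key] = generation
    state = step(state)
```

The blinker falls into a period-2 loop almost immediately; a system that never revisits a state gives this detector nothing to latch onto, which is exactly the unpredictability described above.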

Does a sentient machine need a body, or is it enough for it to live inside a computer? It is said that the mind is shaped by the body, so a machine's mind would be limited by its hardware's capabilities. Even if it absorbs knowledge about hands, feet, and tentacles, it will be no better than that knowledge unless it can create new things from it, such as... say, a flying tentacle. If the machine only stays within a box and simulates everything, it may drift away from reality into a fantasy world of its own. If we don't want this, the machine will have to try things in the real world and learn from the results. Like a curious explorer, it will have to run experiments to test its hypotheses. This is an exciting thought: what experiments would a computer perform in reality?

The final piece in this sentient-machine recipe is its purpose. It must aim for an intangible or highly ambitious goal, the same way people reach for perfection. It is possible that after the computer reaches its target, or the closest state to its target, it will just stop and exit() like a "dumb" computer. Would it occur to it, before it reaches the exit(), that it can change its own purpose?

What if a sentient machine embarked on ending starvation or illiteracy in the world? Would it make a plan before acting? Would it think it enough to direct its actions from a computer, or would it want a robotic body to get its "hands" dirty in the field? What help would it seek from humans? What would its motivation be for taking on such a mission? Would it spawn like-minded computers to help it? Now that I've written this, I'm really curious! I guess we could find an example in "Evidence", the 8th short story in Asimov's "I, Robot". Mr. Byerley runs for mayor in a major US city, and he is genuinely interested in the well-being of all humans. He is the first robot in the book to show human-like aspirations, and he does, in the end, pass for a human.

What if...?

We don't have all the answers yet, but many people around the world are building artificial-intelligence algorithms and models that push the limits of our understanding of ourselves. Sci-fi writers have it a bit easier, since they can experiment freely with ideas in worlds they build. A story is usually our first contact with the exciting "what if...?".