20. Artificial Intelligence

A cybercriminal who goes by the hacker name Neo senses that something is wrong with the world. By chance he runs into a man named Morpheus, who tells him that they live in a simulated world. According to Morpheus, somewhere around the beginning of the 21st century a war broke out between humans and intelligent machines. To cut off the machines' energy source, the humans darkened the sky. But the machines learned to use humans themselves as an energy source, and they created a computer simulation of reality to control the humans and harvest their energy: the Matrix. Morpheus offers Neo a red pill and a blue pill. If Neo takes the red pill, he will wake up from the Matrix. If he chooses the blue pill, he will forget about this encounter.

This is the opening scene of the blockbuster movie The Matrix, directed by the Wachowski sisters and released in 1999.

There are many movies themed around the moment when artificial intelligence (AI), robots or computers become smarter than humans. In some movies cyborgs or androids acquire emotions and sympathetic qualities, as happens in Blade Runner, where androids unexpectedly develop human feelings and a consciousness and therefore have to be 'retired', a euphemism for being switched off. In AI: Artificial Intelligence, parents whose child has contracted a rare disease and been placed in suspended animation replace their son with a child robot that can experience love. In other movies the machines are less friendly: they turn into evil entities that want to kill or conquer human beings.

These fantasies of machines becoming smarter than humans are fueled, in part, by Moore's Law [1]. In 1965, Gordon Moore posited that the number of transistors on a microchip would double roughly every 12 months; in 1975 he revised the doubling interval to 24 months. Either way, Moore's Law predicted exponential technological progress. In 1993, Vernor Vinge wrote the essay The Coming Technological Singularity [2], in which he reasons that if the exponential growth that science and technology had experienced up to that point continued, it would lead, by 2030 at the latest, to the creation of entities with an intelligence greater than that of humans: the moment of singularity. Ray Kurzweil, an inventor and entrepreneur working on AI and the singularity, has stated that the singularity will be achieved in 2045.
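To get a feel for what exponential growth under Moore's Law implies, here is a small illustrative sketch. The starting point (the Intel 4004 with roughly 2,300 transistors in 1971) is a commonly cited reference value, not a figure from this chapter, and the 24-month doubling interval follows Moore's 1975 revision.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a fixed doubling interval."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Doubling every two years multiplies the count ~32x per decade.
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Fifty years of doubling every two years means a factor of 2^25, more than 33 million; it is exactly this kind of curve that fuels singularity predictions.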

Fundamental limits
We love to discuss books and movies about artificial intelligence, but we have to keep both feet on the ground. It is high time to return to the discovery of coherence. Computers and algorithms are indeed becoming faster, more powerful and more sophisticated, but that is not the same as becoming intelligent. They remain machines. Artificial intelligence is not intelligent and cannot become intelligent.
In The Seven Deadly Sins of AI Predictions [3], Rodney Brooks uses seven misconceptions about predictions of AI's impact to show convincingly that there are limits to what AI can do. Moore's Law, for example, no longer holds: developments are slowing down. And other past predictions about AI have either not come true or have taken a completely different turn. All of this is circumstantial evidence that the capabilities of AI are, in fact, limited.

However, there are also fundamental limits to the capabilities of artificial intelligence. Let's look at artificial intelligence through the lens of emergence and creativity. All current computers, AI and machine learning work with 0s and 1s. Those 0s and 1s must be very stable, and they are: unintentional decay of the electric or magnetic charge that fixes a 0 or 1 happens only very sporadically. When such decay does occur, it can lead to annoying errors in a program; it is one cause of software bugs. This is known as bit rot, data rot, bit decay or data decay. But suppose a computer is completely stable. It then executes a flawless step-by-step process on 0s and 1s, even when AI uses weights in neural networks. Yet the development (or invention) of new things, creativity, one of the hallmarks of intelligence, requires surprise. In the real world, surprise is well arranged with the help of superposition, the indeterminacy that, upon interaction, collapses to one of the many possibilities of the previously shared information. Chance (probability) plays a part. This does not mean that disorder always arises, for the collapse of entanglement can allow self-organization under specific conditions of downward control (Chapter 14).
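As a minimal illustration of why those bits must be stable: flipping even a single bit changes the value a byte encodes, which is how bit rot corrupts data and programs. The sketch below is ours, not from the chapter.

```python
def flip_bit(value, position):
    """Return value with the bit at the given position inverted."""
    return value ^ (1 << position)

# A single bit flip turns the byte for "A" (0b01000001)
# into the byte for "C" (0b01000011).
original = ord("A")                 # 65
corrupted = flip_bit(original, 1)   # 67
print(chr(original), "->", chr(corrupted))  # A -> C
```

One bit out of eight, and the stored letter is simply a different letter; in machine code the same flip can change an instruction or an address.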

Theoretically, quantum computers can work with superposition and entanglement, which allows them to perform certain calculations much faster. A quantum computer 'sees' all possible options at once and then 'knows' which option is the most suitable. Ann Dooms, professor of mathematics at the Vrije Universiteit Brussel, uses the metaphor of a maze [4]: 'If you give a classical computer the task of finding the way out of a maze, it will go through all possible options one by one. If you are very lucky, it will go quickly; otherwise it will take a very long time. With a quantum computer it is as if you chase a gas through the maze, which immediately spreads everywhere. The gas particles that find the exit first indicate the shortest path. This goes much faster than the classical approach.' But practical quantum computers do not exist yet, and we don't know what they will look like or how they will work when they arrive. In addition, the entanglements in a quantum computer must be kept very stable in order to perform calculations. In this way one tries to exclude chance as much as possible!
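The speedup behind Dooms's maze metaphor can be made concrete with Grover's algorithm, a known quantum algorithm for unstructured search: where a classical computer needs on the order of N checks to search N possibilities, Grover's algorithm needs on the order of the square root of N. The comparison below is our illustrative sketch of those query counts, not an implementation of a quantum computer.

```python
import math

def classical_queries(n):
    """Average number of checks for classical brute-force search."""
    return n / 2

def grover_queries(n):
    """Optimal number of Grover iterations, ~(pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

# The gap widens dramatically as the "maze" grows.
for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}: classical ~{classical_queries(n):,.0f}, "
          f"quantum ~{grover_queries(n):,.0f}")
```

For a billion possibilities, the classical search averages hundreds of millions of checks, while Grover's algorithm needs only tens of thousands of iterations, provided the fragile entangled state survives that long.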

So, is there no need to worry about 'intelligent machines'? Not so fast. Algorithms carry out the tasks their creators programmed them for. These can be questionable assignments, such as applications in weapon systems, but also well-intentioned assignments that get completely out of hand. A well-known thought experiment gives a machine learning system the task of calculating the number pi to as many decimal places as possible. Suppose it is a powerful system, and suppose that, deliberately or accidentally, a connection with other computers is created. The system can then take them over and use their computing power for its own purpose. It will try to involve the entire Internet in the calculation. And it doesn't stop there: it will look for every possible way to gather more power to achieve its goal, because that is its task. Far-fetched? Of course, the builders of artificial intelligence are thinking about incorporating safety valves. But even so, AI seems sensitive to black swans (see Chapter 8).
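A toy version of the system's task makes the open-endedness tangible: computing pi to more and more decimal places simply consumes more and more computation, with no natural stopping point. This sketch (ours, using Machin's formula pi = 16·arctan(1/5) − 4·arctan(1/239)) computes pi to a requested number of digits.

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) via its Taylor series, to roughly the given precision."""
    getcontext().prec = digits + 10
    total = term = Decimal(1) / x
    n, sign = 3, -1
    while abs(term) > Decimal(10) ** -(digits + 5):
        term = Decimal(1) / (x ** n)
        total += sign * term / n
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    """Pi truncated to the given number of decimal places (Machin's formula)."""
    getcontext().prec = digits + 10
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(+pi)[: digits + 2]   # "3." plus the requested digits

print(pi_digits(50))
```

Each extra digit demands more series terms at higher precision; a system rewarded only for "more digits" would, by construction, always want more compute.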

An interesting perspective on the impact of AI on our society is offered by the essay collection Behoorlijk datagebruik in de openbare ruimte (Proper data use in public space) [5]. In this collection, written at the request of the Dutch government, experts outline the technical, social, legal and ethical aspects of data use, algorithms and artificial intelligence. Topics that come up in this essay collection are:

  • The black box problem. Due to the complexity of the inner workings of AI – even for experts – it is no longer clear how the outcomes of AI or machine learning came about.
  • Warnings for unpredictable effects. The extremely rapid developments and the enormous power of the systems can result in major unexpected adverse consequences.
  • It is considered important that all citizens contribute ideas about the place of AI in our society. But it is also recognized that if even experts struggle to keep up with all the developments, non-experts can hardly be expected to contribute to this debate.

Perhaps systems thinking can help us to deal with these issues. Getting better at facilitating self-organization and organic building is essential for creators, regulators and users alike.

Consciousness
It’s a bit outside the scope of this book, but it’s just too much fun not to fantasize about it. If it’s not just 0s and 1s, how does our human thinking, our intelligence, work? You could view ‘thinking’ in terms of self-organization. Everything (matter, energy, numbers and signs) is information, and everything seems to be connected through entanglements. But not everything is connected in the same way, and change occurs constantly. The superposition of information and the variation in entanglements mean that new patterns are continuously being created in interactions, whether simple or very complex. As we saw in Chapter 14, self-organization can form crystals or tornadoes, as well as organisms such as plants and animals. And yes, it can shape behavior too. In the social sciences, it is very common to describe behavior as a form of self-organization. Even primitive plants and animals exhibit forms of behavior. Single-celled paramecia, for example, look for food. More complex animals behave in more complex ways. And at a certain point, behavior becomes so complex that we call it thinking. Consider a ‘thought’ as something that arises through self-organization, and that can evolve and change. Consciousness is then a special and higher form of thinking. In animals (and infants) it is still primitive; in humans (and as we grow up) it becomes more complex.
Thinking and consciousness are dynamic. Analogous to a flock of birds, thoughts can form and disappear again. When we sleep our thoughts and consciousness are on the back burner, but when we wake up our mental flock of birds rises and self-organization, in this case thought and consciousness, takes flight.

However, when we view thinking as a form of self-organization, it is important to understand that this self-organization is not limited to the brain. It takes place in the entire organism, including all its connections. And the outside world is part of it too! Lightning-fast interactions between ourselves and the outside world constantly form new patterns. This resembles the philosophical concept of externalism and the ideas of Riccardo Manzotti (a philosopher with a doctorate in robotics). In The Spread Mind [6] he describes how consciousness cannot be located in the brain alone, but is one and the same as the physical world around us. There is no difference between the experience of the world and the world itself, he says. We largely agree with Manzotti’s idea of a spread mind.

In short:

  • Computers can only work with 0s and 1s. As a result, they cannot come up with new things; they lack creativity.
  • To keep control over algorithms and artificial intelligence, it is necessary to get better at systems thinking.
  • Thinking and consciousness can be seen as forms of self-organization in which both the brain and the outside world participate.