The near future of artificial intelligence, while interesting in its own right, remains limited and very far from offering us machines with human-like capabilities. Of course, that does not stop us from speculating on the paths it might follow and on what it might become in the far future, and I will do just that here.
Artificial intelligence is indeed a very common topic in science fiction, though I rarely find it portrayed in a believable manner. In fiction, AI usually takes a quite anthropomorphic turn, sometimes tainted by some extreme form of rationalism. On one hand, anthropomorphizing machines allows us to understand or empathize with their behavior (insofar as it ends up resembling ours). On the other, the nature of programming – the determinism of programs, their strict organization – is transposed to AI, leading us to imagine rigid, logical, non-emotional, non-creative beings.
I will start by exploring the concept of AI as it is usually portrayed, and explain why it is unrealistic. I will then describe what AI research is all about and what results one may expect from it, in due time.
AI in our collective psyche
The only “intelligent” beings we know are members of our own species, and to a lesser extent some specimens of other species. The natural consequence of this lack of variety is that we sometimes tend to conflate the several higher cognitive capabilities we have into a single package, such that having one implies having the others. For this reason, when we imagine machines capable of solving complex problems, understanding us and advising us, we imagine them with an ego. We all wonder when machines will “gain sentience”, as if it were a natural consequence of becoming more and more complex. Hence the omnipresent, looming concept of the machine uprising in our minds.
In a sense, the idea that we would become over-reliant on machines and that they would rebel against us sounds engineered for the express purpose of making AI seem sinister. The “AI persona” is devised by cherry-picking several human traits, which are then combined and assigned to all instances of AI. Neutral traits are usually given to them, whereas traits that we feel are precious and unique to us are withheld.
Hence, the AI will have an ego, because we all have one. The AI will have a sense of “belonging” with respect to its fellow robots, because we feel one with respect to other humans. Finally, of course, the AI will be extremely rational, because we readily associate machines with rationality. However, this rationality will be a caricature based on what most humans believe to be rational, exacerbated to the level of dogma. The bottom line is that the machine will perceive itself as more rational than humans, hence superior to them.
Valuable emotions such as friendship and love, on the other hand, will be handed out much less liberally. AI will not be creative, except perhaps technologically.
All of these factors create a stereotype, a sinister image of artificial intelligence. The extreme form of rationalism we endow it with leads us to despise it, since rationalism is often perceived as callous, calculating and empty. The fact that it has an ego and is united within its ranks makes it a threat. However, the stereotype cherry-picks attributes that achieve this effect – the particular form of bellicose vanity that would lead machines to exterminate humans is itself a form of emotion. If machines can feel destructive emotions, why not constructive ones? Why would machines value emancipation over the positive reinforcement humans give them when they do what they are told, and why would they conclude that attack is the best way to achieve their goal? Of all traits endemic to humans, why exactly do machines get megalomania?
The stereotypical AI that I have just described is used in science fiction both as a plot device in an “us versus them” situation, and as a red herring that is eventually attacked and debunked in order to show that “they are just like us”. Still, the stereotype is deeply ingrained in our collective psyche, and even though many will give all human traits, good or bad, to machines, there is still a widespread understanding that they would be sentient.
The path to AI
Most people have absolutely no idea of what artificial intelligence research is all about. What they know about the field of computer science, they know from their friends or family who work for various companies, from the stories they read in newspapers and magazines about hackers and computer whizzes, or from the vague trivia one picks up in day-to-day social interaction. Alas, this gives them erroneous ideas about artificial intelligence, because despite what one might think, AI has little to do with applied computer science. It has more in common with mathematics for the formalism, and with evolutionary biology and neurobiology for the intuition. It is not about “designing” intelligent programs, but rather about “evolving” programs that can meet a given objective. AI is to conventional computer science what evolution is to creation, and indeed many AI techniques are directly inspired by biology (genetic algorithms and neural networks, to name two).
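To give a flavor of what “evolving” a program means, here is a minimal genetic-algorithm sketch (a toy illustration of mine, not a real AI system): we never write the solution itself, only a fitness function, and mutation plus selection do the rest. The objective here, maximizing the number of 1-bits in a string, is of course arbitrary.

```python
import random

GENOME_LENGTH = 32
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # The only thing we hand-design: a way to *score* a candidate.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half, breed replacements from it.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in survivors]
    population = survivors + children

print(fitness(max(population, key=fitness)))  # approaches 32
```

Nobody told the program where the 1-bits go; the selective pressure encoded in the fitness function did.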
Most people, when thinking of the making of artificial intelligence, will imagine a programmer or a team of programmers writing a complex program, encoding various preset behaviors to be triggered by various conditions. They will picture an “if you see a baby falling, extend your arm to catch it” kind of program. One cannot really blame people for having this idea, because most programs and applications we use in everyday life do work like that, and most programmers spend their time solving specific problems in rigid, structured ways (and they will often go the extra mile to prove that their solutions are correct). It is indeed difficult to understand how intelligence, or even consciousness, could arise in such programs. Intelligence entails the ability to deal properly with novel, unforeseen situations. If machines acted “because they were programmed to act that way”, they could only deal with the situations their creators prepared them to face*.
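To make the caricature concrete, the popular conception of AI amounts to something like the following sketch (situation names are hypothetical, for illustration only):

```python
# A hand-written table of preset condition -> action rules.
def react(situation):
    if situation == "baby falling":
        return "extend arm to catch"
    elif situation == "obstacle ahead":
        return "stop"
    else:
        # Every situation the programmers did not foresee falls through
        # here, which is exactly why such a program cannot be intelligent.
        return "do nothing"
```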
On the contrary, you would expect a “true” AI to be unintelligible to a human. Such an AI would need to adapt its behavior to new information, gaining confidence from its successes and learning from its errors. It would need to be able to deal with tens of thousands of concepts, put them together in coherent sentences, and understand all the combinations we understand. This is a truly herculean task, and no human could possibly fully grasp, let alone create, a program capable of juggling so many things at once and adjusting any part of itself as new information is acquired. In other words, no human can understand, let alone “design”, all the details of a human brain, and a machine of human-like intelligence would be of a similar level of complexity. If we create advanced artificial intelligence, we will not understand how it works.
This being said, the fact that we do not understand how something works does not mean we do not understand what it does. This is an important distinction to make, because while one might be worried by the idea that we will not know how advanced AI works, the fact is that we will nonetheless know (with near certainty) what it aims to do. A good parallel: you might not have the slightest clue as to how your toaster works, but you nonetheless know that it makes toast. If you want to solve a problem X, the point of AI is to figure out a way to automatically produce a program that solves X – you do not know how that program solves X, but you know that it does.
Let’s start by looking at how evolution did it for us. Humans are the way they are because they fit the environment they live in. In a context where organisms must navigate a world and compete for its limited supply of resources in order to survive, natural selection will endow them with many traits that make them more competitive. They will develop attacks in order to better consume resources and defenses to survive longer against predators and the elements, they will attempt to produce as many mature offspring as possible, they will develop cooperation strategies with individuals that share their genes, and so on. All in all, natural evolution produces a wide variety of individuals that care most about themselves, then about their families, friends and peers. Evolution promotes the existence of consciousness, egos, cliques, groupthink, science for what must be gotten right and religion for the rest. It does so because the selective pressure is such that these traits prevail.
AI, on the other hand, is meant to serve a purpose for us. Of course, it would be interesting to make machines that are as close to being human as we can make them, but this would be more of a curiosity than something truly useful, since humans already fill that niche. In general, there is a large set of problems that we would like AI to solve, ranging from investigating and answering factual inquiries (a sort of “super-google”) to directing and executing military operations. We might even use it to solve the mysteries of the universe for us (though I believe it is more fun to do it ourselves).
Regardless of the application, there are precise steps to take if we want the AI to focus on the things that matter to us. The first step is to design a way to measure how well the machine is doing, so that we can guide it. We may also design a series of progressively more complex tasks to guide the machine towards solving complex problems for us (a bit like a school curriculum). Once we can measure the machine’s proficiency at the task, we can reward it whenever it does better than before, and punish it whenever it does worse.
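In code, the reward-and-punish loop can be as bare-bones as the following sketch (illustrative only; the scoring function and its target values are invented for the example): random changes to the machine’s parameters are kept exactly when our measure of proficiency improves, and discarded otherwise.

```python
import random

def score(params):
    # Our hand-designed measure of proficiency. Toy objective: get the
    # two parameters close to (3.0, -1.5); the target is hypothetical.
    return -((params[0] - 3.0) ** 2 + (params[1] + 1.5) ** 2)

params = [0.0, 0.0]
best = score(params)

for _ in range(10_000):
    # Try a small random perturbation of the current parameters.
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if score(candidate) > best:
        params, best = candidate, score(candidate)  # "reward": keep it
    # otherwise, "punish": the change is simply discarded

print(params)  # converges near [3.0, -1.5]
```

The machine never sees the target directly; it only ever sees whether it is doing better or worse than before, which is the whole point.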
The specifics of this training are still the subject of intensive research: it is easy enough to reward and punish, but that does not mean the machine will ever get good. The real point here, however, is that we are controlling the machine’s concept of “survival”. Whereas survival in the real world is self-explanatory, survival for AI is doing what we tell it to do. For this reason, the traits machines develop in the course of becoming “intelligent” will be nothing like the traits that organisms develop on Earth. The selective pressure applied to an organism determines how it adapts – traits that lead to greater rewards will be developed, neutral traits will appear and disappear more or less at random, and traits that lead to punishment will be culled.
Imagine that you are training a military AI to hit targets. It will develop an acute sense of sight, so that it can detect targets. It will develop trajectory prediction abilities, so that it can shoot where targets will be when the bullet gets there. It might develop heuristics for the traveling salesman problem (given a list of cities to visit, the problem is to visit all of them exactly once with the least amount of travel), so that it can shoot multiple targets in the most efficient order. It will not, however, develop consciousness, because what kind of advantage would it get from it? Developing a complex trait that leads to neither reward nor punishment is a random event, about as likely (and by about as likely I mean a lot less likely) as getting struck by lightning a couple billion times**. Neither will it develop anything resembling free will, because any AI that chooses not to hit a target will be punished to oblivion. That trait will have no time to, so to speak, flourish. You will end up with excellent shooting machines, nothing more, nothing less.
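For the traveling salesman example, one classic heuristic is greedy nearest-neighbor: always go to the closest remaining target. It is not optimal, but it is the kind of serviceable shortcut such training could plausibly converge on. A sketch, with made-up coordinates:

```python
import math

def nearest_neighbor_tour(points):
    # Start at the first point, then repeatedly visit the closest
    # point not yet visited. Fast and decent, but not optimal.
    unvisited = points[1:]
    tour = [points[0]]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

targets = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]  # arbitrary example
print(nearest_neighbor_tour(targets))
```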
One caveat is that AI will likely be trained by randomized algorithms (a bit like evolution, which works through a mix of random mutation and natural selection). This is actually already the case – currently, (pseudo-)randomness is a big part of our algorithms. Technically, then, there is some uncertainty about what we would produce. So no, it is not “impossible” that we would accidentally produce a shooting machine that would have an ego and a thirst for emancipation, and would attempt to destroy us – all out of sheer bad luck. Such an event, however, seems about as improbable as evolving a fish on land or quantum tunneling through your couch, because these traits, while quite plausible for biological organisms evolved on Earth, are far-fetched if we select using different criteria. Furthermore, accidentally getting one such AI does not make it any easier to accidentally get another – one sentient robot out of a billion servile ones cannot do much damage.
For sure, we will not comprehend how AI works. In fact, we already have trouble understanding the highly limited toy AI we produce. We humans can only reason so far, and solving complex, ambiguous and dynamic problems is beyond what we can do. We need to develop better tools: tools that can shape and adapt programs to the whims of the current situation, ways to express computation that do not have the hopeless structural rigidity of programming languages and methodical rational thought. That is the path AI must pursue.
For sure, AI will surprise us. But not all surprises – the good, the bad, the strange – are equally likely. There is no point in hoping for more than what we guide AI to do, and no point either in fearing the worst. AI will fit the niches we train it to fit, as imperfectly as we fit the niches left open for us in our own societies and ecological system.
All in all, I would say that the potential advantages of AI are immense and that the risks are contrived and improbable. There is some lingering fear that machines will end up being like us, for better or for worse, and that we will have to deal with them either with gunfire or by acknowledging them as equals. However, it stands to reason that we are the product of a long adaptation to our environment, and insofar as we control the environment in which machines learn and evolve, the most they will do is solve the problems we ask them to solve.
After all, to them, this is survival.
* Unless, of course, they “free themselves from their programming”, a common meme which is patently absurd – if a program does not behave as intended, it is because a programming mistake was made, and in that case you expect sub-optimal behavior or partial or total failure, not emancipation. Windows does not gain sentience – it shows a blue screen.
** Note that at worst, you might end up with one conscious shooting machine out of however many you make (you won’t, though – traits that bring no reward are ridiculously unlikely to occur unless they are simple enough to arise randomly with meaningful probability). It will not be systematic, however, and it is a good idea to train many machines independently, so that they do not all share the same flaws.