Indeed, what is it?
Is there an objective measure we can use to determine whether an entity is intelligent or not, or to compare the intelligence of several entities? I am not talking about IQ tests, which are neither a complete nor a reliable assessment of capability, but am trying to engage in a deeper reflection about the nature of the thing.
To put things in a clear context, what would it mean for a machine to be “intelligent” (not “human”, mind you)? Keeping in mind that we are a kind of machine, I would say that this also defines intelligence for us. Note my use of the word “defines” – I consider that this exercise is essentially one of definition, of trying to pin down the formal concept that best corresponds to the intuitions we have about intelligence. As such I am not trying to find an objective truth to intelligence (whatever that means) as much as I am trying to make the concept richer and more useful.
Intelligence and functionalism
Functionalism (as I mean it in this section, anyway) is the belief that mental states and properties are defined by their behavior. To determine whether a machine is intelligent or not under that model, one would simply relate its inputs to its outputs. To each input would correspond a set of “intelligent” outputs, and we could check in what proportion the machine responds “intelligently” given various inputs.
Here we could interpret an input as being a problem statement, such as “Is there a largest prime number?”, an output as being an answer, and an intelligent output as being the right answer. We could also interpret an input as a situation, an output as an action, and an intelligent output as one that sustains the existence of the entity.
Regardless, there exists a hypothetical machine which, under a functionalist interpretation, would be maximally intelligent. The problem is that it seems counter-intuitive to call this machine intelligent. Here, let us use the situation/action/survival context, but there is a version of that machine for pretty much any context. The machine in question is extremely simple: given a situation, it will simulate every single possible course of action, and it will choose the action that gives it the best survival expectancy. This is the “brute force” approach to problem solving: try all possible answers until you find the right one. This will almost always work, with the caveat that it will take virtually forever: the number of possibilities is exponential in the length of the answer.
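The brute-force strategy described above can be sketched in a few lines. This is only an illustration: the function name `brute_force_solve` and the toy “hidden bitstring” problem are my own invented examples, but they show why the approach works in principle yet scales exponentially.

```python
from itertools import product

def brute_force_solve(is_correct, answer_length):
    """Try every possible binary answer of the given length until one checks out.

    This is the 'try all possible answers' strategy: it almost always works,
    but the candidate space has 2**answer_length members, so the running time
    is exponential in the length of the answer.
    """
    for candidate in product([0, 1], repeat=answer_length):
        if is_correct(candidate):
            return candidate
    return None  # no answer of that length exists

# Toy problem: recover a hidden 4-bit answer by exhaustive search.
secret = (1, 0, 1, 1)
print(brute_force_solve(lambda c: c == secret, 4))  # (1, 0, 1, 1)
```

At length 4 this checks at most 16 candidates; at length 100 it would need up to 2^100, which is the “virtually forever” caveat.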
Under a functionalist interpretation, such a machine certainly would be intelligent, since it would almost always yield the best answers. Nonetheless, it seems to me that this example shows that intelligence ought to be defined differently: while bubble sort might sort just as well as quicksort, I would consider the latter to be smarter, because it has better complexity.
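To make the bubble sort/quicksort comparison concrete, here is a minimal sketch of both. Functionally they are indistinguishable (same inputs, same outputs), which is exactly the point: only their complexity, O(n²) versus O(n log n) on average, separates them.

```python
import random

def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    """O(n log n) on average: partition around a pivot, recurse on each side."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

# Same behavior, different complexity.
data = random.sample(range(1000), 100)
assert bubble_sort(data) == quicksort(data) == sorted(data)
```

A purely functionalist measure would rate these two algorithms equally; the definition developed below would not.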
Intelligence and speed
In order to define intelligence, I would add one important ingredient, which is speed. That is, I would say that if X and Y can solve the exact same problems, but that Y can solve them faster than X, then Y is more intelligent than X. Computational complexity sort of fits in that framework, since better complexity will lead to better speed for large enough problems – however, the bottom line (speed) is what matters.
The definition of intelligence then becomes the following: the intelligence of an entity is proportional to the number of problems it can solve, and inversely proportional to the time it takes to solve them. There is, however, no objective way to weigh these attributes. This portrays intelligence as a partial order (which I believe is definitely the way to go), where the intelligence of two beings cannot be compared if each of them can solve problems that the other cannot, or if each can solve some problems faster than the other.
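The partial order sketched above can be written out explicitly. Everything here is an illustrative assumption: entities are encoded as dictionaries mapping the problems they can solve to their solve times, and the name `more_intelligent` is mine, not the article's.

```python
def more_intelligent(a, b):
    """True if entity a dominates entity b under the partial order in the text:
    a solves every problem b solves, at least as fast, and is strictly better
    on at least one axis (coverage or speed).

    a and b are dicts mapping problem names to solve times (an illustrative
    encoding, not part of the original article).
    """
    if not set(b) <= set(a):
        return False  # b solves a problem a cannot: a does not dominate
    at_least_as_fast = all(a[p] <= b[p] for p in b)
    strictly_better = len(a) > len(b) or any(a[p] < b[p] for p in b)
    return at_least_as_fast and strictly_better

x = {"sorting": 5.0, "primality": 2.0}
y = {"sorting": 1.0, "primality": 2.0, "chess": 10.0}  # broader and faster than x
z = {"sorting": 0.5}  # faster at sorting, but solves fewer problems than x

print(more_intelligent(y, x))  # True: y dominates x
print(more_intelligent(z, x))  # False
print(more_intelligent(x, z))  # False: x and z are incomparable
```

Note that `x` and `z` are incomparable in both directions, which is the situation the text describes: each beats the other on one axis, so neither is “more intelligent” under this order.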
Nonetheless, both coverage and speed are important factors for intelligence, and this explains why brute force is not an “intelligent” problem solving method: it is much too slow to be practical. Intelligence is not as much the ability to solve problems as it is the ability to solve them within a reasonable time frame.
In the case of humans, it is often the case that people can do feats of mind that others simply cannot, and I believe that this gives us a more binary view of intelligence, where it is all about what you can or cannot do, or the ideas you have or don’t have. Speed also matters, but at a coarser scale, in the sense that greater efficiency is often seen as resulting from better training or higher effort, unless the gap seems too large to be bridged. You will feel that someone is smarter than you if that person can do mental feats that you believe are beyond your abilities, but less so if you feel that you can do whatever that person can do, regardless of how fast, as long as the difference is not tremendous. Note that there is an implicit understanding that people who think faster than others can do things that others cannot, so fast thinkers will indeed be viewed as smarter, if only for that reason. Nonetheless, I believe there is a pitfall in our conception of intelligence, which is that solving problems is not in itself difficult, given enough time – solving them quickly enough is what truly matters.
Note that this can be generalized further: we could view intelligence as the ability to solve problems given a limited amount of resources. Resources may be time, memory, cost, etc. Solving problems with extremely large or infinite resources is rather easy, but doing so quickly and cheaply is much more difficult. This draws a parallel with optimization and algorithm theory, where one tries to make the most of limited computational resources.
I would like to close this article with a theological implication: an omnipotent, omniscient being does not need to be intelligent. Indeed, one would imagine that such an entity has infinite resources – that with some caveats, it can try all possible universes until it finds one that it likes, and/or one that works well. In a sense, the concept of “intelligent design” is made absurd by the supposition that the creator is infinitely powerful. Since intelligence is the ability to solve problems in a limited time span and/or with a limited amount of resources, intelligence is an extraneous attribute to give to an all-powerful entity. All it needs is enough sense to figure out how to solve problems using brute force, but there is nothing particularly clever about that.