Artificial Intelligence (AI): A non-intelligent intelligence?


(Read it here in Spanish)

Every time someone gets a computer or a robot to play a game or solve a new task or problem, a lot of people rush to remind us that the machine's activity is not genuine intelligence (that is, supposedly like ours): it would be mere computation, carried out thanks to the human capacity to program something that never stops being a piece of silicon, sheet metal and wires. This is known as the "AI effect", and it is widespread. No matter the feat achieved by the machine: if it defeats the world chess champion, it is taken as mere computation (remarkable, yes, but nothing to do with real intelligence). We refuse to accept that there is genuine intelligence as long as we can understand how the machine works to do something or answer a problem. Let alone attributing consciousness to a supercomputer, or assuming that it could suffer from mental illnesses (which would be the case if it had a mind).

Interestingly, something similar happens when judging the behaviour of animals. Every time we find in a species some cognitive feat that had until then been overlooked, many people ignorant of science attribute it to pure instinct: of course animals cannot be intelligent!… And they cannot be conscious either! An uneducated humanist (one who can recite Góngora but believes that bulls do not suffer in the bullring, and cannot conceive that 0.11 is less than 0.2) saying "animal culture" is about as likely as a bullfighter exclaiming "cognitive dissonance!" in the middle of a corrida in Las Ventas. And yet, whether Fernando Savater or any other illustrious academic likes it or not, culture also exists in animals.

The thesis of strong Artificial Intelligence (AI), defended by philosophers and cognitive scientists such as the American Daniel Dennett, holds that if a computer performs intelligent actions (interacting with the environment, reasoning, solving problems, planning, learning, communicating, etc.), it may be considered intelligent for all purposes. The difference between machine and human intelligence would be one of degree only. Our computers are still less intelligent than our brains, but that does not preclude that one day, perhaps not far away, they could catch up with us and, along the way, become conscious.

The philosopher John Searle devised a curious thought experiment, called the "Chinese room", to refute strong AI. Imagine that he is shut in a room without the slightest idea of the Chinese language, but has a set of rules in English (dictionaries, grammar books, etc.) that allow him to put one formal set of symbols in correspondence with another (in this case, Chinese characters) and respond correctly in Chinese to questions asked in that language from outside (for example, by exchanging sheets of paper under the door or through a slot) by Chinese speakers. In this way, the latter become wrongly convinced that the Chinese room (in fact, the hidden Searle) understands the language. By analogy, Searle concludes that a program cannot give a computer understanding or consciousness, regardless of how intelligently it makes the computer behave.
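The setup is easy to caricature in code. The sketch below (a toy illustration of my own, with an invented rule table) answers a few fixed Chinese questions by pure lookup, which is exactly the kind of blind symbol shuffling Searle has in mind: the program produces correct Chinese output while "understanding" nothing.

```python
# A toy "Chinese room": input symbols are mapped to output symbols by rule
# lookup alone. The rule table is invented purely for illustration.
RULES = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Return a syntactically correct reply without any understanding:
    nothing here but pattern matching against a rule book."""
    return RULES.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # a fluent answer, yet nothing was "understood"
```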

As Dennett puts it in his book Consciousness Explained, Searle's argument that the Chinese room is not conscious at all misses a key factor: complexity. A computer program can be extremely flexible, with multiple levels, and possess extensive knowledge of the world (which would include, according to Dennett, "metaknowledge and (…) metaknowledge on its own answers, on possible responses from its partner, on its own motivations, on the 'motivations' of its partner and much, much more"). If a sufficiently complex computation can capture and represent the same organization or abstract structure as an information-processing system (for example, the mind emanating from a brain), if their networks of connections or "causal topologies" are similar, why should they not share their psychological properties, as the Australian philosopher David Chalmers suggests? Why should consciousness necessarily have an organic substrate, and not one based on silicon or any other material? If consciousness is a product of the interaction of a trillion neurons, then according to Chalmers there would be nothing absurd in the idea that the interaction of a trillion silicon chips could do the same.
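The idea of a shared "causal topology" can be made concrete with a deliberately simple sketch (my own toy example, not Chalmers'): the same abstract state machine realized by two entirely different "substrates", whose behaviour is indistinguishable from the outside.

```python
# Two realizations of the same abstract organization (a 2-state flip-flop).
# If psychological properties depend only on causal topology, the substrate
# is irrelevant: both objects below share the same transition structure.

TRANSITIONS = {("A", 0): "A", ("A", 1): "B", ("B", 0): "B", ("B", 1): "A"}

class DictMachine:
    """Realization 1: states flipped by dictionary lookup."""
    def __init__(self):
        self.state = "A"
    def step(self, signal: int):
        self.state = TRANSITIONS[(self.state, signal)]

class ArithmeticMachine:
    """Realization 2: the same topology encoded as arithmetic on a bit."""
    def __init__(self):
        self.bit = 0  # 0 stands for "A", 1 for "B"
    def step(self, signal: int):
        self.bit ^= signal
    @property
    def state(self):
        return "AB"[self.bit]

m1, m2 = DictMachine(), ArithmeticMachine()
for s in [1, 0, 1, 1]:
    m1.step(s); m2.step(s)
    assert m1.state == m2.state  # identical behaviour, different substrate
```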

And why should consciousness, unlike intelligence, be all or nothing? Chalmers does not rule out the possibility that any information-processing object (i.e., one that reduces uncertainty, in Claude Shannon's sense) could be conscious in its own way, starting with a modest thermostat and its binary behaviour: on-off, zero-one, yes-no… Counterintuitive as it may seem, panpsychism actually makes the phenomenon of consciousness less mysterious, and it is perfectly compatible with a materialistic view of the world.
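Shannon's sense of "reducing uncertainty" can be quantified. The sketch below (a minimal illustration added here, not part of the original argument) computes how much uncertainty, in bits, a binary thermostat reading removes: one bit at most, the humblest possible "experience" in Chalmers' speculation.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the uncertainty removed by observing the outcome."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A thermostat with two equally likely states (on/off) carries exactly 1 bit.
print(entropy([0.5, 0.5]))   # 1.0 bit
# A biased one (on 90% of the time) resolves less uncertainty.
print(entropy([0.9, 0.1]))   # ~0.469 bits
```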
