One Order of Smart, to Go, Please
According to a Yahoo news item, Google is now working on a super-fast "quantum" computer chip that may one day result in machines that think like humans.
Good luck with that.
Science journalists love to write about Google's projects, from software innovations to web-linked eyeglasses to self-piloting automobiles, and now to quantum computer chips promising an artificial version of human intelligence.
Similarly, science fiction writers love to write tales of artificially intelligent machines that become self-aware, making them as smart as, or smarter than, their human counterparts. Think Commander Data from Star Trek: TNG, or the evil machines from the Matrix films.
Inspiring stuff. However, what such stories tend to inspire is rampant silliness cloaked in the trappings of science, leading some people to make truly stupid comments, such as this one I heard only the other day: “Machines are growing smarter and smarter every day.”
Actually, no.
A machine cannot grow smarter until it is first smart, and machines aren't – not even machines that use AI. The error lies in assuming that AI is a form of intelligence – a lesser form than the human variety, perhaps, but intelligence nonetheless – when the operative word is artificial. AI is a simulacrum of intelligence, not a variation of it to any degree. For this reason, a silicon crewman such as Commander Data is simply not scientifically possible. The assertion that human intelligence will someday develop an AI as intelligent as, or even more intelligent than, its makers is like claiming one can stand in a bushel basket and lift oneself ten feet into the air. Physics prevents the one as much as it prevents the other.
(Full disclosure: I am not the only one who thinks this way. I encourage everyone to read Dr. Amit Goswami's fascinating treatise on monistic idealism, The Self-Aware Universe.)
But how does physics stand in the way of the Google project's success? Consider: Albert Einstein showed that the only constant in relativistic physics is the speed of light. All else is relative, even time itself. Einstein argued that all influences among material objects in space-time must be local. By local, he meant that influences must travel through space one bit at a time at a finite velocity, which, at its maximum, is the speed of light. He called this notion the principle of locality. Nonlocal events – that is, instantaneous influences of one material object upon another – do not occur, because they would have to exceed the speed of light.
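One standard way to put this speed limit in symbols – assuming only the usual textbook notation, with Δx the spatial separation between two events, Δt the time between them, and c the speed of light:

\[
v_{\text{influence}} = \frac{\Delta x}{\Delta t} \le c
\quad\Longrightarrow\quad
\Delta x \le c\,\Delta t .
\]

When two events are separated such that \(\Delta x > c\,\Delta t\), no signal moving at or below light speed can connect them, and relativity says neither event can influence the other.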
Closely tied to locality is the concept of correlation. Two material objects are said to be correlated if an action upon one results in an action upon the other. According to relativity, the influence of one object upon the other must pass through the intervening four-dimensional space-time, and it can do so only at a finite speed. Nonlocality is thus a big no-no.
Nobel Prize-winning physicist Richard Feynman understood nonlocality to mean that a classical computer (and, by this essay's extension, a silicon brain) would never be as intelligent as a human being. AI processes information according to pre-established algorithms embedded within its software. Algorithmic thinking is therefore static: it is the same in every application, like the order of operations in mathematics, and it cannot vary or randomize itself, else it does not function at all. Thus, a silicon brain can think only what its programmers tell it to think.
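To see what "static" means in practice, consider a minimal Python sketch (the rule table and function name are purely illustrative, not any real AI system). An algorithm is a fixed mapping from inputs to outputs: run it a thousand times on the same input and you get the same answer a thousand times.

    # A toy "silicon brain": pre-established rules applied to input.
    # The rules are fixed before the program ever runs; nothing here
    # can invent a response the programmer did not supply.
    RULES = {
        "hello": "greeting",
        "2 + 2": "4",
    }

    def algorithmic_think(prompt: str) -> str:
        """Return whatever the pre-established rules dictate -- nothing more."""
        return RULES.get(prompt, "ERROR: no rule for this input")

    # Same input, same output, every time.
    assert algorithmic_think("2 + 2") == algorithmic_think("2 + 2")
    print(algorithmic_think("compose a sonnet"))  # ERROR: no rule for this input

The box, in other words, has walls: the program can only walk the rule table it was given.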
Yet the human brain demonstrates nonlocality. As Australian physicist L. Bass and American physicist Fred Alan Wolf established, for intelligence to operate, the firing of one neuron in the human brain must be accompanied by the firing of many correlated neurons at macroscopic distances (as much as four inches). For this to happen, nonlocal correlations must exist at the molecular level, along the synapses of the human brain. The instantaneous firing of correlated neurons outpaces anything a silicon brain could manage, even if its impulses traveled a mere four inches at the speed of light. Instant is always faster than the merely fast.
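The four-inch figure makes for a quick back-of-the-envelope comparison. Four inches is roughly 0.102 m, so even a signal moving at light speed needs a finite, nonzero time to cross that distance:

\[
t = \frac{d}{c} \approx \frac{0.102\ \text{m}}{3.0\times 10^{8}\ \text{m/s}} \approx 3.4\times 10^{-10}\ \text{s} \approx 0.34\ \text{ns},
\]

whereas an instantaneous, nonlocal correlation takes, by definition, no time at all.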
If the action is instantaneous, it is nonlocal. Nonlocal information processing is therefore nonalgorithmic. For this reason, a human brain creates, whereas a silicon brain only produces.
Indeed, it is creativity that distinguishes the human brain from one made of silicon. The human brain can think "outside the box"; the silicon brain is closed within the box and cannot exceed its confines.
I wish Google much success with its project. However, I fear that all it will prove is that a computerized brain is a poor substitute for a Google engineer. A tough break for Commander Data, perhaps, but we can take comfort in the fact that he won't know what he's missing.
Terry L. Mirll is a science fiction writer based in Oklahoma. In 2013, his short story “Astrafugia” took first place in Writer’s Digest’s Popular Fiction Awards. His story “Some Assembly Required” will be presented as a podcast and published online with Cast of Wonders.