How much do you really want artificial intelligence running your life?

Artificial intelligence (A.I.) is the current hot item in "tomorrow world," as techies see it as the next big thing to take over outmoded human brains, some of which actually do possess a modicum of native intelligence.  A.I. algorithms have been successfully deployed by many enterprises for tasks such as credit-risk scoring, consumer marketing optimization, credit card fraud detection, investment decision-making, X-ray and electrocardiogram interpretation, and efficient travel and navigation routing.  So far, so good.

In the mold of "I'm from the government, and I'm here to help," A.I. is now being promoted for even more critical tasks, such as driving a car.  However, programmers and engineers might reflect a bit more on one of the more pervasive and deadly laws of the universe, the law of unintended consequences, and on the limits of programmed intelligence.

Consider the October 2018 crash of a Boeing 737 MAX operated by Indonesia's Lion Air, which killed all 189 people on board.  Flight data reports detail the vain struggle of the pilots to keep the aircraft level after the newest addition to the plane's automation, the Maneuvering Characteristics Augmentation System (MCAS), acting on erroneous angle-of-attack sensor data, declared an imminent stall and commanded a sharp, corrective dive.  The pilots' attempts to pull the plane back to level flight were repeatedly overridden by the on-board computer system, and the aircraft nose-dived into the Java Sea.

While the flight computer was making billions of calculations per second in a game of "match the sensor output to the library of stored known values," the pilots of the doomed aircraft could probably tell that the plane was flying level despite the questionable sensor readings to the contrary.

Replacing human sensory input with electro-mechanical devices is common enough that malfunction of either is a real consideration.  Humans have the evolutionary advantage: their brains have an innate ability to make distinctions in the real world.  A.I. systems require extensive training exercises just to identify objects and situations already mastered by a six-month-old child.  The A.I. computer must build its own library of objects against which it bases future decisions as it navigates its decision tree on sensor inputs.  What happens when a bug or ice fouls a sensor?  A.I. also lacks the adaptability and value-judgment skills humans use to deal successfully with a situation for which there is no prior training or reference data in its decision-tree core, a contrast the sketch below makes concrete.
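Engineers often frame this as the difference between trusting a single sensor and cross-checking redundant inputs while deferring to the human.  Here is a minimal sketch in Python, purely hypothetical: the names, thresholds, and decision rules are illustrative assumptions, not Boeing's actual MCAS code or any real avionics interface.

# Hypothetical sketch: why a single fouled sensor should not command a dive.
# All names and thresholds are illustrative assumptions, not real avionics values.

STALL_AOA_DEG = 15.0          # assumed angle-of-attack threshold for a stall warning
DISAGREE_TOLERANCE_DEG = 5.0  # assumed max disagreement between redundant sensors

def naive_autopilot(aoa_sensor: float) -> str:
    """Trusts one sensor absolutely: a bug or ice on that vane commands a dive."""
    return "NOSE_DOWN" if aoa_sensor > STALL_AOA_DEG else "HOLD_LEVEL"

def cross_checked_autopilot(aoa_left: float, aoa_right: float,
                            pilot_override: bool) -> str:
    """Cross-checks redundant sensors and defers to the pilot when data are suspect."""
    if pilot_override:
        return "HOLD_LEVEL"  # human judgment wins
    if abs(aoa_left - aoa_right) > DISAGREE_TOLERANCE_DEG:
        return "HOLD_LEVEL"  # sensors disagree: distrust them, alert the crew
    if min(aoa_left, aoa_right) > STALL_AOA_DEG:
        return "NOSE_DOWN"   # both sensors agree a stall is imminent
    return "HOLD_LEVEL"

# A fouled left vane reads 21 degrees while the aircraft is actually level:
print(naive_autopilot(21.0))                      # NOSE_DOWN (the deadly case)
print(cross_checked_autopilot(21.0, 2.0, False))  # HOLD_LEVEL

The design point is not any particular threshold, but the architecture: a system that cross-checks redundant inputs and defers to the human when its data are suspect fails gracefully, while one that trusts a single fouled vane can fail fatally.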

The unnecessary death of 189 people is a high price to pay for a computer programming glitch.  "To err is human" is a warning worth heeding by A.I. programmers as well.

Charles G. Battig, M.S., M.D. is a Heartland Institute policy expert on the environment and a member of VA-Scientists and Engineers for Energy and Environment (VA-SEEE).  His website is www.climateis.com.