AI and the Elites

A few weeks ago, Joe Allen responded to my "Artificial Intelligence: The Facts" by suggesting that the real danger of AIs is that the best, most efficient models will be operated by the Deep State and its technofascist allies, rendering opposition difficult if not futile: “…AI won't be a 'digital defense' against 'the WEF' and 'tech giants' when the latter have the most powerful systems.” This is a perceptive comment, towering well above the “SkyNet is comin’ with his cyborgs” nonsense that this discussion usually attracts.

Vernor Vinge, an outsized influence on infotech culture, once speculated that a truly efficient totalitarian system wouldn’t consist of jackbooted secret police and continuous surveillance but in fact would scarcely be evident. Things would just happen to enemies of the state, accidents that nobody could predict or explain. For instance, an individual presenting a problem for the powers would order something for dinner that arrives chock full of botulinum toxin, injected at the lab that transforms insect proteins into edible food. As the toxin hits, his phone, internet, and monitoring equipment all go down. After the job is done, they all come back up, their logs edited to show nothing at all unusual. When the body is discovered, there’s no evidence of anything out of the ordinary. Death by misadventure. What ya gonna do?

And so it would go on, day after day, year after year, the bonds growing ever tighter, and nobody even noticing.

You’d need AI to handle things on this level of efficiency. In fact, you’d need quite a few, all optimized to operate a particular system: surveillance, analysis, and what might be called “assassination by Siri.”

It’s quite a plausible nightmare. But how feasible is it? For that, we need to return to a basic principle of computer science: GIGO: Garbage In, Garbage Out.

It’s not difficult to see how the garbage would get into an AI system in the first place. Despite what the public has been taught by media and film, AIs require human interaction to learn anything at all:

AI learning is accomplished through “supervised learning,” in which mere humans set the parameters and goals, oversee the process, and examine and judge the results. Until now this human interaction has proven strictly necessary -- “unsupervised learning,” when it has been attempted, usually goes off the rails pretty quickly.

(Note here that we’re talking about “App AI” such as ChatGPT or the art AIs, which are limited to a single area of activity. General Intelligence AI, the HAL 9000s and Skynets of the movies, doesn’t exist and likely never will.)
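To make the point concrete, here is a minimal sketch of what “supervised learning” means in practice. This is my own toy illustration (a bare-bones perceptron, not the internals of ChatGPT or any real system): the model learns nothing except what its human-supplied labels teach it. Change the labels, and you change what it “knows” -- which is exactly how garbage gets in.

```python
# Toy illustration of supervised learning: a human supplies labeled
# examples, and the model can only learn what those labels teach it.
# (A minimal perceptron -- not any production AI system.)

def train(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs; the labels are chosen by a human."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred  # the human-set label drives every weight update
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(model, x1, x2):
    w1, w2, b = model
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The "supervision": a human decides which inputs count as class 1.
# Here the labels encode logical AND -- but whatever the labeler
# believes, right or wrong, is what the model will reproduce.
labeled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train(labeled)
print(predict(model, 1, 1), predict(model, 0, 1))  # → 1 0
```

The model here has no opinion of its own; it converges on whatever pattern the labeler embedded in the training data. GIGO at its most literal.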

Clearly, any AI is going to directly reflect the attitudes and ideas of its creator(s), and if that creator is a woke cancel-culture nitwit – or an entire clown car full of them – then you’re going to wind up with a pretty lame Artificial Intelligence. A system based on conspiracy theories, wish-fulfillment daydreams, and delusions, developed and programmed using whatever intellectual fad happens to be dominant at the moment. The cybernetic equivalent of a B-list Hollywood actor.

The elites fail because they have no principles, no knowledge, and no understanding. How can we be sure of this? Because elites have always acted this way. It comes with the territory.

The possible examples are myriad, but we’ll look closely at the 1960s, the first decade in which the liberals held dominance enough to attempt to remake society to match their vision. The result was the welfare state, urban renewal, race riots, the crime explosion… and the Vietnam war.

It’s rare in history that you can find a single individual responsible for a given disaster. But in this case, that individual was Secretary of Defense Robert Strange McNamara, along with his “whiz kids.” They’d examined the concept of war closely, with their Ivy League learning, their vast perspicacity and insight, their familiarity with all the relevant social sciences literature, and they were going to outdo every past military thinker -- Sun Tzu, Vegetius, and Clausewitz included.

Well, it didn’t work out that way. It didn’t work out that way because none of them had any idea what they were doing. Vietnam was an attempt to prove that wars could be won through proper application of management theory. In truth, it was an exercise in micromanagement by McNamara himself and whoever else he could convince, up to and including President Lyndon B. Johnson. (The two of them actually spent time crouching on the floor of the Oval Office, selecting bombing targets.) The result was disaster for the Vietnamese, humiliation for the U.S., and a promotion for McNamara to head the World Bank, where his actions were even more asinine.

If McNamara was a whining neurotic, then Assistant Secretary John T. McNaughton was a full-blown schizo. McNaughton’s idée fixe was a belief that war was an exercise in semiotics, that is, a play of signs and symbolic actions. McNaughton was convinced that if the proper semiotics were employed, then both sides could signal their intentions and no fighting at all need take place. Following this schema, a squadron of USAF B-57 Canberras, an effective light bomber, was sent to Bien Hoa Air Base in 1964 and ordered to fly at very low altitude over areas known to be under Viet Cong control, with McNaughton confident that the Viet Cong would be so impressed by this display of U.S. power that they’d simply give in. The VC’s semiotic response was not long in coming. It consisted of a massive heavy mortar barrage of Bien Hoa that destroyed many of the planes on the ground.

Similarly, when SAM missile sites were spotted under construction in North Vietnam, U.S. pilots were ordered to fly low and slow over the sites. This, McNaughton reasoned, would inform the North Vietnamese and their allies that we knew of the sites but had chosen not to bomb them, which would encourage them to respond semiotically by not firing the missiles when ready.

The response came in July 1965, when a USAF F-4 Phantom was shot down by the first completed SAM site. Hundreds of U.S. planes were later downed in this fashion. (McNaughton himself died in a plane crash in 1967, just weeks before he was slated to become Secretary of the Navy.)

We need not look closer at urban renewal, MAD, or the rest. They were all just as stupid and just as disastrous. It’s very likely (unless there’s a historical episode I’m forgetting) that the U.S. in the 1960s was the worst performance by a national elite since the Polish nobles agreed to allow the European powers to partition their own country in the 1790s. At least the U.S. was still around when the 60s ended.

The elite mindset has not changed one iota in the past 60 years. It is haunting – and not a little creepy – how closely the attitudes of the 2020s mirror those of the 60s. The same arrogance, the same cockiness, the same ignorance of history, human nature, and simple common sense, the same contempt for the masses.

In the 60s, they had their mainframes. Today’s elites have – or will have – their AIs. After that, they’ll have control of it all. It’s just a matter of time. The Great Reset, their Agendas, the Smart Cities – nothing can stop it. Or so they think.

Using enhanced intelligence to do stupid things is a contradiction in terms. Our elites will learn that with a vengeance. These people know nothing of the 60s, or the Polish nobles, or the Ancien Régime, or any other incompetent ruling class back to the Roman Senate. They will make the same mistakes, amplified by infotech, and suffer the same fate. Our job is to help them along.

So what about the opposition?

The first argument here is that AI tech would simply be too expensive for individuals or small groups. That’s the case at the moment. Neural network chips, the type used in AI systems, start at about $10,000 apiece and go up from there. But it’s not going to stay that way. Moore’s Law (which is really the experience curve applied to chips) tells us that chip prices will drop as soon as the technology becomes widespread, and drop precipitously. Back in the 1950s the purchase price for a mainframe computer was so high that only governments and large corporations could afford them. Today your phone has more computing power than all the mainframes of that decade combined.
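The arithmetic of that price collapse is worth spelling out. The figures below are illustrative only -- a hypothetical $10,000 chip and an assumed two-year halving period in the Moore’s Law style, not a forecast for any actual product:

```python
# Illustrative experience-curve arithmetic: what steady price halving
# does to a hypothetical $10,000 chip. Assumed numbers, not a forecast.

def projected_price(start_price, years, halving_period_years=2.0):
    """Price after `years`, assuming it halves every `halving_period_years`."""
    return start_price * 0.5 ** (years / halving_period_years)

for yr in (0, 4, 10, 20):
    print(yr, round(projected_price(10_000, yr), 2))
# → 0 10000.0
# → 4 2500.0
# → 10 312.5
# → 20 9.77
```

Ten halvings -- twenty years at this assumed pace -- cuts the price by a factor of a thousand, which is roughly the mainframe-to-smartphone story compressed into one formula.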

Soon enough, working AI will be available to well-to-do individuals, small organizations, and hackers. And what happens then?

Let’s consider the surveillance aspects. How about a system trained and dedicated to track, collate, and analyze all known Biden financial transactions, including associated or probable transactions, ranked in order of probability? How about tracing all available communications and actions of the DoJ, collating them and utilizing them to provide a framework to predict potential future actions? How about going the whole route and creating on the cheap a private AI duplicate of that installation out in the Mormon state, the one with the same name as the well-known Hudson Valley arts center, and track all known government communications?

A cheap AI boom could lead to a revitalization of the hacker ethos (which badly needs it). The customary hacker anti-authoritarian attitude has been conspicuous by its absence in recent years. Apart from hackers, it would be very interesting to see what James O’Keefe and Glenn Greenwald would do with such systems.

To take a step further, consider AIs as a means of strategy, planning, and organization if worst comes to worst and an actual attempt is made to set up an authoritarian state in God’s country. (It’s interesting to note that one of the earliest convincing fictional AIs was that of Robert Heinlein’s The Moon is a Harsh Mistress, which involved that exact premise.)

In this context, a system of private AIs would comprise a distributed network, a series of independent units each operating with the same goals but not as part of a traditional command structure. Historically, distributed networks are in the long run generally successful against top-down hierarchies, like the one that the technofascists are now trying to build. (The Viet Cong and the PAVN formed exactly such a network, something that McNamara and his whiz kids would never have been able to understand.)

AI is nothing to be afraid of. At worst, it will simply tempt the globalist elites into even further overreach and eventual downfall. Nor will they have a monopoly on it – independent app AI systems will exist. And at best… well, we need to start thinking about that right now.

Image: PxFuel
