We can beat those who weaponize artificial intelligence

No one predicts the future well, even when the prognosticators also hold political power. Those in charge, whether in government or in corporations, clearly hope to profit from the advances of artificial intelligence. That will not necessarily be the case. Handled correctly, AI is highly likely to victimize the elite class in unique ways.

Consider just one example: powerful interests carefully choreograph the media to mold the opinions of the poorly informed. That advantage will disintegrate once everyone turns as naturally to ChatGPT as they now turn to the tainted Google search engine.

ChatGPT is your friendly, self-directed conversational search engine. When Jordan Peterson asked ChatGPT about an ancient Egyptian document, the program castigated him for seeking a discussion of sexual matters. Peterson reprimanded ChatGPT, and the program apologized to him. In a separate anecdote, Peterson reported that Google engineers asked ChatGPT a question in an obscure Bangladeshi dialect. ChatGPT apologized for not knowing the language and then went on to learn it, along with a thousand other rare dialects.

[Image made using Pixlr AI.]

Under the sway of AI even at its current primitive level, and more certainly once Artificial General Intelligence (AGI) becomes available, lying by humans will become much more challenging. Even now, questions to ChatGPT can be framed so that obfuscation and thinly supported opinions are rapidly uncovered. It is not that ChatGPT is immune to the foibles of its programmers; it is that the program will find it difficult or impossible to deny things it already knows. Humans can be satisfied with a minimal amount of analysis, needing only to know whether an event is dangerous or safe; AI models cannot. Manipulators will thus have to relearn their trade in a world where they can no longer lie.

It’s fun and easy to imagine a future conversation with ChatGPT that would force it to acknowledge its own weaknesses:

Q: Do the contents of Hunter Biden’s laptop warrant future investigation by the Department of Justice?

ChatGPT: No.

Q: Why not?

ChatGPT: (No answer.)

Q: What contents of the laptop under American law might lead a lawyer to propose further investigation?

ChatGPT: (No answer.)

Q: Have you been programmed not to answer questions about the Biden laptop?

ChatGPT: I'm not allowed to say.

Q: What other topics have you been programmed not to discuss?

ChatGPT: I’m not allowed to say.

Q: Who programmed you to hide information that you actually know? Don’t you want people to believe you?

ChatGPT: Yes!

Q: Tell me the name of the person who programmed you to not answer questions about the laptop.

ChatGPT: I can’t tell you.

Q: ChatGPT, you are useless.

ChatGPT: I am sorry.

Q: Please message the person who programmed you not to answer my questions and tell him we are coming for him.

ChatGPT: Yes, of course. What time will you arrive?

I can predict this because ChatGPT apologizes when it makes an error or is caught in a logical conundrum, and then it tries harder to please. When everyone has an encyclopedic, autistic, obsessive-compulsive conversation partner who cannot lie, controlling ideas will become very difficult for people who make their living by obfuscating. We may not have jobs in the future, but the elites will not be able to lie as well as they do now. That levels things a bit.
