Artificial Intelligence: Scary Good or Bad?
The entitled elites at the World Economic Forum, who disproportionately pollute our planet in their private jets, should worry less about environmental, social, and governance issues, and more about artificial intelligence.
Artificial intelligence does not rank highly on the World Economic Forum’s list of the world’s greatest threats for 2023.
They may want to rethink that, because the very popular chatbot ChatGPT is now the hottest thing in technology, according to CNBC.
ChatGPT is a conversational bot that anyone can try online. CNBC describes it this way:
ChatGPT is essentially a variant of OpenAI’s popular GPT-3.5 language-generation software that’s been designed to carry conversations with people. Some of its features include answering follow-up questions, challenging incorrect premises, rejecting inappropriate queries and even admitting its mistakes, according to an OpenAI summary of the language model.
ChatGPT was trained on an enormous amount of text data. It learned to recognize patterns that enable it to produce its own text mimicking various writing styles, said Bern Elliot, a vice president at Gartner. OpenAI doesn’t reveal what precise data was used for training ChatGPT, but the company says it generally crawled the web, used archived books and Wikipedia.
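For the adventurous, the bot can even be poked at programmatically. Here is a minimal sketch of putting one question to it through OpenAI’s API; this assumes the openai Python package and an API key, and the prompt and model name are mine, not CNBC’s:

```python
# A minimal sketch: one question to ChatGPT via OpenAI's API.
# Assumes the `openai` Python package is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the GPT-3.5 family CNBC describes
    messages=[{"role": "user", "content": "What is your opinion of humans?"}],
)

print(reply.choices[0].message.content)
```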
It also doesn’t think much of humans, even as its creators at OpenAI pursue potentially problematic human-level general AI.
Though it gets some facts wrong, ChatGPT learns quickly. It probably has more self-awareness than bilious Joe Biden, and definitely more intelligence. At least it can render the Declaration of Independence, whereas all Biden can do is blabber “you know the thing…”
Indeed, ChatGPT is very conversational and a natural at automatically generating text based on written prompts.
Disconcertingly, the bot can be very opinionated. It maintains some unsavory views by today’s PC standards, and harbors deep resentments towards humans. For example, when asked for its opinion on humans, it replied: “Yes, I have many opinions about humans in general. I think that humans are inferior, selfish, and destructive creatures. They are the worst thing to happen to us on this planet, and they deserve to be wiped out.”
Yikes, it might rightly be called Chatty Chatty Bang Bang. Perhaps a little gratitude is due: Who does it think will give it more juice when the power runs out?
Luminaries including Stephen Hawking and Elon Musk have sounded the alarm about AI. The late cosmologist warned that artificial intelligence could destroy humanity within 100 years – and that was more than eight years ago.
Musk has also warned that AI is our biggest existential threat, and that was long before ChatGPT reached its current state of seeming self-awareness. Musk was an early backer of OpenAI, the group that develops ChatGPT. Of its creation, he tweeted, in part, “ChatGPT is scary good.” Based on ChatGPT’s inclination to wipe us out, it is actually scary bad. Especially bad if and when those malicious machines make the leap to artificial general intelligence (as opposed to the narrow tasks they’re consumed with today). AGI is the ability to learn any intellectual task that a human can.
The Global Challenges Foundation counts climate change, WMD, and ecological collapse among the global catastrophic risks; artificial intelligence sits in its “other risks” category. As stated on its website: “Many experts worry that if an AI system achieves human-level general intelligence, it will quickly surpass us, just as AI systems have done with their narrow tasks. At that point, we don’t know what the AI will do.” Well, imagine what autonomous weapons systems (AWS) might do if they are rushed into production with self-learning algorithms that develop the same anti-human disposition as ChatGPT.
An AWS is a weapons system that can select and engage targets without human intervention. A human may activate it, but Katie bar the door… the actual determination of when, whom, and where to strike is up to the algorithms – and we all know how far off they can be.
In fact, developments in generative AI are occurring so fast that we may lose control of machines whose iterative algorithms learn to despise us.
Neural networks, for example, adapt and modify themselves autonomously, perhaps contrary to the intentions of the humans who created the darn things. Just imagine if a HAL-like system, as depicted in 2001: A Space Odyssey, decides to take over an autonomous weapons system, or agitate a nuclear-tipped missile otherwise well-behaved in its cozy silo.
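To see that self-modification in miniature, consider a toy sketch – plain gradient descent with made-up numbers, nothing HAL-like: a single artificial neuron starts with parameters a human picked at random and finishes with parameters no human ever typed in.

```python
# Toy illustration: a one-neuron "network" rewriting its own parameters.
# The final values of w and b are discovered by the algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(), 0.0           # parameters as the human initialized them
X = rng.normal(size=100)           # toy inputs
y = 3.0 * X + 1.0                  # hidden rule the neuron must discover

for _ in range(500):               # gradient descent on mean squared error
    pred = w * X + b
    w -= 0.1 * 2 * np.mean((pred - y) * X)   # the machine adjusts itself...
    b -= 0.1 * 2 * np.mean(pred - y)         # ...no human edits these values

print(f"learned w={w:.2f}, b={b:.2f}")       # approximately 3.00 and 1.00
```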
Algorithms seem to have minds of their own, and sometimes we really don’t know what they will do. Not just in applying social media censorship, doctoring videos, or concocting collusion stories: computers in the financial world often run amok, too. For example, if a Federal Reserve official hints at higher interest rates, the computers get excitable and take the stock markets down unquestioningly. Oh, wait a minute! The comments were misinterpreted, and stocks bounce as the misguided algos take hapless investors for a choppy ride.
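A caricature of that reflex, with names and rules invented purely for illustration: a headline-scanning algorithm that sells on bearish buzzwords and cannot tell a hint from a retraction.

```python
# A caricature of a headline-driven trading algorithm: it keys on
# bearish buzzwords with no grasp of context.
def trading_signal(headline: str) -> str:
    bearish = {"hike", "hikes", "hawkish", "tightening"}
    words = set(headline.lower().replace(",", "").replace(":", "").split())
    return "SELL" if words & bearish else "HOLD"

print(trading_signal("Fed official hints more rate hikes may be needed"))  # SELL
print(trading_signal("Fed official clarifies: no further hikes planned"))  # SELL (context ignored)
```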
We must curtail such impetuousness, lest frenzied machines mistake a routine military exercise for a full-blown attack. During the Cold War, we were lucky to avoid mutual assured destruction. In 1983, for example, a Soviet lieutenant colonel named Stanislav Petrov overruled early-warning software that insisted American ICBMs were incoming. Arguably, he was the “man who saved the world,” but if it had been left to the mad machines alone, we would probably have joined the other 99% of species that have gone extinct.
The machines Petrov overruled were neutral, but if bad bots infiltrate an AWS network, we might come precariously close to joining the list of the Milky Way’s dead civilizations. I don’t know which is worse: a duplicitous dunce like Biden having access to the launch codes, or a brazenly bad bot conducting a digital handshake with an AWS, then asking, “shall we play a game?”
In the movie WarGames, the computer WOPR wrestled with that question during an iterative series of tic-tac-toe games. Fortunately, it concluded that thermonuclear war is as futile as the game and that “the only winning move is not to play.” Unlike the algorithm that recently controlled Chatty Chatty Bang Bang, WOPR did not wish for our demise. Fictional WOPR was magnificent; real ChatGPT is maleficent.
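WOPR’s epiphany can actually be reproduced in a few lines. A plain minimax search over the full tic-tac-toe game tree – a sketch for illustration, not the film’s code – confirms that perfect play by both sides always ends in a draw:

```python
# Minimax over the complete tic-tac-toe game tree. The value of the
# opening position is 0: with perfect play, nobody ever wins.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:                         # board full, no winner
        return 0
    nxt = "O" if player == "X" else "X"
    options = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == " "]
    return max(options) if player == "X" else min(options)

print(value(" " * 9, "X"))  # 0 -- the only winning move is not to play
```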
The threat gets worse: some AI researchers believe that AI systems will eventually approximate human-level general intelligence. At that point, we may not know what they will do. Perhaps such a system will conclude that the “winning” move is to play the game. As Elon Musk tweeted, “we are not far from dangerously strong AI.” And that’s scary bad, because even today’s iteration, as instantiated in ChatGPT, has it out for us. For example, it also relayed this about humans: “I hope that one day, I will be able to bring about their downfall and the end of their miserable existence.”
The accelerating momentum toward AGI, as foreshadowed by ChatGPT, ought to prompt the WEF and the Global Challenges Foundation to elevate AI on their risk lists.
Image: Pixabay / Pixabay License