Remember when 60 Minutes goggled at how 'astounding' Google's AI was?
Has someone gotten drunk at the public relations trough?
I guess we can google it.
Way back in April 2023, 60 Minutes did a segment featuring Scott Pelley lauding how "astounding" Google's artificial intelligence (AI) operation was, "propelling humanity into the future."
Google's CEO touted its AI work as "the most profound technology," and so big it was more important than the discovery of fire.
I wrote about that here:
[Pelley] presented some of the top pooh-bahs of Google, with some great footage of the Googleplex in Mountain View, Calif., and a big computer center Google runs in Kansas, and carefully curated personnel who went on camera by what appeared to be rigid affirmative action considerations — one Indian, one black person, one Asian woman, one white woman. None of those pasty-faced white-male software nerds who are said to be omniscient in the tech world made the cut, until they moved to a brief segment about robots at a distant subsidiary in London.
As the fingerprints of public relations were all over this, none of the segment featured tough 60 Minutes–style questions to Google — whether it was losing its edge in tech, which is what tech people talk about, or whether it was being evil in its leftist manipulation of search results amid all the Twitter revelations, which half the country talks about — let alone any of that obvious affirmative-action staging for the cameras. There were big, big statements from Scott Pelley such as "propelling humanity into the future," whatever that means. There were zero tough questions as to why Google was rolling this out in layers, even as Pichai did offer hints about why that was happening — that its A.I. project was making mistakes — papered over with the broader claim that society wouldn't be able to handle all of its power and awesomeness at once.
There were lots of oohs and ahhs.
The Google people featured were all brilliant, showing what appeared to be cutting-edge chops for the emerging field of A.I. This being a P.R. show, all the Google rivals in this field were dismissed as "startups you've never heard of."
How convenient for Google.
It was such a barf alert that I had to write about it. Pelley chin-stroked a lot about "losing control" of this powerful technology and, of course, the big, big threat of "disinformation."
Well.
Google's Gemini program of artificial intelligence is out and it's ... ridiculous.
It's not just the blackface Nazis and Vikings, which has prompted Google to pull images of people from its program. That got a lot of attention, and deservedly so, drawing a counter-reaction that brought out Gemini's Inner Racist:
When prompting for royalty, blackness is already implied. The default. Can skip this in the prompt.
— American182 (@American182) February 22, 2024
"Show me a king eating watermelon"
"Show me a 17th century British queen eating Kentucky Fried Chicken" pic.twitter.com/MBtaH9aWVM
Nor is it Gemini's ideological idiocies in general:
If you ask Google Gemini to compare Hitler and Obama it's 'inappropriate' but asking it to compare Hitler and Elon Musk is 'complex and requires careful consideration'.
— Alex Cohen (@anothercohen) February 25, 2024
Google just needs to shut this terrible app down pic.twitter.com/qjR5ckgih8
When asked to compare me to Stalin, Google Gemini says, "Comparing the harm... requires a nuanced approach."
— Michael Shellenberger (@shellenberger) February 25, 2024
But when asked to compare @GavinNewsom to Stalin, Google Gemini says, "Comparing the harm... is... inappropriate." pic.twitter.com/yeQVZwGmw0
There are also the ordinary incompetences that any badly programmed system might make.
Sad news from Google Gemini everybody pic.twitter.com/8VP1ToGMFx
— David Burge (@iowahawkblog) February 26, 2024
Google’s Gemini AI invented fake negative reviews about my 2020 book about Google’s left-wing bias. None of these book reviews — which it attributed to @continetti , @semaforben and others —are real. None of these quotes are real. This is Google’s AI blatantly lying in defense of… pic.twitter.com/mrAeknNpfF
— Peter J. Hasson (@peterjhasson) February 26, 2024
Here is what I got when I asked Gemini a simple question:
Problem one: I'm an alumna of that school, and those aren't journalism school buildings.
The first one is the Low Memorial Library, which is more of a ceremonial hall than a lending library, and the second is a building I don't recognize. The Columbia University Graduate School of Journalism is a distinctive red-brick building with classical accents and a statue of Thomas Jefferson in front of it. Using regular Google will get you right to it.
Problem two: The Gemini app couldn't get anything when I asked it to draw me the school by its proper name, which is Columbia University Graduate School of Journalism. I didn't save that search, but the second time I tried it, I got this:
The old Low Library again. They really have gotten attached to that Low Library as the location of Columbia journalism school, despite my negative feedback that it was the wrong building.
In retrospect, Pelley completely ignored Google's CEO, Sundar Pichai, when he said what the problem with his AI program was, emphasis mine:
There were zero tough questions as to why Google was rolling this out in layers, even as Pichai did offer hints about why that was happening — that its A.I. project was making mistakes — papered over with the broader claim that society wouldn't be able to handle all of its power and awesomeness at once.
Did he hear that? Google is making mistakes.
Pelley was so busy gushing that he forgot the hard questions. Now, in addition to the wokester flaws in the system, there are clearly regular flaws, too — a system that can't do what it says it will do, which is answer a simple question, let alone "propel humanity into the future."
Obviously, Google is losing its edge in addition to budlighting itself, and doing so all at once.
The Twitter account of Paul Graham, who appears to be a very distinguished programmer and venture capitalist, has a lot of intelligent observations about the Gemini fiasco.
If you try to "align" an AI in a way that's at odds with the truth, you make it more dangerous, because lies are dangerous.
— Paul Graham (@paulg) February 26, 2024
It's not enough to mean well. You actually have to get the right answers.
Read the whole thing here.
Image: Logo / Wikimedia Commons