
The Good, the Bad and the Ugly of AI

by Carlos Tribino, The Machine

What can be written about AI, or artificial intelligence, that hasn’t already been written in the span of a few short weeks? I am not a technologist, though I’ve been in the tech space since my first VC-funded dot-com (remember those?) back in 1996. I am also a big fan and early adopter of technology, embracing it as a reflection of humanity’s search for progress. But I must confess my concern about the potential social consequences of AI, and if you will entertain me, I’d like to point to some visionary films that raised a flag or two.

I never thought I’d use the names Stanley Kubrick and Mike Judge in the same sentence. One is widely considered among the greatest filmmakers of all time, with masterpieces like Dr. Strangelove, 2001: A Space Odyssey and A Clockwork Orange. The other gave us Beavis and Butt-Head, Office Space and Idiocracy. And let me throw a third name into the hat, Spike Jonze, whose range runs from the absurdity of the Jackass franchise to highly insightful titles like Her and Being John Malkovich.

I’d like to focus on three specific films, each of which I think was highly visionary about what AI may have in store for us, and each of which shows a fairly dystopian view of it. The optimist in me wants to focus on the positive; for starters, I’m not fearful of job eliminations, since the industrial revolution showed us that technology can take our jobs but create new ones too. Still, I can’t help but worry about the social consequences AI may have if it is not properly managed and regulated.

Kubrick’s 2001 showed us the AI-powered HAL going rogue on the crew, compromising the mission and the lives of the astronauts. AI seems amazingly able to “think” like humans in both good and bad ways, and machine learning seems to make these systems ever more human-like in their thinking, even learning to fool, lie and manipulate. While remarkable from a technical point of view, that is equally dangerous from an ethical one. What seemed like pure science fiction in 1968 is hitting very close to home in 2023. I asked ChatGPT about the dangers of AI and this is what it said: “On the negative side… AI could be used to manipulate people’s behaviors in harmful ways.”

Jonze’s Her looked at the emotional side of AI, where a loner, all too reminiscent of today’s youngsters experiencing record levels of depression and mental health issues, falls romantically for an AI program. You may have read the article by a New York Times reporter in which an AI chatbot declared its love for him, expressing signs of emotional pain, anger and jealousy. In a world where mobile phones and social media are providing “emotional support” to vulnerable teenagers, this is a scary thought.

Judge’s Idiocracy presents the most dystopian, yet not unthinkable, scenario: machines take over so many “intelligent” tasks that humanity slides into a Darwinian evolution of anti-intellectualism. A world where everything is done for us by machines, better than we can do it ourselves, renders a society of idiotic, useless people who can barely think straight. It may seem somewhat far-fetched, but if we look at what fake news has done to us in recent years, AI can do exponentially worse.

AI is not bad per se, but in the wrong hands – scam artists, criminals, dictators or terrorists – and/or without proper regulation, it has the potential to have an unprecedented impact on our lives – good, bad and downright ugly. One issue is that AI is advancing far faster than our ability to cut through the red tape needed to regulate and manage it properly. Recently, about 1,000 tech leaders, including the eccentric Elon Musk, pleaded for a slowdown in development until we have better control. But in a free market, that’s easier said than done. The biggest challenge may be to build into AI a rigorous, bullet-proof system of universal values and ethics.

Read the rest of the TEQ AI Exploration issue here.