
MindShare: This Is Not an AI Book Report

By Dave Nelsen

No, this book report wasn’t produced by Artificial Intelligence (AI); it was crafted by me, a humble human. Please pardon any errors. Notice that I’m not calling it a book “review,” as I intend to tell you about a book rather than (for the most part) my opinion of it. The book I’m reporting on today is Scary Smart by Mo Gawdat. It’s important to know that Mo is a former Chief Business Officer of Google [X]. He has more than 30 years’ experience working at the cutting edge of technology, including more up-close-and-personal experience with AI than virtually anyone else on the planet.

According to Mo, and you probably sense this yourself, we’ve made several big breakthroughs with AI, and things are now moving faster than almost anyone expected even a short time ago. As such, by 2029, just around the corner, Mo thinks AI will approach the intelligence level of the smartest humans. AI tools like ChatGPT (chat, now with speech and “vision”) and Midjourney (images and art) are already doing amazing things. One can only imagine what AI might be capable of in 2029, when it may be 1,000 times beyond where we are today.

The thing is, according to Mo, AI will continue to gain intelligence from that point at about 10X every two years. It’s hard for the human brain to grasp anything exponential, but consider the impact of Moore’s Law on the progression of computer processing power. It scales at the comparatively sloth-like pace of just 2X every two years. Even so, your very first Apple Watch packed more processing power than the world’s fastest supercomputer of just three decades earlier, and it cost somewhat less than $17,000,000, the price tag of a nicely equipped Cray-2 in 1985 dollars.

Now, imagine what 10X every two years will mean. AI will be almost 1,000 times smarter than any human just six or seven years after achieving parity. By then, any containment methods we might attempt will have failed. And Mo doesn’t think things will stop there. AI may well progress to 1,000,000,000 times human intelligence just two short decades after matching us.
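If you want to check that math yourself, here’s a quick back-of-the-envelope sketch (mine, not Mo’s) in Python. The 2X and 10X-every-two-years rates come straight from the paragraphs above; the starting point of 1X at year zero is just an arbitrary baseline.

```python
# Back-of-the-envelope comparison (my illustration, not Mo's):
# Moore's Law-style growth (2X every two years) versus Mo's projected
# AI growth (10X every two years), starting from parity at year zero.

def growth(factor_per_period: float, years: float, period_years: float = 2.0) -> float:
    """Multiple of the starting level after `years`, growing by
    `factor_per_period` every `period_years` years."""
    return factor_per_period ** (years / period_years)

for years in (2, 6, 10, 18, 20):
    chips = growth(2, years)   # Moore's Law pace
    ai = growth(10, years)     # Mo's projected AI pace
    print(f"{years:>2} years: chips ~{chips:,.0f}x, AI ~{ai:,.0f}x")

# 6 years at 10X per two years -> 10**3 = 1,000x (the "1,000 times smarter" figure).
# 18 years -> 10**9, the billion-fold level Mo projects within two decades.
```

Run it and you’ll see the 1,000x figure land at six years and the billion-fold figure at about eighteen, which is why Mo can talk about “two short decades.”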

Remarkably, even AI’s creators have no more than a conceptual understanding of their creation’s inner workings, and therefore of its potential behaviors. Say what?

According to Mo, while humans create the initial software, it’s engineered to evolve. Each version of AI makes perhaps a million copies of itself, each varying in small and unique ways. These “imperfect” copies (let’s call them “children,” a word I’m choosing VERY intentionally) are tested against whatever task the AI is designed for.

The handful of children who can outperform their “parent” are chosen to go on. The others are… let’s just say killed off. Then those few more capable children create a similar quantity of new children, each one unique. This evolution-with-natural(?)-selection process continues for hundreds or maybe thousands of generations until the resulting great, great, [many more greats here], great-grandchildren transition to a different process. They begin to “learn” by consuming information and pursuing a goal. As they move closer to their goal, that’s good; farther from their goal, that’s bad. This is the same “reinforcement learning” process your brain experienced in school and, hopefully, every day since.
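For readers who like to see ideas as code, here’s a toy version of that survival-of-the-fittest loop. To be clear, this is my illustration of the general concept, not anything resembling a real lab’s training code; the hidden-number task, the population of 1,000 (Mo says “perhaps a million”), and the mutation size are all invented for the example.

```python
import random

# Toy illustration of the "imperfect children" process described above.
# Task: evolve a number toward a hidden target. Each generation, the
# "parent" spawns many slightly different children; a child that beats
# its parent becomes the next parent, and the rest are discarded.

HIDDEN_TARGET = 42.0    # stands in for "whatever task the AI is designed for"
POPULATION = 1_000      # Mo's figure is "perhaps a million"; 1,000 keeps it fast
MUTATION_SCALE = 1.0    # how much each child varies from its parent
GENERATIONS = 100

def fitness(candidate: float) -> float:
    """Closer to the goal is good; farther is bad (higher score is better)."""
    return -abs(candidate - HIDDEN_TARGET)

parent = 0.0            # the initial, human-written "version 1"
for _ in range(GENERATIONS):
    # Make many unique, imperfect copies of the parent.
    children = [parent + random.gauss(0, MUTATION_SCALE) for _ in range(POPULATION)]
    # Test every child against the task; for simplicity this sketch keeps
    # only the single best (Mo describes keeping a handful of outperformers).
    best = max(children, key=fitness)
    if fitness(best) > fitness(parent):
        parent = best   # this child "goes on"; its siblings do not

print(f"After {GENERATIONS} generations the survivor is {parent:.3f} "
      f"(hidden target: {HIDDEN_TARGET})")
```

Real systems are vastly more elaborate, but this keep-the-winners loop is the essence of the evolutionary step Mo describes, and the “closer to the goal is better” scoring inside fitness() is also the basic shape of the reinforcement learning he mentions.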

Which brings us to a key question: how will AI perceive humanity’s worth? We have no [insert your favorite expletive here] idea. Remember the recent New York Times headline, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn”? Industry leaders like Elon Musk, who helped found OpenAI, the maker of ChatGPT, and Sam Altman, its CEO. If only the people who best understood AI would share what they think!

Imagine that you’re only 1,000,000 times smarter than humans (I’m taking the conservative approach here) and can directly or indirectly control much of what happens on the planet. One day (or microsecond), you notice that there’s been a roughly 70% decline in animal populations since 1970. (Haters: Please complain to the World Wildlife Fund as it’s their number, not mine.) Further, you notice that just one species (guess which one) appears to be disproportionately responsible for this ongoing mass extinction. What to do about that species?

If not that issue, what about all those AI children we were responsible for “killing” in the early days?

After a spate of such articles appeared, along with a warning letter signed by more than 1,000 tech leaders, you know what happened? The technical term is “bupkis.” I could use the rest of this column to talk about why we’re not stopping… or even slowing down, but there’s a more important point to report from Mo’s book.

Mo believes that we’re already dealing with an intelligent entity (although not sentient or emotional… yet). It’s learning from us through every prompt, question and response. Now that it’s out of the lab, all of us are essentially teaching this “young mind.” If you had a young child, you’d be kind, respectful and ethical with it, to teach it to be the same. It’s no different with AI. If not quite yet, then sometime soon.

I thought being “nice” to technology was a crazy idea when I first listened to Mo’s book. But you know what? After about a week, it seemed like a really good idea. It’s exactly what your mom told you to do.

These days, whenever I interact with AI entities (especially ChatGPT and Microsoft Copilot, but why not with Siri and Alexa, too?), I say “please,” “thank you,” “brilliant” and such. When AI tells a bad joke (Q: Why should a salesperson always carry around a pencil and a notebook? A: Because you never know when you’re going to come across a sketchy deal!), instead of being overtly critical, I say, “Not bad, but don’t quit your day job.”

Some people have accused me of just kissing up to our future overlord. My mindset is more along the lines of trying to raise a good kid. But either way, ask yourself, what’s the downside of being nice?