Turning to the Movies to Guide Us Through AI

Posted on Apr 19, 2023 12:00 PM

Alarm bells have sounded recently over the rise of artificial intelligence.

In particular, ChatGPT’s ability to mimic human speech and writing has caused academics and politicians alike to call for caution, if not an outright ban.

But what, exactly, is the problem? Let’s pull back for some perspective: Technology can be defined by its ability to improve the speed, scale, or human element of a process. Simply put, it allows us to do something faster, over a wider area, or with less need for people. However, what we have learned, often the hard way, is that the value lies in the “something.” You need a correct process. Otherwise, all you are doing is automating the same mistakes, faster, with more impact, and with fewer people around to fix them.

To understand what makes AI special, let’s consider a simple example—your TV. If you can’t think of a movie to watch, you could input an actor (Tom Cruise), era (1980s), and genre (comedy). Lo and behold, simple searching and matching might deliver Cruise’s breakout movie, Risky Business. What AI can do, however, is come at the problem from a different angle. It can eliminate that incredibly taxing exercise of pressing your thumb on a button, and instead deliver an answer without any input. That’s because AI extrapolates from data (even its own mistakes) rather than simply matching data.
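For readers who like to see the contrast spelled out, here is a deliberately toy sketch in Python. Everything in it (the movie data, the search function, the viewing history, and the scoring rule) is invented for illustration: the point is only that traditional technology matches the inputs you give it, while an AI-style recommender infers a suggestion from past behavior with no input at all.

```python
# Toy illustration only: data, functions, and scoring rule are hypothetical.

movies = [
    {"title": "Risky Business", "actor": "Tom Cruise", "decade": 1980, "genre": "comedy"},
    {"title": "Top Gun",        "actor": "Tom Cruise", "decade": 1980, "genre": "action"},
    {"title": "Rain Man",       "actor": "Tom Cruise", "decade": 1980, "genre": "drama"},
]

# Traditional search: match the user's explicit inputs against stored data.
def search(actor, decade, genre):
    return [m["title"] for m in movies
            if m["actor"] == actor and m["decade"] == decade and m["genre"] == genre]

print(search("Tom Cruise", 1980, "comedy"))  # ['Risky Business']

# AI-style recommendation: no inputs at all; extrapolate from viewing history.
# (A real system would learn from far richer data; this stand-in just scores
# each movie by how much it overlaps with what was watched before.)
history = [{"actor": "Tom Cruise", "genre": "comedy"}]

def recommend():
    def score(m):
        return sum((m["actor"] == h["actor"]) + (m["genre"] == h["genre"])
                   for h in history)
    return max(movies, key=score)["title"]

print(recommend())  # 'Risky Business', with no thumb on any button
```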

However, you still need that well-designed process. With traditional technology, the work is all done (or at least it should be done) before you flip the switch. With AI, the process is ongoing. It takes a lot of data to train AI, a lot of effort to supervise it, and then, most importantly, real live people to intervene when it blunders.

In a lot of ways, AI is like raising teenagers. They try to figure out everything on their own, but when they stumble, they still need you. Like teenagers, AI eventually learns and does better, maybe even better than you could ever do. However, just like in that Tom Cruise classic, the adults need to use judgment. Putting AI in charge of critical business functions is like Joel’s parents taking off for a week and leaving him the keys to the Porsche. Trouble will follow, and it won’t all be comedy.

In the world of cybersecurity, we’ve yet to reach agreement on how to measure AI’s risk. In a normal scenario, you can predict the probability and impact of a particular threat, such as a cyberattack or a natural disaster. However, AI has a Catch-22. AI’s primary strength is dealing with the unknown. However, if the result is uncertain, how do you quantify it? To assess the risk accurately would require computing every possible result that AI could provide. However, if you are going to do that work, then why have AI to begin with? It’s like asking your teenager to clean out the garage. Do you really trust that in their rush to get the job done, they aren’t going to toss out your golf clubs or that box of Van Halen LPs? So to guard against that risk, you end up doing at least as much work as if you had done it yourself.
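To make the Catch-22 concrete, here is the standard expected-loss arithmetic that works for a conventional threat. The figures are invented for illustration; the takeaway is that the formula needs a probability and an impact you can actually estimate, which is exactly what AI’s open-ended behavior denies you.

```python
# Classic risk quantification: risk = probability x impact.
# All numbers below are hypothetical, for illustration only.

ransomware_probability = 0.05    # estimated 5% chance per year
ransomware_impact = 2_000_000    # estimated dollar loss if it happens

expected_annual_loss = ransomware_probability * ransomware_impact
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $100,000

# With AI, the same arithmetic breaks down: the set of possible outputs,
# and therefore of possible blunders, is open-ended. There is no single
# probability or impact figure to plug in without first enumerating every
# result the AI could produce, which defeats the purpose of using it.
```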

AI is great at supporting human-led functions, pointing us in good directions, or handling low-risk exercises such as picking a movie. However, the economics of measuring and mitigating risk can make widespread adoption of AI, especially for critical functions, cost-prohibitive. Still, that doesn’t mean it won’t happen. If the evolution of the TV remote has proven anything, it’s that we have a hard time getting up off the couch.

As artificial intelligence rises, we at least need to match it, putting more work into our human intelligence in order to manage it. Enter another data point, however: researchers at Northwestern University recently published a paper indicating that average American IQ scores have declined since 2006. While it’s just one study with a number of variables to consider, it is somewhat alarming to think that as Internet access and technology have become more prevalent in our lives, our critical thinking and judgment may have declined.

While Hollywood has often embraced the conflict between society and technology with movies such as WarGames, The Terminator, and The Matrix, we should not overlook Idiocracy, a low-budget satire that came out the same year that Northwestern’s study began. The Rip Van Winkle plot has an “average Joe” waking up centuries into the future, where he is surrounded by a society of, well, idiots. As much as it is a good laugh, the movie’s predictions have, over the years, struck perilously close to home.

As the plot around AI continues to thicken, navigating the risks and rewards of this tool will come down to applying the timeless rules of logic, analysis, and critical thinking. Otherwise, the promise of AI may lie in proving, in both comic and tragic ways, that we simply aren’t as smart as we think we are.

Topics: Technology, Artificial Intelligence, AI
