25 Jul 2023

Review: Mission: Impossible 7 and the dangers of AI

Last Monday, HJ and I went to see Mission: Impossible – Dead Reckoning Part One. Yes, that is its complete, unwieldy title; from now on I will be referring to it as MI7 (not to be confused with Military Intelligence Section 7, Britain’s former military propaganda agency). It was a wild ride, and we both enjoyed it quite a bit. It wasn’t perfect, of course. For example, some of the characters’ monologues made for great trailer soundbites but did not sound at all like real people having real conversations. The plot also got very complicated at times, as the paths of competing parties intersected and tangled up into one crazy, hectic mess, so much so that I occasionally wondered if I even understood what was going on. In the greater scheme of things, though, these are relatively minor quibbles. The film kept me on the edge of my seat, even when I knew what was going to happen (like the heavily publicized stunt where Tom Cruise rides a motorcycle off a cliff). It’s more than just the action, of course; I’ve grown to love the returning characters over the course of many films, and I found myself liking and caring about the new characters as well. I’m already looking forward to Part Two.


That’s the “review” part of this review. If all I wanted to do was put up a discussion of the film’s merits and give a thumbs up or thumbs down, this would have been up a week ago. Why did it take me so long to write this? Well, you may have noticed that the scroll bar on the side of your browser is quite small, meaning that this is a very long entry—8800+ words, to be a little more precise. This raises another question: Why did I write nearly nine thousand words about a Mission: Impossible film? Well, if you’ve read any of my film reviews here on Liminality, you’ll know that I rarely review a film unless I want to comment in depth on something that the film made me think of. While I will be discussing elements of the plot in order to illustrate my points, the rest of this entry is going to focus on a topic that has been discussed quite a bit lately. But if you started reading this because you want my imprimatur on the film, you have it. MI7 lived up to my expectations and did everything I was hoping it would do—namely, take me on a white-knuckle, globe-trotting, intrigue-filled ride.

Before we get going, though, I suppose I should state the obvious: While I will be avoiding major plot spoilers, I will be discussing the plot, so if you’re the type of person who likes to go into a film fresh and innocent, you should probably stop reading here. Otherwise, let’s dive in.

It should come as no surprise that Ethan Hunt and his team are once again dealing with an existential threat to humanity, and this threat is very much in keeping with current events. The film introduces an AI that was originally designed to disrupt and destroy digital systems. Being a deep-learning AI, though, it continued to learn until it eventually “achieved sentience,” at which point it did what all sentient AIs seem to do: “go rogue.” It began to put into action a plan to reshape reality by infecting and corrupting all of humanity’s digital information systems. All the while, it learned more and more about humanity in general and about certain human beings specifically, meaning that it would be able to predict what those human actors were going to do. The AI is given the ominous moniker of “The Entity” by its (former) human handlers.

We get all of this information in an exposition-heavy scene where the situation is being explained to the US Director of National Intelligence. For all the details we are given, though, there are a lot of gaps and assumptions that I found a little frustrating. I was able to identify a number of fallacies regarding AI upon which the premise of the film rests. I understand that this is an action film and that its main purpose is to be fun and exciting (which it is!), but media portrayals of AI do matter. After all, what most people know about AI is limited to what they see in media. So I’m going to spend the rest of today’s entry talking about three fallacies or misunderstandings about AI that the film seems to perpetuate. These are:

  1. The Underpants Gnome Theory of AI sentience
  2. Excessive hype about AI’s “predictive capabilities”
  3. The fallacy of the “rogue AI”

So, strap in because we’re going for a ride.

The Underpants Gnome Theory of AI Sentience

If you are a fan of the animated television show South Park, you may already have an idea of what I’m talking about here. If you have not seen the SP episode in question, allow me to explain. No, there is too much. Let me sum up. This episode reveals that there exists a race of tiny beings called the Underpants Gnomes, which go around the world stealing—you guessed it—underpants. Why do they do this? Well, when the boys meet them, they ask the gnomes this very question. You can see the rest for yourself, although fair warning: This clip contains profanity and portrays the bloody death of a child (which is honestly pretty standard for South Park).

The way that AI is portrayed as “becoming sentient” in media representations generally follows this exact logic: 1) AI “learns” at an accelerated pace – 2) ? – 3) AI “becomes sentient.” Maybe it’s not presented in quite so obvious a fashion, but “phase two” is always either glossed over with some hand-wavy explanation or elided completely. There seems to be a popular conception that if an AI were to accumulate enough knowledge, that knowledge would eventually reach the critical mass necessary to somehow produce sentience. Where did this idea come from? While it’s tempting to blame all this on lazy writers, I think it’s a bit more complicated than that. Alan Turing, in his seminal 1950 paper “Computing Machinery and Intelligence,” used the metaphor of an atomic pile that, when it reaches critical mass, sustains a nuclear chain reaction. He posited that there is a similar mechanism for the human mind—with ideas taking the place of neutrons and the mind taking the place of the atomic pile—in which most ideas “will on average give rise to less than one idea in reply,” but a small proportion of “super-critical” ideas “may give rise to a whole ‘theory’ consisting of secondary, tertiary and more remote ideas” (454). He went on to wonder if a machine could also be made super-critical.

It is important to point out that Turing never intended the metaphor to apply to consciousness (more on that in a bit). But as is often the case with apparently simple metaphors, it ended up being misunderstood and misapplied. The problem is that an atomic pile is, very roughly speaking, little more than a collection of neutrons. (It’s obviously more complicated than that—it also requires a neutron moderator like graphite—but let’s roll with this simplified explanation for the purposes of illustration.) That is, the neutrons being added to the pile from the outside have the same nature as the contents of the pile itself. If we were to translate this metaphor directly to ideas and the mind, we would essentially be saying that the mind is nothing more than a collection of ideas. This is obviously not the case—and there is no critical mass of ideas that will turn that collection of information into a mind. I feel a little silly writing this because it seems so obvious, but this is what so many sci-fi media representations are asking us to believe. Look, I get it. I’m willing to suspend my disbelief for the sake of a good story just as much as the next guy, but at a certain point you have to call out the nonsense and demand a little more accountability and creativity. Especially when that nonsense acts—whether intentionally or not—in the service of people who might not have your best interests at heart. I am, of course, talking about the techno-utopianists and AI boosters.

Here’s a quote from Sam Altman, CEO of OpenAI, to set the mood.

There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

The last part of that quote is a bit wordy, but it is basically referring to human-level intelligence, commonly known as AGI (artificial general intelligence). Given what we’ve seen from ChatGPT recently, it’s pretty scary to think that the CEO of OpenAI is predicting we will see AGI in a “visible future,” isn’t it? Well, you can rest easy for now... because I just lied to you. Sam Altman never said that. I didn’t make it up, though; that sentence was written by Herbert A. Simon and Allen Newell... in 1958 (see page 8 of that article). That’s right: AI boosters were predicting the advent of AGI sixty-five years ago. They were also claiming that machines were already “thinking” and “creating.” Incidentally, Newell and Simon have been gone for decades now (Newell died in 1992 and Simon died in 2001) and we are still nowhere near AGI, so keep that in mind the next time someone tells you that we could see human-level artificial intelligence in a very short time frame. I’m not saying it’s absolutely not going to happen, of course, but AI boosters seem to predict the advent of AGI at roughly the same frequency as doomsday prophets predict the second coming of Christ.

Before we go any further, though, I think it might help to clarify some of the terms I’ve been throwing around. We’ve been bandying about words like “intelligence” and “learning,” but these terms, when applied to AI, may not mean what people usually mean by them—and that may be partly responsible for a lot of the misunderstandings that arise.

We’ll start with “intelligence.” I’m not going to go too deep into the weeds on this one; I just want to point out the difference between “human intelligence” and “machine intelligence.” Dictionary.com defines “intelligence” as the “capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.” This is of course referring to human intelligence, but if we take this as our measuring stick, the term “artificial intelligence” can still only be seen as aspirational at best. In practice, “intelligence” as it is applied to machines generally refers to the ability to solve problems. This is a rather broad and basic definition; by this standard, a calculator could be said to have at least some level of intelligence.

Of course, AI researchers have since the early years set their sights on much more difficult problems than 1 + 1 = 2 (or even 34957 + 70764 = 105721). One of the earliest holy grails of AI research was building an engine that could defeat a human master at chess, a game long thought to be a good test of intelligence. For the longest time, this goal remained out of reach, but when IBM’s Deep Blue computer defeated world champion Garry Kasparov in 1997, it seemed as if AI had conquered an important summit. Not to pick on Simon and Newell again, but they predicted that a computer would be the world chess champion “within ten years.” Not only did they greatly underestimate the amount of time it would take to develop an engine that could beat the world champion, they also failed to foresee the relationship that would develop between humans and machines. These days, it is common knowledge that even the strongest human players are no match for standard chess engines. Yet this has not discouraged humans from playing the game—with the advent of online chess, more people play the game now than ever before, and chess tournaments between grandmasters have become reasonably popular online spectator events. Chess engines are used as tools to help players prepare for and analyze games (and, yes, some unscrupulous players use them to cheat as well—but people cheated at chess long before chess engines were a thing). But no engine has ever become world champion because that is a human endeavor. In fairness to Simon and Newell, they did add to their prediction the caveat “unless the rules bar it from competition,” which is technically true, but the prediction itself betrays their complete lack of understanding of why people play chess in the first place. For AI researchers, chess may be a problem to solve, but for people it is a game to play and enjoy. Computers being better at it hasn’t changed that basic fact. (One final note on chess: There is actually a world championship just for engines called the Top Chess Engine Championship. It is safe to say that the engines themselves derive no enjoyment from competing... but humans sure do like watching them play.)

In the end, human intelligence and machine intelligence are—at the moment, at least—two different concepts. The fact that we use the same word to talk about humans and machines, though, no doubt leads to an overestimation of AI capabilities. Kasparov’s defeat at Deep Blue’s hands actually offers an interesting example of this. In the first of six games that they played, Deep Blue avoided playing a move that was obviously best and played an inferior move instead. This shook Kasparov, who expected the machine to act with cold precision—the move was too... human. Had the machine transcended its circuitry and achieved a spark of creativity? Alas, the truth is much more pedestrian than that. It was, in fact, a bug in the code that prevented the engine from playing the best move. But the damage had already been done. Kasparov had lost his confidence and played too timidly from then on, failing to capitalize when he had the chance. Rather than trusting his own judgment as the world chess champion, he simply assumed that the machine knew better. Could he have beaten Deep Blue had he not overestimated the computer? Perhaps. All we know is that he sure didn’t help his cause any.

In addition to talking about the “intelligence” of a machine, you’ll also often hear it said that computers are “trained” or even “learn” for themselves. This is what happened with the AI in MI7: It was “trained” to attack digital systems, but then it started to “learn” on its own until it “became sentient.” You may have also heard more specific terms like “deep learning” and “neural networks,” although I don’t think these were used in the film. My original idea here was to explain how neural networks are trained, but every time I set out to write a concise explanation of the process (and then touch on newer methods like “one-shot learning” and “few-shot learning”) I found that things quickly spiraled out of control. The internet is a wonderful place, though, and there are far smarter people out there with far more experience who have already put together very lucid (if not concise) explanations. So if you’re really interested in the ins and outs of how exactly the whole “learning” process takes place, just do a search for that (you can start with “how neural networks learn” and go from there). The important point for our purposes is that, while the feedback process may seem very similar to how human beings learn, teaching a conscious being and teaching a machine are two very different things. I happen to teach conscious beings for a living, and let me tell you: Sometimes I wish they were as cooperative and compliant as machines. But they also constantly surprise me with their creativity and unexpected viewpoints. AI systems can surprise us, too—I think a lot of people were surprised by ChatGPT, for example—but the system is still operating within the parameters set by its engineers. It is not going to go off on its own and learn new things that are not part of its training.
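To give just the barest flavor of what that feedback process amounts to, here is a deliberately tiny sketch in Python, using made-up data and a single linear model rather than a neural network (let alone anything like the systems in the film). “Learning” here means nothing more than nudging two numbers to shrink an error score, entirely within a setup a human wrote.

    # A toy sketch of machine "learning": nudge two numbers (w and b) to
    # reduce an error score on invented training data. Nothing here has
    # intentions, and nothing happens outside the loop a human defined.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)               # hypothetical inputs
    y = 3 * x + 2 + rng.normal(0, 0.1, size=100)   # targets: roughly y = 3x + 2

    w, b = 0.0, 0.0        # the model's entire "knowledge" is these two numbers
    learning_rate = 0.1

    for step in range(500):
        predictions = w * x + b            # make predictions
        error = predictions - y            # compare them to the targets
        grad_w = 2 * np.mean(error * x)    # direction to change w to reduce error
        grad_b = 2 * np.mean(error)        # direction to change b to reduce error
        w -= learning_rate * grad_w        # nudge the parameters
        b -= learning_rate * grad_b

    print(f"learned w = {w:.2f}, b = {b:.2f}")   # ends up close to 3 and 2

Scaled up to billions of parameters and far cleverer update rules, that is what “training” means. The surprises come from scale, not from the machine deciding to go off and study something new.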

Finally, I suppose we should also mention the concepts of “sentience” and “consciousness.” So far I have been using them interchangeably, but there is in fact a difference between them. “Sentience” is the capacity to sense or feel, while “consciousness” is the ability to have subjective experiences and to be aware of your own existence. Sentience requires consciousness, but a being could technically be conscious without being sentient—a brain in a jar unattached to any sensory organs, for example. Considering this difference, it is probably more accurate to refer to an AI as “conscious” rather than “sentient,” but in common usage people rarely make the distinction. The difference between the two may be important for philosophical or ethical discussions, but it is not central to our arguments here—especially since we’re talking about a fantasy.

Before we move on from our discussion of AI and sentience, though, I would be remiss if I didn’t mention the fact that sentience is actually irrelevant to AGI research. As I mentioned above, Turing never discussed whether machines could achieve consciousness. He thought that discussions of consciousness led to the slippery slope of solipsism and distracted from more important, practical questions (he thought that the question “Can machines think?” was a mere distraction as well, which is why he invented the imitation game—later to become known as the “Turing Test”—to take its place). While there are of course some exceptions, most AI researchers today agree with him. Many question whether we would even want consciousness in our AI. After all, AI is meant to be a tool. Would you want your hammer to have a mind of its own?

But if consciousness is just a distraction from the real issues of AI research, why do media representations always have AI “becoming sentient” and then “going rogue”? That is, AI never goes rogue without first becoming sentient, and it never achieves sentience and then decides to take a holiday in Venice. No doubt some answers may already be suggesting themselves to you, but these are related to the last fallacy I want to talk about, so we will come back to them then. By way of wrapping up this section, though, I will say that, while the idea of “sentient AI” might not be a big part of practical AI research, it still exists in a feedback loop with AI boosterism. The idea of an AI suddenly becoming sentient without any action taken by humans is a reflection of the hype that tries to convince us that rapid development in AI technology is inevitable—in other words, that it is part of the natural order of things and is going to happen whether we like it or not. This is a lie! AI doesn’t develop on its own, just like it doesn’t simply “become sentient.” We develop AI, which means we make the choices about how it should develop—or at least we should be making them and not delegating the responsibility to the techno-utopianists.

Excessive hype about AI’s “predictive capabilities”

The previous section was mostly about laying the groundwork for my larger arguments; as such, it might have been a little technical and abstract. This section, though, will bring in some actual scenes from the film to illustrate my points and thus hopefully be a little more interesting (and maybe even help prop up the facade that I’m writing a film review here).

If you are reading this right now, I can say with a fair amount of confidence that you are already familiar with “predictive AI.” When you start typing search terms into Google, or swiping words into your phone to send a text message, the “autocomplete” function that offers to finish your phrases and sentences is AI-powered predictive text. If you’ve used ChatGPT, that’s basically the same thing, albeit on a much, much larger scale.
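If you want a concrete (if laughably simplified) picture of what “predicting the next word” looks like, here is a toy sketch in Python that just counts which word most often follows each word in a tiny invented corpus and guesses accordingly. Real systems are incomparably more sophisticated, but the underlying task is the same: guess a likely next token.

    # A toy next-word "predictor": count which word most often follows
    # each word in a tiny invented corpus, then autocomplete by picking
    # the most frequent follower.
    from collections import Counter, defaultdict

    corpus = (
        "i am going to the store . "
        "i am going to be late . "
        "i am happy to see you . "
    ).split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        if word not in followers:
            return "?"
        return followers[word].most_common(1)[0][0]

    print(predict_next("am"))     # -> "going"
    print(predict_next("going"))  # -> "to"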

Popular culture, though, has taken this idea of AI having predictive capabilities and run absolutely wild with it. MI7 tells us that, because the Entity knows everything about the characters, it can predict exactly what they are going to do. If we follow this reasoning to its logical conclusion, we’re going to end up talking about whether human beings have free will or whether the universe is strongly deterministic. This is yet another one of those distractions, though, as it will lead us down a deep philosophical rabbit hole. Fortunately, we don’t need to go down this rabbit hole in order to discuss the hype about AI’s predictive capabilities. Let us assume, for the sake of argument, the best-case scenario for the film’s premise: that the universe is strongly deterministic and that you could thus theoretically predict the future—if you had enough information. And that’s the problem, really. Even if the future was theoretically predictable, the amount of information you would need to predict any reasonably complex outcome with complete certainty would be mind-boggling. It’s one thing for ChatGPT to successfully predict what the next word in a given sentence should be; it’s quite another thing for an AI to predict a complex real-world outcome.

Let me give one example from the film. At one point in the mission, Benji’s computer flags a suspicious bag entering an airport luggage system. The alert flashes on his screen for only a moment before disappearing, and Benji dismisses it as a false alarm. Not too long after, though, he realizes that the bag might actually be important, so Luther tracks the bag through the system and guides Benji over comms to find it in the labyrinthine network of conveyor belts, chutes, and ladders in the guts of the airport. When he finally gets his hands on the bag, he finds a nuclear device inside. Picking up the device starts a timer and displays a message: “You are done.” At least, that’s how Luther hears it over comms. In fact, it is “You are Dunn.” Dunn is Benji’s last name—the AI knew that he was going to be the one to find the bomb. And now Benji has only a matter of minutes to disarm the bomb and save everyone in the airport. This isn’t one of those bombs that he has to disarm by cutting wires, though—it’s a “puzzle bomb.” Actually, “riddle bomb” would be more accurate. Essentially, he has to answer a bunch of questions, some of them riddles (like “What is always approaching but never arrives?”) and some of them personal questions (like “What is most important to you?”). The team realize that answering the questions will help the AI learn more about Benji specifically, but what choice do they have?

This scene is fairly early in the film, so I don’t think it’s a major spoiler to tell you that Benji successfully disarms the bomb by answering the questions. And I guess we’re supposed to think that the AI got one over on the team. But did it, really? I mean, what did it learn about Benji that it couldn’t have already figured out? That the most important thing in the world to him is his friends? I’m not going to lie—with the tension of the scene and the way Simon Pegg delivers his line, it’s a pretty heartfelt moment. But it seems like an awful lot of trouble for an AI to go through just to learn that Benji can answer riddles (or get his friends to answer them for him) and values his friends. And I suppose in the heat of the moment we’re supposed to overlook how unlikely it was that Benji would be the person to find the bomb. Yes, the AI manipulated the situation to lead Benji to the bomb, but a lot of things had to go right for him to actually find it.

Let’s say that I buy this, though. I suppose it’s not too far-fetched a premise. And seeing as how the AI didn’t really gain anything of much value from the whole faff, the price of failure was quite low. Maybe it was merely supposed to be a distraction to remove Benji from the picture so he couldn’t help Ethan (and it did), and if Benji did end up finding the bomb and going through the process of disarming it, he would be rattled by the thought that the Entity was on to him (and he was). Low risk, decent reward. But there is another scene later on where the stakes are much higher and the variables far more numerous. Ethan squares off against the main human antagonist (the AI’s purported representative in the physical world) on top of a speeding train; after an extended fight scene, the antagonist falls backward off the train just as it passes over a bridge, landing in the back of a truck hauling something presumably nice and soft. The fall is timed down to the split-second—the antagonist has a timer on his watch counting down to zero to let him know when to take the fall. At first glance, this might seem reasonable. All you have to do is calculate when the train is going to pass over the bridge and when the truck is going to drive under the bridge. It’s one of those silly little problems about one train leaving Philadelphia and one train leaving New York, where you have to calculate when the trains will meet.

This is the real world, though, not a math problem. There are so many variables that would have to be known precisely. This would be hard enough normally, but for some reason the engine is coal-powered (maybe so that it can’t be disabled electronically?); when the antagonist breaks into the engine, he throws both the engineer and the fireman off the train, tosses a random scoop of coal into the furnace, pushes the throttle to max, and then breaks off the throttle and brake controls. Presumably this is done to ensure a constant velocity, but you’re telling me that the speed of a coal-fired steam engine would be so consistent that it would be possible to calculate down to fractions of a second when the train is going to cross that bridge? Even if it is, you have to keep in mind that the antagonist is fighting Ethan all the while. Who knows where on the train he will end up, or even if he will be free to just fall off the train. As it turns out, he is, but only because a pair of agents who are hunting down Ethan happen to show up at that exact moment. Ethan is distracted and backs off, and just at that moment the timer goes off and the antagonist falls backward off the train. Don’t get me wrong—it’s a cool scene. But it is also absolutely ridiculous to think that the outcome of a situation with so many variables could have been predicted with such precision.
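Just to put a number on it, with figures I am inventing on the spot: even if you ignore the fight entirely and grant the AI perfect knowledge of the track, a tiny error in the train’s speed wrecks a split-second prediction.

    # Back-of-the-envelope sketch with invented numbers: how much does a
    # small uncertainty in the train's speed throw off the predicted
    # moment it crosses the bridge?
    distance_km = 20.0       # hypothetical distance from the fight to the bridge
    speed_kmh = 60.0         # the speed the prediction assumes
    speed_error_kmh = 1.0    # suppose the runaway engine is off by just 1 km/h

    predicted_hours = distance_km / speed_kmh
    actual_hours = distance_km / (speed_kmh - speed_error_kmh)

    error_seconds = (actual_hours - predicted_hours) * 3600
    print(f"arrival is off by about {error_seconds:.0f} seconds")   # ~20 seconds

Twenty seconds is an eternity when the whole gag depends on a truck being directly under a bridge at the moment of the fall, and that is with only one variable fudged by a hair.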

There are other scenes in the film where the AI’s predictive capabilities are presented as being essentially godlike, including one in particular that is probably more ridiculous than the two I just described combined. But it also contains a major plot spoiler, so I’m going to skip it. The two examples above should suffice to prove my point.

Here’s the interesting thing about these scenes, though: While I was watching the film, I was very much caught up in the moment. I do remember vaguely thinking, “Well, that seems unlikely,” but I also didn’t stop to think about all the things I just wrote above. Those only came later when I was looking back on the film. The truth is that MI7 is not by any stretch of the imagination the first film to feature an antagonist with an incredibly convoluted plan that counts on so many things going just right and requires the antagonist to have knowledge that he or she would never have. Most recent James Bond films feature such a convoluted plan; the Joker’s plan in The Dark Knight is also incredibly twisted and complex. I’m sure you can think of many, many other films that do the same thing. For as ridiculous as these plans might be, they are fun to watch. And if we can have human antagonists carrying out such plans, why not an AI with access to (presumably) all of human knowledge? If anything that would seem to make more sense, wouldn’t it?

We generally recognize that these convoluted plans are pure fantasy when they are carried out by human villains. Sure, we recognize that MI7 is a fantasy as well—I’m pretty sure no one out there is going to mistake it for a documentary, except perhaps a documentary about Tom Cruise’s ongoing quest to kill himself on camera—but there is something inside us that believes it is at least somewhat more likely that an AI might be able to carry out such a plan. We already rely on complex automated systems to do so many things for us in our daily lives, so it doesn’t seem like that much of a leap. I’m not saying that media representations are solely responsible for this sort of thinking, but they are part of the feedback loop that causes us to have far more trust in AI than is perhaps warranted, leading to “automation bias.” This is where we trust an automated process over our own judgment; the concept was first introduced by scholars discussing how pilots reacted to failures in automated systems, where the consequences could be catastrophic. We also saw a version of this in Kasparov’s loss to Deep Blue; instead of trusting his years of experience as a grandmaster and world champion when he saw what appeared to him to be an inferior move, he assumed that the computer had somehow seen more than he was capable of seeing. Kasparov may perhaps be forgiven for this failing, given the hype around Deep Blue and his lack of knowledge about how the computer worked. But automation bias remains a problem today. Tourists are still driving their cars into the ocean while following GPS, to give just one example. I am going to discuss this more in my last point, though, so let’s move on to that and see if we can’t wrap this thing up.

The fallacy of the “rogue AI”

Toward the end of the first point above, I raised the question of why AI in media representations always seems to “become sentient” and then “go rogue.” I suppose I should clarify this a bit. For starters, there are media representations that portray AI as being conscious without being antagonistic toward humans. Star Wars is probably the best known example: Droids like R2-D2 and C-3PO both appear to be conscious, yet they are the most faithful companions you could ever ask for. Data from Star Trek: The Next Generation is a conscious AI who spends much of his time trying to be more human. Interstellar also features AI that appears to be conscious in the form of the companion robots TARS and CASE, neither of whom “goes rogue.” As far as I can tell, there are a few differences between AIs like R2-D2, C-3PO, Data, TARS, and CASE and “rogue AIs.” The first is that, in the former cases, humanity has apparently already learned how to build conscious AGI; in other words, the AI is conscious from the start; it doesn’t “become sentient” on its own. The second is that friendly AIs often seem to be embodied in specific robot bodies, while “rogue AIs” (the Entity, Skynet from the Terminator series, Ultron in Avengers: Age of Ultron, etc.) are more pervasive and able to travel freely throughout cyberspace. (There are exceptions to this, of course, such as the individual terminators in the Terminator series or the agents in The Matrix, but these embodied AIs are agents of the higher, pervasive AI.)

Both of these points are important keys to understanding the ubiquitous trope of “sentient AI gone rogue.” In these stories, sentience or consciousness is important because it means that the sentient or conscious being can have intentions. Presumably, if an AI is built from the ground up as sentient by human beings, those human beings have also figured out how to make sure that the machine’s intentions align with our own. Conscious AI that initially has intentions aligned with our own but then decides to do its own thing is one scary trope we see in media representations; an AI that “becomes sentient” on its own is even more terrifying because we never had control over it in the first place. It learns to adapt to its environment on its own terms, and thus it develops its own intentions. And if that AI is pervasive and unconstrained, it becomes much harder to stop than an embodied AI (like KIPP from Interstellar, which was dismantled by Dr. Mann). What we fear—and what makes a rogue AI such a horrifying antagonist—is a mind with intentions that are different from our own and that cannot be controlled by us. After all, history has shown that when two groups with clashing intentions meet, the technologically inferior group has a very bad time of it.

The thing is, we already have entities with intentions different from our own—they’re called “other human beings.” AIs don’t need to “become sentient” and “go rogue” to be dangerous when we have human bad actors using them to achieve nefarious ends. And this isn’t fantasy, either; large language models like ChatGPT can be used to instantaneously create disinformation that can then be published to the internet. OpenAI has been working hard to put safeguards in place that will prevent the chatbot from spitting out fake news and other harmful content, but others are working just as hard to “jailbreak” the bot and get around those safeguards. This arms race is likely to continue for the foreseeable future, and harmful content generated by AI will also likely continue to be a problem. Nefarious human actors with access to powerful AI tools are a reality right now, and thus they pose a far greater danger than any fantasies about “rogue AIs.”

But such bad actors might not even be the greatest threat from AI at the moment. Just as we don’t need a rogue AI for AI to be dangerous, we don’t even need bad actors; even well-intentioned people can cause harm if they put their trust in an unreliable algorithm or trust an algorithm to do something that it is not capable of doing. This brings us back to automation bias. There is a program that airs on the Discovery Channel here called Air Crash Investigation that I used to watch religiously—not for morbid reasons, but because I find the process of attempting to figure out what went wrong fascinating. One recurring factor is the failure of automated systems. Sometimes that failure leads directly to the crash, but usually the crash is the result of the pilots failing to react properly after the automated systems fail. That is, they do not recognize that a system has failed or rely too much on other automated systems and forget that their job is to fly the plane. Even if their own senses or judgment are telling them one thing, if this conflicts with what their automated systems are telling them, they often default to trusting the latter. There are many cases where this might be a good thing—such as flying at night or in inclement weather, where it is easy to become disoriented—but if an automated system fails it can be deadly.

Automated systems in the form of algorithms (that is, AI) have only become more and more prevalent in our society, and placing too much trust in them can lead to negative outcomes. Those outcomes might not result in the deaths of hundreds of people, but they can still harm people in other ways. An algorithm designed to calculate the risk of recidivism may rely on environmental factors that end up targeting certain portions of the population simply due to where they live or how they were brought up. A human being becomes a set of statistics that allows for no nuance, and individuals who are deemed high-risk by the algorithm are more likely to get harsher sentences and thus be locked into the justice system.

I could give more examples, but if you want a deep look into how AI systems can cause harm when the models they are built on are flawed, you should check out Cathy O’Neil’s cleverly titled Weapons of Math Destruction. That’s where I got the recidivism example; she talks about plenty of other flawed systems, too, including the over-optimistic models that led to the subprime mortgage crisis in 2008 (which O’Neil personally witnessed while working at a hedge fund, and which inspired her to write the book), university rankings leading to an arms race that increases inequality, and police departments using predictive models to figure out where crime is most likely to occur leading to poorer neighborhoods being continuously targeted. The overriding theme is one of pernicious feedback loops that reinforce existing prejudices, rewarding the haves and punishing the have-nots. As O’Neil notes: “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide” (204). She also makes two important points about what predictive AI systems are not—they are neither neutral nor inevitable.

Predictive models are, increasingly, the tools we will be relying on to run our institutions, deploy our resources, and manage our lives. But … these models are constructed not just from data but from the choices we make about which data to pay attention to—and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral. If we back away from them and treat mathematical models as a neutral and inevitable force, like the weather or the tides, we abdicate our responsibility. (218)
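To make the feedback-loop idea concrete, here is a deliberately crude simulation, with entirely invented numbers, of the kind of loop O’Neil describes in predictive policing: the model flags the neighborhood with the most recorded crime, patrols go there, more crime gets recorded there simply because that is where anyone is looking, and the new records feed the next round of predictions.

    # A crude simulation (all numbers invented) of a pernicious feedback
    # loop: patrols follow the data, and the data follows the patrols.
    true_crime_rate = {"A": 0.10, "B": 0.10}   # the two neighborhoods are identical
    recorded_crime = {"A": 12, "B": 10}        # a small initial skew in the records
    patrols_per_round = 100

    for round_number in range(1, 6):
        hotspot = max(recorded_crime, key=recorded_crime.get)  # the model's "high-risk" area
        found = round(true_crime_rate[hotspot] * patrols_per_round)
        recorded_crime[hotspot] += found       # more looking there means more finding there
        print(round_number, recorded_crime)

    # After five rounds, neighborhood A's recorded crime has climbed to 62
    # while B's sits untouched at 10, even though the two are identical by
    # construction. The record now "confirms" the model's initial bias.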

In 1964, computer scientist Joseph Weizenbaum created a program called ELIZA, widely hailed as the first chatbot. Running on a script known as DOCTOR, it played the role of a Rogerian psychiatrist, interacting with people by taking their input and reframing it as questions to elicit more input. The chatbot mesmerized just about everyone who came into contact with it, including psychiatrists who began to envision a future where computers instead of humans would administer therapy to patients. ELIZA was an extremely primitive program, though—a fact that Weizenbaum himself tried to emphasize in his 1966 paper, where he noted: “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there” (42-43). His warning went unheeded, and psychiatrists continued to paint dangerously rosy pictures of a techno-utopian future. Weizenbaum was horrified, and he became an outspoken critic of AI. His 1976 book Computer Power and Human Reason is a frankly astonishing read. Much of what he says about how AI systems work is, of course, outdated at this point, but pretty much everything he said about the dangers of AI has proven exceptionally prescient. Over and over as I read this book I found myself thinking, “He was warning us about this in 1976, but we didn’t listen! And we’re still not listening!”
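For the curious, the trick ELIZA pulled off can be sketched in a handful of lines. What follows is a loose, hypothetical imitation in Python, not Weizenbaum’s actual DOCTOR script: match a keyword pattern, flip the pronouns, and hand the user’s own words back as a question.

    # A loose ELIZA-style sketch (not Weizenbaum's actual DOCTOR script):
    # match a pattern, flip the pronouns, reflect the input as a question.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (r"i feel (.*)", "Why do you feel {}?"),
        (r"i am (.*)", "How long have you been {}?"),
        (r"my (.*)", "Tell me more about your {}."),
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(user_input):
        for pattern, template in RULES:
            match = re.match(pattern, user_input.lower())
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."   # the all-purpose fallback

    print(respond("I feel anxious about my work"))
    # -> Why do you feel anxious about your work?

ELIZA itself was somewhat more elaborate than this, of course, but the illusion rested on exactly this kind of reflection.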

There is so much that I could say about Weizenbaum’s book that it would take at least as many words as I have already written here to cover it all (I have thirteen pages of typewritten notes). I will not subject you to that, though, and instead quote a few words that express an idea echoed above by O’Neil—and by many other AI critics as well—and one that I mentioned in my first point.

The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. … There is not the slightest hint of a question as to whether we want this future. It is simply coming. We are helpless in the face of a tide that will, for no reason at all, not be stemmed. There is no turning back. Even the question is not worth discussing. (241, 242)

I know it may sound like I am beating a dead horse here, but if there is one thing you should take away from today’s very long entry, it is this: There is nothing inevitable about AI development. AI technology doesn’t advance on its own, it is advanced by researchers and scientists. Systems don’t pop into being of their own accord, they are the results of models built on assumptions that are rarely questioned because they are rarely even visible, buried deep inside a black box or hidden behind the curtain of “proprietary technology.” We are, as O’Neil says, abdicating our responsibility for the systems that we are building to run our lives.

In addition to daring to question the inevitability of AI, Weizenbaum also questioned the ethics of AI research. He didn’t doubt that one day AI researchers would be able to achieve everything that they hoped and promised. But the important question for him was not what computers can do but what they should do. He was most immediately motivated by the idea that the psychological treatment of human beings would be handed over to computers, but he also foresaw the use of AI in the justice system and elsewhere, and he didn’t like what he saw. While reading Weizenbaum I couldn’t help but think of Jeff Goldblum’s line as Ian Malcolm in Jurassic Park: “Your scientists were so preoccupied with whether they could, they didn't stop to think if they should.” I think the same question could be asked of the techno-utopians. Sure, we may be able to build AI systems to rule every aspect of our lives, but should we? Do we really want to abdicate our responsibilities to systems that most of us don’t understand? Do we want to let computers make our decisions for us?

Epilogue: Hope for Part Two (and maybe the future, too)

I don’t know if anything I am going to say here will qualify as spoilers, but I am going to speculate about how the second part of MI7 might turn out, so keep that in mind before you read on. You may have caught that, toward the very beginning of this entry, I wrote that the film seems to perpetuate certain AI fallacies or misunderstandings. That wasn’t just the usual rhetorical dithering; for all the ranting I did above about media representations of AI, MI7 is actually building its own argument against the predictive capabilities of AI by pitting the Entity against Ethan and crew. The Entity is cold and sees people only as tools to be used or obstacles to be eliminated. Ethan and crew are intense and emotional, and they would do anything for each other, including sacrificing their own lives. The scene I complained about above with Benji and the bomb? Yeah, from the point-of-view of the plot, it may not make a lot of sense, but from the point-of-view of character, it makes all the sense in the world. We know that Benji cares for his friends, but the way that Simon Pegg delivers that line makes you feel it viscerally. In another scene, Ethan says that his life will never be more important to him than the lives of his friends. Ethan and his team embody the best of humanity, while the Entity is devoid of all the things we value in people.

The Entity is also calculating, as I explained in excruciating detail above. Information only has value to the machine insofar as it can be incorporated into its predictive model. Ethan plans as well—in fact, elaborate plans are a hallmark of the MI series—but these plans invariably go awry when they run up against the real world. It is made very clear that Ethan doesn’t know everything, and he is only able to succeed because he is able to keep calm and improvise. “We’ll figure it out!”—a phrase usually uttered in the midst of a tense or frantic moment—has almost become a catchphrase for Ethan at this point, and he says it at least twice in this latest film, from what I can remember. (It would be interesting to go back through all the films and see how many times he does say it—I suspect it’s quite a lot.) For Ethan and crew, nothing ever goes even remotely according to plan, which is what makes the films so fun to watch. We know that, even though everything seems to be going to hell in a handbasket, Ethan, Benji, and Luther are somehow going to figure it out in the end and triumph against seemingly insurmountable odds. Otherwise they might as well call the films Mission: Somewhat Difficult. And I don’t think it’s going to come as a surprise to anyone in Part Two when Ethan and crew once again snatch victory from the jaws of defeat. For as invincible a foe as the Entity might seem, the team will triumph over it. I don’t know exactly how they’ll do it—that’s why I’m going to see the second one as soon as I can when it comes out—but I know that they will.

Ethan and the Entity are not the only actors in this play; you also have the intelligence services of just about every government in the world, plus a few familiar faces that operate in the gray areas of international society. What makes Ethan unique is that he is the only player who doesn’t want control over the Entity. He realizes how dangerous this AI is and wants to destroy it. We’ll ignore for the time being the moral and ethical implications of destroying not only a sentient being, but a sentient being that achieved that sentience of its own accord. After all, the film ignores these implications. This could be because Ethan has already terminated the lives of many a sentient being. (It could be something else, too, but I’ll get back to that in a moment.) The point, though, is that only Ethan sees the dangers of such an advanced AI and believes that we would be better off without it.

If anyone is going to make a film about humanity triumphing over AI, it’s Tom Cruise. The man literally rode a motorcycle off a cliff—numerous times, even!—and always seems to be thinking up crazy stunts that he can do for real in order to capture the action in-camera rather than relying on a green screen and/or computer graphics. He knows that there is still something special about a thing being real, even if it is in the service of a fantasy. This is why filmmakers still use miniatures and hire teams of professional stunt people. Ironically, as I write this, writers and actors in the US are striking, and the use of AI in Hollywood is one of the central issues on the table. I was not surprised to read that Tom Cruise lobbied on behalf of SAG-AFTRA on these matters. Whatever you might think of Tom Cruise, he is a man who is dedicated to his craft and sees it as a noble human endeavor. The story for MI7 was written long before current events, obviously, but it’s still interesting to see Tom Cruise fighting back against AI both on and off the big screen.

I said above that the film might be ignoring the moral and ethical implications of destroying a sentient being for reasons other than the fact that Ethan kind of destroys sentient beings for a living. The truth is that, even at the end of Part One, we don’t really know all that much about the AI. That is by design, I think—the less we know about the antagonist, the more threatening and terrifying it is. But it also leaves open the possibility for a number of different outcomes in Part Two. I’ll tell you the outcome that I’m hoping for: that the AI is not a rogue sentient being after all, just a really advanced AI being controlled by a nefarious human actor. In this scenario, the whole idea of a sentient AI would be a smokescreen to distract attention from the real human villain. This is not a prediction, mind you; I just think it would be really cool if it turned out this way, because it would sort of reaffirm everything I’ve written here. The problem—and the reason I am not all that confident about this—is that rogue AIs are just too flashy and exciting. From my perspective, a human villain using a super-AI would be far more terrifying because it is far closer to our present reality, but I suspect that audiences raised on AIs “becoming sentient” and “going rogue” might find the misdirection devious or disappointing.

James Cameron recently gave an interview with CTV News in Canada on the dangers of AI, and he started out by saying, half-jokingly: “I warned you guys in 1984, and you didn’t listen!” But even Cameron—who makes some good points about future dangers of AI—used the “sentient AI gone rogue” trope in his warning (he’s referring, of course, to the original Terminator film). Sure, it was a warning, of sorts, but when your warning is that far removed from reality, can people really be blamed for not listening? I get it, though. A story about a judge placing too much trust in an AI system and subsequently giving someone an excessively harsh sentence is probably not going to make for a very exciting film. Maybe our warnings about the dangers of technology need to be something other than action blockbusters.

There is an existential crisis involving AI, but AI is not the problem—we are. As AI critics have been warning us for decades, we are letting things happen to us and just shrugging our shoulders because “the future is coming, whether we like it or not.” Sure, that’s how time works. But we don’t need to accept whatever vision of the future is being presented to us by people who think that the answer to every problem—including those problems caused by technology—is more technology. The “sentient AI gone rogue” trope is a flashy and exciting way of expressing our fear of technology that has advanced beyond our understanding, but it’s just a distraction. I’m not saying you can’t enjoy a film like MI7—I sure did—but we need to be realistic about AI as it applies to us in our daily, real-world lives. AI doesn’t need to be sentient when most of us have decided that we don’t need to think for ourselves anymore. AI doesn’t need to be able to predict the future with perfect accuracy when we have resigned ourselves to what others tell us is inevitable. And AI doesn’t need to go rogue when there are already plenty of bad actors out there counting on us to not stand up and fight back for the world we want to live in.

I realize that today’s entry isn’t perfect. I’m sure there are many places along the way where you could poke holes in my arguments. To tell you the truth, I almost want to keep tinkering with this rather than putting it out in the world. It’s something that I’ve been thinking about for a very long time now, and this is the first time I’ve put my thoughts into words (in English, at least). But I know that perfect is the enemy of good, so I’ll put this imperfect collection of thoughts out there in the hopes that it might do some good.
