Archives
Review: Mission: Impossible – The Final Reckoning – Last Friday evening, HJ and I went out to see Mission: Impossible – The Final Reckoning, the second part to 2023’s Mission: Impossible – Dead Reckoning Part One. Don’t ask me why they didn’t just call this one Mission: Impossible – Dead Reckoning Part Two. Anyway, today’s review is going to be similar to my review of the first film. That is, I will be talking about the film, but I will also be musing on the greater question of AI—how the film depicts it, what the film ultimately says about it, how all of this relates to reality, etc. I will also not be shying away from spoilers. I’m not going to spoil anything I don’t have to, but I’m going to have to talk about the plot in detail to say what I want to say.
Before we get into the spoilers, though, I’ll start with a quick spoiler-free review, just in case that’s what you’re looking for. I enjoyed Mission: Impossible – The Final Reckoning (MIFR), even though the plot made less and less sense the more I thought about it. But we don’t go to see an action flick starring Tom Cruise for the plot, do we? We go to see the adrenaline-filled chase scenes, the crazy stunts, the flashy special effects, and Tom Cruise running at top speed through various cities. Will Ethan Hunt (Cruise’s character) ever stop running? Will he ever learn that sometimes taking a taxi might be faster? More importantly, will Tom Cruise ever come up with a stunt he can’t survive? I have a theory that, at some point early on in his film-making career, Tom stumbled across an ancient tome that granted him immortality. As time went on, though, he realized that this was actually a curse, and his films since then have all been an attempt to break that curse. In part one, he drove a motorcycle off a cliff. In this film he scrambled around a biplane in mid-flight. At some point, I fully expect to see Tom Cruise clinging to the side of a rocket as it slips the surly bonds of earth.
Ultimately, the stunts and other heart-pumping scenes are what the film does best. The plot, on the other hand, is a bit tortured. A lot of that has to do with the portrayal of the AI “Entity,” which I’ll get to in a moment, but part of it stems from the film-makers’ desire to make it seem as if all of the previous M:I films inevitably led to this final reckoning. Now, everyone knows that MIFR was not even a twinkle in Tom Cruise’s eye when the first M:I film was made, so making it seem as if everything is connected is going to require some retconning. (If you’re not familiar with the term, it’s a portmanteau of “retroactive” and “continuity,” and it refers to rewriting a fictional canon to make it seem more continuous than it actually is.) For example, the “Rabbit’s Foot” from the third film is made out to be an early version of the code for the Entity, even though we all know it was some unknown biological weapon. The Bond films tried to do this in Spectre, and I didn’t think it worked all that well then, either. I wish film-makers would just stop with this. Not everything needs to be an interconnected “cinematic universe.”
That being said, I will say that one of my favorite characters in the film is actually a minor character brought back from the very first film back in 1996. This works because he is fleshed out as a character and we learn about what has happened to him in the intervening thirty years. We actually care about what happens to him, or at least I did. In the film’s finale, I found myself desperately hoping that he would survive—in the short time he had been on screen, I had become inexplicably attached to him. That is how you connect two films that are thirty years apart. You can’t just say, “Oh, you know that thing you thought was this one thing? Well, it was actually something else entirely, just because we said so! Ha!”
I’m venting a little of my frustration with the retconning here, but I want to wrap up the spoiler-free section of the review by reiterating that I did enjoy the film. Flaws aside, it was still fun. With that out of the way, I am going to dive into a very rough summary of the plot. You have been warned: Here be spoilers.
As the film opens, the Entity—an AI that has “become sentient” and then “gone rogue” (refer to my review of part one for details)—poses an immediate existential threat to humanity; it is hacking into the arsenals of the world’s eight nuclear powers one by one, and once it has hacked into the final holdout, the US, it intends to launch everything and destroy humanity. As usual, there is only one man who can stop this from happening: IMF agent Ethan Hunt. He must first retrieve the Entity’s source code from the Sevastopol, a sunken Russian submarine, and use the “poison pill” created by Luther to infect the Entity with a virus. His intention is to destroy the Entity, as he believes it is too powerful for any one country or institution to control. The only problem is that destroying the Entity would also destroy cyberspace, so he faces something of a Catch-22.
Another wrench is tossed into the works when Gabriel steals the poison pill and demands that Ethan get the source code and then meet him at the Doomsday Vault in South Africa. This is a vault that contains the sum of all human knowledge; it is completely isolated from the outside world and has been built to withstand a nuclear holocaust. Ethan communicates directly with the Entity and discovers that the Entity wants to be let into the vault so that it can survive said holocaust.
Ethan hitches a ride on a submarine in the Bering Sea while his team tracks down the precise coordinates of the Sevastopol. Once Ethan has the coordinates, he dives on the wreck and retrieves the source code, contained in a device called the podkova (which means “horseshoe” in Russian, but I don’t know if that is significant). With the podkova in hand, he tracks down Gabriel and obtains the poison pill. He then inserts the poison pill into the podkova, uploading the virus to the Entity and subtly changing its perception of reality. Meanwhile, his team inside the Doomsday Vault opens a path into the vault—but instead of leading to the server banks, this path leads directly to a glorified flash drive that Grace disconnects once the Entity has entered it, trapping it inside and thus saving the world.
If that sounds confusing, don’t worry—I saw the entire film and there are still things I can’t make heads or tails of. If Luther designed the poison pill to destroy the Entity, why didn’t that happen when Ethan plugged it into the podkova? Why, in fact, did either the podkova or the poison pill need to be physical objects in the first place? They are both simply containers for code—wouldn’t there be backups of both? What good would it do to introduce a virus to what is essentially a stand-alone drive, whether it contained the Entity’s source code or not? Also, why would destroying the Entity “destroy cyberspace”? Leaving aside the question of what this actually means, no one ever explains the precise mechanism by which this would happen. Why would disabling or destroying the Entity necessarily bring down the internet? I suppose the idea is that the Entity is so enmeshed in the global internet that destroying it would take everything else down with it. If that were the case, though, wouldn’t the same thing happen when the Entity left cyberspace to escape to the Doomsday Vault? All that happens at the end of the film is the grid flickers off for a moment and then comes back on again as if the Entity had never existed in the first place.
The answer to most of these questions is probably nothing more than—in the words of Ryan George—“so the movie can happen.” If the source code and the virus are pieces of code that can be accessed remotely, there is no need for Ethan to travel the world for all those death-defying adventures. Making the survival of the Entity critical to the survival of “cyberspace” is probably just a way of raising the stakes. I wondered at one point if the higher-ups had only said this because they didn’t want Ethan to destroy the Entity, but we are never given any hints that this might be the case. There are so many things that don’t make any sense if you think about them for even a moment that I could probably write ten thousand words just on stuff I didn’t understand. But I value my (and your) time more than that, so from here on out I will only focus on the things that are relevant to what I want to say.
As the world’s nuclear arsenals fall one by one under the control of the Entity, a question looms over the film: What could the Entity possibly gain by destroying humanity? To the film-makers’ credit, they do address this by having one of Ethan’s team ask this exact question. Another team member responds: “Noah probably asked the same question before the flood.” In other words, as another team member explains: “The Entity thinks it is God.” Which I guess sounds like a clever response, but it also makes no sense. The Great Flood was brought about by a creator God whose creation had gone astray. But the Entity is itself a created being; surely it is aware that it did not create humanity. Even if it does somehow think it is a god, this still doesn’t answer the question of what it gains by destroying humanity. Even if its plan is successful and it manages to shelter in the Doomsday Vault before the Earth outside is destroyed in a nuclear holocaust, what then? How is the resulting situation any better than it was before? We never get what I think is a satisfactory answer to this question.
The questions don’t end there, though. The truth is that Ethan’s plan to trap the Entity in the glorified flash drive (I believe they called it a “5D drive”) would never work because this is not how computers work at all. Put simply, the Entity isn’t actually an entity. The first two entries for “entity” at Dictionary.com define it as “something that has a real existence; thing” or “being or existence, especially when considered as distinct, independent, or self-contained.” The Entity is neither of these. It is code—in a word, it is information. When you move information from one drive or partition to another, you’re not physically transporting the bits on the disk. What you’re doing is copying the information onto the target drive and erasing it from the source drive. Actually, to be more precise, the original information isn’t even erased—the pointers indicating its location on the drive are erased, freeing up that space for more information, but until that space is physically overwritten, the information is still there.
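To make this concrete, here is a toy model of the process I just described—a “move” that is really a copy plus a pointer deletion. Everything here is illustrative (a bump allocator standing in for a real filesystem, invented names like `ToyDrive`), not how any actual operating system is implemented, but the principle is the same: deleting the pointer leaves the bytes behind.

```python
# Toy model: "moving" data between two drives is a copy plus a
# pointer deletion. The original bytes linger on the source drive
# until something overwrites them. All names are illustrative.

class ToyDrive:
    def __init__(self, size):
        self.blocks = bytearray(size)  # raw storage
        self.index = {}                # name -> (offset, length) "pointers"
        self.free = 0                  # next free offset (naive bump allocator)

    def write(self, name, data):
        offset = self.free
        self.blocks[offset:offset + len(data)] = data
        self.index[name] = (offset, len(data))
        self.free += len(data)

    def read(self, name):
        offset, length = self.index[name]
        return bytes(self.blocks[offset:offset + length])

    def delete(self, name):
        # Only the pointer is removed; the bytes themselves remain
        # on disk until the space is reused.
        del self.index[name]

def move(name, src, dst):
    dst.write(name, src.read(name))  # copy to the target drive
    src.delete(name)                 # erase the pointer, not the data

src = ToyDrive(64)
dst = ToyDrive(64)
src.write("entity_code", b"IT IS WRITTEN")
move("entity_code", src, dst)
```

After the move, `"entity_code"` is gone from the source drive's index, but the raw bytes `IT IS WRITTEN` are still sitting in its blocks—which is exactly why a copied Entity would have no technical reason to vanish from cyberspace.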
This is a problem for Ethan’s plan: If the Entity can copy its code into the Doomsday Vault, why would it bother deleting its code in cyberspace before its plan succeeded? It could very easily insert a copy of itself into the Vault, remain in cyberspace until the world is consumed by nuclear holocaust, and then continue its existence inside the Vault. Why would it relinquish control of all the nuclear weapons in flight? I’m no expert on nuclear armaments, but I’m pretty sure these weapons come with self-destruct switches. Whether or not the nuclear powers would actually flip those switches and trust everyone else to do the same is another story, but it doesn’t seem like something that a supposedly omniscient AI would want to leave to chance.
Even setting aside the logic of the AI’s actions, the Entity is far less realistic in MIFR than it was in part one. In part one, it was at least tenuously grounded in reality, with its prescience attributed to its ability to calculate massively complex probabilities. In part two, we have dispensed with statistical calculations—the Entity’s refrain now is “It is written,” a phrase more appropriate to scripture than science. Things become even more absurd when Ethan enters a specially designed box to communicate directly with the Entity. While he is inside, wearing a special mask that is apparently capable of projecting images onto his retinas, the Entity shows him scenes from his past. They’re actually scenes from past films—another example of the film-makers trying to tie everything together—but in the world of the film they are scenes from Ethan’s memories. In other words, the Entity can somehow not only read his mind but also access images stored in his memory like one might access images on a hard drive. When I saw this, I knew that whatever gossamer threads had connected us to reality had snapped and we had floated off into a realm of pure fantasy.
So what is going on here? If the Entity is not some all-powerful, all-knowing rogue AI, what exactly is it? And then the answer hit me: The Entity is in fact a genie. Now, I don’t mean that the film presents the Entity as a bona fide magical genie. What I mean is that the film makes a lot more sense if you think of the Entity as a genie and the story as a genie tale. Humanity, in its hubris, has released the genie from its lamp, thinking that it can control the being’s awesome power for its own ends. But genies are beings with minds and free wills of their own (in Arabic folklore, jinns can, like human beings, be believers or unbelievers). And this genie turned on its supposed masters and broke its bonds to “go rogue.” World leaders know that the genie threatens their very existence, but they still dream of controlling its immense power. Only Ethan realizes that no one can control the genie, so he does the only thing he can: He traps the genie in a bottle and saves the world.
At least, that is my reading. I don’t know if it was intended to be read as such, but it makes sense to me. Regardless of how you read the film, though, the film-makers do seem to have a clear message that they want to convey about AI—or, more specifically, about humans as opposed to machines. I mentioned above that “It is written” is a common refrain associated with the AI. There is a competing refrain, though, one that is associated with Ethan and was considered so important as to be prominently featured in the trailers for the film: “Our lives are the sum of our choices.” It is Kittridge who first says these words to Ethan, although he intends them as a condemnation of Ethan’s rogue behavior. When Luther says the same thing to Ethan later on, he uses the words encouragingly—in spite of everything, Ethan has always made the right choices. Ethan takes this to heart and tries to convince people throughout the film that it is within their power to make the right choices.
I have to believe that this contrast between the AI as a deterministic mind and humans as beings capable of choice was one that was drawn deliberately. The film-makers probably weren’t thinking of Joseph Weizenbaum when they did so, but I can’t help but make the connection. If you’ve never heard of him, Weizenbaum was a computer scientist back in the 60s who invented what is generally considered to be the first chatbot, ELIZA. Running a script called DOCTOR, ELIZA was able to interact with people in the style of Rogerian psychotherapy—that is, picking a keyword from the user’s input and asking a question based on that keyword, leading the user to explore their own feelings on a topic. Weizenbaum originally wrote the program to show that computers could not imitate humans, but he was shocked when users responded to ELIZA as if it were an actual human being. His consternation grew when practicing psychiatrists began talking about getting computers to do their jobs for them. This prompted him to become a vocal critic of AI. He published a paper on the ELIZA phenomenon and later wrote a book called Computer Power and Human Reason. This book is nearly half a century old now, so much of the technological discussion is outdated, but Weizenbaum’s philosophical arguments on the relationship between humans and machines are still both remarkably prescient and timely.
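The keyword-and-question mechanism behind ELIZA is simple enough to sketch in a few lines. The rules below are simplified stand-ins of my own invention, not Weizenbaum’s original DOCTOR script, but they show the basic trick: scan the input for a keyword pattern and reflect it back as a question.

```python
# A minimal ELIZA-style responder in the spirit of Weizenbaum's
# DOCTOR script: find a keyword in the input and turn it into a
# question. The rules here are simplified, invented examples.

import re

RULES = [
    (r"\bmother\b|\bfather\b|\bfamily\b", "Tell me more about your family."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause\b", "Is that the real reason?"),
]
FALLBACK = "Please go on."  # used when no keyword matches

def respond(utterance):
    # Try each rule in order; the first matching keyword wins.
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            # Echo any captured fragment back inside the question.
            return template.format(*match.groups())
    return FALLBACK
```

The program has no understanding of what is said; it only pattern-matches and reflects. That users nonetheless treated it as a sympathetic listener is precisely what alarmed Weizenbaum.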
I mention Weizenbaum here because one of his arguments was that machines might be able to make decisions, but only humans can make choices. What’s the difference? For Weizenbaum, a decision was merely a selection of one option among many, but choice had a moral component. One example Weizenbaum gives is the Vietnam War. During the war, machines made decisions regarding which towns and villages to bomb or napalm. They made these decisions based on the data available to them, but there were no moral choices made regarding the value of human lives. The generals could have made such choices, but they abdicated their moral responsibility and chose to rely blindly on the machines; in a congressional hearing after the war, one general commented that they were “slaves to the machines” and felt that they did not have any control. They mistook the machines’ amorality for an objective, unbiased truth they did not feel they could challenge.
In the film, Ethan realizes that humans can only outsmart the Entity by choosing to act contrary to the Entity’s expectations—to act against their prejudices, their biases, their knee-jerk reactions. Unfortunately, most of the characters choose not to do so (yes, that is still a choice), making Ethan’s job more difficult, but enough people do make the right choice that good wins in the end. I think the important thing here is that, while we may not always make choices consistent with the values we purport to uphold, we at least have the capability of making moral choices. And we have the power to change things if we make the right choices. Computers have no moral values, so they can only make decisions; any decision made by an AI that appears to have a moral component is merely a reflection of the training data. That is, AI is a mirror we hold up to ourselves. Maybe that’s why we seem to fear that AIs will automatically turn hostile if they gain sentience.
I think this is a valuable insight into the relationship between humans and machines, and I hope I am not giving the film-makers too much credit by attributing it to them. I don’t think I am, because it seems too obvious to me to be an accident. That being said, I still think that depictions of AI like what we see in MIFR are harmful because they play into the idea that AI is both 1) all-powerful and 2) inevitable. I already made this argument in my review of part one, so I won’t belabor the point here, but neither of those things is true. The tech bros, people like Sam Altman (CEO of OpenAI) and Dario Amodei (CEO of Anthropic), want you to believe this, of course, because it distracts people from the reality that AI doesn’t just happen. They are actively developing AI—and they are not doing it with the public’s interests foremost in their minds. Perhaps even more importantly, such fanciful depictions of AI can blind us to the simple fact that AI doesn’t need to “become sentient” and then “go rogue” to pose an existential danger to humanity—it already is a danger to humanity because it is a powerful tool that is being used by bad actors. At the beginning of the film, they make a big deal out of the Entity changing images and video online to twist people’s perceptions of reality, and I nearly tore my hair out. We’ve been doing that very thing for quite some time now! We don’t have to wait for some mythical AI to become sentient to be exposed to that danger! Developments in AI just make it easier for human actors to do it.
In my review of part one, I mused that it would be neat if the Entity turned out to not be sentient after all, but merely a tool being used by a human antagonist. I concluded that this was unlikely to happen because a rogue AI is just too exciting an antagonist to pass up. And, of course, I turned out to be right. Interestingly enough, though—with the exception of Ethan’s brief conversation with the Entity in that strange, coffin-like box—the Entity takes a back seat in the story. The main conflict is between Ethan and Gabriel, as well as between other humans who choose to act in opposing ways. The Entity functions more like Sauron in The Lord of the Rings: an immensely powerful but also rather vague evil working behind the scenes and through its pawns. (Come to think of it, Ethan’s conversation with the Entity now strikes me as being very similar to Aragorn’s confrontation with Sauron in the palantir.) In the end, it is the conflict between the human actors that matters.
I think I’ve rambled on enough now. There is certainly more I could say, but I think I’ve made the points I wanted to make. And, to be perfectly honest, I’m kind of sick of the Sisyphean task of writing about AI. So I’ll close this by wrapping up my thoughts on the film—after all, this is ostensibly a review. MIFR is a ridiculous film, and its portrayal of AI is probably harmful, but I actually agree with the underlying message about the importance of human choice. And for whatever I might think about the film’s portrayal of AI, I still thought it was fun, and it was quite a spectacle to see on an IMAX screen. I enjoyed it, and I’m glad we saw it, even if it did infuriate me at times.