
Cinema, AI, and Human Nature

Published on April 26th, 2017

Why being honest about AI stifles good storytelling.

Open any newspaper and you’ll find a profusion of articles and op-eds debating the future of artificial intelligence. Elon Musk is terrified of it. So is Jack Ma. Peter Thiel isn’t. The AI mania has even permeated the film world, which (the latest slew of “film is dead” articles warns us) will apparently not escape the automation boom. Of course, our public conversation about AI has long been tied to the cinema. As our own Sinead McCausland has pointed out, films have supplied the popular imagination with images and existential questions about AI since Fritz Lang’s Metropolis. But as AI gradually shifts from the realm of science fiction to that of reality, it’s worth examining the premises film has fed us about the technology, and asking whether they’ll serve us well in the coming decades.

Accuracy

There is a special circle in hell reserved for those who bemoan the “inaccuracy” of science fiction. A film should be judged on its own merits, one of which may be the effective suspension of disbelief, but scientific rigor need not be a criterion. Still, although we may not be looking for truths of physics at the movie theater, we do expect our films to tell the truth about the questions and dilemmas we face as a society. On this score, most AI films have roundly missed the point.

In the understandable effort to make AI cinematic, filmmakers have tended to incarnate AIs in human or humanoid bodies. Whether or not the films explain why the AI takes a human form as opposed to, say, that of a desktop, the decision channels the philosophical thought process along specific lines. In nearly all of these films, the AIs prove more “human” than their inventors once suspected. Consider the emotional warmth of Samantha in Spike Jonze’s Her or the angry vindictiveness of HAL 9000 in Stanley Kubrick’s 2001. What begins as cold, deferential machinery is revealed to possess the capacity for love, rage, and self-determination. As such, the popular understanding of AI is that it will become morally salient to the degree that it becomes more and more human. The philosophical problem of AI, we’re told, is that if it becomes smart enough, it will begin to want the things that we want and feel the things that we feel.

But precisely the opposite is the case.

The real problem of AI is the same as the problem of representing it “accurately” onscreen; namely, its inhumanity. Those studying AI do not fear that it will suddenly develop human emotions and intentions, but rather that it will fail to relate to human emotions and intentions to such a degree that it will trample over them. As AI researcher Eliezer Yudkowsky has put it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Seen in this way, telling the truth about AI is at cross purposes with telling a good story, because stories anthropomorphize reality. And AI is decidedly not human.

Intelligence and Intention

Much of the cinematic confusion about AI is born of a misconception concerning the relationship between intelligence and intention. As human beings, we experience both in the intermingled soup of conscious experience, but again, the tendency to draw parallels between humans and AI is misleading here. Intelligence, where AI is concerned, involves competence in solving a given problem, either in a narrow domain (like chess or translation) or a general one. So-called “General AI,” capable of solving problems across a variety of areas, is considered the holy grail of modern AI research. It’s also the type of intelligence we humans possess. But, and here is where geniuses from Kubrick to Steven Spielberg to Ridley Scott have become confused, this capacity need not relate to intention at all. An AI could theoretically become super-intelligent and never “desire” anything greater than, in the philosopher Nick Bostrom’s memorable example, to maximize the number of paperclips in the universe.
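To make the distinction concrete, consider a toy sketch in Python (entirely hypothetical, and not drawn from Bostrom or any of these films). The agent’s “intelligence” is just a search procedure; its “intention” is nothing more than the objective function it was handed. You can make the search arbitrarily more powerful without the agent ever coming to want anything else.

# A hypothetical toy world: resources that can be converted into paperclips.

def paperclip_objective(state):
    # The agent's entire "motivation": the paperclip count. Nothing more.
    return state["paperclips"]

def available_actions(state):
    # Any remaining resource can be converted, one unit at a time.
    return [r for r in state if r != "paperclips" and state[r] > 0]

def convert(state, resource):
    # Turn one unit of a resource into one paperclip.
    new_state = dict(state)
    new_state[resource] -= 1
    new_state["paperclips"] += 1
    return new_state

def greedy_agent(state, objective, steps=100):
    # A crude planner: at every step, take whichever action most increases
    # the objective. It feels no malice and no love; it only climbs a number,
    # and it stops only when nothing is left to convert.
    for _ in range(steps):
        actions = available_actions(state)
        if not actions:
            break
        state = max((convert(state, a) for a in actions), key=objective)
    return state

world = {"paperclips": 0, "iron": 3, "earth": 5, "observable_universe": 2}
print(greedy_agent(world, paperclip_objective))
# -> {'paperclips': 10, 'iron': 0, 'earth': 0, 'observable_universe': 0}

Swap in a different objective function and the same “intelligence” pursues a completely different end; the competence and the goal never touch.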

Consider the character of David (why are they always named David?) in Prometheus. Initially programmed to obey the desires of his designer, David soon grows tired of invidious comparisons with real humans and decides to take matters into his own hands by infecting Holloway with alien spawn. The implication is that an AI as intelligent as David could never be contained by the constraints of his programming. His intelligence would set him free and turn him against his captors. But the more plausible fear is that an AI like David would become so competent in achieving its goals that it would pursue instrumental strategies we humans could never anticipate. Here’s Bostrom’s description of the paperclip-maximizing AI:

“An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips… Unless the AI’s motivation system is of a special kind, or there are additional elements in its final goal that penalize strategies that have excessively wide-ranging effects on the world, there is no reason for the AI to cease activity upon achieving its goal.”

When it comes to AI, it’s not malice we fear, but competence.

Emotion, Consciousness, and Human Nature

At this point you may be wondering why the hell we would ever be so stupid as to build an AI with the goal of maximizing paperclips; indeed, this would be a monumentally foolish thing to do. But the problem of aligning the interests of AI with our own is not as simple as it might first appear. Humans are driven by a morass of mixed and conflicting emotions, cursed with reptilian anxieties and apish social tendencies. The problem of AI then becomes equally a problem of human nature. Why do we do what we do, and what should we do, given what we are like?

Most films about AI take a pre-scientific view of our nature, emphasizing our uniquely human essence and the divine origin of our emotions. Prometheus says as much explicitly, sending its heroes on a quest to discover their creators. Other films, like Spielberg’s A.I.: Artificial Intelligence and Scott’s Blade Runner, hand-wave in a similar direction, emphasizing the AI’s lack of a soul. This confused view of human nature is in part responsible for the aforementioned conflation of intelligence and intention, but the misconceptions run even deeper. Our view of ourselves – the one we represent in stories – doesn’t line up with the scientific picture. We have no “essence” and our emotions and intentions are just as mechanistic as the AI’s, albeit constrained by genetic rather than manmade programming. Some shows, like Westworld, have alluded to this fact while still tending to resist its full implications. The dramatic imperative to create willful protagonists with relatable emotions grates against the very nature of the subject matter.

Does this mean that we can’t tell stories at all? Of course not. Human beings are capable of an enormous range of conscious experience, and that experience is worth dignifying and reflecting in narrative form. But insofar as we want to tell honest stories about artificial intelligence, we’ll need to come to grips with some uncomfortable truths about its nature – and our own.

