Video games are “post-Turing”: My latest Wired News video-game column

Last week, Wired News published my latest video-game column — and this one’s about the peculiar relationships we strike up with AI characters inside games.

It’s available free at the Wired site, and a copy is archived below!

Going Gunning With My Imaginary Friends
by Clive Thompson

Can a machine think?

That’s the question mathematician Alan Turing posed in 1950, when he proposed his famous Turing Test. He argued that a machine could be considered intelligent if it passed a social test: if it could fool a human into believing it was human.

Alas, critics agree that no machine has passed the Turing Test. We’re never fooled by chatbots for very long, as the annual Loebner Prize contest proves. The thing is, we humans are awfully good at decoding social cues and detecting humanness; we can instantly tell when a preprogrammed “conversation tree” is repeating itself. That’s why many philosophers say machines will never pass the Turing Test.
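To see how quickly those cues surface, here’s a minimal sketch of a preprogrammed conversation tree, written in Python with invented node names and canned dialogue (no real chatbot or game script is being quoted). The looping “gossip” branch is exactly the kind of repetition that gives the machine away on a second pass:

```python
# A toy "conversation tree": each node is a canned line plus the player
# choices that lead to the next node. All names and dialogue are invented.
TREE = {
    "greeting": ("Well met, stranger!",
                 {"Who are you?": "intro", "Goodbye.": "farewell"}),
    "intro":    ("I'm the innkeeper. Ask me anything.",
                 {"Any news?": "gossip", "Goodbye.": "farewell"}),
    # The giveaway: this branch loops back on itself, so the character
    # repeats the same line verbatim the moment you ask twice.
    "gossip":   ("Strange lights over the hills lately...",
                 {"Anything else?": "gossip", "Goodbye.": "farewell"}),
    "farewell": ("Safe travels.", {}),
}

def talk(node="greeting"):
    """Walk the tree, printing canned lines until a leaf is reached."""
    while True:
        line, choices = TREE[node]
        print(line)
        if not choices:
            return
        for i, prompt in enumerate(choices, 1):
            print(f"  {i}. {prompt}")
        node = list(choices.values())[int(input("> ")) - 1]
```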

Except, of course, for videogames. They’re filled with AI characters — enemies we confront, and teammates we play alongside. And the truth is, we often develop complex emotional and social relationships with AI characters inside games. I pretty much fell in love with Alyx Vance in Half-Life 2; whenever I play a Star Wars space-flight sim, I get enormously agitated about the fates of my teammates when they come under attack.

And here’s the weird thing: In games, we know they’re machines. We know our companions aren’t human. But we don’t care — we still wind up treating them in oddly human ways.

Videogames, in effect, are beyond Turing. As Bart Simon, a sociologist who studies videogames at Concordia University in Montreal, put it in a recent paper: “The solo game is posthumanistically social.” It’s about the pleasures of hanging out with machines even when you’re aware they’re merely machines.

To put this epiphany in its full whoa-nelly context: If smart machines are going to be an ever-bigger part of our everyday lives, maybe videogames are the best place to glimpse our emotional future.

Simon first noticed the social nature of AI while playing Call of Duty. He normally avoids World War II shooters because he’s really bad at them. But the squad-based strategy in Call of Duty lured him in. Because he relied on the squad to help kill enemies and keep him safe, the squad got its emotional hooks into him.

Why? Because the squad had good “reciprocity” — its actions affected him and vice versa. If he drifted too far away from the center of battle, his squad would lose cohesion, and its members would all be more vulnerable. Forget talking to AI machines: Games force you to act in concert with them, and that’s a much stronger way to generate a social sense.
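To make that reciprocity concrete, here’s a rough sketch (my own invention, not Call of Duty’s actual code; the cohesion formula and tuning constant are assumptions) of how squad cohesion might be modeled as a function of how far the player strays from the squad’s center, with everyone’s vulnerability rising as cohesion falls:

```python
import math

# Hypothetical squad-cohesion model: the farther the player drifts from
# the squad's center of mass, the lower cohesion falls, and the more
# damage every member (the player included) takes. My position affects
# the squad, and the squad's state affects me: that's the reciprocity.
COHESION_RADIUS = 20.0  # invented tuning constant, in meters

def squad_center(positions):
    xs, ys = zip(*positions)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def cohesion(player_pos, squad_positions):
    cx, cy = squad_center(squad_positions)
    dist = math.hypot(player_pos[0] - cx, player_pos[1] - cy)
    # Decays linearly from 1.0 (in formation) to 0.0 (strayed too far).
    return max(0.0, 1.0 - dist / COHESION_RADIUS)

def damage_taken(base_damage, player_pos, squad_positions):
    # Everyone soaks up to double damage when cohesion collapses.
    return base_damage * (2.0 - cohesion(player_pos, squad_positions))

squad = [(0, 0), (2, 1), (-1, 3)]
print(damage_taken(10.0, (1, 1), squad))    # in formation: about 10 damage
print(damage_taken(10.0, (30, 30), squad))  # strayed: the full 20 damage
```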

Sure, the AI would often do stupid things. But even that can sometimes be beneficial — because slightly dumb and helpless AI can often seem more emotionally “real” than stuff that’s trying to be too smart. Much like the uncanny valley effect in graphics — where cartoony characters can seem more “real” than super-detailed faces — AI often seems most gripping when it hits a sweet spot considerably below omnipotence. If the AI is actively asking us for help, it triggers what sociologists call “interpretive charity”: We feel more warmly toward it.

Perhaps most interestingly, Simon thinks that gamers actually enjoy the process of gradually understanding the logical rule sets that govern the behavior of our AI friends. “You have to suss out their algorithm,” he says. We learn what makes them artificial, but we also understand them more completely — it’s the machine-age version of psychology.

Granted, Simon doesn’t think all games achieve this lovely state of robot-human togetherness. “The AI has to be something that’s halfway between being a person you react to, and a tool that you use,” he says. When he plays sports games, his AI teammates don’t trigger any emotional connection in him. They feel like tools — the equivalent of weapons. (And he doesn’t really think his theory applies to mere “sidekicks” — characters whose actions are the same no matter what you do. They’re more like tools, too.)

I think Simon’s right. And it’s not just about virtual comrades; well-crafted enemies evoke the same response. When I face down the bosses in No More Heroes, I can feel my curiously two-sided reactions. On the one hand, I’m treating them as machines — coolly assessing the clockwork mechanisms of their attacks, the better to defeat them. On the other hand, I get angry or annoyed at them; I regard each of them as having a personality, even when the personality is just a bunch of rules.
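It doesn’t take much machinery to produce that “personality,” either. Here’s an invented Python toy (not No More Heroes’ actual logic) showing a boss as a tiny rule set; a handful of if-then transitions, plus one low-health tell, is all the player is really raging at:

```python
import random

# A boss "personality" as a handful of rules: the next attack depends only
# on the current move and remaining health. States, moves, and the health
# threshold are all invented for illustration.
TRANSITIONS = {
    "taunt":            ["sword_combo"],
    "sword_combo":      ["guard", "taunt"],
    "guard":            ["counter"],
    "counter":          ["taunt", "sword_combo"],
    "desperation_spin": ["sword_combo"],
}

def next_move(move, health):
    # Below a quarter health the boss always opens with its big tell,
    # exactly the kind of clockwork a player learns to read and exploit.
    if health < 0.25:
        return "desperation_spin"
    return random.choice(TRANSITIONS[move])

move, health = "taunt", 1.0
while health > 0:
    print(f"{health:0.2f} hp: {move}")
    health -= 0.15
    move = next_move(move, health)
```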

Either way, we’re beyond Turing now, and into much stranger territory.

So maybe it’s time to abandon the question, “Can a machine think?”

Here’s a better one: Can a machine play?

(A tip of the hat to the excellent Game Studies Download by Jane McGonigal, Ian Bogost and Mia Consalvo, which first tipped me off to Bart Simon’s work.)

