Thinking Tech

Can you beat a computer at Rock-Paper-Scissors?

A new online experiment pits man against machine in the most basic of games. So, how does humankind fare this time around?

Given the public's fascination with IBM's Jeopardy-playing Watson supercomputer, the New York Times' playable, AI-infused Rock-Paper-Scissors simulator is a fantastic idea. The simplicity and familiarity of the game, the (evidently) level playing field and the near-instant repeatability make this an ideal way for a wide audience to test their wits against The Machines.

But wait! After a dozen or so rounds of play, whimsy will be replaced with rage; an appreciation for the cheeky robot graphics will give way to utter contempt for your computerized foe. You'll ask yourself: How is this program beating me?

The answer is pretty straightforward. Rock-Paper-Scissors, while often invoked as an example of a game of chance, usually isn't one. In fact, RPS tournaments have been going on for years, and even have consistent stars--something that wouldn't be true of a game determined by raw probability. The way RPS players consistently win is the same way the NYT's robot has been beating you: by exploiting players' decidedly non-random strategies.
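That pattern-exploiting idea fits in a few lines of code. The `MarkovBot` below is a hypothetical toy, not the Times' actual program: it records which throw tends to follow the player's previous one, predicts the next throw, and plays the counter.

```python
import random
from collections import Counter, defaultdict

# each throw is beaten by the one that maps to it
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class MarkovBot:
    """Toy pattern exploiter: predict the human's next throw from
    what usually follows their previous one, then play its counter."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # last throw -> counts of next throw
        self.last = None

    def throw(self):
        if self.last and self.transitions[self.last]:
            predicted = self.transitions[self.last].most_common(1)[0][0]
            return BEATS[predicted]           # counter the predicted throw
        return random.choice(list(BEATS))     # no data yet: guess randomly

    def observe(self, human_throw):
        """Record the human's actual throw after each round."""
        if self.last is not None:
            self.transitions[self.last][human_throw] += 1
        self.last = human_throw
```

A player who habitually follows rock with paper, for instance, will soon face scissors every time they throw rock.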

This is simultaneously frustrating and reassuring. On one hand, this program, which can be told to build a reactive strategy based only on your play patterns or to draw techniques from a massive database of 200,000 previously recorded rounds, is very likely able to "outsmart" you, since it has perfect recall, no emotion and a strictly mathematical understanding of the game. On the other, it's not impossible to beat--far from it. Like any human opponent's, this software's actions are theoretically predictable. The app even has a feature that reveals its decision-making process for each play.

It's even conceivable that a human with a complete enough understanding of this software could beat it almost every time.

This particular robot is also easy to foil: just play as randomly as you can. By picking your throws blindly, you render the program's collected patterns about your behavior useless, and should be able to tie the machine in the long run.
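A quick simulation shows why randomness is such an effective defense. Here a simple frequency-counting bot (a stand-in for any pattern exploiter; assumed for illustration, not the NYT's algorithm) faces a uniformly random player, and every outcome drifts toward one third:

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
MOVES = list(BEATS)

random.seed(0)
seen = Counter()       # the bot's running tally of the human's throws
outcomes = Counter()
N = 30000

for _ in range(N):
    # bot strategy: counter the human's most common throw so far
    bot = BEATS[seen.most_common(1)[0][0]] if seen else random.choice(MOVES)
    human = random.choice(MOVES)   # the human plays uniformly at random
    seen[human] += 1
    if bot == human:
        outcomes["tie"] += 1
    elif bot == BEATS[human]:
        outcomes["bot wins"] += 1
    else:
        outcomes["human wins"] += 1

# against true randomness, each outcome settles near one third
print({k: round(v / N, 3) for k, v in outcomes.items()})
```

The bot's pattern database is worthless here: there is no pattern to find.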

This is far from the first computer program designed to play RPS. Older software has even competed in national RPS competitions. A program called Deep Mauve went up against some of the planet's best RPS players at the World Rock Paper Scissors Competition in 2004. Rather than use any kind of complex strategy, Deep Mauve strove for complete randomness. Here's how it went, according to the book, The Official Rock Paper Scissors Strategy Guide:

Deep Mauve gave a dismal performance and did not manage to progress past the qualifying round. The computer advised Mr. McMahon to deliver a suicidal Scissors/Rock combination in the second set, effectively resigning the match. This is a gaffe that even a beginner human player would not make. The failure of Deep Mauve to provide a real challenge to the human competitors did not come as a surprise to leading RPS theorists. Professor J. Emeritus of the Game Theory Department at the Smallsoa Foundation says, “Even if the program were capable of completely random throws, which I highly doubt, then its expectation over the long run is to win one-third, tie one-third, and lose one-third of all of its games. This means that any strategic insight, no matter how slight, would effectively beat the computer." Work on Deep Mauve version 2.0 appears to have been abandoned.

So, not very well! I find the Professor's statement about "strategy" to be a bit suspect, however. A truly random RPS simulator should neutralize any and all human strategies, so I imagine the computer's loss has more to do with the limited length of tournament sets than anything else.
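The one-third figure is easy to verify exactly: against any fixed opponent throw, a uniformly random throw wins, ties, and loses with probability exactly 1/3, so no amount of strategic insight gains an edge over true randomness. A short check with exact fractions:

```python
from fractions import Fraction

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
MOVES = list(BEATS)

# For every possible opponent throw, a uniform random throw
# (probability 1/3 each) wins, ties, and loses exactly 1/3 of
# the time -- regardless of the opponent's strategy.
for opp in MOVES:
    wins = sum(Fraction(1, 3) for m in MOVES if m == BEATS[opp])
    ties = sum(Fraction(1, 3) for m in MOVES if m == opp)
    losses = sum(Fraction(1, 3) for m in MOVES if BEATS[m] == opp)
    assert wins == ties == losses == Fraction(1, 3)
```

This supports the point above: against a truly random player, the expected record is a wash, and any edge a human takes in a short tournament set is down to variance, not skill.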

Of course, reducing RPS to a game of pure chance would rob it of much of its entertainment value. And the NYT's program, while impressive, seems to be designed primarily to foil amateur players. A truly fascinating experiment would be to pit a strategically advanced program--perhaps this one, or perhaps another, retuned version--against veteran RPS players, giving it a fair amount of time to attempt to understand the strategies of its human opponents.

It wouldn't be quite as entertaining as watching Ken Jennings take on Watson, but I'd still watch it.

John Herrman

Contributing Editor

John Herrman is a freelance writer based in New York City. He is also a contributing editor at Gizmodo. He holds a degree from the University of Edinburgh. Follow him on Twitter.