By now, you have no doubt heard about IBM's daring experiment to pit its latest supercomputer -- dubbed "Watson" -- against Jeopardy champions Ken Jennings and Brad Rutter on Feb. 14. (In fact, we covered the practice round on this very site.)
The public may not realize how big a deal it really is, writes author Richard Powers in a New York Times op-ed.
A computer beating Jennings and Rutter may be the latest novelty in a war that began in 1997, when Big Blue's "Deep Blue" system bested Garry Kasparov in a chess match. But the game show Jeopardy -- in which contestants must respond to trivia clues riddled with arcane facts, wordplay and innuendo -- is orders of magnitude more difficult than a chess match, Powers argues.
Open-domain question answering has long been one of the great holy grails of artificial intelligence. It is considerably harder to formalize than chess. It goes well beyond what search engines like Google do when they comb data for keywords. Google can give you 300,000 page matches for a search of the terms “greyhound,” “origin” and “African country,” which you can then comb through at your leisure to find what you need.
Asked in what African country the greyhound originated, Watson can tell you in a couple of seconds that the authoritative consensus favors Egypt. But to stand a chance of defeating Mr. Jennings and Mr. Rutter, Watson will have to be able to beat them to the buzzer at least half the time and answer with something like 90 percent accuracy.
It wasn't just a matter of feeding Watson every piece of public data on record, though. Watson's "secret sauce" is the concurrent use of more than 100 techniques to, well, think: analyze natural language, appraise sources, propose hypotheses, merge the results and rank the top guesses, Powers writes.
If, after a couple of seconds, the countless possibilities produced by the 100-some algorithms converge on a solution whose chances pass Watson’s threshold of confidence, it buzzes in.
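That pipeline -- many scorers proposing candidate answers in parallel, the results merged and ranked, and a buzz only when confidence clears a threshold -- can be sketched in miniature. This is an illustrative toy, not IBM's actual code: the scorer names, the averaging rule, and the 0.5 cutoff are all assumptions (Watson combines evidence with learned weights across its 100-plus techniques).

```python
# Toy sketch of a Watson-style decision loop (hypothetical, not IBM's code):
# several independent scorers each propose (candidate, confidence) pairs;
# confidences for the same candidate are merged, candidates are ranked,
# and the system "buzzes in" only if the top score clears a threshold.

from collections import defaultdict

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff


def merge_and_rank(hypotheses):
    """Combine per-scorer confidences for each candidate answer.

    `hypotheses` is a list of (candidate, confidence) pairs produced by
    many scorers; here we simply average the confidences per candidate,
    where the real system learns how to weight each evidence source.
    """
    scores = defaultdict(list)
    for candidate, confidence in hypotheses:
        scores[candidate].append(confidence)
    return sorted(
        ((cand, sum(cs) / len(cs)) for cand, cs in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )


def decide_to_buzz(hypotheses):
    """Return (answer, score) if confident enough, else (None, score)."""
    top_candidate, top_score = merge_and_rank(hypotheses)[0]
    if top_score >= CONFIDENCE_THRESHOLD:
        return top_candidate, top_score
    return None, top_score


# Example: three scorers weigh in on the greyhound clue from the article.
evidence = [("Egypt", 0.8), ("Egypt", 0.7), ("Morocco", 0.3)]
answer, score = decide_to_buzz(evidence)
print(answer, round(score, 2))  # Egypt 0.75
```

The key design point the sketch mirrors is that no single algorithm is trusted: an answer wins only when independent lines of evidence converge on it with enough combined confidence to risk buzzing in.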
The experiment raises questions about what constitutes real thought: is it a matter of statistical correlation within a massive database of information, or something greater?
Regardless of whether Watson wins, the experiment will give IBM's David Ferrucci another data set to analyze and learn from.
The real question: is it possible for humans to program a computer that's truly smarter than they are?
Is "smart" an art, or a science? (And can you program it?)