Back in 1999, when Ray Kurzweil wrote The Age of Spiritual Machines, he predicted that by 2020, a standard $1,000 personal computer would equal the capacity of the human brain. Researchers are already working on systems that replicate the human brain -- not just in capacity, but also in cognitive thought processes.
Sandra Erwin of National Defense reports that the Defense Advanced Research Projects Agency (DARPA) is working on a project called "physical intelligence," in which robots become truly autonomous, capable of digesting information and making decisions on their own. The project seeks to boost the robot's intelligence by mimicking the human brain instead of relying on standard robotics programming, says James Gimzewski, professor of chemistry at the University of California, Los Angeles.
Gimzewski explained that the robot will develop neurological-like pathways as it learns. That's a departure from code-driven approaches to AI:
"What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it."
Physical intelligence does not require human intervention or oversight, Gimzewski adds. And, importantly, this robot won't necessarily take a human form, so don't look for Star Trek's "Data" -- the compassionate android Starfleet officer -- any time soon.
"An aircraft would be able to learn and explore the terrain and work its way through the environment without human intervention," he said. "These machines would be able to process information in ways that would be unimaginable with current computers."
The inability to generate human-like reasoning has been the missing piece in artificial intelligence research over the past five decades, Gimzewski says. The DARPA project seeks to bring more human-like reasoning into the process.
The ability of systems to learn and make intelligent decisions has been a tantalizing prospect for organizations across a range of functions, from fraud detection to customer service. Currently, low-level, routine decisions -- such as granting customers additional credit -- are successfully automated through sophisticated rules engines. With the development of self-learning systems such as the one DARPA is proposing, machines may start to make higher-level calls. But given the trouble decision systems can get us into (think of the banking system in 2008), there will always be a need for human brakes and overrides.
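To make the contrast concrete, here is a minimal sketch of the kind of rules engine the paragraph above describes for a routine credit-line decision. The thresholds, field names, and `approve_credit_increase` function are hypothetical illustrations, not any real institution's policy; the point is that every rule is fixed and human-authored, which is exactly what a self-learning system would replace.

```python
# Hypothetical rules-engine sketch for a routine credit decision.
# All thresholds below are illustrative assumptions, hand-coded by
# a human -- nothing here is learned from data.

from dataclasses import dataclass

@dataclass
class Customer:
    credit_score: int        # e.g. a FICO-style score
    utilization: float       # fraction of current credit limit in use
    missed_payments: int     # missed payments in the last 12 months

def approve_credit_increase(c: Customer) -> bool:
    """Apply fixed, human-authored rules; approve only if all pass."""
    rules = [
        c.credit_score >= 700,
        c.utilization < 0.5,
        c.missed_payments == 0,
    ]
    return all(rules)

# A self-learning system, by contrast, would adjust these thresholds
# (or discover entirely new rules) from data rather than from code.
print(approve_credit_increase(Customer(720, 0.3, 0)))   # low-risk case -> True
print(approve_credit_increase(Customer(640, 0.8, 2)))   # risky case -> False
```

The design choice worth noting is that the rules are transparent and auditable -- a human can read and override each one, which is the "brakes and overrides" safeguard the article argues will remain necessary even as learned systems take over higher-level calls.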