Tuesday, May 03, 2005

"Whatever Happened to Machines That Think?"
New Scientist (04/23/05) Vol. 186, No. 2496, P. 32; Mullins, Justin

The excitement generated by the field of artificial intelligence, and the support it garnered, waned dramatically in the 1990s as AI projects that promised to deliver convincing computer conversationalists, autonomous servants, and even conscious machines failed to pan out because their core principle--that computers can be made intelligent by human standards of intelligence--was flawed. This has prompted a reevaluation of what constitutes intelligence in many circles, although the Turing test remains a key benchmark for gauging machine intelligence. The AI field split into those who believe systems can become intelligent through symbolic reasoning, and those who favor biologically inspired approaches such as artificial neural networks and genetic algorithms. Carnegie Mellon University researcher Tom Mitchell is working to bridge the gap between these two approaches by using functional magnetic resonance imaging to analyze how the human brain reacts to spoken nouns, and later to verbs and sentences. He thinks such research could perhaps lead to a mind-reading computer program. Meanwhile, former Stanford University computer scientist and AI research veteran Doug Lenat has spent over two decades developing Cyc, an AI system that learns by tapping a vast database of common-sense assertions. Lenat believes that once Cyc becomes freely available over the Web, the input contributed by millions of users will give the system the accumulated knowledge to correctly answer most questions within three to five years. Cyc is rekindling interest in AI, as are efforts in Europe, Japan, and America to build systems that can address uncertainty through statistical reasoning.


http://www.newscientist.com/channel/info-tech/mg18624961.700

I thought this was kinda obvious, but not everyone is on board with this obvious concept of using stochastic reasoning. To me this way, using statistics, is the correct path toward mimicking human intelligence. It is not possible for humans to breadth-search every possible answer, or to make decisions with a cut-and-dry answer of yes or no. One of the greatest things about being human is the ability to err and learn from our mistakes. Having an AI program learn and improve upon itself is the way to go.
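To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of what statistical reasoning buys you over a hard yes/no rule: the program keeps a Beta distribution over an unknown success rate, updates it after every attempt, failures included, and reports a probability instead of a flat answer. This is just an illustration of the general idea, not how Cyc or any of the systems in the article actually work.

# A minimal sketch of statistical (Bayesian) reasoning, as opposed to a hard
# yes/no rule. The scenario and numbers are invented for illustration: an agent
# keeps a Beta distribution over "how often does action A succeed?", updates it
# after every attempt (including failures), and reports a probability rather
# than a cut-and-dry answer.

class BetaBelief:
    """Beta-Bernoulli belief: Beta(alpha, beta) over an unknown success rate."""

    def __init__(self, alpha=1.0, beta=1.0):
        # alpha = beta = 1 is a uniform prior: no opinion yet.
        self.alpha = alpha
        self.beta = beta

    def update(self, succeeded):
        # Learning from experience, mistakes included: a failure shifts the
        # belief just as a success does, only in the other direction.
        if succeeded:
            self.alpha += 1
        else:
            self.beta += 1

    def prob_success(self):
        # Posterior mean: the agent's current estimate of the success rate.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    belief = BetaBelief()
    # Hypothetical run: the action succeeds three times, then fails twice.
    for outcome in [True, True, True, False, False]:
        belief.update(outcome)
        print("observed %s: P(success) ~ %.2f"
              % ("success" if outcome else "failure", belief.prob_success()))

The point of the sketch is that the estimate never snaps to a binary yes or no; it drifts with the evidence, and each mistake nudges it rather than breaking it.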
