
Artificial Intelligence: The Quest for Machines that Think Like Humans, Part 2

January 31, 2012


In Part 1 of this 3-part series, I discussed work being done at IBM, supported by funding from DARPA, related to the development of cognitive computers. The focus of the post was an IBM press release that announced the development of cognitive computing prototype chips. These neurosynaptic computing chips integrate memory with processing to create an on-chip network of lightweight cores (i.e., a single integrated system of hardware and software). IBM, however, isn’t the only organization whose researchers are working on cognitive computing. Shortly before the IBM press release was distributed, a group of researchers at the University of Exeter trying to develop “‘brain-like’ computers” published its work in the journal Advanced Materials. [“Brain-Like Computing a Step Closer to Reality,” ScienceDaily, 23 June 2011] According to the article, “The study involved the first ever demonstration of simultaneous information processing and storage using phase-change materials.” It continues:

“This new technique could revolutionize computing by making computers faster and more energy-efficient, as well as making them more closely resemble biological systems. Computers currently deal with processing and memory separately, resulting in a speed and power ‘bottleneck’ caused by the need to continually move data around. This is totally unlike anything in biology, for example in human brains, where no real distinction is made between memory and computation. To perform these two functions simultaneously the University of Exeter research team used phase-change materials, a kind of semi-conductor that exhibits remarkable properties.”

Obviously, there are some complementary aspects to the work being done at IBM and the University of Exeter. The article notes that the “study demonstrates conclusively that phase-change materials can store and process information simultaneously. It also shows experimentally for the first time that they can perform general-purpose computing operations, such as addition, subtraction, multiplication and division.” The IBM team seems to be a bit further along. Using its prototype chips, it “has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.” [“IBM Unveils Cognitive Computing Chips,” IBM Press Release, 18 August 2011] The approaches seem most complementary in that the IBM system uses artificial neurons, synapses, and axons while, according to the article, the University of Exeter study “shows that phase-change materials can be used to make artificial neurons and synapses.” The article asserts, “This means that an artificial system made entirely from phase-change devices could potentially learn and process information in a similar way to our own brains.” As I pointed out in Part 1 of this series, a number of informed individuals recognize the “potential” for developing computers that think like humans, but they believe that reality remains some way off. The article continues:

“Lead author Professor David Wright of the University of Exeter said: ‘Our findings have major implications for the development of entirely new forms of computing, including “brain-like” computers. We have uncovered a technique for potentially developing new forms of “brain-like” computer systems that could learn, adapt and change over time. This is something that researchers have been striving for over many years.’ This study focused on the performance of a single phase-change cell. The next stage in Exeter’s research will be to build systems of interconnected cells that can learn to perform simple tasks, such as identification of certain objects and patterns.”
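To make the idea of processing and storing information in the same element a bit more concrete, consider the following toy sketch in Python. It is my own simplification, not the Exeter team’s published model: the PhaseChangeCell class and its pulse and read methods are invented for illustration, on the assumption that each programming pulse partially crystallizes the cell, so that the cell’s state is simultaneously the stored value and the running result of an addition.

```python
class PhaseChangeCell:
    """Toy model of a phase-change memory cell used as an accumulator.

    The cell's state (a crystalline fraction between 0.0 and 1.0) both
    stores a value and performs addition: each programming pulse nudges
    the state further toward crystallization, so the arithmetic happens
    in the same place the result is kept -- no separate ALU and memory.
    This is an illustrative sketch, not the published device physics.
    """

    def __init__(self, step=0.01):
        self.state = 0.0      # amorphous = 0.0, fully crystalline = 1.0
        self.step = step      # crystallization added per pulse

    def reset(self):
        """A 'melt-quench' pulse returns the cell to its amorphous state."""
        self.state = 0.0

    def pulse(self, n=1):
        """Apply n programming pulses; the state accumulates in place."""
        self.state = min(1.0, self.state + n * self.step)

    def read(self):
        """Reading the cell's state recovers the stored count."""
        return round(self.state / self.step)


cell = PhaseChangeCell()
cell.pulse(3)       # "write" 3
cell.pulse(4)       # add 4 -- processing and storage in one element
print(cell.read())  # -> 7
```

In the real device the pulses are electrical and the readout is a resistance measurement; the sketch is only meant to capture the key point that changing the stored state is the computation, so no data has to shuttle between a separate memory and processor.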

My guess is that skeptics wince a bit when they read about “brain-like” computers, since they argue that we are still trying to figure out how the human brain actually works. A month after the Exeter team published their study, a group of engineers at NUI Galway and the University of Ulster announced that it “is developing bio-inspired integrated circuit technology which mimics the neuron structure and operation of the brain.” [“Researchers Mimic Nature to Create a ‘Bio-Inspired Brain’ for Robots,” ScienceDaily, 25 July 2011] The article reports:

“One key goal of the research is the application of the electronic neural device, called a hardware spiking neural network, to … control … autonomous robots which can operate independently in remote, unsupervised environments, such as remote search and rescue applications, and in space exploration.”

The IBM release also said that its team was trying to “recreate the phenomena between spiking neurons and synapses in biological systems.” The ScienceDaily article continues:

“According to Dr. Fearghal Morgan, Director of the Bio-Inspired Electronics and Reconfigurable Computing (BIRC) research group at NUI Galway: ‘Electronic neurons, implemented using silicon integrated circuit technology, cannot exactly replicate the complexity of neurons found in the human brain, or the massive number of connections between neurons. However, inspired by the operation and structure of the brain, we have successfully developed a hardware spiking neural network and have used this device for robotics control. The electronic device interprets the state of the robot’s environment through signals received from sensing devices such as cameras and ultrasonic sensors, which act as the eyes and ears of the robot. The neural network then modifies the behavior of the robot accordingly, by sending signals to the robot’s limbs to enable activity such as walking, grasping and obstacle avoidance.’”
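The control loop Dr. Morgan describes (sensor signals in, spikes through the network, motor commands out) can be illustrated with a single leaky integrate-and-fire neuron, a common textbook spiking model. The sketch below is my own minimal illustration and makes no claim about the BIRC group’s actual hardware: a simulated ultrasonic distance reading drives the neuron, and a spike triggers an avoidance maneuver.

```python
import random

class LIFNeuron:
    """Leaky integrate-and-fire neuron: a standard textbook spiking model."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # fraction of potential kept each step

    def step(self, input_current):
        """Integrate input, leak, and fire (return True) past threshold."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return True
        return False


# Toy control loop: the closer the obstacle, the stronger the input
# current, so the neuron spikes sooner and the "robot" turns away.
neuron = LIFNeuron()
for t in range(20):
    distance = random.uniform(0.1, 2.0)  # fake ultrasonic reading (meters)
    current = 0.5 / distance             # nearer obstacle -> stronger drive
    if neuron.step(current):
        print(f"t={t}: obstacle near ({distance:.2f} m) -> turn away")
    else:
        print(f"t={t}: path clear -> keep walking")
```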

Whereas Dr. Morgan states that “electronic neurons, implemented using silicon integrated circuit technology, cannot exactly replicate the complexity of neurons found in the human brain,” the IBM team is trying to do exactly that — creating “a chip system with ten billion neurons and hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.” Dr. Morgan explains that his objectives are much more modest:

“Our research is focused on mimicking evolution in nature. The latest hardware neural network currently in development will contain thousands of small electronic neuron-like devices which interoperate concurrently, in a similar way to neurons in the biological brain. The device can be trained to perform a particular function, and can be retrained many times for various applications. The training process resembles the training of the brain, by making, strengthening and weakening the links between neurons and defining the conditions which cause a neuron to fire, sending signals to all of the attached neurons. As in the brain, the collection of interconnected neurons makes decisions on incoming data to cause an action in the controlled system. Until now, the robotics arena has focused on electronic controllers which incorporate one or more microprocessors, which typically execute instructions in sequence and, while performing tasks quickly, are limited by the instruction processing speed. Power is also a consideration. While the human brain on average only requires 10 watts of power, a typical PC requires 300 watts. We believe that a small embedded hardware neural network device has the potential to perform effective robotics control, at low power, while also incorporating fault detection and self-repair behavior. Our aim is to develop a robust, intelligent hardware neural network robotics controller which can autonomously maintain robot behavior, even when its environment changes or a fault occurs within the robotics system.”
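The training process Dr. Morgan describes (making, strengthening, and weakening the links between neurons) can be illustrated with a simple Hebbian-style update rule, in which connections between neurons that fire together are strengthened while idle links slowly decay. The sketch below is my own illustrative example, not the BIRC group’s training algorithm; the train function and its rate and decay parameters are invented for the illustration.

```python
# Minimal Hebbian-style training sketch (illustrative only): weights
# between co-active neurons are strengthened, idle links decay -- the
# "making, strengthening and weakening" of connections described above.

def train(weights, spike_trains, rate=0.1, decay=0.01):
    """Update an n x n weight matrix from recorded spike activity.

    weights      -- weights[i][j] is the link strength from neuron i to j
    spike_trains -- spike_trains[t][i] is True if neuron i fired at time t
    """
    n = len(weights)
    for spikes in spike_trains:
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if spikes[i] and spikes[j]:
                    weights[i][j] += rate         # fired together: strengthen
                else:
                    weights[i][j] *= (1 - decay)  # otherwise: weaken slowly
    return weights


# Neurons 0 and 1 repeatedly fire together; neuron 2 fires on its own.
activity = [[True, True, False], [True, True, False], [False, False, True]] * 5
w = [[0.0] * 3 for _ in range(3)]
w = train(w, activity)
print(f"w[0][1]={w[0][1]:.2f}  w[0][2]={w[0][2]:.2f}")  # strong vs. weak link
```

Retraining for a new application, in this toy picture, is just running the update rule on a new set of activity patterns until the weights settle into a different configuration.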

The article concludes that new technologies are still needed in order to bring the dream of human-like thinking in machines closer to reality.

“Dr Jim Harkin, from the School of Computing and Intelligent Systems at the University of Ulster’s Magee campus, said: ‘The constant miniaturization of silicon technology to increase performance introduces inherent reliability issues which must be overcome. Ultimately, the hardware neural network or robot “brain” will be able to detect and overcome electronic faults that occur within itself, and continue to function effectively without human intervention.'”

The work being done at IBM and elsewhere is certainly pushing computing’s frontier outward. Noel McKeegan reports that work on artificial intelligence is being conducted at a number of organizations, including IBM. [“A year in technology,” 3 January 2012] He writes:

“Androids, cyborgs, thinking machines – whatever they end up being called or form they take, non-human entities that are capable of human-like thought are on the way, and artificial intelligence will have a profound impact on our lives in coming decades. During the past year we have seen several discoveries that advance the goal of reverse engineering the human brain – researchers from the University of Southern California announced the creation of a functioning synapse circuit using carbon nanotubes that could someday be one component of a synthetic brain, while over at Caltech, scientists unveiled a DNA-based artificial neural network that could have huge implications for the development of true artificial intelligence. Working with more conventional computing hardware, researchers at MIT have developed a computer chip that mimics the ‘plasticity’ of the brain’s neural function and IBM has been experimenting with a computer chip designed to emulate the human brain’s abilities for perception, action and cognition. Finally – and this one gets our vote for the most sci-fi AI breakthrough of the year – scientists from Israel’s Tel Aviv University have restored brain function to a rat by replacing its disabled cerebellum with a synthetic one.”

One man who believes he has cracked the artificial intelligence code is Stephen L. Thaler, PhD. He calls his invention The Creativity Machine® Paradigm. Instead of talking about neurons, synapses, and axons, Thaler talks about “imagitrons” and “perceptrons.”
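For readers unfamiliar with the terminology: the perceptron is one of the oldest artificial-neuron models, a weighted threshold unit trained by nudging its weights after each mistake. As I understand Thaler’s scheme, his “imagitrons” generate candidate patterns while perceptron-like networks judge them. The sketch below shows only the classic Rosenblatt perceptron learning rule, in Python; it is not an implementation of Thaler’s patented Creativity Machine.

```python
# Classic Rosenblatt perceptron learning rule -- the model the term
# "perceptron" refers to. Shown here learning the logical AND of two
# inputs; this is the standard textbook algorithm, not Thaler's system.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=10, rate=0.1):
    """Nudge weights toward the correct output after each mistake."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_gate)
print([predict(weights, bias, x) for x, _ in and_gate])  # -> [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions such as AND; richer behavior comes from combining many such units into networks.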


Advances in artificial intelligence are critical for helping us make sense of the mountains of data that we are now accumulating. AI systems will be able to learn, adapt, and cooperate. They will provide both scientists and businesses with new insights. In the final part of this series, I’ll discuss work being conducted at Carnegie Mellon University on the Never-Ending Language Learning system, or NELL.


