Artificial Intelligence: The Promise of Deep Learning

Stephen DeAngelis

December 11, 2012

“Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns,” writes John Markoff, “technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.” [“Scientists See Promise in Deep-Learning Programs,” New York Times, 23 November 2012] In fact, Markoff reports that breakthroughs are advanced enough to raise “the specter of automated robots that could replace human workers.”

Researchers in England are so concerned about advances in artificial intelligence that “a team of scientists, philosophers and engineers will form the new Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom.” [“Cambridge University team to assess the risk posed to humanity by artificial intelligence,” by Chris Wood, Gizmag, 27 November 2012] Wood reports, “The team will study key developments in technology, assessing ‘extinction-level’ threats to humanity. Key among these is the possibility of the creation of an artificial general intelligence, an event that has the theoretical potential to leave humanity behind forever.”

The blogosphere has been alive with comments since researchers at Cambridge announced the establishment of the CSER. Pictures of red-eyed robots from the “Terminator” movies and of HAL from “2001: A Space Odyssey” often accompany the posts. I’ll explore that topic more tomorrow. Markoff, however, doesn’t see anything more sinister in AI advancements than the potential loss of some jobs. He continues:

“The technology, called deep learning, has already been put to use in services like Apple’s Siri virtual personal assistant, which is based on Nuance Communications’ speech recognition service, and in Google’s Street View, which uses machine vision to identify specific addresses. But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just ‘neural nets’ for their resemblance to the neural connections in the brain. ‘There has been a number of stunning new results with deep-learning methods,’ said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. ‘The kind of jump we are seeing in the accuracy of these systems is very rare indeed.'”

Dario Borghino reports that even more advances could be at hand. “Using the world’s fastest supercomputer and a new, scalable, ultra-low power computer architecture,” he writes, “IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain – in an important step toward creating a true artificial brain.” [“IBM supercomputer used to simulate a typical human brain,” Gizmag, 19 November 2012]

Markoff reports that one of the recent achievements by a deep learning program was accomplished by “a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton.” At the last minute, the team entered “a contest sponsored by Merck to design software to help find molecules that might lead to new drugs.” The team won the top prize even though its members had “no specific knowledge about how the molecules bind to their targets. The students were also working with a relatively small set of data; neural nets typically perform well only with very large ones.” Anthony Goldbloom, chief executive and founder of Kaggle, a company that organizes data science competitions, told Markoff, “This is a really breathtaking result because it is the first time that deep learning won, and more significantly it won on a data set that it wouldn’t have been expected to win at all.” Markoff continues:

“Advances in pattern recognition hold implications not just for drug development but for an array of applications, including marketing and law enforcement. With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.”

As president of a company involved in the development of cognitive reasoning solutions for businesses, one of my primary interests in deep learning programs is how they can help businesses achieve better results from their marketing dollars. Targeted marketing is widely predicted to be one of the next big things in business, and deep learning is at the heart of such marketing efforts. Markoff continues:

“Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be ‘trained’ by repeated exposures to recognize patterns like images or sounds. These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision. Deep-learning systems have recently outperformed humans in certain limited recognition tests.”
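The structure Markoff describes — inputs feeding a hidden layer feeding outputs, with the weights “trained” by repeated exposure to a pattern — can be sketched in a few lines of code. The example below is purely illustrative (it is not drawn from any of the systems in the article): a tiny network with one hidden layer learns the classic XOR pattern, which no network *without* a hidden layer can represent. All names and parameter choices (layer sizes, learning rate, number of passes) are my own assumptions for the sketch.

```python
import numpy as np

# Illustrative sketch only: a minimal feed-forward neural network with
# inputs -> one hidden layer -> outputs, trained by repeated exposure
# to a pattern (here, the XOR truth table).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target pattern

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights and biases: 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

lr = 0.5
for _ in range(5000):                    # repeated exposure ("training")
    h = sigmoid(X @ W1 + b1)             # hidden-layer activations
    out = sigmoid(h @ W2 + b2)           # network output
    # Backpropagate the error (cross-entropy loss with a sigmoid output
    # gives the simple gradient out - y) and nudge every weight downhill.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel()))  # after training: the learned XOR pattern
```

The key design point is the hidden layer: it lets the network build intermediate features of the inputs, which is exactly the capacity that “deep” (many-hidden-layer) networks stack up at scale.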

At Enterra Solutions® we use our proprietary Sense, Think/Learn, Act® technology to discover new relationships and patterns as well as provide deep learning insights for our clients. Markoff applauds the work being done by Dr. Hinton at the University of Toronto because “it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields.” Markoff concludes:

“Referring to the rapid deep-learning advances made possible by greater computing power, and especially the rise of graphics processors, [Dr. Hinton stated]: ‘The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There’s no looking back now.'”

Another reason that advancements are being made with stunning frequency is that work on deep learning is taking place all over the world. Angela Guess, for example, reports on an effort in the medical sector being undertaken by researchers in Japan, Italy, the United Kingdom, and the United States. [“Computing Like a Human,” by Angela Guess, semanticweb.com, 10 October 2012] She writes:

“MedicalExpress.com has posted an article about how scientists are trying to develop computers that can think and see like humans do. The article states, ‘Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people. The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently ‘label’ each pictured object with certain properties, whilst undergoing an fMRI brain scan.’ It continues, ‘The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool). After “training” the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time.'”
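The procedure the article describes — train a pattern classifier on some of the labelled scans, then predict the semantic category (animal or tool) of held-out scans — is a standard train/test classification setup. The sketch below is only an illustration of that setup on synthetic data (random vectors standing in for fMRI activity patterns; the feature dimension, noise level, and nearest-centroid classifier are all my own assumptions, not the study’s actual algorithms).

```python
import numpy as np

# Illustrative sketch on synthetic data (not real fMRI): train on some
# labelled "scans", then predict the category of held-out scans.
rng = np.random.default_rng(42)

def make_scans(n, category_mean):
    # Each "scan" is a noisy 50-dimensional activity vector centered on
    # a category-specific mean pattern.
    return category_mean + rng.normal(0, 1.0, (n, 50))

animal_mean = rng.normal(0, 1, 50)   # hypothetical "animal" pattern
tool_mean = rng.normal(0, 1, 50)     # hypothetical "tool" pattern

# Training scans (with known labels) and held-out test scans
train = np.vstack([make_scans(20, animal_mean), make_scans(20, tool_mean)])
train_labels = np.array([0] * 20 + [1] * 20)   # 0 = animal, 1 = tool
test = np.vstack([make_scans(10, animal_mean), make_scans(10, tool_mean)])
test_labels = np.array([0] * 10 + [1] * 10)

# Nearest-centroid classifier: learn each category's mean pattern from
# the training scans, then label each test scan by its closest centroid.
centroids = np.array([train[train_labels == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2)
predicted = dists.argmin(axis=1)

accuracy = (predicted == test_labels).mean()
print(f"held-out accuracy: {accuracy:.0%}")
```

The 80–90% figure the study reports is its accuracy on exactly this kind of held-out evaluation: the classifier is scored only on scans it never saw during training.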

Since many of the solutions we offer at Enterra use an ontology, I admit that I find the research linking images and language fascinating. Guess concludes:

“The article goes on, ‘Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can “think” and “see” in the same way as humans. It is only in recent years that the field of semantics has been explored through the analysis of brain scans and brain activity in response to both language-based and visual inputs. Teaching computers to read brain scans and interpret the language encoded in brain activity could have a variety of uses in medical science and artificial intelligence.'”

If you have read some of my past posts concerning artificial intelligence, you know there is an active discussion about whether the human brain can ever be copied artificially. Some pundits don’t believe it matters. They believe that machine intelligence is likely to come into its own even if it doesn’t exactly imitate human intelligence. Personally, I believe that the benefits of deep learning technology and artificial intelligence far outweigh the concerns. But since there are concerns, I’ll examine some of those in my next post.