
Machine Learning: A Primer for the Technically Challenged, Part 2

May 27, 2015


In Part 1 of this article, I noted that there are five principal ways that machines learn. They are:

  • Supervised learning — where you teach it
  • Unsupervised learning — where you let it learn by itself
  • Reinforcement learning — where it learns by trial and error
  • Deep learning — where it uses hierarchical or contextual techniques to learn
  • Hybrid learning — where traditional machine learning is complemented by semantic reasoning to bridge the gap between a pure mathematical technique and semantic understanding
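For readers who like to see an idea in code, the sketch below contrasts the first two categories. It is a minimal illustration, assuming Python with the scikit-learn library and made-up numbers; it is not drawn from any system discussed in this article. In supervised learning the data arrive with labels supplied by a teacher, while in unsupervised learning the algorithm must find structure on its own.

```python
# A minimal, illustrative sketch (not from the article): supervised vs.
# unsupervised learning with scikit-learn and a tiny made-up dataset.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised learning ("where you teach it"): every example comes with a label.
X = [[1, 1], [2, 1], [8, 9], [9, 8]]   # made-up feature vectors
y = [0, 0, 1, 1]                       # labels supplied by a human teacher
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[2, 2], [8, 8]]))   # expected output: [0 1]

# Unsupervised learning ("where you let it learn by itself"): no labels at all.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)   # the algorithm groups the points on its own
```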

Although machine learning is a branch of artificial intelligence (AI), it is not an approach that is going to lead to artificial general intelligence (AGI). Nevertheless, as The Economist notes, “Computers can process human speech and read even messy handwriting.”[1] The article continues:

“Machine learning is exactly what it sounds like: an attempt to perform a trick that even very primitive animals are capable of, namely learning from experience. Computers are hyper-literal, ornery beasts: anyone who has tried programming one will tell you that the difficulty comes from dealing with the fact that a computer will do exactly and precisely what you tell it to, stupid mistakes and all. For tasks that can be boiled down into simple, unambiguous rules — such as crunching through difficult mathematics, for instance — that is fine. For woolier jobs, it is a serious problem, especially because humans themselves might struggle to articulate clear rules.”

Andrew Brust (@andrewbrust), Technical Marketing Director at Datameer, asserts, “Machine learning is the next frontier in Big Data innovation. And the cloud is the next frontier within that frontier.”[2] Brust notes that three of the world’s largest providers of cloud services are also offering machine learning services. As a result, he concludes, “Machine learning will finally have some mainstream chops, and business competitiveness could change dramatically.” Shelly Palmer, Managing Director of the Digital Media Group at Landmark Ventures, reached the same conclusion. He writes, “In the age of data science, machine learning and pattern matching are the building blocks of competitive advantage.”[3] As noted at the beginning of this article, there are different kinds of machine learning; but, as The Economist notes:

“The one that is grabbing headlines at the moment is called ‘deep learning’. It uses neural networks — simple computer simulations of how biological neurons behave — to extract rules and patterns from sets of data. Show a neural network enough pictures of cats, for instance, or have it listen to enough German speech, and it will be able to tell you if a picture or sound recording it has never seen before is a cat, or in German.”
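The cat example boils down to training a model on labeled examples and then asking it about examples it has never seen. The sketch below is a loose, minimal illustration of that idea, assuming scikit-learn’s small neural-network implementation and random placeholder vectors in place of real images; it does not depict the scale or architecture of the systems The Economist describes.

```python
# A loose, minimal illustration of the "cat" idea: train a small neural network
# on labeled examples, then ask it about an example it has never seen.
# Random placeholder vectors stand in for images.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pixels = 64                                     # stand-in for flattened image pixels
cat_images = rng.normal(loc=1.0, size=(100, n_pixels))
other_images = rng.normal(loc=-1.0, size=(100, n_pixels))

X = np.vstack([cat_images, other_images])
y = np.array([1] * 100 + [0] * 100)               # 1 = cat, 0 = not a cat

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X, y)

unseen = rng.normal(loc=1.0, size=(1, n_pixels))  # an example it was never trained on
print("cat" if net.predict(unseen)[0] == 1 else "not a cat")
```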

Google is a major user of deep learning, and its system became famous for its ability to identify images of cats (hence The Economist’s example). Concerning Google’s system, Derrick Harris (@derrickharris) writes, “Google employs some of the world’s smartest researchers in deep learning and artificial intelligence, so it’s not a bad idea to listen to what they have to say about the space.”[4] The Google employee to whom Harris listened was Greg Corrado, a senior research scientist at Google, who told an audience at RE:WORK’s Deep Learning Summit that he believes there are four big lessons to learn. First, “it’s not always necessary, even if it would work.” Corrado told participants, “Deep learning isn’t necessarily the best approach to solving a problem, even if it would offer the best results.” Although that sounds counterintuitive, Corrado explained that one of the weaknesses associated with deep learning is that it is “computationally expensive (in all meanings of the word).” Despite the cost, Corrado’s second lesson is that “you don’t have to be Google to do it.” He explained that not all big data challenges are at the scale of those faced by Google, Facebook, or Baidu. “You only need an engine big enough for the rocket fuel available,” Corrado stated.


Corrado’s third lesson is that deep learning requires a lot of data. “Ideally as much as you can get your hands on,” he noted. Even then, there is no guarantee of acceptable prediction accuracy. His final point was that deep learning is not really based on the brain; it is a purely numerical technique with no ability to understand semantics. There is one other weakness associated with deep learning that businesses need to know about — it is a “black box” approach to analytics (i.e., it has no ability to explain its conclusions, making it inappropriate for many business requirements). As I noted at the conclusion of Part 1 of this article, the hybrid approach used by Enterra’s Cognitive Reasoning Platform™ (CRP) adds semantic reasoning into the mix. The addition of an ontology means that a cognitive computing system can understand abstract ideas like “sibling” and how a sibling is related to other family members. That’s something that deep learning techniques cannot do.
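Enterra’s CRP is proprietary, and the sketch below does not depict it; it is only a toy illustration of what semantic reasoning over explicit facts looks like. The names, facts, and rule are invented. The point is that a relationship such as “sibling” is derived from stated knowledge, something a purely numerical pattern matcher has no vocabulary for.

```python
# An invented, minimal illustration of semantic reasoning over explicit facts
# (not a depiction of Enterra's CRP): a "sibling" rule applied to a tiny
# knowledge base of parent-child relationships.
parent_of = {
    ("alice", "carol"),   # Alice is a parent of Carol
    ("alice", "dan"),     # Alice is a parent of Dan
    ("bob", "carol"),
    ("bob", "dan"),
}

def siblings(facts):
    """Derive sibling pairs: two distinct people who share a parent."""
    derived = set()
    for parent_a, child_a in facts:
        for parent_b, child_b in facts:
            if parent_a == parent_b and child_a != child_b:
                derived.add(tuple(sorted((child_a, child_b))))
    return derived

print(siblings(parent_of))   # expected output: {('carol', 'dan')}
```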


Christopher Olah (@ch402), a researcher and self-confessed lover of mathematics, notes, “A great strength of neural networks [is that] they learn better ways to represent data, automatically. Representing data well, in turn, seems to be essential to success at many machine learning problems.”[5] One important implication of Olah’s observation is that the output of one kind of machine learning can serve as the input to another. Gary Marcus, a professor of cognitive science at N.Y.U., points out, “The most powerful A.I. systems, like Watson, the machine that beat humans in ‘Jeopardy,’ use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.”[6] In turn, Watson’s outputs could be used as an input to Enterra’s CRP, which could then apply semantic reasoning to them. The point is, you need to use the right tool for the right job.
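As a toy illustration of chaining techniques, the sketch below (again a minimal example assuming scikit-learn and synthetic data, not a depiction of Watson or the CRP) lets an unsupervised step learn a more compact representation of the data, and a supervised step then learns from that representation.

```python
# A minimal, illustrative sketch of chaining one technique into another:
# an unsupervised step (PCA) learns a compact representation, and a
# supervised step (logistic regression) learns from it. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))              # high-dimensional synthetic features
X[:, :2] *= 3.0                             # give the informative features higher variance
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # labels follow a simple hidden pattern

model = make_pipeline(PCA(n_components=5), LogisticRegression())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```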


As I noted at the beginning of this article, machine learning is a branch of artificial intelligence (AI), but, as it is currently being developed, it is not an approach that will lead to artificial general intelligence (AGI). The Economist explains, “For now, and for the foreseeable future, deep-learning machines will remain pattern-recognition engines.” Hybrid machine learning goes beyond simple pattern recognition to provide understanding and insights not possible using deep learning alone. Nevertheless, as The Economist concludes, “They are not going to take over the world. But they will shake up the world of work.”


Footnotes
[1] “How machine learning works,” The Economist, 13 May 2015.
[2] “Cloud machine learning wars heat up,” ZDNet, 13 April 2015.
[3] “Can Machines Really Learn?” The Huffington Post, The Blog, 15 March 2015.
[4] “New to deep learning? Here are 4 easy lessons from Google,” Gigaom, 29 January 2015.
[5] “Deep Learning, NLP, and Representations,” Colah’s Blog, 7 July 2014.
[6] “Is ‘Deep Learning’ a Revolution in Artificial Intelligence?” The New Yorker, 25 November 2012.
