
Mankind and Machine Learning

April 12, 2016


In the past few years, stories about machine learning, artificial intelligence, cognitive computing, automation, and robots have flooded media outlets and raised fears about the rise of the machines. Change is certainly in the air and, undoubtedly, those changes are going to impact the lives of millions of people — sometimes for better and sometimes for worse. Many analysts, however, are convinced collaboration between mankind and machine is the best way forward. Humans provide context for life’s challenges. “Numbers need a narrative around them,” writes Jason Brownlee (@TeachTheMachine), a machine learning expert from Melbourne, Australia.[1] He explains:

“The business needs this information so they can internalize and transfer it to other problems and to the same problem in the future after the concepts have drifted. That knowledge is the secret sauce, not the model that created the knowledge. … I think of black-box machine learning like I do automatic programming. Automatic programming can give you a program that solves a well defined problem, but you have no idea how ugly that program is under the covers, and you probably don’t want to know. The very idea of this is repulsive to programmers, for the very same idea that a magic black box machine learning system is repulsive to a machine learning practitioner (data scientist?). The details, the how, really matters for most problems.”

Nir Kaldero (@NirKaldero), Director of Data Science and Head of Galvanize Experts at Galvanize, Inc., and Dr. Donatella Taurasi, a lecturer at the Haas School of Business and the Fung Institute for Engineering Leadership in Berkeley and at Hult International Business School in San Francisco, offer another good reason humans should remain in the machine learning loop — perspective. “At a glance, machine learning and statistics seem to be very similar,” they write, “but many people fail to stress the importance of the difference between these two disciplines. Machine learning and statistics share the same goals — they both focus on data modeling — but their methods are affected by their cultural differences. In order to empower collaboration and knowledge creation, it’s very important to understand the fundamental underlying differences that reflect in the cultural profile of these two disciplines.”[2] Whenever an organization faces a challenge, the best solutions are generally found when individuals from different disciplines get an opportunity to look at the challenge from their unique perspectives. Frans Johansson (@Frans_Johansson) calls it the “Medici Effect.”[2] A computational system that provides machine learning simply adds one more perspective. In fact, a cognitive computing system, like the Enterra Enterprise Cognitive System™ (ECS) — a system that can Sense, Think, Act, and Learn® — can add three perspectives to the mix. By that, I mean that subject matter expertise can actually be embedded in the software. When we talk to clients who have an analytic problem, they typically have to assemble a team of three experts:

 

  • A business domain expert — the customer of the analysis who can help explain the drivers behind data anomalies and outliers.
  • A statistical expert — to help formulate the correct statistical studies. The business expert knows what they want to study, and the statistical expert knows what terms to use to help formulate the data in a way that will detect the desired phenomena.
  • A data expert — someone who understands where and how to pull the data from across multiple databases or data feeds.

 

Enterra’s approach empowers the business expert by automating the statistical expert’s and data expert’s knowledge and functions, so the ideation cycle can be dramatically shortened and more insights can be auto-generated. That doesn’t mean that people are taken out of the loop. In fact, Lukas Biewald (@l2k), CEO of CrowdFlower, believes taking people out of the loop would be a bad idea. “I’ve worked with many companies building machine learning algorithms,” he writes, “and I’ve noticed a best practice in nearly every successful deployment of machine learning on tough business problems. That practice is called ‘human-in-the-loop’ computing.”[3] He continues:

“Here’s how it works: First, a machine learning model takes a first pass on the data, or every video, image or document that needs labeling. That model also assigns a confidence score, or how sure the algorithm is that it’s making the right judgment. If the confidence score is below a certain value, it sends the data to a human annotator to make a judgment. That new human judgment is used both for the business process and is fed back into the machine learning algorithm to make it smarter. In other words, when the machine isn’t sure what the answer is, it relies on a human, then adds that human judgment to its model. This simple pattern is at the heart of many well known, real-world use-cases of machine learning. And it solves one of the biggest issues with machine learning, namely: it’s often very easy to get an algorithm to 80 percent accuracy but near impossible to get an algorithm to 99 percent. The best machine learning lets humans handle that 20 percent since 80 percent accuracy is simply not good enough for most real-world applications.”
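Biewald’s pattern is simple enough to sketch in a few lines of code. What follows is a minimal illustration under stated assumptions, not CrowdFlower’s (or anyone else’s) actual implementation: it assumes a scikit-learn-style classifier, an invented get_human_label() stub standing in for a real annotation queue, and an arbitrary 0.80 confidence threshold.

```python
# Minimal human-in-the-loop sketch (illustrative only; names and threshold are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.80  # below this, defer to a human annotator (arbitrary value)

def get_human_label(item):
    """Stub standing in for a real human annotation step (labeling queue, review UI, etc.)."""
    return int(item.sum() > 0)  # placeholder rule so the sketch runs end to end

# 1. Train an initial model on a small labeled seed set.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(100, 5))
y_seed = (X_seed.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_seed, y_seed)

# 2. Let the model take a first pass at new, unlabeled items.
X_new = rng.normal(size=(20, 5))
labels, human_reviewed = [], []
for x in X_new:
    proba = model.predict_proba(x.reshape(1, -1))[0]
    confidence = proba.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        labels.append(int(proba.argmax()))       # machine handles the easy case
    else:
        judgment = get_human_label(x)            # person handles the hard case
        labels.append(judgment)
        human_reviewed.append((x, judgment))     # keep the judgment for retraining

# 3. Feed the human judgments back so the model gets smarter over time.
if human_reviewed:
    X_extra = np.vstack([x for x, _ in human_reviewed])
    y_extra = np.array([y for _, y in human_reviewed])
    model = LogisticRegression().fit(
        np.vstack([X_seed, X_extra]),
        np.concatenate([y_seed, y_extra]),
    )
```

In production, the deferred items would flow to an annotation tool rather than a stub and retraining would be batched, but the routing logic is the same: the machine handles the confident cases, people handle the rest, and their judgments make the model better.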

Some people might argue these are the last desperate pleas of programmers, data scientists, and statisticians trying to save their livelihoods; but I believe there is something fundamentally sound in what they are saying. In previous articles, I have argued that the future will be characterized by human/machine collaboration.[4] Thomas H. Davenport (@tdav), a Distinguished Professor at Babson College, and Julia Kirby (@JuliaKirby) write, “What if, rather than asking the traditional question — What tasks currently performed by humans will soon be done more cheaply and rapidly by machines? — we ask a new one: What new feats might people achieve if they had better thinking machines to assist them? Instead of seeing work as a zero-sum game with machines taking an ever greater share, we might see growing possibilities for employment. We could reframe the threat of automation as an opportunity for augmentation.”[5] Biewald concludes, “Human-computer interaction is much more important for artificial intelligence than we ever thought. … Artificial intelligence is here and it’s changing every aspect of how business functions. But it’s not replacing people one job function at a time. It’s making people in every job function more efficient by handling the easy cases and watching and learning from the hard cases.”

Machine learning is becoming more mainstream in the business environment every year. Forbes Technical Council identified seven ways that companies are currently using machine learning to improve their operations.[6] They are:

 

1. Improving cybersecurity efforts.
2. Identifying the factors that lead customers to pay.
3. Analyzing the buyer’s journey.
4. Helping your experts succeed.
5. Making lead scoring smarter (see the sketch following this list).
6. Keeping content relevant.
7. Making your data easier to use.
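
As a concrete illustration of item 5, here is a minimal, hypothetical lead-scoring sketch. The feature names, the data, and the choice of scikit-learn’s LogisticRegression are my own assumptions for illustration; they are not drawn from the Forbes piece.

```python
# Hypothetical lead-scoring sketch; the features and data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per lead: [emails_opened, pages_visited, days_since_last_contact]
X_history = np.array([
    [5, 12, 2],
    [0, 1, 60],
    [3, 8, 5],
    [1, 2, 30],
    [7, 20, 1],
    [0, 0, 90],
])
y_history = np.array([1, 0, 1, 0, 1, 0])  # 1 = lead eventually converted

# Fit a simple model on historical leads.
model = LogisticRegression().fit(X_history, y_history)

# Score a new lead: the predicted probability of converting becomes its lead score.
new_lead = np.array([[4, 10, 3]])
lead_score = model.predict_proba(new_lead)[0, 1]
print(f"Lead score: {lead_score:.2f}")  # higher-scoring leads get sales attention first
```

The same basic shape of model applies to several other items on the list, for example predicting which customers are likely to pay (item 2).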

 

There are many other business uses for machine learning, and human imagination is needed to identify them. Adrian Bridgwater (@ABridgwater) concludes, “There are all sorts of reasons why we still need human beings in Artificial Intelligence. From nuances of spoken language to unexpected typographical errors, no single computer and no single crowd-augmented system can ever be perfect.”[7] Mankind and machine learning have a bright future together.

 

Footnotes
[1] Jason Brownlee, “The Seductive Trap of Black-Box Machine Learning,” Machine Learning Mastery, 29 April 2014.
[2] Nir Kaldero and Donatella Taurasi, “Why a Mathematician, Statistician, & Machine Learner Solve the Same Problem Differently,” Galvanize, 26 August 2015.
[3] Lukas Biewald, “Why human-in-the-loop computing is the future of machine learning,” Computerworld, 13 November 2015.
[4] Stephen DeAngelis, “Artificial Intelligence and Industrial Revolution 4.0,” Enterra Insights, 11 February 2016.
[5] Thomas H. Davenport and Julia Kirby, “Beyond Automation,” Harvard Business Review, June 2015.
[6] Editors, Forbes Technical Council, “Seven Ways To Leverage Machine Learning In 2016,” Forbes, 25 February 2016.
[7] Adrian Bridgwater, “Machine Learning Needs A Human-In-The-Loop,” Forbes, 7 March 2016.
