Narrow Artificial Intelligence Has Broad Application

Articles about artificial intelligence (AI) are finding their way into mainstream media with increasing regularity. Often these articles fail to differentiate between attempts to create artificial general intelligence (i.e., sentient machines) and less ambitious AI efforts (so-called narrow, weak, or soft artificial intelligence). Until a truly sentient machine is developed, the debates about whether it can be done (and even whether it should be done) will continue unabated. Although a computer program called Eugene was touted last year as having passed the famous Turing Test (i.e., convincing a human judge that he or she is communicating with another human), no one believes that Eugene achieved sentience (or what Ray Kurzweil calls “the singularity”). Jamie Bartlett, Director of the Centre for the Analysis of Social Media, insists that Eugene didn’t really pass the test. “He convinced 10 of 30 judges from the Royal Society that he was human,” Bartlett writes. “Impressive indeed — but this wasn’t exactly what the great Turing had in mind. What’s more, Eugene was pretending to be a 13-year-old Ukrainian boy for whom English was a second language — something very different indeed.” [“No, Eugene didn’t pass the Turing Test – but he will soon,” The Telegraph, 21 June 2014]

Putting Eugene and the Turing Test aside, the fact of the matter is that narrow artificial intelligence has already found its place in the world and is more established than many people think. Some people get hung up on the term intelligence (artificial or not) and get quite exercised in their insistence that what we call narrow AI is not intelligence at all. Shivon Zilis (@shivon), a venture capitalist at Bloomberg Beta, has no such hang-up. In fact, she prefers the term “machine intelligence” to either “artificial intelligence” or “machine learning.” [“The Current State of Machine Intelligence”] She explains:

“[‘Machine Intelligence’ is] a unifying term for what others call machine learning and artificial intelligence. … I would have preferred to avoid a different label but when I tried either ‘artificial intelligence’ or ‘machine learning’ both proved too narrow: when I called it ‘artificial intelligence’ too many people were distracted by whether certain companies were ‘true AI,’ and when I called it ‘machine learning,’ many thought I wasn’t doing justice to the more ‘AI-esque’ approaches like the various flavors of deep learning. People have immediately grasped ‘machine intelligence’ so here we are. Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks).”
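To make the “classification” problem type Zilis mentions concrete, here is a toy nearest-centroid classifier in plain Python. It is an illustrative sketch only — the data, labels, and function names are invented for this example, and real machine intelligence systems use far more sophisticated methods.

```python
# Toy nearest-centroid classifier: assign a point the label whose
# training examples have the closest average position (centroid).

def centroid(points):
    """Average the coordinates of a list of points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(point, labeled_examples):
    """Return the label whose centroid is nearest to `point`."""
    def dist2(a, b):
        # Squared Euclidean distance (no square root needed for comparison).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(pts) for label, pts in labeled_examples.items()}
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# Hypothetical training data: two features per example, two labels.
examples = {"spam": [(5.0, 1.0), (4.5, 0.5)], "ham": [(1.0, 4.0), (0.5, 5.0)]}
print(classify((4.8, 0.8), examples))  # closest to the "spam" centroid
```

Even this simple rule captures the essential shape of classification: learn a summary from labeled data, then map new inputs to the best-matching label.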

Whatever you want to call it, narrow AI is now ubiquitous. Irving Wladawsky-Berger doesn’t have a hang-up with the term AI either. He writes, “Soft, weak or narrow AI is inspired by, but doesn’t aim to mimic, the human brain.” [“‘Soft’ Artificial Intelligence Is Suddenly Everywhere,” The Wall Street Journal, 16 January 2015] He continues:

“These are generally statistically oriented, computational intelligence methods for addressing complex problems based on the analysis of vast amounts of information using powerful computers and sophisticated algorithms, whose results exhibit qualities we tend to associate with human intelligence. … This engineering-oriented AI is indeed everywhere, and being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. AI-based tools are enhancing our own cognitive powers, helping us process vast amounts of information and make ever more complex decisions.”

Wladawsky-Berger equates narrow AI and cognitive computing. Technically, he’s correct. Cognitive computing, however, is a step forward in narrow AI. As I wrote in a previous article [Cognitive Computing: The Next Big Thing], the term “cognitive computing” remains a bit confusing since it covers systems that use different analytic approaches. The most famous cognitive system, of course, is IBM’s Watson. Watson basically uses a brute force approach to cognitive analytics. It analyzes massive amounts of data and provides a “best guess” answer (IBM calls it a “confidence-weighted response”) based on what it finds. At Enterra Solutions®, our Cognitive Reasoning Platform™ (CRP) bridges the gap between a pure mathematical technique and semantic understanding. The CRP has the ability to do math, but it also understands and reasons about what was discovered. Marrying advanced mathematics with a semantic understanding is critical — we call this “Cognitive Reasoning.” Cognition is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Of course, that definition has to be modified slightly when applied to a thinking machine. We believe a cognitive system is a system that discovers insights and relationships through analysis, machine learning, and sensing. Cognitive computing leads to better insights, not sentience, which is why it will remain in the narrow AI camp.
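The idea of a “confidence-weighted response” can be sketched in a few lines: score each candidate answer against multiple pieces of evidence, combine the scores into a confidence value, and return the best-supported answer. This is a minimal illustration under invented data and a simple averaging rule — it is not IBM Watson’s actual algorithm or API.

```python
# Minimal sketch of selecting a "confidence-weighted response":
# each candidate answer carries scores from independent evidence
# analyses; the answer with the highest combined confidence wins.

def confidence_weighted_response(candidates):
    """Return (answer, confidence) for the best-supported candidate.

    `candidates` is a list of (answer, evidence_scores) pairs, where
    evidence_scores are numbers in [0, 1]; we average them here as a
    stand-in for a real confidence model.
    """
    best_answer, best_confidence = None, 0.0
    for answer, evidence_scores in candidates:
        confidence = sum(evidence_scores) / len(evidence_scores)
        if confidence > best_confidence:
            best_answer, best_confidence = answer, confidence
    return best_answer, best_confidence

# Hypothetical candidates with evidence scores from three analyses.
answer, conf = confidence_weighted_response([
    ("Toronto", [0.42, 0.38, 0.55]),
    ("Chicago", [0.71, 0.64, 0.69]),
])
print(answer, conf)
```

The key point the sketch illustrates is that such a system never claims certainty; it ranks hypotheses by how well the evidence supports them and surfaces the best guess along with its confidence.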

As I note in the title of this article, narrow AI has broad application and cognitive computing only widens the playing field. Kurt Andersen writes, “We’re now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation — or, when driving, to silently obey the instructions of the voice from the G.P.S.” [“Enthusiasts and Skeptics Debate Artificial Intelligence,” Vanity Fair, 26 November 2014] Because the potential applications for narrow AI are seemingly unlimited, Kevin Kelly reports, “All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service.” [“The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired, 27 October 2014] Kelly continues:

“The AI on the horizon looks more like Amazon Web Services — cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

A report released last week by MarketsandMarkets lists a few of the companies involved in the cognitive computing arena. The report states, “Leading market players in cognitive computing market include Google, IBM, Microsoft Corporation, Saffron Technology, Palantir, Cold Light, Cognitive Scale, Enterra Solutions, Numenta and Vicarious.” [“Cognitive Computing Market Growing at 38% CAGR to 2019 – Regionally, NA Is Expected to Be on Top,” PRWeb via Benzinga, 27 April 2015] Although Wladawsky-Berger believes that narrow AI will play an important role in our future, he concludes with a word of caution:

“The very flexibility of software means that all the interactions between their various components, including people, cannot be adequately planned, anticipated or tested. That means that even if all the components are highly reliable, problems can still occur if a rare set of interactions arise that compromise the overall behavior and safety of the system. How can we best manage the risks involved in the design and operation of complex, software intensive, socio-technical systems? How do we deal with a system that is working as designed but whose unintended consequences we do not like? How can we protect our mission critical systems from cyberattacks? How can we make these systems as resilient as possible? Human intelligence has evolved over millions of years. But humans have only been able to survive long enough to develop intelligence because of an even more fundamental evolution-inspired capability that’s been a part of all living organisms for hundreds of millions of years — the autonomic nervous system. This is the largely unconscious biological system that keeps us alive by controlling key vital functions, including heart rate, digestion, breathing and protections against disease. Our highly complex IT systems must become much more autonomic and resilient, capable of self-healing when failures occur and self-protecting when attacked. Only then will they be able to evolve and incorporate increasingly advanced capabilities, including those we associate with human-like intelligence.”
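The autonomic, self-healing behavior Wladawsky-Berger describes can be illustrated with a toy supervision loop: monitor each component’s health and automatically restart anything that has failed. The component class, health flag, and restart action below are invented stand-ins for this sketch, not a real systems-management API.

```python
# Illustrative "autonomic" supervision loop: detect unhealthy
# components and self-heal by restarting them, the way an autonomic
# nervous system regulates vital functions without conscious effort.

class Component:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def restart(self):
        # The self-healing action: restore the component to service.
        self.healthy = True

def supervise(components):
    """One pass of the control loop: restart anything unhealthy.

    Returns the names of components that were healed this pass.
    """
    healed = []
    for c in components:
        if not c.healthy:
            c.restart()
            healed.append(c.name)
    return healed

db, cache = Component("db"), Component("cache")
cache.healthy = False          # simulate a failure
print(supervise([db, cache]))  # only the failed component is restarted
```

Real resilient systems layer far more on top of this loop (health probes, backoff, redundancy, attack detection), but the core pattern — continuous monitoring plus automatic corrective action — is the same.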

Cognitive computing systems offer the best hope for creating the kind of autonomic, self-healing, and resilient networks described by Wladawsky-Berger. In fact, they are likely to be full participants in designing such systems.
