Artificial Intelligence and Common Sense

Stephen DeAngelis

April 05, 2018

As artificial intelligence (AI) continues to make inroads into our daily lives, people are getting both curious and anxious about how it works. One specific concern is AI’s lack of common sense. To demonstrate this point, New York University professors Ernest Davis and Gary Marcus (@GaryMarcus) ask a series of questions: “Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin?”[1] Davis and Marcus point out, “These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning require the same kinds of real-world knowledge and reasoning abilities.” Although answering those questions is easy for humans, most AI systems find them challenging. Davis and Marcus note, “Artificial intelligence has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary common sense.”

Why is programming common sense so difficult?

Davis and Marcus assert there are five major obstacles to producing a “satisfactory common-sense reasoner.” They are:

1. “Many of the domains involved in commonsense reasoning are only partially understood or virtually untouched. We are far from a complete understanding of domains such as physical processes, knowledge and communication, plans and goals, and interpersonal interactions. In domains such as the commonsense understanding of biology, of social institutions, or of other aspects of folk psychology, little work of any kind has been done.”

2. “Situations that seem straightforward can turn out, on examination, to have considerable logical complexity. … There are many aspects of these relations where we do not know, even in principle, how they can be represented in a form usable by computers or how to characterize correct reasoning about them. … If you want to model a teacher thinking about what his students don’t understand, and how they can be made to understand, that is a [hard] problem, and one for which we currently do not have a workable solution.”

3. “Commonsense reasoning almost always involves plausible reasoning; that is, coming to conclusions that are reasonable given what is known, but not guaranteed to be correct. … Overall we do not seem to be very close to a comprehensive solution. Plausible reasoning takes many different forms, including using unreliable data; using rules whose conclusions are likely but not certain; default assumptions; assuming that one’s information is complete; reasoning from missing information; reasoning from similar cases; reasoning from typical cases; and others.”

4. “In many domains, a small number of examples are highly frequent, while there is a ‘long tail’ of a vast number of highly infrequent examples. … The effect of long tail distributions on AI research can be pernicious. On the one hand, promising preliminary results for a given task can be gotten easily, because a comparatively small number of common categories include most of the instances. On the other hand, it is often very difficult to attain high quality results, because a significant fraction of the problems that arise correspond to very infrequent categories.”

5. “In formulating knowledge it is often difficult to discern the proper level of abstraction. … Recall the example of sticking a pin into a carrot and the reasoning that this action may well create a hole in the carrot, but not create a hole in the pin. Before it encounters this particular example, an automated reasoner presumably would not specifically know a fact specific to pins and carrots; at best it might know a more general rule or theory about creating holes by sticking sharp objects into other objects. The question is, how broadly should such rules be formulated?”
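The abstraction problem in the fifth obstacle can be made concrete with a toy sketch. In the hypothetical rule base below (the predicate names, hardness scale, and facts are illustrative assumptions, not drawn from any real system), a reasoner that knows only a general rule about sharp objects can answer the pin-and-carrot question without ever storing a fact specific to pins or carrots:

```python
# Toy illustration of reasoning at a general level of abstraction.
# All facts, predicates, and the hardness scale are hypothetical.

# Background knowledge: material hardness on a rough ordinal scale.
HARDNESS = {"pin": 9, "carrot": 3, "polyester shirt": 2, "steel plate": 10}

def is_sharp(obj):
    # Hypothetical predicate; in a real system this would be derived
    # from the knowledge base rather than hard-coded.
    return obj in {"pin", "needle", "knife"}

def hole_created_in(sharp_obj, target):
    """General rule: sticking a sharp object into a softer object
    makes a hole in the softer object, not in the sharp one."""
    if not is_sharp(sharp_obj):
        return None  # the rule does not apply
    if HARDNESS[sharp_obj] > HARDNESS[target]:
        return target
    return sharp_obj

print(hole_created_in("pin", "carrot"))  # -> carrot
```

Of course, formulating the rule this broadly immediately raises the question Davis and Marcus pose: the same rule misfires on edge cases (a pin pushed into a steel plate), so the knowledge engineer must decide how many qualifications each rule needs.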

They conclude, “Piecemeal commonsense knowledge (e.g., specific facts) is relatively easy to acquire, but often of little use, because of the long-tail phenomenon discussed above. Consequently, there may not be much value in being able to do a little commonsense reasoning.”
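The long-tail phenomenon behind that conclusion is easy to demonstrate numerically. Assuming a Zipf-like distribution over categories (an assumption chosen purely for illustration), a tiny fraction of head categories accounts for more than half of all instances, while the vast majority of categories remain individually rare:

```python
# Illustrate the long-tail phenomenon with a Zipf-like distribution.
# The distribution and the category count are illustrative assumptions.

N = 10_000  # number of categories
weights = [1.0 / rank for rank in range(1, N + 1)]  # Zipf's law, s = 1
total = sum(weights)

# Share of all instances covered by the 100 most common categories.
head_share = sum(weights[:100]) / total
# Share covered by the remaining 9,900 rare categories.
tail_share = 1.0 - head_share

print(f"Top 1% of categories cover {head_share:.0%} of instances")
print(f"The other 99% of categories cover {tail_share:.0%}")
```

A system that handles only the head categories can post impressive aggregate numbers while failing on the enormous space of rare cases, which is exactly the “pernicious” effect Davis and Marcus describe.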

In search of common sense

Paul Allen (@PaulGAllen), Microsoft’s co-founder, agrees there is little value in pursuing a little common sense reasoning. Cade Metz (@CadeMetz) reports, “Paul Allen [is] pumping an additional $125 million into his nonprofit computer research lab for an ambitious new effort to teach machines ‘common sense.’ … In the years and decades to come, the lab hopes to create a database of fundamental knowledge that humans take for granted but machines have always lacked.”[2] Allen is skeptical an artificial general intelligence (AGI) system will ever be created because developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.”[3] Nevertheless, he told Metz, “To make real progress in A.I., we have to overcome the big challenges in the area of common sense.” As Davis and Marcus pointed out, the challenges are really big. Oren Etzioni, a former University of Washington professor who oversees the Allen Institute for Artificial Intelligence, understands the enormity of the task ahead. He told Metz, “[AI] recognizes objects, but can’t explain what it sees. It can’t read a textbook and understand the questions in the back of the book. It is devoid of common sense.” Metz concludes, “Success may require years or even decades of work — if it comes at all. Others have tried to digitize common sense, and the task has always proved too large.”

Cyc and common sense

To date, the best efforts to provide AI with common sense have come from Cycorp, with whom Enterra Solutions® partners. Davis and Marcus note, “The CYC program … was initiated in 1984 by Doug Lenat, who has led the project throughout its existence. Its initial proposed methodology was to encode the knowledge in 400 sample articles in a one-volume desk encyclopedia together with all the implicit background knowledge that a reader would need to understand the articles (hence the name). It was initially planned as a ten-year project, but continues to this day. In the last decade, Cycorp has released steadily increasing portions of the knowledge base for public or research use.” Metz writes, “In the mid-1980s, Doug Lenat, a former Stanford University professor, with backing from the government and several of the country’s largest tech companies, started a project called Cyc. He and his team of researchers worked to codify all the simple truths that we learn as children, from ‘you can’t be in two places at the same time’ to ‘when drinking from a cup, hold the open end up.’ Thirty years later, Mr. Lenat and his team are still at work on this ‘common sense engine’ — with no end in sight.”

Although Cyc may not be perfect, it far surpasses anything else available. As the company’s website notes, “Cycorp is a leading provider of semantic technologies that bring a new level of intelligence and common sense reasoning to a wide variety of software applications. The Cyc software combines an unparalleled common sense ontology and knowledge base with a powerful reasoning engine and natural language interfaces to enable the development of novel knowledge-intensive applications.” Enterra’s Enterprise Cognitive System™ (Aila®) leverages the Cyc database to ensure its insights are imbued with as much common sense as is currently possible.
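One small piece of what an ontology and knowledge base like Cyc’s makes possible is taxonomic inheritance: a fact asserted about a general concept automatically applies to everything beneath it. The sketch below is a minimal, hypothetical illustration of that idea; the facts and function names are invented for this example and do not reflect Cyc’s actual ontology or its CycL syntax:

```python
# Minimal sketch of taxonomic ("isa"-style) reasoning of the kind a
# common sense knowledge base supports. The hierarchy below is an
# illustrative assumption, not Cyc's real ontology.

ISA = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "LivingThing",
}

def generalizations(concept):
    """Walk the hierarchy upward, collecting every ancestor concept."""
    seen = []
    while concept in ISA:
        concept = ISA[concept]
        seen.append(concept)
    return seen

print(generalizations("Dog"))  # -> ['Mammal', 'Animal', 'LivingThing']
```

The hard part, as the decades of work on Cyc suggest, is not the inference mechanism but populating such a hierarchy with the millions of everyday facts and rules humans never bother to write down.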

Summary

Metz reports Doug Lenat welcomes Allen’s new project. “But he also warned of challenges: Cyc has burned through hundreds of millions of dollars in funding, running into countless problems that were not evident when the project began. He called them ‘buzz saws’.” Davis and Marcus conclude, “Intelligent machines need not replicate human techniques, but a better understanding of human common sense reasoning might be a good place to start.” Marcus put his money where his mouth is and started a company called Geometric Intelligence to do just that.[4] The company was acquired by Uber in 2016.[5] Efforts to imbue AI with common sense are laudable, but they face enormous challenges and, as Metz cautions, may require decades of work, if success comes at all.

Footnotes
[1] Ernest Davis, Gary Marcus, “Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence,” Communications of the ACM, September 2015.
[2] Cade Metz, “Paul Allen Wants to Teach Machines Common Sense,” The New York Times, 28 February 2018.
[3] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[4] Will Knight, “Can This Man Make AI More Human?” MIT Technology Review, 17 December 2015.
[5] Polina Marinova, “Uber Just Bought a Startup You’ve Never Heard Of. Here’s Why That’s Important,” Fortune, 5 December 2016.