
Computers that Try to Think Like Humans

February 19, 2015


Amelia is a computer with which humans can hold natural language conversations. It (she) was built by an information technology (IT) firm called IPSoft. According to Christopher Mims (@mims), Amelia “is different” from other computers. “She learns from textbooks, transcriptions of conversations, email chains and just about any other text,” Mims writes. “As long as the answer appears in the data she gets, she can solve problems.”[1] It appears that Amelia deserves to be called a cognitive computing system. Cognition is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” If you exchange the word “analysis” for “thought,” you can create a pretty good definition that can be applied to cognitive computing systems, namely: a system that discovers insights and relationships through analysis, machine learning, and sensing. That describes Amelia pretty well. Mims adds:

“Amelia is already being tested — in some cases, alongside Watson — by companies in surprisingly diverse industries, from telecommunications to energy. She embodies a new approach to artificial intelligence called cognitive computing. Its defining characteristic is machines that can learn. Yet because of the complexity of their understanding, the knowledge they contain can’t be programmed into them. Like all software, these systems are first built by programmers, but like children, they must be taught to do the things for which they are intended.”

As I pointed out in an article entitled “Making Sense of Cognitive Computing,” cognitive computing systems, including Amelia, use narrow (also called “weak” and “soft”) artificial intelligence to work their magic. They are not destined to become our computer overlords. Nevertheless, such narrow AI systems are becoming more sophisticated all the time. Mims describes what a notional conversation with Amelia could be like:

Joe: “Hello, I’m stuck here and my car won’t start.”
Amelia: “I’m sorry to hear that. Could you take a look at your dashboard? Is the battery light on?”
Joe: “No.”
Amelia: “OK. Are any of the lights in your car on?”
Joe: “No.”
Amelia: “It could be an issue with your battery. Do you have jumper cables with you?”
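
The exchange reads like a decision tree: each answer prunes the space of likely faults and selects the next question. Purely as an illustration (IPSoft has not published Amelia's internals, and the node names below are invented), here is a minimal Python sketch of a rule-based troubleshooting dialogue of this shape:

    # Minimal sketch of a rule-based troubleshooting dialogue (illustrative
    # only; not Amelia's actual implementation). Each node pairs a question
    # with follow-up nodes keyed by the caller's yes/no answer.
    DIALOGUE = {
        "start": {
            "question": "Could you take a look at your dashboard? Is the battery light on?",
            "yes": "battery_light_on",
            "no": "other_lights",
        },
        "other_lights": {
            "question": "OK. Are any of the lights in your car on?",
            "yes": "electrical_fault",
            "no": "dead_battery",
        },
        "dead_battery": {
            "question": "It could be an issue with your battery. Do you have jumper cables with you?",
            "yes": "jump_start",
            "no": "dispatch_service",
        },
    }

    def run_dialogue(answers, node="start"):
        """Walk the tree, consuming one yes/no answer per question asked."""
        transcript = []
        for answer in answers:
            if node not in DIALOGUE:
                break  # reached a leaf action such as "jump_start"
            transcript.append(DIALOGUE[node]["question"])
            node = DIALOGUE[node][answer]
        return transcript, node

    questions, outcome = run_dialogue(["no", "no", "yes"])
    for q in questions:
        print("Amelia:", q)
    print("Outcome:", outcome)  # -> jump_start

The hand-built table is exactly what Amelia is meant to make unnecessary: as Mims notes, she acquires this kind of knowledge from manuals and transcripts rather than from a programmer enumerating every branch.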

In my earlier article, I quoted Irving Wladawsky-Berger, who wrote, “Soft, weak or narrow AI is inspired by, but doesn’t aim to mimic, the human brain. These are generally statistically oriented, computational intelligence methods for addressing complex problems based on the analysis of vast amounts of information using powerful computers and sophisticated algorithms, whose results exhibit qualities we tend to associate with human intelligence.”[2] This is just as true for Amelia as it is for my company’s (Enterra Solutions®) Cognitive Reasoning Platform™ (CRP). Concerning Amelia, Mims writes:

“Amelia is the product of an attempt to understand how people think, rather than to copy the means by which we do it. Many traditional AI efforts try to map the human brain, or the brains of less complicated animals, like fruit flies. But Amelia is all about turning what psychologists and linguists know about how thinking happens — a high-level understanding of it, rather than how it’s carried out by our neurons — into software.”

That’s why if a true artificial general intelligence machine is ever developed (i.e., one that is self-aware), it will likely have a new kind of intelligence that differs from human intelligence. An IBM Fellow named Dharmendra S. Modha talks about “a brain-inspired machine.” He reports that last year IBM built a “one million neuron brain-inspired processor. The chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt–literally a synaptic supercomputer in your palm.”[3] IBM calls the chip TrueNorth. Modha explains that the reason IBM is looking to mimic how the brain works is that the brain is a marvel of computing effectiveness and efficiency. On the other hand, Modha writes, “As remarkable as [the] evolution [in computing power has been], it has been headed in a direction diametrically opposite to the computing paradigm of the brain. Consequently, today’s microprocessors are eight orders of magnitude faster (in terms of clock rate) and four orders of magnitude hotter (in terms of power per unit cortical area) than the brain.” In other words, today’s computers are energy hogs. That’s why IBM researched how it could improve computer hardware architecture. Modha explains:

“Unlike the prevailing von Neumann architecture — but like the brain — TrueNorth has a parallel, distributed, modular, scalable, fault-tolerant, flexible architecture that integrates computation, communication, and memory and has no clock. It is fair to say that TrueNorth completely redefines what is now possible in the field of brain-inspired computers, in terms of size, architecture, efficiency, scalability, and chip design techniques.”
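
To make the contrast with clocked, von Neumann-style execution concrete, here is a toy event-driven simulation of integrate-and-fire neurons in Python. It is in no way a model of TrueNorth itself (the chip is proprietary silicon); it only illustrates the clockless, spike-driven style of computation Modha describes, in which a neuron does work only when an event reaches it, rather than on every tick of a global clock:

    # Toy event-driven (clockless) integrate-and-fire simulation. Illustrative
    # only -- not a model of TrueNorth. Work happens only when a spike event
    # arrives, not on every tick of a global clock.
    import heapq
    import itertools

    THRESHOLD = 1.0                # firing threshold (arbitrary units)
    _tiebreak = itertools.count()  # orders same-time events without comparing Neurons

    class Neuron:
        def __init__(self, name, targets=()):
            self.name = name
            self.potential = 0.0
            self.targets = targets  # (target_neuron, synaptic_weight, delay) triples

    def simulate(initial_spikes):
        """Pop spike events in time order until the event queue drains."""
        events = [(t, next(_tiebreak), n, w) for t, n, w in initial_spikes]
        heapq.heapify(events)
        while events:
            t, _, neuron, weight = heapq.heappop(events)
            neuron.potential += weight
            if neuron.potential >= THRESHOLD:
                neuron.potential = 0.0  # reset after firing
                print(f"t={t:.1f}: {neuron.name} fires")
                for target, w, delay in neuron.targets:
                    heapq.heappush(events, (t + delay, next(_tiebreak), target, w))

    b = Neuron("B")
    a = Neuron("A", targets=[(b, 0.6, 1.0)])  # A excites B: weight 0.6, delay 1.0
    simulate([(0.0, a, 1.0), (0.5, a, 1.0)])  # B fires only after two spikes from A

The event queue stands in for the asynchronous spike traffic on chip; nothing advances unless a spike does, which is the intuition behind TrueNorth's low power draw.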

Hardware isn’t the only way that researchers are trying to mimic the brain. Rafi Letzter reports that researchers at MIT are trying to develop algorithms that help computers think more like humans.[4] He writes:

“Humans and machines organize the world in very different ways. People are good at fitting small data sets into larger patterns: Eggs, chocolate, butter, sugar, and flour? Sounds like a brownie mix. Computers, on the other hand, are good at sorting huge data sets into clusters without supervision: All of these ingredients show up in recipes for sweets. But they can struggle with details: Wait, what’s this Tootsie Roll doing in my brownie? A new machine-learning model out of MIT aims to close that gap. Until now, if scientists asked a computer to sort data points unsupervised, the best it could do was throw similar-looking stuff into a big pile — a process known as topic modeling. The MIT model asks the computer to limit the size of each pile, and organize it around a prototype. In other words, the algorithm is more likely to throw Tootsie Rolls in a separate pile from brownies, because they don’t appear in the set’s most typical example of a brownie recipe. … To test how people might benefit from the development, researchers Julie Shah and Been Kim fed meal recipes into the old topic modeling system and their new prototyping software. The older topic model spat out a list of ingredients, while the new algorithm found a more typical example of the recipe. Their test subjects were 86 percent successful in cooking their meals from the typical recipe, but only 71 percent from the ingredients list.”
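
The article does not spell out how the MIT model (by Been Kim and Julie Shah) works internally, but the stated idea, organizing each cluster around a prototype that is an actual example rather than an abstract blend, is close in spirit to classic k-medoids clustering. Here is a small Python sketch in that style, with recipes treated as sets of ingredients; it illustrates the general idea, not the researchers' actual model:

    # Prototype-based clustering in the spirit of k-medoids: each cluster is
    # summarized by a real data point (a "prototype" recipe), not an abstract
    # mixture of ingredients. An illustration of the idea described above,
    # not the MIT researchers' actual model.
    import random

    def jaccard_distance(a, b):
        """Distance between two ingredient sets (0 = identical, 1 = disjoint)."""
        return 1.0 - len(a & b) / len(a | b)

    def k_medoids(points, k, iterations=20, seed=0):
        rng = random.Random(seed)
        medoids = rng.sample(points, k)
        for _ in range(iterations):
            # Assign each recipe to its nearest prototype.
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: jaccard_distance(p, medoids[i]))
                clusters[nearest].append(p)
            # Re-pick each prototype as the member closest to all the others.
            for i, members in enumerate(clusters):
                if members:
                    medoids[i] = min(members, key=lambda m: sum(
                        jaccard_distance(m, p) for p in members))
        return medoids, clusters

    recipes = [
        frozenset({"eggs", "chocolate", "butter", "sugar", "flour"}),    # brownies
        frozenset({"eggs", "chocolate", "butter", "sugar", "vanilla"}),  # brownies
        frozenset({"sugar", "corn syrup", "cocoa", "wax"}),              # Tootsie Roll
    ]
    medoids, _ = k_medoids(recipes, k=2)
    for m in medoids:
        print("prototype:", sorted(m))

Handing a cook an actual prototype recipe rather than a pooled ingredient list is what lifted the success rate from 71 percent to 86 percent in Shah and Kim's test.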

The field of cognitive computing is progressing rapidly. Diego Lo Giudice (@dlogiudice), an analyst with Forrester, believes that a quarter of a century from now cognitive computing will have replaced artificial intelligence as the term most used to describe smart machines.[5] Lo Giudice indicates that three assumptions lead him and his colleagues to that conclusion. Those assumptions are:

  • Assumption No. 1: Cognitive computing will forever change the way we engage with software systems. One goal of cognitive computing is to help computers interact with humans in a human way. While we are still far away from full speech/voice recognition and language understanding, natural language (NL) in limited domains is possible (shallow NL). … Shallow NL is here and many current cognitive products are leveraging it [a toy illustration appears after this list]. … NL is a great promise for systems of engagement and mobile; think how portable NL is across channels and how easily leverageable it can be on our own devices. Forrester strongly believes cognitive [computing] will transform the engagement model in many ways and at increasing levels of disruption. …
  • Assumption No. 2: Cognitive computing is now ready to help solve business problems in much less time than with traditional programming. …
  • Assumption No. 3: What’s surfacing from cognitive computing these days is only the tip of the iceberg; much more will be coming. Cognitive computing has long-term goals, spanning a decade or more. It will solve a class of new problems we have not even thought of yet. … If you get involved in cognitive computing, you should get in for the long run. Yes, you can get some spot business solutions going, but the big reward is going to take more investment and time.
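
As promised in the first assumption, here is a toy illustration of what “shallow NL in a limited domain” can mean in practice. A few keyword sets per intent go a surprisingly long way when the domain is as narrow as roadside assistance; real products use far richer features, and the intent names and keyword lists below are invented for the sketch:

    # Toy "shallow NL" intent matcher for one narrow domain (roadside
    # assistance). Purely illustrative; the intent names and keyword lists
    # are invented for this sketch.
    import re

    INTENTS = {
        "dead_battery": {"battery", "start", "crank", "dead", "jump"},
        "flat_tire":    {"tire", "flat", "puncture", "spare"},
        "lockout":      {"locked", "keys", "lock", "inside"},
    }

    def classify(utterance):
        """Score each intent by keyword overlap; return the best match."""
        words = set(re.findall(r"[a-z']+", utterance.lower()))
        scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(classify("My car won't start, I think the battery is dead"))  # dead_battery
    print(classify("I locked my keys inside"))                          # lockout

No parse trees and no world knowledge, just surface patterns, which is why this approach ports so easily across channels and devices, and why it breaks the moment the conversation leaves the domain.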

Forrester analysts conclude, “Cognitive systems are creeping into commercial relevance beginning with high-end customer engagement applications in financial services, healthcare, and retail and will become ubiquitous in mainstream scenarios and the Internet of Things within five years.” The closer we can come to using the techniques the human brain uses to solve problems, the easier it will become for humans and machines to collaborate in the workplace.

Footnotes
[1] Christopher Mims, “Amelia, a Machine, Thinks Like You,” The Wall Street Journal, 28 September 2014.
[2] Irving Wladawsky-Berger, “‘Soft’ Artificial Intelligence Is Suddenly Everywhere,” The Wall Street Journal, 16 January 2015.
[3] Dharmendra S. Modha, “Introducing a Brain-inspired Computer,” IBM, 2014.
[4] Rafi Letzter, “New Algorithm Helps Computers Think More Like Humans,” Popular Science, 8 December 2014.
[5] Diego Lo Giudice, “Three Assumptions for Why the Next Generation of Software Innovation Will Be Cognitive,” Computerworld UK, 28 August 2014.
