
Will 2014 be the Year You Fall in Love with Cognitive Computing?

April 18, 2014


In the Oscar-winning film “Her,” the character played by Joaquin Phoenix falls in love with his smartphone’s artificial intelligence personal assistant, Samantha (voiced by Scarlett Johansson). In real life, Ryan Daws asserts that there isn’t a lot to love about today’s smartphone AI personal assistants. He calls them “speech interpreters.” [“2014: Year of the contextual AI,” Telecoms Tech, 28 January 2014] He explains:

“They will detect speech and based upon the format will scour the appropriate source for relevant answers. General knowledge answers will likely be pulled from Wikipedia, math questions will almost certainly be answered by Wolfram Alpha, questions about places will undoubtedly be settled by Yelp … and as for bookings at said places? That will be handled by OpenTable. By making use of the extensive information available from these sources; virtual assistants can, and will continue to provide concise and complete answers. It’s a case of when it works, it works well. When it goes wrong … it goes disastrously wrong.”
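Daws’ point amounts to a simple dispatch pattern: detect the format of the query, then hand it off to a single hard-wired source. The Python sketch below is purely illustrative (the keyword rules, function names, and source list are my assumptions, not any vendor’s actual code), but it shows why such an assistant works well when the classification is right and fails badly when it is wrong.

```python
# Purely illustrative sketch of the "speech interpreter" pattern Daws
# describes: classify the query's format, then dispatch it to a single
# hard-wired source. The keyword rules and source list are assumptions.

def classify(query: str) -> str:
    """Crude keyword-based intent detection."""
    q = query.lower()
    if any(token in q for token in ("calculate", "solve", "how many", "+")):
        return "math"       # computational questions
    if any(token in q for token in ("restaurant", "near me", "open now")):
        return "places"     # local-search questions
    if q.startswith(("book", "reserve")):
        return "booking"    # reservation requests
    return "general"        # default: encyclopedia-style lookup

SOURCES = {
    "math": "Wolfram Alpha",
    "places": "Yelp",
    "booking": "OpenTable",
    "general": "Wikipedia",
}

def answer(query: str) -> str:
    # A real assistant would call the chosen source's API here; this
    # sketch only reports the routing decision, because the routing is
    # where a misclassified query "goes disastrously wrong".
    return f"Routing '{query}' to {SOURCES[classify(query)]}"

print(answer("Book a table for two tonight"))   # -> OpenTable
print(answer("Calculate 15 percent of 80"))     # -> Wolfram Alpha
print(answer("Who wrote War and Peace?"))       # -> Wikipedia
```

Everything downstream depends on that first classification step, which is exactly the brittleness Daws is describing.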

Daws hasn’t fallen out of love with AI personal assistants; rather, he’s still searching for his perfect companion. And he’s optimistic about his search. Newer personal assistants, he writes, will “be contextual AIs. These newer Artificial Intelligence systems can engage in natural conversation where if something isn’t quite understood, or requires more information, it can ask the user back for clarification … rather than grabbing the wrong (frustrating) answer.” You can tell he’s looking forward to holding more intelligent conversations with his phone. Contextual AI systems require both artificial intelligence algorithms and natural language processing (NLP) capabilities. Add machine learning to those two capabilities and you have a powerful combination.

Rajeev Ronanki and David Steier write, “Artificial intelligence, machine learning, and natural language processing have moved from experimental concepts to potential business disruptors — harnessing Internet speed, cloud scale, and adaptive mastery of business processes to drive insights that aid real-time decision making.” [“Tech Trends 2014: Cognitive Computing,” Deloitte University Press, 21 February 2014] Obviously, Ronanki and Steier aren’t writing about personal assistants. They are discussing how cognitive computing systems can be major differentiators for businesses that use them. They explain, “For organizations that want to improve their ability to sense and respond, cognitive analytics can be a powerful way to bridge the gap between the intent of big data and the reality of practical decision making.” They continue:

“For decades, companies have dealt with information in a familiar way — deliberately exploring known data sets to gain insights. Whether by queries, reports, or advanced analytical models, explicit rules have been applied to universes of data to answer questions and guide decision making. The underlying technologies for storage, visualization, statistical modeling, and business intelligence have continued to evolve, and we’re far from reaching the limits of these traditional techniques. Today, analytical systems that enable better data-driven decisions are at a crossroads with respect to where the work gets done. While they leverage technology for data-handling and number-crunching, the hard work of forming and testing hypotheses, tuning models, and tweaking data structures is still reliant on people. Much of the grunt work is carried out by computers, while much of the thinking is dependent on specific human beings with specific skills and experience that are hard to replace and hard to scale. … For the first time in computing history, it’s possible for machines to learn from experience and penetrate the complexity of data to identify associations. The field is called cognitive analytics™ — inspired by how the human brain processes information, draws conclusions, and codifies instincts and experience into learning. Instead of depending on predefined rules and structured queries to uncover answers, cognitive analytics relies on technology systems to generate hypotheses, drawing from a wide variety of potentially relevant information and connections. Possible answers are expressed as recommendations, along with the system’s self-assessed ranking of how confident it is in the accuracy of the response. Unlike in traditional analysis, the more data fed to a machine learning system, the more it can learn, resulting in higher-quality insights. Cognitive analytics can push past the limitations of human cognition, allowing us to process and understand big data in real time, undaunted by exploding volumes of data or wild fluctuations in form, structure, and quality. Context-based hypotheses can be formed by exploring massive numbers of permutations of potential relationships of influence and causality — leading to conclusions unconstrained by organizational biases.”

Most researchers refer to this field as cognitive computing (you can’t really trademark an entire field), and Ronanki and Steier admit that “cognitive analytics is an extension of cognitive computing, which is made up of three main components: machine learning, natural language processing, and advancements in the enabling infrastructure.” Nevertheless, they are spot on in their description of how cognitive computing differs from traditional approaches to data analysis. Basically, the computer system assists (or replaces) humans with specific analytical skills so that executives can make better-informed decisions.
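To make the contrast with traditional, rule-driven analysis a bit more concrete, here is a deliberately tiny sketch of the output pattern Ronanki and Steier describe: the system generates several candidate hypotheses, scores each against the available evidence, and returns them as recommendations ranked by a self-assessed confidence. The data, names, and keyword-overlap scoring rule are illustrative assumptions, not Deloitte’s (or anyone’s) actual implementation.

```python
# Toy illustration (assumed data and function names) of hypothesis
# generation with confidence-ranked recommendations.

def score(keywords: set[str], evidence: list[str]) -> float:
    """Fraction of evidence documents mentioning any hypothesis keyword."""
    hits = sum(any(kw in doc.lower() for kw in keywords) for doc in evidence)
    return hits / len(evidence) if evidence else 0.0

def recommend(hypotheses: dict[str, set[str]], evidence: list[str]):
    """Return (hypothesis, confidence) pairs, highest confidence first."""
    return sorted(
        ((name, score(kws, evidence)) for name, kws in hypotheses.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

evidence = [
    "Shipments delayed at the port for the third week in a row",
    "Port congestion worsens as carriers reroute around the backlog",
    "Retail demand holding steady month over month",
]
hypotheses = {
    "logistics bottleneck": {"port", "delay", "congestion", "reroute"},
    "demand shock": {"demand spike", "shortage", "panic buying"},
}

for name, confidence in recommend(hypotheses, evidence):
    print(f"{name}: confidence {confidence:.2f}")
# logistics bottleneck: confidence 0.67
# demand shock: confidence 0.00
```

In a real cognitive analytics system the scoring would come from learned models rather than keyword overlap, and the confidence would sharpen as more evidence is ingested; the point here is the shape of the output, ranked recommendations with confidence attached rather than a single definitive answer.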

 

As Ronanki and Steier note, the more data a cognitive computing system ingests, the more refined and nuanced its learning and insights become. During the learning process, however, some interesting (and often humorous) conclusions can be reached. Eric Blattberg reports, for example, “When deep learning startup AlchemyAPI exposed its natural language processing system to the Internet, it determined that dogs are people because of the way folks talked about their pets. That might ring true to some dog owners, but it’s not accurate in a broader context. That hilarious determination reflects the challenges — and opportunities — inherent to machine learning.” [“Cognitive computing is smashing our conception of ‘ground truth’,” Venture Beat, 20 March 2014] At Enterra Solutions®, we generally avoid these kinds of false conclusions by using a common-sense ontology, complemented by experts, to help speed up the learning process for our Cognitive Reasoning Platform™ (CRP). Although some pundits probably believe that cognitive computing systems will eventually replace humans in some situations, AlchemyAPI CEO Elliot Turner told participants at the Structure Data conference, “These technologies still need humans to direct them. … They will augment human capabilities, not replace them. Just because Watson can now understand X-rays doesn’t mean that doctors will become obsolete.” He continued, “I believe in the strength of the human spirit. … While the systems that are coming online are amazing … you can still have a person read a document better than a machine can today. Same thing for vision. We just have to focus on the things that make us special, and move away from the historical view of rote memorization.” I agree with Turner that cognitive computing systems will have their greatest impact when they complement the work being done by humans. For more on that topic, read my post entitled “Cognitive Computing and Human/Computer Interactions.”
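As a rough illustration of the ontology-check idea mentioned above, the hypothetical sketch below (my own simplification, not Enterra’s actual CRP code) shows one way a curated common-sense ontology can keep a statistically learned relation such as “dogs are people” out of a knowledge base and route it to a human expert instead.

```python
# Hypothetical sketch of vetting a statistically learned relation
# against a small common-sense ontology before it enters a knowledge
# base. The ontology and relation below are made up for illustration.

# Hand-curated ontology: each concept maps to its known superclasses.
ONTOLOGY = {
    "dog": {"mammal", "animal", "pet"},
    "person": {"mammal", "animal"},
}

def plausible_is_a(subject: str, category: str) -> bool:
    """Accept 'subject IS-A category' only if the ontology supports it."""
    return category in ONTOLOGY.get(subject, set())

# A relation mined from text with high statistical confidence...
subject, relation, category, confidence = ("dog", "is_a", "person", 0.88)

if plausible_is_a(subject, category):
    print(f"Accepted: {subject} {relation} {category} ({confidence:.2f})")
else:
    # ...is not added automatically; it is flagged for expert review.
    print(f"Flagged for expert review: {subject} {relation} {category}")
```

The division of labor mirrors Turner’s point: the machine proposes associations, but curated knowledge and human experts still direct what it is allowed to conclude.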

 

It should be clear from this discussion that we are still in the early stages of cognitive computing. The good news is that cognitive systems can climb the learning curve quickly, because computers can ingest and analyze data 24 hours a day and keep learning as long as they are plugged in and turned on.
