Advances in Cognitive Computing

December 16, 2013

With a plethora of daily headlines touting the benefits and challenges of Big Data analytics, most people understand that we have entered a new era of analysis. One feature of this new analytic landscape is the rise of cognitive computing: machines that can learn. Researchers have been pursuing artificial intelligence for decades, but we have reached a point where a computer can pore through mountains of data searching for insights and connections, teaching itself along the way. Most people, for example, are aware that a Google-owned system taught itself to recognize a cat by analyzing millions of pictures. Natalie Wolchover reports, “Studies suggest that computer models called neural networks may learn to recognize patterns in data using the same algorithms as the human brain.” [“As Machines Get Smarter, Evidence Grows That They Learn Like Us,” Scientific American, 24 July 2013] The notion that humans have a mathematical formula tucked away in their DNA may sound strange, but we know that our brains learn and store data by making connections. Wolchover explains:

“The brain performs its canonical task — learning — by tweaking its myriad connections according to a secret set of rules. To unlock these secrets, scientists 30 years ago began developing computer models that try to replicate the learning process. Now, a growing number of experiments are revealing that these models behave strikingly similar to actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the brains’ and computers’ underlying learning algorithms. The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle.”
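To make the “tweaking connections” idea concrete, here is a minimal Python sketch of the Boltzmann learning rule: strengthen the unit-to-unit correlations the network sees in the data and weaken the correlations it produces on its own. The network size, toy patterns, learning rate, and the use of only visible units are simplifying assumptions made for illustration; this is not a reconstruction of Hinton and Sejnowski’s original experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary patterns over four visible units (invented for illustration).
data = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 1, 1],
                 [0, 1, 1, 1]], dtype=float)

n_units = data.shape[1]
W = np.zeros((n_units, n_units))   # symmetric weights, no self-connections
learning_rate = 0.1

def gibbs_sweep(state, W):
    """Resample each binary unit given the others (stochastic activation)."""
    for i in range(len(state)):
        activation = W[i] @ state - W[i, i] * state[i]
        p_on = 1.0 / (1.0 + np.exp(-activation))
        state[i] = 1.0 if rng.random() < p_on else 0.0
    return state

for epoch in range(200):
    # "Positive" phase: unit-unit correlations with the data clamped on.
    pos_corr = data.T @ data / len(data)

    # "Negative" phase: correlations under the model's own (approximate) samples.
    samples = np.array([gibbs_sweep(row.copy(), W) for row in data])
    neg_corr = samples.T @ samples / len(samples)

    # Boltzmann learning rule: nudge the weights toward the data statistics.
    W += learning_rate * (pos_corr - neg_corr)
    np.fill_diagonal(W, 0.0)

print(np.round(W, 2))
```

After training, units that tend to be on together in the data end up with positive weights between them, which is the “basic correspondence” between connection-tweaking in brains and in these models that Wolchover describes.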

Although the human brain remains much more powerful than any of today’s computers (i.e., it is capable of doing many more things), computers can recognize patterns and make connections better than we can. That’s where most of the advances in cognitive computing have taken place. Recently, researchers from MIT have created an algorithm that they say will give the Hinton/Sejnowski algorithm a run for its money — at least in some areas. Larry Hardesty reports, “Researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and Computer Science and Artificial Intelligence Laboratory [have formulated] a new reinforcement-learning algorithm that, for a wide range of problems, allows computer systems to find solutions much more efficiently than previous algorithms did.” [“How computers can learn better,” MIT News, 28 May 2013] Hardesty explains the concept of reinforcement learning:

“Reinforcement learning is a technique, common in computer science, in which a computer system learns how best to solve some problem through trial-and-error. Classic applications of reinforcement learning involve problems as diverse as robot navigation, network administration and automated surveillance.”

The software, which was developed by Alborz Geramifard and Robert Klein, has been “dubbed RLPy (for reinforcement learning and Python, the programming language it uses).” Its aim “is to simplify education and research in solving Markov Decision Processes by providing a plug-n-play framework, where various components can be linked together to create experiments.” [“RLPy,” MIT] The site notes, “At the moment RLPy is mostly focused on value function based reinforcement learning algorithms. However, direct policy search methods are currently being implemented.” Hardesty continues:

“Every reinforcement-learning experiment involves what’s called an agent, which in artificial-intelligence research is often a computer system being trained to perform some task. The agent might be a robot learning to navigate its environment, or a software agent learning how to automatically manage a computer network. The agent has reliable information about the current state of some system: The robot might know where it is in a room, while the network administrator might know which computers in the network are operational and which have shut down. But there’s some information the agent is missing — what obstacles the room contains, for instance, or how computational tasks are divided up among the computers. Finally, the experiment involves a ‘reward function,’ a quantitative measure of the progress the agent is making on its task. That measure could be positive or negative: The network administrator, for instance, could be rewarded for every failed computer it gets up and running but penalized for every computer that goes down. The goal of the experiment is for the agent to learn a set of policies that will maximize its reward, given any state of the system. Part of that process is to evaluate each new policy over as many states as possible. But exhaustively canvassing all of the system’s states could be prohibitively time-consuming. … RLPy can be used to set up experiments that involve computer simulations, such as those that the MIT researchers evaluated, but it can also be used to set up experiments that collect data from real-world interactions.”
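Those pieces (agent, state, reward function, policy) map directly onto tabular Q-learning, a classic reinforcement-learning algorithm. The sketch below uses a made-up five-state corridor with a reward at the far end; the learning rate, discount factor, and exploration settings are illustrative assumptions, and this is neither the MIT researchers’ algorithm nor RLPy code.

```python
import random

random.seed(0)

# Toy corridor: states 0..4, with a reward only for reaching state 4 (illustrative setup).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy_action(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment dynamics: stay inside the corridor, reward 1 at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    return next_state, (1.0 if next_state == GOAL else 0.0), next_state == GOAL

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy policy: mostly exploit, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy_action(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: greedy_action(s) for s in range(N_STATES)})   # learned policy: move right toward the goal
```

The agent never sees the corridor’s layout in advance; it discovers, through trial and error and the reward signal alone, a policy that maximizes its reward from any state, which is exactly the structure of the experiments Hardesty describes.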

The potential uses of cognitive computing are only limited by the imagination. Dr. Ignacy Sawicki asserts, “There has … been remarkable progress towards computer/robotic science and automated reasoning.” [“Machine learning and the future of science,” Cosmology at AIMS, 11 November 2013] Sawicki believes that cognitive computers could conduct “great science” by applying “some clever search algorithms to find concepts or ideas that fit the observable data best.” He continues:

“To do so, it would need to be able to compute the implications of a given idea. For example, given an action, it would compute the observable implications, compare with available data, compute a likelihood, and then jump to a new theory. This is hard to imagine, but there has been remarkable progress in automated theorem proving software. I can (sort of!) imagine a robotic scientist that proposes theories through some encoding of the space of relevant concepts, derives logical derivations using allowed logical operations until it produces something that can be compared with data, computes the likelihood of the theory given the data, and then adapts the theory based on this outcome.”
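Stripped to its essentials, that loop (propose a theory, compute its observable implications, score it against data, move on) resembles likelihood-based model selection. The sketch below searches a deliberately tiny “space of theories” (polynomial degrees) against synthetic observations; the data-generating law, noise level, and BIC penalty are assumptions made purely for illustration, not Sawicki’s proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": a quadratic law plus Gaussian noise (invented for the example).
x = np.linspace(-3, 3, 40)
y = 0.5 * x**2 - x + rng.normal(scale=0.5, size=x.size)
sigma = 0.5                                  # assumed measurement uncertainty

def log_likelihood(coeffs):
    """Gaussian log-likelihood of the data given a polynomial 'theory'."""
    prediction = np.polyval(coeffs, x)
    return -0.5 * np.sum(((y - prediction) / sigma) ** 2)

# The "space of theories" is just polynomial degree 0..5.
scores = {}
for degree in range(6):
    coeffs = np.polyfit(x, y, degree)        # derive the theory's observable implications
    n_params = degree + 1
    # Bayesian information criterion: reward fit, penalize needless complexity.
    scores[degree] = n_params * np.log(x.size) - 2 * log_likelihood(coeffs)

best = min(scores, key=scores.get)
print("Preferred polynomial degree:", best)  # should typically settle on 2
```

A real “robotic scientist” would need a far richer encoding of theories than polynomial coefficients, but the compute-implications, score-against-data, adapt cycle is the same.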

At my company, Enterra Solutions®, we have created the Enterra Hypothesis Engine™ that allows for the dynamic generation of reasoning plans, allocation of analytic tasking between reasoning and quantitative and/or optimization algorithms, and the interpretation of data and analytics within a machine for rapid experimentation, modeling, and simulation. Because there are so many things that can be done using cognitive computing, “Swift IQ believes Machine Learning as a Service (MLaaS) will become a key market in the near future as more providers look to integrate predictive APIs with their customer transaction data.” [“Machine Learning as a Service: Swift IQ Predicts the Future,” by Mark Boyd, Programmable Web, 1 November 2013] Boyd goes on to note that machine learning can be used for recommendation engines (Enterra® has one of those as well), in pattern mining, in classification, and in clustering. He explains how each technique might be used:

“Recommendation Engines – like those used by Amazon – allow for a more personalized online shopping experience that helps retailers present the sorts of products that shoppers are most likely to be interested in. …

Frequent Pattern Mining is particularly useful to supermarket chains who may want to organize shelving patterns to up and cross-sell products to store visitors. By analyzing what products shoppers buy in conjunction, supermarkets can make it easier for customers to remember what they want and get all they need. …

Classification is used to better organize keyword search results to better suggest ranking of search terms based on their likelihood to be relevant to the searcher. Classification algorithms could be used in conjunction with knowledge about a customer’s previous buying habits. …

Clustering is a process to help identify high value customers. Perhaps buying specific items, or purchasing with a specific frequency is most common amongst particular customer segments. Clustering uses machine learning algorithms to uncover the hidden gold in customer transaction data that would let a business identify and better target their most valuable clients, even if this did not appear immediately obvious from any one-off purchase.”
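To make the last item concrete, here is a minimal clustering sketch: k-means run over a small, entirely invented table of customer transaction features (orders per month and average basket value), used to surface a high-value segment. The feature choices, cluster count, and data are assumptions for illustration only, not Swift IQ’s service or Enterra’s engine.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Invented transaction features per customer: [orders per month, average basket value].
customers = np.vstack([
    rng.normal([1.0, 20.0], [0.3, 5.0], size=(50, 2)),    # occasional small-basket shoppers
    rng.normal([4.0, 35.0], [0.8, 8.0], size=(50, 2)),    # regular mid-size shoppers
    rng.normal([8.0, 90.0], [1.5, 15.0], size=(10, 2)),   # rare but high-value customers
])

# Cluster the customers into three segments based on their purchase behavior.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)

# Summarize each segment so the high-value cluster stands out.
for label in range(3):
    segment = customers[kmeans.labels_ == label]
    print(f"segment {label}: {len(segment)} customers, "
          f"avg basket ${segment[:, 1].mean():.0f}")
```

Even in this toy version, the small high-spending group separates cleanly from the rest, which is the “hidden gold” in transaction data that Boyd describes.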

I began this post with the idea that machines might learn the same way that humans do. That may become even more true as a consequence of a new breakthrough by Harvard researchers. “In a development that may enable a wholly new approach to artificial intelligence, researchers at Harvard University’s School of Engineering and Applied Sciences (SEAS) have invented a type of transistor that can learn in ways similar to a neural synapse. Called a synaptic transistor, the new device self-optimizes its properties for the functions it has carried out in the past.” [“Harvard scientists develop a transistor that learns,” by Brian Dodson, Gizmag, 7 November 2013] Cognitive computing is an exciting field in which to be involved and I believe it has a very bright future.
