Robots: Rise of the Machines

Stephen DeAngelis

May 22, 2014

MIchael Dobie reports, “Robotic help-wanted ads were up 13 percent in 2013.” [“The rise of the machines — it’s happening,” Newsday, 18 April 2014] You might be wondering, along with Dobie, “Who exactly answers such an ad?” He remarks, “If it’s the robot, I’m thinking you’ve got a keeper.” From all of the discussions going on about artificial intelligence (AI) and robots, it wouldn’t be too surprising to learn that some robots probably could answer the ad — but they might not have the skills to fill the job. Most robots continue to be developed to serve specific purposes. Robbie the Robot-like machines are getting closer, but still remain mostly in the realm of science fiction. But as Dobie notes, “Some 168,000 industrial robots were sold in 2013, a 5 percent jump from 2012. Artificial intelligence abounds these days, and not just in the movies.” He continues:

“Not long ago, robots were repositories of great hopes and greater fears. Science fiction writers and filmmakers saw in them both the triumph and dark side of technology. But as startups, robots were small time. They couldn’t do much and weren’t that smart. As technology improved — digital sensors and more powerful silicon chips, for example — so did robots. Now they’re attracting important investment dollars.”

Patrick Tucker asserts that concerns about the dark side of robots with artificial intelligence have not subsided. He reports, for example, “Steven Omohundro says that ‘anti-social’ artificial intelligence in the future is not only possible, but probable, unless we start designing AI systems very differently today.” [“Why There Will Be A Robot Uprising,” Defense One, 17 April 2014] Tucker points out, however, that “computer systems perceive the world through a narrow lens, the job they were designed to perform. … Computer programs think of every decision in terms of how the outcome will help them do more of whatever they are supposed to do.” He continues:

“For the most part, we want machines to operate exactly this way. The problem, by Omohundro’s logic, is that we can’t appreciate the obsessive devotion of a computer program to the thing it’s programmed to do. Put simply, robots are utility function junkies. Even the smallest input that indicates that they’re performing their primary function better, faster, and at greater scale is enough to prompt them to keep doing more of that regardless of virtually every other consideration. That’s fine when you are talking about a simple program … but becomes a problem when AI entities capable of rudimentary logic take over weapons, utilities or other dangerous or valuable assets. In such situations, better performance will bring more resources and power to fulfill that primary function more fully, faster, and at greater scale. More importantly, these systems don’t worry about costs in terms of relationships, discomfort to others, etc., unless those costs present clear barriers to more primary function. This sort of computer behavior is anti-social, not fully logical, but not entirely illogical either. Omohundro calls this approximate rationality and argues that it’s a faulty notion of design at the core of much contemporary AI development.”

Mike Adams concludes, “What Omohundro is really getting at is the inescapable realization that the military’s incessant drive to produce autonomous, self-aware killing machines will inevitably result in the rise of AI Terminators that turn on humankind and destroy us all.” [“Scientists Warn the Rise of AI Will Lead to Extinction of Humankind,” Epoch Times, 19 April 2014] It would nice to be able to dismiss Omohundro’s claims as outlandish, Casandra-like ravings; but, humankind seems always to find a way to turn potentially beneficial scientific breakthroughs into weapons of destruction. Omohundro reasons that any weapon system programmed to maximum its effectiveness has the capability to take things too far. Tucker explains:

“The math that explains why that is Omohundro calls the formula for optimal rational decision making. It speaks to the way that any rational being will make decisions in order to maximize rewards and lowest possible cost. It looks like this:

In the above model, A is an action and S is a stimulus that results from that action. In the case of utility function, action and stimulus form a sort of feedback loop. Actions that produce stimuli consistent with fulfilling the program’s primary goal will result in more of that sort of behavior. That will include gaining more resources to do it. For a sufficiently complex or empowered system, that decision-making would include not allowing itself to be turned off, take, for example, a robot with the primary goal of playing chess.”

Omohundro is not alone in his concerns about weapon systems that employ artificial intelligence. Adams reports, “This very same scenario is discussed in detail in the fascinating book Our Final Invention – Artificial Intelligence and the End of the Human Era by James Barrat.” And, as I have previously reported, Lord Martin Rees, co-founder of the Centre for the Study of the Existential Risk at Cambridge University, believes “we should ensure that robots remain as no more than ‘idiot savants’ – lacking the capacity to outwit us, even though they may greatly surpass us in the ability to calculate and process information.” [“Will robots take over the world?” Phys.org, 30 July 2013] Omohundro, like Lord Rees, believes we should be careful how we develop and employ AI systems. Tucker explains:

“The problem of an artificial intelligence relentlessly pursuing its own goals to the obvious exclusion of every human consideration is sometimes called runaway AI. The best solution, [Omohundro] says, is to slow down in our building and designing of AI systems, take a layered approach, similar to the way that ancient builders used wood scaffolds to support arches under construction and only remove the scaffold when the arch is complete. That approach is not characteristic of the one we are taking today, putting more and more resources and responsibility under the control of increasingly autonomous systems.”

As president and CEO of a company that uses AI in its business solutions, I’m obviously interested in the concerns that people have about the use of artificial intelligence. I also want to ensure that people understand the upside of technologies (not just robots) that employ artificial intelligence. Dobie notes:

“Folks are devising more ingenious uses for robots. We’re talking way beyond the dozens of robotic vacuum cleaners on the market. Meet Sandy and Rosie, two robots sandblasting the iconic Sydney Harbour Bridge. And border patrol robots reportedly used by Israel and South Korea. And robots used to manipulate lights and cameras on the set of the movie ‘Gravity.’ And Baxter, a robot with two arms and a friendly interface that was programmed by Cornell University students to work in a supermarket checkout lane. Driverless cars are on the horizon. Take me to the office, Jeeves2053c.”

There are a lot of issues associated with the rise of the machine. “Not the least of which,” writes Dobie, “is their effect on the labor force. They will improve efficiency, yes, but how many workers will they displace? With our identity as human beings so linked to the work we do, what happens if we become surplus labor?” Hopefully, human employees, trained with the right skills and background, will be answering the “Robot Help Wanted” ads discussed at the beginning of this post. Those jobs, however, won’t be sufficient to replace all of the workers displaced by robots. For more on that topic, read my posts entitled “Robots and Jobs, Part 1” and “Part 2.” Is the rise of the machine inevitable? The simple answer is, “Yes.” That outcome was really decided at the start of the industrial revolution. The information revolution has only changed the nature of the machines that are rising. Should we be afraid? Again, the simple answer is, “Yes.” The reasons for such fear, however, differ depending on to whom you speak. I’m personally more concerned about creating jobs for displaced workers than fighting off terminator robots.