Computers Get Creative, Part 2

Stephen DeAngelis

September 07, 2012

In Part 1 of this two-part series, I began the discussion of whether creative computers can be labeled intelligent. I drew primarily from two articles. The first was written by Marcus du Sautoy, Simonyi Professor for the Public Understanding of Science and a professor of mathematics at the University of Oxford. [“AI robot: how machine intelligence is evolving,” The Observer, 31 March 2012] The second was written by Christopher Steiner, author of a recently published book entitled Automate This. [“Automatons Get Creative,” Wall Street Journal, 17 August 2012] Du Sautoy argued, “For me, a test of whether intelligence is beginning to emerge is when you seem to be getting more out than you put in.” Steiner, on the other hand, believes that computers can do smart things using algorithms (including creative acts), but he doesn’t necessarily argue that running algorithms makes computers intelligent.

Du Sautoy reports that “one of the most striking experiments in AI is the brainchild of the director of the Sony lab in Paris, Luc Steels.” Du Sautoy describes what Steels is doing:

“[Steels] has created machines that can evolve their own language. A population of 20 robots are first placed one by one in front of a mirror and they begin to explore the shapes they can make using their bodies in the mirror. Each time they make a shape they create a new word to denote the shape. For example the robot might choose to name the action of putting the left arm in a horizontal position. Each robot creates its own unique language for its own actions. The really exciting part is when these robots begin to interact with each other. One robot chooses a word from its lexicon and asks another robot to perform the action corresponding to that word. Of course the likelihood is that the second robot hasn’t a clue. So it chooses one of its positions as a guess. If they’ve guessed correctly the first robot confirms this and if not shows the second robot the intended position. The second robot might have given the action its own name, so it won’t yet abandon its choice, but it will update its dictionary to include the first robot’s word. As the interactions progress the robots weight their words according to how successful their communication has been, downgrading those words where the interaction failed. The extraordinary thing is that after a week of the robot group interacting with each other a common language tends to emerge. By continually updating and learning, the robots have evolved their own language. It is a language that turns out to be sophisticated enough to include words that represent the concept of ‘left’ and ‘right’. These words evolve on top of the direct correspondence between word and body position. The fact that there is any convergence at all is exciting but the really striking fact for me is that these robots have a new language that they understand yet the researchers at the end of the week do not comprehend until they too have interacted and decoded the meaning of these new words.”
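
The interaction loop du Sautoy describes is a version of what researchers call a “naming game,” and it is simple enough to simulate. Below is a minimal sketch, not Steels's actual code: the number of actions, the update weights, and the word-invention scheme are all invented for illustration.

```python
import random

N_ROBOTS = 20        # the population size in Steels's experiment
N_ACTIONS = 8        # distinct body positions, e.g. "left arm horizontal"
N_ROUNDS = 30000     # hypothetical; stands in for "a week of interacting"

rng = random.Random(42)

def invent_word():
    return "".join(rng.choice("abcdefghij") for _ in range(4))

class Robot:
    def __init__(self):
        # each robot starts with its own private word for every action
        self.lexicon = {a: {invent_word(): 1.0} for a in range(N_ACTIONS)}

    def word_for(self, action):
        weights = self.lexicon[action]
        return max(weights, key=weights.get)   # highest-weighted word

    def action_for(self, word):
        best, best_weight = None, 0.0
        for action, weights in self.lexicon.items():
            if weights.get(word, 0.0) > best_weight:
                best, best_weight = action, weights[word]
        return best                            # None if the word is unknown

def play_round(speaker, hearer):
    action = rng.randrange(N_ACTIONS)
    word = speaker.word_for(action)
    success = hearer.action_for(word) == action
    if success:
        # both upgrade the word that worked
        speaker.lexicon[action][word] += 1.0
        hearer.lexicon[action][word] += 1.0
    else:
        # the speaker "shows" the intended action: the hearer records the
        # word without abandoning its own, and the failed word is downgraded
        hearer.lexicon[action][word] = hearer.lexicon[action].get(word, 0.0) + 0.5
        speaker.lexicon[action][word] = max(0.1, speaker.lexicon[action][word] - 0.2)
    return success

robots = [Robot() for _ in range(N_ROBOTS)]
outcomes = [play_round(*rng.sample(robots, 2)) for _ in range(N_ROUNDS)]

early = sum(outcomes[:2000]) / 2000
late = sum(outcomes[-2000:]) / 2000
print(f"success rate: first 2,000 rounds {early:.2f}; last 2,000 rounds {late:.2f}")
```

Even with this toy update rule, later interactions succeed far more often than early ones, mirroring the convergence toward a common vocabulary that du Sautoy describes.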

There is no denying that, given the right programming, computers can learn. Does learning make them intelligent? Maybe. Does it make them sentient? Probably not. Nevertheless, Steiner reports that computers can do some amazing things, including identifying potential pop chart hits. He reports:

“Music would seem an unlikely entry point for algorithms, but they have arrived. In 2004, the New Zealander Ben Novak was just another guitar-strummer songwriter hoping to crack into music with a record deal. On a whim, he paid $50 to upload one of his songs to a website that claimed to have an algorithm capable of finding hits. The algorithm gave Mr. Novak’s song a rare and lofty score, putting it on par with classics such as ‘Take it Easy’ by the Eagles and Steppenwolf’s ‘Born to Be Wild.’ The algorithm belonged to Mike McCready, who connected Mr. Novak with a label. The single, ‘Turn Your Car Around,’ eventually landed near the top of the European charts. Mr. McCready now runs Music Xray, a three-year-old start-up seeking to democratize the music business. Comparing the structure of a song to tunes of the past, the algorithm grades it for hit potential. Mr. McCready’s algorithm rightly predicted the success of Norah Jones and of the band Maroon 5 before they were major artists.”
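
Music Xray's model is proprietary, but the basic idea Steiner describes, grading a song by comparing its structure to past hits, can be sketched with a simple similarity measure. The features and numbers below are hypothetical; a real system would extract many more signals from the audio.

```python
import math

# Hypothetical song features, each scaled to roughly [0, 1]:
# tempo, harmonic complexity, melodic repetition, duration.
PAST_HITS = [
    [0.60, 0.40, 0.70, 0.55],
    [0.65, 0.35, 0.75, 0.50],
    [0.55, 0.45, 0.65, 0.60],
]

def centroid(vectors):
    """Average feature vector of the past hits."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hit_score(song, past_hits):
    """Grade a song 0-100 by its similarity to the average past hit."""
    return round(100 * cosine(song, centroid(past_hits)))

close = [0.62, 0.38, 0.72, 0.53]   # structurally near the past hits
far = [0.10, 0.95, 0.05, 0.90]     # structurally unlike them
print(hit_score(close, PAST_HITS), hit_score(far, PAST_HITS))
```

A song whose structure sits near the cluster of past hits scores higher than one far from it, which is the pattern-recognition intuition behind grading "hit potential."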

Pattern recognition is a well-known use of algorithms, but recognizing hit tunes is different from creating them. If you read yesterday’s post, however, you know that du Sautoy reported that “engineers at Sony’s Computer Science Laboratory in Paris are beginning to produce machines that create new and unique forms of musical composition.” Those machines have even been able to “do jazz improvisation live with human players.” Steiner reports that the music industry was slow to accept hit-picking algorithms but that it is coming around, as are moviemakers. He explains:

“The people who guard the gates of big music labels guffawed at the prospect of a hit-picking algorithm, but Music Xray has now secured recording deals for more than 5,000 artists. The music industry can no longer ignore the algorithm. Movies, too, can be sorted quantitatively. Analyzing only the script, an algorithm from Epagogix, a risk-management firm that caters to the entertainment industry, predicts box office grosses. Epagogix broke into the business when a major studio allowed the firm to analyze script data for nine yet-to-be released films. In six of the nine cases, its predictions were spot-on. Algorithms have since become an essential tool in Hollywood.”

Steiner insists that “algorithms can already grade essays as well as the best human graders,” but there are more than a few skeptics of that claim. Among them is Les Perelman, director of the student writing program at MIT. “Perelman recently tried out a computer essay grading program made by testing giant Educational Testing Service. ‘Of the 12 errors noted in one essay, 11 were incorrect,’ Perelman says. ‘There were a few places where I intentionally put in some comma errors and it didn’t notice them. In other words, it doesn’t work very well.’ Perelman says any student who can read can be taught to score very highly on a machine-graded test.” [Molly Bloom, NPR, 7 June 2012] Steiner believes that over time computers can overcome such shortcomings. He goes on to report that critics of creative AI raise another concern. He explains:

“As algorithms turn more of the subjective domain of human creativity into objective tasks, some observers worry about cultural homogeneity. Are we doomed to a future of uniform harmonies and standardized sentences? Hopefully not, but the advent of creative machines certainly will make it harder for humans to stand out. It may be that only distinct and exceptional talents—Nirvana, the Coen Brothers, Jonathan Franzen—will be able to defend our claims to creative superiority.”

Steiner goes on to note that algorithms already affect our lives in a number of ways — some of them very personal. Simon Dell, director of TwoCents Group, an Australian marketing, advertising and branding company, believes that as a result of increased use of algorithms, “We’re in danger of losing the spontaneity in our lives. Well, at least our digital lives.” [“Algorithms want to rule the world,” posted by Peter Roper, Marketing Magazine, 10 October 2011] Dell is not alone in his concern about how algorithms are trying to “rule the world.” During a 2011 TEDGlobal conference, Kevin Slavin argued “that we’re living in a world designed for — and increasingly controlled by — algorithms.” During his presentation, he warned “that we are writing code we can’t understand, with implications we can’t control.” I suspect that the kind of surprises Slavin talked about are very different from the kind of surprises du Sautoy believes are creative and demonstrate machine intelligence.

Steiner notes that one thing humans bring to their assessments that algorithms lack is emotion (although he does report that some algorithms perform electronic psychological analysis that “divides people into six sorts of personalities”). He concludes his article with one final example.

“Psyche-assessing bots are also playing for far higher stakes. In matters of national security, deciding how to handle new threats can be an inscrutable puzzle. To make informed decisions on these issues, the U.S. will spend more than $50 billion in 2012 on its spy and intelligence agencies. Much of this work involves predicting the behavior of capricious regimes. As it turns out, algorithms are often superior to humans at evaluating such situations. For more than a decade, the CIA has been paying Bruce Bueno de Mesquita, a political-science professor at New York University and a senior fellow at the Hoover Institution, to build predictive algorithms based on the elaborate scenarios of game theory. His results—across more than 1,700 political and military predictions—have been correct twice as often as those of the CIA’s own analysts, according to a declassified CIA report. Mr. Bueno de Mesquita doesn’t like to brag, but when he talks about what separates the assessments of game-theory algorithms from those of humans, he points out that top professionals, including Ivy League-educated intelligence analysts, tend to be obsessed with personal back stories and gossip that will have no effect on the future. Algorithms, he stresses, couldn’t care less.”
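
Bueno de Mesquita's models are considerably more elaborate, iterating over strategic challenges among the players, but one commonly described ingredient of such game-theoretic forecasts is a weighted mean of the actors' policy positions, with each actor weighted by its capability and by how much it cares about the issue. The sketch below illustrates only that first step; the actors and numbers are invented.

```python
# Each actor: policy position on a 0-100 scale, capability (resources it
# can bring to bear), and salience (how much it cares about this issue).
# All values are invented for illustration.
actors = [
    # (name, position, capability, salience)
    ("hardliners", 10.0, 0.30, 0.9),
    ("moderates", 60.0, 0.45, 0.6),
    ("outside power", 90.0, 0.25, 0.4),
]

def forecast(actors):
    """Capability- and salience-weighted mean position: a common first-cut
    prediction of where a negotiation will settle."""
    weights = [cap * sal for _, _, cap, sal in actors]
    total = sum(weights)
    return sum(w * pos for w, (_, pos, _, _) in zip(weights, actors)) / total

print(round(forecast(actors), 1))
```

Because the forecast weighs positions by power and attention rather than by personalities or back stories, it illustrates Bueno de Mesquita's point that such models "couldn't care less" about gossip.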

It’s exactly the fact that algorithms couldn’t care less that leaves people believing we are a long way from developing computers that have real intelligence. Professor du Sautoy concludes:

“Turing might be disappointed that in his centenary year there are no machines that can pass themselves off as humans but I think that he would be more excited by the new direction artificial intelligence has taken. The AI community is no longer obsessed with reproducing human intelligence, the product of millions of years of evolution, but rather in evolving something new and potentially much more exciting.”

There are futurists who long for the day when a sentient computer is created. Paul G. Allen and Mark Greaves acknowledge the possibility that an artificial human brain could be built in the future, but they insist that such a development is not inevitable. [“The Singularity Isn’t Near,” Technology Review, 12 October 2011] In the meantime, very smart people will continue to write algorithms that can be applied to very real problems. A few smart people will also write programs that allow machines to paint, to make music, and to write poetry. The machines that carry out these tasks might not be intelligent, but they’ll surely be smart.