IBM's debating AI is here to convince you that you're wrong
(Bloomberg) -- Artificial intelligence has proven itself adept at some of humanity’s favorite games, from chess to the much more complex board game Go. Now, a machine developed by IBM is challenging humans to debates about the future of medicine and the value of physical education.
International Business Machines Corp.’s Project Debater faced off in public for the first time Monday, taking on a crack two-person team of humans, which included the 2016 Israeli national debate champion. The robot held its own during two short debates, and at moments, showed more than a little flair. It even convinced people in the audience to change their minds.
“There is a lot at stake today,” the machine said as its opener in a debate where it took the position that society should increase use of telemedicine. “Especially for me.” Later, the robot lamented that it couldn’t say the situation made its blood boil because “I have no blood.”
Jokes aside, a machine that probes and questions human arguments has clear real-world applications. Lawyers could ask it to search thousands of court cases and pull out the most viable arguments, CEOs could put their theses to the test before pitching a strategy to the board, and teachers could employ it at scale to help students develop critical thinking skills without needing one-on-one sessions.
“It’s clear that something like that is relevant to anything that has to do with decision making,” said Ranit Aharonov, manager of the debating technologies team at IBM in Haifa, Israel.
IBM has a history of using publicity stunts to cast an aura around its technology. The bot debater demonstrated in San Francisco isn't being deployed commercially yet, and IBM's record on bringing artificial intelligence to the real world is mixed. In 2011, the company entered its Watson AI technology in the quiz show Jeopardy. The machine won, but IBM still doesn't disclose exactly how much revenue Watson generates.
Project Debater works by cobbling together multiple algorithms and AI techniques, detailed in academic papers put out by IBM Research over the last several years. Once the computer is told the topic, it scans a database of millions of news and academic articles, using an algorithm to decide which snippets of text are relevant and “argumentative.” Another algorithm cuts repetitions. During the debate, a voice recognition system listens to the machine’s opponent, adding another layer where things could go wrong if the robot mishears. Project Debater can attempt any argumentative topic, whether it has been trained on it or not, researchers said.
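The retrieve-score-deduplicate pipeline described above can be sketched very roughly in code. Everything here is an illustrative assumption: the keyword-based scoring heuristic, the toy corpus, and the word-overlap deduplication are stand-ins for the trained models IBM's researchers describe in their papers.

```python
# Toy sketch of an argument-mining pipeline: score snippets for relevance
# and "argumentativeness," then pick the top ones while dropping near-
# duplicates. The markers and heuristics are invented for illustration.

ARGUMENT_MARKERS = {"should", "because", "therefore", "evidence", "benefit", "risk"}

def score_snippet(snippet: str, topic: str) -> float:
    """Crude relevance-plus-argumentativeness score for one text snippet."""
    words = snippet.lower().split()
    relevance = sum(w in topic.lower().split() for w in words)
    argumentative = sum(w.strip(".,") in ARGUMENT_MARKERS for w in words)
    return relevance + 0.5 * argumentative

def select_arguments(corpus: list[str], topic: str, k: int = 2) -> list[str]:
    """Rank snippets, then keep the top k that aren't near-duplicates."""
    ranked = sorted(corpus, key=lambda s: score_snippet(s, topic), reverse=True)
    chosen: list[str] = []
    for snippet in ranked:
        words = set(snippet.lower().split())
        # Skip a snippet if most of its words already appear in a chosen one.
        if all(len(words & set(c.lower().split())) / len(words) < 0.6
               for c in chosen):
            chosen.append(snippet)
        if len(chosen) == k:
            break
    return chosen

corpus = [
    "Telemedicine should expand because it lowers costs.",
    "Telemedicine should expand because it lowers cost for patients.",
    "Remote care brings specialists to rural patients, a clear benefit.",
    "The weather was pleasant on Monday.",
]
picked = select_arguments(corpus, topic="telemedicine expansion")
```

On this toy corpus, the off-topic weather sentence scores zero and the second cost snippet is dropped as a near-duplicate of the first; a real system would replace each heuristic with a learned model.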
During the debate at IBM’s offices in San Francisco, the robot spoke clearly, used correct grammar, generated relevant points and responded to arguments its human opponent made. After laying out its position on telemedicine, the robot changed the minds of nine audience members who had previously not agreed with the notion that society should increase telemedicine’s use.
In that debate and an earlier one on whether the government should support space exploration, the audience of mostly journalists and analysts said the robot scored better than the human debaters on enriching their knowledge. But the human debaters scored better on delivery, perhaps because the robot sounded, well, robotic.
The silly self-referential comments, such as “I have no blood,” were intentional and desirable, said Aharonov. “We thought about what kind of persona we should have,” she said. “We’re trying to show this is a computer.” Recently, Google drew criticism for letting a personal-assistant robot pass itself off as human during a phone call.
The IBM project started in 2012, when the robot lacked the urbanity it showed off Monday. In one early debate on physical education, it kept bringing up sex education, researchers said. Another time, during a debate about pornography, it detoured into the topic of procreation and mentioned that, as a robot, it couldn’t have children of its own.
“Sometimes, it is making a joke at not the right moment,” said Noam Slonim, a senior researcher on the project.