Researchers at the Massachusetts Institute of Technology are hoping AI can help untangle a particularly stubborn knot: America's polarized political landscape.
The Collective Debate project, developed at the MIT Media Lab, invites users to debate an artificial intelligence agent on a potentially divisive question. The idea is to nudge users toward moderation by presenting opposing facts and figures on the specific issue. If you identify as liberal, the AI agent will argue a conservative position, and vice versa.
Whichever way you lean, left or right, the AI leans the other way. It's not just an argument generator, though. The Collective Debate AI actually “listens” to your contention, makes an educated guess about your point of view, then serves up the counterarguments deemed most likely to nudge you toward a more moderate position.
“I conceived Collective Debate as a system that would collect opinions on a controversial issue from anyone in the world and algorithmically organize them so that people could see what are the most common arguments for or against a position,” said Ann Yuan, a research assistant at MIT Media Lab and project designer.
The system begins with a “moral matrix” questionnaire, which measures how strongly a user identifies with five moral foundations: harm, fairness, purity, authority, and ingroup. The responses help establish where the participant lands on a three-dimensional data map of “liberal” and “conservative” values.
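The Media Lab hasn't published the scoring details, but the bookkeeping behind such a questionnaire is easy to sketch. In the toy Python below, Likert-style answers are averaged into a score for each of the five foundations and then projected onto a single left-right axis. The sample answers, the averaging, and the projection (which follows the rough pattern in moral foundations research that liberals weight harm and fairness most heavily) are all illustrative assumptions, not the project's actual model.

```python
# Toy scoring for a "moral matrix" questionnaire. The five foundation
# names come from the article; the questions, sample answers, and the
# left-right projection below are invented for illustration.

FOUNDATIONS = ["harm", "fairness", "purity", "authority", "ingroup"]

def score_questionnaire(answers: dict[str, list[int]]) -> dict[str, float]:
    """Average each foundation's 1-5 Likert answers into one score."""
    return {f: sum(answers[f]) / len(answers[f]) for f in FOUNDATIONS}

def left_right_estimate(scores: dict[str, float]) -> float:
    """Toy projection onto one axis: moral foundations research suggests
    liberals weight harm/fairness most, while conservatives weight all
    five more evenly. Negative = leans left, positive = leans right."""
    individualizing = (scores["harm"] + scores["fairness"]) / 2
    binding = (scores["purity"] + scores["authority"] + scores["ingroup"]) / 3
    return binding - individualizing

if __name__ == "__main__":
    answers = {
        "harm": [5, 4], "fairness": [5, 5],
        "purity": [2, 1], "authority": [2, 3], "ingroup": [1, 2],
    }
    scores = score_questionnaire(answers)
    print(scores)
    print(left_right_estimate(scores))  # negative here, i.e. leans left
```

The actual system maps users into three dimensions rather than onto a single axis, but the principle is the same: turn questionnaire answers into coordinates that place each participant relative to typical liberal and conservative response patterns.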
Next, the user is asked to indicate whether he or she agrees with the statement: “In computer science, differences in professional outcomes between men and women are primarily the result of socialization and bias.”
Using an interactive pointer, users can indicate how strongly they agree or disagree, and how confident they are in their position.
Yuan chose the statement about bias in the computer sciences after reading about the so-called Google memo, the internal document that got Google software engineer James Damore fired after he criticized the company's diversity policies.
“Damore argued that Google’s policies reflected a belief that the lack of female software engineers is attributable to socialization and bias alone, whereas Damore believes that differences in natural aptitude or interests also play a role,” Yuan told Seeker.
“I chose to base Collective Debate around this issue because it's political in an interesting way: In general, liberals disagree with Damore, while conservatives agree with him,” she said. “But if you look more closely the dividing line isn't so clear. People seem also to be divided according to their scientific leanings, moral outlooks, etc.”
Yuan also believed the issue was ripe for debate because many people had strong feelings about the controversy.
“Also, there were a lot of high-quality op-eds written on the issue from which I could mine arguments for and against Damore’s position,” she said.
“The system is artificially intelligent in that it attempts to optimize for a certain outcome based on data,” Yuan said. “Specifically, the system tries to get users to either change their minds or become more moderate, and tries to prevent users from becoming more extreme. It does so by observing past users and building predictive models of how users will behave.”
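Yuan doesn't specify what those predictive models look like, but her description maps onto a simple selection loop: score each stored counterargument by its predicted chance of moderating this particular user, using a model fit on how past users shifted, and show the top scorer. The sketch below uses a logistic regression and a made-up feature encoding; both are assumptions for illustration, not the system's actual design.

```python
# Toy version of the selection loop Yuan describes: fit a model on how
# past users shifted, then show the counterargument with the highest
# predicted chance of moderating the current user. The logistic
# regression and feature encoding are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past observations: [stance in -1..1, confidence in 0..1, argument id],
# labeled 1 if that user moved toward the middle afterward, else 0.
X_train = np.array([
    [0.8, 0.9, 0], [0.7, 0.4, 1], [-0.6, 0.5, 2],
    [0.9, 0.8, 1], [-0.8, 0.7, 0], [0.5, 0.3, 2],
])
y_train = np.array([0, 1, 1, 0, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def pick_counterargument(stance: float, confidence: float, n_args: int = 3) -> int:
    """Return the id of the stored argument the model predicts is most
    likely to nudge this user toward a more moderate position."""
    candidates = np.array([[stance, confidence, a] for a in range(n_args)])
    return int(np.argmax(model.predict_proba(candidates)[:, 1]))

print(pick_counterargument(stance=0.8, confidence=0.9))
```

In practice one would encode arguments more carefully, or treat the problem as a bandit that keeps learning as new users debate, but the shape of the optimization is the same: pick the message with the highest predicted payoff.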
But people can be weird.
“The punchline is that after the debate people tended to move toward the middle and toward the extremes,” Yuan said. “About 15 percent of users either changed their minds completely or moved towards the middle. But about 12 percent of users moved towards the extremes — they started out only moderately agreeing with the claim but ended up strongly agreeing with it.”
Yuan says the 12 percent figure is likely a demonstration of the “backfire effect,” the finding from psychology that people sometimes dig in and hold their opinions more firmly when confronted with arguments that challenge them.
Still, the Collective Debate project succeeds as a proof of concept that AI can potentially help bridge our deeply divided discourse, especially online. Yuan hopes that the technology could eventually find practical applications in law and conflict resolution.
“My goal in building this project was not necessarily to try to change anyone’s mind on an issue, but rather to try to help people see value in the other side’s position,” Yuan said. “The hope is that we could use this understanding to build technologies that enable more productive political discourse by telling people exactly what they need to hear in order to see the other side.”