Something unexpected happened recently at the Facebook Artificial Intelligence (AI) Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, had started communicating in a non-human language. Researchers promptly shut the system down over concerns that they might lose control of the AI.
In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with Facebook users.) When The Atlantic wrote about all this last week, lots of people reacted with some degree of trepidatious wonder. Machines making up their own language is really cool, sure, but isn't it also a little terrifying?
And also: What does this language actually look like? Here’s an example of one of the bot negotiations that Facebook observed:
Not only does this appear to be a nonsense conversation, but the bots don't really seem to be getting anywhere in the negotiation. Alice isn't budging from her original position, anyway. The weird thing is, Facebook's data shows that conversations like this sometimes still led to successful negotiations between the bots in the end, a spokesperson from the AI lab said. (In other cases, when researchers adjusted their model, the bots would develop bad negotiating strategies, even if their conversation remained interpretable by human standards.)
One way to think about all this is to consider cryptophasia, the name for the phenomenon when twins make up their own secret language, understandable only to them. Perhaps you recall the 2011 YouTube video of two exuberant toddlers chattering back and forth in what sounds like a lively, if inscrutable, dialogue.