No, Facebook’s chatbots weren’t planning to overthrow humanity


Facebook’s AI team made headlines around the world this week when they announced that they were shutting down a chatbot project after their chatbots developed a new language to communicate with each other more effectively–a language humans couldn’t understand. The announcement tapped into a long-held human fear: that our machines could rise up and overthrow us. Dozens of news networks published pieces with scary-sounding headlines about artificial intelligence takeovers. They couldn’t have been further from the truth. For hundreds of years, humanity has feared new technologies because we fear what we do not understand, and Hollywood has capitalized on that fear, making movie after movie in which robots enslave or destroy us. But that’s not what was happening with Facebook’s chatbot project.

The project

A few weeks ago, Facebook announced a project aimed at creating chatbots that could be trained to negotiate and strike deals with each other. The thinking behind this is that deal-making is an integral part of interpersonal communication and the world of business, and if chatbots are going to play a role in that future, then they need negotiating skills. Facebook’s team used machine learning, feeding the program real-world examples of negotiations–actual transcripts it could mine for patterns in order to “understand” how negotiations work. Next, they had two chatbots negotiate with each other, instructing them to divide a collection of items between themselves. The hope was that they would make deals: “If I can have this item and this item, you can have that item and that item,” and so on. It didn’t work out that way.
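To make the setup concrete, here is a toy sketch of the item-division task described above. Everything in it–the item pool, the private valuations, and the greedy stand-in for a learned negotiation policy–is an illustrative assumption, not Facebook’s actual code.

```python
# Toy sketch of the item-division negotiation task (illustrative only).
# Two agents value a shared pool of items differently and split it:
# agent A starts by claiming everything, then concedes its least-valued
# items one at a time until B's share is worth at least half of B's
# total valuation of the pool.

ITEMS = {"book": 1, "hat": 2, "ball": 3}  # pool: item -> quantity

def proposal_value(values, share):
    """Value of a proposed share under an agent's private valuation."""
    return sum(values[item] * count for item, count in share.items())

def negotiate(values_a, values_b):
    """Greedy stand-in for a learned negotiation policy."""
    total_b = proposal_value(values_b, ITEMS)
    # A concedes its least-valued items first.
    concede_order = sorted(ITEMS, key=lambda i: values_a[i])
    claim = dict(ITEMS)  # A starts by claiming everything
    for item in concede_order:
        leftover = {i: ITEMS[i] - claim[i] for i in ITEMS}
        if proposal_value(values_b, leftover) * 2 >= total_b:
            return claim, leftover  # B accepts this split
        claim[item] = 0  # A gives up this item and offers again
    return {}, dict(ITEMS)  # A concedes everything

# Example: A mostly wants balls, B mostly wants hats.
a_vals = {"book": 1, "hat": 0, "ball": 3}
b_vals = {"book": 0, "hat": 4, "ball": 1}
split_a, split_b = negotiate(a_vals, b_vals)
```

With these valuations, A keeps the book and the balls while B takes both hats–the kind of mutually acceptable split the researchers hoped the real bots would learn to reach through dialogue.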

What went wrong?

The chatbots evidently diverged from human speech, instead speaking a gibberish version of it. Make no mistake: this wasn’t some advanced language designed to hide their motives from human observers. It went something like this:

Bob: “i can i i everything else . . . . . . . . . . . . . .”

Alice: “balls have zero to me to me to me to me to me to me to me to me to…”

What does it all mean?

The end result didn’t surprise the researchers (though it certainly disappointed them), nor should it have surprised us. Artificial intelligence can do some incredible things. In fact, the same machine-learning principles Facebook used to teach machines to negotiate have already been used successfully to build programs that beat the most intelligent humans at their own game, be it chess or some other strategic game. But language is far more complex than chess, and far harder to teach machines.

Not a complete failure

The researchers didn’t shut down their negotiating bots because they feared a robot apocalypse. They shut the project down because chatbots that communicate in a way humans can’t understand offer no benefit to humanity. The team will go back to the drawing board, add parameters that prevent the bots from drifting into gibberish, and try again. This is how artificially intelligent chatbots are developed: through trial and error.
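One way to picture such parameters: score each candidate message by its negotiation payoff plus a measure of how human-readable it is, so gibberish is penalized even when it closes deals. This is a hedged illustration of the general idea–the scoring function and tiny vocabulary below are assumptions for the sketch, not the team’s published method.

```python
# Illustrative sketch: combine task reward with a readability term so
# that "i can i i everything else"-style output scores worse than plain
# English, even if both would win the negotiation.

def english_likeness(message, vocabulary):
    """Crude proxy for a language-model score: fraction of words found
    in the vocabulary, minus a penalty for immediate repetition."""
    words = message.split()
    if not words:
        return 0.0
    known = sum(1 for w in words if w in vocabulary)
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return known / len(words) - repeats / len(words)

def utterance_score(task_reward, message, vocabulary, weight=0.5):
    """Negotiation payoff plus a weighted readability bonus."""
    return task_reward + weight * english_likeness(message, vocabulary)

vocab = {"i", "can", "have", "the", "ball", "you", "take", "hats"}
gibberish = "i can i i everything else"
sensible = "i take the ball you have the hats"
```

Under this scoring, two messages that win the same deal no longer tie: the readable one is preferred, nudging the bots back toward language humans can follow.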

Mobile Technology News brought to you by biztexter.com

Source: wired.com/story/facebooks-chatbots-will-not-take-over-the-world/
