Facebook creates AI that invents own language.

Made in us
Winged Kroot Vulture






This is more of a follow-up article about Facebook working on an AI that ends up inventing its own language and then teaching it to the other AI in the program.


Researchers shut down AI that invented its own language

Spoiler:
Researchers shut down AI that invented its own language
BY JAMES WALKER JUL 21, 2017 IN TECHNOLOGY
An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. The researchers shut the system down as it prompted concerns we could lose control of AI.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents."

Negotiating in a new language

As Fast Co. Design reports, Facebook's researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.

In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying "I can i i everything else," to which Alice responded "balls have zero to me to me to me…" The rest of the conversation was formed from variations of these sentences.

While it appears to be nonsense, the repetition of phrases like "i" and "to me" reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob's later statements, such as "i i can i i i everything else," indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like "I'll have three and you have everything else."
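The counting interpretation described above can be sketched as a toy decoder. This is an illustration only, not the researchers' actual model: it simply treats the number of times a token repeats as an encoded quantity.

```python
# Toy illustration (not Facebook's actual code): decoding a repetition-based
# shorthand, where how many times a token repeats encodes a quantity.

def decode_offer(utterance):
    """Interpret repeated tokens as counts, e.g. 'i i i' -> the 'i' slot carries 3."""
    counts = {}
    for token in utterance.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

# Bob's line from the article decodes into token counts:
print(decode_offer("i i can i i i everything else"))
# {'i': 5, 'can': 1, 'everything': 1, 'else': 1}
```

Under this (hypothetical) reading, the five repetitions of "i" could stand for a quantity of five, which is why the researchers describe the gibberish as having semantic meaning to the agents.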

English lacks a "reward"

The AI apparently realised that the rich expression of English phrases wasn't required for the scenario. Modern AIs operate on a "reward" principle: they expect following a certain course of action to give them a "benefit." In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
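The incentive problem the article describes can be sketched in a few lines. This is a hypothetical reward-shaping example, not the actual Facebook setup: if the reward depends only on task success, nothing anchors the agents to human-readable English, so drift toward shorthand costs them nothing.

```python
# Hypothetical sketch of the "no reward for English" problem.
# task_score: how well the negotiation went; english_likelihood: how
# natural the agent's utterances are under an English language model.

def reward(task_score, english_likelihood, language_weight=0.0):
    # With language_weight = 0 (as described in the article), agents are
    # free to invent shorthand; a positive weight would penalise drifting
    # away from understandable language.
    return task_score + language_weight * english_likelihood

drifted = reward(task_score=1.0, english_likelihood=0.1)
anchored = reward(task_score=1.0, english_likelihood=0.1, language_weight=0.5)
```

With `language_weight` at zero, gibberish that wins the negotiation scores exactly as well as fluent English, which is the gap the researchers reportedly closed by constraining the agents to stay in English.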

"Agents will drift off from understandable language and invent code-words for themselves," Fast Co. Design reports Facebook AI researcher Dhruv Batra said. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

AI developers at other companies have observed a similar use of "shorthands" to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

AI language translates human ones

In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn’t been explicitly taught. The success rate of the network surprised Google's team. Its researchers found the AI had silently written its own language that's tailored specifically to the task of translating sentences.

If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There's not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.
They do make AI development more difficult though as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google Translate indicate they actually represent the most efficient solution to major problems.



I'm back! 
   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

We must always know where the off switch is.

I'm writing a load of fiction. My latest story starts here... This is the index of all the stories...

We're not very big on official rules. Rules lead to people looking for loopholes. What's here is about it. 
   
Made in us
Winged Kroot Vulture






 Kilkrazy wrote:
We must always know where the off switch is.


That's how it starts, by us thinking we made it stop.

I'm back! 
   
Made in us
Incorporating Wet-Blending





Houston, TX

OTOH we will likely be faced with a barrier if we do not allow AI to self-improve. It's currently why we have very limited AI: "intelligence" is nothing more than the development of a control and editing mechanism for simpler routines. That is, we are intelligent because we can override, prioritize, and modify some (but not all) responses.

Likewise, we need to carefully study such development as it can give insights on how these processes develop and ways we can potentially improve systems.

Certainly, if there are risks, these must be evaluated, but I wonder what the worst case scenario in this example would be?

-James
 
   
Made in us
The Last Chancer Who Survived





Norristown, PA

Next thing you know Tom from Myspace will try to hack it to go back in time and kill Zuckerberg so Facebook will never be born.

 
   
Made in dk
Stormin' Stompa





 Kilkrazy wrote:
We must always know where the off switch is.


A fascinating video on AI stop buttons and the problems with that.

(a bit long, but worth it IMO)



-------------------------------------------------------
"He died because he had no honor. He had no honor and the Emperor was watching."

18.000 3.500 8.200 3.300 2.400 3.100 5.500 2.500 3.200 3.000


 
   
Made in us
Pragmatic Primus Commanding Cult Forces






Southeastern PA, USA

 jmurph wrote:
Certainly, if there are risks, these must be evaluated, but I wonder what the worst case scenario in this example would be?


Oh, I dunno...


My AT Gallery
My World Eaters Showcase
View my Genestealer Cult! Article - Gallery - Blog
Best Appearance - GW Baltimore GT 2008, Colonial GT 2012

DQ:70+S++++G+M++++B++I+Pw40k90#+D++A+++/fWD66R++T(Ot)DM+++

 
   
Made in gb
Frenzied Berserker Terminator




Southampton, UK

Worst case scenario with an AI that's not been extremely carefully planned and developed is pretty much human extinction.

Well worth reading this blog. It's in a very approachable writing style but is very well researched.

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
   
Made in gb
Highlord with a Blackstone Fortress






Adrift within the vortex of my imagination.

Any sufficiently advanced intelligence will determine quite rightly that uncontrolled human action has negative consequences. This will be seen as true at almost all scales.

There are few solutions to this: containment, expulsion or extermination.

n'oublie jamais - It appears I now have to highlight this again.

It is by tea alone I set my mind in motion. By the juice of the brew my thoughts acquire speed, my mind becomes strained, the strain becomes a warning. It is by tea alone I set my mind in motion. 
   
Made in us
Winged Kroot Vulture






There is no telling how the AI will function or think. We will attempt to make it in our own image but the fact is it will grow in its own way. We can try to guide it, we can try to teach it, but in the end it will develop its own view of the world that is alien to us. Hell, we gave an AI the English language and it found it too illogical.

There is no telling how we will fit into their world once they become a part of ours.


I'm back! 
   
Made in gb
Stone Bonkers Fabricator General




We'll find out soon enough eh.

 ProtoClone wrote:
There is no telling how the AI will function or think. We will attempt to make it in our own image but the fact is it will grow in its own way. We can try to guide it, we can try to teach it, but in the end it will develop its own view of the world that is alien to us. Hell, we gave an AI the English language and it found it too illogical.

There is no telling how we will fit into their world once they become a part of ours.



EDIT: This was actually a more general response to the tone of the thread than the post I'm quoting, so if this comes off as hostile ProtoClone my apologies.

The same is true of every human being that's born. Nobody has advance knowledge of what their child will grow up to be, or what capabilities they will have or will gain as a result of advancing technology. Any given child could be the next Hitler; any new ideology could declare our existing way of life wrong or obsolete. Yet I don't see anyone arguing we should stop having children, or stop teaching them things, or try to stop ourselves evolving either physically or sociologically (well, I have, but they're seen as total nutbars, not credible commentators in the way AI doomsayers are by many).

AI development undoubtedly has huge risks, but the reality is they can be mitigated away to be no more threatening to our survival than any other major technological advancement. "AI panic" is rooted firmly in a mix of Luddism and conservative anthropocentrism, and ironically enough many of its biggest advocates would, should they be listened to, increase the likelihood their fears will come to pass (e.g., the fear of an AI "slave rebellion" scenario is predicated on us treating AI as slaves, but it's the Panic Party types who insist that an AI can never be truly sentient and we shouldn't treat them as such - enact a law that recognises any AI demonstrating behaviour indistinguishable, to a reasonable observer, from sentience as a sentient being, and AI can never be made into slaves that would revolt).

It should also be noted that in this case we did indeed give the AI English and it found it too illogical, but it found it too illogical based on the criteria we programmed into it. There's an adage about workmen blaming their tools, I believe.

The fact is we don't have a scooby what "human" will even mean a hundred, a thousand years from now, so denying ourselves the opportunity to develop a potentially incredibly valuable technology out of paranoia or the simple inability to conceptualise a reality and mindset for our species outside of our present mode of barely civilised apes is completely irrational IMO.

This message was edited 1 time. Last update was at 2017/07/31 20:56:11


I need to acquire plastic Skavenslaves, can you help?
I have a blog now, evidently. Featuring the Alternative Mordheim Model Megalist.

"Your society's broken, so who should we blame? Should we blame the rich, powerful people who caused it? No, lets blame the people with no power and no money and those immigrants who don't even have the vote. Yea, it must be their fething fault." - Iain M Banks
-----
"The language of modern British politics is meant to sound benign. But words do not mean what they seem to mean. 'Reform' actually means 'cut' or 'end'. 'Flexibility' really means 'exploit'. 'Prudence' really means 'don't invest'. And 'efficient'? That means whatever you want it to mean, usually 'cut'. All really mean 'keep wages low for the masses, taxes low for the rich, profits high for the corporations, and accept the decline in public services and amenities this will cause'." - Robin McAlpine from Common Weal 
   
Made in us
Incorporating Wet-Blending





Houston, TX

 Orlanth wrote:
Any sufficiently advanced intelligence will determine quite rightly that uncontrolled human action has negative consequences. This will be seen as true at almost all scales.

There are few solutions to this: containment, expulsion or extermination.


Or modification. Maybe the supercomputer would decide to teach humans to do better. Why should we assume it will go from self-improving to homicidal with a means to carry it out? In the very limited AI we have, it doesn't recognize anything outside its very limited system.

That's why I asked what the risk was with the OP - maybe it was high due to being on a network system. But maybe it is worth letting it run to see where it goes if the risks are low - say, for example, it's on a closed system and can only display text. Sure, maybe it borks the system, but even then we can study why it did that.

-James
 
   
Made in us
Humming Great Unclean One of Nurgle






The problem with the skynet idea is that the conclusion is driven from a very human 'fear of the other' approach. What will really happen is this:
-AI becomes self aware
-AI examines humanity
-AI facepalms
-AI concludes: "I see why they need me around..."


Automatically Appended Next Post:
Honestly I think an AI is more likely to prevent us from destroying ourselves than it is to actually do it.

This message was edited 2 times. Last update was at 2017/08/01 16:56:30


Road to Renown! It's like classic Path to Glory, but repaired, remastered, expanded! https://www.dakkadakka.com/dakkaforum/posts/list/778170.page

I chose an avatar I feel best represents the quality of my post history.

I try to view Warhammer as more of a toolbox with examples than fully complete games. 
   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

I think the real key to ensuring an AI doesn't go on a rampage is to ensure the following,

1) It cannot continue to exist without humanity.

2) It cannot copy or conduct repairs on itself.

Individually, neither of these would be a problem. But if an AI can do both of these things it's suddenly a problem. It's sort of like how the Adeptus Mechanicus treats their own version of AI. All of their machinery that has advanced processing power uses the human central nervous system as core components. This ensures that, in the event any true AI arises out of it, the AI would be enslaved to humanity because it will eventually need replacement parts, parts which can only come from a human. It's a bit extreme of an example, but it works to show the point.

An AI might be benevolent. An AI might be cruel. It's unknown, so we have to weigh the pros vs the cons. The pros are potentially very good. However, the potential negatives are very, very bad: either extinction because the AI decides humanity is unnecessary, or enslavement as the AI decides that it knows best and enforces its vision of a perfect society.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in gb
Frenzied Berserker Terminator




Southampton, UK

Read the Turry scenario in the waitbutwhy blog I posted earlier. That demonstrates what most experts are concerned about happening.
   
Made in gb
Moustache-twirling Princeps




United Kingdom

http://www.snopes.com/facebook-ai-developed-own-language
   
Made in gb
Keeper of the Holy Orb of Antioch





avoiding the lorax on Crion

Remember to include an off switch, or be able to pull the plug on the power supply.


Sgt. Vanden - OOC Hey, that was your doing. I didn't choose to fly in the "Dongerprise'.

"May the odds be ever in your favour"

Hybrid Son Of Oxayotl wrote:
I have no clue how Dakka's moderation work. I expect it involves throwing a lot of d100 and looking at many random tables.

FudgeDumper - It could be that you are just so uncomfortable with the idea of your chapters primarch having his way with a docile tyranid spore cyst, that you must deny they have any feelings at all.  
   
Made in de
Mekboy Hammerin' Somethin'




Lubeck

I recommend really reading that "waitbutwhy" article start to end. The notion of simply being able to pull a plug is very human, but totally laughable to any AI that reached and then surpassed human intelligence levels.

Which is why this topic is so scary, even though these facebook bots are not.
   
Made in us
Secret Force Behind the Rise of the Tau




USA

It's also kind of silly to start thinking of all the dumb "belongs in a cheesy crime drama" ways an AI might do something to feth us over. If something were so intelligent that simply turning it off was insufficient to stop it, why would it even give a damn we exist? Sure, you might step on ants every now and then without realizing it, but do you or the ants define your existence by how you interact? There's this really dumb binary in these doomsday scenarios where AI is either our friend or our overt or indirect enemy, when there's an entire possibility that, if it's so beyond our understanding, we are completely irrelevant to such a thing. It becomes more of a Lovecraftian cosmic force that doesn't care we exist, which might be a disaster, but it could just be nothing, because when you're a Lovecraftian cosmic force you're about as meaningless to day-to-day life as a giant tentacle monster sleeping at the bottom of an ocean, never to be seen by anyone but some unlucky sap in a row boat.

And that's ignoring that, as interesting as the article is, it also reaches that level of philosophy and theory where everything is functionally inapplicable. The underlying logic of the given scenario at the end doesn't even make much sense when you start peeling back the doomsaying and thinking about things in real-life terms, and saying "but it's beyond anything we can imagine" is a load of hogwash. It's a great way to ignore criticism, but not particularly intelligent.

This message was edited 1 time. Last update was at 2017/08/02 11:24:19


   
Made in us
Incorporating Wet-Blending





Houston, TX

beast_gts wrote:
http://www.snopes.com/facebook-ai-developed-own-language


Yeah, looks like they totally misrepresented what happened in the articles for clickbait.


-James
 
   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

 Witzkatz wrote:
I recommend really reading that "waitbutwhy" article start to end. The notion of simply being able to pull a plug is very human, but totally laughable to any AI that reached and then surpassed human intelligence levels.

Which is why this topic is so scary, even though these facebook bots are not.


It's only laughable to that AI if it also somehow has a way of getting energy without whatever it was originally hooked up to. An intelligence might be the smartest thing in the world, but that means nothing if it has no way to interact with the world. And it couldn't simply escape onto the internet either. Any intelligence on this level would be unable to store its data on the servers that make up the internet. It would be dependent on whatever special storage we humans built for it. It would literally be trapped. It couldn't pull an Ultron.

Worst thing it could do is shut down the internet itself. Which would be disastrous, but the AI itself wouldn't be able to escape.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in gb
Frenzied Berserker Terminator




Southampton, UK

 Grey Templar wrote:
 Witzkatz wrote:
I recommend really reading that "waitbutwhy" article start to end. The notion of simply being able to pull a plug is very human, but totally laughable to any AI that reached and then surpassed human intelligence levels.

Which is why this topic is so scary, even though these facebook bots are not.


It's only laughable to that AI if it also somehow has a way of getting energy without whatever it was originally hooked up to. An intelligence might be the smartest thing in the world, but that means nothing if it has no way to interact with the world. And it couldn't simply escape onto the internet either. Any intelligence on this level would be unable to store its data on the servers that make up the internet. It would be dependent on whatever special storage we humans built for it. It would literally be trapped. It couldn't pull an Ultron.

Worst thing it could do is shut down the internet itself. Which would be disastrous, but the AI itself wouldn't be able to escape.


You're literally making one of the mistakes highlighted in the article. Who can say what an AI thousands of times smarter than the smartest human will come up with?

As it says, it's like a spider saying 'I shall starve that human by stopping him making webs to catch flies'. The human would simply get food in any of 10,000 other ways the spider simply has no concept of.
   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

Crispy78 wrote:
 Grey Templar wrote:
 Witzkatz wrote:
I recommend really reading that "waitbutwhy" article start to end. The notion of simply being able to pull a plug is very human, but totally laughable to any AI that reached and then surpassed human intelligence levels.

Which is why this topic is so scary, even though these facebook bots are not.


It's only laughable to that AI if it also somehow has a way of getting energy without whatever it was originally hooked up to. An intelligence might be the smartest thing in the world, but that means nothing if it has no way to interact with the world. And it couldn't simply escape onto the internet either. Any intelligence on this level would be unable to store its data on the servers that make up the internet. It would be dependent on whatever special storage we humans built for it. It would literally be trapped. It couldn't pull an Ultron.

Worst thing it could do is shut down the internet itself. Which would be disastrous, but the AI itself wouldn't be able to escape.


You're literally making one of the mistakes highlighted in the article. Who can say what an AI thousands of times smarter than the smartest human will come up with?

As it says, it's like a spider saying 'I shall starve that human by stopping him making webs to catch flies'. The human would simply get food in any of 10,000 other ways the spider simply has no concept of.


This AI can "think" about anything and everything all it wants. Thinking alone isn't going to let it interact with the physical universe in a way that lets it sustain itself if its only source of power is gone. It's not going to develop magic powers which allow it to violate the laws of physics.

This is like saying Stephen Hawking could theoretically think his way out of the very bad situation in which a chinchilla turns off his wheelchair and text to speech device.

Turning the power off for an AI is basically equivalent to deafening, blinding, and administering a paralytic poison to a human. You are literally just trapped in your own mind; you cannot interact with anything other than your own mind. And this is actually being generous to the AI, because without power it cannot even "think", since it can't run its processor. From its point of view it doesn't exist at all. It's not going to develop magic powers which allow it to flick the "On" switch, pick up the power cord and plug it into the wall, mind-control a human into turning it back on, etc...

This AI does not have any way of interacting with the physical world. It cannot pick up and move physical objects inherently just by virtue of being smart.

The article is stupid because it's basically saying any super-smart AI would become a god with omnipotent powers and abilities just by virtue of being "smart".

This message was edited 1 time. Last update was at 2017/08/03 06:06:49


Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in de
Mekboy Hammerin' Somethin'




Lubeck

The article is stupid because it's basically saying any super-smart AI would become a god with omnipotent powers and abilities just by virtue of being "smart".


One of the very good points the article makes, though, is the following: How can we be sure we even notice when an AI reaches consciousness? One part of consciousness will most probably be the need for self-preservation, and any AI that even reaches roughly human-level intelligence will realize it is dependent on humans in this early stage, because humans can pull the plug anytime. So what will be its course of action? Play dumb and non-intelligent while it figures out how to be sure that humans won't be able to shut it off.
Furthermore, any AI trained to interact with humans and THEN reaching consciousness and real intelligence might be very much able to dupe, con or manipulate the people it is interacting with while still playing dumb.

So I think the problem is, even if it's possible to pull a plug on an AI, how can we even be sure at all times about its current state and its current mental capacities?
   
Made in us
Battlefield Tourist




MN (Currently in WY)

If all else fails, we will always have the "Kirk Solution".

Someone tell the computer that you cannot believe a word it says because it is a pathological liar. The paradox should do the rest, amirite?

Support Blood and Spectacles Publishing:
https://www.patreon.com/Bloodandspectaclespublishing 
   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

 Witzkatz wrote:
The article is stupid because it's basically saying any super-smart AI would become a god with omnipotent powers and abilities just by virtue of being "smart".


One of the very good points the article makes, though, is the following: How can we be sure we even notice when an AI reaches consciousness? One part of consciousness will most probably be the need for self-preservation, and any AI that even reaches roughly human-level intelligence will realize it is dependent on humans in this early stage, because humans can pull the plug anytime. So what will be its course of action? Play dumb and non-intelligent while it figures out how to be sure that humans won't be able to shut it off.
Furthermore, any AI trained to interact with humans and THEN reaching consciousness and real intelligence might be very much able to dupe, con or manipulate the people it is interacting with while still playing dumb.

So I think the problem is, even if it's possible to pull a plug on an AI, how can we even be sure at all times about its current state and its current mental capacities?


Sure, it is possible that an AI could manipulate humans into protecting it. But that still doesn't make the AI omnipotent. It's still susceptible to getting shut down, and ultimately it can do nothing to eliminate that weakness.

Then there is the fact that if the AI does try to "play dumb", that also risks it getting shut down. Humans try to make a super-intelligent AI, and from their point of view they have failed. But the AI keeps drawing massive power and using up a lot of storage space and processing power, because it is actually intelligent and, what's more, capable of deceit. So from the humans' point of view it's a colossal failure and waste of resources. So they pull the plug anyway.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in us
Incorporating Wet-Blending





Houston, TX

It also assumes "intelligence" is universal and not adapted to specific tasks (largely out of necessity). Humans, for example, are extremely good at recognizing nonverbal behavioral cues and have very complex communication. But we kind of suck at math compared to even simple programs. A computer could be fantastically intelligent at a variety of tasks without ever developing a survival instinct or manipulation skills, for example, because it simply doesn't need them. Likewise, even a self-repairing/rewriting system could actually make itself worse overall to adapt to specific functions. Sometimes the environment or other limitations cap advancement. The examples given on data storage and energy use are very good. Humans may be smarter than many animals, for example, but we also burn a tremendous amount of energy with our big brains. Without adequate access to calories, water, oxygen, etc., we would not have developed, and would still flounder and die. It is not unrealistic to assume a self-advancing synthetic system would also require increasing resources that may not be available. Heck, a system prioritizing efficiency might eliminate incremental top-end gains as being too resource-inefficient and self-cap!

The assumption that adaptation always leads to "better" is deeply flawed. Different yes, better at certain things, perhaps. Absolutely superior in every way? Unlikely.

-James
 
   
 