MEPs vote on robots' legal status - and if a kill switch is required
Made in us
Longtime Dakkanaut

 Asherian Command wrote:

Again, the problem would arise on the philosophical side: whether it is morally right to do so. What if it evolves? That is entirely possible with organic beings; we have yet to see a machine evolve, but if that were to slip by the scientists or engineers, it would be bad.

But I do agree with you that it could be useful. Then again, for all we know we could accidentally make a predator robot or something similar, and I do not think many of them want to risk it.


Unless robots have the capability to genetically reproduce with one another, that won't be a problem. Joking aside, the morphology of the neurons determines the capabilities of the structure, and the plasticity necessary to develop entirely new functional regions of the brain would not be required for the proposed applications, nor would it necessarily even be feasible if we wanted to make it happen. Neuronal plasticity can allow some brain regions to function in ways similar to adjacent regions, but those regions are largely morphologically similar to begin with. The morphology of the neurons in the cortex is apples and oranges compared with, say, the hippocampus, which would be very useful for a robot.

Tier 1 is the new Tactical.

My IDF-Themed Guard Army P&M Blog:

http://www.dakkadakka.com/dakkaforum/posts/list/30/355940.page 
   
Made in us
Decrepit Dakkanaut
New Orleans, LA

 Asherian Command wrote:

 Frazzled wrote:


I'm a human. I don't have an off switch.


Oh yea you do.


The human off switch is called bullet-to-head syndrome. Very common, I hear, in Chicago.


I get that reference!

DA:70S+G+M+B++I++Pw40k08+D++A++/fWD-R+T(M)DM+
 
   
Made in gb
Assassin with Black Lotus Poison
Bristol

 reds8n wrote:

robots, bots, androids and other manifestations of artificial intelligence are poised to "unleash a new industrial revolution, which is likely to leave no stratum of society untouched".



.. sexbots confirmed then!


It's "stratum", Red, not "scrotum".

The Laws of Thermodynamics:
1) You cannot win. 2) You cannot break even. 3) You cannot stop playing the game.

Colonel Flagg wrote:You think you're real smart. But you're not smart; you're dumb. Very dumb. But you've met your match in me.
 
   
Made in gb
Legendary Dogfighter
RNAS Rockall

 NuggzTheNinja wrote:

Unless robots have the capability to genetically reproduce with one another, that won't be a problem.

Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.

What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate to operate in the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overarching system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate in even 500 facets of their own understanding, if abstracted to a suitable level.
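To make the "input filtering and aggregation" idea concrete, here's a minimal sketch: a system that reduces a rich observation down to a fixed set of measurable factors and acts only on those. All the factor names here are invented for illustration.

```python
def aggregate(observation: dict, factors: list) -> dict:
    """Keep only the factors the system is built to measure; default missing ones to 0."""
    return {f: observation.get(f, 0.0) for f in factors}

# The fixed, known factor set the system operates on.
FACTORS = ["temperature", "distance_to_obstacle", "battery_level"]

raw = {
    "temperature": 21.5,
    "distance_to_obstacle": 0.8,
    "battery_level": 0.63,
    "smell_of_rain": "petrichor",   # an unmeasured facet: silently discarded
}

state = aggregate(raw, FACTORS)
print(state)  # only the three measured factors survive
```

The point of the sketch: anything outside the factor set never reaches the system, so competence within those factors is all that matters.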

Some people find the idea that other people can be happy offensive, and will prefer causing harm to self improvement.  
   
Made in us
Fixture of Dakka

The best way to stop the robots from taking over has already been solved. Ask any Tech-Priest.

"The Omnissiah is my Moderati" 
   
Made in us
Hangin' with Gork & Mork

Electro gonorrhea: the noisy killer

This message was edited 1 time. Last update was at 2017/01/13 02:51:29


Amidst the mists and coldest frosts he thrusts his fists against the posts and still insists he sees the ghosts.
 
   
Made in au
Lady of the Lake

 Mr. Burning wrote:
Wait. Is Dakka some kind of elaborate Turing test?

You might have just passed, congratulations.

 Nostromodamus wrote:
The best way to stop the robots from taking over has already been solved. Ask any Tech-Priest.

You just got to like give it the right sacred oils, chant endlessly at it, maybe massage it and do anything it asks as a divine avatar of the machine god. Then kill all humans


Honestly, if the AI in these robots is as suggestible as Microsoft's Tay was, then maybe a kill switch is a pretty good idea, and the moral questioning should be about when it is going to be used rather than about whether it should exist.

This message was edited 1 time. Last update was at 2017/01/13 04:02:21


   
Made in us
Longtime Dakkanaut

 malamis wrote:
 NuggzTheNinja wrote:

Unless robots have the capability to genetically reproduce with one another, that won't be a problem.

Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.

What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate to operate in the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overarching system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate in even 500 facets of their own understanding, if abstracted to a suitable level.


You can definitely produce software that will self-modify (e.g., genetic algorithms to produce adaptation) - the question here is whether or not hardware will do that. The issue is that robots capable of complex behaviors would require different hardware, which is where the whole neuromorphic engineering push is coming from. More here: http://journal.frontiersin.org/journal/neuroscience/section/neuromorphic-engineering#about
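As a toy illustration of the genetic-algorithm idea (not anyone's production code; all parameter choices are arbitrary): evolve random bitstrings toward the all-ones string, the classic "OneMax" problem.

```python
import random

def fitness(genome):
    """OneMax: count the 1 bits."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, length=20, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # best fitness found
```

Variation (mutation, crossover) plus selection is the whole trick; the "self-modification" lives entirely in software, which is exactly the distinction being drawn from hardware here.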

This type of hardware can adapt, but my argument is that the scope of adaptation in complex systems is highly constrained, such that the functional capabilities of particular structures don't usually change within the lifetime of the agent. For example, we know that phantom limb syndrome is a result of somatosensory neurons "taking over" the role of adjacent neurons once their particular limb is no longer present. That's a pretty low-level change, and could be expected in a robot. What I wouldn't expect is something like a group of limbic system neurons adapting to produce something like a cortex - the morphology of the neurons is completely different.



Tier 1 is the new Tactical.

My IDF-Themed Guard Army P&M Blog:

http://www.dakkadakka.com/dakkaforum/posts/list/30/355940.page 
   
Made in us
Fixture of Dakka

I forgot about Tay. Damn, that girl was crazy...

"The Omnissiah is my Moderati" 
   
Made in gb
Assassin with Black Lotus Poison
Bristol

 NuggzTheNinja wrote:
 malamis wrote:
 NuggzTheNinja wrote:

Unless robots have the capability to genetically reproduce with one another, that won't be a problem.

Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.

What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate to operate in the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overarching system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate in even 500 facets of their own understanding, if abstracted to a suitable level.


You can definitely produce software that will self-modify (e.g., genetic algorithms to produce adaptation) - the question here is whether or not hardware will do that. The issue is that robots capable of complex behaviors would require different hardware, which is where the whole neuromorphic engineering push is coming from. More here: http://journal.frontiersin.org/journal/neuroscience/section/neuromorphic-engineering#about

This type of hardware can adapt, but my argument is that the scope of adaptation in complex systems is highly constrained, such that the functional capabilities of particular structures don't usually change within the lifetime of the agent. For example, we know that phantom limb syndrome is a result of somatosensory neurons "taking over" the role of adjacent neurons once their particular limb is no longer present. That's a pretty low-level change, and could be expected in a robot. What I wouldn't expect is something like a group of limbic system neurons adapting to produce something like a cortex - the morphology of the neurons is completely different.




What about the Geth situation in Mass Effect, whereby robots which individually lack consciousness are able to, through networking and sharing of information and computing power, approach a semblance of it when in large enough groups? So mightn't they, when pooling enough computational power, be able to simulate the hardware required for more complex behaviour in a virtual space and then use the output of that to act as though they had that hardware?

The Laws of Thermodynamics:
1) You cannot win. 2) You cannot break even. 3) You cannot stop playing the game.

Colonel Flagg wrote:You think you're real smart. But you're not smart; you're dumb. Very dumb. But you've met your match in me.
 
   
Made in us
Longtime Dakkanaut

 A Town Called Malus wrote:


What about the Geth situation in Mass Effect, whereby robots which individually lack consciousness are able to, through networking and sharing of information and computing power, approach a semblance of it when in large enough groups? So mightn't they, when pooling enough computational power, be able to simulate the hardware required for more complex behaviour in a virtual space and then use the output of that to act as though they had that hardware?


Absolutely a valid concern, given that distributed computing and a system-of-systems approach are pretty much the goal. It would be prudent to keep a close eye on exactly what types of information are permitted to be shared between units. For example, robots should be able to communicate to one another how to do their job better, but we don't exactly want them thinking about *why* they're doing their job.
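One way to sketch that constraint: an explicit whitelist of message types, so "how to do the job better" information passes between units and everything else is dropped at the boundary. The message types and structure here are invented purely for illustration.

```python
# Whitelist of message types units are allowed to exchange.
SHAREABLE = {"grip_torque_profile", "path_waypoints", "tool_wear_estimate"}

def filter_outbound(messages: list) -> list:
    """Forward only whitelisted message types to peer units; drop the rest."""
    return [m for m in messages if m["type"] in SHAREABLE]

outbox = [
    {"type": "path_waypoints", "payload": [(0, 0), (3, 4)]},   # job know-how: shared
    {"type": "task_rationale", "payload": "why do we do this?"},  # not whitelisted: dropped
]

print(filter_outbound(outbox))  # only the waypoints message remains
```

A default-deny whitelist (rather than a blacklist) matches the spirit of the post: anything you haven't explicitly decided is safe to share simply doesn't propagate.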

Tier 1 is the new Tactical.

My IDF-Themed Guard Army P&M Blog:

http://www.dakkadakka.com/dakkaforum/posts/list/30/355940.page 
   
 