2017/01/12 21:33:45
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Longtime Dakkanaut
Asherian Command wrote:
Again, the problem arises on the philosophical side: is it morally right to do so? What if it evolves? That is entirely possible with organic beings; we have yet to see a machine evolve, but if that were to slip past the scientists or engineers, it would be bad.
But I do agree with you that it could be useful. Then again, for all we know we could accidentally make a predator robot or something similar, and I do not think many of them want to risk it.
Unless robots have the capability to genetically reproduce with one another, that won't be a problem. Joking aside, the morphology of the neurons determines the capabilities of the structure, and the plasticity necessary to develop entirely new functional regions of the brain would not be required for the proposed applications, nor would it necessarily even be feasible if we wanted to make it happen. Neuronal plasticity can allow some brain regions to function in ways similar to adjacent regions, but those regions are largely morphologically similar to begin with. The morphology of the neurons in the cortex is apples and oranges compared with, say, the hippocampus, which would be very useful for a robot.
2017/01/12 22:11:38
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Decrepit Dakkanaut
I get that reference!
DA:70S+G+M+B++I++Pw40k08+D++A++/fWD-R+T(M)DM+
2017/01/12 22:14:49
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Assassin with Black Lotus Poison
reds8n wrote:
robots, bots, androids and other manifestations of artificial intelligence are poised to "unleash a new industrial revolution, which is likely to leave no stratum of society untouched".
.. sexbots confirmed then!
It's "stratum", Red, not "scrotum".
The Laws of Thermodynamics:
1) You cannot win. 2) You cannot break even. 3) You cannot stop playing the game.
Colonel Flagg wrote:You think you're real smart. But you're not smart; you're dumb. Very dumb. But you've met your match in me.
2017/01/13 00:04:32
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Legendary Dogfighter
NuggzTheNinja wrote:
Unless robots have the capability to genetically reproduce with one another, that won't be a problem.
Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.
What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate for the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overall system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate across even 500 facets of their own understanding, if abstracted to a suitable level.
Some people find the idea that other people can be happy offensive, and will prefer causing harm to self-improvement.
2017/01/13 00:11:00
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Fixture of Dakka
The best way to stop the robots from taking over has already been solved. Ask any Tech-Priest.
"The Omnissiah is my Moderati"
2017/01/13 02:50:00
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Hangin' with Gork & Mork
Electro gonorrhea: the noisy killer
This message was edited 1 time. Last update was at 2017/01/13 02:51:29
Amidst the mists and coldest frosts he thrusts his fists against the posts and still insists he sees the ghosts.
2017/01/13 04:00:05
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Lady of the Lake
You might have just passed, congratulations.
Nostromodamus wrote:
The best way to stop the robots from taking over has already been solved. Ask any Tech-Priest.
You've just got to give it the right sacred oils, chant endlessly at it, maybe massage it, and do anything it asks as a divine avatar of the machine god. Then kill all humans. Honestly, if the AI in these robots is as suggestible as Microsoft's Tay was, then maybe a kill switch is a pretty good idea, and the moral question should be about when it is going to be used rather than whether it should exist.
This message was edited 1 time. Last update was at 2017/01/13 04:02:21
2017/01/13 14:52:42
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Longtime Dakkanaut
malamis wrote:
NuggzTheNinja wrote:
Unless robots have the capability to genetically reproduce with one another, that won't be a problem.
Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.
What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate for the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overall system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate across even 500 facets of their own understanding, if abstracted to a suitable level.
You can definitely produce software that will self-modify (e.g., genetic algorithms to produce adaptation); the question here is whether hardware will do that. The issue is that robots capable of complex behaviors would require different hardware, which is where the whole neuromorphic engineering push is coming from. More here: http://journal.frontiersin.org/journal/neuroscience/section/neuromorphic-engineering#about
This type of hardware can adapt, but my argument is that the scope of adaptation in complex systems is highly constrained, such that the functional capabilities of particular structures don't usually change within the lifetime of the agent. For example, we know that phantom limb syndrome is a result of somatosensory neurons "taking over" the role of adjacent neurons once their particular limb is no longer present. That's a pretty low-level change, and could be expected in a robot. What I wouldn't expect is something like a group of limbic system neurons adapting to produce something like a cortex; the morphology of the neurons is completely different.
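For the software half of that point, here is a minimal sketch of the kind of genetic algorithm mentioned above: a population of candidate parameter vectors "self-modifies" over generations through mutation and selection. The target vector, fitness function, mutation rate and population size are all invented for illustration and are not taken from the linked neuromorphic-engineering material.
```python
# Minimal genetic-algorithm sketch: candidates mutate and the fitter ones
# survive, so the population adapts without anyone editing it by hand.
# All constants here are arbitrary placeholders for illustration only.
import random

TARGET = [0.5, -1.2, 3.0, 0.0]  # hypothetical "ideal" behaviour parameters


def fitness(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))


def mutate(candidate, rate=0.1):
    # Randomly perturb each gene with probability `rate`.
    return [c + random.gauss(0, 0.5) if random.random() < rate else c
            for c in candidate]


def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the fitter half
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best candidate:", [round(x, 2) for x in best])
```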
2017/01/13 15:44:10
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Fixture of Dakka
I forgot about Tay. Damn, that girl was crazy...
"The Omnissiah is my Moderati"
2017/01/13 16:33:18
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Assassin with Black Lotus Poison
NuggzTheNinja wrote:
malamis wrote:
NuggzTheNinja wrote:
Unless robots have the capability to genetically reproduce with one another, that won't be a problem.
Some modern computer viruses do exhibit a capacity for self-modification beyond their original structure and even function.
What I believe is likely to happen (and I must confess my limited understanding of the biosciences) is not so much that the machines will adapt to a higher understanding, but that their basic level of operation will be adequate for the wider world as a result of input filtering and aggregation. If the entirety of human experience, for example, can be reduced to 10,000 known, measurable factors, then a system sufficient to operate at that level, and to reproduce as part of a single overall system, would be entirely capable of surpassing us. I'd suggest that a typical human being on a busy day would struggle to fully operate across even 500 facets of their own understanding, if abstracted to a suitable level.
You can definitely produce software that will self-modify (e.g., genetic algorithms to produce adaptation); the question here is whether hardware will do that. The issue is that robots capable of complex behaviors would require different hardware, which is where the whole neuromorphic engineering push is coming from. More here: http://journal.frontiersin.org/journal/neuroscience/section/neuromorphic-engineering#about
This type of hardware can adapt, but my argument is that the scope of adaptation in complex systems is highly constrained, such that the functional capabilities of particular structures don't usually change within the lifetime of the agent. For example, we know that phantom limb syndrome is a result of somatosensory neurons "taking over" the role of adjacent neurons once their particular limb is no longer present. That's a pretty low-level change, and could be expected in a robot. What I wouldn't expect is something like a group of limbic system neurons adapting to produce something like a cortex; the morphology of the neurons is completely different.
What about the Geth situation in Mass Effect, whereby robots which individually lack consciousness are able to, through networking and sharing of information and computing power, approach a semblance of it when in large enough groups? So mightn't they, when pooling enough computational power, be able to simulate the hardware required for more complex behaviour in a virtual space and then use the output of that to act as though they had that hardware?
The Laws of Thermodynamics:
1) You cannot win. 2) You cannot break even. 3) You cannot stop playing the game.
Colonel Flagg wrote:You think you're real smart. But you're not smart; you're dumb. Very dumb. But you've met your match in me.
2017/01/13 18:43:19
Subject: MEPs vote on robots' legal status - and if a kill switch is required
Longtime Dakkanaut
A Town Called Malus wrote:
What about the Geth situation in Mass Effect, whereby robots which individually lack consciousness are able to, through networking and sharing of information and computing power, approach a semblance of it when in large enough groups? So mightn't they, when pooling enough computational power, be able to simulate the hardware required for more complex behaviour in a virtual space and then use the output of that to act as though they had that hardware?
Absolutely a valid concern, given that distributed computing and a system-of-systems approach are pretty much the goal. It would be prudent to keep a close eye on exactly what types of information are permitted to be shared between units. For example, robots should be able to communicate to one another how to do their job better, but we don't exactly want them thinking about *why* they're doing their job.
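To make that restriction concrete, here is a minimal sketch of an inter-unit message whitelist of the kind described above; the message types and the Message class are entirely hypothetical and not taken from any real robot middleware.
```python
# Hypothetical sketch of an inter-unit message filter: only message types
# on an explicit whitelist are forwarded between robots. The type names
# are invented for illustration; nothing here comes from a real robot API.
from dataclasses import dataclass

ALLOWED_TYPES = {"task_update", "sensor_calibration", "route_hint"}


@dataclass
class Message:
    msg_type: str
    payload: dict


def filter_outgoing(messages):
    """Return only the messages whose type is explicitly whitelisted."""
    return [m for m in messages if m.msg_type in ALLOWED_TYPES]


if __name__ == "__main__":
    outbox = [
        Message("task_update", {"step": 4, "status": "done"}),
        Message("goal_reflection", {"question": "why am I doing this?"}),  # dropped
    ]
    for msg in filter_outgoing(outbox):
        print("sharing:", msg.msg_type, msg.payload)
```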