MEPs vote on robots' legal status - and if a kill switch is required
Made in gb - [DCM] Et In Arcadia Ego - Canterbury

http://www.bbc.co.uk/news/technology-38583360



MEPs are to vote on the first comprehensive set of rules for how humans will interact with artificial intelligence and robots.
The report makes it clear that it believes the world is on the cusp of a "new industrial" robot revolution.
MEPs will decide whether to give robots legal status as "electronic persons".
Designers should make sure any robots have a kill switch, which would allow functions to be shut down if necessary, the report recommends.
Meanwhile users should be able to use robots "without risk or fear of physical or psychological harm", it states.

The report suggests that robots, bots, androids and other manifestations of artificial intelligence are poised to "unleash a new industrial revolution, which is likely to leave no stratum of society untouched".
The new age of robots has the potential for "virtually unbounded prosperity" but also raises questions about the future of work and whether member states need to introduce a basic income in the light of robots taking jobs.
Robot/human relationships raise issues around privacy, human dignity (particularly in relation to care robots) and the physical safety of humans if systems fail or are hacked.
The report acknowledges that there is a possibility that within the space of a few decades AI could surpass human intellectual capacity.
This could, if not properly prepared for, "pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species".

It turns to science fiction, drawing on rules dreamed up by writer Isaac Asimov, for how robots should act if and when they become self-aware. The laws will be directed at the designers, producers and operators of robots as they cannot be converted into machine code.
These rules state:
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given by human beings except where such orders would conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Meanwhile robotic research should respect fundamental rights and be conducted in the interests of the wellbeing of humans, the report recommends.
Designers may be required to register their robots as well as providing access to the source code to investigate accidents and damage caused by bots. Designers may also be required to obtain the go-ahead for new robotic designs from a research ethics committee.
The report calls for the creation of a European agency for robotics and artificial intelligence that can provide technical, ethical and regulatory expertise.
It also suggests that in the light of numerous reports on how many jobs could be taken by AI or robots, member countries consider introducing a universal basic income for citizens provided by the state.
The report also considers the legal liabilities of robots and suggests that liability should be proportionate to the actual level of instructions given to the robot and its autonomy.

"The greater a robot's learning capability or autonomy is, the lower other parties' responsibilities should be and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be," it says.
Producers or owners may, in future, be required to take out insurance cover for the damage potentially caused by their robot.
If MEPs vote in favour of the legislation, it will then go to individual governments for further debate and amendments before it becomes EU law.



.. this is a wee bit more like the future we were promised.


robots, bots, androids and other manifestations of artificial intelligence are poised to "unleash a new industrial revolution, which is likely to leave no stratum of society untouched".



.. sexbots confirmed then !

The poor man really has a stake in the country. The rich man hasn't; he can go away to New Guinea in a yacht. The poor have sometimes objected to being governed badly; the rich have always objected to being governed at all
We love our superheroes because they refuse to give up on us. We can analyze them out of existence, kill them, ban them, mock them, and still they return, patiently reminding us of who we are and what we wish we could be.
"the play's the thing wherein I'll catch the conscience of the king,
 
   
Made in gb - Bryan Ansell - Birmingham, UK

If a person wishes to use such a sexbot in an activity where harm is involved.............
   
Made in ca - Confessor Of Sins
Well, I'm glad we're getting this out of the way early.

However, I am not impressed by the declaration that they should have rights, followed by the declarations that they should be designed with forcible shutdown commands, and that humans should feel free to use them for our needs.

It's like they're saying, "Yes, we should treat them as people, but we should also retain the ability to shut them off if they get uppity and we should feel free to use our new equals as slaves for our own personal needs."
   
Made in gb - Legendary Dogfighter - RNAS Rockall

But in exchange, it puts a legal construct in place that restricts the construction of 50-metre-tall death bots, impervious to all harm, to the military, instead of leaving them a commercial product which is sold as one thing and modified into t'other.

Speaking as someone who occasionally deals with industrial robotics, a standardised killswitch is just good design practice. Enforcing it as law for all implementations with oversight on the exceptions is exactly what a government should be doing.
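The pattern is simple enough to sketch in software terms. Below is a minimal, hypothetical Python illustration of a latching e-stop; all the names are made up for illustration, and a real implementation lives in hardware and safety-rated controllers, not application code:

```python
import threading
import time

class KillSwitch:
    """Latching emergency stop: once tripped, it stays tripped
    until a deliberate, separate reset action."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def reset(self):
        self._tripped.clear()

    @property
    def tripped(self):
        return self._tripped.is_set()

def control_loop(kill_switch, steps=1000):
    """Main robot loop: check the switch every cycle, before acting."""
    for step in range(steps):
        if kill_switch.tripped:
            # De-energise outputs and stop; don't try to 'finish the move'.
            return f"halted at step {step}"
        time.sleep(0.001)  # stand-in for one cycle of real work
    return "completed"

ks = KillSwitch()
threading.Timer(0.05, ks.trip).start()  # someone hits the big red button
print(control_loop(ks))
```

In real industrial practice the e-stop is a hardware circuit that removes power regardless of what the software does; a code-level check like this is a complement, not a substitute.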

What will be interesting is what happens when an AI establishes itself as a legal entity and gains a monopoly over commodities or industries.

Some people find the idea that other people can be happy offensive, and will prefer causing harm to self improvement.  
   
Made in ca - Confessor Of Sins
 malamis wrote:
But in exchange, it puts a legal construct in place that restricts the construction of 50-metre-tall death bots, impervious to all harm, to the military, instead of leaving them a commercial product which is sold as one thing and modified into t'other.

Speaking as someone who occasionally deals with industrial robotics, a standardised killswitch is just good design practice. Enforcing it as law for all implementations with oversight on the exceptions is exactly what a government should be doing.

What will be interesting is what happens when an AI establishes itself as a legal entity and gains a monopoly over commodities or industries.


Let me be clear.

We are not discussing modern robotics that are created for laboratory experiments or industrial purposes.

We are discussing people who happen to not be biological life forms because their bodies are made of mechanical parts rather than flesh and blood.

A forcible shutdown switch is immoral. Treating them as slaves is immoral.

I don't give a damn how strong and capable they are, you simply cannot treat your fellow citizens as though they are property. And if one starts killing people, you call the cops, who either subdue them or end up shooting them to death.

If you don't want them to have superhuman strength and indestructible armor, I understand fully. So don't build them with those features.

This message was edited 1 time. Last update was at 2017/01/12 11:56:39


 
   
Made in us - Douglas Bader
 Pouncey wrote:
We are discussing people who happen to not be biological life forms because their bodies are made of mechanical parts rather than flesh and blood.


No, we're discussing computer software that can do a convincing job of appearing to be a sentient being (or, more realistically, a very efficient job of processing certain CPU-demanding tasks). Adding an off switch is no more "immoral" than shutting down a video game and ending the "lives" of the characters in it.

This message was edited 1 time. Last update was at 2017/01/12 12:02:59


There is no such thing as a hobby without politics. "Leave politics at the door" is itself a political statement, an endorsement of the status quo and an attempt to silence dissenting voices. 
   
Made in ca - Confessor Of Sins
 Peregrine wrote:
 Pouncey wrote:
We are discussing people who happen to not be biological life forms because their bodies are made of mechanical parts rather than flesh and blood.


No, we're discussing computer software that can do a convincing job of appearing to be a sentient being (or, more realistically, a very efficient job of processing certain CPU-demanding tasks). Adding an off switch is no more "immoral" than shutting down a video game and ending the "lives" of the characters in it.


Prove to me, right now, that you are actually sentient.
   
Made in gb - Bryan Ansell - Birmingham, UK

Wait. Is Dakka some kind of elaborate Turing test?
   
Made in us - Douglas Bader
 Pouncey wrote:
Prove to me, right now, that you are actually sentient.


Nah, I think I'll just hit your off switch. It's the best response to ridiculous challenges like that.

This message was edited 1 time. Last update was at 2017/01/12 12:10:19


   
Made in ca - Confessor Of Sins
 Mr. Burning wrote:
Wait. Is Dakka some kind of elaborate Turing test?


No.

The person I am arguing with is under the delusion that his own consciousness is something more than the natural result of his brain's physical operation. As a result, he is dismissing the sentience of a mechanical being because he considers his own sentience to possess a magical quality that machines are incapable of. He is in error. A sufficiently advanced AI is every bit as sentient as a human brain, and despite popular fiction, there is no reason why they would be fundamentally incapable of emotion or making mistakes.


Automatically Appended Next Post:
 Peregrine wrote:
 Pouncey wrote:
Prove to me, right now, that you are actually sentient.


Nah, I think I'll just hit your off switch. It's the best response to ridiculous challenges like that.


I'm a human. I don't have an off switch.

This message was edited 1 time. Last update was at 2017/01/12 12:10:49


 
   
Made in us - Douglas Bader
 Pouncey wrote:
The person I am arguing with is under the delusion that his own consciousness is something more than the natural result of his brain's physical operation. As a result, he is dismissing the sentience of a mechanical being because he considers his own sentience to possess a magical quality that machines are incapable of. He is in error. A sufficiently advanced AI is every bit as sentient as a human brain, and despite popular fiction, there is no reason why they would be fundamentally incapable of emotion or making mistakes.


No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.

I'm a human. I don't have an off switch.


Prove it.

   
Made in gb - [DCM] Et In Arcadia Ego - Canterbury

 Mr. Burning wrote:
Wait. Is Dakka some kind of elaborate Turing test?



Whole thing is an elaborate phishing scam.


If all users could hurry up and fill in the mother's maiden name, 1st pet name, significant personal date, favourite colour & food sections of your profiles now please it would be...helpful.

yes.

helpful.



 
   
Made in gb - Bryan Ansell - Birmingham, UK

 Peregrine wrote:


No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.



This.
   
Made in ca - Confessor Of Sins
 Peregrine wrote:
 Pouncey wrote:
The person I am arguing with is under the delusion that his own consciousness is something more than the natural result of his brain's physical operation. As a result, he is dismissing the sentience of a mechanical being because he considers his own sentience to possess a magical quality that machines are incapable of. He is in error. A sufficiently advanced AI is every bit as sentient as a human brain, and despite popular fiction, there is no reason why they would be fundamentally incapable of emotion or making mistakes.


No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.


This discussion is about securing rights for AI that become advanced enough to be called people under the law.

It is being done in advance of such AI actually being developed because the obvious alternative is that when they are created, they will be considered property to a degree that even human slaves do not measure up to.

It is about avoiding a situation where we treat sentient machines so poorly that the natural result is that they rise up against their human oppressors in a violent, bloody revolution to secure equal rights.

These laws are being created for the future, not for today.

I'm a human. I don't have an off switch.


Prove it.


How about I keep posting. You said you'd use my off-switch, and I think if I continue to post, that proves pretty solidly that you do not have the power to shut me off at your whim.
   
Made in us - Douglas Bader
 Pouncey wrote:
This discussion is about securing rights for AI that become advanced enough to be called people under the law.


And that discussion is taking place in the context of the real world. Who cares what the EU says about human-like AI, it's incredibly unlikely that the EU (or human civilization as we know it) or any of its laws will exist in the distant future when such a thing becomes reality. The AI that is actually relevant to the discussion of EU laws is the moral equivalent of a well-programed video game character.

It is about avoiding a situation where we treat sentient machines so poorly that the natural result is that they rise up against their human oppressors in a violent, bloody revolution to secure equal rights.


Fortunately if the robot uprising ever happens we can simply hit the off switch and end it.

How about I keep posting. You said you'd use my off-switch, and I think if I continue to post, that proves pretty solidly that you do not have the power to shut me off at your whim.


You just proved that I didn't use your off switch, not that you're a human with no off switch. Proving that I changed my mind or don't have access to your off switch doesn't prove that you are human.

(Not that I expect you to be able to prove it, but it's as reasonable a demand as your demand to prove that I am sentient.)

   
Made in ca - Confessor Of Sins
 Peregrine wrote:
(Not that I expect you to be able to prove it, but it's as reasonable a demand as your demand to prove that I am sentient.)


Oh, so you're aware of my point and acknowledge it as unassailable, you simply refuse to accept the validity of the argument.
   
Made in us - Douglas Bader
 Pouncey wrote:
Oh, so you're aware of my point and acknowledge it as unassailable, you simply refuse to accept the validity of the argument.


I acknowledge that proving sentience to a random person over the internet is impossible. This has nothing to do with the subject of this thread.

   
Made in ca - Confessor Of Sins
 Peregrine wrote:
 Pouncey wrote:
Oh, so you're aware of my point and acknowledge it as unassailable, you simply refuse to accept the validity of the argument.


I acknowledge that proving sentience to a random person over the internet is impossible. This has nothing to do with the subject of this thread.


You refuse to accept that machines can ever be sentient because you believe it cannot ever be proved that they are sentient.

You then go on to acknowledge that it is impossible to prove that humans are sentient.

Yet somehow you don't see the connection between the two. Weird.
   
Made in gb - Bryan Ansell - Birmingham, UK

 Pouncey wrote:
 Peregrine wrote:
 Pouncey wrote:
Oh, so you're aware of my point and acknowledge it as unassailable, you simply refuse to accept the validity of the argument.


I acknowledge that proving sentience to a random person over the internet is impossible. This has nothing to do with the subject of this thread.


You refuse to accept that machines can ever be sentient because you believe it cannot ever be proved that they are sentient.

You then go on to acknowledge that it is impossible to prove that humans are sentient.

Yet somehow you don't see the connection between the two. Weird.


The law doesn't cover self-aware AI and will be totally unworkable should, in the far future, sentient AI be created.

It covers learning machines but nothing of the level you assume.


The report acknowledges that there is a possibility that within the space of a few decades AI could surpass human intellectual capacity.
This could, if not properly prepared for, "pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species".


Without reading the actual wording this is so loose an acknowledgment as to be useless.

This message was edited 1 time. Last update was at 2017/01/12 12:41:11


 
   
Made in us - Douglas Bader
 Pouncey wrote:
You refuse to accept that machines can ever be sentient because you believe it cannot ever be proved that they are sentient.


I said no such thing. In fact, I very clearly said the exact opposite in this thread:

No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.

Whether or not machines can ever be made sentient is irrelevant, we are nowhere near that point and talking about sentient machines has no place in a discussion of real-world laws.

You then go on to acknowledge that it is impossible to prove that humans are sentient.


I acknowledged no such thing. I simply said that I can't prove it to you over the internet. Outside of rather boring Philosophy 101 discussions of the sort that are popular with people who think they are far more clever and interesting than they really are, the concept of sentience is one that can be tested and studied. The tests just aren't the kind of thing you can do through forum posts with a random stranger who has no professional experience in psychology/biology/etc. Setting the standard of "proof" so high that none of our knowledge of sentience (or knowledge of anything else about the world) qualifies might be interesting if you've smoked some really good pot, but it isn't a useful scientific concept.

This message was edited 1 time. Last update was at 2017/01/12 12:41:22


   
Made in us - Decrepit Dakkanaut - New Orleans, LA

I work with robots every day in manufacturing plants.

They don't bitch about overtime, make fun of your stupid shoes, call you fat, say no when you ask them on a date, make fun of your stupid lisp, or call HR about you being a stalker when you're just trying to gather information.



 reds8n wrote:



robots, bots, androids and other manifestations of artificial intelligence are poised to "unleash a new industrial revolution, which is likely to leave no stratum of society untouched".



.. sexbots confirmed then !


That's where all technology leads us!



DA:70S+G+M+B++I++Pw40k08+D++A++/fWD-R+T(M)DM+
 
   
Made in ca - Confessor Of Sins
 Peregrine wrote:
 Pouncey wrote:
You refuse to accept that machines can ever be sentient because you believe it cannot ever be proved that they are sentient.


I said no such thing. In fact, I very clearly said the exact opposite in this thread:

No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.

Whether or not machines can ever be made sentient is irrelevant, we are nowhere near that point and talking about sentient machines has no place in a discussion of real-world laws.

You then go on to acknowledge that it is impossible to prove that humans are sentient.


I acknowledged no such thing. I simply said that I can't prove it to you over the internet. Outside of rather boring Philosophy 101 discussions of the sort that are popular with people who think they are far more clever and interesting than they really are, the concept of sentience is one that can be tested and studied. The tests just aren't the kind of thing you can do through forum posts with a random stranger who has no professional experience in psychology/biology/etc. Setting the standard of "proof" so high that none of our knowledge of sentience (or knowledge of anything else about the world) qualifies might be interesting if you've smoked some really good pot, but it isn't a useful scientific concept.


I dunno if you spend a lot of time with computer hardware, but it advances pretty damned quickly (exponentially, actually) and it hasn't really slowed down yet.

We've managed to fully simulate a neuron on a computer already.

It is easily conceivable that within our lifetimes, computers will be powerful enough to simulate an entire human brain of neurons.
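For a sense of what the simplest neuron models look like, here is a leaky integrate-and-fire sketch in Python; it is a deliberately crude toy, nothing like the detailed biophysical simulations researchers actually run, and every parameter value is a generic textbook-style placeholder:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, resistance=10.0):
    """Leaky integrate-and-fire neuron: the membrane voltage decays
    toward rest, is driven by input current, and 'spikes' (then resets)
    when it crosses threshold. Units are loosely mV / ms / megaohm / nA."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)
        v += dv
        if v >= v_thresh:
            spikes.append(t)  # record spike time, then reset
            v = v_reset
    return spikes

# Constant 2 nA drive for 200 ms: the neuron fires at a steady rate.
print(simulate_lif([2.0] * 200))
```

With zero input the voltage never leaves rest and the spike list stays empty; with enough drive it fires periodically, which is about all this model can say.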

What, then, is the difference between a simulated human mind supported by computer hardware and a human mind made of flesh and blood?

The answer is: nothing.

This message was edited 1 time. Last update was at 2017/01/12 12:47:51


 
   
Made in us - Douglas Bader
 Pouncey wrote:
I dunno if you spend a lot of time with computer hardware, but it advances pretty damned quickly (exponentially, actually) and it hasn't really slowed down yet.


Actually it has slowed down, and is starting to converge on some physical limits (heat, minimum transistor size, etc).

It is easily conceivable that within our lifetimes, computers will be powerful enough to simulate an entire human brain of neurons.


No, it really isn't. The processing power required is many orders of magnitude greater, and having the hardware required is only a tiny part of the problem. Being able to simulate the number of neurons in a human brain doesn't help you unless you know how the human brain's neurons are connected. Without that knowledge, which we are nowhere near obtaining, all you have is a really expensive simulation of a brain-dead person.
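To put rough numbers on that gap: the figures below are commonly cited ballpark estimates (about 86 billion neurons, on the order of 10^4 synapses each), and the code simply does the arithmetic.

```python
# Commonly cited ballpark figures -- estimates, not measurements.
neurons = 86e9            # neurons in a human brain
synapses_per_neuron = 1e4
updates_per_second = 100  # rough effective event rate per synapse
flops_per_update = 10     # assume ~10 floating-point ops per synaptic event

synapses = neurons * synapses_per_neuron
required_flops = synapses * updates_per_second * flops_per_update
print(f"~{synapses:.1e} synapses, ~{required_flops:.1e} FLOP/s to simulate")
# And that is only the hardware side: you would also need the wiring
# diagram (the connectome), which is the part we do not have.
```

On those (generous) assumptions you land in the neighbourhood of 10^18 operations per second, which is roughly the peak throughput of the largest supercomputers; tighten the assumptions and it gets worse.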

What, then, is the difference between a simulated human mind supported by computer hardware and a human mind made of flesh and blood?

The answer is: nothing.


Ok, and 10,000 years in the future when we get to that point whatever human government exists will deal with the problem. This has nothing to do with EU law in 2017.

   
Made in gb - Legendary Dogfighter - RNAS Rockall

This is assuming that intelligence naturally *requires* to be modeled on the human brain which is, as folks are fond of saying, designed and assembled by unskilled labor. If intelligence, or at least the capacity to indefinitely develop responses to stimuli that have not yet been encountered, develops in such a way as to diverge from the human model, then there needs to be if not rights, then clear boundaries assigned to its use and deployment.

For the same reason nuclear power is heavily regulated. The damage can be lasting, massive, and affect those who have chosen to have no involvement in it, but the benefits are worth the effort. Therefore, a standardised 'off switch' as part of that is a Good Idea.

As an aside, my router crapped out for no less than 20 minutes after I posted this o.o



Automatically Appended Next Post:
 Pouncey wrote:

A forcible shutdown switch is immoral.


As is the use of firearms, lethal injection, and hanging, on defenseless convicted criminals. Except when the community wherein the act is carried out decides it isn't, which is what just happened.

 Pouncey wrote:

Treating them as slaves is immoral.


Treating children who do not yet have the capacity for moral judgment and requiring them to do, effectively, 'chores' is standard operating practice in much of the world. That the children in question will remain children until 'matured' by an outside force, in say, 10,000 years through true autonomous AI (and the extinction of the human race) is entirely justifiable.

This message was edited 3 times. Last update was at 2017/01/12 16:42:04


   
Made in us - Decrepit Dakkanaut - New Orleans, LA

 Pouncey wrote:


We are discussing people who happen to not be biological life forms because their bodies are made of mechanical parts rather than flesh and blood.


We are nowhere near that point. Not likely to happen in our lifetimes, actually.

Number 5 is not alive.

 
   
Made in fi - Locked in the Tower of Amareo
 malamis wrote:
Treating children who do not yet have the capacity for moral judgment and requiring them to do, effectively, 'chores' is standard operating practice in much of the world. That the children in question will remain children until 'matured' by an outside force, in say, 10,000 years through true autonomous AI (and the extinction of the human race) is entirely justifiable.


A lot faster than 10,000 years. Quite possibly there are already people living who will see them with their own eyes. Hell, even *I* might see them, and I expect to live less than average.

This message was edited 1 time. Last update was at 2017/01/12 18:15:09


2024 painted/bought: 109/109 
   
Made in us - 5th God of Chaos! (Yea'rly!) - The Great State of Texas



I'm a human. I don't have an off switch.


Oh yea you do.

-"Wait a minute.....who is that Frazz is talking to in the gallery? Hmmm something is going on here.....Oh.... it seems there is some dispute over video taping of some sort......Frazz is really upset now..........wait a minute......whats he go there.......is it? Can it be?....Frazz has just unleashed his hidden weiner dog from his mini bag, while quoting shakespeares "Let slip the dogs the war!!" GG
-"Don't mind Frazzled. He's just Dakka's crazy old dude locked in the attic. He's harmless. Mostly."
-TBone the Magnificent 1999-2014, Long Live the King!
 
   
Made in gb - Crazed Spirit of the Defiler - Newcastle

The Emperor wouldn't be impressed with what our scientists are trying to do

Hydra Dominatus 
   
Made in us - Longtime Dakkanaut
To move from a philosophical discussion to a more practical discussion...

...I'm not sure why modern neuroscience focuses so heavily on the cortex, when the goal is to produce machines that serve us. Animals that lack cortices are capable of emergent behaviors that would be VERY useful to us, and would be pretty much universally described as intelligent.

Most of the philosophical issues discussed here could be sidestepped by simply avoiding implementing a prefrontal cortex.

Tier 1 is the new Tactical.

My IDF-Themed Guard Army P&M Blog:

http://www.dakkadakka.com/dakkaforum/posts/list/30/355940.page 
   
Made in us - Legendary Master of the Chapter - Chicago, Illinois

 Mr. Burning wrote:
 Peregrine wrote:


No, I am simply pointing out that AI with human levels of sentience is firmly in the realm of science fiction, not reality. It is in theory possible to produce consciousness from a machine (which will likely have, at most, a superficial resemblance to modern computers) but it won't be happening in the foreseeable future and is therefore not relevant to this discussion.



This.


Yes, AI is not currently possible with our current understanding of technology, but scientists are incredibly wary of creating artificial intelligence or actual sentience.

Many scientists fear sentience from anything that is not human, especially something created in an artificial environment, mostly due to the implications of a machine having sentience.

Right now we are developing bionics and enhancements to the human body. We are close to developing brain-interfaced bio-mechanics, but other than that we are nowhere near the implementation or creation of sentient machinery. Still, it is good they actually have a ruling on this; it is actually quite forward thinking on their part, as in the future we will probably have to deal with it in some capacity.




Automatically Appended Next Post:
 NuggzTheNinja wrote:
To move from a philosophical discussion to a more practical discussion...

...I'm not sure why modern neuroscience focuses so heavily on the cortex, when the goal is to produce machines that serve us. Animals that lack cortices are capable of emergent behaviors that would be VERY useful to us, and would be pretty much universally described as intelligent.

Most of the philosophical issues discussed here could be sidestepped by simply avoiding implementing a prefrontal cortex.


Again, the problem would arise from the philosophical side: whether it is morally right to do so. What if it evolves? That is entirely possible with organic beings; we have yet to see a machine evolve, but if that were to slip by the scientists or engineers, it would be bad.

I do agree with you that it could be useful. But for all we know we could accidentally make a predator robot or something similar, and I do not think many of them want to risk it.


Automatically Appended Next Post:
 Frazzled wrote:


I'm a human. I don't have an off switch.


Oh yea you do.


The human off switch is called bullet to head syndrome. Very common, I hear, in Chicago.

This message was edited 3 times. Last update was at 2017/01/12 20:52:30


From whom are unforgiven we bring the mercy of war. 
   
 