
Self-driving cars -- programming morality

Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

We had a thread on this some time ago, when the Uber car in Tempe, Arizona ran over and killed a woman who was pushing her bike across a fast road.

I couldn't find that thread, so I've made a new one to cover a piece of "research" done by MIT Media Lab. In 2016 they launched an online version of the "Trolley Problem", the Moral Machine, to study people's views on the morality of programming self-driving cars to react to different lethal scenarios.

BBC report here... https://www.bbc.co.uk/news/technology-45991093

There was another report I read somewhere else; I can't remember where, unfortunately. It made the point that self-driving cars will almost never be programmed to swerve because the best option in nearly any accident scenario is to brake in a straight line to avoid losing control or rolling.
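
In code, that "brake in a straight line" policy is about as simple as it sounds. A minimal sketch (the function name, fields and thresholds are all invented for illustration, not any manufacturer's actual API):

```python
# Minimal sketch of a "never swerve, brake in-lane" collision policy.
# All names and numbers here are illustrative assumptions.

def collision_response(obstacle_distance_m: float, speed_mps: float,
                       max_decel_mps2: float = 8.0) -> dict:
    """Always brake hard while holding the lane; never swerve."""
    # Straight-line stopping distance: v^2 / (2a)
    stopping_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)
    return {
        "steering": "hold_lane",  # swerving risks loss of control or rolling
        "brake": "maximum",
        "collision_unavoidable": stopping_distance_m > obstacle_distance_m,
    }

print(collision_response(obstacle_distance_m=30.0, speed_mps=20.0))
```

Note there is no victim-weighing in there at all; the car just sheds speed as predictably as possible.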

The other thing that occurred to me is that the car's never going to know the age and sex of the potential victims.

From that angle, the whole exercise seems more about gauging social attitudes than actual guidance for programmers. The results are interesting anyway.



   
Made in us
Fixture of Dakka






 Kilkrazy wrote:
The other thing that occurred to me is that the car's never going to know the age and sex of the potential victims.


Anyone participating in an automotive accident is unlikely to be aware of that either, let alone able to use that information in split-second decision-making. Unless you're talking about passengers. Why would it even factor into the best course of action anyway?

"The Omnissiah is my Moderati" 
   
Made in gb
Ridin' on a Snotling Pump Wagon






On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations where the AI has suffered the 'future' version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably should, say that the car's owner needs to properly monitor its performance and take over control as and when needed. But also... if the programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.

   
Made in gb
Regular Dakkanaut





I would suggest that the programmer is no more to blame than the designer who built a car capable of exceeding the speed limit is responsible for the speeding motorist. If the car is allowed onto the market with a defective AI, that's a different matter; there was a similar situation fairly recently where a car's engine management computer overrode the driver's input and the car accelerated regardless.

I would suggest that so long as the driver doesn't obviously second-guess the AI for the worse (like ploughing into a queue of bus passengers rather than a lamp post), they'll be in the clear, provided their intervention can't be judged to be subjectively or objectively detrimental. I do appreciate that those two standards can be contradictory, but then is that necessarily any different from how insurance claims are handled now?
   
Made in us
Douglas Bader






Simple solution: program the car to kill all pedestrians, and all other drivers not using that brand of car. Arm it with appropriate weapons to accomplish this task. Now you no longer have to worry about whether it will kill a pedestrian by accident; you have certainty.

   
Made in gb
Regular Dakkanaut





In the UK the police no longer refer to Accidents but to Incidents; the logic is that "accident" implies nobody was at fault, whereas most incidents are preventable and someone or something is at fault. To be honest, I think it makes more sense.
   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

Pedestrians will be allowed to carry hand grenades.

   
Made in gb
Regular Dakkanaut





 Kilkrazy wrote:
Pedestrians will be allowed to carry hand grenades.


About the same time that I'll be allowed to install engine-mounted machine-guns, I hope. As a concession I'll allow them to be aimed and fired by the observer, or passenger, or whatever you want to call them... Make it a requirement for pedestrians, and especially cyclists, to get some sort of training first though, especially in the UK.
   
Made in us
Infiltrating Prowler





Portland, OR

 Kilkrazy wrote:
It made the point that self-driving cars will almost never be programmed to swerve because the best option in nearly any accident scenario is to brake in a straight line to avoid losing control or rolling.
The main reason to program it not to swerve, or not to do controlled braking toward the side of the road etc., is that the car can't account for who has the better reaction times. Say it could have braked to the right into an oncoming or side lane: would the other drivers be able to respond appropriately in time? These issues go away to a degree once all vehicles are required to be part of the same network. When Car A swerves to avoid a pedestrian because it knows Cars B and C, which it could hit by swerving, know how to auto-correct, then you get the better solution. Until you get to that point, you have to program for what you consider the least damaging or most controlled area, which is in front of the car.

A good majority of the earlier accidents involving self-driving cars were proven to be the driver's fault: when the car went to brake or correct, the driver took the wheel and accelerated or braked instead, causing the accident, rather than relying on the car to do the job. It is kind of the same thing, except in this case everyone around the car isn't part of the AI network. Once all vehicles are networked (you could in theory even program bicycles to brake), the network as a whole works better and is easier to program for accidents.
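
The decision rule being described here - only swerve when every networked car the manoeuvre would affect has confirmed it can compensate, otherwise default to braking in the space in front of you - could be sketched like this (purely hypothetical; no such V2V standard exists today):

```python
# Hypothetical sketch: swerve only if every affected networked vehicle
# confirms it can auto-correct; otherwise brake in the car's own lane.

class V2VNetwork:
    """Toy stand-in for a shared vehicle network."""
    def __init__(self, responsive_car_ids: set):
        self.responsive = responsive_car_ids

    def confirms_auto_correct(self, car_id: str) -> bool:
        return car_id in self.responsive

def choose_manoeuvre(cars_affected_by_swerve: list, net: V2VNetwork) -> str:
    if cars_affected_by_swerve and all(
            net.confirms_auto_correct(c) for c in cars_affected_by_swerve):
        return "swerve"
    # Default: controlled braking in the one area the car controls.
    return "brake_in_lane"

net = V2VNetwork({"car_B", "car_C"})
print(choose_manoeuvre(["car_B", "car_C"], net))      # swerve
print(choose_manoeuvre(["car_B", "human_car"], net))  # brake_in_lane
```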
   
Made in us
Legendary Master of the Chapter






 simonr1978 wrote:
In the UK the police no longer refer to Accidents but to Incidents; the logic is that "accident" implies nobody was at fault, whereas most incidents are preventable and someone or something is at fault. To be honest, I think it makes more sense.


I think that was brought up in Hot Fuzz.

Something about "accident" implying that no one was at fault, or that there was no foul play.


   
Made in gb
Decrepit Dakkanaut




UK

I don't think you can have an AI that relies on a human driver as a backup for regular driving. A self-driving car must be able to operate without any driver intervention at any stage of a normal journey on normal roads, and indeed in all but the most extreme driving conditions.

Because a human who is not driving is not going to be paying full attention to the road. Sure, the first few times they head out they will, but once they don't have to do anything they will get lazy fast. It's perfectly natural. Heck, my father got an automatic car recently and has been driving it for six months; the first time he got back into a manual (he'd been driving manuals for decades) it took a few trips before he always remembered to use the clutch. Now imagine that you don't have to touch the brake, pedals or wheel for weeks or months at a time when driving. You don't have to watch at corners or turnings, or pay attention to roundabouts or lights.

In that situation, even if the driver spots a problem there is an even higher chance of them making a wrong judgement call on the controls. And that's if they can spot the issue and react fast enough; even with your hands and feet already on the controls that's a tiny time frame, and if you've got to find the wheel and pedals first you're already losing a huge chunk of reaction time.





Insurance is where it will be won or lost. If the insurance companies pay out, people will get driverless cars; if there's no payout and no agreed framework, there's a good chance people won't adopt them, for fear of not being covered. One bonus is that a driverless car can carry a black box and monitor and record everything it is aware of, so in one way insurance should be easier: the insurer can check the car's black box (assuming it survives) and review what really happened from the car's perspective.
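
The black-box idea is cheap to build, too: keep a rolling buffer of the car's state so the last few minutes before any incident are always on record. A toy sketch (fields and sample rate invented for illustration):

```python
# Toy black-box recorder: a ring buffer holding the most recent samples
# of vehicle state for post-incident review. All fields are invented.

from collections import deque
import time

class BlackBox:
    def __init__(self, max_samples: int = 3000):  # e.g. ~5 min at 10 Hz
        self.buffer = deque(maxlen=max_samples)   # oldest samples drop off

    def record(self, speed_mps: float, steering_deg: float, brake_pct: float):
        self.buffer.append({
            "t": time.time(),
            "speed_mps": speed_mps,
            "steering_deg": steering_deg,
            "brake_pct": brake_pct,
        })

    def dump(self) -> list:
        """Everything the insurer (or a court) gets to replay."""
        return list(self.buffer)
```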




I also agree that if driverless cars became the norm, a link-up between cars would improve road safety a lot. If one car brakes hard it can automatically signal all the cars behind it to brake hard, which should, in theory, cut down on injuries and harm (though can you sue your own car/manufacturer for whiplash!?). Swerving would also be safer if the car can, in a split second, decide to swerve and transmit that decision to cars in the other lanes, ordering them to swerve or brake, or even just warning them.
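
The gain from signalling rather than watching brake lights is easy to put numbers on: human drivers react in sequence, while a broadcast reaches everyone at once. A back-of-envelope comparison (both figures assumed, not measured):

```python
# Back-of-envelope: when does the Nth car back start braking?
HUMAN_REACTION_S = 1.5    # assumed perception-reaction time per driver
NETWORK_LATENCY_S = 0.05  # assumed one-shot broadcast latency

def human_chain_delay(n_cars_back: int) -> float:
    # Each driver waits to see the brake lights of the car ahead.
    return n_cars_back * HUMAN_REACTION_S

def networked_delay(n_cars_back: int) -> float:
    # Every car receives the same broadcast almost simultaneously.
    return NETWORK_LATENCY_S

print(human_chain_delay(5))  # 7.5 s before the fifth car back brakes
print(networked_delay(5))    # 0.05 s
```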



As for the morality argument: cars already kill a lot of people. The numbers are quite staggering, and it's kind of scary that we can look down on Victorian mills and factories for the dangers they exposed children to, running under machinery, and yet we are happy to get into cars every single day of our lives.

   
Made in nl
Stone Bonkers Fabricator General




We'll find out soon enough eh.

Yeah, the issue I have with the debate is that it always seems to circle back to the Trolley Problem...which never really seemed like much of a problem.

The assumption is always that the person choosing is not at fault for whatever initial disaster is about to befall the victims and further that you have foreknowledge of both possible outcomes, and that makes the "solution" laughably simple - the "moral" response is always to take no action beyond doing your best to stop the vehicle even if the attempt is ultimately futile. The moment you decide "I am going to turn this vehicle because I think killing one person is a lesser evil than allowing five to die in an accident", all responsibility for the result falls on you and you're a murderer, plain & simple.

As to self-driving cars, the problems with those are the same problems we have generally at the moment: urban planning is still stuck in the 20th century, as are people's attitudes towards ownership and transportation.

Pedestrianise city centres except for fixed-route public transport(ie, trams) where necessary. Ensure roadways outside city centres have frequent under- or over-passes so pedestrians have no reason or justification for crossing the roadway itself at all. That alone would fix the issue since self-driving vehicles and pedestrians wouldn't coexist in the same spaces, and if one intrudes into the space of the other the fault for any accident can be clearly assigned to the one intruding.

Ideally, you'd also have an AI system running all the roads within and near a city, controlling individual vehicles to minimise travel time and prevent accidents, with the vehicles' onboard autonomous navigation only kicking in on nice, simple, straight motorways between cities (country backroads in older nations would be an issue, but hard speed limits in those scenarios should be sufficient to keep them orders of magnitude safer than a human driver). Even more ideally, we'd move past this daft idea that everyone - even city and suburb dwellers whose most strenuous driving challenge is the school run or a trip down the shops - has to own at least one personal car, and just have the city AI operate fleets of autonomous taxis.

   
Made in us
Gun Mage





We just need to avoid a 0th Law of Robotics scenario where the cars take over society to stop us from killing each other.
   
Made in gb
Decrepit Dakkanaut




UK

Yodhrin, much of that works for urban areas and for inter-urban travel. Indeed, I wager that if the UK had kept its rail network and built on its strengths, the mass car ownership we see today could have been partly curtailed.

That said, the countryside is still a thing, so self-driving cars would still have to react to things like people, horses, cows and birds. Heck, even in urban areas you get foxes, badgers, squirrels and more that have to be accounted for. Not to mention people working on the road network, or others who might stray onto the roads for any number of reasons (breakdowns etc.).



A Blog in Miniature

3D Printing, hobbying and model fun! 
   
Made in us
Legendary Master of the Chapter






 Overread wrote:
Yodhrin, much of that works for urban areas and for inter-urban travel. Indeed, I wager that if the UK had kept its rail network and built on its strengths, the mass car ownership we see today could have been partly curtailed.

That said, the countryside is still a thing, so self-driving cars would still have to react to things like people, horses, cows and birds. Heck, even in urban areas you get foxes, badgers, squirrels and more that have to be accounted for. Not to mention people working on the road network, or others who might stray onto the roads for any number of reasons (breakdowns etc.).




There is always the option to set zones where autonomous cars would and wouldn't work. I mean, I dunno about the UK, but in the US there are always a lot of problems on the freeways, and it would be nice if all vehicles eventually used swarm traffic tactics to keep everyone moving better.

   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

 Mad Doc Grotsnik wrote:
On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations where the AI has suffered the 'future' version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably should, say that the car's owner needs to properly monitor its performance and take over control as and when needed. But also... if the programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.


Indeed. Self-driving tech is going to be sacrificed on the altar of litigation. Someone/everyone will get sued and the different corporations will realize that self-driving vehicles are a terrible idea from that standpoint, so they'll ditch the idea like a hot potato. It'll be just another fad that comes and goes in the mid/late 21st century.

   
Made in us
Infiltrating Broodlord





United States

The only proper morality my self driving car should have is to make sure I, as the driver, have the best chance to survive any accident.

   
Made in fi
Locked in the Tower of Amareo





 Grey Templar wrote:
 Mad Doc Grotsnik wrote:
On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations where the AI has suffered the 'future' version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably should, say that the car's owner needs to properly monitor its performance and take over control as and when needed. But also... if the programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.


Indeed. Self-driving tech is going to be sacrificed on the altar of litigation. Someone/everyone will get sued and the different corporations will realize that self-driving vehicles are a terrible idea from that standpoint, so they'll ditch the idea like a hot potato. It'll be just another fad that comes and goes in the mid/late 21st century.


That would be a pity - burying the only hope (albeit not a guarantee) of significantly reducing deaths in traffic.

   
Made in us
Keeper of the Flame





Monticello, IN

TheWaspinator wrote:We just need to avoid a 0th Law of Robotics scenario where the cars take over society to stop us from killing each other.


It's a real shame that THAT movie will forever be at the forefront of Asimov's legacy, despite the fact that he hated Robot-as-Menace stories and the tenets of that movie run counter to EVERYTHING he wrote that dealt with robots.




The driverless car phenomenon is not going to work unless it is implemented wholesale. Having half the cars driverless while the other half are skittish humans will never work out.



Automatically Appended Next Post:
As far as programming morality? It's like people WANT Skynet to happen...



   
Made in us
Decrepit Dakkanaut





 Overread wrote:

I also agree that if driverless cars became the norm, a link-up between cars would improve road safety a lot. If one car brakes hard it can automatically signal all the cars behind it to brake hard, which should, in theory, cut down on injuries and harm (though can you sue your own car/manufacturer for whiplash!?). Swerving would also be safer if the car can, in a split second, decide to swerve and transmit that decision to cars in the other lanes, ordering them to swerve or brake, or even just warning them.



As for the morality argument: cars already kill a lot of people. The numbers are quite staggering, and it's kind of scary that we can look down on Victorian mills and factories for the dangers they exposed children to, running under machinery, and yet we are happy to get into cars every single day of our lives.


In my MBA program, some classmates did an innovation study involving automated vehicles/self-driving cars. It seems that, from the engineering/programming/academic side of things, the #1 drawback to the technology is broadband capacity. A lot of experts theorize that once we hit 5G broadband, automated vehicles will become much more viable.

And that was across almost all forms of automated vehicles (meaning the systems used to determine location, where to go, etc.). A further trick will be whether an industry standard develops before these things hit the road full-time; otherwise we'll be in for some real nightmare scenarios. By this I mean that some systems rely on sensors placed on road signage, other vehicles and all over the place to sort of sonar/radar their way around, while others are "purely" GPS-based, with other variations beyond that. It will be problematic if Ford uses proximity-based programming while Toyota has GPS-only plus "eyesight" (the collision-detection stuff they currently have); they won't exactly interact well together, because one system relies on the other vehicle having a sensor to know where an obstacle is, while the other doesn't.
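
That mismatch is easy to picture: a perception stack built around transponders literally cannot see a vehicle that doesn't carry one, while a vision/GPS stack can. A deliberately crude sketch of the problem (classes and fields invented, not any real system):

```python
# Crude illustration of the interoperability worry: a beacon-reliant
# stack misses anything that doesn't broadcast. Everything here is invented.

class BeaconPerception:
    """Sees only objects that carry a transponder."""
    def detect(self, objects: list) -> list:
        return [o for o in objects if o.get("has_beacon")]

class VisionPerception:
    """Sees anything physically present, beacon or not."""
    def detect(self, objects: list) -> list:
        return objects

scene = [{"id": "ford", "has_beacon": True},
         {"id": "toyota", "has_beacon": False}]

print([o["id"] for o in BeaconPerception().detect(scene)])  # ['ford'] only!
print([o["id"] for o in VisionPerception().detect(scene)])  # sees both
```

Without a common standard, the beacon-based car simply has a hole in its model of the world wherever the other maker's vehicles happen to be.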



Automatically Appended Next Post:
 Just Tony wrote:

It's a real shame that THAT movie will forever be at the forefront of Asimov's legacy, despite the fact that he hated Robot-as-Menace stories and the tenets of that movie run counter to EVERYTHING he wrote that dealt with robots.


While true, I think there is a certain logical gap in his Three Laws that leads some filmmakers and other authors to use the Robot-as-Menace trope the way they do.




 
   
Made in us
Gun Mage





Actually, while that movie takes it to an extreme, a lot of later Asimov stuff winds up with the robots secretly manipulating society for our benefit under 0th law logic since they realize overt takeover would cause us too much psychological and physical harm due to humanity not accepting it.
   
Made in us
Keeper of the Flame





Monticello, IN

Ensis Ferrae wrote:Automatically Appended Next Post:
 Just Tony wrote:

It's a real shame that THAT movie will forever be at the forefront of Asimov's legacy, despite the fact that he hated Robot-as-Menace stories and the tenets of that movie run counter to EVERYTHING he wrote that dealt with robots.


While true, I think there is a certain logical gap in his Three Laws that leads some filmmakers and other authors to use the Robot-as-Menace trope the way they do.




I'm hoping it's not along the lines of that YouTube video posted in the "Mankind continues to learn nothing..." thread, because that was hot garbage on gak paper, and I can poke holes in that guy's reasoning solely using in-story examples.

TheWaspinator wrote:Actually, while that movie takes it to an extreme, a lot of later Asimov stuff winds up with the robots secretly manipulating society for our benefit under 0th law logic since they realize overt takeover would cause us too much psychological and physical harm due to humanity not accepting it.


Nothing I came across had that; care to drop me some titles so I can look them up?

   
Made in us
Douglas Bader






 Just Tony wrote:
The driverless car phenomenon is not going to work unless it is implemented wholesale. Having half the cars driverless while the other half are skittish humans will never work out.


Not true at all. It will work just fine, and arguably is already at the point of working just fine. Remember, automated cars don't have to reach some impossible standard of perfection and zero fatalities to be viable, they just have to be even slightly better than the incompetent and/or negligent humans currently driving cars. Replacing half the cars with automated ones is better than zero automation even if it's less ideal than a theoretical perfect automated system.
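
The arithmetic is straightforward: the fleet-wide rate is just a weighted average, so the benefit appears in proportion to adoption rather than waiting for 100%. Worked with invented round numbers:

```python
# Fleet fatality rate under partial adoption (all numbers illustrative).
HUMAN_RATE = 1.2  # fatalities per 100M vehicle-miles, roughly US order
AUTO_RATE = 0.6   # assume automation merely halves the human rate

def fleet_rate(auto_share: float) -> float:
    return auto_share * AUTO_RATE + (1 - auto_share) * HUMAN_RATE

print(fleet_rate(0.0))  # 1.2 - all human drivers
print(fleet_rate(0.5))  # 0.9 - half automated: 25% fewer deaths already
print(fleet_rate(1.0))  # 0.6
```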

As far as programming morality? It's like people WANT Skynet to happen...


Hardly. Fiction is not reality.


Automatically Appended Next Post:
 Yodhrin wrote:
The moment you decide "I am going to turn this vehicle because I think killing one person is a lesser evil than allowing five to die in an accident", all responsibility for the result falls on you and you're a murderer, plain & simple.


This is exactly the point of the trolley problem: to separate ethical systems like yours which care about intent instead of results from ethical systems that care about results regardless of intent. Under your ethical system it is murder because the intent of the driver is taking responsibility and making an active choice, and the correct action is the one that causes more people to die but leaves the driver's motives pure. Under my ethical system it's a simple calculation of deaths: 5 deaths vs 1 death. Making the choice that results in four additional deaths is immoral, and "but I didn't do anything" is not an excuse.
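
Rendered as code, the two rules really are that different (labels mine, and this is a toy model only):

```python
# Toy contrast of the two ethical rules in this exchange.

def deontological(options):
    # Never actively redirect harm: take the "stay the course" option.
    return next(o for o in options if o["action"] == "stay_course")

def consequentialist(options):
    # Minimise expected deaths, regardless of who acts.
    return min(options, key=lambda o: o["deaths"])

options = [{"action": "stay_course", "deaths": 5},
           {"action": "swerve", "deaths": 1}]

print(deontological(options)["action"])     # stay_course (5 die)
print(consequentialist(options)["action"])  # swerve (1 dies)
```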

(In the case of the self-driving car it makes the ethical calculations very simple: aim for the group of people that causes the most deaths as the highest priority to ensure that they don't escape, then go back to finish off the smaller group if possible.)



   
Made in us
Gun Mage





The biggest example is Daneel Olivaw of the Asimov robot novels surviving secretly into the Foundation era as a shepherd of humanity of sorts.

https://en.m.wikipedia.org/wiki/R._Daneel_Olivaw
   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.

   
Made in nl
Pragmatic Primus Commanding Cult Forces






 Kilkrazy wrote:
I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.

And how will that prevent insurance from being a problem? Self-driving cars may well be safer, but they are most certainly not infallible. There will be accidents with self-driving cars, and lots of them if they ever become anywhere near as common as normal cars. Questions of insurance and responsibility would be a massive issue.

   
Made in gb
Decrepit Dakkanaut




UK

Also there's the perception of blame and insurance to consider. For many, the self-driving car will quickly become like getting into your own personal taxi. You likely won't have to, and won't be expected to, do any actual driving unless the road conditions are deemed unsafe or unsuitable for the AI*.

So many owners would expect insurance to pay out if their self-driving car has an incident. Furthermore, in theory, a self-driving car (proven to be up to date with its maintenance) cannot be assumed to have fallen asleep or driven without due care and attention, so it could justifiably be said to have never made a "mistake", even if its actions result in damage, injury or death.


To say otherwise would undermine the self-driving car marketing. Even if it's not true, a simple splash of "my car crashed and I had to pay for the damages" on the front of the Sun or the Daily Mail could quickly sink sales and put companies in question.



*Which brings in an interesting situation when the AI refuses to drive because of difficult road conditions and an even more out-of-practice human driver takes over! Already in the UK we see big spikes in accidents from even one day of icy roads; imagine if you've not driven for ten months before that (barring perhaps one or two very slow, careful and tricky 1,001-point turns).

   
Made in gb
Mekboy Hammerin' Somethin'





Dorset, England

 simonr1978 wrote:
In the UK the police no longer refer to Accidents but to Incidents; the logic is that "accident" implies nobody was at fault, whereas most incidents are preventable and someone or something is at fault. To be honest, I think it makes more sense.

Someone's been on a speed awareness course recently :-p
   
Made in gb
Decrepit Dakkanaut




UK

 Kroem wrote:
 simonr1978 wrote:
In the UK the police no longer refer to Accidents but to Incidents; the logic is that "accident" implies nobody was at fault, whereas most incidents are preventable and someone or something is at fault. To be honest, I think it makes more sense.

Someone's been on a speed awareness course recently :-p


It's either a speed awareness course or watching Hot Fuzz

   
Made in us
Did Fulgrim Just Behead Ferrus?





Fort Worth, TX

 Kilkrazy wrote:
I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.


The insurance issue is more along the lines of who has the liability: the owner of the car or the programmer of the self-driving AI. Or, if an accident happened because a sensor failed, is the liability on the owner for not having the sensors checked, on the manufacturer of the sensor if it was a defect, or on the programmer of the AI if the AI failed to detect the sensor's failure?

In a way, the insurance companies may actually love this, because then they get to sell even more insurance to even more people at all levels of the manufacture and use of the self-driving tech.



"Through the darkness of future past, the magician longs to see.
One chants out between two worlds: Fire, walk with me."
- Twin Peaks
"You listen to me. While I will admit to a certain cynicism, the fact is that I am a naysayer and hatchetman in the fight against violence. I pride myself in taking a punch and I'll gladly take another because I choose to live my life in the company of Gandhi and King. My concerns are global. I reject absolutely revenge, aggression, and retaliation. The foundation of such a method... is love. I love you Sheriff Truman." - Twin Peaks 
   
 