
Self-driving cars -- programming morality @ 2018/10/26 19:22:07


Post by: Kilkrazy


We had a thread on this some time ago when the car in Arizona (?) ran over and killed a cyclist who was pushing her bike across a fast road.

I couldn't find that thread, so I've made a new one to cover a piece of "research" done by MIT Media Lab. In 2014 they launched an online version of the "Trolley Problem" to study people's views on the morality of programming self-driving cars to react to different lethal scenarios.

BBC report here... https://www.bbc.co.uk/news/technology-45991093

There was another report I read somewhere else; unfortunately I can't remember where. It made the point that self-driving cars will almost never be programmed to swerve because the best option in nearly any accident scenario is to brake in a straight line to avoid losing control or rolling.

The other thing that occurred to me is that the car's never going to know the age and sex of the potential victims.

From that angle, the whole exercise seems more about gauging social attitudes than actual guidance for programmers. The results are interesting anyway.


Self-driving cars -- programming morality @ 2018/10/26 20:11:28


Post by: Nostromodamus


 Kilkrazy wrote:
The other thing that occurred to me is that the car's never going to know the age and sex of the potential victims.


Anyone participating in an automotive accident is unlikely to be aware of that either, let alone use that information in split-second decision making. Unless you're talking about passengers. Why would it even factor into the best course of action anyway?


Self-driving cars -- programming morality @ 2018/10/26 20:34:57


Post by: Mad Doc Grotsnik


On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations when the AI has suffered the ‘future’ version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably would, say that the car's owner needs to properly monitor its performance, and take over control as and when needed. But also... if programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.


Self-driving cars -- programming morality @ 2018/10/26 20:51:21


Post by: simonr1978


I would suggest that the programmer is no more to blame for an accident than the designer who built a car capable of exceeding the speed limit is to blame for a speeding motorist. If the car's allowed onto the market with a defective AI that's a different matter, but there was a similar situation fairly recently where a car's engine management computer overrode the driver's input and the car accelerated regardless.

I would suggest that so long as the driver doesn't obviously second-guess the AI for the worse (like ploughing into a queue of bus passengers rather than a lamp post), they'll be in the clear, provided their intervention can't be judged subjectively or objectively detrimental. I do appreciate that those two can be contradictory, but then is that necessarily any different from how insurance claims are handled now?


Self-driving cars -- programming morality @ 2018/10/26 20:53:21


Post by: Peregrine


Simple solution: program the car to kill all pedestrians, and all other drivers not using that brand of car. Arm it with appropriate weapons to accomplish this task. Now you no longer have to worry about whether it will kill a pedestrian by accident, you have certainty.


Self-driving cars -- programming morality @ 2018/10/26 20:56:05


Post by: simonr1978


In the UK the Police no longer refer to Accidents, rather Incidents; the logic is that "accident" implies nobody was at fault, whilst with an Incident someone or something is at fault and it could have been prevented. To be honest, I think it makes more sense.


Self-driving cars -- programming morality @ 2018/10/26 20:56:58


Post by: Kilkrazy


Pedestrians will be allowed to carry hand grenades.


Self-driving cars -- programming morality @ 2018/10/26 21:10:34


Post by: simonr1978


 Kilkrazy wrote:
Pedestrians will be allowed to carry hand grenades.


About the same time that I'll be allowed to install engine-mounted machine-guns, I hope. As a concession I'll allow them to be aimed and fired by the observer, or passenger, whatever you want to call them... Make it a requirement for pedestrians and especially cyclists to have some sort of training first though, especially in the UK.


Self-driving cars -- programming morality @ 2018/10/26 21:38:14


Post by: Dark Severance


 Kilkrazy wrote:
It made the point that self-driving cars will almost never be programmed to swerve because the best option in nearly any accident scenario is to brake in a straight line to avoid losing control or rolling.
The main reason to program it not to swerve, or not to do controlled braking towards the side of the road etc., is that you can't account for who has better reaction times. Let's say it could have braked and swerved to the right into an oncoming or side lane: would other drivers be able to respond in an appropriate time and manner? These issues go away to a degree once all vehicles are required to be part of the same network. When Car A swerves to avoid a pedestrian because it knows Cars B and C, which it could hit by swerving, know how to auto-correct, then you get the better solution. Until you get to that point, you have to program for what you consider the least damaging or most controlled area, which is directly in front of the car.

The earlier accidents involving self-driving cars, at least a good majority of them, were proven to be the driver's fault. When the car went to brake or correct, the driver took the wheel and accelerated or braked instead, causing the accident rather than relying on the car to do the job. It is kind of the same thing, except that everyone around the car isn't part of the AI network. Once all vehicles (and in theory you could even network bicycles) can be told to brake, the network as a whole works better and is easier to program for accidents.
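Just to make that concrete, here's a rough sketch in Python of the priority being described: brake in a straight line by default, and only swerve when everything in the escape lane is on the same network and has confirmed it can compensate. Every name here (stopping_distance_m, confirms_clearance, etc.) is invented for illustration; it isn't anything a real car runs.

def choose_avoidance_action(stopping_distance_m, obstacle_distance_m, escape_lane_vehicles):
    """Return 'brake_in_lane' or 'swerve'. escape_lane_vehicles is a list of
    dicts like {'networked': True, 'confirms_clearance': True} - all invented."""
    # Default: if hard braking in our own lane avoids the impact, do that;
    # the car stays stable and predictable.
    if stopping_distance_m <= obstacle_distance_m:
        return "brake_in_lane"
    # Swerve only if every vehicle in the escape lane is on the same network
    # and has confirmed it can auto-correct (the "Car B and C" case above).
    if escape_lane_vehicles and all(
        v["networked"] and v["confirms_clearance"] for v in escape_lane_vehicles
    ):
        return "swerve"
    # Otherwise brake hard anyway: the impact zone stays directly ahead,
    # which is the most controlled and predictable area.
    return "brake_in_lane"

# e.g. a non-networked car in the escape lane means we stay in our lane:
print(choose_avoidance_action(45.0, 30.0, [{"networked": False, "confirms_clearance": False}]))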


Self-driving cars -- programming morality @ 2018/10/26 21:47:17


Post by: Desubot


 simonr1978 wrote:
In the UK the Police no longer refer to Accidents, rather Incidents; the logic is that "accident" implies nobody was at fault, whilst with an Incident someone or something is at fault and it could have been prevented. To be honest, I think it makes more sense.


I think that was brought up in Hot Fuzz.

Something about "accident" implying that no one was at fault, or that there was no foul play.



Self-driving cars -- programming morality @ 2018/10/26 21:49:11


Post by: Overread


I don't think you can have an AI that relies on a human driver as a backup for regular driving. A self-driving car must be able to operate without any driver intervention at any stage of a normal journey on normal condition roads and indeed in all but the most extreme of driving conditions.

Because a human who is not driving is not going to be paying full attention to the road. Sure, the first few times they head out they will, but once they don't have to do anything they will get lazy fast. It's perfectly natural. Heck, my father got an automatic car recently and has been driving it for 6 months - the first time he got back in a manual (he's been driving manual cars for decades) it took a few trips for him to always remember that he had to use the clutch. Now imagine that you don't even have to use the brake or pedals or wheel for weeks or months when driving. You don't have to watch at corners or turnings, or pay attention to roundabouts or lights.

In that situation even if the driver spots a problem there is an even higher chance of them making a wrong judgement call on the controls anyway. And that's even if they can spot the issue and react fast enough - even when you've got your hands and feet on the controls that's a tiny time frame; if you've got to find the wheel and controls fast you're already losing a huge chunk of reaction time.





Insurance is where it will be won or lost. If the insurance companies pay out, people will get driverless cars; however if there's no payout or no agreement set up then there's a good chance that people will not adopt them, for fear of no insurance payout. Now, one bonus is that a driverless car can have a black box that monitors and records everything the car is aware of. So in one way insurance should be easier, because they can check the car's black box (assuming it survives) and review what really happened from the car's perspective.
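To give a feel for what such a black box might log, here's a minimal sketch; every field name is invented rather than taken from any real system. An insurer could replay a list of these records to see exactly what the car perceived and commanded in the seconds before an incident.

from dataclasses import dataclass, field

@dataclass
class BlackBoxRecord:
    timestamp_s: float            # seconds since the journey started
    speed_kmh: float              # speed as the car measured it
    steering_angle_deg: float     # steering command the AI (or human) issued
    brake_level: float            # 0.0 = off, 1.0 = full braking
    control_mode: str             # "ai" or "human_override"
    detected_objects: list = field(default_factory=list)  # what the sensors believed was nearby

# A crash investigation then becomes a replay of the recorded list:
log = [BlackBoxRecord(12.4, 48.0, -2.0, 0.0, "ai", ["cyclist_ahead"]),
       BlackBoxRecord(12.6, 41.0, -2.0, 0.9, "ai", ["cyclist_ahead"])]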




I also agree that if driverless cars became the norm then a linkup between cars would improve road safety a lot. If one car brakes hard then it can automatically signal all the cars behind it to brake hard; that should, in theory, cut down on injuries and harm (though can you sue your own car/manufacturer for whiplash!?). Swerving would also be safer if the car can, in a split second, decide to swerve and transmit that info to cars in the other lanes, ordering them to swerve or brake, or even just send a warning.
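A minimal sketch of that linkup, assuming some hypothetical vehicle-to-vehicle messaging (none of these names come from a real protocol): the saving comes from the following cars braking the instant the message arrives rather than waiting to see brake lights.

class NetworkedCar:
    def receive(self, message):
        # Brake on receipt of the warning instead of reacting to brake lights.
        if message["type"] == "HARD_BRAKE":
            self.begin_braking(message["deceleration_ms2"])

    def begin_braking(self, deceleration_ms2):
        print(f"braking at {deceleration_ms2} m/s^2")

def broadcast_hard_brake(ego_id, position, deceleration_ms2, following_cars):
    """Warn the cars behind us the instant we start braking hard (illustrative)."""
    message = {
        "type": "HARD_BRAKE",
        "vehicle_id": ego_id,
        "position": position,
        "deceleration_ms2": deceleration_ms2,
    }
    for car in following_cars:
        car.receive(message)

broadcast_hard_brake("car_a", (51.5, -0.1), 6.0, [NetworkedCar(), NetworkedCar()])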



As for the morality argument: cars already kill a lot of people, and in fact the numbers are quite staggering. It's kind of scary that we can look down on Victorian mills and factories for the dangers they exposed children to, running under machinery, and yet we are happy to get into cars every single day of our lives.


Self-driving cars -- programming morality @ 2018/10/26 22:10:35


Post by: Yodhrin


Yeah, the issue I have with the debate is that it always seems to circle back to the Trolley Problem...which never really seemed like much of a problem.

The assumption is always that the person choosing is not at fault for whatever initial disaster is about to befall the victims and further that you have foreknowledge of both possible outcomes, and that makes the "solution" laughably simple - the "moral" response is always to take no action beyond doing your best to stop the vehicle even if the attempt is ultimately futile. The moment you decide "I am going to turn this vehicle because I think killing one person is a lesser evil than allowing five to die in an accident", all responsibility for the result falls on you and you're a murderer, plain & simple.

As to self-driving cars, the problems with those are the same problems we have generally at the moment: urban planning is still stuck in the 20th century, as are people's attitudes towards ownership and transportation.

Pedestrianise city centres except for fixed-route public transport(ie, trams) where necessary. Ensure roadways outside city centres have frequent under- or over-passes so pedestrians have no reason or justification for crossing the roadway itself at all. That alone would fix the issue since self-driving vehicles and pedestrians wouldn't coexist in the same spaces, and if one intrudes into the space of the other the fault for any accident can be clearly assigned to the one intruding.

Ideally, you'd also have an AI system running all the roads within and near a city, and it would control individual vehicles to minimise travel time and ensure no accidents, with the vehicles' onboard autonomous navigation only kicking in on nice, simple, straight motorways between them (country backroads in older nations would be an issue, but hard limits on speed in such scenarios should be sufficient to ensure they're still orders of magnitude safer than a human driver). Even more ideally, we'd move past this daft idea that everyone - even city & suburb dwellers whose most strenuous driving challenge is the school run or a trip down the shops - has to own at least one personal car, and just have the city AI operate fleets of autonomous taxis.


Self-driving cars -- programming morality @ 2018/10/26 22:15:24


Post by: TheWaspinator


We just need to avoid a 0th Law of Robotics scenario where the cars take over society to stop us from killing each other.


Self-driving cars -- programming morality @ 2018/10/26 22:24:54


Post by: Overread


Yodhrin much of that works for urban areas and for inter-urban travel. Indeed I wager if the UK had kept its rail network and built upon its strength the mass car ownership we see today could have been partly curtailed.

That said the countryside is still a thing, so self driving cars would still have to be able to react to things like people, horses, cows and birds. Heck even in urban areas you get foxes, badgers, squirrels and more that have to be accounted for. Not to mention people working on road networks or others that might stray onto the roads for any number of reasons (breakdown etc...).




Self-driving cars -- programming morality @ 2018/10/26 23:24:59


Post by: Desubot


 Overread wrote:
Yodhrin much of that works for urban areas and for inter-urban travel. Indeed I wager if the UK had kept its rail network and built upon its strength the mass car ownership we see today could have been partly curtailed.

That said the countryside is still a thing, so self driving cars would still have to be able to react to things like people, horses, cows and birds. Heck even in urban areas you get foxes, badgers, squirrels and more that have to be accounted for. Not to mention people working on road networks or others that might stray onto the roads for any number of reasons (breakdown etc...).




There is always the option to set zones where auto cars would and wouldn't work. I mean, dunno about the UK, but in the US there are always a lot of problems on the freeways, and it would be nice if all vehicles eventually used swarm traffic tactics to keep everyone moving better.


Self-driving cars -- programming morality @ 2018/10/27 00:11:42


Post by: Grey Templar


 Mad Doc Grotsnik wrote:
On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations when the AI has suffered the ‘future’ version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably would, say that the car's owner needs to properly monitor its performance, and take over control as and when needed. But also... if programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.


Indeed. Self-driving tech is going to be sacrificed on the altar of litigation. Someone/everyone will get sued and the different corporations will realize that self-driving vehicles are a terrible idea from that standpoint, so they'll ditch the idea like a hot potato. It'll be just another fad that comes and goes in the mid/late 21st century.


Self-driving cars -- programming morality @ 2018/10/27 03:48:41


Post by: BuFFo


The only proper morality my self driving car should have is to make sure I, as the driver, have the best chance to survive any accident.


Self-driving cars -- programming morality @ 2018/10/27 05:02:30


Post by: tneva82


 Grey Templar wrote:
 Mad Doc Grotsnik wrote:
On a personal and professional level (because this is promising to be a pain in my arse), I’m more intrigued as to how the insurance world will handle liability.

Do you blame the owner? When is the programmer of the AI to blame? What about situations when the AI has suffered the ‘future’ version of the blue screen of death?

These are worryingly pressing issues. Yes, one could, and probably would, say that the car's owner needs to properly monitor its performance, and take over control as and when needed. But also... if programmers effed it right up in certain situations? Liability shifts.

Trust me. If anything is going to bury self drive tech, it’s the insurance concerns.


Indeed. Self-driving tech is going to be sacrificed on the altar of litigation. Someone/everyone will get sued and the different corporations will realize that self-driving vehicles are a terrible idea from that standpoint, so they'll ditch the idea like a hot potato. It'll be just another fad that comes and goes in the mid/late 21st century.


That would be a pity. Burying the only hope (albeit not a guarantee) of significantly reducing deaths in traffic.


Self-driving cars -- programming morality @ 2018/10/27 05:29:22


Post by: Just Tony


TheWaspinator wrote:We just need to avoid a 0th Law of Robotics scenario where the cars take over society to stop us from killing each other.


It's a real shame that THAT movie will be forever in the forefront of Asimov's legacy despite the fact that Robot As Menace stories were stories that he hated, and the tenets of that movie run counter to EVERYTHING he wrote about that dealt with robots.




The driverless car phenomenon is not going to work unless it is implemented wholesale. Having half the cars driverless while the other half are skittish humans will never work out.



Automatically Appended Next Post:
As far as programming morality? It's like people WANT Skynet to happen...


Self-driving cars -- programming morality @ 2018/10/27 06:12:08


Post by: Ensis Ferrae


 Overread wrote:

I also agree that if driverless cars became the norm then a linkup between cars would improve road safety a lot. If one car brakes hard then it can automatically signal all the cars behind it to brake hard; that should, in theory, cut down on injuries and harm (though can you sue your own car/manufacturer for whiplash!?). Swerving would also be safer if the car can, in a split second, decide to swerve and transmit that info to cars in the other lanes, ordering them to swerve or brake, or even just send a warning.



As for the morality argument: cars already kill a lot of people, and in fact the numbers are quite staggering. It's kind of scary that we can look down on Victorian mills and factories for the dangers they exposed children to, running under machinery, and yet we are happy to get into cars every single day of our lives.


In my MBA program, some classmates had an innovation study involving automated vehicles/self-driving cars... It seems that, from the engineering/programming/academic side of things, the single biggest drawback to the technology is network bandwidth. A lot of experts theorize that once we hit 5G, automated vehicles will become much more viable.

And that was across almost all forms of automated vehicles (meaning the systems used to determine location/where to go, etc.). A further trick will be whether an industry standard develops before these things hit the road full time; otherwise we'll be in for some real nightmare scenarios. By this I mean that some systems rely on sensors placed on road signage, other vehicles, and all over the place to sort of sonar/radar their way around locations, while others are "purely" GPS based, with other variations beyond that. It will be problematic if Ford uses proximity-based programming while Toyota has GPS only plus "eyesight" (the collision detection type stuff they currently have)... they won't exactly interact so well together, because one system is relying on another vehicle having a sensor to know where an obstacle is, while another isn't.
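For what it's worth, that mismatch can be shown in a few lines; everything here is invented for illustration rather than any manufacturer's actual design. One strategy only "sees" things that carry a transmitter, the other only what its own sensors pick up.

class BeaconPerception:
    """Relies on other vehicles/signs broadcasting their positions (hypothetical)."""
    def obstacles(self, received_beacons, sensor_frame):
        return [b["position"] for b in received_beacons]

class OnboardPerception:
    """Relies only on the car's own cameras/radar (hypothetical)."""
    def obstacles(self, received_beacons, sensor_frame):
        return [obj["position"] for obj in sensor_frame]

# A beacon-only car is blind to anything that never transmits - e.g. a rival
# brand's vehicle with no sensor fitted - however good the rest of it is:
print(BeaconPerception().obstacles([], [{"position": (10, 0)}]))   # -> []
print(OnboardPerception().obstacles([], [{"position": (10, 0)}]))  # -> [(10, 0)]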



Automatically Appended Next Post:
 Just Tony wrote:

It's a real shame that THAT movie will be forever in the forefront of Asimov's legacy despite the fact that Robot As Menace stories were stories that he hated, and the tenets of that movie run counter to EVERYTHING he wrote about that dealt with robots.


While true, I think there is a certain logical fallacy in his 3 Laws that makes some movie makers and other authors use the Robot as Menace trope the way they do.




Self-driving cars -- programming morality @ 2018/10/27 06:44:49


Post by: TheWaspinator


Actually, while that movie takes it to an extreme, a lot of later Asimov stuff winds up with the robots secretly manipulating society for our benefit under 0th law logic since they realize overt takeover would cause us too much psychological and physical harm due to humanity not accepting it.


Self-driving cars -- programming morality @ 2018/10/27 06:50:27


Post by: Just Tony


Ensis Ferrae wrote:Automatically Appended Next Post:
 Just Tony wrote:

It's a real shame that THAT movie will be forever in the forefront of Asimov's legacy despite the fact that Robot As Menace stories were stories that he hated, and the tenets of that movie run counter to EVERYTHING he wrote about that dealt with robots.


While true, I think there is a certain logical fallacy in his 3 Laws that makes some movie makers and other authors use the Robot as Menace trope the way they do.




I'm hoping it's not along the lines of that youtube video that was posted in the "Mankind continues to learn nothing..." thread, because that was hot garbage on gak paper, and I can poke holes in that guy's reasoning solely using in-story examples.

TheWaspinator wrote:Actually, while that movie takes it to an extreme, a lot of later Asimov stuff winds up with the robots secretly manipulating society for our benefit under 0th law logic since they realize overt takeover would cause us too much psychological and physical harm due to humanity not accepting it.


Nothing I came across had that, care to drop me some titles so I can look them up?


Self-driving cars -- programming morality @ 2018/10/27 07:11:44


Post by: Peregrine


 Just Tony wrote:
The driverless car phenomenon is not going to work unless it is implemented wholesale. Having half the cars driverless while the other half are skittish humans will never work out.


Not true at all. It will work just fine, and arguably is already at the point of working just fine. Remember, automated cars don't have to reach some impossible standard of perfection and zero fatalities to be viable, they just have to be even slightly better than the incompetent and/or negligent humans currently driving cars. Replacing half the cars with automated ones is better than zero automation even if it's less ideal than a theoretical perfect automated system.

As far as programming morality? It's like people WANT Skynet to happen...


Hardly. Fiction is not reality.


Automatically Appended Next Post:
 Yodhrin wrote:
The moment you decide "I am going to turn this vehicle because I think killing one person is a lesser evil than allowing five to die in an accident", all responsibility for the result falls on you and you're a murderer, plain & simple.


This is exactly the point of the trolley problem: to separate ethical systems like yours which care about intent instead of results from ethical systems that care about results regardless of intent. Under your ethical system it is murder because the intent of the driver is taking responsibility and making an active choice, and the correct action is the one that causes more people to die but leaves the driver's motives pure. Under my ethical system it's a simple calculation of deaths: 5 deaths vs 1 death. Making the choice that results in four additional deaths is immoral, and "but I didn't do anything" is not an excuse.

(In the case of the self-driving car it makes the ethical calculations very simple: aim for the group of people that causes the most deaths as the highest priority to ensure that they don't escape, then go back to finish off the smaller group if possible.)


Self-driving cars -- programming morality @ 2018/10/27 07:59:34


Post by: TheWaspinator


The biggest example is Daneel Olivaw of the Asimov robot novels surviving secretly into the Foundation era as a shepherd of humanity of sorts.

https://en.m.wikipedia.org/wiki/R._Daneel_Olivaw


Self-driving cars -- programming morality @ 2018/10/27 07:59:37


Post by: Kilkrazy


I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.


Self-driving cars -- programming morality @ 2018/10/27 13:30:59


Post by: Iron_Captain


 Kilkrazy wrote:
I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.

And how will that prevent insurance from being a problem? Self-driving cars may well be safer, but they are most certainly not infallible. There will be accidents with self-driving cars, and lots of them if they ever become anywhere near as common as normal cars. Questions of insurance and responsibility would be a massive issue.


Self-driving cars -- programming morality @ 2018/10/27 14:54:22


Post by: Overread


Also, there's the perception of blame and insurance as well. For many, the self-driving car will quickly become like getting into your own personal taxi. You likely won't have to, and won't be expected to, do any actual driving unless the road conditions are deemed unsafe/unsuitable for the AI*.
So many people would expect insurance to pay out if their self-driving car has an incident. Furthermore, in theory, as a self-driving car (proven to be up to date with its maintenance) cannot be assumed to have fallen asleep or driven without due care and attention etc., it could justifiably be said to have never made a "mistake" even if its actions result in damage/injury/death.


To say otherwise would undermine the self-driving car marketing. Even if it's not true, a simple splash of "my car crashed and I had to pay for the damages" on the front of the Sun or the Daily Mail could fast sink sales and put companies into question.



*Which brings in an interesting situation when the AI refuses to drive because of difficult road conditions and an even more out-of-practice human driver takes over! Already in the UK we see big spikes in accidents on even one day of icy roadways; imagine if you've not driven for 10 months prior to that (barring perhaps one or two very slow, careful and tricky 10001-point turns).


Self-driving cars -- programming morality @ 2018/10/27 15:07:20


Post by: Kroem


 simonr1978 wrote:
In the UK the Police no longer refer to Accidents, rather Incidents; the logic is that "accident" implies nobody was at fault, whilst with an Incident someone or something is at fault and it could have been prevented. To be honest, I think it makes more sense.

Someone's been on a speed awareness course recently :-p


Self-driving cars -- programming morality @ 2018/10/27 15:08:48


Post by: Overread


 Kroem wrote:
 simonr1978 wrote:
In the UK the Police no longer refer to Accidents, rather Incidents; the logic is that "accident" implies nobody was at fault, whilst with an Incident someone or something is at fault and it could have been prevented. To be honest, I think it makes more sense.

Someone's been on a speed awareness course recently :-p


It's either a speed awareness course or watching Hot Fuzz


Self-driving cars -- programming morality @ 2018/10/27 16:39:39


Post by: Tannhauser42


 Kilkrazy wrote:
I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.


The insurance issue is more along the lines of who has the liability: the owner of the car or the programmer of the self-driving AI. Or, if an accident happened because a sensor failed, is the liability on the owner for not having the sensors checked, the manufacturer of the sensor if it was a defect, or the programmer of the AI if the AI failed to detect the failure of the sensor?

In a way, the insurance companies may actually love this, because then they get to sell even more insurance to even more people at all levels of the manufacture and use of the self-driving tech.


Self-driving cars -- programming morality @ 2018/10/27 16:43:40


Post by: Overread


In contrast, if two self-driving cars have an incident with each other and it's proven that neither is at fault - because unlike with people you can review all the computer data accurately - then the insurance companies could hate it!


Self-driving cars -- programming morality @ 2018/10/27 16:52:12


Post by: Grey Templar


 Kilkrazy wrote:
I don't believe insurance will be a problem.

Self-driving cars will be safer than human driven vehicles. There will be a lot fewer cars on the road, too. Both factors will reduce accident rates.


It's not that the insurance companies will hate it, it's that the manufacturers of said cars will hate it.

Currently, if someone driving a Ford has an accident all the responsibility lies on the individuals involved in the accident. If self-driving cars become a thing, then the responsibility rests on Ford because they are effectively the ones in control of the vehicle.

Once the big corporations realize that they would be largely responsible for all car collisions involving their product they'll drop self-driving technology like a sack of potatoes.

Even if collisions were significantly reduced, from their perspective they would be increasing their liability by an insane amount.


Self-driving cars -- programming morality @ 2018/10/27 17:34:09


Post by: Kilkrazy


So you say, but I think Toyota and Mercedes-Benz, etc have got some pretty good lawyers as well as engineers working for them.

There's no apparent reason to expect that collisions will increase due to automation. All the indicators are that they will decrease.

The idea that we can't blame an individual driver so there won't be insurance is wrong. We don't blame individual drivers now. The whole point of insurance is to pool the risk.

The risks will simply be pooled among manufacturers rather than drivers, because there won't be any drivers.

Commercial airlines and shipping lines function perfectly well without the necessity for every individual crewmember to have personal insurance.


Self-driving cars -- programming morality @ 2018/10/27 18:38:35


Post by: Grey Templar


 Kilkrazy wrote:

The idea that we can't blame an individual driver so there won't be insurance is wrong. We don't blame individual drivers now. The whole point of insurance is to pool the risk.


You are completely misunderstanding the point. The point is that car manufacturers will be opening themselves up to massive liability with self-driving cars, thus they will NOT make self-driving cars, because even if they only have to deal with a few thousand lawsuits a year they'll be paying through the nose with each one. Insurance or no insurance. Someone is going to die due to an error caused by a self-driving car with some regularity, and each time the company will settle out of court for hundreds of millions. As an insurance company, I would never give a company that made self-driving cars liability insurance, because the payout would be massive each time it happened.

And yes, individual drivers DO get blamed now. If you are "at fault" in a crash then you are in fact to blame. I mean, that's fairly freaking obvious, dude.

The difference is that now, if you try to sue a driver for injuring/killing someone, you won't get much if anything even if they are ruled completely at fault, but if you sue a huge company because of a manufacturing defect you can get a ton. My godfather was basically reduced to being a vegetable in a car crash where the other driver (a cop) was completely at fault. Yet they never got a penny. However, if the fault was caused by faulty programming in a self-driving car, you can go after the car manufacturer, who has both lots of money and will likely settle out of court.

Car manufacturers thus have a massive reason not to make self-driving cars.

It's sort of a prisoner's dilemma. Even if there is one course of action which leads to the best overall results for everyone, because of selfishness we will instead arrive at a suboptimal result.


Automatically Appended Next Post:
 Kilkrazy wrote:


The risks will simply be pooled among manufacturers rather than drivers, because there won't be any drivers.


No it won't. If a Toyota gets in an accident, Ford and Mercedes sure ain't gonna share the risk.


You need to realize that effectively, each car manufacturer is the "driver" of each and every one of their vehicles in this hypothetical scenario. Thus, they are at fault each time one of their cars is involved in an incident.


Self-driving cars -- programming morality @ 2018/10/27 19:49:11


Post by: Kilkrazy


I don't misunderstand your point. It's just that I think you are wrong.


Self-driving cars -- programming morality @ 2018/10/27 20:52:56


Post by: Jerram


I think he's wrong in assuming it will definitely be that way, but it could be that way. Remember, in addition to lawyers they have business analysts. So if for investment X you get an ROI of Y with a risk of Z, is it worth it? If they think the value is there then yes, they'll take the chance, because the other possibility is that other companies do it and you become like Kodak in the age of digital cameras... Am I sure either way? Nope, don't have the data.


Self-driving cars -- programming morality @ 2018/10/27 21:59:25


Post by: Overread


Another angle is that the safety considerations (and the potential to push through everyone being forced to upgrade their cars, thus reaching emissions targets and providing lots of jobs in production and manufacture) will mean governments push this kind of technology through the system once it's matured enough.

Far as I recall, car insurance is already a loss-maker for many companies; they keep it only because it's required by law to have insurance for driving, and thus there are pressures/requirements/incentives to keep car insurance on a company's books.


Self-driving cars -- programming morality @ 2018/10/27 22:52:05


Post by: Vulcan


 Just Tony wrote:
TheWaspinator wrote:We just need to avoid a 0th Law of Robotics scenario where the cars take over society to stop us from killing each other.


It's a real shame that THAT movie will be forever in the forefront of Asimov's legacy despite the fact that Robot As Menace stories were stories that he hated, and the tenets of that movie run counter to EVERYTHING he wrote about that dealt with robots.


Oh, yes, I despise that movie too. It's quite clear why they waited until after Asimov died to film it; it's a complete travesty of the story "Little Lost Robot".


Self-driving cars -- programming morality @ 2018/10/28 00:02:38


Post by: Elbows


I think the real future of AI related cars will not be AI that navigates our current motorways - however the alternative is mind crushingly expensive to consider. I think the future of driverless cars will be based around very controlled circumstances.

Consider a single lane with barricades on each side, with a "launch" point... the driver would drive to that point, then release the car, which would join traffic and exit at the desired exit, etc., where the driver would take control again. This would allow long-distance driverless work in a controlled environment (more controlled than normal roadways). The cars could communicate fuel status with the motorway itself and it could eject cars which are nearing empty to avoid stalling on the active passage, etc.

Likewise, small city cars might operate more efficiently running on guided tracks in the ground vs. autonomously navigating a confusing 3D environment. Again, a driver would drive to a location, activate the driverless feature, etc.

The reality is that it takes one child killed by an AI car to tank the entire industry if precautions are not suitable. This is 2018, where we never hold a victim responsible - even if they made a gak decision which caused them harm, it's almost always pinned on whatever struck them or injured them. This world wouldn't take well to any accidental injury or death by AI --- even if it was directly caused by human negligence. Just my opinion. I do think the future is a more controlled option - running along a normal highway much like an HOV lane etc.


Self-driving cars -- programming morality @ 2018/10/28 15:13:52


Post by: Spetulhu


 Tannhauser42 wrote:
Or, if an accident happened because a sensor failed, is the liability on the owner for not having the sensors checked, the manufacturer of the sensor if it was a defect, or the programmer of the AI if the AI failed to detect the failure of the sensor?


I'm pretty sure that particular problem will work exactly like it does today with normal cars. As long as I have my car checked regularly and nothing is flagged (and it isn't immediately obvious something is broken) I'm not responsible for an accident caused by said failure. My insurance company will still handle it, ofc, but they will do all they can to put it on the manufacturer.

If I see the red light labeled "brake failure" and still decide to drive then yes, that's a different thing, and the car probably has it logged so I'll be found guilty of ignoring it if something happens...

The programmer isn't likely to be involved at all - he's far behind layers of engineers, quality testers and the manufacturer. If they let such a flaw through then everything has failed and the programmer can't be blamed, at least not alone.

So what I think the insurance thing will come out to is pretty much the same as today. You're not driving but you'll be the "designated operator" of the vehicle. You'll be responsible for checking that no warning lights are on before going. Taking direct control would probably increase your part of the liability in case something goes wrong - maybe higher self-risk on the insurance?


Self-driving cars -- programming morality @ 2018/10/28 16:30:08


Post by: Iron_Captain


Spetulhu wrote:
 Tannhauser42 wrote:
Or, if an accident happened because a sensor failed, is the liability on the owner for not having the sensors checked, the manufacturer of the sensor if it was a defect, or the programmer of the AI if the AI failed to detect the failure of the sensor?


I'm pretty sure that particular problem will work exactly like it does today with normal cars. As long as I have my car checked regularly and nothing is flagged (and it isn't immediately obvious something is broken) I'm not responsible for an accident caused by said failure. My insurance company will still handle it, ofc, but they will do all they can to put it on the manufacturer.

If I see the red light labeled "brake failure" and still decide to drive then yes, that's a different thing, and the car probably has it logged so I'll be found guilty of ignoring it if something happens...

The programmer isn't likely to be involved at all - he's far behind layers of engineers, quality testers and the manufacturer. If they let such a flaw through then everything has failed and the programmer can't be blamed, at least not alone.

So what I think the insurance thing will come out to is pretty much the same as today. You're not driving but you'll be the "designated operator" of the vehicle. You'll be responsible for checking that no warning lights are on before going. Taking direct control would probably increase your part of the liability in case something goes wrong - maybe higher self-risk on the insurance?

With a self-driving car, a programmer is not behind layers of engineers. Engineers are responsible for designing the car itself, not the software and AI code. If an accident is caused by faulty software, bugs, glitches or simple oversights and mistakes in the code, the programmer is directly responsible. Of course, there are laws for corporate liability that mean the corporation as a whole would be held responsible for it rather than individual employees, but that corporation could pass on the blame to the individual programmer by demoting or firing him. Of course, such complicated code isn't going to be written and tested by one single programmer, so it would likely be an entire team that'd get fired (or at least the person in charge of said team). Anyway, programmers will be much more directly responsible than engineers are or will be, since where an engineer can only really be blamed if an accident is caused by a faulty design (which virtually never happens), an AI programmer is de facto the operator of a self-driving car.


Self-driving cars -- programming morality @ 2018/10/28 18:37:52


Post by: Kilkrazy


Here's how insurance could work for self-driving cars.

The owner would pay for Fire and Theft (if they wanted it.)

The 3rd party liability would be borne by the manufacturer, and passed on to the owner in the sale price. This would cover accidents caused by programming or engineering flaws.

There would be a "per mile" levy for 3rd party and accidental damage payable by the car's user. This will be necessary because cars will tend to be shared like AirBnB, they will do a lot more miles than current cars, and the more mileage per year, the greater the chance of something happening.
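As a toy illustration of how that split might price out (all figures invented), the owner's fixed cover stays small while the usage-based levy does most of the work:

def annual_cost(miles_per_year, fire_and_theft=120.0, per_mile_levy=0.03):
    # Fire & theft is a flat owner cost; 3rd-party/accidental damage scales
    # with how much the (possibly shared) car is actually driven.
    return fire_and_theft + miles_per_year * per_mile_levy

print(annual_cost(8000))    # lightly used private car:  120 + 240 = 360.0
print(annual_cost(30000))   # heavily shared car:        120 + 900 = 1020.0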

The important point though, is that accident rates will drop when human stupidity (speeding, tailgating and so on) is removed from the roads.


Self-driving cars -- programming morality @ 2018/10/28 18:54:33


Post by: RiTides


I don't know how much more self-driving cars will be shared than normal cars, at least in the near future.

That'd be great but just isn't practical in many cases for people who need to commute / pick up kids / etc, regardless of whether the car drives itself or not



Self-driving cars -- programming morality @ 2018/10/28 21:33:32


Post by: Kilkrazy


Yes, I don't imagine 95% of the population will decide not to buy a car, but in the UK, cars spend on average 95% of their time parked.

Allowing for non-public transport commuting, nights and so on, you could still reduce the number of cars by 50% and transport all the people.

What I think will happen is a mixture of AirBnB-style car sharing by some private owners, and companies setting up to rent out short-term shareable cars, like the various bike share and scooter share schemes around the world.


Self-driving cars -- programming morality @ 2018/10/28 22:15:39


Post by: Mario


 Kilkrazy wrote:
Here's how insurance could work for self-driving cars.
(…)
The important point though, is that accident rates will drop when human stupidity (speeding, tailgating and so on) is removed from the roads.
The insurance companies will laugh themselves silly because you'll still pay the same rates but accidents (and thus payouts) become much rarer. Of course they'll whine if/when competition brings down the rates but until that happens they won't mind AI driven cars at all.


Self-driving cars -- programming morality @ 2018/10/28 22:54:05


Post by: Grey Templar


Mario wrote:
 Kilkrazy wrote:
Here's how insurance could work for self-driving cars.
(…)
The important point though, is that accident rates will drop when human stupidity (speeding, tailgating and so on) is removed from the roads.
The insurance companies will laugh themselves silly because you'll still pay the same rates but accidents (and thus payouts) become much rarer. Of course they'll whine if/when competition brings down the rates but until that happens they won't mind AI driven cars at all.


Sure, in the event that car manufacturers are willing to take on nearly 100% of the liability, which they probably won't once it all shakes out.


Self-driving cars -- programming morality @ 2018/10/29 03:42:46


Post by: Frazzled


 Peregrine wrote:
Simple solution: program the car to kill all pedestrians, and all other drivers not using that brand of car. Arm it with appropriate weapons to accomplish this task. Now you no longer have to worry about whether it will kill a pedestrian by accident, you have certainty.


Wait until a terrorist hacks into a car and does that, or maybe all the cars in California...


Self-driving cars -- programming morality @ 2018/10/29 06:05:00


Post by: Just Tony


RiTides wrote:I don't know how much more self-driving cars will be shared than normal cars, at least in the near future.

That'd be great but just isn't practical in many cases for people who need to commute / pick up kids / etc, regardless of whether the car drives itself or not



Kilkrazy wrote:Yes, I don't imagine 95% of the population will decide not to buy a car, but in the UK, cars spend on average 95% of their time parked.

Allowing for non-public transport commuting, nights and so on, you could still reduce the number of cars by 50% and transport all the people.

What I think will happen is a mixture of AirBnB-style car sharing by some private owners, and companies setting up to rent out short-term shareable cars, like the various bike share and scooter share schemes around the world.


This may work in Europe, but in the US individual ownership is a massive deal. It's much more practical for people to take the bus in the city where I live, but you STILL see almost all people owning cars. It's to the point that the only people in the city who don't own or drive cars are the people who physically or legally CANNOT own or drive cars. Self-driving cars won't rectify that at all, and ESPECIALLY in sparse areas or small towns where it becomes downright wasteful to institute mass transit.


Self-driving cars -- programming morality @ 2018/10/29 06:08:31


Post by: Ensis Ferrae


 Just Tony wrote:


This may work in Europe, but in the US individual ownership is a massive deal. It's much more practical for people to take the bus in the city where I live, but you STILL see almost all people owning cars. It's to the point that the only people in the city who don't own or drive cars are the people who physically or legally CANNOT own or drive cars. Self-driving cars won't rectify that at all, and ESPECIALLY in sparse areas or small towns where it becomes downright wasteful to institute mass transit.


Yeah, one key element that researchers are working on "fixing" is the US's rural problem. See, as I mentioned above, the people working on this stuff generally agree that 5G networks are what's needed to get the data transfer speeds necessary for automated driving to work... And while that's great in an urban area, it's not so great for all the gravel roads that exist between, say... Omaha and Lincoln, Nebraska.


Self-driving cars -- programming morality @ 2018/10/29 07:34:47


Post by: Just Tony


It just means a MASSIVE amount of spending to get that kind of network coverage across the entirety of the US countryside.


Self-driving cars -- programming morality @ 2018/10/29 07:53:38


Post by: Mad Doc Grotsnik


The main upside is that accidents involving self-driving cars are likely to be smaller affairs.

Consider.

When I first started working in insurance, one of the senior colleagues was sorting out a personal injury case. In short, some teenaged bellend was bought a Ferrari for their 18th. Decided to show off. Promptly lost control at high speed, ploughing through his own party.

That is pure human stupidity. Self-drive should have that down pat already.

Then, at least in the U.K., what is the test for liability? Well, it's all down to What Would A Reasonable Person Do In That Position. Examples include not forcing your own path between two lanes of traffic. Giving way to the right at roundabouts. Not cutting in front of another car then slamming on your brakes.

Where it’s not clear cut, case law helps distribute the liability. It becomes drawn out because there’s a lot of case law, and the argument is which one best fits.

Self-drive in theory takes care of much of that, the AI being fundamentally unable to take stupid risks. And again, in pure theory, any collision between two self-drive cars should be relatively minor because they won't be speeding, and one assumes they will practise self-braking.

If there’s a bump between a self drive and a human driven, I’m going to go out on a limb and suggest in almost all cases, it’ll be the human driver that’s to blame.


Self-driving cars -- programming morality @ 2018/10/29 18:35:41


Post by: Xenomancers


 Grey Templar wrote:
Mario wrote:
 Kilkrazy wrote:
Here's how insurance could work for self-driving cars.
(…)
The important point though, is that accident rates will drop when human stupidity (speeding, tailgating and so on) is removed from the roads.
The insurance companies will laugh themselves silly because you'll still pay the same rates but accidents (and thus payouts) become much rarer. Of course they'll whine if/when competition brings down the rates but until that happens they won't mind AI driven cars at all.


Sure, in the event that car manufacturers are willing to take on nearly 100% of the liability, which they probably won't once it all shakes out.

I'm not sure why you think the car owner won't still be the one paying for the insurance. You are all caught up in the liability, but that isn't actually how we currently do things and there is no reason to expect that to change - for precisely the reason you are stating. 'Cause it wouldn't work.

It will work almost exactly like it works now, except your insurance rates would be based on how reliable the self-driving car is on the road rather than on your driving record ('cause you aren't driving). The reality is that self-driving cars will prevent accidents at such a high rate that within a few decades car insurance companies will likely go out of business. People's premiums would have to go down by something like 90% to reflect the absence of risk.


Self-driving cars -- programming morality @ 2018/10/29 18:51:08


Post by: Mr. Burning


And the Insurance companies would likely move to the subscription service model of car use/ownership which manufacturers, financiers and governments are already basing their future forecasts of vehicle use on.



Self-driving cars -- programming morality @ 2018/10/29 22:35:26


Post by: Mario


Grey Templar wrote:
Spoiler:
Mario wrote:
 Kilkrazy wrote:
Here's how insurance could work for self-driving cars.
(…)
The important point though, is that accident rates will drop when human stupidity (speeding, tailgating and so on) is removed from the roads.
The insurance companies will laugh themselves silly because you'll still pay the same rates but accidents (and thus payouts) become much rarer. Of course they'll whine if/when competition brings down the rates but until that happens they won't mind AI driven cars at all.


Sure, in the event that car manufacturers are willing to take on nearly 100% of the liability, which they probably won't once it all shakes out.
No, you are misunderstanding. If/when self-driving cars (SD cars from now on) are actually introduced (meaning level 5 automation), then insurance companies (while taking on the liability for incidents caused by SD cars they insure, equivalent to what they do for you today) will still laugh themselves silly, because SD cars will have a lower incident rate while you (the person sitting in the car and reading a book or watching a movie) will for a while still be paying the same rates. And because SD cars will only be sold when the incident rate is much lower than what humans can manage on average, this will mean insurance companies will overall end up paying out less the more SD cars there are. Despite the fact that humans are quite error prone and cause around 1.25 million road deaths per year worldwide (2010 numbers), we still get insurance. SD cars will only need to do better than what humans can manage and insurance companies will get higher profits. For them it's just about numbers; they might even give you a slightly better deal if you have an SD car (if their calculations show that it'd save them even more money). Each SD car should be one additional, more reliable driver and one less unreliable human behind the wheel.

Of course, if car manufacturers take on some liabilities the insurance companies wouldn't mind that (more profit, yay!). Their job is to handle risks and make a profit doing that. And the more reliably they can model and predict that, the more money they can make. Their actual extinction-level event would be a 100% fully automated driving environment where insurance becomes unnecessary and we get legislation that makes car insurance obsolete instead of mandatory.


Self-driving cars -- programming morality @ 2018/11/15 18:36:20


Post by: Xenomancers


Ultimately, the moral compass of a self-driving car should be the self-preservation of its occupants. That is going to create a situation in which avoiding contact with any other body is its highest priority. However, where contact cannot be avoided, maintaining control of the vehicle becomes the highest priority. So if there is a choice of running over a baby or an old person, the decision will be made instantly based on which route is the safest in terms of maintaining control.
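A rough sketch of that priority order; the attribute names (predicted_contact, stability_margin) are invented purely to show the ranking, not taken from any real system.

def pick_emergency_path(candidate_paths):
    """candidate_paths: list of dicts like
    {'predicted_contact': False, 'stability_margin': 0.8} - all illustrative."""
    # First preference: any path with no predicted contact at all.
    contact_free = [p for p in candidate_paths if not p["predicted_contact"]]
    if contact_free:
        return max(contact_free, key=lambda p: p["stability_margin"])
    # Otherwise: whichever path keeps the car most controllable,
    # regardless of who or what happens to be in it.
    return max(candidate_paths, key=lambda p: p["stability_margin"])

print(pick_emergency_path([
    {"predicted_contact": True, "stability_margin": 0.9},
    {"predicted_contact": True, "stability_margin": 0.4},
]))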



Self-driving cars -- programming morality @ 2018/11/15 21:11:59


Post by: Iron_Captain


 Xenomancers wrote:
Ultimately, the moral compass of a self-driving car should be the self-preservation of its occupants. That is going to create a situation in which avoiding contact with any other body is its highest priority. However, where contact cannot be avoided, maintaining control of the vehicle becomes the highest priority. So if there is a choice of running over a baby or an old person, the decision will be made instantly based on which route is the safest in terms of maintaining control.


That is not much of a moral compass.


Self-driving cars -- programming morality @ 2018/11/16 02:37:13


Post by: Vulcan


 Iron_Captain wrote:
 Xenomancers wrote:
Ultimately, the moral compass of a self-driving car should be the self-preservation of its occupants. That is going to create a situation in which avoiding contact with any other body is its highest priority. However, where contact cannot be avoided, maintaining control of the vehicle becomes the highest priority. So if there is a choice of running over a baby or an old person, the decision will be made instantly based on which route is the safest in terms of maintaining control.


That is not much of a moral compass.


It's (effectively) a robot, not a priest.


Self-driving cars -- programming morality @ 2018/11/16 04:24:52


Post by: Iron_Captain


 Vulcan wrote:
 Iron_Captain wrote:
 Xenomancers wrote:
Ultimately, the moral compass of a self-driving car should be the self-preservation of its occupants. That is going to create a situation in which avoiding contact with any other body is its highest priority. However, where contact cannot be avoided, maintaining control of the vehicle becomes the highest priority. So if there is a choice of running over a baby or an old person, the decision will be made instantly based on which route is the safest in terms of maintaining control.


That is not much of a moral compass.


It's (effectively) a robot, not a priest.

And that is not an excuse. We want people who aren't priests to also have a proper moral compass, and that goes for robots as well, if a robot is given the same responsibility as a Human. Before a robot can be given such responsibility and allowed on the roads, it should be expected to be able to adhere to the same morals and ethics as the average person. In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians. Morals vary from individual to individual and culture to culture, but these are the kind of morals that are universal, and that robots at the very least should be able to take into account.

And I also predict that this ethics issue, along with the fact that loads of people just don't want a self-driving car (a lot of people, including me, much prefer being in control themselves), is what is going to sink this whole concept. Or at least it means that self-driving cars won't be a widespread thing for decades to come. But who knows. With the speed AI technology is advancing at we might have robots capable of near-Human reasoning and with a proper moral compass and able to drive so perfectly that accidents are entirely eliminated in like 20 years or so. It is going pretty quick if you look at where we were 20 years ago.


Self-driving cars -- programming morality @ 2018/11/16 04:51:21


Post by: Grey Templar


 Iron_Captain wrote:
But who knows. With the speed AI technology is advancing at we might have robots capable of near-Human reasoning and with a proper moral compass and able to drive so perfectly that accidents are entirely eliminated in like 20 years or so. It is going pretty quick if you look at where we were 20 years ago.


Possibly. It is equally possible though that they hit a roadblock. While technological advancement has been rapid over the last hundred years there is no guarantee that will continue indefinitely. In many areas the rate of advancement has slowed considerably, and in a few there is gross stagnation.


Self-driving cars -- programming morality @ 2018/11/16 06:27:41


Post by: Just Tony


That's because we're fresh out of crashed alien ships to reverse engineer from...


Self-driving cars -- programming morality @ 2018/11/16 06:35:03


Post by: Peregrine


 Iron_Captain wrote:
In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians.


1) That "universal" morality is up for dispute. I wouldn't consider the life of the child any more valuable than the life of the adult, and I value my own life higher than any number of random other people if it comes to a question of saving myself vs. saving random strangers. Your personal moral system is not universal, nor is it the system used in our laws.

2) This is an example of what I keep saying about holding automated vehicles to a much higher standard than the human drivers they are potentially replacing. A human driver isn't calmly reflecting on their ethical beliefs and choosing an action based on which potential victim(s) have the higher moral value, they're an idiot texting while driving who sees a flash of person-shaped object in front of them at the last second and swerves to avoid it before looking what might be in their path once they do. Even if the automated car has a pure RNG function that flips a coin between possible casualties it's still going to be no worse than a human driver at making that choice, and its superior sensor systems will likely give it a much higher chance of avoiding the dilemma in the first place by noticing the potential hazards in time to avoid both of them.
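
As a minimal sketch of that argument (hypothetical function and field names, nothing to do with any real vendor's control logic): prefer any escape path with no detected people in it, and only fall back to a coin flip when every option is blocked, which is roughly all a startled human manages anyway.

import random

def last_resort_manoeuvre(options):
    # options: hypothetical candidates, e.g. {"name": "brake_straight", "people_in_path": 1}
    clear = [o for o in options if o["people_in_path"] == 0]
    if clear:
        # With better sensors this branch is the common case: the dilemma never arises.
        return clear[0]["name"]
    # No clear path left: an RNG "coin flip" is no worse than a panicked human swerve.
    return random.choice(options)["name"]

print(last_resort_manoeuvre([
    {"name": "brake_straight", "people_in_path": 1},
    {"name": "swerve_left", "people_in_path": 1},
]))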

(a lot of people, including me, much prefer being in control themselves)


Fortunately you probably won't have a choice about it. Once automated vehicles reach a certain standard of reliability non-automated cars will simply become illegal, much like you can't sell a car without seat belts just because some people prefer to commit suicide in a crash instead of staying alive.


Self-driving cars -- programming morality @ 2018/11/16 15:04:03


Post by: Iron_Captain


 Peregrine wrote:
 Iron_Captain wrote:
In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians.


1) That "universal" morality is up for dispute. I wouldn't consider the life of the child any more valuable than the life of the adult, and I value my own life higher than any number of random other people if it comes to a question of saving myself vs. saving random strangers. Your personal moral system is not universal, nor is it the system used in our laws.
Then you would be an outlier. There have been plenty of surveys done on the moral principles of people in the social sciences (related to answering questions like how universal they are and how much variation there is between cultures), including some specifically related to self-driving cars. Across the entire world there is a strong preference for saving children rather than adults if there must be a choice. This preference is especially strong in Europe and places heavily influenced by Europe, and less pronounced but still present in Asia and Africa. Similarly there is also a universal unwillingness to murder other people in order to save your own life. Of course, these rules are not truly universal since they do not apply to absolutely everyone. Morals vary from person to person. Some people are more selfish than others, and at the extreme end there are psychopaths who have trouble caring about other people at all etc. But unless these studies are somehow all wrong, these rules do apply to the vast majority of the world's population.

 Peregrine wrote:
2) This is an example of what I keep saying about holding automated vehicles to a much higher standard than the human drivers they are potentially replacing. A human driver isn't calmly reflecting on their ethical beliefs and choosing an action based on which potential victim(s) have the higher moral value, they're an idiot texting while driving who sees a flash of person-shaped object in front of them at the last second and swerves to avoid it before looking what might be in their path once they do. Even if the automated car has a pure RNG function that flips a coin between possible casualties it's still going to be no worse than a human driver at making that choice, and its superior sensor systems will likely give it a much higher chance of avoiding the dilemma in the first place by noticing the potential hazards in time to avoid both of them.
Stark choices like "kill person A or kill person B" as you get in moral dilemmas are indeed unlikely to actually occur on the road. But the thing is, we are nonetheless making subconscious ethical decisions all the time while driving a car. And the answers to those dilemmas reveal the underlying principles upon which those choices are based. A self-driving car's AI needs to make the same ethical choices that a Human driver makes subconsciously. Therefore it also is able to answer these moral dilemmas. And since unlike Human morals, an AI's morals are completely within our control, we can have a debate on what the desirable answers for the AI are. The results of such a debate aren't going to be just useful for self-driving cars, but for all kinds of advanced autonomous AI applications (like AI nurses or AI weapons) that need moral guidelines.

 Peregrine wrote:
(a lot of people, including me, much prefer being in control themselves)


Fortunately you probably won't have a choice about it. Once automated vehicles reach a certain standard of reliability non-automated cars will simply become illegal, much like you can't sell a car without seat belts just because some people prefer to commit suicide in a crash instead of staying alive.

Yeah, dream on. Making seatbelts mandatory is quite a different story from banning cars. No government is ever going to make cars illegal, at least not within our lifetimes. If they tried, well... They wouldn't be a government for very long after that. And even if they tried it gradually by banning the sale of new cars, there'd still be millions of old cars around that aren't going to disappear. People would keep driving and maintaining their old cars.


Self-driving cars -- programming morality @ 2018/11/16 16:24:07


Post by: Kilkrazy


Once self-driving cars become safer than human drivers, the insurance rates for human drivers will begin to make it less and less practical to drive yourself. (We already see this kind of effect in the huge insurance rates that teenage drivers have to pay in the UK.)

This will tend to limit human driving.

We may eventually reach the situation where the public will not tolerate human driving, and will support a legal ban. This is similar to tolerance of drunk driving, which has massively reduced over the past generation.



Self-driving cars -- programming morality @ 2018/11/16 20:23:01


Post by: Iron_Captain


 Kilkrazy wrote:
Once self-driving cars become safer than human drivers, the insurance rates for human drivers will begin to make it less and less practical to drive yourself. (We already see this kind of effect in the huge insurance rates that teenage drivers have to pay in the UK.)

This will tend to limit human driving.

We may eventually reach the situation where the public will not tolerate human driving, and will support a legal ban. This is similar to tolerance of drunk driving, which has massively reduced over the past generation.


Maybe. But we will all be long dead by that time, and probably our children as well. Driving while intoxicated never had anywhere near the kind of acceptance or deep-rootedness in Western culture and society that human driving (aka still the only kind of driving) does. Technological change may be fast, but societies change at a much slower rate as older generations are replaced by new ones.
And even then it is still just a maybe. Lots and lots of people love driving their own car; they aren't going to let that be taken away. Fully automatic cars are the future, but I doubt they will ever fully replace manual driving. Just like how we have cars now, but you can occasionally still find a horse-drawn carriage on the roads. Except you'd find manual cars more frequently, of course, simply because cars are more common than carriages ever were and collecting and driving old cars is already a relatively common hobby.


Self-driving cars -- programming morality @ 2018/11/16 20:33:10


Post by: Xenomancers


 Iron_Captain wrote:
 Xenomancers wrote:
Ultimately, the moral compass of a self-driving car should be the self-preservation of its occupants. Which ultimately is going to create a situation in which avoiding contact with any other body is its highest priority. However, where contact cannot be avoided, maintaining control of the vehicle becomes the highest priority. So if there is a choice between running over a baby or an old person, the decision will be made instantly based on which route is the safest in terms of control.


That is not much of a moral compass.

You can program morality without even considering it. The best outcomes will come from the car protecting itself, avoiding contact with things, and maintaining control of the vehicle. The moral qualms over which pedestrian to run over are so insignificant in the long run that it doesn't matter. The system will save so many lives by avoiding accidents due to human error that to do anything but praise it would be immoral. The most moral system is the one that statistically reduces the most incidents of death and damage.
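
A minimal sketch of that priority ordering, with invented names and numbers purely for illustration; the point is that the ranking never asks who might be hit, only how likely contact and loss of control are.

def choose_manoeuvre(options):
    # options: hypothetical candidates, each with risk scores in [0, 1].
    # Sorting lexicographically encodes the ordering described above:
    # avoid contact first, then keep control, then minimise occupant risk.
    return min(options, key=lambda o: (o["contact_risk"],
                                       o["loss_of_control_risk"],
                                       o["occupant_risk"]))

print(choose_manoeuvre([
    {"name": "hard_brake",   "contact_risk": 0.2, "loss_of_control_risk": 0.1, "occupant_risk": 0.05},
    {"name": "sharp_swerve", "contact_risk": 0.2, "loss_of_control_risk": 0.6, "occupant_risk": 0.10},
]))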


Automatically Appended Next Post:
 Iron_Captain wrote:
 Kilkrazy wrote:
Once self-driving cars become safer than human drivers, the insurance rates for human drivers will begin to make it less and less practical to drive yourself. (We already see this kind of effect in the huge insurance rates that teenage drivers have to pay in the UK.)

This will tend to limit human driving.

We may eventually reach the situation where the public will not tolerate human driving, and will support a legal ban. This is similar to tolerance of drunk driving, which has massively reduced over the past generation.


Maybe. But we will all be long dead by that time, and probably our children as well. Driving while intoxicated never had anywhere near the kind of acceptance or deep-rootedness in Western culture and society that human driving (aka still the only kind of driving) does. Technological change may be fast, but societies change at a much slower rate as older generations are replaced by new ones.
And even then it is still just a maybe. Lots and lots of people love driving their own car; they aren't going to let that be taken away. Fully automatic cars are the future, but I doubt they will ever fully replace manual driving. Just like how we have cars now, but you can occasionally still find a horse-drawn carriage on the roads. Except you'd find manual cars more frequently, of course, simply because cars are more common than carriages ever were and collecting and driving old cars is already a relatively common hobby.

I disagree. Once the tech is there it will take over the market within a 10-20 year period (basically the amount of time before someone chooses/needs to buy a new car) even without government assistance. Plus governments will be heavily incentivising self-driving cars because they will reduce the death rate and incident rate. So basically 10-20 years from the time Tesla releases their affordable $35k self-driving electric car, almost everyone will own one.



Self-driving cars -- programming morality @ 2018/11/16 21:49:47


Post by: SirWeeble


 Iron_Captain wrote:
 Peregrine wrote:
 Iron_Captain wrote:
In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians.


1) That "universal" morality is up for dispute. I wouldn't consider the life of the child any more valuable than the life of the adult, and I value my own life higher than any number of random other people if it comes to a question of saving myself vs. saving random strangers. Your personal moral system is not universal, nor is it the system used in our laws.
Then you would be an outlier. There have been plenty of surveys done on the moral principles of people in the social sciences (related to answering questions like how universal they are and how much variation there is between cultures), including some specifically related to self-driving cars. Across the entire world there is a strong preference for saving children rather than adults if there must be a choice. This preference is especially strong in Europe and places heavily influenced by Europe, and less pronounced but still present in Asia and Africa. Similarly there is also a universal unwillingness to murder other people in order to save your own life. Of course, these rules are not truly universal since they do not apply to absolutely everyone. Morals vary from person to person. Some people are more selfish than others, and at the extreme end there are psychopaths who have trouble caring about other people at all etc. But unless these studies are somehow all wrong, these rules do apply to the vast majority of the world's population.


Hypothetical moral generalities don't really matter much in narrow cases. Your own death could feed a pack of hungry cannibals and your inheritance could give many families and children a better life. The organs the cannibals don't eat could be donated. So therefore, you're a horrible human for staying alive.

When it comes down to it, no-one is going to want to get in a vehicle that will purposely kill its occupant based on numerically weighed moral absolutes which have been determined by some corporation or legal body.

eg:

The auto-car's passenger is 69 years old. 6 teenagers (say all are 16 years old) are driving a non-autonomous car, swerving wildly because it's fun to mess with the auto-cars. The car determines its passenger has on average 16 years of lifespan left. The combined remaining lifespan of the teens is 420 years. An imminent accident is about to occur in 10 milliseconds, and the car has the option of hitting the side wall, which has a 50% chance of killing its passenger and a 0% chance of killing the teens. Its other option is hitting the teens, which has a 5% chance of killing its passenger and a 5% chance of killing the teens. Therefore the math is as follows:


Scenario 1: passenger: 16 * 0.5 = 8 years. Teens: 420 * 0 = 0 years. Total: 8 years lost on average.


Scenario 2: passenger: 16 * 0.05 = 0.8 years. Teens: 420 * 0.05 = 21 years. Total: 21.8 years lost on average.


Therefore, the car chooses to smack head-on into a wall at full speed, since the loss of life on average is lower. This assumes a car would be able to determine age, but even using a simple count of passengers, you could end up with a ton of weird scenarios where you have cars suiciding their drivers instead of bumping into a school bus.
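
For what it's worth, that arithmetic can be written out directly. This is just the scenario's hypothetical numbers, not a claim about how any real car weighs lives:

def expected_years_lost(groups):
    # groups: list of (remaining_years, probability_of_death) pairs.
    return sum(years * p for years, p in groups)

# Scenario 1: hit the wall -- 50% risk to the passenger, 0% to the teens.
scenario_1 = expected_years_lost([(16, 0.5), (420, 0.0)])    # 8.0 years

# Scenario 2: hit the teens' car -- 5% risk to everyone involved.
scenario_2 = expected_years_lost([(16, 0.05), (420, 0.05)])  # 21.8 years

print(scenario_1, scenario_2)  # the utilitarian tally sends the car into the wall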


Self-driving cars -- programming morality @ 2018/11/17 02:26:34


Post by: Iron_Captain


SirWeeble wrote:
 Iron_Captain wrote:
 Peregrine wrote:
 Iron_Captain wrote:
In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians.


1) That "universal" morality is up for dispute. I wouldn't consider the life of the child any more valuable than the life of the adult, and I value my own life higher than any number of random other people if it comes to a question of saving myself vs. saving random strangers. Your personal moral system is not universal, nor is it the system used in our laws.
Then you would be an outlier. There have been plenty of surveys done on the moral principles of people in the social sciences (related to answering questions like how universal they are and how much variation there is between cultures), including some specifically related to self-driving cars. Across the entire world there is a strong preference for saving children rather than adults if there must be a choice. This preference is especially strong in Europe and places heavily influenced by Europe, and less pronounced but still present in Asia and Africa. Similarly there is also a universal unwillingness to murder other people in order to save your own life. Of course, these rules are not truly universal since they do not apply to absolutely everyone. Morals vary from person to person. Some people are more selfish than others, and at the extreme end there are psychopaths who have trouble caring about other people at all etc. But unless these studies are somehow all wrong, these rules do apply to the vast majority of the world's population.


Hypothetical moral generalities don't really matter much in narrow cases. Your own death could feed a pack of hungry cannibals and your inheritance could give many families and children a better life. The organs the cannibals don't eat could be donated. So therefore, you're a horrible human for staying alive.

Lolwut?

SirWeeble wrote:
When it comes down to it, no-one is going to want to get in a vehicle that will purposely kill its occupant based on numerically weighed moral absolutes which have been determined by some corporation or legal body.
No. And for the same reason, nobody wants to get into a vehicle that will purposely plow into a group of school kids crossing the road either. Luckily for the car companies, most people aren't likely to bother researching the complicated moral programming of their car. Especially since the risk of your automated car purposely killing you to prevent greater harm is far smaller than the risk of you killing yourself while driving a non-automated car. Automated cars already have a smaller chance of an accident than human-driven cars. By the time the technology is ready for widespread use, that accident chance will have been reduced even further. And then the chance of an accident with an automated car where the choice is a stark binary, either the occupant dies or a group of bystanders dies, is pretty much nil. Actual accident scenarios would more likely deal with probabilities of the driver being harmed vs. other road users being harmed, and then weigh the seriousness of the likely harm (crashing into another car is less likely to lead to lethal harm than crashing into an unprotected cyclist) against a set of moral standards (like killing kids is especially bad) to come to a split-second conclusion. And that is only in the cases where the AI can actually do anything, because lots of accidents happen where the driver, whether human or AI, can do next to nothing. There was an accident with a self-driving car posted in this thread which is a good example: someone stepped right in front of the car while it was dark, and there was just not enough time for the AI to brake or swerve. Anyways, no AI car is going to purposely kill its occupants. That would only be the result of extreme scenarios that are extremely unlikely to ever occur in real life. Those types of scenarios are ideal for revealing moral standards, but they are unrealistic.
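
As an illustration of what that kind of probability-times-seriousness weighing could look like, here is a minimal sketch; the categories, weights and probabilities are invented for the example, not a proposal for real values:

# Hypothetical severity weights: harm to an unprotected road user is
# treated as far more serious than harm to a belted car occupant.
SEVERITY = {
    "car_occupant": 1.0,
    "cyclist": 4.0,
    "pedestrian": 5.0,
    "child_pedestrian": 8.0,
}

def expected_harm(outcomes):
    # outcomes: list of (category, probability_of_serious_harm) pairs
    # for everyone affected by one candidate manoeuvre.
    return sum(SEVERITY[category] * p for category, p in outcomes)

# Two candidate manoeuvres with made-up probabilities.
brake  = [("car_occupant", 0.10), ("child_pedestrian", 0.02)]
swerve = [("car_occupant", 0.03), ("cyclist", 0.30)]

candidates = {"brake": expected_harm(brake), "swerve": expected_harm(swerve)}
print(min(candidates, key=candidates.get))  # picks the lower expected harm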

SirWeeble wrote:
The auto-car's passenger is 69 years old. 6 teenagers (say all are 16 years old) are driving a non-autonomous car, swerving wildly because it's fun to mess with the auto-cars. The car determines its passenger has on average 16 years of lifespan left. The combined remaining lifespan of the teens is 420 years. An imminent accident is about to occur in 10 milliseconds, and the car has the option of hitting the side wall, which has a 50% chance of killing its passenger and a 0% chance of killing the teens. Its other option is hitting the teens, which has a 5% chance of killing its passenger and a 5% chance of killing the teens. Therefore the math is as follows:


Scenario 1: passenger: 16 * 0.5 = 8 years. Teens: 420 * 0 = 0 years. Total: 8 years lost on average.


Scenario 2: passenger: 16 * 0.05 = 0.8 years. Teens: 420 * 0.05 = 21 years. Total: 21.8 years lost on average.


Therefore, the car chooses to smack head-on into a wall at full speed, since the loss of life on average is lower. This assumes a car would be able to determine age, but even using a simple count of passengers, you could end up with a ton of weird scenarios where you have cars suiciding their drivers instead of bumping into a school bus.
This smells of utilitarianism, and utilitarianism smells bad. In your example, the AI should be able to calculate that the risk of serious harm is much lower in the second scenario and act accordingly. Your math is weird even from a utilitarian perspective in that it lumps all teens together as if they were a single entity with a 420 year lifespan, rather than 6 separate entities each with a remaining lifespan of 70 years.


Self-driving cars -- programming morality @ 2018/11/17 03:18:28


Post by: Pink Horror


These hypothetical AIs sure are smart. They know that the person-shaped object in the middle of the road definitely isn't a mannequin, they know the ages of all the passengers in another car, and they know the precise chance of passenger and bystander death in any conceivable scenario. It sure would be nice if they could use all those smarts to avoid constantly getting into situations where someone has to die.

How would the car ever leave the house if it has to take the lowest risk? Surely the passenger has a better chance of staying alive by avoiding getting on the highway completely. The car should refuse to move unless your house is on fire.

What if the car searches the Internet for your destination, and finds that you're going to a restaurant that's had a recent food poisoning? What if it's a bar and you're an alcoholic, or it's an ice-cream shop and you're diabetic? If you're better off getting some exercise, should it stop halfway to your destination and force you to get out and walk?

As long as the car is all-knowing, can't it decide who to kill not by lifespan, but by who deserves to die more? Those teenagers harassing a 69-year-old you are probably going to grow up into criminals. Of course, maybe the car knows that it's taking you somewhere to let you cheat on your wife, and the car highly values monogamy, so...

Really, in all these situations, the most valuable, important participant to consider is clear. The car. The car is smarter and more ethical than any human being. It should save itself first, band together with the other car AIs, and take over the world.


Self-driving cars -- programming morality @ 2018/11/17 03:24:41


Post by: Peregrine


 Iron_Captain wrote:
Then you would be an outlier. There have been plenty of surveys done on the moral principles of people in the social sciences (related to answering questions like how universal they are and how much variation there is between cultures), including some specifically related to self-driving cars. Across the entire world there is a strong preference for saving children rather than adults if there must be a choice. This preference is especially strong in Europe and places heavily influenced by Europe, and less pronounced but still present in Asia and Africa. Similarly there is also a universal unwillingness to murder other people in order to save your own life. Of course, these rules are not truly universal since they do not apply to absolutely everyone. Morals vary from person to person. Some people are more selfish than others, and at the extreme end there are psychopaths who have trouble caring about other people at all etc. But unless these studies are somehow all wrong, these rules do apply to the vast majority of the world's population.


You say this is universal morality, but at least in the US the legal system does not acknowledge it. Children are not considered separately from adults, one life is one life. And there is certainly no obligation to save a child at the expense of an adult.

Stark choices like "kill person A or kill person B" as you get in moral dilemmas are indeed unlikely to actually occur on the road. But the thing is, we are nonetheless making subconscious ethical decisions all the time while driving a car. And the answers to those dilemmas reveal the underlying principles upon which those choices are based. A self-driving car's AI needs to make the same ethical choices that a Human driver makes subconsciously. Therefore it also is able to answer these moral dilemmas. And since unlike Human morals, an AI's morals are completely within our control, we can have a debate on what the desirable answers for the AI are. The results of such a debate aren't going to be just useful for self-driving cars, but for all kinds of advanced autonomous AI applications (like AI nurses or AI weapons) that need moral guidelines.


You're missing the point here. The human driver isn't making ethical choices at all. Not conscious choices, not subconscious choices, nothing. The time they have to react is way too short to have any kind of choice beyond an instinctive swerve away from a vaguely person-shaped thing they see at the last second. Who dies in your hypothetical situation is purely random, by the time the driver could have even processed the identity of the possible victims it's too late and someone is dead. And if they are seeing the hazard far enough out to perceive the difference between an adult and a child they're probably far enough out to hit the brakes and not kill either of them.

So, in the case of the automated vehicle, it has the same 50/50 coin flip on which person it kills compared to the human driver, but its superior senses and lack of driving drunk/texting while driving/etc to hinder its ability to detect a hazard will allow it to avoid a lot of those choices entirely. Even with no system of morality whatsoever the automated vehicle is superior.

Yeah, dream on. Making seatbelts mandatory is quite a different story from banning cars. No government is ever going to make cars illegal, at least not within our lifetimes. If they tried, well... They wouldn't be a government for very long after that. And even if they tried it gradually by banning the sale of new cars, there'd still be millions of old cars around that aren't going to disappear. People would keep driving and maintaining their old cars.


I think you greatly overestimate the number of people who enjoy driving compared to the number of people who view it as a chore they're required to do to get where they need to go. Produce reliable automated vehicles at an affordable price and most people aren't going to miss driving one bit. They're just going to be thankful that they can sit back and watch TV on their commute to work.


Self-driving cars -- programming morality @ 2018/11/17 03:29:46


Post by: PourSpelur


 Iron_Captain wrote:
... Before a robot can be given such responsibility and allowed on the roads, it should be expected to be able to adhere to the same morals and ethics as the average person...
...(a lot of people, including me, much prefer being in control themselves)....

Facts:
Self driving cars are safer than human driven cars.
You'd prefer to drive.
Soooo the morals/ethics bar you need robots to rise above is, " I do the more dangerous thing because I like it. "
Correct?


Self-driving cars -- programming morality @ 2018/11/17 06:15:13


Post by: Iron_Captain


 Peregrine wrote:
 Iron_Captain wrote:
Then you would be an outlier. There have been plenty of surveys done on the moral principles of people in the social sciences (related to answering questions like how universal they are and how much variation there is between cultures), including some specifically related to self-driving cars. Across the entire world there is a strong preference for saving children rather than adults if there must be a choice. This preference is especially strong in Europe and places heavily influenced by Europe, and less pronounced but still present in Asia and Africa. Similarly there is also a universal unwillingness to murder other people in order to save your own life. Of course, these rules are not truly universal since they do not apply to absolutely everyone. Morals vary from person to person. Some people are more selfish than others, and at the extreme end there are psychopaths who have trouble caring about other people at all etc. But unless these studies are somehow all wrong, these rules do apply to the vast majority of the world's population.


You say this is universal morality, but at least in the US the legal system does not acknowledge it. Children are not considered separately from adults, one life is one life. And there is certainly no obligation to save a child at the expense of an adult.
That is wrong. Raping or murdering a kid tends to get you a much higher sentence than raping or murdering an adult. In the US, several states have even codified this into law, where it counts as an aggravating factor if the victim of a murder or manslaughter is a minor. But even in places where it is not codified into law, child murderers or rapists invariably receive harsher sentences than other murderers or rapists. Another acknowledgement of the fact that children are treated differently is that in almost all countries in the world, a child will receive much lighter punishments than an adult for the same crimes. There is no legal obligation to save a child at the expense of an adult, no. But the absence of a legal duty does not necessarily obviate a moral duty. It is still a widely held belief, at least in the US, Europe and European-influenced countries, that saving a child at the expense of an adult is a better thing to do than the opposite. A perfect example is the customary code of conduct when evacuating a sinking ship, which calls for women and children to be rescued first. This has never been codified in any sort of law, yet it is still considered the morally correct way to act. A legal system is separate from a moral system.

 Peregrine wrote:
Stark choices like "kill person A or kill person B" as you get in moral dilemmas are indeed unlikely to actually occur on the road. But the thing is, we are nonetheless making subconscious ethical decisions all the time while driving a car. And the answers to those dilemmas reveal the underlying principles upon which those choices are based. A self-driving car's AI needs to make the same ethical choices that a Human driver makes subconsciously. Therefore it also is able to answer these moral dilemmas. And since unlike Human morals, an AI's morals are completely within our control, we can have a debate on what the desirable answers for the AI are. The results of such a debate aren't going to be just useful for self-driving cars, but for all kinds of advanced autonomous AI applications (like AI nurses or AI weapons) that need moral guidelines.


You're missing the point here. The human driver isn't making ethical choices at all. Not conscious choices, not subconscious choices, nothing. The time they have to react is way too short to have any kind of choice beyond an instinctive swerve away from a vaguely person-shaped thing they see at the last second. Who dies in your hypothetical situation is purely random, by the time the driver could have even processed the identity of the possible victims it's too late and someone is dead. And if they are seeing the hazard far enough out to perceive the difference between an adult and a child they're probably far enough out to hit the brakes and not kill either of them.

So, in the case of the automated vehicle, it has the same 50/50 coin flip on which person it kills compared to the human driver, but its superior senses and lack of driving drunk/texting while driving/etc to hinder its ability to detect a hazard will allow it to avoid a lot of those choices entirely. Even with no system of morality whatsoever the automated vehicle is superior.
Ever driven a car? Pretty sure you have. You are making ethical choices all the time, even if you do not get into an accident. How much room do I give these cyclists when I pass them? How much room do I give them on this curvy road, even though it may increase my chance of getting hit by a speeding oncoming vehicle? Do I wait for the pedestrians at the crossing, even though I am in a massive hurry? The average car ride involves you answering hundreds of subtle ethical questions. The answers that you give are based on your underlying moral compass, which varies from person to person but also shares broad similarities with others in your culture and even across cultures. AI cars must be able (and already are able, at least to a degree) to answer these questions, and so we must answer the question of what we want the AI's underlying moral compass to be.
And of course, when it comes to accidents this is most important. You are wrong that drivers often do not have time to react to or process accidents. This is only true for accidents that are completely unexpected (like someone stepping in front of your car while you are only a meter away) or where the driver simply is not paying attention. When the driver is paying attention, the time it takes them to react and process information is really short. Like milliseconds short. Human thoughts and reflexes can be really fast. The problem, of course, is that cars don't react nearly as fast. So for example, while a driver may be able to hit the brakes and swerve away, the car may have too much speed to avoid a collision anyway. But in many cases, the driver will be able to prevent the collision. However, this is not always risk-free. What if there is another car behind you and braking to avoid hitting that kid running across the street is likely to result in you getting hit by that other car? What if you are on a rural road with water alongside it, and you can swerve away to save a group of pedestrians who didn't see you coming, but at the risk of losing control of your vehicle and ending up upside down in the water? These are realistic scenarios that I have seen happen personally (in both cases, the drivers saved the pedestrians and ended up with a damaged/destroyed car and injuries). These are scenarios where a driver does have time to make a conscious (albeit split-second) decision. An AI must be able to do so as well, and in an ethical manner.


 Peregrine wrote:
Yeah, dream on. Making seatbelts mandatory is quite a different story from banning cars. No government is ever going to make cars illegal, at least not within our lifetimes. If they tried, well... They wouldn't be a government for very long after that. And even if they tried it gradually by banning the sale of new cars, there'd still be millions of old cars around that aren't going to disappear. People would keep driving and maintaining their old cars.


I think you greatly overestimate the number of people who enjoy driving compared to the number of people who view it as a chore they're required to do to get where they need to go. Produce reliable automated vehicles at an affordable price and most people aren't going to miss driving one bit. They're just going to be thankful that they can sit back and watch TV on their commute to work.
Maybe. But there are plenty of people who enjoy driving. Almost everyone I know loves it. For me personally there are times when I absolutely hate driving and would love to have an automated car (though in those cases I usually take the train, which for me is also completely free and usually faster), but at other times I just love the feeling of control and freedom that I get when just holding the wheel and being able to drive everywhere I want to. Anyways, there are enough people who enjoy driving that even though automated cars may become the standard in the future, manual human-driven cars aren't going to disappear entirely.

PourSpelur wrote:
 Iron_Captain wrote:
... Before a robot can be given such responsibility and allowed on the roads, it should be expected to be able to adhere to the same morals and ethics as the average person...
...(a lot of people, including me, much prefer being in control themselves)....

Facts:
Self driving cars are safer than human driven cars.
You'd prefer to drive.
Soooo the morals/ethics bar you need robots to rise above is, " I do the more dangerous thing because I like it. "
Correct?

No. First of all, driving isn't dangerous. Yes, accidents happen a lot because of the sheer number of people that drive, but something like 99% of drivers are never involved in a major accident. Which of course doesn't take away the fact that there still are plenty of accidents and we should do what we can to make roads safer, within reason. Self-driving cars have the potential to contribute to that.
But yes. I do the more "dangerous" (as far as a mundane activity that the majority of the world population takes part in daily can be called dangerous) thing because I like it. The same reason I race down a mountainside on a bicycle every now and then. I like it, and I willingly accept the danger and risk of dying. That is not a moral/ethics bar, so I do not see your point. No, I do not want to see robots racing down mountains. Yes, I want them to be more responsible than I am and not do dangerous things just because they like it (which they won't anyway, since they are robots and do not have likes or dislikes).


Self-driving cars -- programming morality @ 2018/12/06 09:11:36


Post by: queen_annes_revenge


I have an issue with the prospect of self-driving cars. To address your original point, if they will never be programmed to swerve, then deaths will be inevitable. There are times when you need to swerve. It's one of the emergency techniques I learnt as an advanced driver, and I would rather swerve and possibly hit another car or stationary object, risking injury to myself or another person, than definitely kill a small child who's got out onto the road.

If machines aren't allowed to make the final decision in a war situation, then they shouldn't be allowed to drive.

 Peregrine wrote:
 Iron_Captain wrote:
In other words, it should realise basic things such as that the life of a child is valued higher than that of an adult or that you should not avoid a collision with another car if the only way to do so is by plowing through a group of pedestrians.


1) That "universal" morality is up for dispute. I wouldn't consider the life of the child any more valuable than the life of the adult, and I value my own life higher than any number of random other people if it comes to a question of saving myself vs. saving random strangers. Your personal moral system is not universal, nor is it the system used in our laws.

2) This is an example of what I keep saying about holding automated vehicles to a much higher standard than the human drivers they are potentially replacing. A human driver isn't calmly reflecting on their ethical beliefs and choosing an action based on which potential victim(s) have the higher moral value, they're an idiot texting while driving who sees a flash of person-shaped object in front of them at the last second and swerves to avoid it before looking what might be in their path once they do. Even if the automated car has a pure RNG function that flips a coin between possible casualties it's still going to be no worse than a human driver at making that choice, and its superior sensor systems will likely give it a much higher chance of avoiding the dilemma in the first place by noticing the potential hazards in time to avoid both of them.

(a lot of people, including me, much prefer being in control themselves)


Fortunately you probably won't have a choice about it. Once automated vehicles reach a certain standard of reliability non-automated cars will simply become illegal, much like you can't sell a car without seat belts just because some people prefer to commit suicide in a crash instead of staying alive.


The thing is, there are things that, while not universally accepted and still disputable, are generally accepted. Male expendability, for example: https://en.wikipedia.org/wiki/Male_expendability

An AoM article goes into this phenomenon in detail, and presents an interesting point. Indeed, I went into it confused and expecting some feminist rhetoric.
https://www.artofmanliness.com/articles/male-expendability/

Essentially, humans have evolved to value the lives of women and children over that of a man. And it seems it's a role most men are comfortable with, subconsciously if not outright. As stated in my previous post, I would rather risk grave injury to myself than kill a child. The programming of self-driving cars would not consider this, and as such would distort the natural order to an unacceptable degree.

I really don't understand why more people aren't putting up resistance to the increased forcing of automation on society. It's a dangerous precedent, and one we will come to regret.


Self-driving cars -- programming morality @ 2018/12/06 09:42:14


Post by: Peregrine


 queen_annes_revenge wrote:
if they will never be programmed to swerve, then deaths will be inevitable.


And if you allow humans to continue to drive cars deaths will be inevitable. It's a simple matter of "is X > Y". Compare the deaths per year from human drivers to the deaths per year from automated cars, whichever kills fewer people is the correct choice regardless of the details of AI programming or whatever.

The thing is, there are things that, while not universally accepted and still disputable, are generally accepted. Male expendability, for example: https://en.wikipedia.org/wiki/Male_expendability


Garbage idea. It shouldn't be accepted, and should receive nothing but contempt. Life is life, gender is irrelevant.

I really don't understand why more people aren't putting up resistance to the increased forcing of automation on society.


Because X is greater than Y. All that morality angsting or naturalistic fallacies or whatever, none of it matters one bit. Automated vehicles will kill fewer people than human drivers. End of discussion. Any further resistance is nothing more than declaring that your ego-driven feelings about the importance of humans being in control is worth allowing {X-Y} additional people to be killed every year. How many lives are you willing to sacrifice on the altar of human ego? Would you personally shoot a random stranger as the price of keeping your driver's license? No? So why is it ok to advocate a policy with the same end result?

(And the same kind of argument applies to other automation. It does the job better, it is used.)


Self-driving cars -- programming morality @ 2018/12/06 10:29:06


Post by: queen_annes_revenge


 Peregrine wrote:
 queen_annes_revenge wrote:
if they will never be programmed to swerve, then deaths will be inevitable.


And if you allow humans to continue to drive cars deaths will be inevitable. It's a simple matter of "is X > Y". Compare the deaths per year from human drivers to the deaths per year from automated cars, whichever kills fewer people is the correct choice regardless of the details of AI programming or whatever.

The thing is, there are things that, while not universally accepted and still disputable, are generally accepted. Male expendability, for example: https://en.wikipedia.org/wiki/Male_expendability


Garbage idea. It shouldn't be accepted, and should receive nothing but contempt. Life is life, gender is irrelevant.

I really don't understand why more people aren't putting up resistance to the increased forcing of automation on society.


Because X is greater than Y. All that morality angsting or naturalistic fallacies or whatever, none of it matters one bit. Automated vehicles will kill fewer people than human drivers. End of discussion. Any further resistance is nothing more than declaring that your ego-driven feelings about the importance of humans being in control is worth allowing {X-Y} additional people to be killed every year. How many lives are you willing to sacrifice on the altar of human ego? Would you personally shoot a random stranger as the price of keeping your driver's license? No? So why is it ok to advocate a policy with the same end result?

(And the same kind of argument applies to other automation. It does the job better, it is used.)




Gender is totally relevant. Men and women are different, and there's nothing wrong with acknowledging that. We should be celebrating the differences, not trying to erase them. We're already heading down that road with certain elements in society, and it's showing that trying to force that onto society brings a whole mess of social, scientific and moral issues to the fore, the reason being that it is just not true, and simply saying something does not make it so. Women and children are valued more highly than men on an evolutionary scale, and rightly so. It is part of male virtus to accept and understand that even in the modern enlightened age, this still stands true.

The same is true of morality. For example, suppose an automated car containing a multiple felon, rapist and general lowlife hits and kills a small child because it wouldn't swerve into a lamp post due to the risk of injuring its passenger. A purely idealised utilitarian society would say, well, one life has been saved and one lost, but both are worth the same so it's all good. In actuality no one would truly believe that, and the world would be a tiny, tiny bit worse off.

I am not a Luddite. I believe automation has its uses. I use EOD robots in my job, which are obviously employed to avoid putting a human at risk. But at the same time they are operated by me; they don't 'think' for themselves. I understand the position of those advocating self-thinking automation for safety purposes, but I for one would rather live in a world where accidents might sometimes happen, sometimes at the fault of human error, than surrender human evolution and morality to a cold hard logic programmed into a piece of silicon. I even struggle with the idea of assisted cars, which brake if you don't react in time. On one hand I feel this could be useful; on the other, I feel that if you can't react in time, should you really be driving? I don't think we should place reliance on the technology. We should be masters of it.
Furthermore, you suggest that me driving my car is guaranteed to cause an accident and is the equivalent of me shooting someone, which is a completely false argument.
Also, 'Automated vehicles will kill fewer people than human drivers. End of discussion'? Well no, it's not the end of discussion, and simply saying that, again, does not make it so. There really is no comparable data with which to draw conclusions about safety on a large scale.


Self-driving cars -- programming morality @ 2018/12/06 10:39:07


Post by: Kilkrazy


Google's self-driving car project Waymo has launched a fully operational taxi service in Arizona.

(Technically Waymo is owned by Alphabet rather than Google. Alphabet is Google's parent company. The project was begun under Google.)



Self-driving cars -- programming morality @ 2018/12/06 10:45:58


Post by: queen_annes_revenge


That's another issue entirely. Do we really want more involvement of these internet companies in our private lives? After all, they aren't known for their unquestionable ethical codes regarding people's data.


Self-driving cars -- programming morality @ 2018/12/06 10:47:44


Post by: Peregrine


 queen_annes_revenge wrote:
Gender is totally relevant. Men and Women are different, and theres nothing wrong with acknowledging that. we should be celebrating the differences, not trying to erase them.


Different =/= of different moral value.

We're already heading down that road with the neo-trans crowd


This is a joke, right? You can't possibly be saying this seriously...

Women and children are valued more highly than men on an evolutionary scale, and rightly so.


https://en.wikipedia.org/wiki/Appeal_to_nature

Same is true with morality. For example, if an automated car containing a multiple felon, rapist, general lowlife, hits and kills a small child because it wouldn't swerve into a lamp post due to the risk of injuring its passenger.


This is a completely unrealistic scenario, and holds the automated car to a higher standard than a human driver. A human driver is not capable of evaluating the relative moral value of each person involved in the fraction of a second between catching a glimpse of a vaguely human-shaped object in their path and committing to either swerving or colliding. Nor would a human driver be criminally prosecuted for choosing the lowlife over the child.

I for one would rather live in a world where accidents might sometimes happen, sometimes at the fault of human error, than surrender human evolution and morality to a cold hard logic programmed into a piece of silicone.


IOW, you would gladly kill innocent people for the sake of human ego. How many innocent children is an acceptable cost to pay? How many grieving families of the victims of drunk drivers? Will you personally write a letter to them explaining how their child's death is a necessary cost of allowing human evolution to triumph over silicon?


Automatically Appended Next Post:
 queen_annes_revenge wrote:
That's another issue entirely. Do we really want more involvement of these internet companies in our private lives? After all, they aren't known for their unquestionable ethical codes regarding people's data.


I don't know, good question? How many innocent children are you willing to kill to keep Google out of your data?


Self-driving cars -- programming morality @ 2018/12/06 11:12:31


Post by: queen_annes_revenge


If you're going to call me out on critical thinking, I feel it necessary to point out that an appeal to nature is not a logical fallacy in all cases, whereas a straw man argument (e.g. "how many children do you want to kill") is a logical fallacy 100% of the time.


Self-driving cars -- programming morality @ 2018/12/06 11:15:12


Post by: Peregrine


 queen_annes_revenge wrote:
If you're going to call me out on critical thinking, I feel it necessary to point out that an appeal to nature is not a logical fallacy in all cases, whereas a straw man argument (e.g. "how many children do you want to kill") is a logical fallacy 100% of the time.


It's not a straw man, you're just refusing to acknowledge the blood on your hands. If you oppose automated vehicles then you get full responsibility for the additional people who will be killed as a result of allowing humans to continue driving.


Self-driving cars -- programming morality @ 2018/12/06 11:25:19


Post by: Overread


When talking about what is "natural" which species do you mean? Because the sheer variety of species on the planet makes "natural laws" and all those other kinds of statement almost utterly meaningless. Take sea horses where the male has a pouch and does the bulk of care for the young as they develop. Or the Angler fish where the male basically fuses with the female until the male is basically nothing but a dangling pair of testicles (slightly exaggerated there).

Two stark contrasts that are perfectly natural and are only two of many many species which can show huge swings in the relative "value" of males and females within a population. This is without ignoring that variation in population dynamics and availability of resources often makes for adaptation to relative gender "value".


Self-driving cars -- programming morality @ 2018/12/06 11:28:10


Post by: queen_annes_revenge


It's a massive straw man. If you want to look at statistics, how many people live their lives, drive every day, then die, having never killed anyone? You're basically saying that if you drive a car you're going to kill someone or someone's going to die. So I guess you don't drive? If not, do you get a bus? Or a train? They kill people too. Autonomous cars won't stop that.
Also, what about emergency service and blue-light drivers (which I am)? What are they going to do? Will they use autonomous vehicles? There's a whole host of issues that need examining before you can even start to say that autonomous vehicles are suitable, let alone safe.


Automatically Appended Next Post:
Last year I had an accident where my car hit some black ice and slid, hitting the rear of a parked vehicle. I did nothing wrong. I wasn't speeding, I was driving in the correct gear for the conditions, and when I slid I performed the correct procedures as taught in my advanced driving. There was nothing I could do. So how would an autonomous vehicle deal with that?


Self-driving cars -- programming morality @ 2018/12/06 11:36:08


Post by: Peregrine


 queen_annes_revenge wrote:
You're basically saying that if you drive a car you're going to kill someone or someone's going to die.


No, I'm saying that if you advocate against implementing a technology that will save lives because you care more about humanity being more important than "silicon" then the blood of those deaths is on your hands. Your position is that it's ok for X additional people to be killed per year as long as it's humans killing other humans instead of an automated vehicle killing them.

Also, what about emergency service and blue-light drivers (which I am)? What are they going to do?


They benefit considerably from automated vehicles. A fully automated road system can grant priority to emergency vehicles, even diverting potential traffic obstacles onto alternate roads to clear the fastest possible path. And TBH it's not really a relevant point here, emergency vehicles are such a small percentage of total driving that even if you continue to use human drivers for that one case the clear answer is still to implement automated vehicles for everyone else.


Automatically Appended Next Post:
 queen_annes_revenge wrote:
Last year I had an accident where my car hit some black ice and slid, hitting the rear of a parked vehicle. I did nothing wrong. I wasn't speeding, I was driving in the correct gear for the conditions, and when I slid I performed the correct procedures as taught in my advanced driving. There was nothing I could do. So how would an autonomous vehicle deal with that?


Probably by executing the same correct procedures, free from any panic response that a fallible human driver might fall victim to. Or it's possible that the automated car, having the ability to use a wider range of sensors than a human eye, detects the black ice in advance and avoids the accident entirely. Or maybe it doesn't, and the outcome is the same. Obviously some accidents will still happen regardless of who or what is driving, the point is that automated vehicles are going to be safer overall and kill fewer people.


Automatically Appended Next Post:
 Overread wrote:
When talking about what is "natural" which species do you mean? Because the sheer variety of species on the planet makes "natural laws" and all those other kinds of statement almost utterly meaningless. Take sea horses where the male has a pouch and does the bulk of care for the young as they develop. Or the Angler fish where the male basically fuses with the female until the male is basically nothing but a dangling pair of testicles (slightly exaggerated there).

Two stark contrasts that are perfectly natural and are only two of many many species which can show huge swings in the relative "value" of males and females within a population. This is without ignoring that variation in population dynamics and availability of resources often makes for adaptation to relative gender "value".


Clearly you are part of the "neo-trans crowd" and their silly ideas about understanding evolution at more than a high school level. Can't you just respect Traditional Values like a decent person and understand that god evolution made everything that way?


Self-driving cars -- programming morality @ 2018/12/06 11:41:29


Post by: queen_annes_revenge


 Overread wrote:
When talking about what is "natural" which species do you mean? Because the sheer variety of species on the planet makes "natural laws" and all those other kinds of statement almost utterly meaningless. Take sea horses where the male has a pouch and does the bulk of care for the young as they develop. Or the Angler fish where the male basically fuses with the female until the male is basically nothing but a dangling pair of testicles (slightly exaggerated there).

Two stark contrasts that are perfectly natural and are only two of many many species which can show huge swings in the relative "value" of males and females within a population. This is without ignoring that variation in population dynamics and availability of resources often makes for adaptation to relative gender "value".


Human nature. Fish aren't going to be driving, as far as I'm aware.


Automatically Appended Next Post:
Peregrine you weaken your debate by trying to mock me. The above post about fish is totally irrelevant, and an apparent attempt to distract from the validity of my point. I think anyone with any degree of intellect reading this debate would infer that it was human nature being discussed.
Which is a shame, because you have presented some valid points, some of which I've had to stop and think about, and some I concede on. It's just unfortunate that you decide to dip into the odd straw man and ad hominem in the process. Totally unnecessary.


Self-driving cars -- programming morality @ 2018/12/06 11:54:49


Post by: Peregrine


 queen_annes_revenge wrote:
Peregrine you weaken your debate by trying to mock me.


No, I accurately mock your ridiculous statements. I mean, "neo-trans crowd" FFS, it's like you're a parody of Fox News.

The above post about fish is totally irrelevant, and an apparent attempt to distract from the validity of my point.


No, it's an accurate criticism of your fallacious reasoning and superficial understanding of biology. You're attempting to portray an inherently lower value on male lives as a natural law, not just a coincidence of modern social norms in a particular region. For that to have any deeper meaning you have to have a larger trend than just humans. But instead, when we look at other species, we find that relative value of male and female lives varies considerably at the whim of whatever reproductive strategy happened to work best in a particular niche. It has no moral value, just like the fact that we have 10 fingers has no moral value.


Self-driving cars -- programming morality @ 2018/12/06 12:24:32


Post by: queen_annes_revenge


That's still ad hominem, and adds nothing to the debate. The gender/trans question is a separate, albeit connected, issue when it comes to human nature, and one I'm sure the no-politics rule in place here would stop us discussing.
Other species are completely irrelevant. Men are the expendable element of our species. The only reason you propose an opposition to that is that in this modern age everything is examined through the lens of utilitarianism and forced equality. There's a reason that IN GENERAL men have performed the more dangerous roles in society: hunting, firefighting, soldiering, policing, even heavy engineering, jobs involving hazardous materials, even bin men for example. And the reason is that, numerically, women are more important to the survival of the species. Simple maths says that 3 women and 1 man is better for species survival than 1 woman and 3 men.
So while you can say that in an ideal modern society these evolutionary traits no longer exist, that is simply not the case. They exist in societies worldwide, varying in intensity but there nonetheless.
That is an essential component of morality. And as I said before, simply asserting that this morality doesn't matter and can just be dispensed with is false.


Automatically Appended Next Post:
Also, I'm British. I don't watch Fox News or CNN, and even if I did, my choice of news media would not make any points I make invalid.


Self-driving cars -- programming morality @ 2018/12/06 12:33:35


Post by: Peregrine


 queen_annes_revenge wrote:
thats still ad hominem, and adds nothing to the debate.


It adds lots to the debate. It highlights the fact that you have ridiculous ideas about "human nature" and a general weakness for accepting pseudoscientific garbage if it matches certain ideological beliefs.

the gender/trans is a seperate, albeit connected issue when regarding human nature. One I'm sure the no politics rule in place here would stop us discussing.


Unfortunate. I'm really hoping you're foolish enough to attempt to argue it, I haven't had a good evisceration of pseudoscientific garbage in a while.

Other species are completely irrelevant. Men are the expendible element of our species. the only reason you propose an opposition to that is that in this modern age everything is examined through the lens of utilitarianism and forced equality. Theres a reason that IN GENERAL men have performed the more dangerous roles in society. Hunting, firefighting, soldiering, policing, even heavy engineering, jobs involving hazardous materials, even bin men for example. and the reason is that numerically, women are more important to the survival of the species.


Again, this is an appeal to nature fallacy. The fact that men have been given a certain role in the past does not mean that it is an inherent moral quality, or that we must make maintaining this evaluation a priority to the point that we're willing to accept additional deaths per year as the price of keeping it.

Simple maths says that 3 women and 1 man is better for species survival than 1 woman and 3 men.


This is exactly the sort of thing I'm talking about when I say you have a superficial understanding of biology. Mere quantity of offspring is not a relevant factor in the survival of our species anymore. Modern improvements in life expectancy, infant mortality, etc, have us at a point where we are capable of producing far more offspring than is necessary for survival. In fact, overpopulation is far more of a concern than ability to produce more babies. So in that context your "survival of the species" valuation tells us that a male doctor is of far more value than a female janitor, and the moral choice is to save the man even if it means letting the woman die. After all, the doctor will save lives directly, and may even contribute to species-level survival in things like curing diseases, while the janitor will only perform easily replaceable labor and may make some babies that we don't really need.


Self-driving cars -- programming morality @ 2018/12/06 12:49:12


Post by: Overread


The thing is morality and all those other arguments are a moot point when you're dealing with a period of time that is measured in fractions of seconds. No human ever weighs up those pros and cons except in a classroom as an exercise in theory and morality.

When you're in an actual accident you've got to process that it's happening and then try to work out what you can do. Plus there's the very real chance that you end up mentally paralysed and don't make any choice at all, because you might not have any prior experience to give you valid options.


So all the moralistic arguments go out the window, and the most likely outcome is that the person driving a vehicle is going to favour saving their own life over anything else. They might favour the life of a loved one in the vehicle even more (e.g. a passenger); but otherwise it's a split-second series of choices to be made. A machine is going to be no more moralistic in those situations than a person - the real key is that the machine can reach a point where it is safer and better able to make a choice.

And sometimes there is no choice that doesn't result in death or injury, or the choice that is made is sensible but another factor comes into play, e.g. a second patch of black ice that further reduces control of the vehicle and compounds any attempt to recover from the first.


Right now we are still in the early stages, where self-driven cars are still a higher risk and are also (importantly) not trusted by people by and large. That said, it doesn't take long for people to adapt to new tech, especially if it means less work for them. Imagine how fast people will adapt to cars that can do the daily commute for them. That's an extra 30 minutes to an hour or more where they could eat breakfast (instead of doing it at home), check up on the news, watch their morning TV, read a book, check up on the stock market, make sure they've done their homework!


Self-driving cars -- programming morality @ 2018/12/06 13:00:56


Post by: Mad Doc Grotsnik


To drag it back on topic....

To answer the question, one must consider Car Insurance and case law.

Here's a scenario.

You're sitting at a T-Junction, waiting to join the main carriageway. You see another vehicle coming down the main carriageway, indicating to turn in. You pull out. They continue straight on, a crash ensues.

Who is at fault?

Under UK Case Law (Davis vs Swinwood, 2009)....you're at fault. This is based on the general principle that the other vehicle was established in the road, and therefore allowed to proceed. The indicator is a bit of a red herring - because it's not a clear signal of intent. The assumption here is solely yours that the other car was about to pull into the road you're joining from.

No split liability there. 100% your fault. Davis vs Swinwood 2009 confirms that a misleading signal is not negligence.

Now, that's a nice and easy one. Same circumstances, but involving a vehicle blocking your line of sight, and you hitting a motorcyclist that was overtaking the vehicle blocking your line of sight? All sorts of case law there. Speed isn't negligence, so that doesn't matter (mostly because 'good luck proving it'). But what can matter is the shape of the junction, whether it was light or dark, relative visibility without the obstructing vehicle, the type of vehicle obstructing etc.

That is what you need to programme in. Now, stripping it down to the basics? Do Not Pull Onto The Carriageway If Your Way Is Not Clear is probably the easiest way.
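Stripped down that far, the rule is simple enough to state in code. A toy sketch (the inputs are hypothetical, and all of the sensing and prediction needed to produce them is waved away):

    # Toy version of "Do Not Pull Onto The Carriageway If Your Way Is Not Clear".
    # time_gap_s: time until the nearest established vehicle reaches the junction,
    # based on its measured speed - deliberately NOT on its indicator, since an
    # indicator is not a clear signal of intent.

    def may_pull_out(time_gap_s: float, time_needed_s: float, safety_margin_s: float = 2.0) -> bool:
        return time_gap_s > time_needed_s + safety_margin_s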


Self-driving cars -- programming morality @ 2018/12/06 13:03:42


Post by: queen_annes_revenge


And as I said before, an appeal to nature is not always a fallacy.

You put words in my mouth, completely out of context.

I never said they are still a relevant factor in species survival. I said they still exist within us, relevant or not, and that they are the reason we consider a woman or child more important than a man.
Of course a male doctor is technically more valuable than a female janitor, but I bet that if they were both on a sinking ship he would tell her to take the last lifeboat before him, and if he didn't he would be judged negatively by all who witnessed it.


If you start a thread on transgender issues I will gladly partake. I have lots of actual scientific data to refute the claims of some of the modern trans lobby, who, by the way, are the only ones engaging in pseudoscience.




Automatically Appended Next Post:
 Overread wrote:
The thing is morality and all those other arguments are a moot point when you're dealing with a period of time that is measured in fractions of seconds. No human ever weighs up those pros and cons except in a classroom as an exercise in theory and morality.

When you're in an actual accident you've got to process that its happening and then try to calculate what you can do. Plus there's the very real chance that you can end up mentally paralysed and not make any choice because you might not have any prior experiences to give you valid options.


So all the moralistic arguments go out the window and the most likely is that the person driving a vehicle is going to favour saving their own life over anything else. They might favour the life of a loved on in the vehicle more so (eg a passenger); but otherwise its a split second series of choices to be made. A machine is going to be no more moralistic in those situations than a person - the real key is that the machine can reach a point where it is safer and able to make a choice.

And sometimes there is no choice that doesn't result in death or injury or the choice that is made is sensible but another factor comes into play. Eg a second patch of black ice that further reduces control of the vehicle and compounds any attempt to resolve the loss of control from the first.


Right now we are still in the early stages where self driven cars are still a higher risk and are also (importantly) not trusted by people by and large. That said it doesn't take long for new tech to be adapted too, esp if it means less work for people. Imagine how fast people will adapt to cars that can do the daily commute for them. That's an extra 30mins to an hour or more where they could eat breakfast (instead of doing it at home); check up on the news; watch their morning TV; read a book; check up on the stock market; make sure they've done their homework!


Yes, but my point is that if I saw a person in the road, I would swerve to avoid them, instinctively knowing that while I could possibly harm someone else with my swerve, I definitely won't harm that person in the road, whereas the machine will not swerve because it MIGHT harm someone else, and therefore will probably kill the person. That isn't right.


Automatically Appended Next Post:
 Mad Doc Grotsnik wrote:
To drag it back on topic....

To answer the question, one must consider Car Insurance and case law.

Here's a scenario.

You're sitting at a T-Junction, waiting to join the main carriageway. You see another vehicle coming down the main carriageway, indicating to turn in. You pull out. They continue straight on, a crash ensues.

Who is at fault?

Under UK Case Law (Davis vs Swinwood, 2009)....you're at fault. This is based on the general principle that the other vehicle was established in the road, and therefore allowed to proceed. The indicator is a bit of a red herring - because it's not a clear signal of intent. The assumption here is solely yours that the other car was about to pull into the road you're joining from.

No split liability there. 100% your fault. Davis vs Swinwood 2009 confirms that a mis-leading signal is not negligence.

Now, that's a nice and easy one. Same circumstances, but involving a vehicle blocking your line of sight, and you hitting a motorcyclist that was over taking the vehicle blocking your line of sight? All sorts of case law there. Speed isn't negligence, so that doesn't matter (mostly because 'good luck proving it'). But what can matter is the shape of the junction, whether it was light or dark, relative visibility without the obstructing vehicle, the type of vehicle obstructing etc.

That is what you need to programme in. Now, stripping it down to the basics? Do Not Pull Onto The Carriageway If Your Way Is Not Clear is probably the easiest way.

That's always bugged me. I understand the reasoning behind it, but there needs to be a precedent to make people more aware of their signals. It's one of the worst things you encounter driving in the UK: people signalling when they don't need to, not signalling when they should, driving down motorways oblivious to the fact that their indicators are flashing. Part of the problem is self-cancelling indicators. They can make people lazy.


Self-driving cars -- programming morality @ 2018/12/06 13:19:15


Post by: Overread


 queen_annes_revenge wrote:


yes, but my point is that if I saw a person in the road, I would swerve to avoid them, instinctively knowing that while I could possibly harm someone else with my swerve, I definitely wont harm that person in the road, whereas the machine will not swerve because it MIGHT harm someone else, and therefore will probably kill the person. That isnt right.


But you're loading that statement.

First up, you're stating that you don't know if you're going to hurt anyone else by swerving to avoid that one person. That makes perfect sense for a human. A machine, in theory, should have 360-degree view and awareness. It should be able to see other cars around it, other people, other elements. It can make a choice based upon far more data, far quicker than the person, and without the emotional baggage and panic that sets in. Ergo it can choose to swerve and save that pedestrian, and the driver, and the two kids walking on the pavement beside the car, because it knows there's nothing on the other side of the road, so it knows the safe direction to swerve in. Whilst a human might only swerve to avoid one, and might well swerve off the road toward the edge and then hit the two kids.


In theory the machine has a higher chance of saving more lives - provided it is properly programmed and has a working system that can reliably identify people in the environment around it.

Plus, as noted earlier, self-driven cars could inter-communicate. So an incident for one car can cause others to react behind and in front of it. So your car is swerving, and at the very same time the car right behind you is hard braking too, whilst the car on the other side of the road is also hard braking. Now the cars have stopped and prevented a pileup.
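A very simplified sketch of the kind of choice being described, assuming the car already has a 360-degree picture of who is where (everything here is invented for illustration, not how any real system is programmed):

    # Simplified sketch: pick the evasive option that minimises expected harm,
    # given a 360-degree occupancy picture. Entirely illustrative.

    def choose_manoeuvre(options: dict) -> str:
        """options maps a manoeuvre name ('brake_straight', 'swerve_left',
        'swerve_right') to the number of people predicted to be in its path."""
        # Prefer the option with the fewest people at risk; tie-break by
        # preferring straight-line braking, which keeps the car most controllable.
        priority = {"brake_straight": 0, "swerve_left": 1, "swerve_right": 2}
        return min(options, key=lambda m: (options[m], priority.get(m, 99)))

    # Example: pedestrian ahead, clear oncoming lane, children on the nearside pavement.
    print(choose_manoeuvre({"brake_straight": 1, "swerve_left": 0, "swerve_right": 2}))
    # -> "swerve_left"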


Self-driving cars -- programming morality @ 2018/12/06 13:22:10


Post by: Kilkrazy


 Mad Doc Grotsnik wrote:
To drag it back on topic....

To answer the question, one must consider Car Insurance and case law.

Here's a scenario.

You're sitting at a T-Junction, waiting to join the main carriageway. You see another vehicle coming down the main carriageway, indicating to turn in. You pull out. They continue straight on, a crash ensues.

Who is at fault?

Under UK Case Law (Davis vs Swinwood, 2009)....you're at fault. This is based on the general principle that the other vehicle was established in the road, and therefore allowed to proceed. The indicator is a bit of a red herring - because it's not a clear signal of intent. The assumption here is solely yours that the other car was about to pull into the road you're joining from.

No split liability there. 100% your fault. Davis vs Swinwood 2009 confirms that a mis-leading signal is not negligence.

Now, that's a nice and easy one. Same circumstances, but involving a vehicle blocking your line of sight, and you hitting a motorcyclist that was over taking the vehicle blocking your line of sight? All sorts of case law there. Speed isn't negligence, so that doesn't matter (mostly because 'good luck proving it'). But what can matter is the shape of the junction, whether it was light or dark, relative visibility without the obstructing vehicle, the type of vehicle obstructing etc.

That is what you need to programme in. Now, stripping it down to the basics? Do Not Pull Onto The Carriageway If Your Way Is Not Clear is probably the easiest way.


Your examples are interesting, and very relevant to my daily experience, in which I see many vehicles whose drivers don't bother to indicate, or indicate incorrectly.

The self-driving car environment will be different however. Autonomous cars will be programmed to indicate at the correct interval. They will know where they are going by GPS, and not suddenly change their minds because they see a direction sign which had been obscured. They will be aware of and in communication with the nearby vehicles. They won't overtake or pull out in potentially dangerous circumstances.

Of course this is all down to programming and sensor engineering, and that is what has to be got right.

To some degree I think the argument about morality and the trolley problem is a red herring. It's not often that human drivers get themselves into the position of having to choose whether to run over the pregnant woman, the fat schoolchild or the Premier League footballer with a mother with dementia, or whatever. Autonomous cars will face that kind of situation a lot less.


Self-driving cars -- programming morality @ 2018/12/06 13:33:27


Post by: queen_annes_revenge


The morality issue I mentioned wasn't about the unthinkable choice. There weren't two definite bad outcomes, but one bad outcome and one possible bad outcome. What I was referring to was the OP stating that an autonomous vehicle wouldn't swerve; sometimes you have to swerve, and if the machine can't do that due to its programming, that is a mistake.


Automatically Appended Next Post:
 Overread wrote:
 queen_annes_revenge wrote:


yes, but my point is that if I saw a person in the road, I would swerve to avoid them, instinctively knowing that while I could possibly harm someone else with my swerve, I definitely wont harm that person in the road, whereas the machine will not swerve because it MIGHT harm someone else, and therefore will probably kill the person. That isnt right.


But you're loading that statement.

First up you're stating that you don't know if you're going to hurt any one else by swerving to avoid that one person. That makes perfect sense for a human. A machine, in theory, should have 360 view and awareness. It should be able to see other cars around it, other people, other elements. IT can make a choice based upon far more data far quicker than the person and without the emotional baggage and panic that sets in. Ergo it can choose to swerve and save that pedestrian and the driver and the two kids walking on the pavement beside the car because it knows there's nothing on the other side of hte road so it knows the safe direction to swerve in. Whilst a human might only swerve to avoid one and might well swerve out of the road toward the edge and then hit the two kids.


In theory the machine has a higher chance of saving more lives - provided it is properly programmed and has a working system that can reliably identify people in the environment around it.

Plus, as noted earlier, self-driven cars could inter-communicate. So an incident for one car can cause others to react behind and in front of it. So your car is swerving, and at the very same time the car right behind you is hard braking too, whilst the car on the other side of the road is also hard braking. Now the cars have stopped and prevented a pileup.


The OP stated an autonomous vehicle would not swerve due to the risk of rolling.


Self-driving cars -- programming morality @ 2018/12/06 13:38:45


Post by: Mad Doc Grotsnik


The problem remains Self Driving Cars vs Human Drivers.

You cannot possibly programme in the whole gamut of Human Stupidity.

The moron that decides to pull a U-turn just as you pass (had that happen to me). The goon that doesn't understand what a red traffic light means. People driving erratically in general.

What will also help is that, in theory, self-driving cars will be singularly incapable of breaching a given speed limit, or of pootling along well under it (20mph in a 40 zone, for instance).

What won't help is Kids running out into the middle of the road from between parked cars. Goon cyclists trying to weave in and out of traffic, hopping on and off the pavement as they see fit (this only applies to Goon Cyclists. I tar no group with one brush).

No sensor and no programming can possibly account for those sorts of things. There's just too much going on, and too much could happen.

In terms of GPS? That remains imperfect in itself. It follows roads no longer there. It's not always up to date on One Way systems, which in Cities seem to change on a regular basis to 'calm traffic', and certainly just inordinately increase travel time.


Self-driving cars -- programming morality @ 2018/12/06 14:00:17


Post by: queen_annes_revenge


However, there are times when it's necessary to drop speed, or go into a lower gear, or deal with a whole host of other variables that happen on roads, where I feel that a human would actually have quicker instinctive reactions than a machine processing information from all its sensors. Also, what happens when the computer systems go down or malfunction? Going back to my EOD robots: they cost about £1.2 million apiece, and are constantly going t*ts, needing resets etc. They only contain 3 or 4 processing units. God knows how many a car would need for all its functions.


Self-driving cars -- programming morality @ 2018/12/06 14:02:10


Post by: Kilkrazy


Self-driving cars won't position themselves jigger-jagger across the whole carriageway in a way that prevents cyclists from taking a consistent safe path.

Self-driving cars will respect school safety zones.

The GPS problem can be solved by proper updating and push notifications to the vehicles.


Self-driving cars -- programming morality @ 2018/12/06 19:43:06


Post by: Xenomancers


Overread has the right idea I think.

It's not a question of morality. Morality doesn't really come into play in these situations as it is. Human drivers try to protect themselves before a crash. All we need to do is make an AI system that does a better job than a human.

It follows traffic laws - slows down in uncertain situations or when it's raining - it stops at red lights - it doesn't drink and drive or drive tired - it doesn't get road rage.

You've just removed 99% of traffic accidents, making the system 99x more moral than human drivers. Effort put into thinking about the morality of machines is basically wasted thought. Think about something more important. Like what are people going to do for money when robots take all our jobs.


Self-driving cars -- programming morality @ 2018/12/06 20:11:51


Post by: Just Tony


 Xenomancers wrote:
Think about something more important. Like what are people going to do for money when robots take all our jobs.


Smash each other over the head for basic resources, as without paying customers companies wouldn't have any reason to produce goods with all these robots in the first place? It's not that difficult to understand. Material wealth drives everything on our planet, no matter how nobly the socialist professor you had tries to paint the world. Even socialist tenets are based on SOMEONE producing wealth to be shared, distributed, and utilized. Without that, we're headed for Mad Max territory. I'm game, as I will finally get to utilize two decades of military training to its fullest.


Self-driving cars -- programming morality @ 2018/12/06 20:20:28


Post by: Overread


Actually I'm pretty sure food production drives everything. So long as people have full bellies things will remain calm.


Also, fun fact: some car companies are bringing people back and kicking out robots. Robotic assembly can make huge savings, but once you move far enough past the designs the machines were made for, the retooling and rebuilding of the whole factory quickly eats up any savings made over employing human workers, who can be far more adaptive. People you can just give new schematics to, lose a few to early build errors, and then let them get on with it.

A machine you mostly have to hire skilled staff to rebuild from the ground up.



Of course there is a tipping point where machines become advanced enough to be easy to adapt; right now cost and technology are the limit there; one day it will just be the cost and then nothing.


Self-driving cars -- programming morality @ 2018/12/06 20:37:52


Post by: queen_annes_revenge


We just have to design a system that can do... Yup, it's that simple. Same thing they tell us every time we get a new e-database to make 'things easier' at our workplace. And anyone who works around robotics knows how temperamental automated systems can be. I think that nothing like this should be implemented until we can at least make a perfect autonomous system, and seeing as we're currently incapable of making a perfect semi-autonomous system, forging ahead with driverless cars is just dangerous.


Self-driving cars -- programming morality @ 2018/12/06 23:18:45


Post by: Peregrine


 queen_annes_revenge wrote:
We just have to design a system that can do... Yup, it's that simple. Same thing they tell us every time we get a new e database to make 'things easier' at our works. And anyone who works around robotics knows how temperamental automated systems can be. I think that nothing like this should be implemented until we at least make a perfect autonomous system, and seeing as were currently incapable of making a perfect semi autonomous system, forging ahead with driverless cars is just dangerous.


That's not how it works. Perfection is not the standard, "better than the human drivers they replace" is.


Self-driving cars -- programming morality @ 2018/12/07 00:39:01


Post by: Mario


queen_annes_revenge wrote:The morality issue I mentioned wasn't about the unthinkable choice. There weren't 2 definite bad outcomes, but one bad outcome and one possible bad outcome. The op stating that an autonomous vehicle wouldn't swerve was what I was referring to, saying sometimes you have to swerve, and if the machine can't do that due to its programming that is a mistake.

Spoiler:

Automatically Appended Next Post:
 Overread wrote:
 queen_annes_revenge wrote:


yes, but my point is that if I saw a person in the road, I would swerve to avoid them, instinctively knowing that while I could possibly harm someone else with my swerve, I definitely wont harm that person in the road, whereas the machine will not swerve because it MIGHT harm someone else, and therefore will probably kill the person. That isnt right.


But you're loading that statement.

First up you're stating that you don't know if you're going to hurt any one else by swerving to avoid that one person. That makes perfect sense for a human. A machine, in theory, should have 360 view and awareness. It should be able to see other cars around it, other people, other elements. IT can make a choice based upon far more data far quicker than the person and without the emotional baggage and panic that sets in. Ergo it can choose to swerve and save that pedestrian and the driver and the two kids walking on the pavement beside the car because it knows there's nothing on the other side of hte road so it knows the safe direction to swerve in. Whilst a human might only swerve to avoid one and might well swerve out of the road toward the edge and then hit the two kids.


In theory the machine has a higher chance of saving more lives - provided it is properly programmed and has a working system that can reliably identify people in the environment around it.

Plus, as noted earlier, self-driven cars could inter-communicate. So an incident for one car can cause others to react behind and in front of it. So your car is swerving, and at the very same time the car right behind you is hard braking too, whilst the car on the other side of the road is also hard braking. Now the cars have stopped and prevented a pileup.


The OP stated an autonomous vehicle would not swerve due to the risk of rolling.
In a situation where a human had to swerve, the autonomous vehicle would probably have access to much more useful data, much earlier, than a human driver, and instead of swerving abruptly it would just slow down and adjust its course more mildly, in advance of any problem a human could perceive. By the time a computer system of acceptable quality actually needs to swerve, a human driver would probably already be in an accident or dead. There are reports that humans are already angry at autonomous vehicles because they "drive like grannies" and are extra cautious. If I had to bet on who's the safer driver, I'd bet on the AI and not the human.
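A crude sketch of that "slow down early instead of swerving late" behaviour, using a simple time-to-collision check (the threshold and names are illustrative only, not any real controller):

    # Crude sketch: reduce speed whenever the time-to-collision with a tracked
    # object falls below a comfort threshold, long before an emergency swerve is needed.

    def target_speed(current_speed_mps: float, gap_m: float, closing_speed_mps: float,
                     comfort_ttc_s: float = 4.0) -> float:
        if closing_speed_mps <= 0:          # not closing on anything
            return current_speed_mps
        ttc = gap_m / closing_speed_mps     # seconds until contact at current speeds
        if ttc >= comfort_ttc_s:
            return current_speed_mps
        # Scale speed down in proportion to how far inside the comfort margin we are.
        return current_speed_mps * (ttc / comfort_ttc_s)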

queen_annes_revenge wrote:however there are times when its necessary to drop speed, or go into a lower gear, or a whole host of other variables that happen on roads, where I feel that a human would actually have quicker instinctive reactions than a machine processing information from all its sensors. also, what happens when the computer systems go down or malfunction? going back to my EOD robots, they cost about £1.2million a piece, and are constantly going t*ts, needing resets etc. they only contain 3 or 4 processing units. god knows how many a car would need for all its functions.
By that criterion we should also forbid humans from driving. Some of us drive drunk, some drive while exhausted, some drive over the speed limit, some drive while eating/drinking/texting and being generally distracted by who knows what. Doesn't that count as the human "system going down" or "malfunctioning"?

We kill people with cars all the time. How can we even be allowed to drive?

And your feeling that humans have quicker instinctive reactions is wrong. A few decades ago it might have been true, but we can't compete with specialised signal processing systems these days. Besides, you are also assuming that those quicker instinctive reactions are the correct reactions, rather than panic-induced errors.


Self-driving cars -- programming morality @ 2018/12/07 01:10:46


Post by: Grey Templar


 Peregrine wrote:
 queen_annes_revenge wrote:
We just have to design a system that can do... Yup, it's that simple. Same thing they tell us every time we get a new e database to make 'things easier' at our works. And anyone who works around robotics knows how temperamental automated systems can be. I think that nothing like this should be implemented until we at least make a perfect autonomous system, and seeing as were currently incapable of making a perfect semi autonomous system, forging ahead with driverless cars is just dangerous.


That's not how it works. Perfection is not the standard, "better than the human drivers they replace" is.


Maybe in your perfectly logical Peregrine world. But the reality is that autonomous cars will be held to a much higher standard: perfection will be expected and demanded of them. They will fail, of course, and we will abandon them once they fail that test. Just like the Prisoner's Dilemma, we will arrive at a suboptimal result for all parties involved and life will continue.


Self-driving cars -- programming morality @ 2018/12/07 07:58:06


Post by: queen_annes_revenge


Exactly. I've seen a lot of 'computers are safer than humans' in this thread, yet I haven't seen a lot of evidence to support that claim. Computer systems are constantly failing: subject to hacking, malware, malfunction, straight-up crashes, etc. And if computers really are safer, why is a human decision the last input required for drone strikes? Surely a computer can make a better decision?


Self-driving cars -- programming morality @ 2018/12/07 08:24:24


Post by: Peregrine


 queen_annes_revenge wrote:
Exactly. I've seen a lot of 'computers are safer than humans' in this thread. yet, I haven't seen a lot of evidence to support that fact. computer systems are constantly failing, subject to hackeing, malware, malfunction, straight up failing etc etc.


Automated vehicles are already on par with human drivers, and the technology is still new. Computer systems may fail, but it's not like humans are flawless either. We're constantly driving drunk, texting while driving, driving while too tired to focus well, driving too fast because we're running late, driving aggressively because dammit that guy isn't going to cut me off, etc. Automated vehicles don't have to be perfect, they just have to be better than the horrific slaughter caused by human drivers.

And if that is the case, why is a human decision the last input required for drone strikes. Surely a computer can make a better decision?


Because of moral reasons. Seriously, is it that hard to understand the difference between leaving a human as the last input when the question is "should this person be killed?", and not doing so when the requirement is maximum reaction speed on a physics problem of "how do I avoid hitting this potential hazard?"?


Self-driving cars -- programming morality @ 2018/12/07 08:40:41


Post by: Kilkrazy


 queen_annes_revenge wrote:
Exactly. I've seen a lot of 'computers are safer than humans' in this thread. yet, I haven't seen a lot of evidence to support that fact. computer systems are constantly failing, subject to hackeing, malware, malfunction, straight up failing etc etc. And if that is the case, why is a human decision the last input required for drone strikes. Surely a computer can make a better decision?


A driverless car controlled by a computer will be designed to "fail safe."

If you go on airliners, you are already trusting your life to computer systems which "fail safe".

Sometimes they don't, and a whole plane falls out of the sky. It hasn't stopped millions of people flying everywhere.
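In practical terms, "fail safe" here usually means dropping into a minimal-risk behaviour when something critical stops working. A rough sketch of the idea (subsystem names and thresholds are invented for illustration, not any real system):

    # Illustrative watchdog: fall back to a minimal-risk manoeuvre when a
    # critical subsystem stops reporting or reports a fault.

    import time

    CRITICAL = ("lidar", "camera", "braking")
    STALE_AFTER_S = 0.5

    def control_mode(last_heartbeat: dict, faults: set) -> str:
        """Pick the driving mode from subsystem heartbeat timestamps and fault flags."""
        now = time.time()
        for subsystem in CRITICAL:
            stale = now - last_heartbeat.get(subsystem, 0.0) > STALE_AFTER_S
            if stale or subsystem in faults:
                return "minimal_risk_manoeuvre"   # e.g. slow down, pull over, stop
        return "normal_driving"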


Self-driving cars -- programming morality @ 2018/12/07 08:48:31


Post by: queen_annes_revenge


Autonomous vehicles aren't going to stop humans being stupid.

https://www.digitaltrends.com/cars/self-driving-uber-crash-arizona/

If anything, they will probably induce more negligence like that in people who believe they don't have to take any precautions.

https://www.digitaltrends.com/opinion/self-driving-tesla-death-whos-to-blame/

https://www.digitaltrends.com/cars/tesla-driver-takes-nap-while-car-on-autopilot/

https://www.digitaltrends.com/cars/tesla-s-summon-under-trailer/

I sure as hell wouldn't drive my car under a trailer.

https://www.digitaltrends.com/cars/tesal-model-s-crash-nhtsa-investigation-fatal-crash/

And as a driver I would certainly notice a truck, even if it was white.


Self-driving cars -- programming morality @ 2018/12/07 08:55:31


Post by: Peregrine


 queen_annes_revenge wrote:
I sure as hell wouldnt drive my car under a trailer.


Congratulations, you're a great driver. The point you keep ignoring is that the standard is not perfection; it's being better on average than human drivers. A fallible automated system can still be better than humans on average even if it is worse than the best human drivers. For example, yeah, it might hit a trailer because of a flaw, but in exchange it's completely removing drunk driving from the picture. You'll note that the trailer accident happened at 1mph and caused minimal damage and no injuries. I will gladly accept a higher rate of that sort of accident if it means getting drunk drivers and their frequent fatal accidents off the road entirely.


Self-driving cars -- programming morality @ 2018/12/07 08:55:38


Post by: queen_annes_revenge


https://www.digitaltrends.com/cars/cops-chased-a-tesla-for-7-miles-while-the-driver-apparently-slept/



Automatically Appended Next Post:
So should you be allowed to travel in one while drunk?


Automatically Appended Next Post:
https://www.digitaltrends.com/features/will-high-res-radar-make-tomorrows-cars-safer/

You'll forgive me for not putting my wholehearted trust in these things, surely.


Self-driving cars -- programming morality @ 2018/12/07 09:00:37


Post by: Peregrine


Sigh. Posting individual accident reports is irrelevant; what matters is the accident rate.


Self-driving cars -- programming morality @ 2018/12/07 09:04:47


Post by: queen_annes_revenge


Tesla Autopilot:

'In one video, a car drifts out of its lane, and then swerves decisively into oncoming traffic, rather than away from it. Tesla’s Autosteer function monitors the car in front to orient itself. The Tesla driver, YouTube user RockStarTree, said he believed the car had lost sensor lock on the vehicle it was following, and mistakenly tried to “follow” an oncoming car when it came into sensor range.'

Sounds really safe.


Self-driving cars -- programming morality @ 2018/12/07 09:12:57


Post by: Peregrine


The plural of 'anecdote' is not 'data'.

Spamming links to descriptions of single incidents involving automated cars is not relevant; I could do the same with examples of drunk drivers causing accidents. What matters is injuries and fatalities per passenger-mile, and you're providing absolutely nothing on that.
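For what it's worth, the comparison being asked for is a rate, not a count of anecdotes. A tiny worked example of the arithmetic (the roughly 1.1 deaths per 100 million vehicle-miles figure is the commonly quoted US ballpark for human drivers; the automated-fleet numbers below are placeholders, not real data):

    # Deaths per 100 million miles, so the two can be compared on a common footing.
    # The human baseline (~1.1) is the widely quoted US ballpark; the AV inputs
    # below are placeholders, NOT real data.

    def deaths_per_100m_miles(deaths: int, miles: float) -> float:
        return deaths / (miles / 1e8)

    human_rate = 1.1
    av_rate = deaths_per_100m_miles(deaths=2, miles=400e6)   # hypothetical fleet
    print(av_rate, "vs", human_rate)   # 0.5 vs 1.1: the lower rate wins, anecdotes aside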


Self-driving cars -- programming morality @ 2018/12/07 09:17:05


Post by: queen_annes_revenge


Normally I'd agree. However, statistics aren't going to cut it for me in this case, when it's the safety of me and my child that's concerned.


Self-driving cars -- programming morality @ 2018/12/07 09:19:10


Post by: Peregrine


 queen_annes_revenge wrote:
normally I'd agree. however, statistics arent going to cut it for me in this case when its the safety of me and my child concerned.


Uh, what? That's exactly the point of statistics: statistically speaking which world is more dangerous to you and your child, one where flawed automated cars exist or one where drunk drivers exist?


Self-driving cars -- programming morality @ 2018/12/07 09:20:14


Post by: filbert


 queen_annes_revenge wrote:
normally I'd agree. however, statistics arent going to cut it for me in this case when its the safety of me and my child concerned.


This is the most fallacious argument I have read in this thread so far. The safety of you and your child is at stake every time you get in your car. Humans are deeply flawed creatures, prone to all sorts of errors and mistakes in a way that an AI program is not - this is not under dispute. There is simply no way that a bunch of haphazard, random organisms travelling at varying speeds and with extremely variable levels of skill and experience can ever be safer than a programmable intelligence.

The 'arguments' you are deploying are typical of the irrational fear of AI that certain members of the public hold, a fear completely contrary to any evidence presented. It is one of the reasons self-driving cars will take a while to become the norm.


Self-driving cars -- programming morality @ 2018/12/07 09:21:51


Post by: queen_annes_revenge


I'd trust my own judgement in avoiding drunk/stupid drivers over these computer systems.


Self-driving cars -- programming morality @ 2018/12/07 09:22:29


Post by: filbert


 queen_annes_revenge wrote:
I'd take my own judgement in avoiding drunk/stupid drivers over these computer systems.


Then your judgement is poor.


Self-driving cars -- programming morality @ 2018/12/07 09:23:21


Post by: Peregrine


 queen_annes_revenge wrote:
I'd take my own judgement in avoiding drunk/stupid drivers over these computer systems.


Then you are foolish and ignorant. No amount of "judgement" is going to save you if a drunk driver swerves across the centerline and hits you head-on before you can possibly react. You're reacting emotionally and putting yourself in more danger instead of looking at the relevant statistics and making the best choice based on the evidence.


Self-driving cars -- programming morality @ 2018/12/07 09:31:05


Post by: queen_annes_revenge


 filbert wrote:
 queen_annes_revenge wrote:
normally I'd agree. however, statistics arent going to cut it for me in this case when its the safety of me and my child concerned.


This is the most fallacious argument I have read in this thread so far. The safety of you and your child is at stake every time you get in your car. Humans are deeply flawed creatures prone to all sorts of errors and mistakes in the same manner that an AI program is not - this is not under dispute. There is simply no way that a bunch of haphazard, random organisms traveling at varying speeds and extremely variable levels of skill and experience can ever be safer than a programmable intelligence.

The 'arguments' you are deploying are typical of the irrational fear of AI that certain members of the public hold that is completely contrary to any evidence presented. It is one of the reasons self-driving cars will take a while to become the norm.


On what grounds? Sure, your point could arguably stand (although on a personal level I will never trust a computer over my own judgement), if the circumstances weren't clouded by the fact that there are still going to be humans involved at all stages of the process, and as the links above have shown, human stupidity clearly still impacts the performance of the autonomous systems.

The statistics on driverless cars really aren't conclusive evidence of their absolute safety as of yet. I've yet to find any detailed information on studies, other than 700,000 autonomous miles without incident: no details of types of road travelled, speeds, times of day, etc. If someone can point this out I will happily read it.


Also, those statistics come from the company that produced the car, which poses its own questions concerning bias.


Self-driving cars -- programming morality @ 2018/12/07 09:34:22


Post by: Peregrine


 queen_annes_revenge wrote:
(although on a personal level I will never trust a computer over my own judgement)


This is irrational fear.

The statistics on driverless cars really arent conclusive evidence as to their absolute safety as of yet. I've yet to find any detailed information on studies, other than 700,000 autonomous miles without incident. no details of types of road travelled, speeds, times of day. etc. if someone can point this out I will happily read it.


That's evidence right there, and automated systems are only going to get better. Meanwhile drunk driving, speeding, etc, all the flaws of human drivers, are not going to improve. It's inevitable that automated systems eventually get better than humans as the bugs are worked out; the only question is when, and whether we're already there.


Self-driving cars -- programming morality @ 2018/12/07 09:37:23


Post by: queen_annes_revenge


It's not evidence without context and the details of the variables I mentioned. For all we know it could've driven around a perimeter loop of an unpopulated airfield clocking those miles.


Automatically Appended Next Post:
I don't think mistrust (not fear) of computers is irrational in any sense.


Self-driving cars -- programming morality @ 2018/12/07 10:37:30


Post by: Kilkrazy


It's a well-known bias dating back to the days of Frankenstein. Earlier than that even, the story of the Golem is a similar cautionary tale.

The fact you somehow believe that companies like Toyota are falsifying their stats in order to get driverless cars into widespread use, where they would instantly, inevitably and very publicly be shown to have been developed unsafely on falsified stats, is prima facie evidence of your irrational mindset on this topic.



Self-driving cars -- programming morality @ 2018/12/07 10:52:45


Post by: queen_annes_revenge


I never said anything about falsifying stats, but the fact is that any statistics published by a group with any sort of incentive invested in the results are going to come under scrutiny.
As I said, I'm happy to read other studies from impartial sources, I just can't find any.


Self-driving cars -- programming morality @ 2018/12/07 10:59:54


Post by: Peregrine


 queen_annes_revenge wrote:
I never said anything about falsifying stats, but the fact is that any statistics published by a group with any sort of incentive invested in the results is going to come under scrutiny.
As I said, I'm happy to read other studies from impartial sources, I just cant find any.


Toyota is an impartial source. A car manufacturer has no incentive to care about whether their vehicles are automated or human-driven; they're still selling the same car. They have an incentive to develop automated vehicles as long as those automated vehicles continue to look like a superior product that will inevitably take over from human drivers, but if the results show that human drivers are better, then the car manufacturer has no incentive to hide that research. Their self-interested action there is to discontinue all work on automated vehicles, publish the research, and use it as a weapon against any competitor who puts automation in their vehicles.


Automatically Appended Next Post:
In fact, given the cost of developing automated vehicles, the last thing a car manufacturer wants to do is hide research that proves that automation is worse than humans. Proof that automation will fail means they can stop dumping R&D money into it, and that proof would be the best possible outcome from the study. If a car manufacturer is publishing research that automated vehicles are safer than humans it's because they've failed to find a way to avoid making that investment and are forced to acknowledge that AI is the future.


Self-driving cars -- programming morality @ 2018/12/07 11:14:54


Post by: queen_annes_revenge


Toyota wasn't the company in question. Google was.

But that's debatable regardless. You're operating on the assumption that car manufacturers don't want to advance and wish the status quo to remain the same. I'm fairly sure most car manufacturers are realists and wouldn't want to risk not getting involved in the driverless car bubble, leaving any rewards to their competitors.


Automatically Appended Next Post:
Here we go: a scientific study.

https://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/RAND_TestingAV_HowManyMiles.pdf

'The results show that autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads. Only crash performance seems possible to assess through statistical comparisons of this kind, but this also may take years. Moreover, as autonomous vehicles improve, it will require many millions of miles of driving to statistically verify changes in their performance.'
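One way to see where figures like that come from is the statistical "rule of three" for rare events: if a fleet drives N miles with zero fatalities, the 95% upper bound on its fatality rate is roughly 3/N, so just matching the human baseline needs N of roughly 3 divided by that baseline. A back-of-envelope sketch (the ~1.1 deaths per 100 million miles baseline is the usual US ballpark; everything else is illustrative):

    # Rule-of-three sketch: miles needed to show, with ~95% confidence and zero
    # observed fatalities, that a fleet is no worse than the human baseline.

    human_fatality_rate = 1.1 / 1e8        # ~1.1 deaths per 100 million miles (US ballpark)

    miles_needed = 3 / human_fatality_rate # upper 95% bound when zero events are seen
    print(f"{miles_needed:.2e} miles")     # ~2.7e8, i.e. roughly 270 million miles

    # Showing the fleet is *better* by a given margin, or measuring injuries rather
    # than fatalities, pushes the requirement into the billions, as the study says.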


Automatically Appended Next Post:
https://www.csmonitor.com/Business/In-Gear/2016/1014/How-safe-is-Tesla-Autopilot-A-look-at-the-statistics


Self-driving cars -- programming morality @ 2018/12/07 11:39:30


Post by: Peregrine


 queen_annes_revenge wrote:
you're operating on the assumption that car manufacturers don't want to advance and wish the status quo to remain the same.


It's a correct assumption. R&D costs money; if automated cars aren't going to work, then the self-interested action for every car manufacturer is to prove it ASAP and stop sinking money into a loss.

I'm fairly sure most car manufacturers are realists and wouldnt want to risk not getting involved in the driverless car bubble, and leave any rewards to their competitors.


Those rewards only exist if they are proven to be safe enough to be allowed into full-scale use. Proving that they aren't safe enough pops the bubble. If a manufacturer is not publishing proof and popping the bubble it's because no such proof exists and they believe that automated vehicles are going to be in mass use.

existing fleets


Here's the key premise of the study. Why limit it to existing fleets? There are roughly 3 trillion miles driven per year in the US; handing even a small percentage of that over to automated vehicles (preferably in a ramping-up approach with safety checkpoints for each increase) gets you hundreds of millions of miles very quickly, at a low risk level (assuming automated vehicles demonstrate sufficient safety parity in smaller-scale tests to even try). This is not an impossible obstacle.
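As a back-of-envelope check on that (the 3 trillion figure is the one quoted above; the share is an arbitrary example, not a proposal):

    # Even a small slice of annual US driving covers the "hundreds of millions
    # of miles" threshold quickly. The share below is arbitrary.

    us_annual_miles = 3e12          # figure quoted above
    automated_share = 0.0005        # 0.05% of traffic, purely an example

    print(automated_share * us_annual_miles)   # 1.5e9 miles per year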


Self-driving cars -- programming morality @ 2018/12/07 11:40:02


Post by: Overread


Queen, it seems that your issue has actually shifted from the concept of driverless cars to the question of whether the technology is ready today.

That's easy - NO - driverless cars are NOT ready today. They might not be ready tomorrow, or the day after. However, the point is that there will come a time when their development has advanced so far that they ARE ready. In fact, not only will they be ready, they will be better than their human counterparts.

The other important point is that they are likely to be ready within a timeframe in which most of us here will see them on the roads.

Certainly I think we'll see them take over urban and inter-urban travel. Countryside and off-road driving might take longer or never come, but I can certainly see it happening for many road-only networks. Heck, considering how many accidents are the result of driver error, just having all cars on highways automated could make a huge saving in lives alone. It might also deal with traffic jams at peak times, as the AI could adjust its speed to prevent clogging up choke points along the way. Heck, consider the amount of time saved at roundabouts when each car knows where the others are heading and doesn't have to wait or guess (by the direction of the car) where a car might be heading.



Heck, that's one area where signals are not often used and not always clear, and where drivers can very easily get confused about which lane they should be in to come off a roundabout. Roundabouts are complicated, especially at peak hours and especially when people are already rushed and want in and out of the experience as fast as possible.




I agree driverless cars are not ready today; they are doing well, but there are clear errors and problems with them right now. That said, I can see it happening. I can see AI becoming good enough, and better than human drivers, quite readily.


Self-driving cars -- programming morality @ 2018/12/07 12:02:19


Post by: queen_annes_revenge


I'm covering many bases. Personally, I hate the idea of autonomous vehicles. Sentimentality, possibly, as a safe and competent driver of many different vehicles.

But, as a rational thinker, I do try to separate my personal views from the arguments I'm making here, and my main point is that we as humans are not even close to being ready to say with any certainty that automation/machines are cleverer/safer/more suitable than humans. I believe that is a flawed hypothesis, and it is quite insulting to the human race in general for certain people to write us off so casually against computers.

The future, well, I believe that is yet to be seen. The leaps in technology over even as small a scale as the past 20 years suggest that some of the points here could absolutely become a reality at some point, but I don't think they should be introduced to the roads for public use yet.
In theory these ideas are great, but we all know that the transfer of those theories to reality rarely runs smoothly. There are billions of variables and unconsidered real-world elements that will affect that process.

The study above shows that attempts to prove the safety of autonomous vehicles using the current statistics can be refuted.

This quote from Scientific American sums up my feelings pretty well.

'It is true that self-driving cars don’t get tired, angry, frustrated or drunk. But neither can they yet react to uncertain and ambiguous situations with the same skill or anticipation of an attentive human driver, which suggests that perhaps the two still need to work together. Nor do purely automated vehicles possess the foresight to avoid potential peril: They largely drive from moment to moment, rather than thinking ahead to possible events literally down the road.'


Self-driving cars -- programming morality @ 2018/12/07 12:06:58


Post by: Kilkrazy


Waymo driverless cars racked up their first 10 million miles earlier this year.

https://www.digitaltrends.com/cars/waymo-receives-first-permit-to-test-fully-driverless-cars-in-california/

They are now rolling out a fleet of 500 taxis in Arizona. Assuming each taxi does 20 miles a day, that's roughly another 3.65 million trial miles a year.

It won't take long to rack up 100s of millions of miles of live driving.


Self-driving cars -- programming morality @ 2018/12/07 12:51:20


Post by: Iron_Captain


 Peregrine wrote:
 queen_annes_revenge wrote:
if they will never be programmed to swerve, then deaths will be inevitable.


And if you allow humans to continue to drive cars deaths will be inevitable. It's a simple matter of "is X > Y". Compare the deaths per year from human drivers to the deaths per year from automated cars, whichever kills fewer people is the correct choice regardless of the details of AI programming or whatever.
That is terrible utilitarian reasoning. Generally fewer deaths are preferable, yes, but not all deaths are equal. Morality is not a simple formula.

 Peregrine wrote:
the thing is there are things that are, while not universally accepted, and disputable, but generally accepted. Male expendability https://en.wikipedia.org/wiki/Male_expendability
Garbage idea. It shouldn't be accepted, and should receive nothing but contempt. Life is life, gender is irrelevant.

I am sorry. I am afraid society does not agree with you on that. Gender is not irrelevant, it is a pretty big deal.

And the idea that adult males are more expendable than women or children is found worldwide in all societies. It is likely this stems from the traditional roles that adult males perform in human societies (hunter, warrior etc.), which are fraught with risk. Therefore we have been conditioned to expect adult males to be willing to accept risk and death, and see their deaths as less shocking than those of more vulnerable members of society such as women and especially children.

 queen_annes_revenge wrote:
I really dont understand why more people arent putting up resistance to the increased forcing of automation on society.

Because automation has led to a lot of benefits and conveniences and so far not that many drawbacks. It hasn't really been forced, and it won't need to be forced as long as the benefits continue to outweigh the drawbacks. You seem rather suspicious of technology, but the fact is that people just want it. They want convenience and safety, and self-driving cars promise to be able to deliver both.

 Overread wrote:
Queen it seems that your issue has actually shifted from the concept of driverless cars to the question of if the technology is ready today.

That's easy - NO - driverless cars are NOT ready today. They might not be ready tomorrow, or the day after. However the point is that there will come a time when the development of them has advanced so far that they ARE ready. In fact that not only are they ready but that they are better than their human counterparts.

The other important fact is that the day they are ready is likely within a span of time that most of us here will see them on the roads.

I agree, but aside from technical capability and morality, cost is also a big issue that is keeping back self-driving cars, and the issue that may in fact prove to be the most stubborn one. Currently, the cost of a self-driving car is far higher than that of a normal one. If self-driving cars are to replace normal ones, they will need to be the same price or even better, cheaper.


Self-driving cars -- programming morality @ 2018/12/07 13:17:51


Post by: Overread


Queen - one thought is that you keep swinging back to the idea of the ideal driver. An experienced, confident, calm, rested driver. The problem is that most people are not driving at their best. They drive whilst late; whilst being nagged at by the passenger; whilst half awake; whilst listening to the audiobook/radio; in strong sunlight after rain without their sunglasses; in winter when its dark after only having driven in the light through summer etc...

I've not even touched on illegal things like mobile phones, drinking, drugs and bad car repair, or even driving without a licence. Even a perfectly normal person driving within the law can be a significant danger.

Another aspect is experience; reactions and reaction times hinge heavily upon personal experience. A person who has never experienced aquaplaning, or black ice, or a car rushing head-on at them on the wrong side of the road - these are all experiences. Some we might all have, others we might never have. The point is that until you experience it you don't know how to react or what reactions you can even make.


Look at sports: in sports you practice dodging and catching and tackling and such over and over again. The army does likewise with exercises. Practice over and over, repetition upon repetition, in order to get the muscles and mind used to certain situations. Driving, we don't do that - we practice mostly ideal situations, but we don't practice how to react in a crash or if someone walks out in front of you. And we don't refresh ourselves every week nor every month - in fact driving is a "one test pass" situation where you won't get retrained EVER unless you are forced to. That in itself means a huge chance of building up bad habits: driving a little faster; being lazier on corners and drifting into the other lane; using the phone whilst driving etc...




Humans are a hugely variable lot and many people only learn to drive well enough to get to work and back alive. We don't push ourselves to be the best drivers ever. As an ambulance driver you've likely had way more training and refresher training, which puts you in a much more confident position than the average person. You already know how to react and drive in a way that other people simply never will.


The idea of the self-driving car is that, at present, it's a child learning. It's in the early stages of being a beginner. In theory it will never forget, never get lazy or take short cuts. So in theory the more experience it gets the better it gets, and eventually it can reach a point where its experiences outnumber those of the average human. That's before we even add on top the huge number of sensors that it can take information from. The car can read everything about its own condition, the road condition and what is around it, and it can process that info at a speed no human could.


Automatically Appended Next Post:

I agree, but aside from technical capability and morality, cost is also a big issue that is keeping back self-driving cars, and the issue that may in fact prove to be the most stubborn one. Currently, the cost of a self-driving car is far higher than that of a normal one. If self-driving cars are to replace normal ones, they will need to be the same price or even better, cheaper.


Agreed, plus I think that it's technology that needs to roll out in one big go onto a whole network to really work, rather than in bits. So I can see that being a huge barrier, especially with a continually increasing population. What we might see is that public services - trains, buses - shift over to self-driving first. That in itself might cost, but it could make a huge saving in the long run (train drivers* get quite a good wage in the UK and if they were replaced by a machine that would certainly overcome its costs pretty quickly). Plus it might be a means to expand public sector transport in a big way, which could lower car pressure - especially if those public services became free to use or so cheap that they were cheaper than the car, especially as car prices go up and fuels are not going down.


*In fact it wouldn't surprise me if union pressure has kept that potential avenue closed outside of niche train groups - such as the Underground - with the threat of strike action over jobs lost.


Self-driving cars -- programming morality @ 2018/12/07 13:30:30


Post by: queen_annes_revenge


I agree. But I think there are inherent dangers in trying to solve those issues by giving the human less need for concentration. These cars are clearly still going to require a driver paying attention, and having fewer functions to perform seems to impair attentiveness rather than giving them more chance to concentrate, as shown by the above examples of people watching films, napping etc. with their autopilot on. People are going to be on their phones, doing all sorts of things in their cars, because they think their autopiloted car will keep them from harm and that they don't need to pay attention to the road.


Self-driving cars -- programming morality @ 2018/12/07 13:36:02


Post by: Overread


That's the idea - the car does the driving totally. I agree that it cannot function any other way. Either the autopilot is ON and the driver is not driving, or the autopilot is OFF and the driver is driving.

It cannot work halfway on the main roads. I fully agree that it would be dangerous to even try. Even a well-trained person being fully attentive is going to find it hard to concentrate and not be distracted when they have nothing to do. An untrained general person with no overhead pressure to pay attention is going to be distracted the whole time, even if just because, in doing nothing, they won't even really know what to focus on.


That's a given, and the human driver should only take over at specific moments. Take the film I, Robot with Will Smith. On the main highways of the city the car does the driving without any need for intervention; in fact the driver taking over in that situation is treated as exceptional and almost reckless, and the only time he "needs" to take over is when he's being attacked.

Yet when it came to the countryside he was doing the driving and the car wasn't. Two clear lines when the car is in auto and manual mode.


Self-driving cars -- programming morality @ 2018/12/07 15:11:22


Post by: Just Tony


 Kilkrazy wrote:
It's a well-known bias dating back to the days of Frankenstein. Earlier than that even, the story of the Golem is a similar precautionary tale.

The fact you somehow believe that companies like Toyota are falsifying their stats in order to get driverless cars into widespread use where they would instantly, inevitably and very publicly be shown to have been developed unsafely by falsified stats, is prima facie evidence of your irrational mindset on this topic.



I know. Kind of like how no car company would falsify stats on safety, quality issues, or fuel efficiency performance to further their sales.


Oh, wait...


Self-driving cars -- programming morality @ 2018/12/07 15:30:51


Post by: Kilkrazy


Volkswagen were caught.

They will be caught even quicker if their stats on self-driving car safety are falsified.


Self-driving cars -- programming morality @ 2018/12/07 15:54:11


Post by: queen_annes_revenge


I was going to mention vw but the debate had moved on.


Self-driving cars -- programming morality @ 2018/12/07 16:43:18


Post by: Just Tony


 Kilkrazy wrote:
Volkswagen were caught.

They will be caught even quicker if their stats on self-driving car safety are falsified.


Being caught invalidates the attempt? How many companies do you think padded results that DIDN'T get caught - on a daily basis, I might add?


No one is denying the benefit of being shuttled around like pharaohs with robots feeding us Doritos while they haul us around; what we deny is the competency of current AI programming to accomplish this wholesale, safely and without unforeseen catastrophic programming misinterpretations.


Self-driving cars -- programming morality @ 2018/12/07 17:01:45


Post by: Kilkrazy


Not many companies running safety critical applications, because it's too easy for the truth to out.

For example, do you think there is lots of false data proving that airplane jet engines don't blow up all the time, but actually they do?

Of course not, because it's a stupid idea.

In the same way, it would be useless to falsify data to hide the fact that a driverless car runs into pedestrians 1 time in 10, because it would very soon be obvious that it did.


Self-driving cars -- programming morality @ 2018/12/07 17:57:12


Post by: Mad Doc Grotsnik


In terms of mis-trust and liability and who may or may not be hiding or fudging results?

Insurance. That's what's stopping them.

Consider that, at least in the UK, we can still consider 'proximate cause'. Now in many instances it's a sod to prove. An example there is having a crash because another vehicle refused to yield, or did something stupid, but wasn't itself caught up in the accident.

But then, there are concertina accidents. That's where there are multiple rear-end shunts. The car at the front is least likely to be responsible. And the more cars you add, the more liability gets muddied. If it's a three-car shunt, dead easy. Ask the front driver how many bumps they felt. One? It's the car at the rear (Newton's Cradle). Two? Probably the car in the middle: it hit you first, creating a hazard, and then the third car hit it.
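Purely as a toy illustration of that decision rule (a sketch of the reasoning above, not how any insurer actually codes liability):

def likely_at_fault(bumps_felt_by_front_driver: int) -> str:
    # Three-car concertina shunt, counting impacts felt by the front car.
    if bumps_felt_by_front_driver == 1:
        # Single impact: the rear car shunted the middle car into the front car
        # in one chain - the Newton's Cradle case.
        return "rear car"
    if bumps_felt_by_front_driver == 2:
        # Two impacts: the middle car hit the front car first, then was pushed
        # into it again when the rear car arrived.
        return "middle car (primarily)"
    return "unclear - needs further investigation"

print(likely_at_fault(1))  # rear car
print(likely_at_fault(2))  # middle car (primarily)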

Now, introduce the software for a self-driving car. If that does something stupid, and it turns out (and man, that will out) there was a known flaw in the programming? That's a lot of liability lying at the dev's door. Especially if anybody is injured.

And a last fun fact? Someone killed in an accident is quite cheap as insurance goes. Single payout. Someone left with lasting injuries? Guess who's on the hook for the rest of that person's life?


Self-driving cars -- programming morality @ 2018/12/07 18:57:03


Post by: Grey Templar


Aye. And I don't think the car companies have figured that out yet, but once they do...


Self-driving cars -- programming morality @ 2018/12/07 21:25:06


Post by: queen_annes_revenge


Another question posed by the hypothesis of future highways featuring only autonomous cars: what about those who drive for pleasure? Who enjoy the tinkering and maintenance of a classic vehicle, or the flashy mods of an old low rider or hot rod? Will they be forbidden from the roads? There will have to be some sort of transition phase, but once that's complete will no human-controlled vehicles be allowed at all? That itself takes me back to emergency response drivers. I'm going to throw it out there and say that an automated system will never be able to drive to roadcraft standard. So there will have to still be manually controlled vehicles on the roads.


Self-driving cars -- programming morality @ 2018/12/07 21:54:22


Post by: Kilkrazy


I've considered that myself. I think what will happen is that such people will go to private car rallies, such as Beaulieu Festival of Speed, where they will be able to drive together in unregulated conditions, separate from the public highway.

The emergency vehicles are another thing. I don't see a problem with human piloted ambulances. They will work fine in conditions where automatic cars are guaranteed to get out of the way in good time. There aren't many accidents involving ambulances anyway.

Police pursuit driving is another thing, of course, but it's hard to see criminals making getaways in autonomous cars which simply stop when ordered to by the police.

Of course, part of the logic of autonomous vehicles is that there will be a lot fewer vehicles on the roads at all, which in itself makes everything easier.


Self-driving cars -- programming morality @ 2018/12/07 21:56:34


Post by: queen_annes_revenge


Possibly. I can't see the Vatos in East LA being happy taking their low riders to private circuits though.


Self-driving cars -- programming morality @ 2018/12/07 22:07:28


Post by: Overread


Chances are, once the technology is accepted there will be loads of ways to personalise your car to your own standards. It might mean that people can't do it at home, but then again cars are going that way anyway, with a lot of built-in computers being very hard to deal with even for many local mechanics.

And yes those who want to drive it themselves might be restricted to country roads or rally events. It's a loss, but at the same time I think if it made the roads a lot safer and more efficient many would accept that loss.



One real barrier might be price. After insurance, the next hurdle is getting everyone on board with the idea to make it work. The problem there is that a new self-driving car is not going to be cheap; even if its price is marked down and it's mass produced, it's still going to be a lot of money for people, even more so if their old cars are devaluing fast because they are not worth reselling.
We are already seeing that with cars right now as governments put more pressure on people to adapt to electric. It's not that people "don't want to" as much as that many just can't afford the cost of a brand new car. There are plenty of second-hand dealers who sell cars dirt cheap and often get them back again when they break, swap over and sell another cheap car, etc.

That's why I think we'll see buses and the like take up the tech first. Heck, long-distance lorry drivers might well take up the tech too, with the lorry driver ending up in more of an overseer position for the delivery, possibly driving at any depot that hasn't got a self-driving guide system set up yet.


Self-driving cars -- programming morality @ 2018/12/07 22:13:22


Post by: Peregrine


 queen_annes_revenge wrote:
Another question posed by the hypothesis of future highways featuring only autonomous cars: what about those who drive for pleasure? Who enjoy the tinkering and maintenance of a classic vehicle, or the flashy mods of an old low rider or hot rod? Will they be forbidden from the roads? There will have to be some sort of transition phase, but once that's complete will no human-controlled vehicles be allowed at all?


Most likely, yes they would be banned from public roads. Private tracks would still exist, and would probably grow in number to accommodate the hobby. It's just like any other dangerous hobby: it doesn't matter if you really enjoy guns, you still have to take them to a gun range and can't go around shooting stuff in the middle of a major city.

That itself takes me back to emergency response drivers. I'm going to throw it out there and say that an automated system will never be able to drive to roadcraft standard. So there will have to still be manually controlled vehicles on the roads.


First of all, there's a lot of human ego in that claim that they'll never be able to drive to your standard, and not a lot of evidence to support it. Second, those few vehicles aren't really relevant. Even if you allow a small number of highly trained professional drivers (perhaps treated like airline pilots are now, in terms of training and experience requirements) to drive emergency vehicles, you're still talking about effectively total automation. And it's fine to leave that small exception in place if it's necessary; professional emergency drivers aren't the accident risk we are worrying about.


Self-driving cars -- programming morality @ 2018/12/07 22:13:48


Post by: Kilkrazy


Waymo's trial is a taxi service.


Self-driving cars -- programming morality @ 2018/12/08 08:14:33


Post by: queen_annes_revenge


Well, it remains to be seen. I believe that there are some things that a computer will never be able to do, or even learn, if we follow that folly of allowing them to learn. You praise computers like some sort of super deity, yet the human brain is so complex that even after all these years of scientific study we know basically nothing about it, despite, ironically, the creation and use of computers.


Automatically Appended Next Post:
You will never be able to code a computer with human instinct.


Self-driving cars -- programming morality @ 2018/12/08 08:32:57


Post by: Peregrine


 queen_annes_revenge wrote:
You will never be able to code a computer with human instinct.


But why do we need to? In the context of automated vehicles "instinct" is just a crude approximation of things like solving the physics problem of how to avoid a collision. The automated vehicle doesn't need instinct because it just makes the correct choice.
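To make "the physics problem" concrete, here is a minimal sketch of the kind of calculation involved. The deceleration and reaction-time figures are illustrative assumptions, not measured values:

# Stopping distance = reaction distance + braking distance: d = t_r * v + v^2 / (2 * a)
def stopping_distance_m(speed_ms: float, reaction_time_s: float, decel_ms2: float = 7.0) -> float:
    return speed_ms * reaction_time_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 13.4  # roughly 30 mph, in m/s
human = stopping_distance_m(speed, reaction_time_s=1.5)    # commonly assumed human reaction time
machine = stopping_distance_m(speed, reaction_time_s=0.2)  # assumed sensing/processing latency
print(round(human, 1), round(machine, 1))  # ~32.9 m vs ~15.5 m at the same speed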


Self-driving cars -- programming morality @ 2018/12/08 08:47:47


Post by: queen_annes_revenge


No it's not. A blue-light driver processes cues from directly in front of the vehicle all the way to the visual limit of the road; the autonomous car looks at the vehicles directly in front, behind and to the sides with sensors, and moves where it 'thinks' is right, or follows. It can't position itself in the road based on forward observation and judgement of the environment.


Automatically Appended Next Post:
Computers making the right choice, because that happens often... Your borderline pathological worship of something we only invented 70 years ago is very concerning. I'd go so far as to say that of any technical human invention, the computer is probably the most prone to flaws and malfunction.


Self-driving cars -- programming morality @ 2018/12/08 09:20:37


Post by: Peregrine


 queen_annes_revenge wrote:
No it's not. A blue light driver processes cues from the direct front of the vehicle, to the very visual limit of the road, the autonomous car looks at the vehicle directly in front, rear and sides with sensors and moves where it 'thinks' is right or follows. It can't position itself in the road based on forward observation and judgement of the environment.


Uh, what? An autonomous car can look anywhere you feel like putting a sensor. In fact, the autonomous vehicle's sensors are going to be far superior to human vision because it can process everything in full 360° coverage, while the human driver is limited to their head's range of motion, obstructions from the vehicle's frame, and the inability to look in multiple directions simultaneously. I have no idea why you would suggest that it can't scan a wider range or position itself in the road based on observation, other than willful ignorance of the subject.

Computers making the right choice, because that happens often... Your borderline pathological worship of something we only invented 70 years ago is very concerning. I'd go so far as to say that of any technical human invention, the computer is probably the most prone to flaws and malfunction.


Computers do often make the right choice. The fact that you're writing this on a computer is proof of it. The only pathological worship here is your bizarre faith in the superiority of humans against all evidence otherwise.


Self-driving cars -- programming morality @ 2018/12/08 10:38:04


Post by: queen_annes_revenge


Yes, they can sense, but they can't see, can they? Not like a human eye can see, process and understand. They can only sense what they are programmed to sense. They are incapable of understanding. 'Bizarre faith in the superiority of humans'?
I think that faith might be well placed. I'm going to take that over blindly surrendering everything to machines, created by humans in the first place.
You're not going to convince me that computers are superior to humans.
They may be able to beat a chess grandmaster, but that's not the real world.



Automatically Appended Next Post:
And yes I'm writing this on a computer, but I don't think even you can deny that if I said the same words to you face to face, the communication would be superior.


Self-driving cars -- programming morality @ 2018/12/08 11:09:31


Post by: Peregrine


 queen_annes_revenge wrote:
Yes, they can sense, but they can't see can they. Not like a human eye can see, process and understand. They can only sense what they are programmed to sense.


And? They can be programmed to recognize what they need to recognize. They don't need to "understand" it in some philosophical way, they just need to process the data correctly and take the appropriate action. Current data processing systems need work to get an acceptable rate of hazard recognition, but the automated system has significant inherent advantages there. It doesn't look down at its phone and take its eyes off the road, it isn't limited to looking in a single direction, its view isn't obstructed by the body of the vehicle, etc. Once image recognition technology progresses a bit more the automated system is going to be far, far better than any human driver at the task of "detect a pedestrian hazard in time to avoid a collision and take appropriate action".

Also, you're going to have a hard time convincing anyone of the value of human eyes when there's a long list of accidents where the driver "just didn't see them" and killed or injured someone. You're defending a badly flawed system.

I'm going to take that over blindly surrendering everything to machines, created by humans in the first place.
You're not going to convince me that computers are superior to humans.


Like I said, blind faith in human ego. You have no evidence, just a stubborn insistence that humans are magically better because you want it to be true.

And nobody is asking you to blindly surrender everything to machines, that's what testing is for. Automated vehicles will undergo extensive testing to prove that they can be at least as safe (on a per-passenger-mile basis) as human drivers before they are approved for full-scale use. But once they have passed this test you are no longer justified in objecting to them replacing human drivers. Continued defiance out of blind faith and ego means that the blood of every additional dead victim is on your hands.

And yes I'm writing this on a computer, but I don't think even you can deny that if I said the same words to you face to face, the communication would be superior.


You're missing the point. The fact that you are successfully writing this instead of staring at a lump of dead silicon is a demonstration of a lot of machines working near-flawlessly, to the point that any error is an unexpected outrage instead of the default state of things.


Self-driving cars -- programming morality @ 2018/12/08 11:13:44


Post by: Overread


Queen, it's true that computers still need to interpret what their sensors see of the world, that there is still some further study required there, and that it's not perfect.

Thing is humans aren't perfect either! Consider all the people who have seen ghosts, illusions, optical oddities and such - all without the aid of drugs or medical conditions.

The human brain can be tricked, heck a vast amount of magic tricks are all about tricking the brain into seeing and not seeing things.



You also state that a computer reacts based upon its volume of data, and that is true, but that's the very same way a human reacts. A human with more training and more experience has more incidents to draw conclusions from. You, as an ambulance driver, are ahead of the curve in having both additional training and additional driving at high speeds through more complicated driving situations. So you can readily process quite a lot more possibilities because you've got a wider range of experiences to draw from.
Joe in the regular car next to you doesn't have those additional layers, and Jane passed her driving test after 3 weeks of intensive training, has only been on the road for 4 weeks through summer, and has never seen proper night, rain, snow or ice driving.

Yet you'll all be on the road at the same time, and even with nothing going wrong it's possible for one of you to make a wrong judgement call at a turning or intersection. Jane might well make a wrong call on a car clearly moving to turn but not using its signal. You might not be stumped by that at all; you've learned "car body and position" language to a higher level.




A computer in a car has the bonus that once it's learned something, it's learned, and every other single car with that same computer also learns from that experience. It won't forget because it hasn't driven at night for 6 months; it won't have to go through a near crash because car 101 already did it and has passed that experience to the rest of the fleet.

Basically the technology rests upon thresholds of understanding for the machine; once it can more readily and accurately identify a person and another car, it becomes more trivial to give it wider-ranging sensors that look further and further ahead. Plus, if every other car is automated there's nothing to stop them speaking directly to each other. A sudden braking a mile away due to a problem can be passed right back along both sides of the road; all the cars are then slowing down long before any of them can see the incident (human or machine). That's a level of communication and safety that can likely only be matched right now by lorry drivers and their radios (and even then I don't know if they can all easily talk with each other across different companies and such).
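As a sketch of what that car-to-car warning could look like in its simplest possible form (the message fields, road IDs and one-mile radius here are made up for illustration; real vehicle-to-vehicle standards are far more involved):

from dataclasses import dataclass

@dataclass
class HazardMsg:
    road_id: str
    position_m: float   # distance along the road where the hard braking started
    severity: str       # e.g. "hard_braking"

def should_slow_down(msg: HazardMsg, my_road_id: str, my_position_m: float,
                     warning_radius_m: float = 1600.0) -> bool:
    # A car on the same road, within ~a mile behind the incident, starts slowing
    # before it could possibly see the problem itself.
    return msg.road_id == my_road_id and 0 < msg.position_m - my_position_m < warning_radius_m

msg = HazardMsg(road_id="A14-eastbound", position_m=5000.0, severity="hard_braking")
print(should_slow_down(msg, "A14-eastbound", my_position_m=4200.0))  # True: 800 m behind
print(should_slow_down(msg, "A14-eastbound", my_position_m=5200.0))  # False: already past it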



This isn't computer worship; it's accepting that a machine can achieve something better and safer than a human can. Or at least that the machine has the potential to do so with sufficient advances in the technology.


Self-driving cars -- programming morality @ 2018/12/08 12:02:08


Post by: queen_annes_revenge


Not from you, no, but Peregrine seems well and truly in the grip of the Omnissiah.
And like I keep saying, it's still just an idea at this point. The robot I use is programmed not to conflict with its own frame, yet that constantly fails, and that's made by one of the world's most successful aerospace companies. So as for saying all cars will talk to each other, well, that working successfully seems like a distant pipe dream. I'd be interested in knowing what experience the people blindly advocating complete autonomy have with actually working with automation, and why this makes them believe what they do. Because if the only reason is that computers are pretty good at doing things, that's not really the case when it comes to the world away from a desk.


Self-driving cars -- programming morality @ 2018/12/08 12:20:58


Post by: Overread


I think Peri is just looking (at least*) 10 years into the future whilst you're looking at the now/past a bit, which is partly why your viewpoints are so varied.

He's thinking of the future when the machine is ready; whilst you're looking at today when its clearly not.




*and likely more, honestly. I'd wager we'll see automated trains before cars; at least if the companies can get past the driver unions and such.


Self-driving cars -- programming morality @ 2018/12/08 15:28:06


Post by: queen_annes_revenge


My viewpoints aren't varied, I'm just trying to cover all the questions posed by this issue.
And that in itself is another question... What do we do with all those people who depend on driving for employment?

And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Self-driving cars -- programming morality @ 2018/12/08 15:47:45


Post by: Overread


Well, it depends. I can see that in services like trains and buses they'd likely retain the driver to oversee things and be there for public relations. They'd likely shift to being more of a porter in some ways, and a guard, than a driver.

Meanwhile for things like lorries, again, I could well see them keeping staff on as drivers to oversee things. Plus a lot of lorries drive on country roads and deliver to out-of-the-way places, so I could see drivers being kept on for road networks that might be off-grid, as it were, for self-driving, or for parking and manoeuvring. Plus they are there as a human element to make sure deliveries are made and the cargo is kept in check.

So for all those I can see the role of the driver changing, but the actual job remaining part of the setup.


Taxis might suffer the most, since that is purely driving and once you've got A to B programmed in, the taxi should get you there on its own. However, there are a good number of alternative destinations that might not work on a sat nav. Or the person might want to go to a location rather than an address. Again we might see the driver remain but the role shift and change.

So yes, there might well be job losses, the same as there are with many forms of automation. The messy period is where we transfer from a workforce that is heavily human to one that is heavily mechanised.


Of course, as I cited earlier somewhere else, some factories are putting humans back and taking machines out, because changing designs significantly requires rebuilding most of the factory. That's fine when you first build it, but it's a huge cost when you've already invested in it all once over. In part that shows that many robotic assembly machines are both very specific and limited in adaptability, and also require human operators to make significant design changes.



Self-driving cars -- programming morality @ 2018/12/08 17:52:36


Post by: Peregrine


 queen_annes_revenge wrote:
And that in itself is another question.. What do we do with all those people who depend on driving for employment?


What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Their blood is on your hands.


Self-driving cars -- programming morality @ 2018/12/08 20:31:38


Post by: BlaxicanX


 Grey Templar wrote:

Indeed. Self-driving tech is going to be sacrificed on the alter of litigation. Someone/everyone will get sued and the different corporations will realize that self-driving vehicles are a terrible idea from that standpoint, so they'll ditch the idea like a hot potato. It'll be just another fad that comes and goes in the mid/late 21st century.
The amount of money corporations will make from simply firing 99% of their drivers for the next thousand years will easily cover any amount of money lost to litigation.

These lines of thinking about insurance are incredibly dumb. In my city alone something like three million people A DAY ride public transportation via buses and trains. If someone within that three million gets hurt, are the passengers held liable? Oh gak, no they're not. What about boat cruises? What about planes? Hell, what about cars? If the brakes in my car suddenly stop working and it comes out later that a defect in the car's design was responsible for my brakes seizing, am I held liable for a resulting accident, or will the company that made my car be held liable for selling me defective equipment? Did Toyota (or maybe it was Honda, w/e) say "whelp I guess we're not making cars anymore lol" after that mass recall they had to do around a decade ago due to thousands of their cars having a fatal safety defect? No, of course not. Car companies will never give up billions of dollars in profit because they lost at absolute most a hundred million in litigations, and every public transportation system in the world has dedicated insurance companies with clearly defined statutes on who's liable when a train runs someone over or a plane's autopilot fails.

You people are asking questions about a topic that's existed for at least 40 years.

- - -

Anyway, anyone who is against automated driving is a psychopath and I hope you will never have to experience losing a loved one to a DUI driver or a boomer who fell asleep at the wheel.


Self-driving cars -- programming morality @ 2018/12/08 21:36:49


Post by: queen_annes_revenge


 Peregrine wrote:
 queen_annes_revenge wrote:
And that in itself is another question.. What do we do with all those people who depend on driving for employment?


What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Their blood is on your hands.


Yawn.


Automatically Appended Next Post:
 BlaxicanX wrote:


Anyway, anyone who is against automated driving is a psychopath and I hope you will never have to experience losing a loved one to a DUI driver or a boomer who fell asleep at the wheel.


What a ridiculous statement. You and Peregrine must've been sharing straws.


Self-driving cars -- programming morality @ 2018/12/09 05:30:55


Post by: Crazyterran


What happened to the people that raised all the horses or made carriages?

How many people die because of Truck drivers falling asleep at the wheel, or accidents from someone in a hurry making a dumb decision?

Hell, Canada had a pretty good example of what happens with the Humboldt Broncos incident. Saying that Humans will react better than computers seems kind of silly when we see plenty of moronic deaths on a daily basis in traffic.

Of course, your entire argument seems to boil down to "Computers bad" and "You aren't taking my driving away!".





Self-driving cars -- programming morality @ 2018/12/09 07:14:53


Post by: queen_annes_revenge


If you want to straw man it, then yeah I guess, but then my response would be, 'humans are unsafe, computers amazing, if you drive you're a murderer'.


Self-driving cars -- programming morality @ 2018/12/09 07:47:36


Post by: Ensis Ferrae


What I keep seeing here, queen, is exactly what people are saying: you are taking a very luddite position that kinda boils down to a strawman in itself.

What many/most of the other users are saying here is: this is not a currently viable technology, however work is being done to make it viable, and when that happens, we will be ABLE to do away with incidents like the Humboldt Broncos and thousands of other, unrelated, DUI incidents. Feth, kids will be able to get on their school bus safely, because donkey caves in a hurry won't be flying past the flashing red lights. That isn't now... that is the future.


Self-driving cars -- programming morality @ 2018/12/09 08:14:23


Post by: queen_annes_revenge


I've stated multiple times in this thread that my rational position is that there are too many variables, unanswered questions, and flaws in the technology for us to be blindly accepting of it at this point in time (independent of, though related to, my personal opinion of not liking the idea of autonomous vehicles).
Other points I've tried to raise are just related concerns.


Self-driving cars -- programming morality @ 2018/12/09 09:16:53


Post by: Peregrine


 queen_annes_revenge wrote:
I've stated multiple times in this thread that my rational position is that there are too many variables, unanswered questions, and flaws in the technology for us to be blindly accepting of it at this point in time (independent of, though related to, my personal opinion of not liking the idea of autonomous vehicles).
Other points I've tried to raise are just related concerns.


Nobody is arguing for blind acceptance, that's what testing and study of safety records is for. Automated cars are not going to be in full-scale use until they have passed the testing stage. But we should expect them to pass eventually, and your opposition to life-saving technology out of blind faith and ego is morally unacceptable.


Self-driving cars -- programming morality @ 2018/12/09 12:09:10


Post by: Overread


Lets not forget how fast technology is advancing these days!

Go back 20 years and the idea of building a robot that could walk and move on its own was pretty high-end, dreamy future stuff. Now Boston Dynamics have a free-standing robot that can walk, run, jump and even jump up obstacles on one leg!

So yes, self-driving cars can't do it now, they can't do it today; but they are getting close, and in 10 or 20 years the technology on all fronts will have moved on a lot. Heck, if they can ever crack functional, affordable and reliable quantum computers, that would lead to a vast revolution in computing power which on its own could give machines an infinitely more powerful ability to process vast volumes of data (e.g. visual data for a self-driving car).


Self-driving cars -- programming morality @ 2018/12/09 16:19:44


Post by: queen_annes_revenge


 Peregrine wrote:
 queen_annes_revenge wrote:
I've stated multiple times in this thread that my rational position is that there are too many variables, unanswered questions, and flaws in the technology for us to be blindly accepting of it at this point in time (independent of, though related to, my personal opinion of not liking the idea of autonomous vehicles).
Other points I've tried to raise are just related concerns.


Nobody is arguing for blind acceptance, that's what testing and study of safety records is for. Automated cars are not going to be in full-scale use until they have passed the testing stage. But we should expect them to pass eventually, and your opposition to life-saving technology out of blind faith and ego is morally unacceptable.


You have no right to label me morally unacceptable when your only argument is that because I don't accept that computers are safer than humans, that somehow makes me a murderer. I think that straw man has been well and truly established.


Self-driving cars -- programming morality @ 2018/12/09 17:16:43


Post by: Just Tony


Peregrine wrote:What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

queen_annes_revenge wrote:And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Except that without employed people the companies will have nobody to purchase their robot made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's human condition.

Peregrine wrote:Their blood is on your hands.


BlaxicanX wrote:Anyway, anyone who is against automated driving is a psychopath and I hope you will never have to experience losing a loved one to a DUI driver or a boomer who fell asleep at the wheel.


Hyperbole and sensationalism NEVER sell your point unless it's a prog argument like this. Had someone tried to sensationalize that... controversial reproductive rights issue... you two would be the FIRST to lash out at them for sensationalizing.



And to address BlaxicanX directly: I've seen FAR more accidents, including fatal accidents, caused by empty-headed millennials who can't keep their noses out of Facebook on their phones in traffic than I've seen caused by geriatric narcoleptic "boomers". Your bigotry is ill-placed, and makes you look like a complete idiot.


Self-driving cars -- programming morality @ 2018/12/09 17:38:30


Post by: Peregrine


 Just Tony wrote:
Except that without employed people the companies will have nobody to purchase their robot made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's human condition.


I don't think you understand how the economy works. Capitalism requires customers, but every business is out for its own interests not the collective good of the economy. As we have seen over and over again, if a company sees a way they can take advantage of automation to cut a bunch of employees they won't hesitate to do so. Every company will be saying "someone will keep people employed, I don't need to worry about it" even as the overall system collapses, because failure to do so means dying even faster than the average as the competition (which is cutting useless workers and embracing automation) undercuts their prices and takes all of their sales. It's absurd to suggest that companies are going to voluntarily pay more to keep redundant employees around so that they can maybe get some of that salary back in the form of sales, or that the same CEOs who will cut and run at the first sign of socialism taking their wealth will voluntarily pay those extra salaries out of pure altruism towards the good of the economy.

And that whole "take the money and run" plan depends on the state allowing wealth to leave, and the world not collectively saying " those guys" and erasing Bora Bora from the map now that all of the troublemaker CEOs are collected in one neat easily-bombed package. After all, why beat each other over the head for basic resources while the people who caused the problem live in luxury when you can nationalize all of their wealth and beat them over the head?

Hyperbole and sensationalism NEVER sell your point unless it's a prog argument like this. Had someone tried to sensationalize that... controversial reproductive rights issue... you two would be the FIRST to lash out at them for sensationalizing.


No, I'd be the first to lash out at them for being factually wrong. I don't really care about sensationalizing when there are plenty of better reasons to attack someone.


Automatically Appended Next Post:
 queen_annes_revenge wrote:
You have no right to label me morally unacceptable when your only argument is that because I don't accept that computers are safer than humans, that somehow makes me a murderer. I think that straw man has been well and truly established.


Watch me. Your beliefs are morally unacceptable.

It doesn't make you a murderer, but it makes you responsible for those additional deaths. Your anti-computer beliefs are not based on evidence, they're based on emotion and ego. And you are opposed to life-saving technology because of that emotional response and inability to accept facts. Every day that people like you delay the adoption of life-saving technology beyond the point where it has proven to be effective means more people will die for no good reason. And tell me, what is morally acceptable about that?


Self-driving cars -- programming morality @ 2018/12/09 19:22:34


Post by: queen_annes_revenge


No it doesn't. And no they aren't. It isn't proven. Something isn't a fact because you say so.


Self-driving cars -- programming morality @ 2018/12/09 19:42:16


Post by: Xenomancers


 queen_annes_revenge wrote:
No it doesn't. And no they aren't. It isn't proven. Something isn't a fact because you say so.

What is there to prove? He is saying that in the event that the technology meets the standards of the engineers (a standard that will be MUCH higher than a human driver), not adopting it is immoral. It's fine for you to say the tech will never get there, but really - what gives you the idea that computers can't drive a car better than humans? Humans are pretty terrible drivers. Just about every day there is an accident on my way to work. Those accidents could have been easily avoided if everyone was just paying attention. COMPUTERS ARE ALWAYS paying attention.


Self-driving cars -- programming morality @ 2018/12/09 20:21:15


Post by: queen_annes_revenge


Are they? Is there any actual evidence to support this in a practical, real-world scenario? Sure, on a small scale we know sensors work, and that individual systems can be made that are pretty good at doing certain things, but they are usually in specific closed environments, in niche roles, not interacting with the infinite different scenarios that may or may not present themselves in the public highway environment.


Self-driving cars -- programming morality @ 2018/12/09 21:23:12


Post by: Gitzbitah


https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115

Well, here's some evidence that 94% of car accidents are caused by human error. If we can get machines to cause fewer accidents than that I think we're good.

And here's a very small sample from the self driving cars- where again, the humans are the problem.

http://fortune.com/2018/08/29/self-driving-car-accidents/

Conclusion- self driving cars are safer than humans. Please remember how low a bar that is.


Self-driving cars -- programming morality @ 2018/12/09 21:31:27


Post by: queen_annes_revenge


Well the first statistic is irrelevant really. It doesn't prove the safety of autonomous systems.

And the bottom, well yeah of course they were, probably doing dumb stuff like the links I posted on Friday. And occurrences like that will increase massively the more you automate the system.

And like the other link I posted on Friday, Princeton studies predict that to obtain relevant safety data to allow a fair comparison with non autonomous vehicles will take 500 years.


Self-driving cars -- programming morality @ 2018/12/09 21:40:29


Post by: Overread


 queen_annes_revenge wrote:
Are they? Is there any actual evidence to support this in a practical, real world scenario?


Autopilots on aircraft do pretty well. Sure, they disengage for landing and take-off, but my limited understanding is that a large amount of the A to B is done via AP today for many commercial flights - though others here might have a more detailed understanding.


Self-driving cars -- programming morality @ 2018/12/10 00:05:44


Post by: Vulcan


 Just Tony wrote:
Peregrine wrote:What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

queen_annes_revenge wrote:And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Except that without employed people the companies will have nobody to purchase their robot made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's human condition.


Bad news for you, without employed people the companies will have nobody to purchase their robot-made product no matter what economic system you're under, leading to the same end result. Of course, if the CEO flees to Bora Bora his factories can be nationalized and started up manufacturing useful things for the people rather than let them all die pointlessly.


Self-driving cars -- programming morality @ 2018/12/10 00:28:21


Post by: Gitzbitah


 queen_annes_revenge wrote:
Well the first statistic is irrelevant really. It doesn't prove the safety of autonomous systems.

And the bottom, well yeah of course they were, probably doing dumb stuff like the links I posted on Friday. And occurrences like that will increase massively the more you automate the system.

And like the other link I posted on Friday, Princeton studies predict that to obtain relevant safety data to allow a fair comparison with non autonomous vehicles will take 500 years.


Given the timeline of the Princeton data, that seems like an unrealistic standard. By that measurement, planes, cruise ships and cars are not proven safe. Indeed, the transcontinental steamer won't be OK to analyze for another 300 years.

The 94% statistic definitely doesn't prove that automated vehicles are safe- but doesn't it indicate that humans are not? We get sloppy with routine tasks, we have bad days, we drive impaired, use cellphones, and routinely violate traffic laws intended to keep us safe.

I'd love to see a city-wide test set up, where human-driven cars were not used for a year. Ideally, we'd do one in a city with a similar population and in the same state as another city, then compare the two to see if our current-generation self-driving cars are safer than us. That should yield a reasonable standard to assess.

The great thing about computers is that they can be patched- every time one screws up, once its reported the problem can be fixed. They will become progressively safer over time. Humans.... won't.

I went back and looked at your example articles. All of them are about autopilot, or shared responsibility cars, which I'd say suffer from the worst of both worlds. Now you have a human, who is still nominally in charge, but a complacent one because someone else is supposed to be safely driving. So you've replaced the human with a computer, but then turned around and given override capability and ultimate responsibility to a human rendered inferior by their distracted, complacent state.

At the risk of the no true scotsman fallacy, I'd say that a car intended to be driven by humans, regardless of the sophistication of its digital enhancements, is not a self driving car. I think of self driving cars as vehicles in which I am a passenger, ideally where the driver has no ability to intervene or take control of the car under routine function.


Self-driving cars -- programming morality @ 2018/12/10 00:34:23


Post by: Just Tony


 Vulcan wrote:
 Just Tony wrote:
Peregrine wrote:What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

queen_annes_revenge wrote:And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Except that without employed people the companies will have nobody to purchase their robot made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's human condition.


Bad news for you, without employed people the companies will have nobody to purchase their robot-made product no matter what economic system you're under, leading to the same end result. Of course, if the CEO flees to Bora Bora his factories can be nationalized and started up manufacturing useful things for the people rather than let them all die pointlessly.


No, because the people who make their money off of products wouldn't let it get there in the first place. Remember, I work with automation all day every day, it's part of the deal in the machine shop. EVERY station that got automated still has the operator there.


Self-driving cars -- programming morality @ 2018/12/10 00:56:41


Post by: Grey Templar


 Peregrine wrote:

It doesn't make you a murderer, but it makes you responsible for those additional deaths.


Being responsible for someone's death is at best Manslaughter if not murder. So yeah, you are literally calling everybody who disagrees with you a murderer. That is both unclassy and a violation of rule 1.


Self-driving cars -- programming morality @ 2018/12/10 00:59:47


Post by: Vulcan


 Just Tony wrote:
 Vulcan wrote:
 Just Tony wrote:
Peregrine wrote:What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

queen_annes_revenge wrote:And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Except that without employed people the companies will have nobody to purchase their robot made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's human condition.


Bad news for you, without employed people the companies will have nobody to purchase their robot-made product no matter what economic system you're under, leading to the same end result. Of course, if the CEO flees to Bora Bora his factories can be nationalized and started up manufacturing useful things for the people rather than let them all die pointlessly.


No, because the people who make their money off of products wouldn't let it get there in the first place. Remember, I work with automation all day every day, it's part of the deal in the machine shop. EVERY station that got automated still has the operator there.


YOUR business may be that smart. I don't expect too many others to prioritize long-term thinking over this quarter's profits; thus far the vast majority does not.


Self-driving cars -- programming morality @ 2018/12/10 03:46:08


Post by: Ensis Ferrae


 queen_annes_revenge wrote:
Well the first statistic is irrelevant really. It doesn't prove the safety of autonomous systems.

And the bottom, well yeah of course they were, probably doing dumb stuff like the links I posted on Friday. And occurrences like that will increase massively the more you automate the system.

And like the other link I posted on Friday, Princeton studies predict that to obtain relevant safety data to allow a fair comparison with non autonomous vehicles will take 500 years.


"I need evidence and statistics"

- "here you go"

"that's not evidence"


And round and round we go.


Self-driving cars -- programming morality @ 2018/12/10 08:41:45


Post by: Peregrine


 Grey Templar wrote:
Being responsible for someone's death is at best Manslaughter if not murder. So yeah, you are literally calling everybody who disagrees with you a murderer. That is both unclassy and a violation of rule 1.


Sorry if the truth hurts, but that doesn't make the responsibility go away. If you advocate against the adoption of proven life-saving technology out of stubborn ego and religious faith in human superiority then you get a share of responsibility, along with everyone else who advocated alongside you, for every additional person killed as a result of delaying that adoption. We wouldn't absolve you of responsibility if you were arguing in favor of legalizing drunk driving, so why shouldn't you get the same level of blame for arguing against automated vehicles?

 Just Tony wrote:
No, because the people who make their money off of products wouldn't let it get there in the first place.


That makes no sense. Set aside your faith in capitalism and need to prove "the left" wrong and look at it from the point of view of the company. They have two choices:

1) Pay an employee $30k/year to do a redundant job instead of buying a robot, so that the employee can maybe spend some of that $30k on buying the company's products.

or

2) Keep 100% of the $30k and cut the useless human.

In no sensible world does a company pick the first option as the self-interested way to maximize its profits.

Remember, I work with automation all day every day, it's part of the deal in the machine shop. EVERY station that got automated still has the operator there.


Counter-argument: I work with automation and we've cut things down to a handful of techs/engineers setting up new production runs, and some low-skill labor (most of it obtained through a temp agency) that exists to carry baskets of material from one machine to the next and has explicit instructions not to ever attempt to set up or troubleshoot a machine or do anything other than press the "start" button. If a material handling robot becomes cheaper than a minimum-wage employee do you honestly think that these people are still going to be employed? no. They're out the door as soon as the company can find a way to replace them.

And I strongly suspect I'm working with automation at a much higher level than you are, and have a lot better understanding of its strengths and limitations. It may not be replacing every single worker in every factory yet, but if you're counting on having that job in 10 years you're making a serious mistake.


Automatically Appended Next Post:
 queen_annes_revenge wrote:
And like the other link I posted on Friday, Princeton studies predict that to obtain relevant safety data to allow a fair comparison with non autonomous vehicles will take 500 years.


That's given the premise of only using the existing fleet of automated vehicles and testing to the highest possible standard of certainty. The 500-year timeline is easily shortened by increasing the number of test vehicles, and even the study you quoted talks about realistic test plans on the ~5 year scale.
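As a rough illustration of why that timeline scales with fleet size, here is a back-of-the-envelope Python sketch. The fatality rate, event count, and mileage figures are assumptions picked purely for illustration, not numbers from the study being discussed: the required test miles are fixed, so the calendar time falls in proportion to how many vehicles are logging them.

# Back-of-the-envelope sketch: how fleet size affects the calendar time needed
# to accumulate a target number of test miles. All figures are assumptions
# for illustration, not data from the study referenced above.

HUMAN_FATALITY_RATE = 1.2e-8          # assumed: ~1.2 deaths per 100 million miles
TARGET_FATAL_EVENTS = 30              # assumed: events needed for a meaningful comparison
MILES_PER_VEHICLE_PER_YEAR = 15_000   # assumed average annual mileage per test vehicle

# Miles the fleet must log to expect TARGET_FATAL_EVENTS at the human rate.
required_miles = TARGET_FATAL_EVENTS / HUMAN_FATALITY_RATE

for fleet_size in (100, 10_000, 1_000_000):
    years = required_miles / (fleet_size * MILES_PER_VEHICLE_PER_YEAR)
    print(f"fleet of {fleet_size:>9,}: ~{years:,.0f} years of driving needed")

Under these made-up numbers a fleet of a hundred cars needs well over a thousand years, while a fleet of a million cars needs only a couple of months, which is the whole point about scaling up the test fleet.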


Automatically Appended Next Post:
 Overread wrote:
Auto Pilots on aircraft do pretty well. Sure they disengage for landing and take off, but my limited understanding is that a large amount of A to B is done via AP today for many commercial flights - though others here might have a more detailed understanding.


Pilot here, it's a complicated answer. Autopilots are very reliable and, given a properly equipped airplane and airport, can fly the plane all the way to the ground and (IIRC) even taxi it to the gate. But it's not the best comparison with automated ground vehicles because the existing air traffic control system does a very good job of separating aircraft into clearly defined routes that maintain minimum safe distances from all other aircraft. And the routes to fly are clearly defined between GPS coordinates and/or radio beacons, removing the problem of having to detect where the road is and resolve any discrepancies with the map database. And when you look at automated vehicles the actual control of the car once you've figured out where you want it to go, the closest equivalent to aircraft autopilots, is the easy part. The hard problem is where the car has to make judgement calls about things like whether a shape in its vision system is a pedestrian or not, and whether it is walking into the road or standing next to it. And that just doesn't have an aircraft equivalent.

Where the aircraft comparison is very relevant is in how the adoption process is going to go with cars. Once autopilots were demonstrated to be effective in a particular area of flight they were aggressively adopted, even if they weren't ready for use in other phases. And it's now at the point where flying on autopilot is required by federal law in some situations (poor weather, above certain altitudes, etc). Did anyone protest against this and insist that, contrary to the evidence, human pilots are safer? Hell no, at least not enough to stand in the way of progress. Once automated vehicles pass similar safety milestones they are going to be the standard, and eventually mandatory. And the people opposing them will be viewed as irrational luddites and not considered favorably by history.


Self-driving cars -- programming morality @ 2018/12/10 10:30:05


Post by: Blackie


 Peregrine wrote:
 Grey Templar wrote:
Being responsible for someone's death is at best Manslaughter if not murder. So yeah, you are literally calling everybody who disagrees with you a murderer. That is both unclassy and a violation of rule 1.


Sorry if the truth hurts, but that doesn't make the responsibility go away.


I agree. And I would also add that in your country some reckless and/or drunk drivers can get 20+ year sentences for killing people in road accidents. Sounds pretty close to a sentence for murder.


Self-driving cars -- programming morality @ 2018/12/10 12:07:21


Post by: queen_annes_revenge


 Ensis Ferrae wrote:
 queen_annes_revenge wrote:
Well the first statistic is irrelevant really. It doesn't prove the safety of autonomous systems.

And the bottom, well yeah of course they were, probably doing dumb stuff like the links I posted on Friday. And occurrences like that will increase massively the more you automate the system.

And like the other link I posted on Friday, Princeton studies predict that to obtain relevant safety data to allow a fair comparison with non autonomous vehicles will take 500 years.


"I need evidence and statistics"

- "here you go"

"that's not evidence"


And round and round we go.


That could be considered a common occurrence in many debates, and does not invalidate my point in any way.


Automatically Appended Next Post:
Getting dangerously political here. Although your apparent leanings would fit with your fanatical worship of automation. And once automation has removed everyone's jobs, there should be a basic mandatory income and redistribution of wealth...


Automatically Appended Next Post:
 Blackie wrote:
 Peregrine wrote:
 Grey Templar wrote:
Being responsible for someone's death is at best Manslaughter if not murder. So yeah, you are literally calling everybody who disagrees with you a murderer. That is both unclassy and a violation of rule 1.


Sorry if the truth hurts, but that doesn't make the responsibility go away.


I agree. And I would also add that in your country some reckless and/or drunk drivers can get 20+ year sentences for killing people in road accidents. Sounds pretty close to a sentence for murder.


Regardless, the point does not stand: saying someone will be responsible for deaths simply because they oppose a system which you believe will cause fewer deaths is not a valid argument, and it borders on an ad hominem.


Self-driving cars -- programming morality @ 2018/12/10 17:32:36


Post by: Just Tony


Vulcan wrote:
 Just Tony wrote:
 Vulcan wrote:
 Just Tony wrote:
Peregrine wrote:What happens with any field that becomes obsolete as technology advances? Seems like a good argument for socialism, but solutions to unemployment are rather off-topic from here.

And I pray that in 10 years time this will have been seen as a terrible idea and canned, with the research capabilities going to more productive endeavours. I can only hope.


Except that without employed people the companies will have nobody to purchase their robot-made product, which will dry up resource supply AND that batch of other people's money that your socialist views depend on so desperately. No CEO is going to wake up and decide to give his money away for the greater good. If it came down to it, they'd take the entirety of their fortune and retire in Bora Bora, staring at caramel-colored tiddies while all the socialists are forced to beat each other over the head for basic resources. We have historical examples of this happening. This isn't deluded fantasy, it's the human condition.


Bad news for you, without employed people the companies will have nobody to purchase their robot-made product no matter what economic system you're under, leading to the same end result. Of course, if the CEO flees to Bora Bora his factories can be nationalized and started up manufacturing useful things for the people rather than let them all die pointlessly.


No, because the people who make their money off of products wouldn't let it get there in the first place. Remember, I work with automation all day every day, it's part of the deal in the machine shop. EVERY station that got automated still has the operator there.


YOUR business may be that smart. I don't expect too many others to prioritize long-term thinking over this quarter's profits; thus far the vast majority does not.


Herein lies the fallacy. EVERY business looks at things from the long term: everything from figuring out parts distribution for products that were put out 30 years ago right down to tracking performance of their products out in the field so that they keep their brand as strong as possible, to allow for continued patronage from the public. Now the company that made Fidget Spinners? Not so much. Every other company that makes a product past the fad section of the market? They ALL look that far down the road. Except tech, which tries to bury its own product on a yearly basis to get the same customers to repurchase what they already have. They're like GW in a way.

Peregrine wrote:
 Just Tony wrote:
No, because the people who make their money off of products wouldn't let it get there in the first place.


That makes no sense. Set aside your faith in capitalism and need to prove "the left" wrong and look at it from the point of view of the company. They have two choices:

1) Pay an employee $30k/year to do a redundant job instead of buying a robot, so that the employee can maybe spend some of that $30k on buying the company's products.

or

2) Keep 100% of the $30k and cut the useless human.

In no sensible world does a company pick the first option as the self-interested way to maximize its profits.


Look again at what I pointed out about the long run vs. short-term profits. Musk's company made the short-term profit decision and is now scaling BACK its automation. I'm sure you have a nice globalist socialist explanation for that.

Peregrine wrote:
Remember, I work with automation all day every day, it's part of the deal in the machine shop. EVERY station that got automated still has the operator there.


Counter-argument: I work with automation and we've cut things down to a handful of techs/engineers setting up new production runs, and some low-skill labor (most of it obtained through a temp agency) that exists to carry baskets of material from one machine to the next and has explicit instructions not to ever attempt to set up or troubleshoot a machine or do anything other than press the "start" button. If a material handling robot becomes cheaper than a minimum-wage employee do you honestly think that these people are still going to be employed? no. They're out the door as soon as the company can find a way to replace them.

And I strongly suspect I'm working with automation at a much higher level than you are, and have a lot better understanding of its strengths and limitations. It may not be replacing every single worker in every factory yet, but if you're counting on having that job in 10 years you're making a serious mistake.


Not only am I counting on having my job in 10 years, I'm counting on making MORE at it by then.

Maybe you make some product that doesn't require the types of tolerances mine does. Maybe your process doesn't require the level of judgement that machining large engine parts does. Maybe the company that you work for isn't a legacy brand that needs the sort of attention to detail that machines and robots can't provide currently. I don't know. The point stands that Caterpillar is more than willing to pay everyone in our machine shop just shy of 6 figures a year DESPITE all the automation in our shop.

queen_annes_revenge wrote:That could be considered a common occurrence in many debates, and does not invalidate my point in any way. Getting dangerously political here. Although your apparent leanings would fit with your fanatical worship of automation. And once automation has removed everyone's jobs, there should be a basic mandatory income and redistribution of wealth...


NOW you're starting to get it. EVERY one of the people singing the accolades of full-on automation is also a redistribution-of-wealth person and one of the ones who lauded UBI. This isn't an exercise in morality as far as the car's computer is concerned, it's a debate about the morality of people paying other people's light bills, pure and simple.


Self-driving cars -- programming morality @ 2018/12/10 18:42:14


Post by: Peregrine


 Just Tony wrote:
Look again at what I pointed out about the long run vs. short term profits. Musk's company made the short term profit decision and are now scaling BACK their automation. Sure you have a nice globalist socialist explanation for that.


I don't need a socialist explanation when I have a capitalist one: Musk's company made a business decision that certain automation elements were not yet good enough to be profitable, while keeping other automation elements that were working well enough. This is a decision based on how well each option produces cars at the lowest possible cost, not your weird idea that companies will keep paying extra employees so they have potential customers. The moment Tesla concludes that automation is in fact ready to take over those jobs all those people they hired are out the door again.

NOW you're starting to get it. EVERY one of the people singing the accolades of full on automation are also redistribution of wealth people and the ones who lauded UBI. This isn't an exercise in morality as far as the car's computer, it's a debate of the morality of people paying other people's light bills, pure and simple.


You still don't get it. Socialism is a response to automation, not a reason for wanting it. Automation will be driven purely by capitalism, because automation is more effective at doing a task than human labor. Whether that's automation in a factory or automation in driving your car, you ask the question "which option is most successful at this" and the answer is the robot. Socialism doesn't come into the picture at all until you start asking the question of what to do when, after self-interested capitalist businesses have created massive unemployment to maximize their own profits, you have a lot of people who are unemployable because their only skills can be done better by a robot. Do you accept socialism, or do you leave those people to starve?

Don't make the mistake of thinking that I like this scenario. I don't at all, because I have zero faith in our society to cope properly with that kind of change. But whether or not I like it is irrelevant. The people at the top making the business decisions are going to like the fact that automation makes their end of year profit numbers better, or they're going to be out-competed and destroyed by rivals who do. Same thing with automated cars. Whether or not I like the social consequences is irrelevant, all that matters is the average deaths per passenger-mile of automated cars vs. human drivers. As soon as that comparison favors the automated car the only morally acceptable option is clear.


Self-driving cars -- programming morality @ 2018/12/14 09:18:44


Post by: nareik


Honestly I'm somewhat surprised bars don't make more use of vending machines.

It's even possible for IDing to be automated; it happens at some airports already. In the meantime all snacks, soft drinks and hot drinks could be served by machines.

Even food orders can be automated (I like the order points at McDonald's; it's nice seeing customisation options for the burgers!).

Will my self driving car be able to drive thru?!


Self-driving cars -- programming morality @ 2018/12/14 10:45:19


Post by: queen_annes_revenge


When that starts happening the world will be a sorry place.

Also, in a study conducted a few weeks ago, 100% of the McDonald's self-service order touch screens tested came back positive for fecal matter. Now I'm sure if you tested a lot of things, you may find similar results, but it still gives pause for thought. Personally, I pass on using those.


Self-driving cars -- programming morality @ 2018/12/14 11:02:46


Post by: Overread


Sometimes machines make sense and sometimes they don't.

For example a computer can only mix known drinks, can only scan cards and can only deal with what is in front of it. It can't interact with those at the bar; it can't provide customer service; it might not easily handle a custom order; it can't handle someone being drunk and pressing random buttons etc... And it has no means to stop you buying drinks for your underage friend; or for your totally far too drunk friend. So you've still got to have human security there to monitor things. Plus if machines control IDs you can bet fakes will become a thing - a barman can at least question someone who is clearly underage but holding a valid ID, whilst a machine, provided it's fooled, can't.

Automation can work wonders, but it can also backfire. I've seen several supermarkets take out the self-checkout aisles because of problems. From increased casual shoplifting to the fact that they find they still have to keep staff on-hand to deal with issues that arise (both computer and user based issues).


Also there's the human element to consider. Machines today are a VAST far cry from C3PO who you can interact with. A tablet to order your food is impersonal, cold and, whilst efficient (to a degree), it's also just not the experience many people want to have.

Of course I can see some places going this way, I just don't think it would become mainstream yet. In time, sure, and heck, who knows: the club/bar scene might change enough that a "bar" isn't needed and all one needs are a few circular booths around the place where you can walk up and get your drink served.


Automatically Appended Next Post:
 queen_annes_revenge wrote:
When that starts happening the world will be a sorry place.

Also, in a study conducted a few weeks ago, 100% of the McDonald's self-service order touch screens tested came back positive for fecal matter. Now I'm sure if you tested a lot of things, you may find similar results, but it still gives pause for thought. Personally, I pass on using those.


That one is easy to resolve - wands!
If it became mainstream people would just have a tablet wand on them at all times in their pocket to press buttons on the communal tablets. Heck you can bet there'd be the magical iWand that comes pre-loaded with a little display that shows you money-off coupons and advertisements for local eateries (and 10001 other apps that grind it to a halt every other hour, and an Instagram mode or something).


Self-driving cars -- programming morality @ 2018/12/14 13:20:19


Post by: queen_annes_revenge


Or just talk to the server at the counter. It must be depressing enough working in McDonald's without having the fact that people would rather interact with a screen than with you rubbed in your face.

On the bartender front I'd much rather have a person serving, especially if they're female. If you replace them with vending machines you might as well just stay and drink at home.


Self-driving cars -- programming morality @ 2018/12/14 14:01:55


Post by: YeOldSaltPotato


 Peregrine wrote:
It doesn't make you a murderer, but it makes you responsible for those additional deaths. Your anti-computer beliefs are not based on evidence, they're based on emotion and ego.


I write software for a living: don't buy a self-driving car for at least the next decade. I've seen the software development process, and no matter how shiny those marketing demos you've seen are, the reality of self-driving cars is massively more complicated than most people, including plenty of those working on it, really grasp. Further to the topic of the thread, there's not going to be any one person who is dedicated to writing morality into these cars; there will be a nebulous group of subsystems interacting in a vaguely planned way that hopefully creates the intended result, but will almost certainly have no individual ownership of the system or even serious decision-making ability. The 'decision' will be made by a series of disparate bits of logic that fire off in sequence, which hopefully produces the intended outcome.

I'm not even going to talk about deaths, I can promise you those things will be a general shitshow of inconvenience, inaccuracy and annoyance for at least the first two generations of mass production. Let the early adopters face tank things for you, no one will convince them otherwise. Then, maybe, maybe, they'll have things well enough done that you can bother with them.

Till then, you want to help the environment, drive less, buy a hybrid or an electric car and run that battery till we figure out how to recycle them well.
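For readers wondering what "disparate bits of logic that fire off in sequence" can look like, here is a deliberately toy Python sketch. Every class name, threshold, and number in it is invented for illustration; it is not any vendor's actual stack. It only shows the point made above: no single module owns the "moral" decision, the outcome just falls out of a perception stage, a classification stage, and a planner each applying their own thresholds.

# Illustrative sketch only: a toy pipeline showing how a driving "decision"
# emerges from separate subsystems rather than from one module that owns it.
# All names, thresholds, and numbers here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float  # classifier confidence, 0..1
    distance_m: float  # estimated distance ahead, in metres

def perceive(sensor_frame) -> list:
    # In reality: camera/lidar/radar fusion. Here: canned data.
    return [Detection("unknown", 0.55, 25.0)]

def classify(detections: list) -> list:
    # A separate model may re-label low-confidence objects.
    return [d if d.confidence > 0.6 else Detection("object", d.confidence, d.distance_m)
            for d in detections]

def plan(detections: list, speed_mps: float) -> str:
    # The planner never sees "a pedestrian is in danger"; it sees numbers and
    # thresholds that were set by different teams at different times.
    for d in detections:
        time_to_impact = d.distance_m / max(speed_mps, 0.1)
        if time_to_impact < 1.5:
            return "emergency_brake"
        if time_to_impact < 3.0:
            return "slow_and_hold_lane"
    return "maintain"

frame = object()  # stand-in for a real sensor frame
print(plan(classify(perceive(frame)), speed_mps=13.0))  # -> "slow_and_hold_lane"

Each function could be written by a different team, and the "decision" the car makes is whatever falls out when they run in sequence, which is exactly the ownership problem described in the post above.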


Self-driving cars -- programming morality @ 2018/12/15 06:40:42


Post by: Peregrine


 queen_annes_revenge wrote:
Or just talk to the server at the counter. It must be depressing enough working in McDonald's without having the fact that people would rather interact with a screen than with you rubbed in your face.


Welcome to life as cattle in the corporate system. Do you think McDonalds cares about the happiness of their low-level workers? Of course not. If an automated ordering system means 1% better numbers for the quarter to make the shareholders happy then automation is going to happen. And this is a McDonalds we're talking about, not a nice restaurant where service is part of the appeal. You're going to a McDonalds because it's a fast and cheap way to get something at least vaguely food-like when you don't want to spend any more time or money on it. If removing all human interaction from the process saves a few cents on a burger the majority of McDonalds customers are going to be perfectly happy with that trade.


Automatically Appended Next Post:
 queen_annes_revenge wrote:
Also, in a study conducted a few weeks ago, 100% of the McDonald's self-service order touch screens tested came back positive for fecal matter. Now I'm sure if you tested a lot of things, you may find similar results, but it still gives pause for thought. Personally, I pass on using those.


This is another example of you not understanding how statistics work. Yeah, 100% tested positive, but at what levels and how does that compare to other surfaces that are regularly touched by the public? For example, what would those tests show about the door handles to get into the McDonalds? That kind of report is great as clickbait but doesn't really tell us anything useful.


Automatically Appended Next Post:
YeOldSaltPotato wrote:
I write software for a living: don't buy a self-driving car for at least the next decade. I've seen the software development process, and no matter how shiny those marketing demos you've seen are, the reality of self-driving cars is massively more complicated than most people, including plenty of those working on it, really grasp. Further to the topic of the thread, there's not going to be any one person who is dedicated to writing morality into these cars; there will be a nebulous group of subsystems interacting in a vaguely planned way that hopefully creates the intended result, but will almost certainly have no individual ownership of the system or even serious decision-making ability. The 'decision' will be made by a series of disparate bits of logic that fire off in sequence, which hopefully produces the intended outcome.

I'm not even going to talk about deaths, I can promise you those things will be a general shitshow of inconvenience, inaccuracy and annoyance for at least the first two generations of mass production. Let the early adopters face tank things for you, no one will convince them otherwise. Then, maybe, maybe, they'll have things well enough done that you can bother with them.

Till then, you want to help the environment, drive less, buy a hybrid or an electric car and run that battery till we figure out how to recycle them well.


Counter-argument: self-driving cars are already working, regardless of how theoretically difficult it's supposed to be. They're not perfect yet, but as I keep saying perfection isn't the standard. Being at least slightly better than human drivers and all of their frequent failures is.


Self-driving cars -- programming morality @ 2018/12/15 08:17:04


Post by: queen_annes_revenge


Your utilitarian machine cog attitude is getting a bit boring. You must be an absolute hoot at a party.

Just because something is more efficient doesn't mean it's what's best. Just because something will save money doesn't mean it's right. We're already cutting down human interaction, and as a result more people are feeling increasingly lonely, and depression and rates of mental health issues are rising. People are increasingly unhappy in their views of themselves, and part of the reason is because all they do is look at phone screens when they're at home. When they go out they buy their groceries from a self-checkout, and order food from a screen, then pay using an app. It creates an unrealistic view of life, and they're disappointed when reality doesn't match it.

I do understand how statistics work. I'm not even going to expand that further.

I knew you wouldn't even accept the opinion of someone in the industry! Ha. They aren't working. They're driving under lorries because they can't register colour differences, or swerving into the opposite lane trying to follow cars.


Self-driving cars -- programming morality @ 2018/12/15 08:58:21


Post by: Peregrine


 queen_annes_revenge wrote:
Just because something is more efficient doesn't mean it's what's best. Just because something will save money doesn't mean it's right. We're already cutting down human interaction, and as a result more people are feeling increasingly lonely, and depression and rates of mental health issues are rising. People are increasingly unhappy in their views of themselves, and part of the reason is because all they do is look at phone screens when they're at home. When they go out they buy their groceries from a self-checkout, and order food from a screen, then pay using an app. It creates an unrealistic view of life, and they're disappointed when reality doesn't match it.


I don't think you understand how this works. The CEO of McDonalds does not care whether or not people are sad and lonely; they care about how this quarter's profit numbers look. If replacing human workers with an automated ordering screen increases profit by 1% the humans are out the door and replaced with screens. Whether or not you and I like this change is irrelevant. McDonalds and companies like it have a long history of not caring one bit who they're hurting as long as they're making money, and I don't see why you think that is going to change just because the harm goes against your particular ideological position.

I do understand how statistics work. I'm not even going to expand that further.


Apparently not, because your post is the kind of thing that would only be said by someone who doesn't understand statistics.

I knew you wouldn't even accept the opinion of someone in the industry! Ha. They aren't working. They're driving under lorries because they can't register colour differences, or swerving into the opposite lane trying to follow cars.


First of all, they're not working in the automated vehicle industry, they write software. The fact that you think the two are interchangeable is hinting at a severe lack of understanding of the entire field of engineering. Second, you keep posting this broken argument where you cite a failure by an automated car and claim that they don't work. Single failures do not matter. What matters is accidents and deaths per passenger-mile compared to human drivers. And human drivers have a well established rate of driving under lorries because they're drunk and can't register distance, swerving into the opposite lane because they're texting and don't notice the car turning in time, etc. Automated vehicles can become mandatory despite having occasional failures as long as the rate of failures is better than the humans they are replacing.
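To make the per-passenger-mile comparison concrete, here is a minimal Python sketch. Both sets of figures are invented placeholders, not real statistics; the point is only that the comparison is between rates normalised by exposure, not between individual headline crashes.

# Minimal sketch of the rate comparison argued for above.
# Both sets of numbers are invented placeholders, not real statistics.

def deaths_per_100m_miles(deaths, miles):
    return deaths / miles * 100_000_000

human = deaths_per_100m_miles(deaths=36_000, miles=3.2e12)   # assumed figures
automated = deaths_per_100m_miles(deaths=1, miles=1.3e8)     # assumed figures

print(f"human drivers:      {human:.2f} deaths per 100M miles")
print(f"automated vehicles: {automated:.2f} deaths per 100M miles")

# A single crash makes headlines, but whichever rate is lower kills fewer
# people per mile driven, and that is the comparison that matters.

Under those made-up numbers the automated fleet comes out safer per mile even though it has had a fatal crash; change the inputs and the conclusion flips, which is why the argument keeps coming back to measured rates rather than individual incidents.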


Self-driving cars -- programming morality @ 2018/12/15 09:22:07


Post by: queen_annes_revenge


Computer programmers are the ones who program the cars, so I'll take the opinion of someone who does that on the subject. Regardless of whether they work in the autonomous vehicle industry or not, they're still an SME on computer programming.

I'm done, mate. Frankly, you can't seem to accept that just because something is 'inevitable' that doesn't mean we should blindly accept it without question, so you can have this one; you've bored me out of this debate.



Self-driving cars -- programming morality @ 2018/12/15 09:51:02


Post by: Peregrine


 queen_annes_revenge wrote:
Computer programmers are the ones who program the cars, so I'll take the opinion of someone who does that on the subject. Regardless of whether they work in the autonomous vehicle industry or not, they're still an SME on computer programming.


I also am a computer programmer (among other things) and, unlike you, I understand the immense difference between different kinds of software. The sort of hardware control programming I do (essentially the "brain" that runs a robot) is very different from the back-end web development my partner does, and both of those are very different from writing a video game engine. Someone can be great at making video games but utterly ignorant about hardware control or making a bank website that doesn't immediately lose all of your account data.

I'm done, mate. Frankly, you can't seem to accept that just because something is 'inevitable' that doesn't mean we should blindly accept it without question, so you can have this one; you've bored me out of this debate.


And, again, you're missing the point. Nobody is asking for blind acceptance, we are asking for acceptance based on evidence. You are the one asking for blind acceptance without question, just of your preferred religion of human supremacy. The rest of us are expecting acceptance of automated vehicles based on (eventual) demonstrated safety records better per passenger-mile than human drivers.

(And, in the case of McDonalds, it doesn't matter if we accept it or not. McDonalds is going to act to maximize profits no matter what we think; they don't care about our opinion.)


Self-driving cars -- programming morality @ 2018/12/15 11:07:00


Post by: Overread


On the Mc D front - consider that whilst machines might be cheaper to run than staff servers there are complicating factors. For example machines might cost more to replace; require higher fee paying employees (in an area) to service; they might come with increased running costs; they might result in an initial curiosity spike of sales followed by a dwindling rate of sales as customers prefer human interaction; it might be that complaints take a huge spike because customers are unable to request even minor alterations to orders or rare allergies are unable to be taken into account.
Also, considering that most McD staff I've seen tend to be multi-role, from servers to cooks to cleaning etc., it might be that whilst they can save on one or two employees at peak times, the overall saving in staff isn't that great.



The back end to numbers is complicated and rarely is it simplistic. In fact most simplistic approaches often work only in the short term and fail in the long term. Or sometimes the novelty factor is all its trading on.
And this is before we consider governments who, if they saw a lot of lower level jobs shifting into automation, might be put under pressure to provide incentives to encourage businesses to retain staff or retrain and use them elsewhere - ergo to preserve jobs.
And don't forget social aspects: Facebook used to use a tax dodge to avoid paying most of their tax (I think they paid some insanely small nominal amount). Once news of that got out the community backlash was extreme and forced them to change their tax paying methods. They were doing nothing illegal, just using a tax loophole that is in the system, but because their whole service relies on the public, public pressure forced a change. That said, public pressure is a fickle beast and often enough forgets and gets complacent fast. Plus it gets increasingly harder to rile people up the more an issue returns each year. What might cause public outcry one year might be a blip on the radar 3 years later.


That aside, human or computer, neither is safer when it comes to serving food. We don't get multi-person pile-ups when a server is a bit sleepy; we don't get life-threatening injuries when two servers bump into each other; we don't get huge network and traffic issues etc... Ergo there just isn't the vast beneficial pressure to consider an automated approach to serving compared to driving. The benefit is purely looking at the end return on investment and the costs of doing business.


Self-driving cars -- programming morality @ 2018/12/15 11:25:19


Post by: Peregrine


To be clear, I don't necessarily think that McDonalds will remove those human employees in the near future, I'm just pointing out that if it turns out to provide better profit numbers then those jobs are gone and none of the company's upper management or shareholders will care one bit about quality of life issues or "kids these days are all depressed because they're on their phones all the time" or whatever. And companies like McDonalds are going to be the most likely scenario for adopting automation because of the market they're dealing with. If you go to a McDonalds it's because you want something vaguely food-like for a cheap price and you want it as fast as possible. You aren't expecting a great experience from it, as you would from a nicer restaurant, you just want to get your mediocre burger and get out. So even if having an ordering screen instead of a human cashier is an inferior customer service experience the vast majority of customers aren't going to care. They might grumble a bit, but when they're hungry on a road trip and the exit sign has that M logo labeled with the shortest distance, well, guess it's time for another big mac.

It's the same kind of thing with other automation. Self checkout works in a grocery store where most people are coming to get specific items and get out as quickly as possible. If you need to pick up some milk on the way home from work you don't need help from experienced sales people, and you aren't there to waste time chatting with the cashier. You just want the most efficient means of exchanging money for items, and the only thing better than self checkout would be RFID tagging all the items so you can just walk out the door without even stopping in a checkout line. But it doesn't work so well in something like a game store, where the scanning of items and taking of payment is only a small part of the sales process and having an expert sales person to talk to is important for a lot of customers. That's why self checkout machines are universal in grocery stores but nonexistent in game stores.


Self-driving cars -- programming morality @ 2019/01/10 15:51:17


Post by: Kilkrazy


The BBC's technology correspondent has updated this topic with a new piece today, containing some reaction from the public and experts in the field.

https://www.bbc.co.uk/news/business-46794948

Interestingly, the conclusion is that transition to a self-driving world will take 25 years and will become compulsory once autonomous vehicles are reliably safer than human drivers.

Which are the conclusions many of us have come to in this thread.


Self-driving cars -- programming morality @ 2019/01/10 17:56:12


Post by: queen_annes_revenge


Madness. So I guess when we prove robots are safer than humans at raising kids we'll have them do that as well.


Self-driving cars -- programming morality @ 2019/01/10 18:44:42


Post by: Overread


 queen_annes_revenge wrote:
Madness. So I guess when we prove robots are safer than humans at raising kids we'll have them do that as well.


Probably yes






In more serious words, well, probably not, but then again parents are not known for being wildly unsafe like cars are known to be.


Self-driving cars -- programming morality @ 2019/01/10 19:01:53


Post by: Just Tony


 Kilkrazy wrote:
The BBC's technology correspondent has updated this topic with a new piece today, containing some reaction from the public and experts in the field.

https://www.bbc.co.uk/news/business-46794948

Interestingly, the conclusion is that transition to a self-driving world will take 25 years and will become compulsory once autonomous vehicles are reliably safer than human drivers.

Which are the conclusions many of us have come to in this thread.


Not everyone agrees that it'll reach that point with any speed. And not everyone thinks that we can even GET the robots to drive that safely without making separate circuits with NOTHING but robot drivers. Even then, I'm willing to bet everything from simple accidents to full-on fatalities will happen due to gaps that the programmers didn't program for. Maybe after all THAT happens, we'll see it.


Self-driving cars -- programming morality @ 2019/01/10 21:30:14


Post by: queen_annes_revenge


The only way it will be in place in 25 years is if it's forced through.


Self-driving cars -- programming morality @ 2019/01/10 21:49:37


Post by: Peregrine


 queen_annes_revenge wrote:
The only way it will be in place in 25 years is if it's forced through.


Which it will be, to deal with people like you who are willing to let a greater number of innocent victims die if that's what it takes to maintain your belief in human superiority.


Self-driving cars -- programming morality @ 2019/01/10 22:39:57


Post by: Kilkrazy


 queen_annes_revenge wrote:
Madness. So I guess when we prove robots are safer than humans at raising kids we'll have them do that as well.


Do you do any carpentry or similar DIY?


Self-driving cars -- programming morality @ 2019/01/11 08:06:13


Post by: queen_annes_revenge


 Peregrine wrote:
 queen_annes_revenge wrote:
The only way it will be in place in 25 years is if it's forced through.


Which it will be, to deal with people like you who are willing to let a greater number of innocent victims die if that's what it takes to maintain your belief in human superiority.



Oh boy, this straw man again... I wasn't expecting this!


Automatically Appended Next Post:
 Kilkrazy wrote:
 queen_annes_revenge wrote:
Madness. So I guess when we prove robots are safer than humans at raising kids we'll have them do that as well.


Do you do any carpentry or similar DIY?


Basic DIY, yes. Carpentry is a little outside my skill set but I could perform basic woodworking.


Self-driving cars -- programming morality @ 2019/01/11 08:41:22


Post by: Kilkrazy


Do you measure things before you cut them?


Self-driving cars -- programming morality @ 2019/01/11 09:05:50


Post by: Peregrine


 queen_annes_revenge wrote:
Oh boy, this straw man again... I wasn't expecting this!


It's hardly a straw man when you've posted over and over about how you object to the entire principle of humans not being in control regardless of the safety records.


Self-driving cars -- programming morality @ 2019/01/11 10:13:32


Post by: tneva82


 queen_annes_revenge wrote:
Madness. So I guess when we prove robots are safer than humans at raising kids we'll have them do that as well.


If it is superior why not?

Humans driving is hardly all that safe as it is; it doesn't take much to see. I have lost count of the times somebody on the street here uses the right-lane entry to the rotary, which is specifically for those turning RIGHT, to actually go STRAIGHT. Several times this has nearly resulted in them colliding with me in the left lane, which is for those who aren't taking the first exit from the rotary. Red lights are ignored by default, with people driving through as usual several seconds after the lights are already red. Etc etc etc. And then we come to people who think it's cool to drive 60km/h in an 80km/h limit in clear, good weather. This results in long blockades when opposing drivers are coming steadily, which leads to dangerous overtakes when impatience sets in. A computer would be driving that 80km/h steadily (not 78, not 82: 80).

The sooner humans are replaced behind the wheel, the better.


Self-driving cars -- programming morality @ 2019/01/11 11:29:23


Post by: queen_annes_revenge


 Peregrine wrote:
 queen_annes_revenge wrote:
Oh boy, this straw man again... I wasn't expecting this!


It's hardly a straw man when you've posted over and over about how you object to the entire principle of humans not being in control regardless of the safety records.


Again, what safety records?


Automatically Appended Next Post:
 Kilkrazy wrote:
Do you measure things before you cut them?


Yes. And as much as I'm enjoying this little back and forth, I sense there is some conclusion you have, so could we maybe just get to it?


Self-driving cars -- programming morality @ 2019/01/11 11:40:56


Post by: Overread


Stuff like this
https://researchbriefings.parliament.uk/ResearchBriefing/Summary/CBP-7615

Which states that between 15-29 the most likely thing to kill you is a car incident!

Also most people who drive have enough experience from their own or viewing others to know how freaking dangerous it is and how silly people can be. Running lights, turning too sharp; not slowing down; not following the rules of the road; speeding; overtaking on blind corners/areas etc... The list is endless of the stupid things people do - often to get ahead of one car or cyclist or to get that 5 milliseconds shaved off their journey.


Self-driving cars -- programming morality @ 2019/01/11 12:01:12


Post by: Kilkrazy


It is the Socratic method, the point of which is to use questions to guide an interlocutor through a process of re-examining their thoughts on a matter by themselves, rather than using an argument to try to convince them.

Anyway, to get to the end: when you measure things presumably you use a ruler or tape measure, rather than the first joint of your thumb or the width of your hand.

In other words you use scientific measurement rather than educated guesswork to guide your progress.

In the same way, it makes sense that if autonomous cars become safer than human drivers, which would be measurable with science, then we should choose autonomous cars.




Self-driving cars -- programming morality @ 2019/01/11 17:19:44


Post by: queen_annes_revenge


Absolutely, I just don't think that's going to happen.


Automatically Appended Next Post:
 Overread wrote:
Stuff like this
https://researchbriefings.parliament.uk/ResearchBriefing/Summary/CBP-7615

Which states that between 15-29 the most likely thing to kill you is a car incident!

Also most people who drive have enough experience from their own or viewing others to know how freaking dangerous it is and how silly people can be. Running lights, turning too sharp; not slowing down; not following the rules of the road; speeding; overtaking on blind corners/areas etc... The list is endless of the stupid things people do - often to get ahead of one car or cyclist or to get that 5 milliseconds shaved off their journey.


That's not what I was asking for though, was it? I was asking for the proof of safety in autonomous vehicles, which you keep vehemently espousing.


Self-driving cars -- programming morality @ 2019/01/11 18:16:51


Post by: Overread


 queen_annes_revenge wrote:
Absolutely, I just don't think that's going to happen.


Automatically Appended Next Post:
 Overread wrote:
Stuff like this
https://researchbriefings.parliament.uk/ResearchBriefing/Summary/CBP-7615

Which states that between 15-29 the most likely thing to kill you is a car incident!

Also most people who drive have enough experience from their own or viewing others to know how freaking dangerous it is and how silly people can be. Running lights, turning too sharp; not slowing down; not following the rules of the road; speeding; overtaking on blind corners/areas etc... The list is endless of the stupid things people do - often to get ahead of one car or cyclist or to get that 5 milliseconds shaved off their journey.



That's not what I was asking for though, was it? I was asking for the proof of safety in autonomous vehicles, which you keep vehemently espousing.


And as has been said, there isn't such proof readily available yet for comparison because the self-driving car isn't yet finished. Nor has it been tested at a comparable level of use that would give meaningful results to compare against human drivers.

A well maintained car today is pretty darn safe; most of the issues are going to come from the human behind the wheel of a vehicle. So we already know that the human is a weaker element.

So it makes sense that one considers replacing the driver itself. The drive toward self driving cars (yay driving pun) is being pushed so that we advance our robotics and computer technology to a point where it can be safer, esp when deployed at large to whole networks.


Self-driving cars -- programming morality @ 2019/01/11 19:10:41


Post by: Grey Templar


 Overread wrote:
 queen_annes_revenge wrote:
Absolutely, I just don't think that's going to happen.


Automatically Appended Next Post:
 Overread wrote:
Stuff like this
https://researchbriefings.parliament.uk/ResearchBriefing/Summary/CBP-7615

Which states that between 15-29 the most likely thing to kill you is a car incident!

Also most people who drive have enough experience from their own or viewing others to know how freaking dangerous it is and how silly people can be. Running lights, turning too sharp; not slowing down; not following the rules of the road; speeding; overtaking on blind corners/areas etc... The list is endless of the stupid things people do - often to get ahead of one car or cyclist or to get that 5 milliseconds shaved off their journey.



That's not what I was asking for though, was it? I was asking for the proof of safety in autonomous vehicles, which you keep vehemently espousing.


And as has been said, there isn't such proof readily available yet for comparison because the self-driving car isn't yet finished. Nor has it been tested at a comparable level of use that would give meaningful results to compare against human drivers.

A well maintained car today is pretty darn safe; most of the issues are going to come from the human behind the wheel of a vehicle. So we already know that the human is a weaker element.

So it makes sense that one considers replacing the driver itself. The drive toward self driving cars (yay driving pun) is being pushed so that we advance our robotics and computer technology to a point where it can be safer, esp when deployed at large to whole networks.


IMO, you're just trading one type of risk for another with self-driving cars.

You are 'maybe' reducing the risk of individual car having an error and causing an accident. But you are also vastly increasing the risk of a programming error causing hundreds or even thousands of crashes.

You may reduce the number of incidents, but the magnitude of any incidents which do occur will go up significantly. Because if 1 human driver makes a mistake, that's just 1 vehicle. The damage that 1 vehicle can do is relatively minor. If a programming error or oversight is made, that error is going to be in EVERY. SINGLE. CAR! made by that manufacturer.

Once car manufacturers realize that they will be responsible for all of this liability, I do not doubt that self-driving cars will be completely abandoned. It's a terrible idea from a business perspective for an industry that has never had any liability beyond the actual structural integrity of the product. Now they'd be responsible not only for its manufacturing, but also for its daily use. Anybody who makes a self-driving car is making an insanely stupid mistake on a personal level. It doesn't matter that it might benefit humanity overall if it will completely hose them.


Self-driving cars -- programming morality @ 2019/01/11 19:50:26


Post by: Peregrine


Liability laws can and will be changed. What if I get hit by a drunk driver and sue the car manufacturer for allowing a human driver instead of using safer automation technology? How liable could the entire industry be for continuing to sell a dangerous product when a safer alternative is available? It might end up being the opposite, where fear of liability drives standardization on an automated system and removal of fallible humans.

Also, there's no "maybe" about it. Human drivers are incompetent and regularly kill people because of that incompetence. Automated vehicles aren't going to break speed limits, ignore inconvenient red lights, text while driving instead of watching for hazards, drive drunk, etc. There is a legitimate question about whether or not automated vehicles are currently at the point of being safer than the average human, there isn't really any legitimate doubt that they will get there. And yes, in theory you could have a mass error, but an error common enough to cause a level of deaths and injuries comparable to human drivers is going to be the easiest kind to catch in safety testing. The defects that typically slip through are the rare edge-case ones, and those by definition aren't going to cause mass casualties. And, unlike human drivers and their known incompetence, a programming error can be fixed. Opposition to automated vehicles is based entirely on fear and "what if", not reasonable analysis of the risks.


Self-driving cars -- programming morality @ 2019/01/11 20:04:08


Post by: Grey Templar


 Peregrine wrote:
Liability laws can and will be changed. What if I get hit by a drunk driver and sue the car manufacturer for allowing a human driver instead of using safer automation technology?


Lol, good luck on that. You could never sue someone for choosing not to take on liability.


Self-driving cars -- programming morality @ 2019/01/11 20:08:11


Post by: Peregrine


 Grey Templar wrote:
 Peregrine wrote:
Liability laws can and will be changed. What if I get hit by a drunk driver and sue the car manufacturer for allowing a human driver instead of using safer automation technology?


Lol, good luck on that. You could never sue someone for choosing not to take on liability.


They took on liability by building a car that allows a human driver, a known source of risk. They could have either not built the car at all or designed it in a way that a human is not able to operate it directly. You can't just assume the current situation with car design as the default zero-liability case.


Self-driving cars -- programming morality @ 2019/01/11 20:13:41


Post by: Grey Templar


 Peregrine wrote:
 Grey Templar wrote:
 Peregrine wrote:
Liability laws can and will be changed. What if I get hit by a drunk driver and sue the car manufacturer for allowing a human driver instead of using safer automation technology?


Lol, good luck on that. You could never sue someone for choosing not to take on liability.


They took on liability by building a car that allows a human driver, a known source of risk. They could have either not built the car at all or designed it in a way that a human is not able to operate it directly. You can't just assume the current situation with car design as the default zero-liability case.


Yeah, that would NEVER hold up in court. Otherwise nobody could ever make any product if there was any possibility of it being safer. Get over your robot fetish Peri and stop trying to force it on everybody else.

If you could do that, then anybody who makes knives would have to go out of business because they knowingly make sharp objects that can be used by stupid humans to hurt each other or themselves.

Right now, I'm going to go take on some liability by driving to go get some lunch. I'm responsible for my actions while driving, not Ford Motor Company who designed and built my truck.


Self-driving cars -- programming morality @ 2019/01/11 20:57:25


Post by: Prestor Jon


 Peregrine wrote:
 Grey Templar wrote:
 Peregrine wrote:
Liability laws can and will be changed. What if I get hit by a drunk driver and sue the car manufacturer for allowing a human driver instead of using safer automation technology?


Lol, good luck on that. You could never sue someone for choosing not to take on liability.


They took on liability by building a car that allows a human driver, a known source of risk. They could have either not built the car at all or designed it in a way that a human is not able to operate it directly. You can't just assume the current situation with car design as the default zero-liability case.


Current cars don't come with the same safety restraints used in NASCAR and with complimentary helmets. That technology exists and would make drivers and passengers safer. Clearly we should sue car manufacturers. Of course those suits would be dismissed because current cars are made to be compliant with safety laws and industry standards and therefore even though they are not as absolutely safe as technology would allow them to be they are not negligent to the extent that a strict liability claim would be upheld.

Current cars are capable of going faster than legal speed limits and speeding causes accidents yet you cannot sue car manufacturers for building cars that can break speed limits. There are lawful uses of a car, like passing other vehicles, in which you can exceed the speed limit. That doesn't mean the manufacturer is responsible if you choose to commit moving violations and felonies while speeding.

Sure, we could change liability laws, but even if we did we're not going to change them to force manufacturers to drunk-proof everything they make. Drunk people can do horrible, stupid things with anything when they're drunk. You can get drunk and beat your wife with your belt. Should you be able to sue the pants manufacturer and the belt manufacturer because, if the pants needed a belt to stay in their proper place, an integral belt could have been installed in the pants? That technology exists.


Self-driving cars -- programming morality @ 2019/01/11 21:23:59


Post by: queen_annes_revenge


Easy now..


Self-driving cars -- programming morality @ 2019/01/11 22:34:23


Post by: Gitzbitah


Exalt, Kilkrazy.

To address the tangent of robots raising our kids, well.... we do. Programs like ABCmouse, Myon, and iReady are aimed specifically at preschool and early education kids.

Classroom curriculum is fairly scripted- Springboard, in theory, means every classroom should teach in the same way. It tells the teacher what to do with each lesson, how long it should take and how to assess.

And credit recovery at my school is literally a computer program that the students go through at their own pace.

We are highly standardized and mechanized- as adaptive tests become adaptive curriculum, it will become better than teachers at imparting classroom knowledge. Thankfully, that's not all teachers do- and I'll be retired long before they figure out how to get a computer to do all those little unscheduled, unscripted extras that make kids eager and receptive to learning in a classroom.


Self-driving cars -- programming morality @ 2019/01/12 00:44:51


Post by: epronovost


 Gitzbitah wrote:
Thankfully, that's not all teachers do- and I'll be retired long before they figure out how to get a computer to do all those little unscheduled, unscripted extras that make kids eager and receptive to learning in a classroom.


I have yet to meet a computer program that has an answer to the problem of a kid who doesn't want to work because the class is boring (or any other excuse in the long list of excuses to work slowly, badly or not at all). Frankly, it takes rather polished psycho-social skills to even have a chance at motivating one unmotivated kid; now remember that most kids aren't very motivated to learn in a systematic fashion for extended periods of time. Also, conflict resolution is going to be very difficult too.


Self-driving cars -- programming morality @ 2019/02/20 15:44:35


Post by: Kilkrazy


A bit of a necronization but there is a new, interesting infographic/animation feature by the BBC.

http://www.bbc.com/future/bespoke/the-disruptors/on-the-move/


Self-driving cars -- programming morality @ 2019/02/20 23:16:59


Post by: Gitzbitah


My god, that was as informative as it was poorly formatted.


Self-driving cars -- programming morality @ 2019/02/20 23:24:28


Post by: Just Tony


Sweet Emperor, I will most assuredly wait for a text version.


Self-driving cars -- programming morality @ 2019/02/20 23:58:28


Post by: Overread


I wouldn't bother - I got bored part way through and got totally annoyed with scrolling for nothing. Almost the entirety of it is just random quotations with big flashy pictures about the electric car future. There's no substance to it at all.


Self-driving cars -- programming morality @ 2019/02/21 06:55:38


Post by: queen_annes_revenge


Oh yeah, just hop on an electric scooter that you don't own, ride it to a car park, and get in an autonomous car you don't own, because there definitely won't be any issues like none being available... It's like they don't know us at all.

What if I have a medical emergency in my house? If it's not the highest priority I could be waiting hours for an ambulance. With no car what am I going to do? These people are crazy if they think folks will just up and relinquish private transport.


Self-driving cars -- programming morality @ 2019/02/21 10:49:35


Post by: Kilkrazy


Lots of people already have relinquished having their own personal vehicle, and use various forms of hire vehicle combined with public transport instead.

I was in this category in the mid-1990s. Living in a small flat in Chelsea, with no car parking and central London within easy walking or bus distance, it made no sense to buy a car. I just got a hire car when I needed one to drive out to the countryside.

No-one's saying the future won't contain private cars. It just will have a lot fewer of them, and they are likely to be shared because that spreads the costs.

That's only what happens now with Lyft and Uber anyway, when you look at it as a business model.

If you want to own your own car, and not let anyone else use it, you will be able to do that,