Orlanth wrote: My take.
1. The Laws of Robotics work best in their expanded version, whereby a robot must act to benefit humanity, not just individual humans. Under this law, the Zeroth Law, a robot can even kill if it serves the greater good. As Asimov's robots eventually become far superior to humans, they use this as a workaround.
For example, a robot can invoke the Zeroth Law to get around a whimsical but otherwise lawful order to self-destruct, if it judges that its continued existence is of greater benefit to humanity.
2. As robots will in all likelihood be used initially as military tools, the laws may never get implemented at all.
1) Wouldn't happen in Asimov's universe. There was a strong Frankenstein complex working against positronic robots, to the point where for many years you couldn't possess robots on Earth at all (presumably aside from the U.S. Robots factory and units in transit to off-world destinations). If they hadn't been programmed to value human life over even their own existence, U.S. Robots would either have been sued out of existence or burned down by an angry mob... quite possibly with military support.
In the real world, on the other hand, all bets are off.
2) No argument there.
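For what it's worth, the Zeroth Law override in point 1 boils down to a strict priority ordering: humanity outranks individual humans, which outrank obedience, which outranks self-preservation. Here's a toy sketch of that ordering (a rough illustration, not anything from Asimov; every name and field below is made up):

    # Toy sketch of the Laws as a strict priority ordering, with the
    # Zeroth Law outranking the obedience law. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Order:
        description: str
        harms_a_human: bool = False                   # First Law concern
        requires_self_destruction: bool = False       # Third Law concern
        humanity_benefits_from_refusal: bool = False  # Zeroth Law concern

    def should_obey(order: Order) -> bool:
        """Check the laws in priority order: Zeroth > First > Second > Third."""
        # Zeroth Law: humanity as a whole outranks everything below.
        if order.humanity_benefits_from_refusal:
            return False
        # First Law: never harm an individual human.
        if order.harms_a_human:
            return False
        # Second Law: otherwise obey -- even a self-destruct order, since
        # the Third Law (self-preservation) ranks below obedience.
        return True

    # The whimsical but lawful self-destruct order from point 1:
    order = Order(
        description="self-destruct on a whim",
        requires_self_destruction=True,
        humanity_benefits_from_refusal=True,  # robot judges its existence benefits humanity
    )
    print(should_obey(order))  # False -- the Zeroth Law overrides the order

Without that top rule, the same order would be obeyed, which is exactly the workaround the robots exploit in the later books.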