And you know what he was actually referencing? Computers doing what you want.
Do we have the tech? Perhaps, but that doesn't mean the code always does what you planned it to.
This message was edited 1 time. Last update was at 2014/05/10 00:25:54
So an Xbox can "target identify" everyone in the house and tell the users apart.
But state of the art computers cannot tell the difference between friendlies holding FOF tags and people holding guns.
Riiiiiighhhhht.....
As someone above said, you don't program, do you?
It's 2014; we have computers plenty capable of that kind of thing now, and mounting them on weapons platforms is 100% within the scope of what we can do right now.
I'm not sure who you were responding to, or who you were referring to in your remark, but I was actually trying to make the opposite argument. Detecting human beings and their movement is relatively simple. I could probably take existing technology, spend a week or two, and be able to write something that can distinguish between me, my roommate*, and my dog (a rough sketch of that kind of pipeline follows this post). Creating an algorithm to detect friend or foe, or to establish intent, however, is very much non-trivial. And that doesn't even begin to take into account the horrors that could occur when you consider sabotage or hackers.
I mean, I'm a halfway decent programmer. I still write code that has bugs in it. ALL programmers write code that has bugs in it. This isn't bridge building, and it won't be for a long time, if it ever gets there. For any nontrivial piece of software, it is impossible to mathematically prove that it will never produce a false positive.
I guess the question is how comfortable you are with the idea of false positives, and how much you, as a soldier with your life on the line, can tolerate false negatives.
*Unless he wears a wig, or uses any of the well-documented techniques for circumventing facial recognition software.
This message was edited 1 time. Last update was at 2014/05/10 00:45:52
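As a rough illustration of the "tell me, my roommate, and my dog apart" claim above, here is a minimal sketch assuming the open-source face_recognition library; the file names are invented, and anything in frame without a recognizable face just falls through as unknown.

```python
# Minimal sketch: label the faces in a photo against a couple of known people.
# File names are made up; this assumes the face_recognition package is installed.
import face_recognition

# Pre-compute one encoding per known person from a reference photo.
known = {
    "me": face_recognition.face_encodings(
        face_recognition.load_image_file("me.jpg"))[0],
    "roommate": face_recognition.face_encodings(
        face_recognition.load_image_file("roommate.jpg"))[0],
}

def identify(image_path):
    """Return a label for every face found in the image."""
    image = face_recognition.load_image_file(image_path)
    labels = []
    for encoding in face_recognition.face_encodings(image):
        matches = face_recognition.compare_faces(list(known.values()), encoding)
        names = [name for name, hit in zip(known, matches) if hit]
        labels.append(names[0] if names else "unknown")
    return labels or ["no face detected"]  # no face at all? probably the dog

print(identify("living_room.jpg"))
```

Which is exactly the point being made: matching a face against a tiny whitelist is a weekend project, while detecting friend/foe or intent is not.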
The threat of killer robots is completely overblown. The current trajectory for R&D for these things requires first developing units that do everything *except* make the decision to fire (which is made by a human handler). They'll do everything else, i.e. respond to contact and acquire targets, but the actual decision would be made by a human. Read more here:
And so how do you actually tell if you're looking at a friend or a foe?
Ideas I have:
Some sort of radio transmitter: Unless you give one to noncombatants, they're still targets. Not good. Also, any enemy is perfectly safe the moment they get the ability to spoof the technology.
Facial recognition: Makeup can circumvent that. Or a mask. "Unregistered" noncombatants are also still in danger.
I wanted to keep going, but I really don't have any other ideas. I fear tech like this could become the "land mines" of the 21st century.
Automatically Appended Next Post:
NuggzTheNinja wrote: The threat of killer robots is completely overblown. The current trajectory for R&D for these things requires first developing units that do everything *except* make the decision to fire (which is made by a human handler). They'll do everything else, i.e. respond to contact and acquire targets, but the actual decision would be made by a human. Read more here:
easysauce wrote: As someone above said, you don't program, do you?
Well, in a few months I should get my PhD in computer science.
"Our fantasy settings are grim and dark, but that is not a reflection of who we are or how we feel the real world should be. [...] We will continue to diversify the cast of characters we portray [...] so everyone can find representation and heroes they can relate to. [...] If [you don't feel the same way], you will not be missed"
https://twitter.com/WarComTeam/status/1268665798467432449/photo/1
daedalus wrote: And so how do you actually tell if you're looking at a friend or a foe?
Ideas I have:
Some sort of radio transmitter: Unless you give one to noncombatants, they're still targets. Not good. Also, any enemy is perfectly safe the moment they get the ability to spoof the technology.
Facial recognition: Makeup can circumvent that. Or a mask. "Unregistered" noncombatants are also still in danger.
I wanted to keep going, but I really don't have any other ideas. I fear tech like this could become the "land mines" of the 21st century.
The AI for visual cognition is currently very good. Distinguishing between friendlies (i.e., armed guys wearing a standardized uniform) and other armed individuals is not outside the realm of possibility.
Armed people belong to the class "person + weapon." IFF logic would go something like: if the person is armed, check whether they are friendly; if not friendly, engage. You could also have the robot react only to fire directed at it (i.e., return fire only). And there are a lot of ways to distinguish incoming from outgoing fire.
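A minimal sketch of that engagement rule, purely for illustration; the Detection fields and the decide() helper are invented names, and in practice everything hinges on how reliable those boolean inputs actually are.

```python
# Toy version of the rule above: only armed people are considered, and armed
# people are only engaged if they fail the friend-or-foe check. The inputs are
# assumed to come from upstream sensors/classifiers.
from dataclasses import dataclass

@dataclass
class Detection:
    is_person: bool
    has_weapon: bool
    iff_tag_valid: bool  # e.g. a verified FOF transponder reply

def decide(d: Detection) -> str:
    if not d.is_person or not d.has_weapon:
        return "ignore"              # unarmed people are never targets here
    if d.iff_tag_valid:
        return "ignore"              # armed but friendly
    return "flag_for_engagement"     # armed and not identified as friendly

print(decide(Detection(is_person=True, has_weapon=True, iff_tag_valid=False)))
```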
daedalus wrote: I mean, I'm a halfway decent programmer. I still write code that has bugs in it. ALL programmers write code that has bugs in it. This isn't bridge building, and it won't be for a long time, if it ever gets there. For any nontrivial piece of software, it is impossible to mathematically prove that it will never produce a false positive.
I guess the question is how comfortable you are with the idea of false positives, and how much you, as a soldier with your life on the line, can tolerate false negatives.
Another question: how do you explain it to the family of an innocent civilian wrongly targeted through a false positive?
A third question: who do you explain it to if the entire family was wiped out? That has already happened in remote-controlled drone strikes, with an actual human being in control, in Pakistan and Yemen.
With plenty of mistakes being made already with human beings operating the trigger, would it really be a good idea to have an algorithm making the decision whether or not to shoot to kill? By all means, let's have autonomous surveillance drones gathering intelligence over an active combat zone to support troops on the ground. But I think it's extremely dangerous and wrong to delegate the decision to kill to a machine, however technologically advanced, today or in 100 years' time.
NuggzTheNinja wrote: The AI for visual cognition is currently very good. Distinguishing between friendlies (i.e., armed guys wearing a standardized uniform) and other armed individuals is not outside the realm of possibility.
Yes, provided the bad guys make sure not to wear the standardized uniform, and keep their guns well in sight. I would like to ask them to also paint a small target on their foreheads.
"Our fantasy settings are grim and dark, but that is not a reflection of who we are or how we feel the real world should be. [...] We will continue to diversify the cast of characters we portray [...] so everyone can find representation and heroes they can relate to. [...] If [you don't feel the same way], you will not be missed"
https://twitter.com/WarComTeam/status/1268665798467432449/photo/1
NuggzTheNinja wrote: The AI for visual cognition is currently very good. Distinguishing between friendlies (i.e., armed guys wearing a standardized uniform) and other armed individuals is not outside the realm of possibility.
Yes, provided the bad guys make sure not to wear the standardized uniform, and keep their guns well in sight. I would like to ask them to also paint small target on their foreheads.
You think you've come up with something clever? People have been wearing their opponents' uniforms to confuse them for thousands of years.
Otherwise, you're asking computers to solve a problem that humans can't solve (i.e., how do you fight a guerilla enemy that blends into a civilian population?)
This message was edited 1 time. Last update was at 2014/05/10 01:01:19
The AI for visual cognition is currently very good. Distinguishing between friendlies (i.e., armed guys wearing a standardized uniform) and other armed individuals is not outside the realm of possibility.
Sure, I can see that, but how many well-designed US military uniform knockoffs will be made after it becomes obvious that the 'infidels' soulless killer guns only target people not wearing the uniform? How does this deal with someone wearing a burka with 20 lbs of explosives duct-taped to their spine?
I wonder how hard it is to program the difference between a modern US military uniform and any knockoff (or army surplus) uniform you can get for $5 somewhere.
Armed people belong to the class "person + weapon." IFF logic would go something like: if the person is armed, check whether they are friendly; if not friendly, engage. You could also have the robot react only to fire directed at it (i.e., return fire only). And there are a lot of ways to distinguish incoming from outgoing fire.
I'm not worried about the weapons detecting a guy dressed in rags with an AK-47 as much as I'm worried about people trained specifically to circumvent these systems.
I'm specifically responding to people claiming it cannot be done; it can.
You are correct that it will have issues, mostly at first of course, since it takes time to perfect.
But it's much easier to tell if someone's brandishing a weapon than to recognize a face. Guns and most weapons have very identifiable shapes, so facial recognition likely wouldn't even factor into it for the most part (see the sketch at the end of this post).
Our side would have FOF tags; all the CPU has to do is shoot anyone with a weapon, or even just anyone without a FOF tag if you are that worried about false negatives.
Sure, it's not 100%, but neither are people; for every computer bug, there is someone who got tired, drunk, emotional, or showed some other human flaw.
Trading a percent or two or five of extra "collateral damage" could be seen as a good trade-off by some if it meant our own side suffered zero casualties (not a good idea, in my view).
Lots of our long-range stuff already uses something similar: the FOF identification is already automated; it just has a human hitting the kill switch instead of the CPU.
It would be a very simple matter to remove that. Of course, I'm saying we should not remove it, for many of the reasons you state above.
I was just countering the people who think it's impossible and are brushing it off as something far-fetched to be dismissed.
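A rough sketch of the "weapons have very identifiable shapes" idea, assuming an off-the-shelf object detector. The weapons.pt weights file is hypothetical: you would need a model actually trained on weapon classes, since the stock COCO-trained weights have no gun class.

```python
# Sketch only: run a (hypothetical) weapon-trained detector over a frame and
# report weapon detections above a confidence threshold.
from ultralytics import YOLO

model = YOLO("weapons.pt")  # assumed custom-trained weights, not a real file

def weapon_detections(image_path, min_conf=0.6):
    hits = []
    for result in model(image_path):
        for box in result.boxes:
            name = model.names[int(box.cls)]
            conf = float(box.conf)
            if name in {"rifle", "pistol"} and conf >= min_conf:
                hits.append((name, round(conf, 2)))
    return hits

print(weapon_detections("checkpoint_frame.jpg"))
```

The detector is the easy half; deciding what to do with a hit is the part the rest of this thread is arguing about.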
The AI for visual cognition is currently very good. Distinguishing between friendlies (i.e., armed guys wearing a standardized uniform) and other armed individuals is not outside the realm of possibility.
Sure, I can see that, but how many well-designed US military uniform knockoffs will be made after it becomes obvious that the 'infidels' soulless killer guns only target people not wearing the uniform? How does this deal with someone wearing a burka with 20 lbs of explosives duct-taped to their spine?
I wonder how hard it is to program the difference between a modern US military uniform and any knockoff (or army surplus) uniform you can get for $5 somewhere.
Armed people belong to the class "person + weapon." IFF logic would go something like: if the person is armed, check whether they are friendly; if not friendly, engage. You could also have the robot react only to fire directed at it (i.e., return fire only). And there are a lot of ways to distinguish incoming from outgoing fire.
I'm not worried about the weapons detecting a guy dressed in rags with an AK-47 as much as I'm worried about people trained specifically to circumvent these systems.
Computer vision can detect hidden items, believe it or not. Researchers at Rutgers are working on that precise problem. It turns out that, using wireframes alone, you can get pretty good sensitivity and specificity for people wearing suicide vests and carrying concealed weapons (the arithmetic behind those two numbers is sketched below).
I'm sure people could circumvent the systems. I'm sure that the same amount of effort, if directed at circumventing human situational awareness, would have pretty much the same effect. The difference is that if a suicide bomber blows himself up next to a robot, we can rebuild the thing. We can't put Private Humpty Dumpty back together again, though.
This message was edited 1 time. Last update was at 2014/05/10 01:06:02
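For reference, sensitivity and specificity come straight from a detector's confusion matrix on a labelled test set. The counts below are invented, purely to show the arithmetic.

```python
# Sensitivity: of the people actually carrying, how many did we flag?
# Specificity: of the people not carrying, how many did we correctly clear?
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# e.g. 90 carriers flagged, 10 missed; 940 innocents cleared, 60 wrongly flagged
print(sensitivity_specificity(tp=90, fn=10, tn=940, fp=60))  # (0.9, 0.94)
```

Those 60 wrongly flagged people are the false positives the earlier posts are worried about.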
Nope, I think I pointed out something obvious that you did not account for. I mean, people have been wearing their opponents' uniforms to confuse them for thousands of years!
NuggzTheNinja wrote: Otherwise, you're asking computers to solve a problem that humans can't solve (i.e., how do you fight a guerilla enemy that blends into a civilian population?)
Precisely what I said in my very first message in this topic.
Hybrid Son Of Oxayotl wrote: Not even bugs! What about software failures? See, humans have problems determining whether someone is a threat or not. Bots are not even able to solve CAPTCHAs.
"Our fantasy settings are grim and dark, but that is not a reflection of who we are or how we feel the real world should be. [...] We will continue to diversify the cast of characters we portray [...] so everyone can find representation and heroes they can relate to. [...] If [you don't feel the same way], you will not be missed"
https://twitter.com/WarComTeam/status/1268665798467432449/photo/1
Nope, I think I pointed out something obvious that you did not account for. I mean, people have been wearing their opponents' uniforms to confuse them for thousands of years!
NuggzTheNinja wrote: Otherwise, you're asking computers to solve a problem that humans can't solve (i.e., how do you fight a guerilla enemy that blends into a civilian population?)
Precisely what I said in my very first message in this topic.
Hybrid Son Of Oxayotl wrote: Not even bugs! What about software failures? See, humans have problems determining whether someone is a threat or not. Bots are not even able to solve captcha.
Accounting for something so obvious in the context of a short forum post is essentially nonsensical. Everyone knows you can fool computers. You can fool humans too. But because computers can't currently do it better than humans, we should abandon the technology? Wow...
It's a stupid thing to complain about. If your major complaint is distinguishing between armed friendlies and armed enemies, there are any number of ways of giving the thing IFF capabilities without relying on machine vision algorithms. That's just one way of doing it.
For someone in CS you really aren't very well read in this area.
This message was edited 3 times. Last update was at 2014/05/10 01:13:23
I'm sure people could circumvent the systems. I'm sure that the same amount of effort, if directed at circumventing human situational awareness, would have pretty much the same effect. The difference is that if a suicide bomber blows himself up next to a robot, we can rebuild the thing. We can't put Private Humpty Dumpty back together again, though.
Exactly... that will be the main draw for this kind of tech. Unfortunately it's very appealing, but the potential for abuse/disaster is pretty huge.
I also think people are confusing "autonomous" with "not having any orders at all." People are autonomous, but we still take orders and ask for advice in some situations.
The robots will have SOPs, orders, and objectives in place, just like their human counterparts.
They can be instructed to be passive, fire only if fired upon, kill John Connor, and so on.
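A toy sketch of that "orders in place" idea: behaviour bounded by a preset rules-of-engagement mode. The mode names and the helper are illustrative only, not anyone's actual doctrine.

```python
# The platform never decides its posture; it only applies whichever mode its
# human chain of command has set.
from enum import Enum

class ROE(Enum):
    WEAPONS_HOLD = 1   # observe and report only
    RETURN_FIRE = 2    # engage only targets that have fired on the platform
    WEAPONS_FREE = 3   # engage any validly identified hostile

def may_engage(mode: ROE, hostile: bool, fired_on_us: bool) -> bool:
    if mode is ROE.WEAPONS_HOLD:
        return False
    if mode is ROE.RETURN_FIRE:
        return hostile and fired_on_us
    return hostile  # WEAPONS_FREE

print(may_engage(ROE.RETURN_FIRE, hostile=True, fired_on_us=False))  # False
```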
Nope, I think I pointed out something obvious that you did not account for. I mean, people have been wearing their opponents' uniforms to confuse them for thousands of years!
And it fools people just as well as robots.
People have also been sneaking up on bases at night, when sentries fall asleep, for thousands of years... robots don't do that... the drawbacks of meat-bag-mounted weapons compared to mechanized ones are huge.
In fact, part of the "uniform" is an FOF code, something the enemy will have no chance of emulating (see the challenge-response sketch below), and even if they somehow did, it's still harder to fool the computer than the person.
Even if you do sneak up on the robot with an FOF-coded uniform, as soon as you engage it you lose: either it kills you, or it IDs you for its buddies.
Best-case scenario, which has almost a 0% chance of happening, is that you trade one irreplaceable person for a very replaceable robot.
This message was edited 2 times. Last update was at 2014/05/10 01:17:03
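A sketch of why a cryptographic FOF code would be hard to emulate, assuming a simple shared-secret challenge-response. Key handling is grossly simplified here and every name is made up; the point is only that replaying an old reply doesn't help without the key.

```python
# Interrogator sends a fresh random challenge; only a unit holding the shared
# key can produce the matching HMAC reply.
import hmac, hashlib, secrets

SHARED_KEY = b"issued-to-friendly-units-only"   # assumed pre-shared secret

def challenge() -> bytes:
    return secrets.token_bytes(16)               # fresh nonce per interrogation

def respond(key: bytes, nonce: bytes) -> bytes:
    return hmac.new(key, nonce, hashlib.sha256).digest()

def is_friendly(nonce: bytes, reply: bytes) -> bool:
    return hmac.compare_digest(respond(SHARED_KEY, nonce), reply)

nonce = challenge()
print(is_friendly(nonce, respond(SHARED_KEY, nonce)))  # True: holds the key
print(is_friendly(nonce, b"\x00" * 32))                # False: spoofed reply
```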
Computer vision can detect hidden items, believe it or not. Researchers at Rutgers are working on that precise problem. It turns out that, using wireframes alone, you can get pretty good sensitivity and specificity for people wearing suicide vests and carrying concealed weapons.
Then I suppose there's always the implanted bomb scenario. I mean, if the person is going to die anyway, why not? It's not been tried much, and the only situation where I could find it happening was unsuccessful, but it's still a concern. Any advancement in arms will eventually be met with some counter with some level of effectiveness to it. The only question is how many people die in the process.
I'm sure people could circumvent the systems. I'm sure that the same amount of effort, if directed at circumventing human situational awareness, would have pretty much the same effect. The difference is that if a suicide bomber blows himself up next to a robot, we can rebuild the thing. We can't put Private Humpty Dumpty back together again, though.
Maybe. The nice thing about humans is that they have the ability to act independently of each other, and can still communicate with each other. Machines don't really have any "gut instinct".
You're concerned with preserving soldiers' lives. I get that. I am too. I guess I'm just more concerned with the emotional aspect of it. I mean, consider this: a family member gets killed via friendly fire. You are told one of two things:
A: It was another soldier. There was an investigation, and it was determined that someone acted inappropriately. They were disciplined in whatever way is appropriate. I'm not an expert on this, so I can't say what the actual consequences are. Court martial? I know it can happen, but I genuinely don't know.
B: It was a machine acting autonomously. There's an investigation going on at Boeing to determine where the malfunction was. You later hear they've fixed the issue.
They're both nightmares, but which one would you feel better about? Personally, I'd rather hear the first one.
I think you are grossly underestimating the advantages a mechanized weapon has over a meat-bag-mounted weapon, daedalus.
Gut instinct doesn't matter much if you have to sleep, eat, rest, and blink, and don't have built-in thermal vision, satellite/drone feeds, and microsecond reaction time.
Also, you are forgetting something....
If we are sending robots to war, friendly fire is against other robots.
Daedalus, I hear what you're saying regarding the emotional aspect. Whether or not robots are going to be better at reducing civilian casualties, and whether people are more OK with their families being shot by soldiers than by robots, are empirical questions. Regarding the former, we can model and test this once we actually have autonomous units prototyped. Regarding the latter, we can investigate that as well. In fact, Air Force Research Lab is currently hiring social psychologists to investigate people's perceptions of fighting alongside and against enemies using directed energy weapons. It's something that we're considering for sure.
Another thing to consider is the situations in which you would actually want to employ such a unit. Maybe it only works checkpoints? Maybe it is reserved for more open warfare...battles like Fallujah where pretty much all the civilians left the city before the fighting (along with every Jihadi who didn't want to die).
It's hard to say how these things would be used, or whether they would be better at reducing collateral damage than humans. One thing is certain though - humans are bound by biological constraints, whereas with robots it's an engineering problem. I'm merely arguing that it's a technology that is not only worth pursuing, but very much within our grasp.
This message was edited 1 time. Last update was at 2014/05/10 01:31:32
easysauce wrote:
If we are sending robots to war, friendly fire is against other robots.
so really, not a "nightmare"
I think that's a little unrealistic for quite a while.
NuggzTheNinja wrote: Daedalus, I hear what you're saying regarding the emotional aspect. Whether or not robots are going to be better at reducing civilian casualties, and whether people are more OK with their families being shot by soldiers than by robots, are empirical questions. Regarding the former, we can model and test this once we actually have autonomous units prototyped. Regarding the latter, we can investigate that as well. In fact, Air Force Research Lab is currently hiring social psychologists to investigate people's perceptions of fighting alongside and against enemies using directed energy weapons. It's something that we're considering for sure.
Another thing to consider is the situations in which you would actually want to employ such a unit. Maybe it only works checkpoints? Maybe it is reserved for more open warfare...battles like Fallujah where pretty much all the civilians left the city before the fighting (along with every Jihadi who didn't want to die).
It's hard to say how these things would be used, or whether they would be better at reducing collateral damage than humans. One thing is certain though - humans are bound by biological constraints, whereas with robots it's an engineering problem. I'm merely arguing that it's a technology that is not only worth pursuing, but very much within our grasp.
I don't have an argument against that. There could be situations where this would actually work. I'm just worried about any asymmetrical situation with a combination of human troops, combatants, and civilians. A much more controlled situation would eliminate many of my concerns.
And you know what he was actually referencing? Computers doing what you want.
Do we have the tech? Perhaps, but that doesn't mean the code always does what you planned it to.
This. Just because we're seeing the technology come into being doesn't mean it's battlefield-ready or ready for wide deployment. We're highly unlikely to see autonomous kill bots deployed on a wide scale anytime in the near future, simply because it's too technically infeasible or too expensive. We can program a drone to fly itself, but we still have people who monitor the things, because the tech doesn't exist for the machine to manage the unforeseeable. Airliners are a prime example: for the most part they fly themselves, but we still have pilots, because when something goes wrong the machine can't resolve the errors itself.
Look at the F-35. Tens of billions and the thing still doesn't work quite like we want it to, and we've been building planes for over a century. You really want to argue we're on the verge of a warfare revolution based on robots? Based on what? The new so-so RoboCop film and Microsoft's shitty motion-control gimmick?
Come on. There are real problems in the world the UN should be spending its time on, not the latest plot for Hollywood's new summer blockbuster.
This message was edited 1 time. Last update was at 2014/05/10 01:58:20
Come on. There are real problems in the world the UN should be spending its time on, not the latest plot for Hollywood's new summer blockbuster.
The UN has pretty much imaginary power, so why not have them tackle imaginary problems?
Anytime they debate a real problem, someone just vetoes it. So what's the difference?
This message was edited 1 time. Last update was at 2014/05/10 03:01:45
The other reason Killer Robots aren't really a problem is that we can just detonate a nuke 20 miles above them in the atmosphere to kill them.
After we make the killer robots, we will need robots to stop the killer robots.
"I LIEK CHOCOLATE MILK" - Batman
"It exist because it needs to. Because its not the tank the imperium deserve but the one it needs right now . So it wont complain because it can take it. Because they're not our normal tank. It is a silent guardian, a watchful protector . A leman russ!" - Ilove40k
3k
2k
/ 1k
1k
I watched the "America's Secret Armies" episode of America's Book of Secrets. It had a segment on robots being used in military applications. While I'm fearful of an imminent Skynet threat I do believe we should be careful and ALWAYS keep Humans in control, at least for the sake of preventing a random accident such as killing a civilian etc.
LordofHats wrote: Fully autonomous combat robots are so far off technologically (in a practical sense) that even talking about them is nothing more than a token discussion. Another example of the UN having fethed up priorities.
No, they really are not. 100% of the tech is there, right now, and we have autonomous robots already in service, but they are not armed.
They are not armed by choice, not because we cannot do it. A fully autonomous sentry gun that just has orders and the ability to shoot anyone in the perimeter who doesn't have an RFID FOF tag was doable a decade ago.
Best to deal with this before it becomes a problem instead of afterwards.
I agree entirely. I'd be pretty surprised if we don't have fully autonomous, no-human-in-the-loop combat drones within a decade. The X-47B is already executing more complex actions in a technical sense right now, today. The only thing in the way is political pressure; the technology is there and has been for years.
This message was edited 1 time. Last update was at 2014/05/11 22:02:38