The sad part is that the AI might be more trustworthy than the humans in control.
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking of is a computer misinterpreting geese as nuclear missiles and a human recognizing the error and turning the system off, but I can only find a couple of sources for that, so I found another:
In 1983, a Soviet early-warning computer read sunlight reflecting off clouds as an incoming nuclear missile strike. The officer on duty waited for corroborating evidence instead of reporting it to his superiors as he was supposed to; reporting it would likely have resulted in a “retaliatory” nuclear strike.
https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
As faulty as humans are, it’s as good a safeguard against tragedy as we have. Keep a human in the chain.
Self-driving cars lose their shit and stop working if a kangaroo gets in their way. One day some poor people are going to be carpet-bombed because of another strange creature no one ever really thinks about except locals.
Have you never met an AI?
Edit: seriously though, no. A big player in the war-AI space is Palantir, which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain-basement competitors.
Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.
Yeah, I think the people saying this could be a good thing forget that the military always contracts out to the lowest bidder.
Drone strikes minimize casualties compared to the alternatives: heavier ordnance on bigger delivery systems or boots on the ground.
If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.
When the targets are things like that wedding in Mali, sure.
I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb, when the bomb was never on the table for half the stuff depleted uranium is used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
It was literally the standard policy prior to drones.
Eventually, maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that being deadly is often the stupider choice.