In a Wired magazine interview with the Israeli Air Force, they report the following:
the notion of “mathematical formulas that solve even the difficult ethical dilemmas in place of human pilots.” The air force has been developing technologies for quite some time now that can divert missiles in midair if a civilian suddenly pops up near the target, but this kind of thing often happens too quickly even for the most skilled operators. It is part of an uneven, decade-long IAF effort to bring down collateral damage, a necessity given that the air force fights asymmetric enemies in densely populated areas. But this is something the IAF is keen to develop even further. The concept of a computer taking over almost all of these functions is very tricky, though; you can't very well tell a war crimes tribunal that you are not responsible for unintended deaths, or tell the judge it was all the algorithm's fault.

Whereas philosophers have time to think through ethical scenarios, professionals who have to make ethically responsible decisions are generally forced to rely on an algorithm. In any case, algorithms are far better than humans at getting certain kinds of things right, and damage assessment is probably one of them, or could be made to be. So one would think this is a very positive step. Assuming no weapon is immune from being used to create collateral damage (or anything else that falls under "double effect"), an algorithm designed to minimize it would be worth developing. In cases like this, an algorithm will not be responsible for unintended deaths; it will be responsible for all the deaths that did not occur.
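To make the midair-divert idea concrete, here is a minimal Python sketch of the kind of check a computer could run far faster than any human operator. Everything in it, the Sensor and Weapon stubs, the probability estimate, the 0.1 threshold, is invented for illustration and reflects no real system:

```python
import random

class Sensor:
    """Stub sensor: a real system would fuse imagery, radar, and intelligence;
    this stand-in returns a random estimate so the sketch runs end to end."""
    def estimate_civilian_probability(self) -> float:
        return random.random() * 0.3  # pretend estimate between 0.0 and 0.3

class Weapon:
    """Stub weapon with a countdown to impact and a divert switch."""
    def __init__(self, ticks_to_impact: int = 20):
        self.ticks_to_impact = ticks_to_impact
        self.diverted = False

    def has_impacted(self) -> bool:
        return self.ticks_to_impact <= 0

    def tick(self) -> None:
        self.ticks_to_impact -= 1

    def divert(self) -> None:
        self.diverted = True

def guidance_loop(sensor: Sensor, weapon: Weapon, threshold: float = 0.1) -> bool:
    """Divert the weapon the moment the estimated probability of a civilian
    in the blast radius crosses the threshold. The point is that this check
    runs on every tick, faster than any operator could react."""
    while not weapon.has_impacted():
        if sensor.estimate_civilian_probability() > threshold:
            weapon.divert()
            return True   # strike aborted
        weapon.tick()
    return False          # strike completed

print("diverted" if guidance_loop(Sensor(), Weapon()) else "impacted")
```

Note that even in this toy version, someone had to pick the threshold, which is already an ethical judgment smuggled into a parameter.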
More interesting is the following. It would likely be easy to be a utilitarian here: the goal of an algorithm can be to reduce collateral casualties. But what else could an "ethics algorithm" be made to do? It could be made to weigh the possibility of a certain amount of injury and damage against a certain amount of death. It might be able to weigh the ethical repercussions of destroying certain targets against the mission's ends. This would force philosophers to really think about what "proportionality" means. It might be able to weigh one type of collateral damage against another (say, hitting a full school versus a half-full medical center). What else should such an algorithm try to do?
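For concreteness, here is a toy Python sketch of the weighing just described. Every number in it, the injury-to-death exchange rate, the site weights, the harm_tolerance parameter, is a stipulated assumption, and that is the point: before such an algorithm can run, someone has to decide what "proportionality" means in numbers:

```python
from dataclasses import dataclass

# Stipulated "exchange rate": how many expected injuries weigh as much as one
# expected death. A utilitarian algorithm has to fix a number like this, and
# choosing it is itself an ethical decision.
INJURIES_PER_DEATH = 10.0

@dataclass
class StrikeOption:
    name: str
    mission_value: float       # stipulated military value of the strike
    expected_deaths: float     # expected civilian deaths
    expected_injuries: float   # expected civilian injuries
    site_weight: float         # extra moral weight for protected sites

def expected_harm(opt: StrikeOption) -> float:
    """Collapse deaths, injuries, and the nature of the site into one number."""
    raw = opt.expected_deaths + opt.expected_injuries / INJURIES_PER_DEATH
    return raw * opt.site_weight

def proportionality_score(opt: StrikeOption, harm_tolerance: float = 0.5) -> float:
    """Mission value minus weighted harm: positive means 'proportionate'
    under this entirely stipulated trade-off."""
    return opt.mission_value - harm_tolerance * expected_harm(opt)

# Two ways of hitting the same target, differing only in collateral profile.
options = [
    StrikeOption("approach near full school",      1.0, 1.0, 25.0, 2.0),
    StrikeOption("approach near half-full clinic", 1.0, 1.5,  8.0, 2.0),
]

for opt in options:
    print(f"{opt.name}: score = {proportionality_score(opt):.2f}")
print("chosen:", max(options, key=proportionality_score).name)
```

On these stipulated numbers both options score as disproportionate (negative), and the algorithm merely picks the lesser harm; whether it should instead abort entirely is itself one of the open questions above.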
Thoughts?
Short comment; one name to draw your own conclusion: Robert McNamara