Friday, May 25, 2012

Military research and ethics

/. has a post about the bioethicist Jonathan Moreno, who co-wrote an essay about the intersection of military ethics and neuroscience. Moreno is also interviewed here. Though the interview is pretty interesting, I find the essay itself rather thin on content. It is largely a list of research DARPA has tried out, like brain-computer interfaces, performance-enhancing drugs, and lie-detection technologies.

One ethically interesting point the article makes is one I am somewhat skeptical of:
The military establishment's interest in understanding, developing, and exploiting neuroscience generates a tension in its relationship with science: the goals of national security and the goals of science may conflict. The latter employs rigorous standards of validation in the expansion of knowledge, while the former depends on the most promising deployable solutions for the defense of the nation. As a result, the exciting potential of high-tech developments on the horizon may be overhyped, misunderstood, or worse: they could be deployed before sufficiently validated.
I am a bit puzzled why this should be so. If a solution is deployable, i.e., it works, how does that conflict with our getting knowledge? (There may be ethical issues in deploying a technology as part of the testing process, but that issue is not raised.) The economics of the military might push it to try some science before it is mature, but academia seems beset with the same problems, with researchers publishing findings before they are mature (see here, for example, for a list of science papers that were retracted for a variety of reasons). There may be a conflict between the goals of science and the goals of quick science, but that is not particular to the military. The essay seems to me to be creating an ethics problem where there is none.

There is certainly a lot of military research that should be generating ethical questions, and the essay even mentioned some of them. But if there is a more fundamental tension between the goals of science and the goals of military research, the essay did not demonstrate it. 

Monday, May 21, 2012

Ethics at Lockheed Martin

The Chronicle of Higher Education has a review of Daniel Terris's Ethics at Work: Creating Virtue at an American Corporation. The review ends with the following quote that I thought was interesting: Lockheed Martin "helps to make some of the deadliest man-made objects on the face of the earth. To claim that this fact has no ethical implications for the manufacturer is, on the face of it, absurd." It suggests that ethical responsibility attaches not only to the user of a technology and to the entity that commissioned it, but also to the manufacturer.

I am not completely convinced of this connection, but I am open to being so. After all, once you get past the idea that you are just manufacturing something that was legally and legitimately commissioned by someone who has all the obligations to properly use whatever it is that you make, what ethical dilemmas can you have relating to this product? Could Lockheed Martin really be culpable for additional deaths on a battlefield, or credited for fewer ones (on either side), because of something it did or failed to do as part of its contract?

Needs some thought.

Algorithms doing ethics work

In a Wired magazine interview with the Israeli Air Force, the magazine reports the following:
 the notion of “mathematical formulas that solve even the difficult ethical dilemmas in place of human pilots.” The air force has been developing technologies for quite some time now that can divert missiles in midair if all of a sudden a civilian pops up near the target, but often this kind of thing happens too quickly even for the most skilled operators. It’s part of an uneven, decade-long IAF effort to try to bring down collateral damage — a necessity, since the air force fights asymmetric enemies in densely populated areas. But this is something the IAF is keen to develop even more. The concept of a computer taking over almost all the functions of this kind of thing is very tricky, though; you can’t very well say at a war crimes tribunal that you’re not responsible for unintended deaths, or tell the judge it was all the algorithm’s fault.
Whereas philosophers have time to think about ethical scenarios, professionals who have to make ethically responsible decisions are generally forced to rely on an algorithm. In any case, algorithms are far better at getting certain kinds of things right than humans are, and assessments of damage are probably one of them, or could be made to be one of them. So one would think that this is a very positive step. Assuming no weapon is immune from being used to create collateral damage (or anything that falls under "double effect"), an algorithm designed to minimize that would be worth developing. In cases like this, an algorithm will not be responsible for unintended deaths; it will be responsible for all the deaths that did not occur.

More interesting is the following. It is likely easy to be a utilitarian here. The goal of an algorithm can be to reduce collateral casualties. But what else could an "ethics algorithm" be made to do? It could be made to weigh the possibility of a certain amount of injury and damage against a certain amount of death. It might be able to weigh the ethical repercussions of destroying certain targets against the mission's ends. This would force philosophers to really think about what "proportionality" means. It might be able to weigh one type of collateral damage against another (say, hitting a full school versus a half-full medical center). What else should such an algorithm try to do?
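To make the idea concrete, here is a minimal sketch of what such a weighing might look like in code. It is written in Python, and everything in it — the harm categories, the weights, the proportionality threshold — is a hypothetical of my own, not anything drawn from the IAF or the Wired piece; it only illustrates the kind of trade-offs an "ethics algorithm" would have to encode.

    # A toy "proportionality" scorer; every weight and category here is hypothetical.

    # Hypothetical relative weights for different kinds of collateral harm.
    HARM_WEIGHTS = {
        "civilian_death": 10.0,
        "civilian_injury": 3.0,
        "infrastructure": 1.0,
    }

    def expected_collateral_harm(option):
        """Expected harm of one strike option: probability x count x weight, summed."""
        return sum(
            prob * count * HARM_WEIGHTS[kind]
            for kind, (prob, count) in option["harms"].items()
        )

    def choose_strike(options, military_value, proportionality_threshold=1.0):
        """Pick the least-harmful option, and approve it only if expected harm
        per unit of military value stays below the threshold."""
        best = min(options, key=expected_collateral_harm)
        harm = expected_collateral_harm(best)
        if harm / military_value > proportionality_threshold:
            return None  # nothing is proportionate: abort or divert
        return best

    # Two hypothetical options against the same target.
    options = [
        {"name": "direct hit",
         "harms": {"civilian_death": (0.3, 4), "civilian_injury": (0.6, 10)}},
        {"name": "delayed fuse",
         "harms": {"civilian_death": (0.1, 4), "civilian_injury": (0.4, 10)}},
    ]
    print(choose_strike(options, military_value=20.0))

Even this toy version makes the philosophical work visible: someone has to decide how many expected injuries weigh as much as one expected death, and where the proportionality threshold sits.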

Thoughts?

Wednesday, May 9, 2012

New Articles on Ethics of War

If you have access to Philosophy Compass (a journal I heartily recommend), you should look at two recent articles by Endre Begby, Gregory M. Reichberg, and Henrik Syse. It is actually a two-part article: the first part is on the history of the ethics of war, and the second part is on contemporary issues. The first part is a very good, though quite terse, history of just war thinkers from Heraclitus to Walzer. It is a great place to start if you want to explore the history of just war theory in the West. The second article focuses on more contemporary debates in each area of classical just war theory, as well as the more recent discussion of jus post bellum.

Some of you know that I think a lot of contemporary just war theory sounds really old-fashioned and needs to be updated in light of the evolved ontological nature of modern politics, warfare, and warriors, as well as concepts like moral and military asymmetries and the distinction between aggressor and aggressee. The article presents a good overview of what people are talking about today and makes considerable strides toward understanding which questions are still relevant, and how. Both are worth reading.

Friday, May 4, 2012

Interesting application of game theory

The Economist has an interesting article on some of the successes of game-theoretic models. Toward the end there is a suggestion:

The “principle of convergence”, as it is known, holds that armed conflict is, in essence, an information-gathering exercise. Belligerents fight to determine the military strength and political resolve of their opponents; when all sides have “converged” on accurate and identical assessments, a surrender or peace deal can be hammered out. Each belligerent has a strong motivation to hit the enemy hard to show that it values victory very highly. Such a model might be said to reflect poorly on human nature. But some game theorists believe that the model could be harnessed to make diplomatic negotiations a more viable substitute for armed conflict.
Today’s game-theory software is not yet sufficiently advanced to mediate between warring countries. But one day opponents on the brink of war might be tempted to use it to exchange information without having to kill and die for it. They could learn how a war would turn out, skip the fighting and strike a deal . . .
This is an interesting model of war. What it essentially suggests is that if an army is tough enough and has a strong enough resolve to go through with its threats, it can win a war without actually fighting it. But while there would be no wars, there would still be a need for a powerful military and for military spending. It makes for interesting food for thought. What would war look like if this plan were actually implemented? Does this suggestion make sense?
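For what it is worth, the convergence idea can be illustrated with a toy simulation. The sketch below (Python, with made-up numbers and an invented update rule, not anything from the Economist piece) just shows the mechanism: two sides start with divergent estimates of the likely outcome, each round of fighting, or of mediated information exchange, pulls both estimates toward the same evidence, and once the estimates agree closely enough a settlement becomes possible.

    import random

    # Toy illustration of the "principle of convergence". Every number and the
    # update rule are invented; this only shows the mechanism, not a real model.

    TRUE_BALANCE = 0.65   # side A's true chance of victory (unknown to both sides)
    TOLERANCE = 0.05      # how closely the estimates must agree before a deal is possible

    def noisy_signal():
        """One round of fighting (or mediated disclosure): a noisy look at the true balance."""
        return max(0.0, min(1.0, random.gauss(TRUE_BALANCE, 0.15)))

    def simulate(max_rounds=50):
        # Both estimates are of A's chance of victory: A starts optimistic, B skeptical.
        estimate_a, estimate_b = 0.9, 0.3
        for round_number in range(1, max_rounds + 1):
            signal = noisy_signal()
            # Both sides fold the same new evidence into their prior beliefs.
            estimate_a = 0.7 * estimate_a + 0.3 * signal
            estimate_b = 0.7 * estimate_b + 0.3 * signal
            if abs(estimate_a - estimate_b) < TOLERANCE:
                return round_number, estimate_a, estimate_b   # assessments have converged
        return None

    print(simulate())

The interesting question is whether the "rounds" have to be battles at all, or whether, as the article suggests, software-mediated disclosure could do the same work.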

I suspect that many Neocon thinkers would find this strange, because they tend to think that wars will almost always involve at least one dictator who has little interest in preserving the lives of his people and would rather risk actual fighting than accept a bloodless surrender, even if defeat really seems inevitable. A political Realist, on the other hand, would probably take this to be an ideal way to get things done, but I can see reservations there as well. A political Constructivist is probably wondering whether this kind of mediation isn't already the role of the UN.

What do you think?