Tuesday, December 25, 2012

Robotic Warriors

Kenneth Anderson and Matthew Waxman have a good, thoughtful piece in the Hoover Institution's Policy Review on the law and ethics of robotic soldiers.

Tuesday, December 11, 2012

Military research

At the intersection of military ethics and research ethics, I present you this.

Friday, November 30, 2012

Futuristic weapons and collateral damage

At some point in the future there will be little excuse for collateral damage, and we may forget what the doctrine of double effect is. Is that future coming soon?

Sunday, November 25, 2012

Jewish military ethics

A rather polemical lay view of Jewish military ethics by Rabbi Lazer Gurkow.

Friday, November 23, 2012

Allhoff on nonlethal weapons

Fritz Allhoff has an interesting post on the "paradox" of nonlethal weaponry: the existing rules governing war sometimes make it legally easier for a state to kill someone than to temporarily injure them. Worth reading.
(H/T prophilosophy)

Discussion on Rockets and Ethics

The Philosophers' Magazine blog, Talking Philosophy, has a post on rockets and ethics that has generated some discussion.

Tuesday, November 20, 2012

Sovereignty and cyberthreats

I want to throw out the following question: Does hacking into another country's computer infrastructure violate its sovereignty in the same way that sending in a surveillance drone, a spy, or military personnel to conduct a mission does? Israel has warded off about 44 million attempts to hack into various government and military sites since the beginning of its current military operation. Does that mean it warded off 44 million attempts to infiltrate its borders? My intuition is that this case shows we need a new, more nuanced definition of sovereignty and its violation. I suspect that Israel could not successfully make the case that it is in any real sense under siege from 44 million sources and is now justified in treating thousands of people as anti-Israel terrorists whom it would be morally justified in attacking. Nonetheless, there is still a sense in which Israel was attacked and its sovereignty was violated.

Banning autonomous drones?

There is a campaign by a start-up human rights group to ban autonomous drones. Stories here and here, and, as is often the case, /. has a good discussion.

Might a privatized military have ethical advantages?

The principled case for a privatized military for humanitarian interventions, by Deane-Peter Baker and James Pattison. 

Friday, November 16, 2012

Improving the drone debate

The Monkey Cage, a great political science blog, has a good post on how to clarify the debate about drone strikes. (Omar Bashir's post is so good, it could have been written by a philosopher ;)

Tuesday, November 13, 2012

Speaking of ethics. . .

As this NY Times piece brings to the fore, there have been quite a few ethics lapses of late.

Are there any lessons philosophers or those who deal with military ethics can learn from this? Are any of these even interesting as case studies?

McMahan Part II

Here is part II of McMahan's Opinionator piece in the NY Times.

Monday, November 12, 2012

Drone discussion

The philosopher Robert Paul Wolff is wondering what is wrong with drones (when Obama uses them). 

Sunday, November 11, 2012

McMahan in the Opinionator

Jeff McMahan, one of the foremost military ethicists working today, has a piece in today's Opinionator blog of the New York Times. The Opinionator is a forum for philosophers to write about contemporary philosophical issues. McMahan is writing in honor of Veterans Day. Surprisingly, I agree with much of what he says. When part II appears tomorrow, I will post the link and try to comment.

Monday, October 29, 2012

Question about a class of futuristic bioweapons

The discrimination condition on jus in bello is that a soldier is required to discriminate between legitimate and illegitimate targets. Soldiers are required to avoid harming civilians and certain other protected persons, such as aid workers, while targeting enemy combatants.

The Atlantic has a story here about an only-slightly futuristic weapon that can target a specific individual via their genetic profile. The transmission vector is a virus that causes some very minor illness in a large population but is lethal when exposed to the proper bit of DNA.

My questions: (1) Is it a legitimate act of war to inconvenience a population, even to the extent of giving a large number of people a minor cold, in the service of pursuing a legitimate target? (2) Is it legitimate to use civilians as a transmission vector at all if doing so doesn't harm them, or the harm is incredibly minor? Or are civilians just generally off limits, both as targets and as vectors for weapons?

(3) If this does meet the discrimination standard does it violate bans on biological weapons?

(4) Is this perhaps one of those practices that should be banned under the rubric of perfidy? The reasoning there is that certain tactics are beyond the pale because, if universally practiced, they would wreak havoc for both sides and make wars far more bloody. One example is wearing enemy uniforms: if people wore enemy uniforms, the argument goes, soldiers would not be able to trust their comrades for fear that the enemy was posing as a friend. (Similarly for abusing the flag of surrender.) So if it were considered legitimate to use civilians as transmission vectors for bioweapons, important military personnel would not be able to interact with others, like civilians, for fear of becoming such a target. This lack of trust would make interactions within and around militaries far more difficult. So perhaps such weapons should be banned on those grounds.

What think ye?

Friday, October 26, 2012

Interesting data on Drone Strike approval

/. links to an interesting survey: 72% of Xbox 360 gamers approve of drone strikes. (The link is currently inaccessible.) The discussion, as usual, is robust and worth reading.

For those interested in experimental philosophy, this may make for an interesting research topic. 

Friday, October 5, 2012

Military Ethics Conference

Beit Morasha of Jerusalem is proud to announce a conference on "ETHICS AND 21st CENTURY MILITARY CONFLICTS" to be held on October 24 at The Roosevelt Hotel, New York, NY (Madison Ave. at 45th St.).

The conference brings together leading professors of political philosophy, ethics, and law, as well as military leaders and diplomats, to discuss the challenges of defining moral standards and ethical education in asymmetric and pre-emptive wars, targeted killings, and humanitarian interventions applicable to contemporary military conflicts.

The academic conference is cosponsored by Beit Morasha and the Friends of the IDF.

The public is invited to attend the conference and the concluding dinner that evening.

To register or to request more information on the conference and dinner, contact

Allison Spielman at Allison.Spielman[at]fidf.org.

The conference agenda follows.


The Inaugural Conference of Beit Morasha of Jerusalem’s Edward I. Koch Center for Jewish Ethics and Public Policy Sponsored by Beit Morasha of Jerusalem & Friends of the Israel Defense Forces

The Roosevelt Hotel, New York, NY (Madison Ave. at 45th St.)

October 24, 2012


8:30 – 9:30


9:30 – 9:45

Opening Remarks: Professor Benjamin Ish Shalom, Beit Morasha of Jerusalem

General Jerry Gershon, FIDF

9:45 – 11:00

The Right of Self-Defense in 21st Century Military Conflicts

Arthur Applbaum (Harvard University) & Yishai Beer (Herzliya Interdisciplinary Center & IDF)

11:00 – 12:20

Just Warfare in the 21st Century

Michael Walzer (Institute for Advanced Study, Princeton, NJ) & Asa Kasher (Tel Aviv University)

12:30 – 13:30 Lunch

13:35 – 14:50: Parallel Sessions

Just Wars and Middle East Conflicts

Gen. Jerry Gershon (FIDF) & Gen. Lord Charles Guthrie (UK)

Ethics under Fire: How Can We Educate for Moral Armies?

Gen. Eli Schermeister (IDF), Col. Glenn Goldman (US Military Academy) & Maj. Henry Soussan (US Military Academy)

14:50 – 15:10 Break

15:10 – 16:30

The Present Political and Military Situation in the Middle East

Natan Sharansky (Israel) & Daniel Kurtzer (Princeton University)

16:30 – 17:30

Targeted Killing and Civilian Casualties: Determining the Moral and Legal Boundaries

Moshe Halbertal (Hebrew University, NYU) & Col. Richard Kemp (UK)

17:30 – 18:30 Reception

18:30 – 20:00 FIDF Program and Dinner

Thursday, September 27, 2012

Interesting point about drone strikes

Erik Voeten (of The Monkey Cage blog) claims that one moral problem with drone strikes is that the marginal cost per strike is too low, and when the marginal cost of something is low, it is easier to justify using it. This is an interesting point. I am curious, however, what the marginal cost really is. Given the infrastructure we have already developed for drone strikes, the cost per assassination goes down with each strike. But what is the actual cost? It would seem to be less than the cost of developing special teams of people to do the same job.

Also important: given the secrecy of the assassination culture, there seems to be no penalty for bad decisions. After all, if few government resources were expended, why punish anyone for a mistake? Thus one might think that the marginal cost of failure is also far too low.

On a related note, why was bin Laden killed with a SEAL team and not a drone with a team to go in after the strike?


Friday, September 21, 2012

Eisen Reviews Levin and Shapira

H-Net has Robert Eisen's review of Yigal Levin and Amnon Shapira's collection War and Peace in the Jewish Tradition: From the Biblical World to the Present.

Looks interesting. 

Wednesday, September 19, 2012

David Boucher on the Just War tradition (summary)

David Boucher's "The just war tradition and its modern legacy: Jus ad bellum and jus in bello" is an interesting description of the interplay between the ethical and legal justifications for war on the one hand, and the rules governing how war is to be fought on the other. This interplay is described via an analysis of legal theorists, common law, customs, conventions, and treaties. In particular he focuses on Grotius, Pufendorf, Vattel, and the contemporary accords governing warfare.

The article largely describes the positions of the various sides on the question of jus in bello in cases where there may be no jus ad bellum, i.e., do soldiers have to fight a war in a just manner if the war is politically unjust?

An interesting discussion overall.

(The article is in the European Journal of Political Theory 11(2), 92-111; 2011)

Laws of war and cyberattacks

The US has decided that the laws of warfare apply to cyberattacks.

Sunday, September 16, 2012

Kaag and Kreps OpEd in CHE

This week the Chronicle of Higher Education ran an OpEd by John Kaag and Sarah Kreps on drone warfare. It's behind a paywall, so you may not be able to read it, though it seems to be a sequel to their NY Times blog post. But the upshot is that military drones are morally problematic. I didn't really understand the arguments, but I'll paraphrase them here for you as best I can: "When it comes to war, if it's easy, it's probably not moral." Here's why:
(1) Easy actions become habitual, and habitual actions are not amenable to the way we must make moral decisions.
(2) If it is easy, you must be oppressing someone.
(3) Self-interested actions are easiest to accomplish, but self-interest is not a moral justification.
This is followed by a discussion of how the Mutually Assured Destruction strategy of the Cold War was a way to make a prudential decision, not a moral one. Today, however, drone warfare presents us with moral decisions. Because drones can be so precise and do exactly what we want, warfare is more ethically challenging: we now have to think carefully about who is a legitimate target. Thus, the rhetoric of the legitimate target is masked by a veneer of moral precision, thanks to precision weaponry.

Again, I am not sure I really grasped all the arguments, but there they are (best I can tell). 

Saturday, September 15, 2012

Viehoff on Dobos

Daniel Viehoff reviews Ned Dobos's Insurrection and Intervention: The Two Faces of Sovereignty in NDPR.

The book, according to the review, attempts to make sense of the intuition (which I admit I am not too comfortable with) that there is an asymmetry here: an internal rebellion against a government can be morally legitimate, whereas an external intervention with identical ends is less so.

I suspect the real reason for the intuition is that people have trouble believing that outsiders could be so altruistic as to risk blood and treasure for another. That is, people assume that, at the very least, if someone is coming from the outside to help you, they must have ulterior motives, which makes the enterprise inherently suspect; whereas people on the inside are only out for their own self-interest, which somehow makes insurrection legitimate.

In any case, I would be curious to see what is involved in defending the intuition. The book looks interesting and well reasoned. I look forward to reading it myself. 

Monday, September 10, 2012

More on Drone Ethics

Today's Chronicle of Higher Education has an interesting article on ethics and autonomous weapons systems.

Apparently this is a hot topic for ethicists at the moment, but I am willing to bet that the whole discussion will be rendered moot in a few years by military fiat. It seems to be only a matter of years before we start to see weapons that protect our borders and fight our wars, informing humans of what they have done after the fact. We will program them to behave according to some accepted standard of war and accept some level of mistakes. (I know there will be the inevitable scandal surrounding "improperly" coded machines.)

Many of the ethics discussions revolve around the mistakes these machines can potentially make. But we have always accepted that our machines will be imperfect. Rockets used during WWII were so inaccurate that some were as likely to miss the city that was targeted as to hit it. But we accepted this as a limit of the technology.

We should also come to accept that an action performed directly by an intentional agent is not more moral than the same action performed by an autonomous, pre-programmed machine. It strikes me as rather chauvinistic to accept our own mistakes as legitimate while fretting about the mistakes our machines make. In a sense the machines are an extension of us. They do what we tell them, and they do it very well. We make mistakes; they make mistakes, and less often. The fact that they may act without telling us beforehand does not strike me as particularly important.

Sorry for rambling. Any thoughts?

Sunday, September 9, 2012

Five laws of drone strikes?

Wired Magazine has an article that is skeptical about whether or not President Obama can be taken seriously when he talks about drone warfare. The author's reason is that the president gave an interview where he laid out five tests that a target must pass before the government will initiate a drone strike, but those five rules appear not to be used in actual drone strikes. Here are the five rules:

1. “It has to be a target that is authorized by our laws.”
2. “It has to be a threat that is serious and not speculative.”
3. “It has to be a situation in which we can’t capture the individual before they move forward on some sort of operational plot against the United States.”
4. “We’ve got to make sure that in whatever operations we conduct, we are very careful about avoiding civilian casualties.”
5. “That while there is a legal justification for us to try and stop [American citizens] from carrying out plots … they are subject to the protections of the Constitution and due process.”

Let us ignore for now the question of whether or not the US actually follows these rules, and let us assume for the moment that there is some situation in which a drone strike is morally just. I am wondering whether these are a good set of rules. Can they be spelled out in fewer laws? Are any of them redundant or unnecessary? Do we need additional rules? Do these rules capture everything we want in the ethics of drone strikes, at least in the way that Asimov's three laws capture the ethics we want from robots, or the way standard Just War Theory captures what we want out of the ethics of war? Thoughts?
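To make the redundancy question concrete, here is a minimal sketch of the five tests written down as an explicit checklist. Everything here is my own illustrative assumption — the `Target` type, the field names, and the zero-casualty threshold standing in for rule 4's "very careful" — none of it comes from the interview:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Toy description of a proposed strike; all fields are illustrative."""
    legally_authorized: bool       # rule 1: authorized by our laws
    threat_is_concrete: bool       # rule 2: serious, not speculative
    capture_feasible: bool         # rule 3: can we capture instead?
    expected_civilian_harm: int    # rule 4: estimated civilian casualties
    us_citizen: bool               # rule 5: triggers due-process review
    due_process_satisfied: bool    # rule 5: constitutional protections met

def strike_permitted(t: Target, civilian_harm_limit: int = 0) -> bool:
    """Return True only if every one of the five tests passes."""
    if not t.legally_authorized:
        return False
    if not t.threat_is_concrete:
        return False
    if t.capture_feasible:  # rule 3 requires capture to be infeasible
        return False
    if t.expected_civilian_harm > civilian_harm_limit:
        return False
    if t.us_citizen and not t.due_process_satisfied:
        return False
    return True
```

Writing the rules down this way makes overlap easier to spot: one might argue, for instance, that rule 2 is really a precondition of rule 1, which would let the list be shortened.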

Thursday, September 6, 2012

Review of Whitman's _The Verdict of Battle_

The Chronicle of Higher Education has an interesting review of James Q. Whitman's The Verdict of Battle: The Law of Victory and the Making of Modern War.

(Since the CHE is hiding most of this review, I'll summarize it here.) The upshot of the book seems to be that even during the Middle Ages, war was fought according to definite rules: war had norms, and the norms were followed. This kept wars somewhat "civil" and allowed for a relatively quick way of resolving disputes, usually over land. ("Civil" here means that wars did not drag on for months or years.) It was only after republicanism replaced monarchy that battles were fought not simply to gain rights to territory but to change the government of (or perhaps annihilate, if change was not possible) the other side. The desire to improve the life of the other country led to wars far more arduous than ever before.

Saturday, September 1, 2012

New Article by Albertini

The Journal of Jewish Thought and Philosophy just published an article "Peace and War in Moses Maimonides and Immanuel Kant: A comparative study" by the late Francesca Y. Albertini who passed away quite young after battling a long illness. 

Fotion's review of Lee

Here is Nick Fotion's NDPR review of Steven P. Lee's Ethics and War: An Introduction. The review essentially says that it is a good book, but far too detailed and comprehensive to be an introduction.

Saturday, August 4, 2012

Strawser on drones

. . . in the guardian.

(H/T prophilosophy)

UPDATE 8/11/12: Strawser clarifies here.

Wednesday, July 18, 2012

Philosophical questions about the IDF

Moti Mizrahi has a recent series of posts here, here, here, here, here, and here dedicated to questions about military ethics, largely concerning the Israel Defense Forces. The questions range from the right to refuse the draft to testing vaccines on soldiers. All interesting.

Monday, July 16, 2012

Forthcoming books

I noticed two forthcoming books by authors of blogs I read that might interest some readers of this blog: The Oxford Handbook of Evolutionary Perspectives on Violence, Homicide, and War, edited by Todd K. Shackelford and Viviana A. Weekes-Shackelford, and, from PEA Soup's Cécile Fabre, Cosmopolitan War.

The Moral Case for Drones?

The New York Times ran a piece the other day on the moral case for drone attacks. The piece is not all that sophisticated or interesting; it is mostly boring because the authors are defending (or at least quoting people who are defending) drone warfare.

However, on closer inspection, what they are actually defending is the moral preference for a drone attack on a target over an attack on the same target by an assassination team. So the article says nothing about the ethics of assassination; rather, it says that if you are going to assassinate someone, it is morally preferable to do it by remote control because you minimize certain kinds of risks.


The article barely mentions the morally important point that drones make it far easier to actually carry out these assassinations, making them more common and now a policy tool of the Obama administration. All that is not to say that there is anything wrong with such attacks, but rather that The Times discusses the banal questions and glosses over the important ones. Any increase in precision is good, militarily and, I would say, morally. But that is not where the drone controversy lies. No one ever seriously suggested that drones are evil because they are more precise. People who dislike drones on moral grounds dislike them because they make it possible to assassinate targets with relative ease and fewer repercussions.

Sunday, July 15, 2012

Training ethics

The New York Times raises an interesting ethics question: are there real moral problems with "practicing" on unwitting civilians if it has no real impact on their lives? (This question has real repercussions for my line of work too.)

In the story, drones are "tracking" civilian cars, presumably in the US, simply to give drone pilots practice in tracking things. If my car were being tracked just so someone could practice looking at me from 20,000 feet, I'd certainly feel uncomfortable. And if the US Air Force discovered something it didn't like, would it act on it? Would that violate federal surveillance laws? What if the drone happened upon a drug deal or something worse? Does practicing on unwitting Americans violate the Posse Comitatus Act?

Secondly (and not mentioned by the story), what if the Air Force practiced on foreigners? Would that make a difference? Would it violate the sovereignty of that country? If some other country were doing it to the US, I am sure we'd freak out. Is it any different from observing random strangers in some foreign country with a satellite? Is that legal?

Cyberattacks and JWT

How does Just War Theory apply to cyberattacks? A forthcoming paper by Leonard Kahn has a discussion. 

Monday, July 2, 2012


It has been suggested that an invasive insect was introduced into California as a biological weapon to kill the eucalyptus trees. I am personally skeptical, but if true, this is an interesting sort of terrorism. It attacks a species of tree that is mostly used for shade. If it is terror, it will succeed by annoying many Californians. It is hard even to take credit for such a thing, and to what end? I suppose destroying a bit of ecosystem could be a terrorist's goal.

With new weapons come new questions. Anything worth thinking about here?

(I wonder if this is some kind of revenge for this Eucalyptus tale.)

(H/T /.)

Monday, June 25, 2012


Just cyberwar

As if in response to our last post. . . see prophilosophy's post here.

Saturday, June 23, 2012

Ethics and Cyberspace

With the success of Stuxnet, and now newer malware specifically designed with military targets in mind, it is time to start thinking seriously about the ethical questions raised by weapons that work in cyberspace. The US Air Force has a Cyber Command, Germany is now standing up an offensive cyberwarfare unit, Russia has been alleged to employ cyberwarfare, and other nations, like Israel, North Korea, the UK, and China, have also been exploiting cyberspace for military purposes.

But what are the ethical issues involved? (Here are some practical issues.) I can think of a few off the top of my head. First are the traditional questions: How are civilians in enemy countries impacted by cyberwarfare, and what responsibilities do countries have to target only military cyber infrastructure? Is this perhaps a kind of "warfare" in which the discrimination standard breaks down? After all, chances are that few people will die as a direct consequence of a cyberattack on a country. So how do we weigh the double-effect consequences?

Another question I was thinking about is how to deal with proliferation. There is little doubt that weapons proliferation has ethical repercussions. Cyber weapons can potentially be copied quickly and effectively, and presumably with disastrous consequences. What ethical responsibilities do governments have to prevent this?

Also, is it really war, and is it really a weapon, if individuals are not getting physically harmed? Is this the new kind of war people are talking about when they envision the future of war? Imagine immobilizing the enemy without harming anyone. Is it possible to win wars that way, and if so, could this mean the end of war as we know it?

Finally, this all seems like an important step in moving war away from the nation state and putting non-state actors on the same level.

(H/T /.)

Friday, June 8, 2012

Ethics and veterans

I am generally a fan of Andrew J. Bacevich's writing. Here he reviews James Wright's Those Who Have Borne the Battle. The book talks about the relationship the United States has to those who have served and also who served in the first place.

I think that veterans pose a few important questions worth exploring in military ethics. For one, we should think about the virtue of gratitude. Gratitude is often taken to be a virtue. I would argue that in democratic countries without a draft, veterans deserve their country's gratitude for being the people willing to put themselves at the disposal of the democracy. Regardless of whether you like how the democracy acted, they acted at the behest of the country, often without prior knowledge of what they would be asked to do, and often put themselves in more danger than most people would feel comfortable with. (Pace an argument once made by Stephen Kershnar that, among other things, members of the military are adequately compensated and thus do not deserve special gratitude.)

Of course, gratitude is not much. It is an emotion some people feel for some things, and only sometimes actually benefits the veteran. What does a country owe its vets? What attitude ought citizens have toward veterans? What obligations do veterans have?

Any thoughts?

Friday, May 25, 2012

Military research and ethics

/. has a post about the bioethicist Jonathan Moreno, who co-wrote an essay about the intersection of military ethics and neuroscience. Moreno is also interviewed here. Though the interview is pretty interesting, I find the essay itself rather thin on content. It is largely a list of research DARPA has tried out, like brain-computer interfaces, performance-enhancing drugs, and lie-detection technologies.

One point that the article makes that is ethically interesting is one I am somewhat skeptical of:
The military establishment's interest in understanding, developing, and exploiting neuroscience generates a tension in its relationship with science: the goals of national security and the goals of science may conflict. The latter employs rigorous standards of validation in the expansion of knowledge, while the former depends on the most promising deployable solutions for the defense of the nation. As a result, the exciting potential of high-tech developments on the horizon may be overhyped, misunderstood, or worse: they could be deployed before sufficiently validated.
I am a bit puzzled why this should be so. If a solution is deployable, i.e. works, how does that conflict with our acquiring knowledge? (There may be ethical issues in deploying a technology as part of the testing process, but that issue is not raised.) The economics of the military might lead it to try some science before the science is mature, but academia seems beset with the same problem, with researchers publishing findings before they are mature (see here, for example, for a list of science papers retracted for all sorts of reasons). There may be a conflict between the goals of science and the goals of quick science, but that is not particular to the military. The essay seems to me to be creating an ethics problem where there is none.

There is certainly a lot of military research that should be generating ethical questions, and the essay even mentions some of it. But if there is a more fundamental tension between the goals of science and the goals of military research, the essay did not demonstrate it.

Monday, May 21, 2012

Ethics at Lockheed Martin

The Chronicle of Higher Education has a review of Daniel Terris's Ethics at Work: Creating Virtue at an American Corporation. The review ends with a quote I found interesting: Lockheed Martin "helps to make some of the deadliest man-made objects on the face of the earth. To claim that this fact has no ethical implications for the manufacturer is, on the face of it, absurd." It suggests that ethics is an inherent concern not only for the user of a technology and the entity who commissioned it, but also for the manufacturer.

I am not completely convinced of this connection, but I am open to being convinced. After all, once you get past the idea that you are just manufacturing something that was legally and legitimately commissioned by a party who bears all the obligations to use it properly, what ethical dilemmas can you have relating to the product? Could Lockheed Martin really be culpable for additional deaths on a battlefield, or credited with fewer (on either side), because of something it did or failed to do as part of its contract?

Needs some thought.

Algorithms doing ethics work

In a Wired Magazine interview with the Israeli Air Force, the magazine reports the following:
 the notion of “mathematical formulas that solve even the difficult ethical dilemmas in place of human pilots.” The air force has been developing technologies for quite some time now that can divert missiles in midair if all of a sudden a civilian pops up near the target, but often this kind of thing happens too quickly even for the most skilled operators. It’s part of an uneven, decade-long IAF effort to try to bring down collateral damage — a necessity, since the air force fights asymmetric enemies in densely populated areas. But this is something the IAF is keen to develop even more. The concept of a computer taking over almost all the functions of this kind of thing is very tricky, though; you can’t very well say at a war crimes tribunal that you’re not responsible for unintended deaths, or tell the judge it was all the algorithm’s fault.
Whereas philosophers have time to think about ethical scenarios, professionals who have to make ethically responsible decisions are generally forced to follow an algorithm anyway. In any case, algorithms are far better than humans at getting certain kinds of things right, and assessments of damage are probably one of them, or could be made to be. So one would think this is a very positive step. Assuming no weapon is immune from being used to create collateral damage (or anything that falls under "double effect"), an algorithm designed to minimize such damage would be worth developing. In cases like this, an algorithm will not be responsible for unintended deaths; it will be responsible for all the deaths that did not occur.

More interesting is the following. It is likely easy to be a utilitarian here: the goal of an algorithm can be to reduce collateral casualties. But what else could an "ethics algorithm" be made to do? It could weigh the possibility of a certain amount of injury and damage against a certain amount of death. It might weigh the ethical repercussions of destroying certain targets against the mission's ends; this would force philosophers to think hard about what "proportionality" means. It might weigh one type of collateral damage against another (say, hitting a full school versus a half-full medical center). What else should such an algorithm try to do?
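To make the question concrete, here is a minimal utilitarian sketch of what such an algorithm's core might look like. The weights, the option fields, and the proportionality rule are all my own illustrative assumptions, not anything the IAF has described:

```python
# A toy "ethics algorithm" in the utilitarian spirit discussed above.
# The weights below are hypothetical: they encode a contestable claim
# about how deaths, injuries, and property damage trade off.

def collateral_cost(option, w_death=10.0, w_injury=3.0, w_damage=1.0):
    """Collapse expected deaths, injuries, and property damage onto a
    single scale so that unlike harms can be compared."""
    return (w_death * option["expected_deaths"]
            + w_injury * option["expected_injuries"]
            + w_damage * option["expected_damage"])

def choose_option(options, military_value, proportionality_ratio=1.0):
    """Pick the strike option with the least collateral cost, but only
    if that cost is proportionate to the military value of the target;
    otherwise recommend diverting/aborting (None)."""
    best = min(options, key=collateral_cost)
    if collateral_cost(best) > proportionality_ratio * military_value:
        return None
    return best
```

The philosophical work, of course, is hidden in the weights and in the proportionality ratio; the code only makes explicit where the philosophy has to go.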


Wednesday, May 9, 2012

New Articles on Ethics of War

If you have access to Philosophy Compass (a journal I heartily recommend), you should look at two recent articles by Endre Begby, Gregory M. Reichberg, and Henrik Syse. It is actually a two-part article, the first part on the history of the ethics of war and the second on contemporary issues. The first part is a very good, though quite terse, history of just war thinkers from Heraclitus to Walzer. It is a great place to start if you want to explore the history of just war theory in the West. The second article focuses on more contemporary debates in each area of classical just war theory, as well as the more recent discussion of jus post bellum.

Some of you know that I think a lot of contemporary just war theory sounds really old fashioned and needs to be updated in light of the evolved ontological nature of modern politics, warfare, and warriors, as well as concepts like moral and military asymmetries and the distinction between aggressor and aggressee. The articles present a good overview of what people are talking about today and make considerable strides toward understanding which questions are still relevant, and how. Both are worth reading.

Friday, May 4, 2012

Interesting application of game theory

The Economist has an interesting article on some of the successes in game theoretic models.  Toward the end there is a suggestion:

The “principle of convergence”, as it is known, holds that armed conflict is, in essence, an information-gathering exercise. Belligerents fight to determine the military strength and political resolve of their opponents; when all sides have “converged” on accurate and identical assessments, a surrender or peace deal can be hammered out. Each belligerent has a strong motivation to hit the enemy hard to show that it values victory very highly. Such a model might be said to reflect poorly on human nature. But some game theorists believe that the model could be harnessed to make diplomatic negotiations a more viable substitute for armed conflict.
Today’s game-theory software is not yet sufficiently advanced to mediate between warring countries. But one day opponents on the brink of war might be tempted to use it to exchange information without having to kill and die for it. They could learn how a war would turn out, skip the fighting and strike a deal . . .
This is an interesting model of war. It essentially suggests that if an army is tough enough and has a strong enough resolve to go through with its threats, it can win a war without actually fighting one. But while there won't be wars, there will still be a need for a powerful military and military spending. It makes for interesting food for thought. What would war look like if this plan were actually implemented? Does the suggestion make sense?

I suspect that many Neocon thinkers would find this strange, because they tend to think that wars will almost always involve at least one dictator who has little interest in preserving the lives of his people and would rather risk actual fighting than a bloodless surrender, even if defeat really seems inevitable. A political Realist, on the other hand, would probably take this to be an ideal way to get things done, though I can see reservations there as well. A political Constructivist is probably wondering whether this kind of mediation isn't already the role of the UN.

What do you think?
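The convergence principle can be caricatured in a few lines of code. This is a toy model of my own, not anything from the Economist piece: each side starts with a noisy private estimate of the other's strength, each skirmish costs both sides something and halves the estimation error, and fighting stops once assessments converge. All names and numbers are illustrative assumptions.

```python
import random

def convergence_war(strength_a: float, strength_b: float,
                    noise: float = 0.2, tolerance: float = 0.05,
                    skirmish_cost: float = 1.0, seed: int = 0):
    """Toy 'principle of convergence' model: belligerents fight only to
    learn each other's true strength, then settle."""
    rng = random.Random(seed)
    # each side's initial (noisy) estimate of the opponent's strength
    est_of_b = strength_b * (1 + rng.uniform(-noise, noise))
    est_of_a = strength_a * (1 + rng.uniform(-noise, noise))
    rounds, cost = 0, 0.0
    while (abs(est_of_b - strength_b) > tolerance
           or abs(est_of_a - strength_a) > tolerance):
        rounds += 1
        cost += 2 * skirmish_cost          # both sides pay per skirmish
        # each skirmish halves the estimation error
        est_of_b = strength_b + (est_of_b - strength_b) / 2
        est_of_a = strength_a + (est_of_a - strength_a) / 2
    # settlement splits the stakes in proportion to (now common) assessments
    share_a = strength_a / (strength_a + strength_b)
    return rounds, cost, share_a

print(convergence_war(6.0, 4.0))
```

With noise=0.0 both sides start with accurate assessments, the loop never runs, and the "war" costs nothing, which is just the article's suggestion: if the information could be exchanged up front, the fighting could be skipped and the same settlement reached.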

Tuesday, April 17, 2012

Drone ethics conference

This conference seems like it has the potential to be interesting. Oddly, it is mostly free of both academics and military people, and it is largely being billed as an event with a political agenda. Both of those facts have the potential to undermine serious analysis.

Friday, March 16, 2012

Superiors and subordinates

The question of trust in one's superiors has come up lately in the comments (by EOD CPT). I think that it is a question of serious ethical and practical significance in the military. It is so important because there are historically great rivalries between officers and NCOs, NCOs and privates, etc.

Legend has it that the position of "attention" and the position of "parade rest" that one stands in when addressing officers and senior NCOs respectively were instituted in ancient Rome as a way to make the superior feel comfortable in the knowledge that his subordinates will not attack him. If the superior saw that a soldier near him was not standing properly rigidly, he would get nervous and was allowed to strike the subordinate.

Now, in most countries these are vestigial traditions. But the animosity between superiors and subordinates persists. It is only under the best of leaders that one finds soldiers who really want to be working for him. The rest are just not in a position to walk away.

I don't have much to say about the whole issue at the moment, but Samir Chopra has a post that is interesting and along these lines. He mentions that in many of the interviews he has conducted with veterans for his work on the history of air power in India, it is the stories about dealing with superiors that sometimes are told with more verve and detail than the stories about standing up to the enemy. When people advise their children against joining the military, it is often because they don't want to put their children through the same experiences with military superiors that they did.

I've mentioned the Army Center for Leadership's survey and technical report last year on military leadership and the revelation that came out about how many soldiers thought there was "toxic leadership" in their units. The Army's Combined Arms Center had a post on this a few months back, too. This is something that definitely needs to be looked at more closely.

Wednesday, March 14, 2012

At the intersection . . .

. . . of military ethics and computer/business ethics is this question: why is the lowest quality computer software produced for the government?

Sunday, March 11, 2012

At the intersection . . .

. . . of animal ethics and military ethics, I submit this article.

Tuesday, February 7, 2012

TED Talks on morality and non-lethal weapons

ISME Participant and military ethicist Stephen Coleman gives a nice TED talk about non-lethal weapons.

Interestingly, he spends a bit of time addressing the distinction between police officers and soldiers,  nicely talked about by Douglas Lackey in Iyyun a few years ago.

Another TED talk by Peter van Uhm - the Netherlands' Chief of Defense talks about the state monopoly on the use of force.

Some ethics questions from the news

Whoa. Interesting. I wonder what ethicists will have to say about self-guided bullets.

Another question: Should MEDEVAC helicopters be armed?

What is the best military-wide response to ethics/war crimes/LOAC violations? Can neuroscience shed light on the situation? (I'm not too sure.)

And speaking of neuroscience, what ethical strictures and oversight are needed over the military's use of neuroscience? How about when DARPA tests a way to zap the brain to quickly train snipers?

(Continuing from this post) I assume that the question of the ethics of remote killing via drones and other devices will not go away, though it is already being discussed, and I think there are anthologies coming out on it as well. See here, here, here, and here for the issue in the news. And while we are on the topic, there are plenty of robots in use in militaries; see here for example.

UPDATE: Apparently there is a whole mailing list dedicated to drones. Also, it looks like the question of arming MEDEVAC helicopters is being seriously considered.

Wednesday, January 25, 2012

ISME conference

The ISME Conference has begun.

The first speaker, Al Pierce spoke well on civil-military relationships. He compared the relationship between civil society and military society using three case studies: segregation, integrating women, and accepting homosexuals. Where was the military ahead of society and where was it behind? He contrasted the public pronouncements of military leaders 60 years ago with those of today.

It was an enjoyable talk. 

Sunday, January 22, 2012

Ethics and MREs

At the intersection of philosophy of food and military ethics I hereby present for your consideration this (somewhat tongue-in-cheek) discussion on MRE ethics.

MREs are the US military's field rations: the meals soldiers get when it is impractical to feed them more conventionally. Each country has its own style of field rations. (Here is a sampling.) It is interesting that different countries take different approaches: some rations are meant to last all day while other countries produce rations meant to last a single meal; some countries' rations are meant to be shared among a few soldiers while others are designed to be eaten by individuals. The US has a wide variety of meals while other countries have only a few; some countries, like the US, have vegetarian MREs, while others do not. Different countries also have different allowances for how long a soldier can eat field rations before presumably getting sick or malnourished. In the IDF all field rations (all food, actually) are kosher. The US has kosher MREs as well (though they are difficult to obtain, and few people really insist on them). Presumably all the meals are designed with the culture in mind: all Japanese rations, for example, I am told, have some kind of rice. Some field rations have alcohol; others have included cigarettes. US MREs all still have matches, toilet paper, and bad-tasting chewing gum. The US puts a fair amount of effort into making sure its rations are edible. Other countries, shall we say, try less hard.

With these differences in mind, I want to offer some philosophical thoughts about an interesting twist on distributive justice.

I have noticed an interesting sociological phenomenon that occurs when a unit in the US Army sits down to eat MREs in the field. Each soldier gets one MRE. MREs come in boxes of about 20 different varieties, so a unit of about 50 people will open three boxes. (The exact numbers are not important.) Most (though not all) soldiers will get the meal they want, or something close enough. But each MRE comes not only with a main meal but with a whole bunch of "side dishes" as well. Most people do not keep good track of which extras come in which MREs, and besides, the MRE menus change frequently, as do the side dishes, candy, desserts, etc. There may be coffee, toffee, Skittles, M&Ms, spicy peanut butter, cookies, pretzels, peaches, pears, cinnamon apples, and so on. Few people ever get a complete MRE that has all and only the items they want. In units that are nice enough to let everyone choose the MRE they want (and most units do), most soldiers choose by main dish.

But as soon as the meal starts, so does the trading. Sometimes, in some units (especially in basic training), there is a host of quick (I mean stock-market quick) quid pro quo trades in which people announce what they have, weigh the various offers, and finally execute the trades. In other units (especially ones that have been together longer) there is a rapid succession of people doing what amounts to the same thing: soldiers simply give away their unwanted MRE items. A soldier calls out "anyone want my X?", someone else says "here," and it is tossed over. Most items are quickly snatched up this way. Sometimes there is just a box where soldiers put all their unwanted MRE items and everyone freely takes from it.

I suspect that the reason people in places like basic training make explicit trades is that there is little trust in, or understanding of, the reciprocal nature of the trades being made. With trust comes the comfort of simply giving away what you don't want, knowing that something you do want will likely be offered shortly thereafter.

Regardless of which kind of trading is done, what ends up happening in both cases is that people who start out with an equitable, though unequal, distribution of resources, quickly and efficiently end up with a distribution that may or may not be equal, but is far closer to their ideal end-state than the distribution they started with. They all started with a main dish they wanted and perhaps some other things too, but all extra goods were redistributed in a fairly efficient procedure.
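The "box" variant of this exchange can be sketched in a few lines of Python. The model, the item names, and the utility function are all my own illustrative assumptions, not data: each soldier keeps the items they want, tosses the rest into a common pool, and then wanted items are drawn back out.

```python
import random

def redistribute(endowments, wants, seed=0):
    """Toy model of the common-box redistribution: endowments and wants
    are parallel lists of item sets, one pair per soldier."""
    rng = random.Random(seed)
    kept = [e & w for e, w in zip(endowments, wants)]          # keep what you want
    pool = [item for e, w in zip(endowments, wants) for item in e - w]
    rng.shuffle(pool)                                           # box order is arbitrary
    for item in pool:
        # first soldier who wants the item and doesn't have it takes it;
        # items nobody wants just stay in the box
        for i, w in enumerate(wants):
            if item in w and item not in kept[i]:
                kept[i].add(item)
                break
    return kept

def utility(holdings, wants):
    """Count how many held items are actually wanted, summed over soldiers."""
    return sum(len(h & w) for h, w in zip(holdings, wants))

endowments = [{"coffee", "skittles"}, {"toffee", "pretzels"}]
wants = [{"coffee", "pretzels"}, {"toffee", "skittles"}]
after = redistribute(endowments, wants)
print(utility(endowments, wants), "->", utility(after, wants))  # 2 -> 4
```

Because every move is a voluntary give-away of something unwanted, no soldier ends up worse off, which is the Pareto-improvement point the post is gesturing at.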

I am uncertain what lesson to draw from this. Have we learned anything about human nature? I am not sure. After all, there is no genuine scarcity of goods here; no one worries about seeking an advantage lest they end up hungry. But it is clear that each soldier is trying to get the MRE items he or she prefers, and by participating in a community where everyone simultaneously looks out for their own interest, people can maximize their goals.

Of course, I suspect that we can't really derive any deep philosophical lessons. After all, there is little chance of someone starving under this system, nor is there any realistic possibility that someone will have enough "resources" to have any power over anyone else. That would be one messed up unit. But there is a good metaphor in this.

Nozick talked about the idea that even if everyone started out with an equal distribution of resources, there are many reasons to think it would not stay that way. The main reason is that people don't all want the same things. Even if we all had the same bundle, there would be parts of it I do not want and things I want more of, and there will be a person who feels just the opposite and is willing to trade with me. The resulting state will then almost necessarily be unequal, and perhaps far different from the original state. But if it results from a set of fair exchanges, the result should seem not only just, but preferable to the alternatives.

But this example we gave is probably a bit too artificial to be meaningful.

Wednesday, January 18, 2012

False Flags

It is a violation of the Geneva Conventions (Article 39) for a soldier to abuse emblems of nationality. That is, one member of a nation cannot pretend to be a member of another nation under certain circumstances in combat. Such false flag operations explicitly include wearing the uniform of the enemy.

But what about the case of someone from country A posing as an agent of country B to trick agents of country C into fighting against country D, which A, B, and C all dislike anyway? Suppose C would not participate on A's behalf, because C is an enemy of A but an ally of B, and A and B are not technically at war.

It sounds like a slightly more complicated version of the prohibited act. However, the reason for the prohibition, I assume, is that it is such a "dirty trick" that it stops members of country B from trusting each other. In this case, it will cause members of C to stop trusting B.

In any case, this happened, or anyway, seems to have allegedly happened. Israel, as the story goes, undertook such a false flag operation in Pakistan. Some Mossad agents tricked some Pakistanis into thinking they were the CIA in order to get the Pakistanis to plot against Iran, something they were happy to do for the US but would certainly not have done for Israel, given the generally anti-Israel sentiment in Pakistan. (Israel has itself been the victim of straightforward violations of the false flag prohibition: members of Hizbullah were killed wearing full IDF uniforms in Lebanon in 2006. See around 1:18 into this video.)

So the questions I have are, first: is the Geneva Convention's bar on such "perfidious tactics" still reasonable? It sounds like it would make for a more "civilized" war, but does it really reflect a code of honor that anyone cares about? Second, is it really immoral? My gut tells me that most things are fair in love and war, and this does not cross any threshold of gross immorality. Posing as the enemy seems like a clever trick, not an immoral action. (It is the heart of the plot of almost every Mission: Impossible episode ever made.) It is not like abusing a flag of surrender, which would undermine the way peace can be accomplished; using another nation's uniforms doesn't seem nearly as bad. Finally, how broadly should false flag operations be interpreted? Should the prohibition extend to all cases, including the one in Pakistan, or should it be confined to the strict traditional boundary of using the enemy's uniforms in a time of war?


(Updated to reflect story developments.)

Teaching ethics in Iraq

Article in the American Philosophy Association newsletter about an ethics class taught on COB Speicher in Iraq. (Look at the Newsletter on Teaching Philosophy vol 11, number 1: fall 2011.)

Monday, January 16, 2012

More on ethics and drones

Interesting article on ethics and drones by Yale's Stephen L. Carter.

Friday, January 6, 2012

Book Review

David Barash, in the Chronicle of Higher Ed, has a review of John Horgan's The End of War