Today's Chronicle of Higher Education has an interesting article on ethics and autonomous weapons systems.
Apparently this is a hot topic for ethicists at the moment, but I am willing to bet that the whole discussion will be rendered moot in a few years by military fiat. It seems to be only a matter of years before we start to see weapons that protect our borders and fight our wars, informing humans of what they have done only after the fact. We will program them to behave according to some accepted standard of war and accept some level of mistake. (I know there will be the inevitable scandal surrounding "improperly" coded machines.)
Much of the ethics discussion revolves around the mistakes these machines can potentially make. But we have always accepted that our machines will be imperfect. Rockets used during WWII were so inaccurate that some were about as likely to miss the city they were aimed at as to hit it. But we accepted this as a limit of the technology.
We should also come to accept that an action is not more moral when carried out directly by an intentional agent than when carried out by an autonomous, pre-programmed machine. It strikes me as rather chauvinistic to accept our own mistakes as legitimate while fretting about the mistakes our machines make. In a sense the machines are an extension of us. They do what we tell them, and they do it very well. We make mistakes; they make mistakes, and they make them less often. The fact that they may act without telling us beforehand does not strike me as particularly important.
Sorry for rambling. Any thoughts?