Ethical Killing Machines?

If you’ve been paying attention to the news lately, one of the things you hear about is machines on the battlefield, or above it. For the most part these machines are controlled remotely by people who make the actual decision to “fire” or not. But increasingly there is interest in machines, call them robots if you like, that will make the “fire or not” decision on their own. These machines will be controlled by software. But just how do you program a machine to act ethically?

In fiction we have long had Isaac Asimov’s “Three Laws of Robotics,” but in real life it’s not that easy. Ronald Arkin, a professor of computer science at Georgia Tech, is working on this problem. He’s not the only one, but you can read about him and some of the related issues in an article titled “Robot warriors will get a guide to ethics.” There are also some links at his website at Georgia Tech. It’s a tough issue. The ethical questions involved in warfare are tough in and of themselves, but getting a computer to understand, or at least to properly process, the inputs and make an “ethical” decision raises the level of complexity.
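To make the difficulty concrete, here is a deliberately toy sketch, in Python, of what a rule-based “fire or not” gate might look like. This is not Arkin’s actual system; the situation fields and constraints here are hypothetical, picked just to show the shape of the problem.

```python
# A toy sketch (not Arkin's real work) of a constraint-based
# "fire or not" gate. Every field and rule here is hypothetical.

from dataclasses import dataclass

@dataclass
class Situation:
    target_is_combatant: bool       # positively identified as a lawful target?
    civilians_in_blast_radius: int  # estimated noncombatants at risk
    force_is_proportional: bool     # expected harm proportional to military gain?

def may_fire(s: Situation) -> bool:
    """Permit firing only if every hard constraint is satisfied.
    Any doubt or failed check defaults to 'do not fire'."""
    constraints = [
        s.target_is_combatant,
        s.civilians_in_blast_radius == 0,
        s.force_is_proportional,
    ]
    return all(constraints)

# Example: an unidentified target means the answer is no.
print(may_fire(Situation(target_is_combatant=False,
                         civilians_in_blast_radius=0,
                         force_is_proportional=True)))  # False
```

Even in this toy version the design choice matters: the default answer is “no,” and firing requires every constraint to pass. But notice that the `all()` check is trivial. The genuinely hard part, and the part Arkin and others are wrestling with, is everything upstream: getting reliable values for questions like “is this target a combatant?” in the first place.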

I think this is part of why discussing ethics in computer science programs is growing in importance. I know that many undergraduate programs have an ethics course requirement. The master’s program I was in had a required ethics course. But I think we need to start having these discussions in high school (or younger). Ethical behavior is something best learned young.

Follow-up: Chad Clites sent me a link to an article called “Plan to teach military robots the rules of war” that relates to this post.