Over on wired.com and popsci.com there are a couple of interesting posts about the ethics of programming robots. For example, should an autonomous car be allowed to make a choice that may kill its occupant(s) if doing so would save a larger number of lives?
Both posts start with self-driving cars. Say a tire blows on your autonomous car, and the computer driving it can steer into either an SUV or a compact car. The SUV can better withstand the impact than the compact car, so based purely on physics the choice seems simple. But of course things aren't that simple. If autonomous cars are programmed to choose (or, we should say, target) larger vehicles, then the owners of SUVs bear an extra burden. What do you think insurance companies would do if they knew SUVs were more likely to be struck by autonomous cars?
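The physics-only rule described above can be sketched as a toy cost function. Everything here is an illustrative assumption, not something from either post: the crashworthiness scores are made up, and a real system would weigh far more factors.

```python
# Hypothetical sketch of a naive "crash optimization" rule: pick the
# collision target expected to cause the least total harm. The scores
# below are invented for illustration, not real crash-test data.

def expected_harm(vehicle):
    """Crude harm estimate: sturdier vehicles protect their occupants better."""
    # Assumed crashworthiness scores (higher = occupants better protected).
    crashworthiness = {"suv": 0.8, "compact": 0.4}
    return 1.0 - crashworthiness[vehicle]

def choose_target(options):
    """Steer toward whichever vehicle minimizes expected occupant harm."""
    return min(options, key=expected_harm)

print(choose_target(["suv", "compact"]))  # the physics-only rule picks "suv"
```

The ethical problem is visible even in this toy version: the rule systematically "rewards" drivers of safer vehicles with a higher chance of being struck.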
The wired.com post discusses crash-optimization strategies for autonomous cars, while the popsci.com article frames the issue as a version of the Trolley Problem and also connects it to military robots and the Geneva Conventions.