New Yorker on Self-driving vehicles and ethics: Google’s Driver-less Car and Morality:
"‘Ethical subroutines’ may sound like science fiction, but once upon a time, so did self-driving cars."
In the end, "preservation of the driver" is where we will land: there will never be consensus on ethics (the debate has gone round and round for thousands of years), but there is consensus on the ethic of self-preservation. Hopefully such life-or-death situations will be rare.
Determining the strategy for self-preservation will inevitably be easier than determining a strategy that depends on what others are doing, as the others (a crowd of people, other cars) are much less predictable. If everyone assumes the other will act in self-preservation, that is more stable than me trying to predict what you will do to avoid hitting me while you try to predict what I will do, ad infinitum. In short, if I assume self-preservation on your part and you assume it on mine, we are likely better off than if we each assume possible altruism on the other's part. This might not always be the case, though.
Imagine a scenario: two cars driving fast around a narrow curve on the side of a mountain don't detect each other until too late. The best standard routine is for both cars to swerve to their right (or their left, but everyone must agree). If one swerves right and the other left, they converge and kill everyone involved. If I anticipate that you will be self-preserving, and I am self-preserving, we can both call the same (standard) subroutine. But if the left side is a cliff and the right side is a relatively flat piece of land, we might see both altruistic cars going off the cliff, or both selfish cars swerving to the flatland and colliding there, either scenario killing everyone. But if both follow a standard routine, we can save at least one of the cars. The scenarios are endless.
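The swerve scenario is really a coordination game, and the point about a shared convention can be sketched in a few lines of code. This is a hypothetical, illustrative model (the function name and outcome strings are mine, not from any real vehicle software): each car picks a side from its own perspective, and only a shared convention avoids the collision.

```python
# Illustrative sketch of the head-on coordination problem: two cars meet,
# each swerves 'left' or 'right' from its own perspective. Because the cars
# face each other, "my right" and "your right" are opposite sides of the
# road -- so matching conventions means the cars pass, while mismatched
# choices send both cars toward the same side.

def outcome(a: str, b: str) -> str:
    """Result when car A and car B each swerve 'left' or 'right'
    (sides taken from each driver's own point of view)."""
    if a != b:
        # One car's right is the other car's left: both steer toward
        # the same patch of road and collide.
        return "collision: both cars converge on the same side"
    # Both followed the same convention and pass each other safely
    # (terrain permitting -- a cliff on one side changes the stakes).
    return f"both swerve {a}: cars pass safely"

# Without an agreed convention, every pairing is possible:
for a in ("left", "right"):
    for b in ("left", "right"):
        print(f"A swerves {a}, B swerves {b} -> {outcome(a, b)}")
```

The sketch makes the argument concrete: two of the four pairings end in collision, and the only way to guarantee a safe pairing is for every car to ship the same standard subroutine, independent of any deeper ethical reasoning.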
Marginal Revolution discusses this as well.