This is an interesting article in the New Yorker.
The article discusses the kinds of ethics that will need to be implemented into America's future technological advances. It begins with the example of Google's "driverless" car. The idea behind it? This car will eliminate the chance of human driving error, making it immoral for anyone to actually drive themselves, since they would be putting the lives of other drivers at risk. The author then moves on to the topic of Robocops and robot soldiers — robots that will fight our battles for us. Although I think these types of technologies are far beyond our generation, it is interesting to think about the ethical dilemmas they bring about. Will the robots be designed to kill? Obey the laws of human morality? Believe in self-preservation? These are all big moral questions that would need to be addressed once the technology for these kinds of creations comes about. But how do you artificially program morality?