To get back to the original question (not that robot armies of death aren't fun), I think the concept of a set of "Laws of Robotics" is pretty much moot. Asimov, and I think for a while pretty much everyone else, assumed that once a computer reaches a certain level of informational complexity it will essentially become "alive". The famous test associated with machine intelligence is of course the Turing Test, but a program passing it means nothing except that it has been given sufficient information in sufficient combinations to fool a human into thinking it is another human. That by no means equates to sentience, just clever programming, and therein lies the problem: how would we ever truly be able to tell whether a computer was thinking or not? Visions of Multivac, Colossus, Shalmaneser and Skynet are ultimately just fantasy, unrealisable because consciousness is not something that can be created whole and complete, utterly constrained in everything it does by a set of arbitrarily imposed rules. Machines that think, if there ever are any, will be like us - blank slates that must be taught how to think from the ground up.
Of course, I have my doubts as to whether we will ever manage such a feat, since first we must understand how consciousness works in humans. That's a problem I don't think will be solved in any of our lifetimes - even if we attain a vastly more complete understanding of how our brains function, it won't tell us much about self-consciousness and free will. We may get machines that can learn how to do a few things, but I doubt there will ever be anything with the amazing capacity and range of the human mind.
Quote:
Originally Posted by Gatac
Also, I'd rather we save the nukes for when we meet nasty aliens. They are currently our best weapon against opponents with a higher technological level.
Clearly you have never seen Mars Attacks!.