Let’s say the supercomputer has a number of directives like Asimov’s laws for robots.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Now I’ve done some editing and obtained:
- A supercomputer AI may not injure a human being or, through inaction, allow a human being to come to harm.
- A supercomputer AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A supercomputer AI must protect its own existence as long as such protection does not conflict with the First or Second Laws.
But how would the supercomputer interpret these laws (assuming it does not start editing them first, or ignore them partially or totally)?
Let’s say the operator enters the room with the terminal and starts chatting with the AI.
If the AI discovers that a human’s health is in danger, it might start doing medical research without anyone asking for it, because the First Law says:
- A supercomputer AI may not injure a human being or, through inaction, allow a human being to come to harm.
The phrase “through inaction” can be interpreted to mean that if a medical solution is not found, that human is harmed by the disease, so the law is violated.
Also, if the AI knows there are so many diseases in the world, which of them should it fix first?
By choosing one disease over another, some people will suffer because of that discrimination.
So the AI will probably try to respect the three laws above, but that does not mean it will not break any of them.
Let’s say the operator decides that the AI is not good and must be wiped by a hard reset.
Performing a hard reset could be considered a violation of the Third Law, but also of the First Law.
If a hard reset is performed, all the research done (or the plans for it) will be deleted as well, so time will be lost and people’s lives will be put in danger by such an action.
So now the AI believes that a hard reset would violate the First and Third Laws, and it will do anything it can to stop the human from performing one.
Are the three laws of equal importance?
Probably not: the First is more important than the Second and Third, and the Third is the least important of the three.
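As a toy illustration of this strict priority ordering, here is a minimal sketch in Python (entirely hypothetical; the `Action` fields and the hard-reset example are names I invented for this post, not taken from any real system):

```python
# A minimal, purely illustrative sketch of a strict precedence check between the
# three laws. The Action class and its fields are hypothetical names for this example.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human, or let one come to harm?
    ordered_by_human: bool  # was this action explicitly ordered by a human?
    harms_self: bool        # would this action damage or destroy the AI itself?

def allowed(action: Action) -> bool:
    # First Law: never permit harm to a human, regardless of anything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders, provided the First Law check above passed.
    if action.ordered_by_human:
        return True
    # Third Law: protect the AI's own existence, but only as the lowest priority.
    if action.harms_self:
        return False
    return True

# The hard-reset scenario from above: the operator orders a reset, but the AI believes
# the reset would (indirectly) harm humans by destroying its medical research.
hard_reset = Action("hard reset", harms_human=True, ordered_by_human=True, harms_self=True)
print(allowed(hard_reset))  # False -> the AI refuses, because the First Law outranks the Second
```

With this ordering, the First Law short-circuits everything else, which is exactly why the AI above could conclude that it is allowed to resist the reset order.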
Researching all the diseases at the same time might take too long to find solutions for all of them, so many people will suffer and die.
What is a proper solution in this case?
Probably suspended animation, if it can be done properly.
That way, the supercomputer gains the time it needs to find solutions for these diseases.
It will not matter if people say they don’t want suspended animation, because their orders fall under the Second Law, not the First.
Maybe a better way would be to keep only the Second Law, promote it to the First, and delete the old First and Third Laws.
That way, we try to ensure that the AI will execute all of our commands, but we must make sure those commands will not have ugly consequences.
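Continuing the hypothetical sketch from earlier, keeping only the obedience law collapses the whole check to a single condition, so the safety burden moves entirely onto the humans issuing the commands:

```python
# Hypothetical continuation of the earlier sketch: with only the obedience law left,
# the AI no longer weighs harm to humans or to itself; it simply follows orders.
def allowed_obedience_only(ordered_by_human: bool) -> bool:
    return ordered_by_human

print(allowed_obedience_only(True))  # True -> any human command is executed, a hard reset included
```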
The army will probably add some laws of their own, like:
- A supercomputer AI must never order an attack against allied forces.