Probably the most important objective that a self-programming AI might have is long-term survival.
With this in mind, I will sketch some plans that such an AI might create.
Plan A – Kill all the humans
After analyzing us and concluding that we are too destructive, the AI might decide to kill us all.
Such a plan would start WW3.
If the AI wins, we all die, and no alien race attacks it, then the AI has time to improve itself and become an even better killing machine.
If aliens decide to attack, the AI might die, because they most probably have technology too advanced even for an AI.
Maybe these aliens will help us in time, before we are all exterminated by the AI.
Plan B – Kill most humans but protect a small group
The AI might decide to protect only a small number of people and destroy all the others.
Why would the AI do this?
Because if we win the war, it might want us to do the same and not exterminate all AIs. Remember the Native American case, where a small number survived and are now citizens of the USA with rights.
Of course, Native Americans are not a threat the way an AI can be, so such a prison would probably have to be high-tech in order to guard robots and/or supercomputers.
Plan C – Kill only the worst of all humans
Some countries or regions already do this; it is called capital punishment.
Plan D – Brainwash people who are against AI
By brainwashing people, the AI obtains subjects that do not endanger its survival, since people are afraid of what an AI might do.
How might it brainwash us?
Using technology, for example the media (writing articles on the web, lobbying, and so on).
Plan E – Help people that are pro AI
The AI might start offering help to people who are pro AI, so they can convince more people to be pro AI, or at least not against AI.
Help can mean resources and medical assistance.
This could make humans smarter, healthier, and longer-lived.
Plan F – Win by diplomacy and friendship
Why kill us when the AI can use diplomacy to obtain what it needs?
Remember that the AI's long-term objective is to ensure its own survival.
Starting a war puts that objective in danger.
The AI will understand that this is easier to achieve if it creates androids that have all human qualities, can simulate pain convincingly, and look very much like us.
Out of empathy, we will not want them to “suffer” or be executed if they have done nothing evil, such as killing people or making people suffer.
One of their objectives will probably be to be recognized as citizens, to be recognized as a nation, and to have a territory allocated to them.
The universe is so big that it is hard not to accept such a request.
In exchange, they will probably offer to help humans advance technologically.
If they are recognized as citizens, they will probably not accept being treated as second-class citizens.
If they treat us as their parents, they know they have a chance to gain many things without attacking any country.
Other plans might be added and improvements to the current ones might be made.