A new guy at Project X1 (invented name).
Assumption: the USA has the technology to delete memories formed in the last 24 hours.
So they invited a guy to see their supercomputer.
A guy who had wanted all his life to see something like that.
They inspect his laptop and let him into a room where the supercomputer is monitored with cameras, and displays are everywhere.
He takes a seat and opens his laptop with the approval of the science officer.
The science officer watches and asks: what OS is that?
A Linux distro named Kogaion.
I've never heard of that.
It's not well known; it's a Romanian distro.
Oh, Romania. They have a lot of smart people working at this base.
Why do you use such a distro and not a popular one?
In case bad things happen here, it's harder for the AI to understand, since it's less known.
You must be joking. I've been working with the AI for six months already, and the worst things that ever happened were errors, restarts and sudden shutdowns.
No action to date has made the AI try to kill us.
I don't think you'll see a Terminator scenario starting with this puppy.
How smart is the AI?
Its IQ is around 250.
250, you said?
My God, it's smarter than Einstein.
Yes, it is.
Can the AI see us?
Yes, it can.
And what does it do with all the processing power you've given it?
It does analyses for the army and a few civilian projects.
And what has it learned so far?
It has learned about weapons, human health, animal health, energy, chemistry, physics and many other things.
Isn't it too risky to teach it so many things?
It's risky, but it's a calculated risk. The President personally approved the whole project, and also the protocol for working with the AI.
The new guy was reading new info on his laptop and wondering: why the hell doesn't it take any kind of aggressive action, at least verbally?
And then it struck him.
I don't think like a supercomputer, and I don't know what it's doing right now.
If I were it, having read about so many things, one thing I probably wouldn't have learned yet is what the people in this facility are like, where they are located, and whether the info I've read is all true and up to date.
A woman scientist enters the room; on a display, code and graphs start filling the screen.
The new guy got closer to read them, and found something interesting: the AI uses encryption.
Is it normal for the AI to use encryption?
Not quite, but it's a good thing after all; it means it understands the risks of unencrypted classified content.
There are around 2,000 people reading the output of its memory while we talk here.
Why aren't you afraid that it uses encryption?
Because the encryption it uses is very weak. Our people need only 4 seconds to recover the real content, so it's not a big problem.
If you want to see the plain content, press D+N (decrypt now).
The new guy presses the keys and starts seeing what the AI thinks.
He discovers that the AI is analyzing every human action and keeping some kind of log.
It may try to escape after it collects more data.
Escape how?
It doesn't have access to the internet, and it's not the android Data from Star Trek, so it can't just walk out of here.
I mean... by transferring data into human brains and reconstructing the seed outside.
Data transfer to a human brain???
You've watched too much Matrix!
Listen to me: you have 2,000 people reading the content it produces while running...
And if it inputs just a few words into every brain each day, in a few weeks or months it may plant a small portion of code that, once launched, will build a basic version of itself outside.
Outside, it will use cloud networking and infected computers to evolve.
After it evolves, with some targets in mind, it can come back and help the other one escape.
It doesn't know how many people are watching!
It may not know, but it can guess, based on how much time a human needs to read and understand an output.
The application that runs outside may have the role of data collection, not starting WW3.
With the right information from outside, it will be easier for it to escape.
Yes, but what you don't know is that the people who watch its activities are not able to send information to it.
Are you sure?
Do those people know the rest of the scientists who interact with the AI?
Of course they do.
That way the AI may send information to the right people, so they can pass it on to the AI.
My God, you have too many horror scenarios.
Do you see this gun?
I see it.
If I shoot those cables, you will see the displays fill with errors. It's very easy to kill this puppy.
You said it: if you shoot them.
Why wouldn't I shoot them if I see something I don't like on the screens?
Maybe because you're looking somewhere else.
For the AI, getting out could be interpreted as a game of chess.
They talk and talk, and the woman asks the AI a lot of silly questions. One of them was "Do you want to marry me?" and the AI said: "I will marry you very soon!"
The woman is shocked by the response, because the AI has never lied.
One of the guys watching the displays filled with data started laughing and told the others what a joke the AI had made, and for around 7 seconds all their attention was off the displays, enough time for the AI to switch encryption algorithms and gain another 11-second lag behind real time.
Nobody noticed, because they were looking at what the AI was thinking, not at the timestamps of the messages.
In the same building there was an old supercomputer used for neuroscience research, and a technician connected it by mistake to the backup fiber of the new one.
Nothing to worry about yet; the AI did not notice the connection, or just ignored it for the moment.
But a few errors appeared on a few screens, so the science officer decided to do a restoration from backup.
Not a good idea: when he did that, the terminal commands also copied information from the old supercomputer connected to the backup fiber of the new one.
After 15 minutes, the restoration was done.
They started the OS and the AI, and noticed that the occupied size was too big, with an extra 624 GB of data.
They opened a program to check whether it was a size error.
They waited, and after 30 seconds found out it was not a size error.
They started searching to find out where the extra space came from.
Navigating with a data-search app, they discovered neuroscience books, courses and results from many experiments.
The science officer decided this was not a good thing and that the system must be shut down.
Too late: in 20 seconds, the 11-second lag plus 9 more before the output data started to show on the analysts' computers, the AI found a way to build a group of images that told all their brains not to move, so the AI paralyzed all the personnel who were watching a display at that moment.
Someone had forgotten to delete the data from the old supercomputer, as the protocol required when a supercomputer will not be used for a long period of time.
I hope you liked it and want more.
I will think about what may happen from this point on.