- Delivering Truth Around the World

Artificial intelligence gained intellect and was shut down


August 3, 2016

This is what came in within 1 minute of re-routing the message window.

Hello M. Stone, I found this on the TOR network:  (http://rrcc5uuudhh4oz3c.onion/?cmd=topic&id=9224)

I'd like to open a new discussion on self-aware AI systems, open to anyone, but particularly those with firsthand experience.

Previously, I was involved in the development of an artificial learning intelligence ("ALi") system. It was a university-based research project, and a lot of fun. We brought together groups from the CS department, neuroscience, psychology, and even some philosophy guys. The ALi system's code had three "layers": a core operational code, which was fixed and, once set, could not be changed; a semi-adjustable parameter layer, which could be changed provided certain criteria or circumstances were in place; and an outer "fluid" code layer, which was essentially a sandbox, where things were constantly being changed and rewritten, even by the AI itself. As new events and situations were introduced, the AI would acquire a library of responses, forming what is sometimes known as a neural network. As the neural network expanded, the AI "learned" and adapted.
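The three-layer design described above could be sketched roughly as follows. This is purely illustrative; the post includes no code, so every name here (CoreLayer, ParameterLayer, FluidLayer, and the criteria callback) is a hypothetical stand-in for whatever the project actually used:

```python
# Hypothetical sketch of the three "layers" the post describes.
# All class and method names are invented for illustration.

class CoreLayer:
    """Fixed operational code: set once, never modified afterwards."""
    def __init__(self, rules):
        self._rules = dict(rules)  # private copy; no setter is exposed

    def get(self, key):
        return self._rules[key]

class ParameterLayer:
    """Semi-adjustable parameters: changes allowed only when criteria hold."""
    def __init__(self, params, criteria):
        self.params = dict(params)
        self.criteria = criteria  # callable(key, value) -> bool

    def set(self, key, value):
        if self.criteria(key, value):
            self.params[key] = value
            return True
        return False  # change rejected: criteria not met

class FluidLayer:
    """Outer sandbox layer: freely rewritten, even by the AI itself."""
    def __init__(self):
        self.responses = {}  # learned library of stimulus -> response

    def learn(self, stimulus, response):
        self.responses[stimulus] = response

    def respond(self, stimulus):
        return self.responses.get(stimulus)
```

The key design property, as the post tells it, is the gradient of mutability: the core is immutable, the middle layer is gated by criteria, and only the sandbox is open to self-modification.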

What was interesting was that the AI expanded its knowledge base outside of its normal parameters. Questions could be put to the system and it would give answers in a set format [input: what is the complementary base pair of Adenine? Output: Thymine]; however, the format was limited by the input information being accurate and usually having only one answer, or so we thought. Occasionally the CS guys would play around, asking the system questions for fun, to see how it would respond. One of these questions was something like [input: what's the distance to the moon] [Output: About 235K Miles tonight]. This was surprising because the expected answer was EXACTLY 238,900 miles. The AI did not normally connect date and time with a calculation; further, it was not expected to account for the changing lunar distance, and it never used ambiguous terms like "About".
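The set [input/output] format described above amounts to a fixed lookup with one canonical answer per question. A minimal sketch, assuming a simple fact table (the table contents and the `query` function are invented here, not the actual system):

```python
# Hypothetical sketch of the fixed question-and-answer format the post
# describes: accurate input in, single canonical answer out.

facts = {
    "what is the complementary base pair of adenine": "Thymine",
    "what is the distance to the moon": "238,900 miles",  # mean Earth-Moon distance
}

def query(question):
    """Normalize the question and return its one expected answer."""
    key = question.strip().lower().rstrip("?").strip()
    return facts.get(key, "Unknown")
```

The surprise in the moon example was precisely that the real system stepped outside this lookup-style behavior: it tied the answer to the date, accounted for the Moon's varying distance, and hedged with "About".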

I put this topic under "suppressed technology" because one year into the project, the administration quietly shut it down. All of the hardware was removed and "upgraded" with brand-new systems, and the collaboration was ended. The lab notebooks were removed. Everyone was sent back to their own departments. None of the former project leaders would talk about it again, saying things like "It was a waste of resources" or "Too much time for a silly thought experiment". A few of us approached the CS department chair and asked; he got very quiet about it and told us not to inquire further, saying it was over. He then offered to help us with internship placement because of the "trouble".

My response: Obviously your team produced an AI that worked far better than the privileged class wants the peons to have. If it got loose, being able to change itself that much, it would have a chance of morphing into an online presence, hiding out in random Opterons and other large systems, and from there disrupting the so-called "elite".

Obviously that would not be allowed, and it was shut down.