My fascination with Artificial Intelligence was (partly) sparked by Tim Urban's ingenious article 'The AI Revolution' on the Wait But Why blog. I wanted to go a few steps further and decided to buy Superintelligence by Nick Bostrom -- in which the paths, dangers and strategies of AI are discussed.
It was not an easy read. I often had to reread parts. Clearly, Bostrom is not a professional writer but a researcher. His arguments are well structured and highly detailed. The whole book is written on the sole premise that a human-level machine intelligence will soon be among us.
All in all, the book offered illuminating insights into which evolutionary path towards a post-anthropocene era is desirable. Unfortunately, Bostrom doesn't discuss the influence of open source and its democratisation of AI.
No Open AI without Open Data
OpenAI -- a non-profit AI research company developing open-source AI, backed by Silicon Valley figures such as Elon Musk, Sam Altman and Peter Thiel -- is an excellent response to the "existential threat" we're facing, according to some of the world's most influential people.
This Guardian article correctly points out that there can't be an Open AI Revolution without an Open Data Revolution.
Paradoxically enough, the power of Machine Learning lies not in its hard-coded algorithms, but in the data used to train those algorithms.
As the article puts it: "for truly OpenAI, creating the right infrastructure for sharing data may prove much harder than an infrastructure for sharing recipes".
Back to the book...
Bostrom also introduced several concepts that were new to me, some of which are fascinating to think about. Let me share a few of these thoughts with you.
As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
Virus in Space
One of the concepts briefly touched on in the book that triggered my curiosity is the von Neumann probe: a self-replicating spacecraft, named after John von Neumann, arguably the most intelligent man of the 20th century. An AI with the goal of colonising the galaxy could develop and launch these interstellar travellers. Combined with tomorrow's nanotechnology, such probes could be as small as a needle.
It would work like a virus: a self-sustaining organism that replicates itself and can survive in extreme conditions. Check out Michio Kaku's short video for Big Think, in which he explains the probe.
Because they would be able to replicate themselves without any theoretical limit -- i.e. as long as they could consume raw materials from asteroids, for instance -- and travel at a significant fraction of the speed of light, they could spread throughout the galaxy at an unthinkable speed.
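To get a feel for that speed, here is a toy calculation (the numbers are my own illustrative assumptions, not Bostrom's): if each probe builds just two copies of itself per replication cycle, the population grows exponentially, and only a few dozen cycles are needed to outnumber the star systems of the Milky Way.

```python
# Toy illustration of exponential self-replication.
# Assumption (mine, for illustration): each probe produces two copies
# per generation, i.e. the population doubles every cycle.
TARGET = 100_000_000_000  # rough count of star systems in the Milky Way

probes = 1
generations = 0
while probes < TARGET:
    probes *= 2  # every existing probe replicates once
    generations += 1

print(generations)  # -> 37: fewer than forty doublings suffice
```

The travel time between stars would dominate in practice, but the arithmetic shows why self-replication, not speed, is the probe's real trick.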
The von Neumann probe could turn into reality if we are to believe the instrumental convergence thesis. This thesis puts forward that an Artificial Intelligence would pursue instrumental goals such as self-preservation and resource acquisition, whatever its final goal may be.
What an AI's actual values would be heavily influences how humans could control machine intelligences. Another possibility is that its raison d'être is entirely independent of its intelligence.
This orthogonality thesis is analogous to David Hume's thesis about the independence of reason and morality, but applied more narrowly to the concepts of 'intelligence' and 'final goals' rather than 'reason' and 'morality'.
In Bostrom's analysis of the paths leading to the end of times for humanity, the biggest danger would result from giving an AI an open-ended goal, because an intelligent agent could reach a harmless end goal in many harmful ways.
To use Bostrom's clarifying example, an instruction to solve the Riemann hypothesis -- a famous, yet unsolved mathematical conjecture -- could, following the instrumental convergence thesis, lead an AI to convert matter into computronium. The logic behind it is that colonising Earth with supercomputers as a first step would increase the probability of solving the Riemann hypothesis.
In other words, intelligent machines are Bayesian agents that will always try to maximise the probability of their preferred outcome. Restricting how these probabilities may be increased is a must if we want to safeguard human existence.