The slow intelligence explosion

Each new technology that we invent has improved our ability to create the next generation of technologies. Assuming the relationship is a simple proportional one, our progress can be modelled as \displaystyle \frac{dI}{dt} = \frac{I}{\tau} for some measure of "intelligence" or computational power I. This differential equation has the solution \displaystyle I = I_0e^{\frac{t}{\tau}} - exponential growth, which closely matches what we see with Moore's law.
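
For anyone who wants to check the maths: separating variables gives \displaystyle \frac{dI}{I} = \frac{dt}{\tau}, and integrating both sides gives \displaystyle \ln I = \frac{t}{\tau} + C, i.e. \displaystyle I = I_0e^{\frac{t}{\tau}} where I_0 is the value of I at t = 0.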

The concept of a technological singularity is a fascinating one. The idea is that eventually we will create a computer with a level of intelligence greater than that of a human being, which will quickly invent an even cleverer computer, and so on. Suppose an AI of cleverness I can implement an AI of cleverness kI in time \displaystyle \frac{1}{I}. Then the equation of progress becomes \displaystyle \frac{dI}{dt} = I^2(k-1), which has the solution \displaystyle I = \frac{1}{(k-1)(t_0 - t)} for some constant t_0. But that means that at time t = t_0 we get infinite computational power and infinite progress, at which point all predictions break down - it's impossible to predict anything about what will happen post-singularity from any pre-singularity time.
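
Again, for those who want to check: separating variables gives \displaystyle \frac{dI}{I^2} = (k-1)\,dt, which integrates to \displaystyle -\frac{1}{I} = (k-1)(t - t_0), i.e. \displaystyle I = \frac{1}{(k-1)(t_0 - t)}. Starting from intelligence I_0 at t = 0, the singularity arrives at \displaystyle t_0 = \frac{1}{(k-1)I_0} - a finite time in the future for any positive I_0.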

Assuming human technology reaches a singularity at some point in the future, every human being alive at that time will have a decision to make - will you augment and accelerate your brain with the ever-advancing technology, or leave it alone? Paradoxically, augmentation is actually the more conservative choice - if your subjective experience is being accelerated at the same rate as normal progress, what you experience is just the "normal" exponential increase in technology - you never actually get to experience the singularity because it's always infinitely far away in subjective time. If you leave your brain in its normal biological state, you get to experience the singularity in a finite amount of time. That seems like it's the more radical, scary and dangerous option. You might just die at some point immediately before the singularity as intelligences which make your own seem like that of an ant decide that they have better uses for the atoms of which you are made. Or maybe they'll decide to preserve you but you'll have to live in a universe with very different rules - rules which you might never be able to understand.
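
To make the "infinitely far away in subjective time" claim concrete, suppose (as an extra assumption on top of the model above) that an augmented brain's subjective rate of experience is proportional to I. Then the subjective time remaining before the singularity is \displaystyle \int_t^{t_0} I(t')\,dt' = \int_t^{t_0} \frac{dt'}{(k-1)(t_0 - t')}, which diverges logarithmically - however close t gets to t_0, there is always an infinite amount of subjective experience still ahead.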

The other interesting thing about this decision is that if you do decide to be augmented, you can always change your mind at any point and stop further acceleration, at which point you'll become one of those the singularity washes over, rather than one of those surfing the wave of progress. But going the other way is only possible until the singularity hits - then it's too late.

Of course, all this assumes that the singularity happens according to the mathematical prediction. But that seems rather unlikely to me. The best evidence we have so far strongly suggests that there are physical limits to how much computation you can do in finite time, which means that I will level off at some point and progress will drop to zero. Or maybe growth will ultimately end up being polynomial - this may be a better fit to our physical universe, where in time t we can access O(t^3) computational elements (the volume we can reach at the speed of light grows as t^3).
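
One way to sketch the levelling-off case is to put a hard physical cap I_{\max} into the original equation, giving the logistic equation \displaystyle \frac{dI}{dt} = \frac{I}{\tau}\left(1 - \frac{I}{I_{\max}}\right) - growth that looks exponential while I is small but flattens out as I approaches I_{\max}.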

To me, a particularly likely scenario seems to be that, given intelligence I, it always takes the same amount of time to reach kI - i.e. we'll just keep on progressing exponentially as we have been doing. I don't think there's any reason to suppose that putting a human-level AI to work on the next generation of technology would make it happen any faster than putting one more human on the task. Even if the "aha moments" which currently require human ingenuity are automated, there are plenty of very time-consuming steps required to double CPU performance, such as building new fabrication facilities and the machines that make the next generation of ICs. Sure, this process becomes more and more automated each time, but it also gets more and more difficult, as there are more problems that need to be solved to make the things work at all.

In any case, I think there are a number of milestones still to pass before there is any chance we could get to a singularity:

  • A computer which thinks like a human brain, albeit at a much slower rate.
  • A computer which is at least as smart as a human brain and at least as fast.
  • The development of an AI which can replace itself with smarter AI of its own design without human intervention.

3 Responses to “The slow intelligence explosion”

  1. Jim Leonard says:

    While I know the singularity is inevitable, it scares the holy hell out of me. I am comforted, somewhat, by the fact that I will be long dead before the genocide begins.

  2. Jim Leonard says:

    It will never become an option. We'll just get in their way.

    To be honest, once they're capable of creating intelligence that outclasses them, THEY (model 1.0) will be in their (model 2.0's) way.

    I am reminded of The Last Question by Asimov: http://www.multivax.com/last_question.html
