
Chain of Reasoning

25th October 2024

Something I find fascinating is that when you formulate a chain of reasoning, it can lead you to crazy places even if every individual reasoning step appears sound.

This is part of the fun of writing blog posts: I can start with an idea I want to explore, write it down, and keep writing until I reach a natural conclusion.

Then, if the article arrives at a "crazy conclusion", I get to play the fun game of figuring out where the chain of reasoning went wrong.

Though honestly, I often just leave the crazy conclusion in place because it's funny. In my last blog post I concluded that humans will become pets to AI.

Anyway, the thing that worries me is that I have actually found a chain of reasoning which leads to a crazy conclusion, and I can't find the flaw in it.

Let's walk through it.

Chain of Reasoning

Premise 1: the AI revolution is real

I only include this one because I know many people think AI is fake and not useful.

They think, for example, that ChatGPT is just an illusion of intelligence, and that since it lacks the nebulous property of being "real" it can never be trusted to do anything properly.

Premise 2: AGI is coming

AGI means AI that matches or exceeds human ability across all domains.

Most people in the field seem to think that AGI is not only possible, but coming in the next few years [ref].

Premise 3: AGI leads to recursive self-improvement

Since AGI means equalling human ability in all domains, by definition an AGI is at least as good as humans at AI research.

So it follows that AGI will be capable of recursive self-improvement. That is, version 1 makes version 2, which makes version 3, and so on, getting better each time.

Premise 4: Recursive self-improvement = 😱

If it has enough resources, then a recursively self-improving AI should become arbitrarily smart very quickly.

Let's imagine how it might play out:

  • version 1: 130 IQ. It makes a version 10% better.
  • version 2: 143 IQ. It makes a version 20% better.
  • version 3: 172 IQ. It makes a version 30% better.
  • version 4: 224 IQ. It makes a version 40% better.
  • version 5: 313 IQ. It makes a version 50% better.
  • version 6: 470 IQ…

As you can see, it gets really crazy very quickly.
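The compounding above can be sketched in a few lines of Python. The starting IQ of 130 and the improvement rate growing by 10 percentage points per generation are just the illustrative assumptions from the list, not real measurements, and the exact figures differ by a point or two depending on where you round:

```python
# Sketch of the compounding growth described above: each version builds
# a successor whose improvement rate itself grows 10 percentage points
# per generation. All numbers are illustrative assumptions.
def self_improvement(start_iq=130.0, generations=6):
    """Return a list of (version, iq) pairs, one per generation."""
    iq = start_iq
    history = []
    for version in range(1, generations + 1):
        history.append((version, round(iq)))
        rate = 0.10 * version  # version n builds a version (10 * n)% better
        iq *= 1 + rate
    return history

for version, iq in self_improvement():
    print(f"version {version}: {iq} IQ")
```

Even with these modest per-step gains, the growth is faster than exponential, because the growth rate itself keeps increasing.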

Conclusion

This could lead to all sorts of nightmare science fiction:

  • AI takeover, like Terminator
  • Humans becoming cyborgs
  • The world assimilated into grey goo or computronium

Where does it go wrong?

OK we have reached our "crazy" conclusion. Where did our chain of reasoning go wrong?

Premise 1: the AI revolution is real

I can't see that this is wrong. As the French philosopher Descartes pointed out, we can only see the world through our own perception. And in my anecdotal experience, AI tools such as ChatGPT do show "real" intelligence.

Honestly, I think one can only disagree with this by not paying attention.

Premise 2: AGI is coming

Is there some invisible barrier between where we are now and the milestone of AGI?

This is possible. One big limitation of LLMs, for example, is that they take months to train at huge expense, whereas a human can learn in real time, with just a nightly sleep cycle to regenerate.

It could be that problems like this are harder to fix than we think, which would push back AGI.

Premise 3: AGI leads to recursive self-improvement

The main thing that could stop AGI from leading to recursive self-improvement would be human intervention.

However, that would rely on every single AI researcher in every country having enough restraint not to let their AI recursively self-improve. The moment one person or organisation with sufficient resources does it, the cat's out of the bag.

However, we have mostly resisted the temptation to nuke each other or practice eugenics, so it's possible we can restrain ourselves with AI too.

Premise 4: Recursive self-improvement = 😱

The main argument against "recursive self-improvement = 😱" would be that there may be a limit to how much IQ is actually useful or realisable.

I may be smarter than a crocodile but it can still kill me.

Conclusion

Honestly, none of the arguments I have found against the four premises are hugely convincing to me. However, there are almost certainly some unseen elements that will come into play at some point and cause a different outcome than the extreme one predicted.

People have made all sorts of fantastic predictions during previous technological revolutions, which normally did not entirely come true.

So all I can say is that AI has potentially huge implications, and I have no idea what's going to happen.

Change is coming, and it will test us in ways we haven't been tested before. Better be ready!

Copyright 2024 Joseph Graham (joseph@xylon.me.uk)