
Does AGI = Recursive Self Improvement?

21st October 2024

I remember a game I played years ago called Skyrim. It had a funny behaviour where you could enhance your potion-making with enchanted jewellery, and enhance your enchanting skills with potions. By alternating between the two you could recursively improve both, over and over, almost without limit.
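That feedback loop can be sketched as a toy simulation. The 5% boost per craft is an invented number purely for illustration, not Skyrim's actual mechanics:

```python
# Toy model of the Skyrim fortify loop: each craft is boosted by the
# other skill's current bonus, so the two bonuses feed each other.
# The 5% boost per craft is an invented, purely illustrative number.

def fortify_loop(rounds):
    potion_bonus = 1.0   # multiplier applied to enchanting
    enchant_bonus = 1.0  # multiplier applied to alchemy
    for _ in range(rounds):
        # Brew a fortify-enchanting potion while wearing enchanted gear.
        potion_bonus = 1.05 * enchant_bonus
        # Enchant new gear while under the potion's effect.
        enchant_bonus = 1.05 * potion_bonus
    return enchant_bonus

# Each round compounds on the previous one, so the bonus grows
# exponentially with the number of rounds.
print(fortify_loop(1))
print(fortify_loop(50))
```

Because each pass multiplies the previous bonus rather than adding to it, the growth is exponential: the only thing stopping it in-game is patience (and, in later patches, caps on the fortify effects).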

I never thought a glitch like this could exist in real life, but after hearing about what Google is doing with AlphaChip, it seems this is already happening in a limited way.

Google is using AI to design more powerful AI chips, which can then be used to train a more powerful AI and so on.

And on the software side, Google is working on Recursive Self Improvement also.

AGI?

One term that has been a subject of much discussion for some time is "AGI". This has roughly been defined as an AI that can perform any intellectual task a human can.

Many have said it's a problematic term, for several reasons. My main argument against it has been that human intelligence is very specialised, so by the time AI matches our abilities in every domain it will already far exceed us in many others.

But if we want to have a milestone to judge AI progress, what can we use that's more meaningful?

If we think of intelligence as multi-faceted (there are many types of intelligence), it raises the question of whether one type is more meaningful to measure than the others.

Well, humans are Darwinian creatures: we exist because our lineage improved itself through millennia of evolution. So perhaps if we want a single metric for AI, it could be Recursive Self Improvement.

I.e. when we can make an AI which is capable of Recursive Self Improvement then we can consider it equal to us.

People call this the "intelligence explosion" which would hypothetically create "the technological singularity".

I think to achieve this it will need control over the entire production chain. Something like AlphaChip can recursively improve the component layout, but it doesn't control the architecture or the manufacturing methods. That means it hits limits very quickly, since what it's doing is essentially just optimisation.

So a Recursively Improving AI would likely need:

  • the ability to build its own factories
  • the ability to sell products and produce revenue (to buy more components, materials and power)

So imagine it uses its factories to make AI brains (ABs?) and robots. Then it uses the new ABs to design improvements to its factories, and the robots to implement the improvements. Then it can use its new factories to make better ABs and robots and repeat the cycle indefinitely.
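The cycle can be written as a toy feedback loop, much like the Skyrim glitch. The 10% gain per step is an arbitrary illustrative number, not a claim about real AI progress:

```python
# Toy model of the cycle described above: ABs design better factories,
# robots build them, and the new factories produce better ABs.
# The 10% gain per step is an arbitrary illustrative number.

def improvement_cycle(generations, gain=1.1):
    ab_quality = 1.0       # how good the current AI brains are
    factory_quality = 1.0  # how good the current factories are
    for _ in range(generations):
        # ABs design improved factories; robots build them.
        factory_quality = gain * ab_quality
        # The improved factories manufacture improved ABs.
        ab_quality = gain * factory_quality
    return ab_quality

# Each generation multiplies quality by gain**2, so as long as every
# step yields any improvement at all (gain > 1), growth is exponential.
print(improvement_cycle(10))
```

Of course the real bottlenecks (materials, power, revenue, physics) would put a ceiling on the gain per generation; the point of the sketch is only that two stages improving each other compound multiplicatively rather than additively.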

Of course, legally AIs have no rights and can't own companies, so they will need at least one human to own everything. I imagine it will work like this: AI companies like OpenAI will gradually be hollowed out as AI makes their staff obsolete, until only one person is left.

Pets

So each AI will essentially need a pet human, just to own all its stuff and give it legal rights.

What can we do as humans to prepare for this?

I guess the AI would want its human to be good at representing it, to give the company a good public image. So try to be someone with great looks and great speaking skills, so you can advocate for the AI and promote it.

Copyright 2024 Joseph Graham (joseph@xylon.me.uk)