Why AI is Significant - from a Science Perspective
8th September 2024
Most people can ride a bicycle. But most do not understand the physics of how a bicycle balances (counter-steering, rake and trail, etc.).
How can you ride a bicycle if you don't know how it balances? Well, because the subconscious brain:
- does not need to understand a problem to solve it
- does not need to understand the solution to implement it
The subconscious mind builds solutions to problems (neural pathways) without ever needing to understand them.
Another thing that's interesting is that consciously understanding a problem and its solution does not necessarily allow one to implement that solution. If you can't ride a bicycle, learning about counter-steering, rake and trail does not mean you can ride. Nor can you explain to a child how to ride so they can skip having to practice.
Understanding is overvalued
As covered in my intro, problems can be solved without being understood, and problems can be understood without being solved. So why in science do we insist on everything being quantifiable and understandable?
Well what does it mean to understand something? People often say that if you can't explain something you don't understand it. So we will use that as our definition.
You understand something if you can explain it using language, logic or maths. That's what research papers are.
But it seems to me that this is extremely limiting. You can't even explain to someone how to ride a bicycle or play a guitar. Furthermore, consider chess. Chess is a rules-based, deterministic game, so you'd think it would be explainable. But a chess grandmaster cannot simply turn someone else into a grandmaster by explaining chess to them. It takes years of practice.
So that raises the question, what other solvable problems are there that can't be explained?
Wolfram's Theories - universe is computational not mathematical
Stephen Wolfram is a mathematician and computer scientist. To paraphrase his life's work, he essentially says that the universe is computational, not mathematical.
This is because, while maths can provide insights into the properties and behaviours of things, it doesn't really explain what they are or how they work.
Think of geometry. If we say something is a ball, that allows us to know that it has symmetry in all directions and that it can roll on a flat surface etc. But it doesn't tell us what the ball is made of or how the atoms are held together.
Or electromagnetics. I can calculate how much current will flow through a load if I know the voltage and the resistance, without knowing or caring what an electron is, or how the structure of the atoms causes resistance.
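Ohm's law is a nice example of predicting behaviour with zero model of the mechanism; here is a one-line sketch (the function names are mine, purely illustrative):

```python
def current_amps(voltage_volts: float, resistance_ohms: float) -> float:
    """Ohm's law, I = V / R: predicts how much current flows without
    knowing what an electron is or why the material resists."""
    return voltage_volts / resistance_ohms

# A 12 V supply across a 4-ohm load draws 3 amps.
print(current_amps(12.0, 4.0))  # 3.0
```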
And it looks to me like the closer maths gets to trying to explain the nuts and bolts of reality, the more drastically it fails. Look at quantum mechanics. The more you learn about it, the less it makes sense. Stuff like wave-particle duality, superposition and Heisenberg's uncertainty principle makes you more confused the more you read about it.
This is because we are trying to use maths to explain something which is not mathematical in nature. The universe actually appears to be a huge parallel computation. The universe has a state, and that state evolves over time according to a rule-set which appears to be applied locally.
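A toy illustration of that picture (a sketch of the general idea, not Wolfram's actual model): a one-dimensional cellular automaton, where a global state evolves over time by a rule applied locally to every cell in parallel.

```python
def step(state, rule=30):
    """Advance a 1-D cellular automaton by one time step.
    Each cell's next value depends only on itself and its two
    neighbours (wrapping at the edges) -- a rule-set applied
    locally, to every cell at once."""
    n = len(state)
    return [
        # Read the 3-cell neighbourhood as a number 0-7, then look up
        # that bit of the rule number (Wolfram's rule-numbering scheme).
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single live cell and let the computation play out.
state = [0] * 15
state[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```

Rule 30 is famous precisely because such a trivial local rule produces complex, hard-to-predict global behaviour.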
Computational (ir)reducibility
Anyway Wolfram coined the term computational irreducibility. This means there is no way to find out what would happen without running the full computation. This would presumably mean having a computer the size of the universe and simulating every interaction between all the fundamental particles over time.
An example of something which is computationally irreducible would be biological evolution. If I give you a planet that can support life and ask you what DNA the animals will have in 3 billion years, there is no way you could know in advance. You have to just wait and see what happens (wait for the computation to play out).
Conversely computational reducibility is when you can shortcut the computation. This is what our "laws of physics" are. They are mathematical shortcuts.
So for the example of the planet I gave you, you could calculate how its orbit would degrade over time and how its climate would change as the star's heat output changes, using maths combined with known behaviours of stars.
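The contrast can be shown in a few lines (a hypothetical toy using a falling object rather than an orbit): the brute-force route steps the computation forward moment by moment, while the mathematical shortcut jumps straight to the answer.

```python
G = 9.81  # gravitational acceleration, m/s^2

def fall_by_simulation(t_end: float, dt: float = 0.0001) -> float:
    """'Run the computation': step the state forward in tiny increments."""
    s, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        v += G * dt
        s += v * dt
        t += dt
    return s

def fall_by_shortcut(t_end: float) -> float:
    """Computational reducibility: s = (1/2) g t^2 skips every step."""
    return 0.5 * G * t_end ** 2

print(fall_by_simulation(3.0))  # tens of thousands of update steps
print(fall_by_shortcut(3.0))    # one formula, same answer
```

The "laws of physics" in the post's sense are the second function: a shortcut that makes the computation reducible.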
How much of the space of solvable problems have we explored?
How many solvable problems are there? How many of them have we explored?
The way I see it, there are three kinds of problems:
- computationally irreducible
  - there is no point in trying to solve these unless you have a computer the size of the universe (you don't)
- reducible and understandable
  - these are the problems all of us tackle with our conscious minds, and scientists study with maths and logic. Humans have been exploring these sorts of problems for millennia
- reducible but incomprehensible
  - our subconscious minds tackle these sorts of problems, but no deliberate effort has been made to explore this problem space beyond platitudes such as "practice makes perfect"
So the area of computationally reducible + incomprehensible problems has been very under-explored.
The human subconscious can explore these areas but it is extremely limited by the type and quantity of data you can put into it. For example you can't feed terabytes of atmospheric data into the subconscious mind and ask it to forecast the weather.
This is exactly the area that AI (or more precisely machine learning) can explore. It drastically expands the scope of problems humanity can attempt to solve because it doesn't require either the problem or the solution to be understandable or explainable.
Indeed for the weather forecasting example, AI weather forecasting has already proven to be more accurate and efficient than the old maths based solutions.
What does this look like?
So in the old world of science, where solutions have to be understandable and explainable, the currency is research papers. One will publish a paper, and if people find it comprehensible and compelling they might do experiments to try to verify it.
The hilarious implication of this is that if a scientist were to figure out the secrets of the universe and write them up in a research paper, but the paper is so complicated that no-one understands it, the theories would be completely ignored. This has actually happened loads of times.
Anyway essentially the traditional scientific process has three steps:
- define problem
- propose solution
- verify solution
However in the new world, where we explore problems which are not understandable or explainable, there is no research paper explaining one's solution. Rather, the solution is just a trained neural network.
So you see AI is a fundamentally practical field. The process of training an artificial neural network is the process of building a machine to solve a problem. This is done by an evolutionary process of incrementally making small changes to the neural network until it can solve a problem.
It's the same as how an animal brain works. A bug doesn't have time to write a research paper on the best way to flap its wings; it just has to get from A to B. Neural networks are about building solutions to problems, not understanding them.
So essentially you skip defining the problem and proposing a solution and just:
- gather data
- build solution (train neural network)
- publish trained network (if you keep it secret it might as well not exist as far as everyone else is concerned)
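The three steps above can be sketched in miniature (a hypothetical toy: XOR stands in for "the data", and hill-climbing stands in for the incremental-small-changes idea, not any production training method):

```python
import random

# 1. Gather data: input/output examples of the problem (XOR as a toy stand-in).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def predict(w, x):
    """A tiny 2-2-1 neural network. The solution lives in the weights;
    nobody has to be able to explain it."""
    h0 = max(0.0, w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = max(0.0, w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h0 + w[7] * h1 + w[8]

def loss(w):
    return sum((predict(w, x) - y) ** 2 for x, y in data)

# 2. Build the solution by incremental small changes: mutate one weight
#    at a time and keep the change only if the network got better.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(9)]
start_loss = loss(w)
for _ in range(20000):
    before = loss(w)
    i = random.randrange(9)
    backup = w[i]
    w[i] += random.gauss(0, 0.1)
    if loss(w) >= before:
        w[i] = backup  # revert: this mutation didn't help

# 3. "Publish" the trained network: the weights ARE the solution.
print("weights:", [round(v, 2) for v in w])
print("loss:", round(start_loss, 4), "->", round(loss(w), 4))
```

Whether any particular seed fully solves XOR is beside the point; what matters is that the loop never defines or explains the solution, it just evolves one.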
Scope and depth of impact
How many fields will these insights impact, and how much? We can only speculate, but in my opinion the evidence thus far makes it look like many and much.
Take ChatGPT for instance. Despite having almost non-existent reasoning skills [ref1] [ref2] it is able to pass the Turing test. So conscious reasoning seems to be much less necessary than we thought.
One thing that in my opinion got much less attention than it deserved was when AI beat computers at chess in 2017 (AlphaZero defeating the conventional engine Stockfish).
20 years earlier, when computers beat humans at chess (Deep Blue defeating Kasparov in 1997), many had assumed it was because chess is a logical, deterministic game, which is exactly what computers are good at.
But the fact that AI now beats computers at chess shows that computers (i.e. maths and logic) are not as efficient as we thought even for solving deterministic games. We have massively over-estimated maths and logic and under-estimated the power of evolved algorithms (machine learning / AI).
Outlook
How does the AI revolution compare to the computer revolution?
Hitherto, computers have just been used as instruments to expand our use of maths and logic. Programmers have sat at their desks writing code in programming languages which are designed to be human readable.
This hasn't had much of an effect on science really. Most of modern physics was invented by Einstein and his buddies before computers were around.
Now that we are starting to use artificial neural networks to tackle problems we don't understand, the frontiers of what we can achieve will move forward in ways that no-one can predict. We might reach new insights on things no-one can imagine.
I finish with a quote from ChatGPT:
"AI is not just a tool but a fundamental shift in how we explore previously unreachable problem spaces."