• AI might not recursively self-improve (part 2)

    A few current and former Google employees have said that AI will recursively self-improve in the next 3-6 years. This is a post about that claim. (https://www.youtube.com/watch?v=goG7g6ao5m4, https://www.youtube.com/watch?v=9V6tWC4CdFQ)

    Just so you have the basics in mind before I make my argument, here are two terms I will use:
    Innate intelligence – the raw intelligence potential of a system
    External intelligence – the tools and external capabilities a system has (a computer, a car, a hammer, and so on)
    I will define these more as I go along.
    To make the argument I want to do a simple thought experiment.
    Imagine an AI that is about to recursively self-improve, and it finds that there are 10 things it can improve. So it improves them. The basic argument is that this new, improved AI will then discover 10 more things that can be improved, and the cycle of recursive self-improvement begins.

    To think about this, I want to ask where the improvements will occur: specifically, will they improve innate intelligence, external intelligence, or both?
    I can easily imagine an AI improving its external intelligence through self-improvement. I think this has already been shown to happen with training, RL and RLHF: AI can use tools, can reason to some extent about things, and so on. But there is a big weakness in external intelligence improvement, namely that it is heavily determined by the data the system is trained on. In general, if it's not in the data, it does not show up in the system.
    This is the same as if we were in ancient Greece and asked Aristotle to invent the iPhone – it just wouldn't happen. Too many technologies and too much knowledge had to be created before the iPhone could exist. This also means I can definitely envision an AI that improves itself but only fixes 10 bugs, or refactors the code in 10 ways that might be better. It is by no means clear that an AI that improves itself will produce new innate intelligence or new external intelligence.

    The second problem with external intelligence is that it improves in a stepwise way rather than gradually. Just like with humans: we either have invented the car or we have not, we either have a hammer or we do not. Each step is distinguished by a capability. There are probably some gradual counterexamples that I can't come up with right now, but I think for the most part the improvement is stepwise.
    It is by no means clear to me that an AI that improves itself will automatically create new external capabilities that are not present in its data, and more importantly, how do we know the AI will discover the knowledge needed to create those capabilities if that knowledge is not in the training data?

    The second part of the argument is innate intelligence, and I think this is even fuzzier, since innate intelligence in humans is neither well defined nor well understood. For the most part, it is generally accepted that humans thousands of years ago had approximately the same innate intelligence as modern humans. Having read the works of ancient philosophers like Aristotle, I have no problem believing this. They were limited by their external intelligence, by the technology and understanding available to them, rather than by their innate intelligence.
    This is why I believe that for AI to improve its innate intelligence, it needs algorithmic improvements, not data improvements. We could argue about which basic algorithms underlie human intelligence, but I think I could make the argument that AI is missing the fundamental algorithmic flexibility and capability needed to be called as innately intelligent as humans. There is a lot of structure in the human brain that processes and differentiates incoming stimuli, whereas current AI is really just running a handful of algorithms, and very little to none of that is about shaping and processing the actual content of the data. It does next-token prediction, plus semantic processing through vectors that capture relationships between words, paragraphs and so on, and that is about it as far as I know. This means that most of the intelligence is in the data rather than in the processing of that data.
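    To make that picture concrete, here is a deliberately tiny sketch in Python (my own toy stand-in, not how production models are actually built): bigram counts play the role of next-token prediction, and co-occurrence counts play the role of learned word vectors. The processing is nothing more than counting and a dot product; everything the system appears to know comes from the text it is fed.

        # Toy stand-in for "next-token prediction plus semantic vectors":
        # all the apparent knowledge lives in the corpus, not in the code.
        from collections import defaultdict, Counter
        import math

        corpus = "the cat sat on the mat . the dog sat on the rug .".split()

        # Next-token prediction reduced to bigram counts: predict the most
        # frequent follower seen in the corpus.
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        def predict_next(word):
            followers = bigrams.get(word)
            return followers.most_common(1)[0][0] if followers else None

        # "Semantic" vectors reduced to co-occurrence counts in a one-word
        # window, compared with cosine similarity.
        vocab = sorted(set(corpus))
        context = {w: Counter() for w in vocab}
        for i, w in enumerate(corpus):
            for j in (i - 1, i + 1):
                if 0 <= j < len(corpus):
                    context[w][corpus[j]] += 1

        def cosine(a, b):
            dot = sum(context[a][t] * context[b][t] for t in vocab)
            norm_a = math.sqrt(sum(v * v for v in context[a].values()))
            norm_b = math.sqrt(sum(v * v for v in context[b].values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        print(predict_next("the"))   # reproduces whatever the corpus statistics say
        print(cosine("cat", "dog"))  # 1.0 here: identical toy contexts
        print(cosine("cat", "mat"))  # lower: the contexts only partly overlap

    Swap in a richer corpus and the same trivial code looks smarter, which is the sense in which the intelligence sits in the data rather than in the processing of that data.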

    Humans, on the other hand, do deep processing of data. We have the capability to create abstractions, build temporal and spatial understanding of data, do face recognition, do emotional processing that signals the importance of a stimulus or thought, use the visual cortex to differentiate stimuli, use the prefrontal cortex for planning, reasoning, introspection, executive processing and so on. We also have a body capable of fine motor control for interacting with the environment, and we can distinguish when a stimulus is novel versus when it is not, and when a stimulus is interesting to us versus not. None of this processing is in current AI. Current AI is totally dependent on already-processed data to improve its external intelligence but has (imo) very little innate intelligence. Though it could be argued that most external intelligence comes from innate intelligence, given the right external intelligence as an affordance to express the potential of that innate intelligence.