What Do You Mean by "General" Intelligence?
AGI is either already here, or just around the corner, or ain't never gonna happen. Regardless of which camp you fall in, you will have heard these three assertions time and time again over the last two years.
But what is AGI? Here's Wikipedia:
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.
So we define artificial general intelligence not in terms of what it is, but in terms of what it matches or exceeds. Note, also, that Wikipedia has performed a neat little sleight of hand here - a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. This spares us from having to get into a discussion of what human intelligence is!
But let us hold our blog post to a higher standard, and try to define AGI in terms that do not shift with the comparison. Rather than define AGI in comparative terms, how about we try to define GI (A or otherwise)?
What is, in other words, General Intelligence?
A generally intelligent entity is one that achieves a special synthesis of three things:
A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with a reasonable accuracy. World models in humans are generally referred to as “intuition”, “fast thinking” or “system 1 thinking”.
A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways – it is “reasoning”, “slow thinking” or “system 2 thinking”.
If you have these three things, you can build a generally intelligent agent.
So embodiment, plus System 1 and System 2 thinking. Those are necessary, but not sufficient, conditions for GI. Without these three, you simply cannot have GI.
But even with these three being present, you will not have GI... unless you are able to "coherently execute the above cycle repeatedly over long periods of time, thereby being able to attempt to optimize any objective."
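The cycle described above - fast intuitive inference, escalation to slow deliberation when needed, repeated over long periods - can be sketched as a simple agent loop. This is a minimal illustrative sketch under my own assumptions, not anyone's actual architecture; every name in it (`fast_guess`, `deliberate`, `agent_step`, the confidence threshold) is hypothetical.

```python
# A purely illustrative sketch of the GI "cycle": a fast world model
# (System 1) produces a cheap guess, and a slower deliberation step
# (System 2) is invoked only when the fast guess is not confident enough.
# All names and numbers here are hypothetical stand-ins.

def fast_guess(observation):
    """System 1: quick, approximate inference from the world model."""
    # Stand-in: a trivial heuristic with a self-reported confidence.
    action, confidence = "default_action", 0.4
    return action, confidence

def deliberate(observation):
    """System 2: slow, expensive reasoning over the same observation."""
    # Stand-in for planning, search, or reflection.
    return "planned_action"

def agent_step(observation, confidence_threshold=0.7):
    """One turn of the cycle: try System 1, escalate to System 2 if unsure."""
    action, confidence = fast_guess(observation)
    if confidence < confidence_threshold:
        # Knowing *when* to think slowly is part of the mechanism.
        action = deliberate(observation)
    return action

# Running the cycle "repeatedly over long periods of time":
for observation in ["obs_1", "obs_2"]:
    agent_step(observation)
```

The interesting part, per the essay, is not the `deliberate` function itself but the dispatch decision around it: when to invoke slow thinking, and when to stop.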
The first of these is robotics, a field that I am increasingly curious about (as you may have noticed, given the number of videos I have shared this year about advancements in robotics). The second - what is referred to here as "intuition", "fast thinking" or "system 1 thinking" - is roughly what we think of as chatbots today. That is kind of true, kind of "meh, not really true", but it is good enough for our purposes in this blog post.
What we are missing is the third of these - a mechanism for performing deep introspection on arbitrary topics. The author of the blog post we are talking about here feels that the ability to do System 2 thinking by machines is about two to three years away:
There is not a well known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.
Actually, he says, it is not just the ability to "do" System 2 thinking in order to achieve any given objective. It is the ability to do System 2 thinking... and also to figure out when to use it, and when to stop using it. Can the agent synthesize a plan and work at it, or choose (because it realizes the objective is too hard) not to work at it?
... and all this leads the author (J Betker, of OpenAI) to predict that we are about three years away from AGI in the best case scenario. Five years at the most, he says.
I have no clue if the timeline makes sense or not. But reading this essay helped me become a little bit clearer about how to think about the concept of general intelligence.