The Ontology of AIs
Here's a thought experiment for you:
Imagine that twenty-five years ago, everybody on earth played a sport called tennis. This is the same sport of tennis as we play it today - the same equipment, the same rules, the same dimensions for the court - but with one crucial difference: the accepted convention of playing a double handed backhand has been turned into a rule.
Let's not get into why this might have happened, and whether this was better or worse for the sport. This is, after all, a thought experiment, so play along with it for now. Kids from a young age have been taught to play with a double handed backhand, and even using a single handed backhand calls for some sort of penalty.
And then, into this thought-experiment world, comes a player called Roger Federer*. It is patently obvious that he in fact plays better with a single handed backhand, and what is more, he has some other shots in his repertoire that have simply not been imagined as being possible. He is, in other words, very good at playing the sport as it has been understood until then, but he is clearly capable of so much more.
What should we do?
Should we say "No, no - you please forget your fancy-schmancy single handed backhand, and learn to play tennis the way we play it, and stop it already with those out-of-this-world shots?"
Or should we instead step back and just watch the man get better at whatever he does and however he does it... and adapt the sport to accommodate him instead? Maybe create different versions of tennis with different rules, to allow many more Federer-type variants to flourish?
Or if you prefer a cricketing analogy - was Sehwag not good enough for the ODI format, or was our understanding of how the ODI game needed to be played not good enough to understand Sehwag?
There are examples of a similar nature to be found in music, literature, computer hardware and software development, and everywhere else. When a truly groundbreaking innovation comes along, it is sometimes so out of tune with what is considered conventional at that time that we are faced with two choices. One, we can constrain the performer to fit our priors about what convention should be, or two, we can update our priors about what the convention should be.
The analogy is a fairly obvious one, of course. Tennis, in my thought experiment, is how we do whatever it is that we do in our professional and personal lives - call this workflows, if you like.
Roger Federer is AI.
And the question is: should we beat and constrain and retrofit AI to get better at our current workflows and eventually replace us?
OR
Should we use AI to reimagine both why we do things the way we do them, and why we do them in the first place?
This year is going to be the year of agents in the world of AI. AI labs are busy figuring out how to build agents capable of doing, well, everything (eventually). Doing this means, to use my analogy above, forcing Federer to play with a double handed backhand. And the question that needs to be asked is this: is the world, now and in the future, a better or worse place for it?
Note, before we proceed further, that as with all analogies, so also with this one. It isn't perfect, and there is a danger to thinking too much about the analogy, and not enough about what the analogy represents. AI doesn't map perfectly as a concept to Roger Federer, for one (although both do seem other-worldly!), and the world and its current workflows are unimaginably more complicated than tennis. Again, it is not a perfect analogy!
But that being said, it does help us understand and start to think about the problem before us: should we fit AI into our world as it exists today, or should we change our world around what AI is, and what it is (best as we can tell) best at doing?
And the answer to this question, in turn, depends on an even more basic question: what is AI?
Note that I am not asking how AI (or LLMs, in this case) works. That's many separate questions, with many technical answers. And folks should ask and answer these questions, of course.
But "how does AI do what it does?" is a separate question from "But what is AI?"
Is it something that gives you correct answers to your question, no matter what the question is? For example, can you ask it to list out all of the left-handed Members of Parliament in India since independence? Are you confident that its output will be accurate?
Or is it something that optimizes for a given objective? For example, "Go make more paperclips".
Or is it something that goes and does something for you when you ask it to, but - and this is an important distinction from the optimizer in the previous point - only when you ask it to?
Or is it something that is very good at a specific thing, and hopeless at everything else? Google Translate would be a very good example here. Google Translate is very good at translating, but hopeless at giving directions.
Or, finally, is it something that generates models of a system? A poem about Shakespeare dancing with ferrets on the moon is a (very weird) model of a (very weird) system, for example.
You see, before answering the question about what we should do about Federer, we should be clear about what Federer is.
The actual, real Roger Federer was, of course, a professional tennis player before he retired (alas) from the sport. But he could have been a football player too. There was a chance he could have been a cricket player! Anybody who has seen Roger Federer play will agree (probably) that he would have been one heck of a dancer. We know him as a tennis player because that is what he chose to be, and we tennis fans will forever be grateful to him for that decision. But could he have been something else? Should he have been something else?
If you had the chance to play god, and you could decide for Federer what he could become - what would you choose? Tennis? Or something else? Would your answer depend on what was best for him, for a sport called tennis, for other sports, other professions, or for the world at large, and that for generations to come?
What are the pros of each of these approaches when you stop thinking about the analogy, and start thinking about AI and the world we live in instead? Much more importantly - unimaginably more importantly, in fact - what are the cons (the risks) of each of these approaches?
As it turns out, we do have the chance to answer this question in the case of AI. But in order to answer it, we first need to know what we are dealing with - and I argue that we haven't thought enough about the ontology of AI. Ontology sounds like a fancy-pants word, I know, but it really just means the study of being. It studies "the nature of existence, reality, being, and becoming".
And so we come to the question that all of us should be spending more time thinking about: what is the nature of being, when it comes to AI? What is it?
Janus offers their answer to this question, and defines for us the five categories I mentioned above:
Types of AI
Agents: An agent takes open-ended actions to optimize for an objective. Reinforcement learning produces agents by default. AlphaGo is an example of an agent.
Oracles: An oracle is optimized to give true answers to questions. The oracle is not expected to interact with its environment.
Genies: A genie is optimized to produce a desired result given a command. A genie is expected to interact with its environment, but unlike an agent, the genie will not act without a command.
Tools: A tool is optimized to perform a specific task. A tool will not act without a command and will not optimize for any objective other than its specific task. Google Maps is an example of a tool.
Simulators: A simulator is optimized to generate realistic models of a system. The simulator will not optimize for any objective other than realism, although in the course of doing so, it might generate instances of agents, oracles, and so on.
If you ask me, that article is worth reading in full, multiple times. But even if you choose to not do that, please do consider getting it summarized by an LLM of your choice, and thinking about it.
Because if you think about it, it is rather worrying that we've been talking about AIs without thinking about what they are. I don't pretend to know for certain, and I currently find myself mostly in agreement with Janus' take - but I do know that more people should be thinking about this, and talking about it.
*To tennis connoisseurs: Roger Federer just so happens to be my favorite tennis player of all time. You could make an argument for him not having the best single-handed backhand, I know, but again, this is my thought experiment.