This post distills this morning’s Takshashila talk on this topic into seven points.
I don’t know if you’ve noticed, but that’s pretty much all I’ve been talking about on EFE for a very long time now. In my talk, I decided to try and lay out as clearly as possible just what it is about AI that has me so very fascinated (and just a little bit horrified).
I’m going to present here those exact same points, and I am deliberately going to be as brief and succinct as possible. Please feel free to ask your LLM of choice to expand upon these points, or to talk them over with folks you consider to be AI experts1.
Here we go:
1. AI has improved over time. While many things have gone into this, the single biggest explanatory factor is that the computing power we can call upon has gone up over time. This is Rich Sutton’s Bitter Lesson. Increasing the amount of compute one has access to is a necessary condition for reaching AGI, not a sufficient one. Other things will matter, but nothing will matter as much as this. (A toy numerical sketch of what this compute-to-quality relationship might look like follows after these seven points.)
2. AGI is a winner-take-all phenomenon. You only win if you are the first to develop AGI. There is no meaningful second prize in the race to build AGI.
3. What does this imply? That any firm that is serious about developing AGI has to deploy massive amounts of capital. Massive to the point that it begins to seem insane in comparison to every alternative deployment of capital.
4. We need to be able to see evidence of that capital being deployed to build out all of this massive compute, with attendant changes in land use, water use and, above all, electricity use. We also need to see evidence that AI is increasing in weight in equity indices. Is it beginning to have a significant impact on measured GDP growth?
5. Do we see evidence that the deployment thus far has resulted in an increase in the quality of our AI models?
6. Does current data show that AI is getting better at doing real, meaningful tasks in the real world?
7. Given all this, we need to ask what the next two years might look like.
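To make point 1 a little more concrete, here is a minimal Python sketch. It assumes the power-law relationship between training compute and model loss that the empirical scaling-law literature reports, roughly L(C) ≈ a·C^(−b); the coefficients, the FLOP counts and the helper name `toy_loss` are made up purely for illustration, and are not taken from the talk or from any real model.

```python
# A purely illustrative sketch of "more compute -> better models".
# Assumption: loss falls as a power law in training compute, L(C) = a * C**(-b).
# The coefficients and FLOP counts below are invented for illustration only.

def toy_loss(compute_flops: float, a: float = 1e3, b: float = 0.05) -> float:
    """Hypothetical loss as a power law in training compute."""
    return a * compute_flops ** (-b)

if __name__ == "__main__":
    for exponent in (21, 23, 25, 27):  # 1e21 ... 1e27 FLOPs of training compute
        c = 10.0 ** exponent
        print(f"compute = 1e{exponent} FLOPs -> toy loss = {toy_loss(c):.1f}")
```

If a relationship of this assumed form holds, every further order of magnitude of compute buys a roughly constant multiplicative improvement, which is one way of seeing why the capital deployment in points 3 and 4 has to be so enormous.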
Note that the first six points are not projections. They are data points that exist, which means all of this has either already happened or is happening as we speak. The seventh point is a projection, drawn from a report written in April of this year. The report’s first prediction was for September 2025; you can decide for yourself whether its authors were on track, and we’re close enough to January 2026 to ask whether that prediction looks like it will work out. Read through the rest of the essay too.
Do you have to agree with the rest of the essay? Of course not. You should form your own predictions! But it doesn’t hurt to read what they have to say.
You should ask yourself whether I have misrepresented some of the data, or have left out other data points that would either strengthen or weaken this blog post. If you can think of any, please do let me know.
But at the margin, I feel very strongly that each one of us should be reading more about what is happening with AI, and how this will change our lives. These seven points are my reasons for saying so.
If either of these two disagrees with me, please let me know. I would love to know why I’m wrong.