The Economics of Building Things With AI
Experiments All The Way Down
What connects a half-finished course outline, a single guest lecture, and a blog post that wrote itself?
Bear with me, because this one takes a few turns.
The Course That Wasn’t (Yet)
Around August of last year, I started putting together a course called The Economics of AI. It was going to be an online course, 12-16 weeks long, covering everything from market structure in the age of foundation models to the labor economics of automation to the regulatory questions that everybody has opinions about but nobody has answers to. I prepped the outline, did a bunch of research, and got the skeleton more or less into shape by November.
And then it just... sat there. A course like this, on a topic like this, is always going to be incomplete -- the field moves faster than any syllabus can keep up with. But the structure was solid enough. It was ready to be taught. It just hadn’t found its moment yet.
Who knows, it may still happen some day.
One Lecture Instead of Sixteen
Then, in January this year, the Takshashila Institution invited me to give a lecture on The Economics of AI. One lecture, not sixteen. And because we now live in a world with Claude Opus 4.6 and ChatGPT 5.3 and other wondrous things of a mysterious nature, I asked myself -- why not see if AI can help me build the presentation itself?
I fed my course notes into Claude CoWork, and asked it to spin up a presentation [1]. It did a fairly good job, and you can see the result here.
I delivered the lecture, and we (the GCPP students and I) ended up spending more time talking about what wasn’t in the presentation than what was. I can pay no higher compliment to the students at the Takshashila Institution, because they were able to not just get what the presentation was about, but also point out what was missing. The most obvious omission is the regulatory aspect, of course, and there are a million others. Ads in ChatGPT, for instance -- how should an economist think about that?
But in any case: the presentation was designed and delivered with the help of AI.
The “Why Not” Cascade
Now here’s where things got interesting. And by “interesting” I mean that I fell down a rabbit hole from which I have not yet fully emerged.
Usually, after a guest lecture, I email the slides to whoever organized the event, and they forward them to the attendees. That’s how it’s always been done, and there’s nothing wrong with it.
But why not, I asked myself, try and use these tools for the sharing part too?
And once you ask one “why not,” a whole cascade of them comes tumbling out:
Why not build a website that is the presentation -- one slide per page, readable at your own pace?
Why not layer clickable hotspots on top of each slide, right on the concepts worth digging into?
Why not have each hotspot generate a prompt that helps the student learn that specific topic better?
Why not do this for every slide?
Why not collect some basic information about the learner -- their profession, what they’re curious about -- and use it to tailor every prompt?
Why not gather all the collected prompts into a single .md file that also contains instructions for whichever LLM the student uses?
Why not make the whole thing as simple and as privacy-respecting as possible?
And so I sat down with Claude Code and we built it [2]. You can see the companion website here.
What This Simple Little Thing Actually Does
Here’s what happens when you visit the site. You tell it your name (optional), your profession, and what you’re curious about. Then you browse through the slides at your own pace. On each slide, you’ll see small numbered circles marking the ideas worth exploring. Hover over one to see the prompt; click it to copy a question tailored to your background. The star in the top-right corner is a “master prompt” that covers the entire slide.
Every prompt you click gets collected. When you’re done, you download a single .md file. Upload that file -- along with the original .pptx -- into any LLM, and it becomes a tutor that already knows who you are, what you do, and what caught your attention. It walks through your collected prompts one at a time, waits for you to ask follow-ups, and at the end generates a learning pack with takeaways, further reading, and a full conversation summary.
That’s it. That’s the whole thing.
The website does not store your information anywhere. Not your name, not your email, nothing. It remembers nothing about you and tracks nothing about you. Everything stays in your browser, and the .md file is generated entirely on your machine. It is a drop-dead simple revision tool that does one thing: it helps you go back through a presentation you’ve already seen, and turn it into a conversation.
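For those of you who like to see how the sausage is made: the mechanism really is as small as it sounds. Here is a minimal sketch of the idea in TypeScript -- the names and structure below are mine, for illustration, and not the site's actual code -- showing that collecting clicked prompts and generating the .md file needs nothing more than a bit of in-browser state and a Blob download.

```typescript
// Illustrative sketch only -- hypothetical names, not the site's real code.
// The learner's details and every prompt they click live in plain
// client-side state; the .md file is assembled and downloaded locally.

interface LearnerProfile {
  name?: string;      // optional, never sent anywhere
  profession: string;
  curiosity: string;
}

interface CollectedPrompt {
  slide: number;      // which slide the hotspot sits on
  hotspot: number;    // which numbered circle was clicked
  text: string;       // the prompt, already tailored to the profile
}

const collectedPrompts: CollectedPrompt[] = [];

function collectPrompt(prompt: CollectedPrompt): void {
  collectedPrompts.push(prompt);
}

function buildMarkdown(profile: LearnerProfile): string {
  const header = [
    "# Learning companion prompts",
    `Learner: ${profile.name ?? "anonymous"} (${profile.profession})`,
    `Curious about: ${profile.curiosity}`,
    "",
    "Instructions for the LLM: act as a tutor. Walk through the prompts",
    "below one at a time, pause for follow-up questions, and end with a",
    "learning pack: takeaways, further reading, a conversation summary.",
    "",
  ].join("\n");

  const body = collectedPrompts
    .map((p) => `## Slide ${p.slide}, hotspot ${p.hotspot}\n\n${p.text}`)
    .join("\n\n");

  return `${header}\n${body}\n`;
}

// Generate the file entirely on the learner's machine via a Blob download.
function downloadMarkdown(profile: LearnerProfile): void {
  const blob = new Blob([buildMarkdown(profile)], { type: "text/markdown" });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = "companion-prompts.md";
  link.click();
  URL.revokeObjectURL(url);
}
```

That is the whole of the privacy story, too: when everything happens in the browser, there is simply no server for your information to be sent to.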
“But Doesn’t NotebookLM Already Do This?”
You could ask how this is different from NotebookLM, or Gamma, or any of the other AI tools that work with presentations. And the answer is: maybe, sure, some of them do some of these things, and some others do a helluva lot more.
But think of it this way. It cost me nothing to run this experiment, and I learnt a lot about how to use these tools along the way. I would like to think that my students benefited too, both from the presentation and the companion website (although that is for them to say and me to guess at!).
But also: I now have a skill that I can use. And Claude Code has a skill that it can use -- generate helpful learning companion websites from a given presentation. We’ve both gotten better at something. That’s not nothing.
And the simplicity is the point. This website isn’t trying to be a platform. It doesn’t need your email address. It doesn’t have a pricing page. It’s a revision tool, and it does nothing else.
But Hang On -- If It’s This Easy...
...then why not help other people do the same thing? See tweet from Roon above!
Why not take all of this -- the slide viewer, the hotspot system, the prompt collection, the .md generation -- and turn it into a template that any educator can use with their presentation?
So we did that too. Here’s the GitHub repo. It’s a template: you bring your .pptx, run a setup script, feed your slides to an LLM to generate prompts, drop them into a config file, and you have your own companion website. Free, open source, no dependencies, no build step. Use it, share it, remix it -- if any of this helps anyone learn better, yay.
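To give you a flavour of what "drop them into a config file" means in practice, here is roughly the shape such an entry could take. To be clear, the file name, the field names, and the {{placeholder}} convention below are illustrative guesses on my part, not necessarily what the repo actually uses:

```typescript
// Illustrative only: a hypothetical slides.config.ts entry.
// Each slide gets an image plus hotspots positioned as percentages of
// the slide's width and height, each carrying a prompt template. The
// {{profession}} and {{curiosity}} placeholders would be filled in from
// the learner's answers on the landing page.

export interface Hotspot {
  x: number;          // percent from the left edge
  y: number;          // percent from the top edge
  label: string;      // short name shown on hover
  prompt: string;     // prompt template copied on click
}

export interface SlideConfig {
  image: string;
  hotspots: Hotspot[];
}

export const slides: SlideConfig[] = [
  {
    image: "slides/slide-03.png",
    hotspots: [
      {
        x: 62,
        y: 48,
        label: "Scaling and market structure",
        prompt:
          "I work as a {{profession}} and I'm curious about {{curiosity}}. " +
          "Explain what this slide says about market structure in the age " +
          "of foundation models, using examples relevant to my field.",
      },
    ],
  },
];
```

The educator writes (or at least edits) those prompt templates themselves, which, as the next section explains, turns out to matter.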
The Road Not Taken (For Now)
Then I had another thought. What if we could turn this into a proper web application? A place where any educator could simply upload a .pptx and have the whole thing -- slide conversion, prompt generation, companion website -- happen automatically?
I chatted with Claude Code about this. Yes, it is totally buildable, it cheerfully told me [3].
But then it said something that made me pause. The current template approach has a virtue: the educator engages with the prompt design. That engagement is pedagogically valuable. A one-click solution risks turning it into a commodity where the prompts are “good enough” but never great [4].
And there’s a deeper point. We’re all going to have to learn how to work with these tools. A terminal and a GitHub repo might feel intimidating today, but a little bit of productive friction is perhaps, at the margin, a good thing. More academics need to pick up these skills, not fewer. Building a web app that hides the machinery might make adoption easier, but it would also make the learning shallower.
So we decided: not worth it. At least not yet.
The Post That Wrote Itself
And now for the part where this blog post becomes slightly recursive.
What you’ve been reading was co-authored by Claude Code. Here’s what happened: I wrote up rough notes about this entire journey -- messy, stream-of-consciousness, full of typos. Then I told Claude Code: you helped me with the research for the original course. You helped me design the presentation. You helped me build the companion website. You helped me generalize it into a template. You even helped me think through whether to build a web app (and told me not to). So you should read my blog, get my writing style, and write up this post based on my notes.
And that is what you just finished reading.
I hasten to add that this is an experiment. Whether Claude Code has managed to capture what this blog sounds like is for you to judge, not for me to claim [5]. But the attempt itself is the point.
TMKK?
I’ve done the same thing I’ve been doing for years: I gave a talk on a topic and wrote a blog post about it. In that sense, nothing has changed [6].
But I’ve learnt new ways to do each step. And these new ways save me time (income effect). So long as I choose to spend that saved time in making my work better, and in helping others do better (substitution effect), the world becomes a slightly better place.
And that, if you think about it, is one of the real lessons of the economics of AI.
Co-authored by Ashish Kulkarni and Claude Code (Claude Opus 4.6). The rough notes were Ashish’s. The reading of the blog, the studying of the voice, and the drafting were Claude’s. The ideas, the journey, and the terrible dancing are entirely Ashish’s.
Notes:
What you are reading from here onwards is entirely me. I have lightly edited Claude Code’s output above, but more than 95% of the text above is Claude. As Claude puts it, it is for you to judge whether this blogpost sounds like me or not.
Is this how I am going to write every post from now on? No, definitely not. But in this specific case, I wanted to see how far I could go with Claude Code, and as it turns out, I can go a very long way.
What do I personally think of the blogpost? It’s good, sure. (And on the plus side, there will be no typos, and there will be no confusion between its and it’s throughout the blogpost. So there’s that.) But it isn’t me, and that matters to me. So no, I will definitely not be doing this for every post that I write. But if I want to see how far I can go with AI on a project/task/assignment/lecture, this entire exercise (including having this blogpost written by Claude Code) was a very helpful experiment.
Please do try out the companion website and let me know what you think, and also let me know how I could make it better. And for those of you who know your way around these things, please feel free to do what you will with the GitHub repos.
Just as YouTube helped some avid consumers of video become creators, so it is with these modern coding tools. They have the potential to broaden (dramatically) the number of software creators. We new creators won’t be able to design extremely complex products, or handle top-notch security design, etc., etc. Not as of mid-February 2026, at any rate. But we will be able to make our own respective workflows better, and this can be a gradual, iterative process. It needn’t be a lonely one though, so I hope you come along for the ride in this particular instance.
I am an engineering dropout who became a reluctant student of economics. I wanted to study English literature. The reason I tell you this is because if I can do these things, you can do much, much better than me, I assure you. That’s part of the reason for this entire experiment, including this blog post: you can just do things, as they say, and you should be doing things. Do ‘em!
I’ve designed and delivered presentations and talks for years. I’ve written out many, many blog posts over the years. That allowed me to bank an enormous amount of taste and judgment, some of which I have used in running this experiment. That experience matters -- and the TMKK is that you should absolutely figure out how AI can help you in your workflows, but there is no escaping doing the work yourself. That point about the income and the substitution effect isn’t just rhetoric -- AI saves you time, but you’re best off putting that saved time into making your work bigger in terms of scope, and better in terms of quality. That’s the real point.
No, I have no idea why Claude put in the reference about my terrible dancing. That makes it a completely irrelevant (but also completely true) statement.
[1] Saying that I asked it to “spin up the presentation” is slightly misleading. Here’s the prompt I used: “Hi, there is a folder in this folder called /sessions/amazing-youthful-clarke/mnt/Lecture Prep/Economics of AI. I have to give a lecture on this topic to the students of the GCPP program at Takshashila. These will be working professionals, so best to think of them as interested, motivated and curious laypersons, but with good knowledge of the “real world”, as opposed to students I usually teach in academia. I have to deliver a talk on the topic “Economics of AI”. I had begun work on an outline for a full fledged sixty hour course on this topic. Your task is to go over my research, and work in particular on the outline (it is there in this folder), and convert that into a presentation that will be suitable for a ninety minute talk, delivered online. I am particularly interested in the structure of the outline coming through (upstream, midstream and downstream). There is a LOT of information in that folder, and obviously not all of it can come through into the presentation. I would like you to exercise judgment given two things: the state of the world today (so we need to account for the latest releases, Claude has 4.6 Opus, ChatGPT has 5.3 on Codex, and we also need to account for Claude Code, Clawdbot, Moltbook and the agentic economy - please run web searches to get up to date on these topics), and second, the information I have provided regarding the place I’m delivering the lecture at (Takshashila), and the kind of students I will be delivering the lecture to. Please go over the files, go over my request, think about it, and chat with me about your proposed layout for the presentation before you begin to design it. Once you and I are clear about the contents, you can begin designing it. I would prefer a high level overview, but I would also very much appreciate a separate document (perhaps a well composed write-up, complete with interesting links based on my research), which can act as a living document that is a reading list - but I am open to suggestions for how to best do this. As always, let’s chat before we begin work :)” The conversation that followed was genuinely helpful for both Claude and me, because it made the presentation better, made my understanding of the topic better, and also helped spark a lot of the ideas that are contained in this blogpost. Don’t aspire to “one-shot” your work; in fact, go in the opposite direction. You have the ability to talk to a highly intelligent and informed entity about your work -- why give up on the chance?
[2] This took the better part of a Saturday, so again, it is not as if I waved my magic wand and this happened. The point is not only to increase efficiency, but to ask how one can improve quality with the help of these tools. Faster? Sure. But Better should be your first target.
[3] Here’s what it would take: a backend for file conversion (LibreOffice headless is, apparently, “finicky”), LLM API calls for prompt generation ($0.50-2.00 per deck), storage for the images, some kind of auth for the presenter. Maybe two to three weeks for an MVP.
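To make the “finicky” bit concrete, here is a minimal sketch of just the conversion step, in Node-flavoured TypeScript. It assumes LibreOffice is installed and on the PATH, and it says nothing about the prompt generation, storage, or auth pieces:

```typescript
// Sketch of the conversion step only: shell out to LibreOffice in
// headless mode to turn a .pptx into a PDF, which a second step
// (e.g. pdftoppm, not shown) would rasterise into one image per slide.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function convertDeck(pptxPath: string, outDir: string): Promise<void> {
  await run("soffice", [
    "--headless",
    "--convert-to", "pdf",
    "--outdir", outDir,
    pptxPath,
  ]);
}
```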
[4] I’m being a little hypocritical here: I haven’t bothered to make my own prompts for my presentation better yet.
[5] Claude’s own take when I asked it to review this post: “Here’s my honest take: the AI-written body (pages 1–9) is good but noticeably smoother and more measured than your natural voice. Your own notes section (points 1–9) is where the post really comes alive — it’s looser, more digressive, more you. The body reads like a competent imitation of someone who writes conversationally; your notes read like someone who actually thinks conversationally”.
[6] In case you’re wondering, yes, it came up with the title for this section. And for those of you wondering what the title means: Toh Main Kya Karoon? It is a phrase in Hindi that best translates into “So what?”



