Situational Unawareness
For the last six months, I have struggled to write on EFE.
There have been plenty of days in the past when I have not felt like writing. But that's different from saying that I have struggled to write on EFE.
If you ask me, laziness is not just fine, but a welcome thing (as with everything else, in moderation, of course). "Aim to write daily, and don't sweat it if you can't make it happen" was a strategy that worked just fine for me.
But struggling to write was a new problem. And it was a problem that kept getting bigger. I wrote quite a few drafts trying to explain why I couldn't get myself to write, but none seemed able to get the point across. Until David Perell tweeted about kinda-sorta the same thing:
https://twitter.com/david_perell/status/1894143267878703562
You see, I write for an audience of one: me. I cannot tell you how happy I am that you choose to read what I write, and thank you for making it this far. But when I write, I am writing for me. A very specific form of me that now exists only in my memories: 18-year-old me.
18-year-old me, fresh from the dubious honor of having dropped out of engineering, needed help not just to understand what was being taught in introductory economics, but also with what to read, how to read it, how to grow a community of like-minded people, how to build a tribe of mentors, and how to start writing.
I am talking about things from twenty-five years ago. And not just me: most people back then wouldn't have gotten this kind of advice, because both the advice and what that advice was about (getting better at learning online) were very rare at the time. But in 2016, when I started this blog, it became much easier to dispense that advice, for a variety of reasons.
Until, with the advent of AI, it became too easy, and irrelevant:
Train yourself on the blog econforeverybody.com. Go and look at all the posts for the last five years, and learn both the writing style, and the range of topics covered. Every day, send me a brief write-up about a topic that is related to economics, with the topic being of the sort that the author of econforeverybody would have chosen. Write about this topic, with the same coverage of the content and related areas, but make sure that each post is much more concise. I should get an idea of the topic, its relevance to economic theory, its importance and relevance to my personal and professional life (as may be applicable), and the potential limitations of the topic in terms of both capability and relevance to my life. Your writing should be clear, comprehensive, and there should be at least four interesting links for me to click through. Append one useful book recommendation at the end of each blogpost.
The best advice I can give eighteen-year-old Ashish today is not just to skip learning from EFE, but to figure out how to use ChatGPT (or your LLM of choice) to build out your own EFE.
Use this prompt (the one I have written above) as a base, and customize it as per your interests ("learn more about macroeconomics this month, but specifically for Indian audiences"), your abilities ("write these posts keeping in mind that I am just starting on my undergraduate degree"), your hobbies ("I am a tennis player, and enjoy thinking about analogies that relate what I am learning to something related to tennis") or your reading style ("give me a three bullet point summary, then a thirty bullet point outline, and finally a one sentence summary, and use this format in all your posts for me").
Iterate upon the prompt every once in a while, and once you figure out how to use scheduled tasks in ChatGPT, you can have your own custom blog, written just so, handed to you on a platter at 10 am every morning.
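(And if scheduled tasks aren't available on your plan, or you simply like tinkering, the same idea can be wired up outside ChatGPT. Here is a minimal sketch in Python, assuming you have an OpenAI API key and the openai package installed; the model name, the condensed prompt, and the file path are all placeholders for you to swap out.)

```python
# daily_efe.py - a rough sketch: generate one custom "blog post" a day from a
# base prompt. Everything here (model name, prompt, paths) is a placeholder.
from datetime import date
from openai import OpenAI

BASE_PROMPT = """You write short, clear posts on economics for an
eighteen-year-old undergraduate in India. Pick one topic a day, explain its
relevance to economic theory and to everyday life, note its limitations,
and end with four links worth clicking and one book recommendation."""

client = OpenAI()  # reads OPENAI_API_KEY from your environment


def write_todays_post() -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your plan offers
        messages=[
            {"role": "system", "content": BASE_PROMPT},
            {"role": "user", "content": f"Write today's post ({date.today()})."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open(f"post-{date.today()}.md", "w", encoding="utf-8") as f:
        f.write(write_todays_post())
    # Schedule it with cron so the post lands on your plate at 10 am:
    # 0 10 * * * /usr/bin/python3 /path/to/daily_efe.py
```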
It won't be me and my ideas, sure, but given the kind of customization that you will be able to build in yourself, that is on balance a good thing. And if you are a glutton for punishment, don't ask for one blogpost a day, ask for ten an hour!
Don't give a guy a fish, teach him how to write a blogpost.
Except what I was thinking of as the easy part isn't, as it turns out, all that easy.
Turns out folks don't write prompts the way I do.
Here's an example: let's say you are a first-year undergraduate student who wants to learn more about John Maynard Keynes and his take on macroeconomics. How would you write a prompt asking a modern LLM to help you in this quest? If you like, take a couple of minutes to write out your prompt.
Here's my prompt (context in parentheses):
I am an eighteen-year-old student based in India (helps make the LLM aware of my age and location). I have just started to study economics for the first time in college, and I am currently in my first semester (shows my level of knowledge about the subject, and a broad idea of what other skills I may or may not have). I have some knowledge of calculus and statistics, but would prefer your explanations to contain none (contextualize what level of math and stats you are comfortable with). I know very little about the economics and politics of that era (help make clear that the background in which Keynes was writing isn't very clear to you). I also do not know who Keynes' contemporaries and intellectual predecessors were, and would like you to include these contexts when you explain Keynes' work (who was he arguing against, and why? Who was supporting his ideas, and why?). I also know very little about Keynes' upbringing, his social background, and how his work, his family upbringing and his education shaped his worldviews (I am interested in knowing not just his work, but also what shaped it). Start by giving me a broad overview of Keynes' central ideas in macroeconomics, and what you think to be relevant context around it, and let me pick a part of your answer that I would like to dig deeper into - suggest three areas of further exploration at the end of your answer (help me decide where I should go next).
This is not a "perfect" prompt - there is no such thing. This isn't even the best prompt around, because the best prompt depends upon the person writing it. That is, you need to customize your prompt to suit your needs. Include an ask to refer to Bharatanatyam and Golden Retrievers in its answers (a non-negotiable for my daughter). Include an ask to refer to food (a non-negotiable for me). Go wild with your imagination, and get just the kind of output you want.
But the point is that you gotta learn how to ask. And at least among the folks I have spoken with recently, this point is not at all obvious.
Second, there is a plethora of tools on offer from the major AI firms, just crying out to be used. There's Canvas mode in ChatGPT, and Scheduled Tasks. There's 4o, o1, o3-mini and o3-mini-high (don't ask, we're all weeping). And Deep Research.
There's Artifacts in Claude. There's Gems in Google Gemini. There are integrated search options. There's NotebookLM. Depending on which plan(s) you use, some or all of these are available to you. Are you aware of them? Are you making use of them?
And that's before you start to explore voice, video, and chat with video on, let alone the strange mysterious world of products, tools and apps built around AI.
Third, there's research about AI, about what AI enables, and above all, about how to think about AI. This is a fascinating world, and there is a very high chance that this is the most important field you could choose to work in. The work being done in AI alignment, safety research, agentic model development and, above all, in understanding what LLMs are, is already of critical importance, and will only grow more important in the years to come.
For example, did you know about the paper which shows that fine-tuning an AI to write insecure code may end up misaligning the model more broadly?
We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned.
Or that models have learnt to lie to safety researchers (though as best we can tell, only for the right reasons. So far. As best we can tell, remember!)?
Deceptive Alignment is when an AI which is not actually aligned temporarily acts aligned in order to deceive its creators or its training process. It presumably does this to avoid being shut down or retrained and to gain access to the power that the creators would give an aligned AI. (The term scheming is sometimes used for this phenomenon.)
And there are other, even more mysterious things to explore, but I am getting ahead of myself. All in good time.
Fourth, there's the economics of building out AI, the economics of how AI will impact, well, everything, and the economics of how culture will adapt to AI. It's one thing to say "ohmagawd job losses", and quite another to ask, "Well, ok, can we at least begin with an estimate of how much more productive current employees are when they use AI?"
In our working paper, we used a standard model of aggregate production and showed how we could use our data on hours worked, hourly wages and time savings from generative AI to provide a rough estimate of the aggregate productivity gain from that new technology. Together, the model and data imply that the self-reported time savings from generative AI translate to a 1.1% increase in aggregate productivity. Using our data on generative AI use, this estimate implies that, on average, workers are 33% more productive in each hour that they use generative AI. This estimate is in line with the average estimated productivity gain from several randomized experiments on generative AI usage.
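To see how those two numbers hang together, here is a back-of-envelope version of the arithmetic. The paper's actual model is richer than this, and the share-of-hours figure below is my own illustrative number, implied by the other two; treat it as a toy calculation, not the paper's method.

```python
# Back-of-envelope only: roughly, the aggregate gain is the share of all work
# hours spent using generative AI times the productivity boost in those hours.
share_of_hours_on_genai = 0.033  # illustrative: about 3.3% of all hours worked
boost_per_genai_hour = 0.33      # the paper's estimate: 33% more productive in those hours

aggregate_gain = share_of_hours_on_genai * boost_per_genai_hour
print(f"Implied aggregate productivity gain: {aggregate_gain:.2%}")  # about 1.1%
```

The point of the toy calculation is simply that a 33% boost applied to a small slice of total hours is how you end up with a modest-looking 1.1% economy-wide number.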
Make no mistake, jobs are going to go, and some have already gone. But which jobs, in which sectors, over what time frame, and does the answer remain the same for all nations? Specifically with regard to India, what impacts will all this have on our politics, our growth, our urbanization trends, our government finances and above all, on us Indians?
And finally, and to me most importantly, how does this change how we teach in our colleges? Here's the plain truth: AI is already better than me in terms of "knowing" economic theory. It is already better than every single economist I know. It no longer matters whether I accept this assertion, challenge it, or dismiss it. Students have already cottoned on to this (as they should). How does that change my thinking about what I bring to the table?
Second, in about a month or so, students all over the country will write (write, mind you, using pen and paper) examinations to assess whether they know the subject being examined. And let's say we find out that all the students are able to utilize three hours well enough to regurgitate memorized answers fast enough onto an answer sheet.
And so... what?
How does this - to use a concrete example - prepare a post-grad from an MBA college in Pune to be hired by a data analytics firm in Bangalore and use, say, Replit or Cursor or even just plain ol' Claude to whistle up a software project? What skills are we equipping our students with, and what skills are being asked for in the corporate world? What is the delta between the two, and what are we doing about it?
Academia has never moved fast enough in response to technology - that has been true throughout my career - but the lag has now reached insanely ridiculous levels of irrelevance.
The disconnect between the world we are preparing our students for and the world we are already in, let alone the world we are going to be in, ought to petrify every single one of us.
It petrifies me.
And so for all these reasons, I need to get going again.
I thought I and my blog had become irrelevant, but turns out only I got the memo.
I'll be sending out a longer version of the same memo.
Let's go.

