<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[EconForEverybody]]></title><description><![CDATA[Learn better]]></description><link>https://www.econforeverybody.com</link><image><url>https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png</url><title>EconForEverybody</title><link>https://www.econforeverybody.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 03 May 2026 09:34:39 GMT</lastBuildDate><atom:link href="https://www.econforeverybody.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ashish Kulkarni]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[econforeverybody@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[econforeverybody@substack.com]]></itunes:email><itunes:name><![CDATA[Ashish Kulkarni]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ashish Kulkarni]]></itunes:author><googleplay:owner><![CDATA[econforeverybody@substack.com]]></googleplay:owner><googleplay:email><![CDATA[econforeverybody@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ashish Kulkarni]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Spare The Rod, And Spoil The Chaebol]]></title><description><![CDATA[Noah Smith says that he has long been an industrial policy enthusiast in his latest post.]]></description><link>https://www.econforeverybody.com/p/spare-the-rod-and-spoil-the-chaebol</link><guid 
isPermaLink="false">https://www.econforeverybody.com/p/spare-the-rod-and-spoil-the-chaebol</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Tue, 21 Apr 2026 11:56:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vXl0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>In <a href="https://www.noahpinion.blog/p/updated-thoughts-on-industrial-policy">his latest post</a>, Noah Smith says that he has long been an industrial policy enthusiast. Like me, he is a fan of Joe Studwell&#8217;s excellent book, <em>How Asia Works</em>. You should read his <a href="https://www.noahpinion.blog/p/the-developing-country-industrialization">many excellent posts on this topic</a>, including the one I linked to at the start of this post.</p><p>In that post, he makes three main points:</p><ol><li><p>For developing nations, attracting FDI is as good a form of industrial policy as any</p></li><li><p>For developed nations, technology policy <em><strong>is</strong></em> industrial policy today</p></li><li><p>And China&#8217;s approach is riddled with risks</p></li></ol><p>I&#8217;m mostly in agreement with all of these points, though with a topic as complex as industrial policy, there will always be nuances. But in this post, I want to talk about one particular section of his. It is titled thus: &#8220;Industrial policy&#8221; has become too broad of a category. 
It is this discussion that allows him to segue into what type of industrial policy is best for which type of nation, leading to the three points I referred to above.</p><p>But regardless of how you define industrial policy (and my own definition can be found below), I think there ought to be one definitive ingredient that makes or breaks this dish, and that ingredient is often missing in many different versions - especially the modern-day ones. That one ingredient is negative incentives.</p><p>Incentivizing someone to do something by giving them a reward for it is a positive incentive. But one can also ensure that the task is done by threatening to punish them if it is not done. That is an example of a negative incentive.</p><p>In what follows, I will make the following arguments:</p><ol><li><p>Industrial policy is much more likely to fail when negative incentives are missing</p></li><li><p>These are hard to design, and even harder to enforce</p></li><li><p>Culture is an underrated reason for this, and that is a major problem for industrial policies the world over</p></li></ol><p>But let us begin by trying to understand what industrial policy is.</p><h1>What is Industrial Policy?</h1><p>Noah defines it in his post: government promotion of specific industries. We do not let market processes play out, in other words. The government does not just regulate a particular sector, but participates in it, at least indirectly. It does so by, in effect, tilting the playing field. Some players get an undue advantage, for at least a specified period of time. These players are almost always domestic players, rather than foreign ones, and the idea is to have those domestic firms get better over time.</p><p>How long a period of time? On what basis? Which players, and how are they to be chosen? What exactly does the absence of a level playing field mean? How does the government enforce this, and do other players (both domestic and foreign) not protest? 
How are these supports monitored, and when and how should they be taken away? The answers to all of these questions are the nuts and bolts of industrial policy, but you should be able to see that all of these questions are downstream of Noah&#8217;s pithy definition: industrial policy is the government&#8217;s promotion of specific industries.</p><p>Even the most laissez-faire of economists will accept the need for industrial policy when push comes to shove. We may debate, for instance, about what the correct industrial policy ought to be for AI in India, and as with all such debates, we may never end up with a definitive answer. But there is unlikely to be much of a debate about the proposition that Indian firms cannot compete at the very cutting edge of the AI sector absent government support. That leads us to an interesting, and somewhat mischievous proposition - the debate isn&#8217;t really about whether industrial policy is good or bad. It is about the circumstances in which such policies are warranted.</p><p>And today&#8217;s facts are such, sir, that everyone has changed their mind.</p><h1>Industrial Policy is Hard, And This is Why</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vXl0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vXl0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 424w, 
https://substackcdn.com/image/fetch/$s_!vXl0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 848w, https://substackcdn.com/image/fetch/$s_!vXl0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 1272w, https://substackcdn.com/image/fetch/$s_!vXl0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vXl0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png" width="1143" height="750" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:750,&quot;width&quot;:1143,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vXl0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 424w, 
https://substackcdn.com/image/fetch/$s_!vXl0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 848w, https://substackcdn.com/image/fetch/$s_!vXl0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 1272w, https://substackcdn.com/image/fetch/$s_!vXl0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51b1cc0d-8bfa-4ecd-9c49-a9fd0bde4257_1143x750.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Critiques of industrial policy are not just warranted; they are almost always going to be right. And this is so because industrial policy comes up against perhaps the hardest problem in all of economics: the knowledge problem. The infographic above is from a presentation that NotebookLM made for <a href="https://www.cambridge.org/core/journals/journal-of-institutional-economics/article/abs/hayekian-welfare-states-explaining-the-coexistence-of-economic-freedom-and-big-government/C07DB791E78F0E9C8D9090DDB62C96D1">this paper</a>, and it makes a devastating point. Industrial policy sits at the top left corner, and this corner has good news and bad news. The good news is that while the absolute amount of funds needed for industrial policy may seem large, it is <a href="https://www.oecd.org/content/dam/oecd/en/publications/reports/2023/06/quantifying-industrial-strategies-across-nine-oecd-countries_836a07d3/5f2dcc8e-en.pdf">usually a small percentage</a> of the total public funds at the government&#8217;s disposal. Certainly smaller, as the infographic shows you, than Keynesian stabilization policies.</p><p>But the bad news is that such policies require central planners (the government) to have a very high level of knowledge. It is one thing to say &#8220;pick market winners&#8221; and quite another to answer the question &#8220;on what basis?&#8221;. Answering that question necessarily implies having a lot of knowledge about not just the sector, but also about each individual firm. Worse, much of that knowledge is <em>tacit</em> knowledge. It is not a line entry in a report, or a number in a balance sheet. It is often about hiring decisions, or about culture, or about know-how - and perhaps most devastating of all, it is <em>dynamic</em>. 
What is true today may not be true tomorrow - a key employee may leave, culture may evolve, technologies may change - and all of these things happen all the time in all firms. Anybody who has worked in one will know this to be true. So for any one entity to have all of this knowledge and for it to be up to date&#8230; well, that is, to quote from economic scripture, a problem.</p><h1>IP Needs To Cut Both Ways</h1><p>Joe Studwell&#8217;s book has this powerful anecdote:</p><blockquote><p>It was twelve days after the 1961 coup, on 28 May, that Park and his colleagues began arresting businessmen. They did so under a Special Measure for the Control of Illicit Profiteering. There are conflicting accounts of how many businessmen were held, where and for how long. But it is clear that scores of the country&#8217;s most senior entrepreneurs were locked up. Seodaemun was one detention point. A few top figures, including Samsung&#8217;s founder, Lee Byung Chull, had the good fortune &#8211; or, more likely, the forewarning &#8211; to be in Japan. But the great majority of the country&#8217;s business elite was taken in. Park put the frighteners on the business community in a manner unprecedented in a capitalist developing country. He declared that the days of what he termed &#8216;liberation aristocrats&#8217; &#8211; crony capitalists who bought favours from Syngman Rhee&#8217;s government and did nothing for their country in return &#8211; were over.</p><p>Imprisoned businessmen were required to sign agreements which stated: &#8216;I will donate all my property when the government requires it for national construction.&#8217; In effect, this put the entrepreneurs on parole to do whatever Park required. 
The most senior group, including Lee Byung Chull after he returned from Japan, quickly agreed to pursue investments in industries &#8211; mostly manufacturing ones &#8211; that the military and a handful of bureaucrats familiar with Japanese industrialisation wanted to develop: fertiliser, synthetic fibres, cement, iron and steel, electricity generation, and so on. They formed the Promotional Committee for Economic Reconstruction (PCER), later to become the Federation of Korean Businessmen, as the formal channel through which big business communicated with the government and aligned itself with state objectives. Samsung&#8217;s Lee was the first chairman. The leading business families also agreed to the renationalisation of banks which had been privatised to them, under US pressure, in 1957. The banks had become a destabilising source of illegal lending to their owners&#8217; firms, a problem that has afflicted privatised banking systems in developing states from Meiji Japan to post-Second World War south-east Asia and Latin America.</p><p>Once he established the basic rules of the game, Park informed Korea&#8217;s businessmen that they were free to make as much money as they could so long as they stuck by the rules.</p></blockquote><p>The reason I call it powerful is that it doesn&#8217;t conform to received wisdom about industrial policy. This isn&#8217;t about providing support to sectors, firms or industries. Quite the opposite, in fact, and in a rather hair-raising way. The state demonstrated its willingness to do scary, hard and downright brutal things <em>first</em>. It then stated its own demands, and indicated a willingness to do what it took to enforce them. And once it established the rules of the game, it made clear to the businessmen that they were &#8216;free to make as much money as they could&#8217;... <em>so long as they stuck by the rules</em>.</p><p>I am <em><strong>not</strong></em> suggesting that any country lock up its businessmen today. 
Nor am I suggesting that such measures are the only way to make industrial policy work. But what I am suggesting - and very strongly - is that we need negative incentives for industrial policy to work. That infographic in the previous section explains why governments will end up picking, at least in some cases, the &#8216;wrong&#8217; winners. Given Hayek&#8217;s insight, that is all but inevitable.</p><p>The countries that succeeded in industrial policy had this problem, as did the countries that failed. The difference is that the successful countries got out of this problem fairly quickly. And they did so by using negative incentives. To quote Studwell on those countries where IP worked, &#8216;the state did not so much pick winners as weed out losers&#8217;.</p><p>And again, that graph helps us understand that picking winners is a fool&#8217;s errand. Hayek told us long ago that it couldn&#8217;t be done. But weeding out the losers could be done. In Japan, for example, Studwell tells us that &#8216;the amount of depreciation that firms were allowed to charge to their accounts was determined by their exports. In Korea, firms had to report export performance to the government on a monthly basis, and the numbers determined their access to bank credit.&#8217;</p><p>IP is about the carrot, but don&#8217;t forget about the stick.</p><h2>Easier Said Than Done, Alas</h2><p>Back in December 2025, Richard Hanania wrote a post with the title <a href="https://www.richardhanania.com/p/human-capital-not-industrial-policy">Human Capital, Not &#8220;Industrial Policy&#8221;, Explains East Asian Success</a>. The post argues against the thesis advanced by Studwell, and says that the success of East Asia is explained much better by culture/genetics than by policies. Not that policies aren&#8217;t important, he says, but they are secondary to culture/genetics. 
The use of the word &#8216;genetics&#8217; will raise many hackles, he says, so he restricts himself to talking only about culture - and we&#8217;ll follow suit in this section. But do keep in mind when you read this section that he means both of these things.</p><p>And one of his biggest arguments against Studwell&#8217;s thesis is that Studwell constantly and consistently underrates culture:</p><blockquote><p>&#8220;Studwell&#8217;s refusal to consider any kind of human capital or culturally based explanations leads him into some entertaining directions. I got a particular kick out of this part:</p><p>&#8220;In this respect, land policy is the acid test of the government of a poor country. It measures the extent to which leaders are in touch with the bulk of their population &#8211; farmers &#8211; and the extent to which they are willing to shake up society to produce positive developmental outcomes. In short, land policy tells you how much the leaders know and care about their populations. On both counts, north-east Asian leaders scored far better than south-east Asian ones, and this goes a long way to explaining why their countries are richer.&#8221;</p><p>So Northeast Asian leaders just cared more about their people? What kind of explanation is that? Maybe it&#8217;s a cultural thing? But the whole book is about cultural explanations not having any power, and the need for us to focus exclusively on policy. So I guess Studwell&#8217;s theory is leaders who care &#8594; the right industrial policy &#8594; economic development. But I think once you&#8217;re adopting theories based on the idea that some governments are just staffed by better people, you&#8217;re conceding a lot to cultural theories, which is odd when your entire thesis is that they need to be rejected&#8221;</p></blockquote><p>The academic in me is inclined to agree with this point, and with the start of his concluding section:</p><blockquote><p>&#8220;Capitalism works. 
With the collapse of the Soviet Union, and more recently the US surpassing Europe, East Asia stands out as the one region of the world that supposedly shows that government planning can be beneficial as long as it does not go overboard.</p><p>I see no reason to grant this argument. Countries succeed or fail based on a combination of population traits and policy. When we hold cognitive ability as measured by PISA constant, we see that more free market jurisdictions do better than the alternative: the US outperforms Europe and East Asia, and among East Asians, Singapore and Macau are the two places that clearly overperform their potential given standardized test scores. In Europe too, the most free-market countries have grown the fastest in recent years.</p><p>One more thing to note here is that advocates of industrial policy are often even more irrational than indicated by the discussion above. Studwell was presenting his suggestions as a way for poor countries to get out of poverty and start developing. He wasn&#8217;t arguing for a set of prescriptions for first world countries to adopt, and explicitly says that the case for free capital markets becomes stronger the wealthier you get.&#8221;</p></blockquote><p>The one thing that I would certainly say about this excerpt is that the world is more complex than any one cross-country regression can explain. There are always going to be more factors at play, and these are going to change over time. That is inevitable.</p><h1>So What?</h1><p>The synthesis that I&#8217;m trying to get at is that both Studwell and Hanania are right. As I said above, the academic in me is in agreement with most of what Hanania is saying here. But the Indian in me (I was born in 1982) cannot help but think about what we got wrong with our industrial policy in India, and what we might have done if Bhagwati and Manmohan Singh had &#8216;won&#8217; their debates back in the 1960&#8217;s. 
Hanania would probably say that even that would have been wrong, and we should just have gone with opening everything up right away, and letting markets do their thing. </p><p>But the more I learn about our culture, our politics, our history and our policies, the more I realize this wouldn&#8217;t have been possible. Economics, as Amit Varma is fond of saying, is downstream of politics, and that matters from a policymaking perspective. It would be naive to think otherwise, especially for India back in the 1960s.</p><p>So what is the synthesis that I am trying to get at?</p><p>That industrial policy did matter for East Asia back then (go Studwell!), and the reason it succeeded there, and not elsewhere, is culture (go Hanania!). It succeeded because it was one of the few places on earth that applied both the carrot and the stick - and applying the stick is only possible with very high state capacity. And very high state capacity is a function of&#8230; culture.</p><p>Neither culture nor industrial policy is a guarantee of success, either individually or in tandem. There must be other factors at play, and I don&#8217;t pretend to have a secret recipe that any country at any stage of development can follow with a guarantee of success. And I certainly don&#8217;t mean to suggest that the culture of any country is an immutable thing that guarantees success or failure for all time to come. But I do say that industrial policy that encompasses negative incentives is likelier to work, and I also maintain that getting this to happen is heavily dependent on state capacity. 
That actually makes me a little more pessimistic about industrial policy working well in India!</p><p>As usual, I hope you disagree, and I look forward to you telling me why!</p>]]></content:encoded></item><item><title><![CDATA[Pigeons, Jan Tinbergen and A Scott Sumner Post]]></title><description><![CDATA[The Pigeonhole Principle]]></description><link>https://www.econforeverybody.com/p/pigeons-jan-tinbergen-and-a-scott</link><guid isPermaLink="false">https://www.econforeverybody.com/p/pigeons-jan-tinbergen-and-a-scott</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:04:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2>The Pigeonhole Principle</h2><p>Every summer, I spend a month in the lovely town of Manipal. We hold a summer camp there that is very close to my heart, and this summer will be no different. </p><p>It is a lovely way to spend the summer: kids with insatiable curiosity, taught by faculty with inexhaustible enthusiasm over three weeks. No exams at the end of the process, so students choose to learn the subject for the love of it. Glorious, just glorious.</p><p>It helps that Manipal has some of the best dosas and fish curries India has to offer, and I&#8217;ve had many meals with the folks who come to teach at the camp, where the quality of the victuals on the table is in an extremely close race with the quality of the conversation around it.</p><p>And one such conversation over one such meal helped me learn about the pigeonhole principle. It is an extremely simple idea, the pigeonhole principle. 
So simple that I can tell you about it in one short sentence, like so:</p><blockquote><p>The Pigeonhole Principle states that if <em>n</em> items (pigeons) are placed into <em>m</em> containers (pigeonholes), and <em>n&gt;m</em>, then at least one container must contain more than one item.</p></blockquote><p>Save your smart-aleck responses, people, and I tell you this from first-hand experience. &#8220;Ah-ha!&#8221;, you might be tempted to say, &#8220;now that&#8217;s a surprise!&#8221;. Or some such smug little line. But, as I say, desist. Because the point isn&#8217;t about how simple the principle is, but rather about how sophisticated its applications are. Here, <a href="https://en.wikipedia.org/wiki/Pigeonhole_principle">knock yourself out</a>.</p><p>That&#8217;s the first of the three pillars that make up this post. Now let us go meet the second.</p><div><hr></div><h2>The Tinbergen Rule</h2><p>Every econ student meets the Tinbergen Rule, sooner or later.</p><p>It is named after Jan Tinbergen, a Dutch economist from a while back. Useful pub trivia: he shared the first Nobel Prize in economics, with Ragnar Frisch, in 1969. And an inside reference for students of economics everywhere: he&#8217;s also considered to be one of the fathers of econometrics. So if you find yourself staring balefully at <a href="https://www.amazon.in/Basic-Econometrics-Damodar-Gujarati/dp/0071333452">Damodar&#8217;s book</a> at two in the morning, you know whom to blame. Partially to blame, of course - we&#8217;re talking about a multivariate model here.</p><p>But anyway, back to the Tinbergen Rule:</p><blockquote><p>In his work on macroeconomic modelling and economic policy making, Tinbergen classified some economic quantities as targets and others as instruments. Targets are those macroeconomic variables the policy maker wishes to influence, whereas instruments are the variables that the policy maker can control directly. 
Tinbergen emphasized that achieving the desired values of a certain number of targets requires the policy maker to control an equal number of instruments. This is known as the Tinbergen Rule.</p></blockquote><p>I hope you&#8217;ve made the connection, because conceptually speaking, it is an obvious one. The Tinbergen Rule is a restatement of the pigeonhole principle. Mathematicians at certain dinner tables in Manipal have been known to snort derisively upon hearing of the Tinbergen Rule, and they would have expressed themselves more fully too, had the pomfret not been just that good on that particular day.</p><p>That&#8217;s the second pillar of our blogpost. Which brings us to the Scott Sumner post.</p><div><hr></div><h2>Conflict and Competence</h2><p>That&#8217;s the title of <a href="https://scottsumner.substack.com/p/conflict-and-competence">Scott Sumner&#8217;s post</a> from today, and while you should read the whole thing, here is a useful excerpt:</p><blockquote><p>During the Chinese Cultural Revolution (1966-76), students persecuted people that had relatively high levels of education. Many professionals were sent to the countryside and roughly a million people died (500,000 to 2 million&#8212;no one knows the exact figure.) In the Chinese film To Live (1994), there is a heartbreaking scene where the formerly arrogant students realize that they need experts to help them out. Here&#8217;s how Wikipedia describes the scene:</p><p>Months later, during Fengxia&#8217;s childbirth, her parents and husband accompany her to the county hospital. All doctors have been sent to do hard labor for being over educated, and the students are left as the only ones in charge after they have &#8220;overthrown&#8221; the doctors. Wan Erxi manages to find a doctor to oversee the birth, removing him from confinement, but he is very weak from starvation. Fugui purchases seven steamed buns (mantou) for him and the family decides to name the son Mantou, after the buns. 
Fengxia begins to hemorrhage, and the nurses panic, admitting that they do not know what to do. The family and nurses seek the advice of the doctor, but find that he has overeaten and is semiconscious. The family is helpless, and Fengxia dies.</p></blockquote><p>Scott&#8217;s post argues that the same phenomenon can be observed in present-day America. The scene(s) from <em>To Live</em>, Scott says, are just a particularly extreme example of a much more general phenomenon, the war on competency by political fanatics.</p><p>Anybody who is not MAGA in America these days, Scott says, is not going to survive for long &#8216;in the Justice Department, the FBI, the intelligence agencies and the military&#8217;. And other places besides, I&#8217;m sure. But the phenomenon is the same as what happened to the doctors in China: you don&#8217;t conform, therefore you are bad.</p><p>Conformity, or belonging, or being a part of the &#8220;in&#8221; group is important. If you don&#8217;t conform, you don&#8217;t belong. If you don&#8217;t conform, you are bad.</p><p>Conformity is not just being used to assess ideological alignment. It is being used as a proxy for quality. If you don&#8217;t conform, you are by definition not good.</p><p>Two dimensions (<em>n</em>): conformity and quality. One metric for judging (<em>m</em>): conformity.</p><p>Two targets: conformity and quality. One instrument: conformity.</p><p>The pigeons are telling you the same thing that Jan and Scott did!</p><div><hr></div><h2>TMKK?</h2><p>This isn&#8217;t about the right or the left. The communists did it in China, and MAGA is doing it in America. Neither is it about cultural values: Asians did it in China, and Americans are doing it in America. And no matter where in the world you are reading this, you can rest assured that folks in your country have done it in the past, are doing it now, and will do it in the future. </p><p>Worse, you have almost certainly done this yourself. 
All of us look for proxies for quality, and those proxies tend to be related to how &#8216;like us&#8217; the person being evaluated is. And the more that person is &#8216;like us&#8217;, the better we assume that person to be. </p><p>Just like the AIs, we humans too have jagged frontiers. Voting for the same party, praying to the same god, having the same religion, or using the same agentic harness as us is no guarantee that the person being evaluated will be as good (or as bad) as we ourselves are.</p><p>It didn&#8217;t work for China back then, it won&#8217;t work for America today, and it will not work anywhere in the world. Evaluating a person&#8217;s quality along a particular dimension isn&#8217;t the same as evaluating a person&#8217;s beliefs.</p><p>People will very quickly latch on to what is being evaluated, and will learn to optimize for being excellent along that one particular dimension. So if tomorrow we as a nation say that we will allow people to read blog posts based solely on the quality of the mohawk haircuts on authors&#8217; heads, well, I&#8217;ll be paying a visit to <a href="https://www.google.com/search?q=dr+batra+hair+treatment">Dr. Batra&#8217;s clinic</a>.</p><p>Old-time readers will recognize this for what it is: Goodhart&#8217;s Law. Goodhart&#8217;s Law says that a measure that becomes a target ceases to be a good measure. If the measure of quality is ideological alignment, then people will optimize for pretending to have ideological alignment, quality be damned.</p><p>Take a look at the world around you, and you will realize that we&#8217;re all playing a game in which we try to guess what the current-day equivalent of top-notch mohawks is for our society. 
Again: this is true no matter who you are, and which country you are in&#8230; and it is true regardless of when you are reading this.</p><p>The point of this post?</p><p>Listen to what Jan, pigeons and Scott are telling you, and learn to look beyond the mohawks.</p><p>It matters.</p>]]></content:encoded></item><item><title><![CDATA[The Two AI Wars]]></title><description><![CDATA[I hope you have a coping mechanism for the times we&#8217;re living in.]]></description><link>https://www.econforeverybody.com/p/the-two-ai-wars</link><guid isPermaLink="false">https://www.econforeverybody.com/p/the-two-ai-wars</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sun, 12 Apr 2026 09:04:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!t8m1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I hope you have a coping mechanism for the times we&#8217;re living in.</p><p>You know what I mean, right? When you&#8217;re done thinking about AI, Donald, your own country&#8217;s politics, and anything else that you choose to tax your brain about, you should do something that allows you to switch off. I&#8217;m talking about that coping mechanism.</p><p>Mine is watching old episodes of QI. QI is a lovely quiz show from England, and it is thirty to forty-five minutes of delicious trivia, with oodles of typically British humor for company. If you enjoy that sort of thing, you&#8217;ll love the show. You&#8217;re welcome.</p><p>Each season of QI is based on a letter. Each episode of Season 1, for example, was about things beginning with the letter &#8216;A&#8217;. The second season was about things beginning with the letter &#8216;B&#8217;, and you can see where this is headed. 
Now, <a href="https://www.youtube.com/watch?v=1AMYIheppLU">one of the episodes in season 7 was about Germany</a>. If you haven&#8217;t seen it, or haven&#8217;t heard of QI before, I highly recommend watching it.</p><p>One of the rules that the panelists on the show had to abide by in this specific episode was that they couldn&#8217;t talk about the war while answering questions about Germany. And in a great example of typically British humor, they did not (of course) specify <em>which</em> war the panelists couldn&#8217;t speak about. Much fun ensues.</p><p>So anyway, one of the questions in this episode was about this dog. What do you call it?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t8m1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t8m1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 424w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 848w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t8m1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg" width="1456" height="977" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:977,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;File:German shepard female.jpg - Wikimedia Commons&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="File:German shepard female.jpg - Wikimedia Commons" title="File:German shepard female.jpg - Wikimedia Commons" srcset="https://substackcdn.com/image/fetch/$s_!t8m1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 424w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 848w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!t8m1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f7515b-a711-4745-b7ac-c74427f442ca_3800x2550.jpeg 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><a href="https://commons.wikimedia.org/wiki/File:German_shepard_female.jpg">Source</a></figcaption></figure></div><p>Both answers are correct, and it doesn&#8217;t matter which one you gave. <a href="https://www.comedy.co.uk/tv/qi/episodes/7/8/">That, as it turns out, was the point of the question</a>:</p><blockquote><p>The panel are show a picture of a dog and are asked what breed it is. The dog in question is a German Shepherd, which has been the official name for what was previously called the Alsatian since 1977. 
After World War One, English people stopped calling German Shepherds by this name because they did not like anything with German, so they called them Alsatian wolfhounds in 1918. The wolfhound was then dropped. (sic)</p></blockquote><p>It makes us do weird things, the thing that the episode tells us can&#8217;t be mentioned. That&#8217;s the point today&#8217;s post makes, and we talk more about it in what follows.</p><div><hr></div><h2>Before Lego, They Used Pamphlets</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NZkT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NZkT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!NZkT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg" width="828" height="1063" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1063,&quot;width&quot;:828,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;AP10WWI.jpg (586125 bytes)&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AP10WWI.jpg (586125 bytes)" title="AP10WWI.jpg (586125 bytes)" srcset="https://substackcdn.com/image/fetch/$s_!NZkT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NZkT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2961080d-6b12-42f2-90d8-3df7a2c39c00_828x1063.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">https://www.psywarrior.com/WWIAllies.html</figcaption></figure></div><blockquote><p>Airborne leaflet dropping is a type of propaganda where leaflets (flyers) are scattered in the air, normally by filling cluster bombs that open in midair with thousands of leaflets.</p><p>Military forces have used aircraft to drop leaflets to attempt to alter the behavior of combatants and non-combatants in enemy-controlled territory, sometimes in conjunction with air strikes. 
Humanitarian air missions, in cooperation with leaflet propaganda, can turn the populace against their leadership while preparing them for the arrival of enemy combatants.</p><p>Leaflet droppings have also been used to limit civilian casualties by alerting civilians of imminent danger allowing time to evacuate targeted areas.<br><br><em>Source: https://en.wikipedia.org/wiki/Airborne_leaflet_propaganda</em></p></blockquote><p>All sides in all instances of the thing we cannot talk about have done this, because it made sense to do so. It sowed confusion among the opposition, encouraged defections, and made clear to the other side what was really going on (each side, naturally, believed that only its own leaflets revealed the truth, which helped with the very first point). In the First World Thing We Cannot Talk About, the Germans and the British merrily dropped propaganda over enemy lines, and this was repeated in the Second World Thing We Cannot Talk About.</p><p><a href="https://en.wikipedia.org/wiki/Propaganda#Wartime">Propaganda during the thing we cannot talk about</a> is probably as old as the thing we cannot talk about, and that which we cannot talk about is probably as old as humanity itself.</p><p>Now, follow my reasoning closely here, and take your time about it. The thing we cannot talk about isn&#8217;t even relevant to the current situation, because the current situation isn&#8217;t even an example of the thing we cannot talk about, <a href="https://www.britannica.com/question/Did-the-U-S-declare-war-on-Iran">per the government that started the thing we cannot talk about</a>.</p><p>Which itself is an example of propaganda related to the thing we cannot talk about, if you think about it. 
I did tell you to take your time to think about it, you will recall.</p><p>But here is where things get interesting.</p><h2>The Two AI Wars</h2><p>Imagine a purely hypothetical scenario in which a country called A declares the initiation of major combat operations against another country called I. Further imagine that the country I was run by an authoritarian regime, which had &#8216;<a href="https://www.theverge.com/policy/910401/iran-war-propaganda-blackout-lego-ai-slop?view_token=eyJhbGciOiJIUzI1NiJ9.eyJpZCI6InpkckUwMUJkMlYiLCJwIjoiL3BvbGljeS85MTA0MDEvaXJhbi13YXItcHJvcGFnYW5kYS1ibGFja291dC1sZWdvLWFpLXNsb3AiLCJleHAiOjE3NzY0MDgyMjgsImlhdCI6MTc3NTk3NjIyOH0.MDbaYVLbyQ0d7ISxgwzPlaQNbeLzGVypx4WS68pt590&amp;utm_medium=gift-link">been struggling to shut down all footage of the protests convulsing the nation, cutting off internet access to the outside world in the longest blackout in I&#8217;s history</a>&#8217;.</p><p>That makes the current Thing We Cannot Talk About genuinely interesting, because there are now two battlefields at play. One battlefield is where the actual Thing We Cannot Talk About is being fought. A&#8217;s territory is not involved in this battlefield, but I&#8217;s very much is.</p><p>But the <em>propaganda</em> about the Thing We Cannot Talk About? That is a major headache for A, because I had simply cut off its people&#8217;s access to that particular battlefield before major combat operations began on I&#8217;s territory.</p><p>But A&#8217;s people have access to that battlefield, as does the rest of the world. 
And the bad news for A is that while there is a ceasefire in the Thing We Cannot Talk About, there is no ceasefire in the propaganda about The Thing We Cannot Talk About.</p><p>And <a href="https://www.theverge.com/policy/910401/iran-war-propaganda-blackout-lego-ai-slop?view_token=eyJhbGciOiJIUzI1NiJ9.eyJpZCI6InpkckUwMUJkMlYiLCJwIjoiL3BvbGljeS85MTA0MDEvaXJhbi13YXItcHJvcGFnYW5kYS1ibGFja291dC1sZWdvLWFpLXNsb3AiLCJleHAiOjE3NzY0MDgyMjgsImlhdCI6MTc3NTk3NjIyOH0.MDbaYVLbyQ0d7ISxgwzPlaQNbeLzGVypx4WS68pt590&amp;utm_medium=gift-link">I is winning that one</a>:</p><blockquote><p>Prior to the attacks, it looked as though some connectivity was returning to Iran, but as bombs fell, the blackout was once again in place. However, there were some early suggestions that Iran was going to selectively lift the blackout &#8220;for those who can carry our voice further&#8221; &#8212; a kind of tiered internet access for whitelisted people willing to promote, at the very least, an anti-war message. No one could have foreseen what would happen next. </p><p>By mid-March, the most dominant strain of Iranian propaganda was of a markedly different tone. Little Lego minifigures dressed up as soldiers as Lego planes and Lego helicopters burn in an AI-generated desert. Videos crammed in references to Jeffrey Epstein and dead Iranian schoolgirls alongside guns and explosions. It turned out that Lego AI slop was the voice that would carry the farthest. </p></blockquote><h2>Twitter But No Tear Gas</h2><p>A little more than a decade ago, Zeynep Tufekci wrote a lovely little book called <a href="https://www.amazon.in/TWITTER-TEAR-GAS-Zeynep-Tufekci/dp/0300234171/ref=sr_1_1?crid=2Z3YGG5V72WNN&amp;dib=eyJ2IjoiMSJ9.z4-gqPINlLjVDFHJucNlVA.USSye_frQ6OBKbn5DEjhcIXVFKuiejer1gFPhPT9Ai0&amp;dib_tag=se&amp;keywords=twitter+and+tear+gas&amp;qid=1775981752&amp;sprefix=twitter+and+tear+gas%2Caps%2C371&amp;sr=8-1">Twitter and Tear Gas</a>. 
My understanding of the main thesis in her book is deeply rooted in economics:</p><ol><li><p>Social media and memes greatly reduce the cost of organizing a movement <em>online</em>. That&#8217;s the good news.</p></li><li><p>The bad news is that opportunity costs are everywhere, and precisely because costs are low, starting a movement online is easy, but following through on actual change is that much harder.</p></li></ol><p>But the major combat operations between A and I are different from Germany and England during the two World Things We Cannot Talk About. And they are also different from Zeynep&#8217;s framing from a decade ago.</p><p>First, I doesn&#8217;t have a battlefield in the propaganda combat operations. They have internet blackouts, remember? How do you even spread memes online in the Stone Age, anyway?</p><p>Second, I is able to produce content in A&#8217;s idiom, and for A&#8217;s audiences. A&#8217;s own government is unable to reciprocate in this battlefield, because of the first point above&#8230; but also because the A government&#8217;s memes are aimed towards A&#8217;s domestic audience, not I&#8217;s!</p><p>Production of memes ain&#8217;t the problem here. It is distribution<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>Costs for the production of hard-hitting memes (as opposed to hard-hitting bombs) have dropped to zero. The costs of distribution have dropped to zero for A&#8217;s audiences, but have scaled up dramatically if you want to reach I&#8217;s audiences. If you take away the second point from Tufekci&#8217;s analysis, it is easier than ever to optimize for her first point.</p><p>So easy, in fact, that anybody could do it. England and Germany had governments organizing production and the distribution of propaganda. You defeat the country, and you stop the production and the distribution of propaganda. 
But this is not your grandparents&#8217; Thing You Cannot Talk About.</p><p>This time is different. But us humans? We&#8217;re the same. And that&#8217;s a problem.</p><h2>AI, and Humanity at Large</h2><p>It is not as if we are now living in a world where the narrative matters more. That has always been true, and you could argue that it is in fact one of the definitions of humanity. Signaling matters to us as a species. <a href="https://www.amazon.in/dp/B07H8K4RT1/?bestFormat=true&amp;k=elephant%20in%20the%20brain%20book&amp;ref_=nb_sb_ss_w_scx-ent-bk-ww_k0_1_17_de&amp;crid=2P7CTJ44RA18V&amp;sprefix=elephant%20in%20the%20b">Ask Messrs Hanson and Simler</a>, among others.</p><p>You win the Thing That We Shall Not Talk About by winning both battlefields, not just the physical one. <a href="https://truthsocial.com/@realDonaldTrump/posts/116381349587924418">And both sides of The Thing That We Shall Not Talk About know it</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I solemnly swear that I, Ashish Kulkarni, wrote this sentence (and the rest of the blog post, to boot). 
I&#8217;m willing to go to The Thing We Will Not Talk About on this!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Learning to Learn with AI: Notes on Anthropic’s Economic Index]]></title><description><![CDATA[A read-through of the March 2026 Anthropic Economic Index report, with some data explorations of my own.]]></description><link>https://www.econforeverybody.com/p/learning-to-learn-with-ai-notes-on</link><guid isPermaLink="false">https://www.econforeverybody.com/p/learning-to-learn-with-ai-notes-on</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Fri, 27 Mar 2026 11:56:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bTHJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic released the latest edition of their <a href="https://www.anthropic.com/research/economic-index-march-2026-report">Economic Index</a> this week. This is a report that tracks how Claude is being used across the global economy. This is the third such report, covering a sample from early February 2026. You can <a href="https://www.anthropic.com/research/team/economic-research">find the earlier reports here</a>. The focus of this report is an important question: do people get better at using AI over time, and if so, how?</p><p>There is another reason this report is interesting. It maps to the time Anthropic ran those Superbowl ads (<a href="https://www.youtube.com/results?search_query=anthropic+superbowl+ads">and if you&#8217;ve not seen them, here</a>. Go watch, they&#8217;re a lot of fun, and will remind folks of a certain age of <a href="https://www.youtube.com/watch?v=qfv6Ah_MVJU">another series of ads</a>. 
The more things change&#8230;)</p><p>In today&#8217;s blog post, I want to walk through what I found most striking (all three: the good, the bad and the uncertain), and share some original analysis &#8212; including a look at my own usage data over two-plus years of working with Claude.</p><h2>The Adoption Curve is Working</h2><p>The first chapter of the report is about something unsurprising but worth noting: usage on Claude.ai is diversifying. The ten most frequently requested tasks accounted for 24% of all conversations in November 2025, but only 19% in February 2026. Tasks outside the top ten now claim a larger share, which means <a href="http://claude.ai">claude.ai</a> is <a href="https://en.wikipedia.org/wiki/The_Long_Tail_(book)">beginning to grow a tail, and it may well be a long one</a>.</p><p>The other big and unsurprising finding is that coding is migrating from the consumer interface to the API. This is largely via Claude Code, which (<a href="https://x.com/AshishPonders/status/2036029556327346264">among other things</a>) splits work into smaller agentic calls. Meanwhile, personal use &#8212; sports scores, product comparisons, home maintenance &#8212; rose from 35% to 42% of Claude.ai conversations. Many more people are using <a href="http://claude.ai">Claude.ai</a>, and those of us who have been using it for a while are increasingly using it for more complicated stuff, and both are Mostly Good Things.</p><p>This data was collected between February 5 and 12, 2026. This is <em>before</em> the Anthropic-Pentagon confrontation, and if you haven&#8217;t read about that, <a href="https://en.wikipedia.org/wiki/Anthropic%E2%80%93United_States_Department_of_Defense_dispute">you have some catching up to do</a>. The SCR designation came on February 27, and <a href="https://www.businessinsider.com/claude-number-1-app-stores-chatgpt-apple-google-ai-2026-3">Claude exploded in popularity almost immediately after</a>. 
So the diversification story here is driven by Anthropic&#8217;s Super Bowl advertisements (which the report explicitly credits) and organic adoption, <em><strong>not</strong></em> the DoW drama. The <em>next</em> Economic Index report will therefore make for even more fascinating reading.</p><p>But we have the earlier data to analyze in this report, and it makes for rich reading. As the report itself says, this is a textbook <a href="https://en.wikipedia.org/wiki/Technology_adoption_life_cycle">adoption curve</a> case study. Early adopters will typically cluster around specific high-value use cases, which in this case mostly means coding. Later adopters bring a much wider, more casual range of tasks. The average estimated value of work done on Claude.ai (<a href="https://www.onetonline.org/">as measured by US wages for associated occupations</a>) dipped slightly from $49.30 to $47.90 per hour. Why should it dip? This ties back to the long-tail point I made above: more people are using <a href="http://claude.ai">claude.ai</a> than was the case earlier, and that means usage will diversify.</p><h2>Three Inequalities Worth Worrying About</h2><p>As an economist, I linger longest on the inequality findings. The report touches on geographic convergence, but I think there are actually three distinct inequality stories buried in the data, and they don&#8217;t all point in the same direction.</p><p><strong>First, within the US, convergence is continuing.</strong> The share of per-capita usage going to the top five states dropped from 30% to 24% since August 2025. The Gini coefficient has fallen. This is good news, because it suggests that knowledge about what AI can do, and how to use it, is diffusing within the United States. But the pace has slowed. Early estimates suggested states would converge to roughly equal usage in 2&#8211;5 years, but Anthropic now says that the updated estimate is 5&#8211;9 years. 
I&#8217;m not sure why this should be so, and it is a minor puzzle.</p><p><strong>Second, across countries, divergence is </strong><em><strong>increasing</strong></em><strong>.</strong> The top 20 countries now account for 48% of per-capita usage, up from 45% in the earlier report. This is the opposite of the within-US story, and it worries me more. Supply-side inequality (who has access to AI models) is one thing, and it is bad enough. But demand-side inequality (this is weird terminology for an economist to use, but these are weird times) is stickier and potentially more consequential. Why? Because it hints at differences in institutional culture, work patterns, and organizational willingness to experiment with AI. These are not easy things to change! If you think of access barriers as transaction costs (in the Coasean sense), supply-side costs can be lowered with better infrastructure, policies and pricing. Demand-side barriers, though, live in mental models and habits, and those determine whether a country&#8217;s institutions even <em>try</em> to integrate AI into their workflows. Early days, to be clear, but this is something to keep an eye on.</p><p><strong>Third &#8212; and this is the one the report hints at but doesn&#8217;t fully develop &#8212; within lagging countries, inequality is likely </strong><em><strong>even worse</strong></em><strong>.</strong> The report notes that in low-adoption countries, the user base is still dominated by technical early adopters doing high-value work. This <em>looks</em> like sophisticated usage, but it actually reflects slower adoption. These countries show divergence <em>between</em> themselves and the leading countries, while also showing extreme concentration <em>within</em> their own populations. This is not good news. <a href="https://www.hse.ru/data/624/768/1238/Maskin.pdf">This paper is worth reading in this context</a>.</p><p>The report&#8217;s own learning-curve findings make this worse. 
If experienced users get better at using AI, and early adoption is self-reinforcing, then the countries that are behind today will fall further behind tomorrow. And that too in the most important resource of them all: the accumulated human capital of knowing how to work with these tools. <em>Metis</em> in AI compounds, and more people in more countries need to be working every day to acquire it.</p><h2>The Primitives: Statistically Significant but Small</h2><p>Economic primitives are <a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report#:~:text=simple%2C%20foundational%20measures%20of%20the%20ways%20that%20Claude%20is%20used">simple, foundational measures of the ways that Claude is used</a>. Here are some key statistics: the average years of education needed for the tasks in prompts fell from 12.2 to 11.9, and human-only task completion time is down by about 2 minutes. All differences are statistically significant at p&lt;0.001.</p><p>But with a sample of one million conversations, statistical significance isn&#8217;t, in my opinion, the real story. What matters is the magnitude of the effect, and these changes are small potatoes, no? The 0.3-year drop in education level and the two-minute reduction in task time are real, but how noteworthy are they, really? We&#8217;re still in the early days of telling ourselves stories about AI adoption, so I&#8217;d say these are worth tracking over time in case they become trends, but not worth building an argument on today.</p><h2>Who&#8217;s Picking the Smart Model &#8212; and Who&#8217;s Not Even in the Room?</h2><p>The report&#8217;s analysis of model selection is one of its genuinely new contributions. Users choose Opus (Anthropic&#8217;s most capable model) for tasks associated with higher-paying occupations. 
For every additional $10 of hourly wage for a task, Opus usage increases by 1.5 percentage points on Claude.ai and 2.8 percentage points on the API.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mDBH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mDBH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mDBH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png" width="1456" height="823" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:823,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mDBH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!mDBH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a4aff6-fd1b-4453-aa36-f2819ec6d988_2048x1158.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">https://www.anthropic.com/research/economic-index-march-2026-report</figcaption></figure></div><p>I think this finding is slightly open to misinterpretation if read as &#8220;higher-paid professionals use Opus more.&#8221; What it really shows is that people pick the more powerful model when the <em>specific task</em> demands it. A software developer may pick Opus because a particular problem demands the best reasoning available. An academic, by contrast, may pick Sonnet. Why? Because the per-conversation (per task, to use Acemoglu&#8217;s framing) cognitive requirement is lower, even though teaching itself as a whole is complex work.</p><p>I suspect Opus&#8217; share would correlate even more tightly with users&#8217; own assessment of what percentage of their tasks can be automated. The people reaching for the most powerful model are the ones whose work most directly maps onto what AI can do. 
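</p><p>The wage relationship reported earlier is linear enough to write down directly. Here is a minimal sketch in Python: the slopes (1.5 and 2.8 percentage points per $10 of hourly wage) come from the report, while the baseline share is a made-up intercept, included only so the illustration produces plausible numbers.</p>

```python
# Sketch of the reported wage-to-Opus-share relationship.
# Slopes are from the report; the baseline share is an invented
# placeholder, since the report's intercept isn't quoted here.

def opus_share(hourly_wage, baseline=10.0, pp_per_10_dollars=1.5):
    """Predicted Opus usage share (%) as a linear function of hourly wage."""
    return baseline + pp_per_10_dollars * (hourly_wage / 10.0)

# Moving from a $30/hr task to a $90/hr task:
for surface, slope in [("Claude.ai", 1.5), ("API", 2.8)]:
    gap = opus_share(90, pp_per_10_dollars=slope) - opus_share(30, pp_per_10_dollars=slope)
    print(f"{surface}: +{gap:.1f} percentage points of Opus usage")
```

<p>The point of the sketch is that the slope is the whole story: the same $60/hr difference implies a nearly twice-as-large shift in model choice on the API as on Claude.ai.</p><p>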
It would be interesting to look at what the understanding is, by profession, about what chunkable tasks their daily workflows can be broken down into.</p><h3>The Gap: High Wages, Low Exposure</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bTHJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bTHJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bTHJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png" width="1456" height="823" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:823,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bTHJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!bTHJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fafe26a-f0ad-47c2-9637-a9b3fa98e6a9_2048x1158.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">https://www.anthropic.com/research/economic-index-march-2026-report</figcaption></figure></div><p>I was curious about those occupations that sit towards the lower right of the charts. These are folks with high hourly wages, but low share of conversations using Opus-class models. Here&#8217;s Claude Code&#8217;s take on these, after it did its thing:</p><p>The occupations missing from AI adoption fall into three clusters:</p><p><strong>Healthcare practitioners</strong> make up the largest group &#8212; physicians, dentists, physical therapists, occupational therapists. Their core work involves physical examination, hands-on treatment, and regulated clinical decision-making. When physician tasks <em>do</em> appear in Claude data, 82% of the time it&#8217;s patients asking about their own symptoms, not doctors using AI for clinical reasoning.</p><p><strong>Senior managers and executives</strong> form the second cluster. 
Chief Executives, Engineering Managers, Sales Managers &#8212; their work centres on relationships, judgement calls, and organisational politics. When CEOs show up, their top task is operational analysis &#8212; a support function, not the strategic and interpersonal work that defines the role.</p><p><strong>Engineers in physical domains</strong> round out the list. Civil, environmental, marine, and industrial engineers. Unlike software engineers, their work involves site inspections, physical materials, regulatory compliance, and specialised tools like CAD software. Their Claude tasks, when they exist, tend to be administrative &#8212; budgets, spec documents &#8212; not engineering design.</p><p>What it will take to change this is a question that I am sure engages the best and brightest minds in all of the major AI labs. The next wave of AI impact for these professions won&#8217;t come from chatbots getting smarter. It will come from AI being embedded in the domain-specific tools they already use &#8212; electronic health records, CAD software, project management platforms, for example. The interface matters as much as the intelligence, partly because workflows really matter to these guys. In Coasean terms: the current chat interface imposes a transaction cost. You have to chunk your work into text-friendly subtasks, translate your domain expertise into prompts, and finally, you have to figure out a way to get the LLM&#8217;s output back into your workflow. For a software developer, the cognitive load to do this is near zero. For a dentist today, it&#8217;s enormous.</p><h2>Learning Curves: The Report&#8217;s Core Finding</h2><p>The report&#8217;s second chapter asks a great question: do people get better at using Claude over time?
I&#8217;d love to find out how the team at Anthropic chooses the main focus for each of these reports, because this one is a lovely pick.</p><p>And the answer to the question is a resounding yes, and that too along multiple dimensions.</p><p>Compared to newer users, high-tenure users (6+ months) show:</p><ul><li><p>10% fewer personal conversations, 7 percentage points more work usage</p></li><li><p>6% higher education level in their inputs</p></li><li><p>More collaborative interaction patterns, less directive delegation</p></li><li><p>Greater task diversity (less concentration in top-10 tasks)</p></li><li><p><strong>10% higher success rate</strong> in their conversations</p></li></ul><p>That last number is important. And it survives increasingly stringent controls. When you compare high- and low-tenure users <em>doing the exact same task</em> (using task fixed effects), the gap shrinks to about 3 percentage points. When you add controls for model choice, language, country, and use case, it moves back up to about 4 percentage points.
That&#8217;s what the chart below is showing you.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!y7nS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!y7nS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!y7nS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png" width="1456" height="823" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:823,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!y7nS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!y7nS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98a77ace-9dda-4e65-ae1e-467cc6ed9b79_2048x1158.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">https://www.anthropic.com/research/economic-index-march-2026-report</figcaption></figure></div><p>This is evidence consistent with learning-by-doing. Given the same task, the same model, and the same language, experienced users still get better outcomes. The report is appropriately cautious about alternative explanations (cohort effects and survivorship bias could obviously play a role, for example), but the controlled regressions rule out the simple versions of those stories.</p><p>The central explanation that they offer has a  concrete TMKK: <em><strong>use these tools more</strong></em><strong>. </strong>The more you use these tools, the better you get at using these tools. The report makes this connection explicitly in its conclusion. Early adopters with high-skill tasks have more successful interactions, and these same users may be simultaneously the most exposed to AI disruption and most aided by it. 
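</p><p>To make the fixed-effects logic concrete, here is a toy simulation (all numbers invented; this is not the report&#8217;s data or code). If high-tenure users also gravitate towards easier tasks, the raw success gap overstates learning, and comparing within each task recovers the smaller effect: the same kind of shrinkage the controlled regressions show.</p>

```python
# Toy illustration of why task fixed effects shrink the tenure gap.
# TASK_BASE and TRUE_TENURE_EFFECT are invented for the example.
import random

random.seed(0)

TASK_BASE = {"easy": 0.85, "hard": 0.60}   # assumed per-task success rates
TRUE_TENURE_EFFECT = 0.03                  # assumed within-task learning gap

rows = []
for _ in range(200_000):
    high_tenure = random.random() < 0.5
    # Assume high-tenure users pick "easy" tasks more often (70% vs 40%).
    task = "easy" if random.random() < (0.7 if high_tenure else 0.4) else "hard"
    p_success = TASK_BASE[task] + (TRUE_TENURE_EFFECT if high_tenure else 0.0)
    rows.append((high_tenure, task, random.random() < p_success))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Raw gap: pooled over tasks, confounded by task choice.
raw_gap = mean(s for h, _, s in rows if h) - mean(s for h, _, s in rows if not h)

# Fixed-effects-style gap: compare within each task, then average.
within_gap = mean(
    mean(s for h, t, s in rows if h and t == task)
    - mean(s for h, t, s in rows if not h and t == task)
    for task in TASK_BASE
)

print(f"raw gap: {raw_gap:.3f}, within-task gap: {within_gap:.3f}")
```

<p>With these invented numbers the raw gap comes out around 10 percentage points while the within-task gap sits near the true 3, which is the same direction of movement the report finds.</p><p>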
The people who learned to work <em>with</em> AI earliest are pulling ahead. The ones who haven&#8217;t started yet face a compounding gap. Please, use these tools.</p><h2>One Concern: The Imagination Ceiling</h2><p>The data shows that the years of education needed to understand user prompts increases by almost 1 year for every additional year of Claude usage. That is Very Good News, because it shows that users are getting better at using these tools. But the relationship appears to taper off after roughly a year of use.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_hXo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_hXo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!_hXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png" width="1456" height="823" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:823,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_hXo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 424w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 848w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!_hXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a512313-1bba-4d1f-adfe-6dc484e4f372_2048x1158.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">https://www.anthropic.com/research/economic-index-march-2026-report</figcaption></figure></div><p>This is a problem of a lack of imagination, maybe? People get better at using Claude <em>for what they already know to ask for</em>, but they may not learn to ask for qualitatively different things. Folks can now do what they were already doing better, and faster&#8230; but we still have not gotten good enough at figuring out what else we can do with these tools. If that is indeed what is happening, it&#8217;s an important finding. 
It would mean that even experienced users aren&#8217;t scaling up ambition, just scaling up efficiency.</p><p>The report&#8217;s other finding &#8212; that high-tenure users are more collaborative and less directive &#8212; is somewhat reassuring here. They&#8217;re iterating with Claude, as opposed to just issuing commands. But the plateau in prompt complexity suggests limits to how far that collaboration extends.</p><p>I want to offer a gloriously unscientific framework for thinking about this. Consider a simple 2&#215;2:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h-TH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h-TH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 424w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 848w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 1272w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!h-TH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png" width="1456" height="315" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:315,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:68503,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/192302748?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!h-TH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 424w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 848w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 1272w, https://substackcdn.com/image/fetch/$s_!h-TH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0380fc01-712b-4ea4-bddb-de87d7c8214d_1868x404.png 1456w" 
sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Look Ma, a 2x2!</figcaption></figure></div><p>New users treat AI as a substitute for simple tasks, say as a better Google search. Intermediate and advanced users learn to treat it as a complement for complex work. The danger zone is when people start using AI as a <em>substitute</em> for complex tasks, delegating without understanding. This is what we should watch for. So far, the data is reassuring: advanced users collaborate more, not less. But that could change as models become more capable.</p><h2>N=1: My Own Learning Curve</h2><p>To make this concrete, I ran the same kind of analysis on my own Claude usage data. I exported my conversation history and had Claude Code classify each conversation by task category, complexity, interaction pattern, and platform &#8212; with everything anonymised. I&#8217;ve been using Claude for roughly the last two and a half years, and this covers <a href="http://claude.ai">Claude.ai</a> conversations and Claude Code sessions, but not Claude CoWork.</p><p>A few things jump out:</p><p><strong>Task diversification mirrors the aggregate pattern, but more dramatically.</strong> My earliest quarter was almost entirely brainstorming. 
By my most recent quarters, I have seven or eight categories with a meaningful share in each category: coding, writing, research, data analysis, teaching preparation, and more.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BdX1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BdX1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BdX1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png" width="1456" height="722" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:722,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BdX1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!BdX1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e095a6-b08b-44fb-bf8e-20b149da7f49_2048x1016.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Based on my Claude Usage over the last two and a half years</figcaption></figure></div><p><strong>The coding migration story is me in miniature.</strong> Coding barely exists in my early quarters, then grows steadily to become my dominant category. 25% of my total usage is now Claude Code. 
I <em>am</em> the user the report describes when it talks about coding shifting from Claude.ai to API-based tools.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IsTz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IsTz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 424w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 848w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 1272w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IsTz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png" width="1456" height="618" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:618,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IsTz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 424w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 848w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 1272w, https://substackcdn.com/image/fetch/$s_!IsTz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb87af62-3a02-4616-a6e7-a22872140dca_2048x869.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Based on my Claude usage over the last two and a half years</figcaption></figure></div><p><strong>My interaction split is 52% collaborative, 48% directive</strong> &#8212; almost exactly even. This matches the report&#8217;s finding that we &#8220;veteran&#8221; users maintain collaborative patterns.
I use Claude directively when the task is clear (Claude Code), collaboratively when I&#8217;m thinking (conversations like this one).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SzqL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SzqL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SzqL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png" width="1456" height="722" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:722,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SzqL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!SzqL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ba15f9d-fca8-4102-a583-b1d231265d92_2048x1016.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Based on my Claude usage over the last two and a half years</figcaption></figure></div><p><strong>77% of my conversations are moderate-to-complex.</strong> Only 5% are personal. 
That is a stark contrast with the report&#8217;s finding that new users devote 42% to personal use.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!THmw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!THmw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!THmw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!THmw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!THmw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!THmw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png" width="1456" height="722"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:722,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!THmw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 424w, https://substackcdn.com/image/fetch/$s_!THmw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 848w, https://substackcdn.com/image/fetch/$s_!THmw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!THmw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff34167bb-8fc5-4dd7-8f68-e5daa0c75c5c_2048x1016.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Based on my Claude usage over the last two and a half years</figcaption></figure></div><p><strong>And my conversation length is increasing, not plateauing.</strong> Median turns per conversation have spiked in recent months (9&#8211;14 turns).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V31S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V31S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 424w, 
https://substackcdn.com/image/fetch/$s_!V31S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 848w, https://substackcdn.com/image/fetch/$s_!V31S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 1272w, https://substackcdn.com/image/fetch/$s_!V31S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V31S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png" width="1456" height="933" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:933,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V31S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 424w, 
https://substackcdn.com/image/fetch/$s_!V31S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 848w, https://substackcdn.com/image/fetch/$s_!V31S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 1272w, https://substackcdn.com/image/fetch/$s_!V31S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb964ae5b-b195-4f9c-b2e6-7355fb1b86d5_2048x1312.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Based on my Claude usage over the last two and a half years</figcaption></figure></div><p>(I had stopped my Anthropic subscription for a while in the middle, and I suppose I should be cleaning up this data a little better. But I happily plead guilty to YOLOing this: I had an idea, asked Claude Code to do its thing, and I&#8217;m just inserting these charts as is into this post.)</p><p>I am not a representative user. I&#8217;m what they&#8217;d call a power user (at least for someone who is certainly not from a software background), an economist who thinks about AI professionally, and someone who treats Claude as an intellectual sparring partner rather than a tool. But this is pure guesswork on my part, and that is worth keeping in mind. And part of the reason I included this section is to incentivize you to do this for yourself, of course.</p><h2>What I&#8217;ll Be Watching In Weeks (Days?) To Come</h2><p>A few predictions and things I want to track as future Economic Index reports come out:</p><p><strong>The March data will capture the DoW/Anthropic confrontation.</strong> Did the supply chain risk designation and surrounding media coverage drive a new wave of adoption? If so, was it the Super Bowl pattern again (casual users, diversification) or something different?</p><p><strong>Automation on Claude.ai should be the leading indicator.</strong> As the new cohort of users matures, I predict their usage will graduate from &#8220;What can you do?&#8221; (exploratory, personal) to &#8220;What can you do for me?&#8221; (task-specific augmentation) to &#8220;Just do it&#8221; (automation, directive patterns).
Tracking the automation share on Claude.ai specifically might show whether the broad user base is making this transition.</p><p><strong>The cross-country divergence is the metric that matters most for policy.</strong> If AI skill is accumulated through use, and use requires a basic level of institutional willingness to experiment, then the countries falling behind today are building a deficit that gets harder to close. This isn&#8217;t just about access to models. It&#8217;s about building cultures of adoption. And unlike supply-side gaps, which markets and infrastructure investment can close, demand-side gaps require something harder: changing how institutions think about work. I am pessimistic about this, so please tell me why I&#8217;m wrong.</p><p><strong>The domain-specific tools question.</strong> Will AI exposure for healthcare, engineering, and management occupations increase as AI moves from chat interfaces to embedded tools and/or robotics? Or are the barriers more fundamental than interface design? The gap analysis suggests the answer will not come from chatbots getting smarter. Serious question: how much smarter can the chatbots possibly get?<br><br>And that is a concrete reason both to watch AI developments in the weeks to come and to eagerly await the next edition of this report.</p><div><hr></div><p><em>The Anthropic Economic Index data is <a href="https://huggingface.co/datasets/Anthropic/EconomicIndex">publicly available on HuggingFace</a>. The personal usage analysis described in this post was conducted using anonymised data; raw conversation data is neither shared nor published.
All original analysis was done with/by Claude Code &#8212; which is, I suppose, the point.</em></p>]]></content:encoded></item><item><title><![CDATA[The Natural Philosopher’s Centaur]]></title><description><![CDATA[Yesterday I wrote a post about Tyler Cowen&#8217;s argument that the research paper is dying in economics.]]></description><link>https://www.econforeverybody.com/p/the-natural-philosophers-centaur</link><guid isPermaLink="false">https://www.econforeverybody.com/p/the-natural-philosophers-centaur</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Tue, 24 Mar 2026 08:52:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xIWn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Yesterday I<a href="https://www.econforeverybody.com/p/go-one-level-up-and-do-the-economics"> wrote a post</a> about Tyler Cowen&#8217;s argument that the research paper is dying in economics. That post was a collaboration between me and Claude. I wrote it, to be clear. But Claude helped me build an outline for it after a lengthy conversation, and Claude (and ChatGPT) helped me edit it. All faults that remain are, as they say, mine. But here is one way of thinking about it, given what I said in yesterday&#8217;s post: I brought the questions and the disposition, Claude brought the analytical horsepower, and neither of us could have written it alone.</p><p>I want to talk about that process, because I think it matters more than the post itself.</p><div><hr></div><h2><strong>The Centaur, Revisited</strong></h2><p>Back in September 2023, Ethan Mollick and his co-authors<a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged"> published a landmark study</a> on how consultants at BCG used AI. 
They found that consultants using GPT-4 finished more tasks, finished them faster, and produced higher-quality results. Quaint, what those prehistoric fellows were doing back in 2023, yes, sure. But the interesting finding was, even back then, in the <em>how</em>. The best performers fell into two camps: centaurs, who divided labour strategically between themselves and the AI across phases, and cyborgs, who interwove their work with the AI moment by moment.</p><p>Three years on, <a href="https://x.com/emollick/status/2036188072287637544">Mollick tweeted</a> something that I think goes one level deeper: technology always deskills us on something (we lost cursive handwriting, and our parents lost the slide rule). We will lose skills over time, and that&#8217;s fine. I knew how to use log tables at one point in time, and it simply does not matter now. Neither do <em>z</em> and <em>t</em> tables today, but again, that is a whole other story.</p><p>But the important thing is whether we make deliberate choices about which skills to keep and which to let go, or whether we fail to notice our own ability to make those choices atrophying.</p><p>This is one level up on the original question from Ethan Mollick&#8217;s post from 2023, which was &#8220;How should/do humans and AI divide labour?&#8221;. The one-level-up question is actually two questions. First: what makes one centaur better than another? In this case, I think the underlying question is this one: what determines whether the centaur goes somewhere <em>interesting</em>, or merely goes <em>fast</em>?</p><p>And second: if you think of yourself as a centaur or a cyborg, what properties does the AI bring to the table? And given those properties, what complementary skillsets should you be bringing to the table? The underlying question here is one of the points I made in yesterday&#8217;s blogpost. The AI half of the centaur will update itself every three months or so (and it used to be six months or so three months ago).
Your skills need to remain complementary over time (treadmill), not just for now (monument).</p><div><hr></div><h2><strong>The Horse&#8217;s Eyes</strong></h2><p>Let me describe what the horse half of this particular centaur actually does today. In yesterday&#8217;s post, here&#8217;s what happened. I read Tyler&#8217;s post. I had a reaction &#8212; &#8220;go one level up.&#8221; I brought that reaction to Claude. Claude produced a first draft. It was structurally sound but not deep enough &#8212; I said so, and here&#8217;s the important part: I didn&#8217;t just critique; I generated five specific directions the post needed to go. Claude engaged with each one. Arin Dube&#8217;s tweet arrived midway through our conversation, and I saw how it might help. My <a href="https://www.econforeverybody.com/p/choices-costs-horizons-and-incentives">2018 post on choices, costs, horizons and incentives</a> became the analytical spine. Claude produced a second draft. I rewrote that entire draft, and changed it considerably. I ran this draft past both ChatGPT and Claude, and then I hit publish.</p><p>It is not like this is a fixed workflow for me. I experiment with different ones for different blog posts. There are drafts I will write out entirely by hand, and only give to an LLM to check. I have an entire process on my local machine that I use for other blog posts (see screenshot below). And there is a <a href="https://www.econforeverybody.com/p/the-economics-of-building-things">rather infamous one where I tricked my readers</a>. Sometimes a cyborg, sometimes a centaur, and in either case, I don&#8217;t yet know which workflow feels most natural to me.
I&#8217;m still very much in the experimental phase, and maybe this will last forever (yay!)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xIWn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xIWn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 424w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 848w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 1272w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xIWn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png" width="1456" height="517" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:517,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:279876,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/191958540?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xIWn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 424w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 848w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 1272w, https://substackcdn.com/image/fetch/$s_!xIWn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05f2d12b-dfb9-43cd-956a-7034394dd8ba_2824x1002.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>So what is Claude doing in all of this, regardless of which particular workflow I use? It is doing what a horse does best. It is traversing the knowledge landscape without tiring, and/or drilling down on specific points without tiring. It is a horse with endurance and strength.</p><p>But that undersells it. Claude was also spotting features of the terrain. If I say we should head in that direction because <em>x</em>, Claude is very, very good at saying &#8220;Yes, and also <em>y, z&#8230;and a,b and c</em> as a bonus!&#8221;. Once you point it in a particular direction, the horse can see, and more than you possibly can. What it cannot do &#8212; or at least, will not do for now &#8212; is choose where to go. Go, or dig? 
And if it is to be go, in which direction?</p><p>That&#8217;s the rider&#8217;s job.</p><div><hr></div><h2><strong>What Makes a Good Rider?</strong></h2><p>Don&#8217;t focus on the horse, as is our wont. We all tend to rhapsodize about how powerful the model is, and what&#8217;s inside the frontier and what remains outside. Mollick&#8217;s jagged frontier is a map of the <em>horse&#8217;s</em> capabilities. But the quality of the centaur depends at least as much on the rider.</p><p>So what does a good rider bring? What should a good rider bring?</p><p><strong>Direction:</strong> Breadth or depth? Should we explore the Lucas Critique implications, or drill deeper into Dube&#8217;s finding? This is judgment, not technique. It&#8217;s the ability to sense which thread matters and which is a tangent. It is also the ability to decide when a tangent or a digression is worthwhile. And what makes it worthwhile may be injecting a richer, related idea from another domain into the discourse. Or it could be injecting some levity into the proceedings because it is all too dreary otherwise. That&#8217;s your call as the rider, and you should be making it. That is what makes the post yours.</p><p><strong>Disposition:</strong> The habit of asking weird questions. The refusal to accept the first framing. The instinct to go one level up, and then to go across into whatever domain that level requires. Ask random questions! That practice isn&#8217;t pedagogical whimsy. It&#8217;s training for the only skill the horse can&#8217;t provide. Is the Tinbergen principle just a way to restate the pigeonhole principle? That&#8217;s a great weird question!</p><p><strong>Stakes:</strong> When I publish a post, my name is on it. My students might read it, my colleagues might read it, public intellectuals might read it. I have skin in the game.
That changes how I evaluate what Claude produces &#8212; not just &#8220;is this good?&#8221; but &#8220;can I stand behind this?&#8221; That quality filter comes from having a reputation to risk and a community to face. The horse, however eagle-eyed, has no reputation. You have skin in this game in a way that the model does not. That&#8217;s not a risk; it is a blessing. Use it to your advantage.</p><p>And there is a fourth thing, the one that I think matters most.</p><div><hr></div><h2><strong>Quality, Not Virality</strong></h2><p>There are two kinds of prompts you can give the genie.</p><p>The first: &#8220;Give me five topics to create a video about today, optimizing for virality on my channel.&#8221;</p><p>The second: &#8220;I was thinking about the post that we wrote yesterday, and here&#8217;s what&#8217;s gnawing at me.&#8221;</p><p>Both use the same technology. Both produce outputs. But they do very different things to the human who issues them.</p><p>The first prompt <em>atrophies</em> the choosing muscle. Each time you outsource &#8220;what should I think about?&#8221; to the genie, the muscle weakens. You&#8217;re training yourself to not-choose. You&#8217;re <a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged">falling asleep at the wheel</a>, to use Mollick&#8217;s phrase from the BCG study, but at a deeper level &#8212; not just failing to catch the AI&#8217;s errors, but failing to generate your own questions.</p><p>The second prompt <em>strengthens</em> the choosing muscle. It forces you to introspect before you engage. To notice the itch. To identify what&#8217;s unresolved. To articulate, even roughly, the direction you want to go in. The AI then helps you chase it. But the noticing? It is yours. And it is precious.</p><p>The virality prompt doesn&#8217;t engage on the quality dimension at all. It outsources direction to a metric.
The gnawing prompt <em>starts</em> from thinking about quality &#8212; from the felt sense that something matters, that something is unresolved, that the conversation needs to go further. Remember, the model is currently stateless during pauses in a conversation. You, hopefully, are not. That&#8217;s an advantage you have, and you should double down on it.</p><p>The age of AI should make you think harder than ever before, and if you don&#8217;t see it that way, I say you&#8217;re doing it wrong.</p><div><hr></div><h2><strong>Natural Philosophers Make the Best Centaurs</strong></h2><p>This, I think, is the real answer to what makes one centaur better than another.</p><p>The good centaur isn&#8217;t just the one with the best horse, or the one who divides labour most efficiently. The good centaur is the one whose rider has the disposition of a natural philosopher: curious across domains, stubbornly committed to following questions wherever they lead, and oriented by quality rather than metrics.</p><p>The Royal Society&#8217;s motto &#8212; <em>Nullius in verba</em>, take nobody&#8217;s word for it &#8212; is a rider&#8217;s motto. Don&#8217;t accept the default framing. Don&#8217;t let the horse choose the direction. Don&#8217;t outsource the question. Go look for yourself, ask for yourself, and care enough about the answer to put your name on it.</p><p><a href="https://www.econforeverybody.com/p/go-one-level-up-and-do-the-economics">Yesterday&#8217;s post</a> argued that the binding constraint on economic knowledge isn&#8217;t execution. It is imagination, the ability to generate questions that the existing paradigm doesn&#8217;t hand you. Arin Dube&#8217;s<a href="https://x.com/arindube/status/2035833779764248727?s=20"> data</a> confirmed it: LLMs haven&#8217;t increased paper output, because RAs were never the scarce input.</p><p>But there&#8217;s a deeper version of this finding. 
Perhaps the people who produce the <em>good</em> papers aren&#8217;t using AI to produce more papers at all. They&#8217;re using it to think more carefully about fewer things. The production function doesn&#8217;t shift outward. It shifts <em>inward</em> &#8212; toward depth, toward quality, toward the thing that&#8217;s gnawing at you.</p><p>The economists asking for five paper topics optimised for citations are on the virality prompt. The rest, hopefully, are on the gnawing prompt. Both are riding the same horse, sure. But only one of them is becoming a better rider. Only one of them is focusing on Ethan&#8217;s bottom line in his tweet today: &#8220;What is important is whether we will make deliberate choices about what skills to keep &amp; which they will be.&#8221;</p><p>If you aren&#8217;t asking weird questions, you aren&#8217;t learning. And if you aren&#8217;t choosing your own questions&#8230; that is, if you aren&#8217;t cultivating a feeling for what gnaws at you, you aren&#8217;t really riding at all. You&#8217;re just a passenger.</p><p>Ask weird questions. Choose them yourself. Think about what they mean.</p><p>Then ride. 
And keep on ridin&#8217;!</p>]]></content:encoded></item><item><title><![CDATA[Go One Level Up (And Do The Economics When You Get There)]]></title><description><![CDATA[Tyler Cowen asks when the research paper will disappear in economics.]]></description><link>https://www.econforeverybody.com/p/go-one-level-up-and-do-the-economics</link><guid isPermaLink="false">https://www.econforeverybody.com/p/go-one-level-up-and-do-the-economics</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Mon, 23 Mar 2026 18:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Tyler Cowen <a href="https://marginalrevolution.com/marginalrevolution/2026/03/when-will-the-research-paper-disappear-in-economics.html">asks</a> when the research paper will disappear in economics. Think about it, he says: AI can now generate, evaluate, and improve papers. Ergo, the paper is no longer the scarce unit of intellectual contribution. You can write as many papers as you want, and each of those can have as many variants as you want.</p><p>So what becomes scarce in such a world? The system does. The system (or the box) has components: the dataset, the code and the method of analysis. The code and the method are now solved problems, or close to being so. So folks who can figure out innovative ways to collate interesting, novel and high quality datasets will become scarce. Tyler himself points to the possibilities of synthetic datasets being generated with the help of AI, but doesn&#8217;t ask the obvious question. If this task is repeatable and verifiable, it can be automated. 
Which brings us back to square one.</p><p>There is a helpful heuristic for thinking about AI, though it is really hard to apply, and it does give you a bit of a headache. As <a href="https://x.com/tszzl">roon put it on Twitter</a> recently: in the age of AI, whatever level you&#8217;re thinking at, go one level up.</p><p>Tyler went one level up from the paper publishing game as it is played today. But the reason this heuristic gives you a headache is that it applies recursively.</p><p>And here is exactly where principles of economics come into play: each time you go up a level, you don&#8217;t just get a new abstraction. You also get a new set of <a href="https://www.econforeverybody.com/p/choices-costs-horizons-and-incentives">incentives, costs, time horizons, and choice sets</a> for everybody involved. You get, in other words, to update your view of the world. Not because the principles have changed, but because the circumstances to which they&#8217;re being applied have changed.</p><p>Declaring that we&#8217;ve moved to a higher level of abstraction is easy. Doing the economics of what that level actually looks like is harder, but more useful.</p><div><hr></div><h2>The Market Has Already Spoken</h2><p>Before we go climbing levels, let&#8217;s look at the data. Arin Dube <a href="https://x.com/arindube/status/2035833779764248727?s=20">points out</a> that the advent of LLMs hasn&#8217;t raised the number of NBER working papers above trend. Nor submissions to top journals. His explanation: LLMs substitute for good RAs, but not for good ideas. And RA labour supply was never the binding constraint on economic scholarship. By the way, I&#8217;m willing to give credence to the idea that the academic world is just that slow, and again, principles of economics help us understand this.
The academic profession isn&#8217;t incentivized to speed up, for many different reasons (shooting themselves in the foot would be chief among them).</p><p>Tyler&#8217;s post implicitly assumes that the bottleneck in economic knowledge production is in the execution phase: writing the paper, running the analysis, doing the robustness checks. If that were true, LLMs should have already caused an explosion in output. But they haven&#8217;t, so you&#8217;d have to update your priors at least a little: the bottleneck probably lies elsewhere.</p><p>The market for academic papers is talking, and it&#8217;s telling us something important. The constraint is upstream of the paper. It&#8217;s upstream of the system that produces the paper, too. Building a better system is essentially building a more sophisticated RA. But as <a href="https://x.com/arindube/status/2035833779764248727">Arin Dube points out</a>, this excellent RA still needs someone to tell it what to investigate. Replacing academic papers with systems amounts to proposing to upgrade the RA layer even further. But Dube&#8217;s data suggests that upgrading the RA layer doesn&#8217;t matter much at the margin.</p><p>So what <em>is</em> the binding constraint? Let&#8217;s go find out.</p><div><hr></div><h2>One Level Up, Then Another, Then Another</h2><p>Let&#8217;s try an example of thinking one level up. Here&#8217;s one of Tyler&#8217;s ideas in his blog post: &#8220;How about &#8216;I am Tyler Cowen, what is it you think I will find interesting in this data set?&#8217;&#8221; How might you go one level up here?</p><p>How about: &#8220;I am Tyler Cowen, what data set should I be trying to get, given the most interesting questions I&#8217;m thinking of at the moment?&#8221;</p><p>Remember, this is recursive. How does one go one level up from <em><strong>here</strong></em>?</p><p>Try this on for size: &#8220;What interesting questions am I not asking that I should be?
Use what you know about me to formulate your answer.&#8221;</p><p>But remember, each change in level also changes what is scarce, what one&#8217;s incentives are, and the associated costs and time horizons. At the first level (analyzing the dataset), the scarce skill is statistical technique: the ability to extract findings from data. At the second level (choosing the dataset), it&#8217;s judgment about data acquisition &#8212; knowing what to look for and where. At the third level (the unknown-unknowns level), it&#8217;s imagination &#8212; the ability to generate questions that wouldn&#8217;t occur to someone embedded in their current paradigm.</p><p>And as we stated earlier, each level has its own economics. Technique is a skill that is expensive to acquire, but it has the benefit of being durable: you only need to learn the basics of causal inference once. As opposed to data acquisition, which has ongoing costs: every new dataset requires negotiation, cleaning, contextual understanding. But what of imagination? I&#8217;m not even sure how to measure it, and cultivating imagination looks nothing like traditional education. It is about cultivating a disposition or an inclination. It is about inculcating the habit of asking weird questions, and about stubbornly refusing to accept a given framing.</p><p>Think of LLMs as a massive supply shock at the technique level. But an infinite increase in the ability to do analysis doesn&#8217;t automatically increase the ability to generate interesting ideas in the first place. Or, if you prefer economist speak: the production function for economic knowledge seems to have near-zero elasticity of substitution between technique and imagination. We&#8217;ve flooded the market with automated RAs and nothing much seems to have happened. Why?
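</p><p>To see what near-zero elasticity of substitution means in miniature, here is a toy CES production function. This is my illustrative sketch, not anything estimated from Dube&#8217;s data; the parameter values are made up purely to show the mechanism.</p>

```python
# Toy CES production function for "economic knowledge":
#   Y = (a * T^rho + (1 - a) * I^rho)^(1/rho)
# where T = technique and I = imagination.
# The elasticity of substitution is sigma = 1 / (1 - rho), so a very
# negative rho means the two inputs are nearly non-substitutable
# (the limit rho -> -infinity is Leontief: Y = min(T, I)).

def knowledge_output(technique, imagination, rho=-20.0, share=0.5):
    """CES output; rho=-20 gives near-zero elasticity of substitution."""
    return (share * technique**rho + (1 - share) * imagination**rho) ** (1.0 / rho)

baseline = knowledge_output(1.0, 1.0)    # before the LLM supply shock
flooded = knowledge_output(100.0, 1.0)   # 100x technique, same imagination

print(f"baseline: {baseline:.3f}, after 100x technique: {flooded:.3f}")
```

<p>Under these made-up numbers, a hundredfold increase in technique raises output by only a few percent, because imagination has not moved. Flood the market with automated RAs, and not much happens.</p><p>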
Perhaps because RAs were never the scarce input to begin with.</p><p>I&#8217;ve <a href="https://econforeverybody.com/2024/01/29/in-praise-of-random-questions/">written before</a> about asking my students five random questions at the end of every class. About anything, as long as it&#8217;s not about the topic we just discussed in class. Here are my all-time top three: why do cockroaches flip over when they die? <em><a href="https://www.facebook.com/socialbuzz101/posts/melody-itni-chocolaty-kyun-hai-for-decades-this-simple-question-has-lived-in-ind/1548570763942779/">Melody itni chocolatey kyon hai</a>?</em> (&#8220;Why is Melody so chocolatey?&#8221;) Were Ross and Rachel on a break?<br><br>The practice sounds like a pedagogical trick. But one way to think about it is that it&#8217;s training for the only skill that Dube&#8217;s data says matters: the ability to generate questions that the existing paradigm doesn&#8217;t hand you.</p><p>The cost structure is interesting. Imagination might be the cheapest skill to <em>exercise</em>: it&#8217;s free to ask a weird question. But it is among the hardest to <em>cultivate</em>. Midway through each semester, I have to tell my class that I will not leave until at least five questions have been asked. Trust me on this. This isn&#8217;t about fancypants labs, or jazzy classrooms. It&#8217;s about cultivating environments that reward curiosity: classrooms that invert the syllabus, communities that value questions over answers, and cultures that treat &#8220;I don&#8217;t know, let&#8217;s find out&#8221; as a high-status move. Creating that culture, especially in the world in which we live today, is <em><strong>hard</strong></em>.</p><p>And it doesn&#8217;t scale the way AI does.</p><div><hr></div><h2>The Journal Problem</h2><p>Tyler asks: &#8220;Do we even need the AER any more to certify which are the best papers?
Just ask the AIs.&#8221; And later on in the post he says, &#8220;What if you submit to a journal a data set and some code?&#8221;</p><p>But do our usual thing: who needs the journal?</p><p>The journal has always solved two problems simultaneously: quality certification (&#8220;this is good work&#8221;) and attention routing (&#8220;therefore, read this&#8221;). In a world where humans couldn&#8217;t read everything, bundling these two functions made sense. The AER&#8217;s authoritative stamp told you what to pay attention to <em>and</em> that it was worth your time. Quality and signaling, two for the price of one.</p><p>AI unbundles them. Quality certification can be handled, say, by a council of frontier models. Such a council can rate papers, check replicability, assess influence. And it might well be better at this than three overworked referees with their own agendas. The models are already a gazillion times faster, and not that much worse in quality, if at all.</p><p>But attention routing is a different beast altogether.</p><p>In the current world, attention routing is a centralized bureaucracy. There&#8217;s a clear hierarchy &#8212; AER, QJE, Econometrica &#8212; and everyone &#8216;Schelling Points&#8217; around it. This is a well-trodden path, and we know what I&#8217;m going to say next: publication delays, conformity pressure, the distortion of research agendas toward &#8220;publishable&#8221; questions, yada yada yada. But the <em>benefit</em> is legibility. When a hiring committee looks at a CV, they can read it. When a graduate student looks for important work in trade theory, they know where to start.</p><p>Imagine you live in a world where your AI agent reads everything and filters it for you. You don&#8217;t have to imagine this, by the way. You <em><strong>are</strong></em> in that world, you just don&#8217;t know it yet. Quality certification is a solved problem in this world: your agent will do it for you.
How do you distinguish a bad arXiv paper from a great one? Simple: ask your agent to evaluate it for you. And given the urgency with which publishing happens on arXiv, human editors at top-notch journals are going to suffer in comparison on the speed-quality Pareto frontier. Why bother waiting for a costly and delayed signal from the top journals, if you can read excellent research that lives at the cutting edge of your field?</p><p>And if journals are going to see a fall in status (<a href="https://x.com/momin_rayhan/status/2035842254141628853">if speed and quality in publications matter, what do you think is going to happen to the status of journals</a>?), how do we judge the quality of a researcher? How does a hiring committee in a university evaluate a researcher when there&#8217;s no shared prestige signal, just millions of personalized feeds? (Are you tempted to ask &#8220;Who needs the university?&#8221; Congratulations, and welcome to Headache Hotel. But that&#8217;s a whole other story.)<br><br>The good news is that the researcher&#8217;s choice set expands. You can publish anything, anywhere, in any format. But the bad news is that the <em>incentives</em> become bewildering. You&#8217;re no longer optimising for &#8220;what will impress the AER editor.&#8221; You&#8217;re optimising for reach across millions of personalized filtering mechanisms. That&#8217;s a very different optimisation target, and nobody knows how to think about it.</p><p>The cost structure flips too. Producing research gets cheaper, because the AI genie helps. But building a reputation might get more expensive, because there&#8217;s no single ladder to climb. In the current world, one publication in the AER buys you legibility across the entire profession. In the new world, legibility is fragmented.
You might be famous in one cluster of agent-curated feeds and all-but-invisible in another.</p><div><hr></div><h2>The Lucas Critique, All the Way Down</h2><p>Tyler suggests publishing &#8220;a method for simulating human behavior, to run AI-simulated experimental economics.&#8221; Build the system and don&#8217;t worry about the paper.</p><p>But go one level up, and you run into the Lucas Critique in a form that should make us quite uncomfortable.</p><p>The Lucas Critique says that models estimated under one policy regime break when the regime changes, because agents adjust their behaviour in response to new rules. It&#8217;s a warning about the instability of empirical relationships when the rules of the game change.</p><p>But think about this: universal AI assistants are a paradigm change for human decision-making itself. Every economic agent now has, or soon will have, a reasoning engine in their pocket. Their choice architecture is fundamentally altered. Price comparisons that took hours take seconds. Heuristics that marketers could count upon will no longer work as well. Contract language that was opaque becomes transparent. Negotiation strategies that once required experience can be generated on demand.</p><p>So: any behavioural model estimated on pre-AI humans is already suspect. The price elasticities, the risk preferences, the heuristics and biases &#8212; all of these were measured on humans making decisions with their own reasoning capacities. The new agent is a human-AI centaur, and we don&#8217;t have stable estimates for how that centaur behaves. Agents will have no problem thinking one level up, I assure you. You could tell me that variance will go down because of this, and I might agree with you. But you could also tell me that variance will go up, and well, who knows for sure?</p><p>In any case, the time horizon for any empirical finding <em>shortens</em>. Your carefully estimated parameter was valid when humans were computing intuitively. 
Now they&#8217;re asking Claude. How long does your estimate last? A year? Five? The cost of maintaining a working system of behavioural simulation isn&#8217;t just computational. It is also the cost of keeping up with a moving target. Every time the AI improves, the individual agent changes, and your model drifts.</p><p>The Lucas Critique in the age of AI says: the system needs to be rebuilt constantly, because the centaurs inside it are constantly updating their beliefs and therefore their actions. That&#8217;s not an argument against systems per se &#8212; it&#8217;s an argument that the system is a treadmill, not a monument. And the economics of treadmills is very different from the economics of monuments.</p><div><hr></div><h2>The Hayekian Disclaimer (Which Is Also the Point)</h2><p>Tyler asks whether tenure should be given to folks building systems instead of writing papers, or to folks who can build capabilities.</p><p>You know the drill by now. What is tenure <em>for</em> in a world where time-to-insight has collapsed? Tenure is like insurance. It gives the professor a floor, a form of income security. Think of it like the premium paid by the university in exchange for the option value on long-term research. If the long term gets shorter, the option value drops. And if building capability is going to be the scarce resource that we think it is going to be, those folks aren&#8217;t going to be queuing up for tenure. Again, in this world, what is the university for?</p><p>But the new equilibrium for academic employment and institutions in the education sector is a matter of institutional design and culture, and I don&#8217;t have a prediction to offer. Particularly in these times.</p><p>I can&#8217;t tell you what the new world looks like, and what steps we will take to get to that world from this one. Nobody can. That&#8217;s a Hayekian point &#8212; the new order for knowledge production will emerge from millions of individual adaptations, not from a blueprint.
What I <em>can</em> do is help you think about the transition using principles of economics. At each level of abstraction, ask: what are the constraints? The choice sets? The incentives? The time horizons? The costs? These frameworks of analysis remain the same, even when everything they&#8217;re analysing is in flux. They help us understand the territory, even if they can&#8217;t draw us the exact path.</p><p>Which brings me back to Arin Dube&#8217;s finding. The market has already told us where the binding constraint is. It isn&#8217;t execution. It isn&#8217;t even systems. It&#8217;s the question &#8212; the weird, generative, paradigm-breaking question that no amount of RA labour, artificial or otherwise, can substitute for.</p><p>I&#8217;m making my way (very slowly) through the Baroque Cycle trilogy by Neal Stephenson. And one thing I&#8217;ve learnt by reading those books is that the Royal Society&#8217;s motto is <em>Nullius in verba</em> &#8212; take nobody&#8217;s word for it. For 350 years, that has been a privilege of the few: the membership of the Royal Society. But if the club is defined by identifying with the society&#8217;s founding values, it has now become a far easier one to break into.</p><p>The tools for inquiry are now available to anyone with a browser and the disposition to ask. We can all be fellows of the Royal Society in spirit. Why, some of us have done important work in dog cancers with a browser and an intense incentive to ask. But &#8220;can all be fellows&#8221; is not the same as &#8220;will&#8221;. The AI genie is out of the bottle, yes &#8212; but the genie grants wishes, and the hard part has always been knowing what to wish for.</p><p>If you aren&#8217;t asking weird questions, you aren&#8217;t learning. And if you aren&#8217;t doing the economics of the weird question, you aren&#8217;t thinking like an economist.</p><p>Ask weird questions.
And then analyze them using principles of economics.</p>]]></content:encoded></item><item><title><![CDATA[Coase, Alchian, Demsetz, and the Economics of Training Away Your Own Scarcity]]></title><description><![CDATA[The $200/Month Monitor, Explained]]></description><link>https://www.econforeverybody.com/p/coase-alchian-demsetz-and-the-economics</link><guid isPermaLink="false">https://www.econforeverybody.com/p/coase-alchian-demsetz-and-the-economics</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Fri, 20 Mar 2026 07:17:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://x.com/Miles_Brundage/status/2034437643896066218">Miles Brundage</a> recently worried that Anthropic has &#8220;an org-wide case of AI psychosis&#8221;.</p><p>In plain English: they&#8217;re shipping features <a href="https://aiproductivity.ai/news/claude-desktop-daylight-saving-time-infinite-loop-bug/?utm_source=chatgpt.com">faster than they can notice what&#8217;s breaking</a>. <a href="https://x.com/sethlazar/status/2034446322800177263">Seth Lazar piled</a> on with a specific example: Opus now defaults to a million-token context window, and you can&#8217;t even opt back down to 200k, even though performance degrades as context grows. (Source: <a href="https://thezvi.substack.com/p/ai-160-what-passes-for-a-pause">TheZvi&#8217;s Substack</a>)</p><p>Here&#8217;s what caught my eye. Lazar is saying: I want the product to be worse in one dimension (smaller window) so that it works better in another (accuracy). In olden, pre-AI times, I would have said this is an example of preferences on the part of a (human) agent. 
These days, we call it taste and judgment.</p><p>But hiding behind this is a rich way to help economics students understand the world we&#8217;ve entered. If you&#8217;re learning microeconomics right now, or have struggled with it in the past, you might enjoy this essay.</p><div><hr></div><h2>The reCAPTCHA Move</h2><p>Here&#8217;s the first analogy I&#8217;d use to help you start to think this through:</p><p>In the mid-2000s, Luis von Ahn had a problem and an insight.</p><p>Problem: millions of books needed to be digitised, and OCR software kept getting the hard words wrong. Insight: millions of people were already solving CAPTCHAs every day to prove they were human.</p><p>Lightbulb moment for Luis: What if you made the CAPTCHA be the hard word?</p><p>So he built reCAPTCHA. You&#8217;d see two words: one the system already knew (to verify you were human), and one it didn&#8217;t (to get you to do free OCR work). Users thought they were logging in. They were actually labelling training data. Google bought it for an undisclosed sum, and it went on to digitise the entire New York Times archive and large chunks of Google Books. <a href="https://www.npr.org/2020/05/22/860884062/recaptcha-and-duolingo-luis-von-ahn">Fun story, if you want to read the full thing</a>.</p><p>Here&#8217;s why this gladdens an economist&#8217;s heart: Von Ahn didn&#8217;t pay anyone to do the work. He didn&#8217;t even ask. He made the work identical to something people already wanted to do. The cost of acquiring labels was zero &#8212; or more precisely, the cost was already being paid by someone else (the website owner buying CAPTCHA security).</p><p>Now think about Claude Code, or Codex. 
Every time a developer sends a prompt, gets code back, and says &#8220;no, that&#8217;s not what I meant&#8221; &#8212; every time they edit the output, reject a suggestion, or restructure what the agent produced &#8212; <a href="https://aparnacd.substack.com/p/design-for-learning">they are generating exactly the kind of signal that makes future models better</a>. This is not a synthetic benchmark signal. This is a real-world, domain-specific signal, from a person who understands what &#8220;correct&#8221; means in this particular codebase, for this particular business purpose. This is feedback from a person with skin in the game. That&#8217;s gold for the AI guys. Or whatever is more valuable than gold. It is hard to keep up these days.</p><p>But here&#8217;s the bit that makes one&#8217;s eyes go all beer-goggly: they&#8217;ve got us paying $200 a month to do this&#8230; and we&#8217;re the ones thinking this is a great deal!</p><p>Von Ahn made the security task identical to the annotation task. Anthropic has made the coding task identical to the model improvement task. Users think they&#8217;re building software. They are also, unavoidably, training the model on real-world projects.</p><div><hr></div><h2>But Von Ahn Had It Easy</h2><p>Here&#8217;s where that Von Ahn analogy breaks down, and where the economics gets even more interesting.</p><p>Von Ahn&#8217;s reCAPTCHA was one-way extraction. The user got access to a website. Google got OCR labels. The user had no &#8220;residual claim&#8221; on the digitised books, and no reason to care.</p><p>Claude Code and Codex are different beasts altogether. The developer gets working software, which is a genuinely valuable output. Anthropic/OpenAI get a training signal. Both parties walk away with something they want. Both parties are, in a real sense, monitoring each other. The developer monitors the model&#8217;s output (was this code correct? Was it useful? Is it safe?).
The model&#8217;s provider monitors, in aggregate, the patterns of human correction (where do users reject? What do they edit? What do they choose to restructure?). And note that it does this across levels. You&#8217;re using Claude Code; so is your manager. So is her manager! And Claude Code is learning across all of your interactions with it, on the same project, but at different levels. Hold that thought, because this matters.</p><p>But for now, let us open our econ textbooks. A guy called Ronald would have gotten a glint in his eye right now.</p><p>I&#8217;m referring to Coase, of course, and I&#8217;m talking about the theory of the firm here. You using Claude Code or Codex is precisely what makes those tools (and the models underlying those tools) better. To be fair, the direct pipeline from your corrections to model training depends on whether you opted in. And it is true that many enterprise users are excluded entirely. But the subtler signal doesn&#8217;t require opt-in. Every prompt that you choose to retry, every session that you abandon in frustration, and every editing exercise across millions of users shapes what gets built next. It is not just the explicit corrections that matter, but also what can be inferred from the choices you make, and every thumbs-up (explicit or implicit) that you give. Or don&#8217;t give! Everything is a signal. Read Aparna&#8217;s blog post on this; <a href="https://aparnacd.substack.com/p/design-for-learning">I&#8217;ll link to it again</a>. Note this sentence in particular from their blog: &#8220;Every click, tab, press, accept shapes not only the user experience but improves the model intelligence.&#8221;</p><p>Anyone who has used a Windows laptop will immediately know and appreciate the point I&#8217;m trying to get at: we can all assure you that using Windows did not make Windows better.
You had to raise an issue or a ticket, and eventually, at some point down the line, a new Windows version might remove the bug that broke your heart. Eventually. One day. Maybe.</p><p>That is not this world, and all those transaction costs have now been internalized, and not just inside the firm, but inside the product itself. This is Coase on steroids! But we&#8217;re deep in the transaction woods now, and for this territory, we need a heavier machete than Coase alone can give us.</p><p>Paging Alchian-Demsetz!</p><div><hr></div><h2>A Quick Detour Through 1972</h2><p><a href="https://x.com/dwarkesh_sp/status/2032572157243302154">The Patel Bros. recently spoke about Alchian, but about another of his many awesome ideas (Alchian-Allen)</a>. But we&#8217;re talking about a different Alchian paper today.</p><p>If you&#8217;ve taken a course in the economics of organisations, you&#8217;ve probably encountered <a href="https://josephmahoney.web.illinois.edu/BA549_Fall%202010/Session%205/Alchian_Demsetz%20(1972).pdf">Alchian and Demsetz (1972)</a>. If you haven&#8217;t, here&#8217;s the core idea, and I promise it&#8217;s relevant.</p><p>Imagine two people (or four, if you&#8217;re a Friends fan) carrying a heavy sofa up a flight of stairs. The output &#8212; sofa successfully delivered &#8212; is joint. You can&#8217;t easily separate out how much each person contributed. If one person slacks off a little, the sofa still gets there, just more slowly, and you can&#8217;t tell who was shirking.</p><p>This is the metering problem in team production. When individual contributions are hard to measure, people tend to free-ride. Alchian and Demsetz&#8217;s solution: appoint a monitor. Give someone the job of watching everyone else&#8217;s effort. And to make sure the monitor doesn&#8217;t slack off, make the monitor the residual claimant &#8212; the person who keeps whatever surplus is left over after everyone else is paid.</p><p>That&#8217;s their theory of why firms exist. 
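The sofa problem lends itself to a back-of-the-envelope calculation. Here is a toy Python sketch with made-up payoff numbers (my own, purely for illustration): each unit of effort adds 10 to joint output but privately costs the worker 7, so effort is socially worth it, yet under an equal split each worker pockets only 5 of the 10 they create.

```python
# Toy numbers for the Alchian-Demsetz metering problem (all invented):
# each unit of effort adds 10 to joint output and privately costs 7.
OUTPUT_PER_EFFORT, EFFORT_COST = 10, 7

def equal_split_payoff(my_effort: int, partner_effort: int) -> float:
    """No monitor: joint output is split 50/50, effort is a private cost."""
    output = OUTPUT_PER_EFFORT * (my_effort + partner_effort)
    return output / 2 - EFFORT_COST * my_effort

# Whatever the partner does, shirking beats working: classic free-riding.
for partner in (0, 1):
    assert equal_split_payoff(0, partner) > equal_split_payoff(1, partner)

def monitored_payoff(my_effort: int, wage: int = 8) -> int:
    """With a monitor: you are paid per *observed* unit of effort."""
    return wage * my_effort - EFFORT_COST * my_effort

assert monitored_payoff(1) > monitored_payoff(0)  # now working pays

# The monitor, as residual claimant, keeps what's left after wages,
# so watching carefully pays too.
residual = OUTPUT_PER_EFFORT * 2 - 8 * 2  # both workers exert effort
print(residual)  # 4
```

Effort creates 3 units of surplus per worker (10 minus 7), but the equal split hands half of each worker&#8217;s marginal product to the other, so shirking dominates. Paying an observed wage restores the incentive, and the positive residual is what keeps the monitor honest.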
The boss isn&#8217;t the boss because of some mystical authority. The boss is the boss because someone needs to watch the team, and the person who watches the team should be the one who benefits most from watching carefully.</p><p>Now apply this to Claude Code.</p><div><hr></div><h2>Who Monitors Whom?</h2><p>A developer working with Claude Code is engaged in team production. The developer contributes intent, domain knowledge, judgment, taste. The model contributes speed, pattern-matching, and broad knowledge of code patterns. The output &#8212; working software &#8212; is genuinely joint. Neither party could produce it as efficiently alone.</p><p>Alchian and Demsetz (AD) would ask: who should monitor this team? Their answer: whoever has the lowest cost of observing and evaluating the other party&#8217;s contribution.</p><p>And the answer is obvious. The developer monitors the model. Nobody else can. Only the developer knows whether the generated code actually does what the business needs. Only the developer can evaluate whether the architecture is maintainable, whether the edge cases matter, whether the abstraction is right. This isn&#8217;t just checking syntax. It&#8217;s taste and judgment &#8212; the thing that&#8217;s hardest to automate and most expensive to acquire.</p><p>So far, so standard. The monitor monitors. Classic AD.</p><p>But here&#8217;s the twist that AD didn&#8217;t anticipate: <strong>the monitor is paying the entity being monitored.</strong></p><p>In the classic firm, the residual claimant pays workers. Here, the developer pays Anthropic (via a subscription), and then does the monitoring work on top of that. The developer is simultaneously the team-production partner, the quality monitor, and the paying customer.</p><p>Why does this work? Because the developer is also the residual claimant on the immediate output. They keep the working code. 
That&#8217;s valuable enough to justify both the subscription fee and the monitoring effort.</p><p>But Anthropic is also a residual claimant &#8212; on a different residual. They keep the training signal. The patterns of correction, rejection, and editing, aggregated across millions of users, are enormously valuable for improving future models.</p><p>So both parties are residual claimants. Both are monitoring. And both think they&#8217;re getting a good deal. The metering problem doesn&#8217;t just solve itself &#8212; it solves itself twice, in opposite directions. Non-zero sum games are the best games to play, and this is a great example. Anthropic/OpenAI win, but so do the users.</p><p>One absolutely should invoke Coase and non-zero sum games here, but one needs to go deeper to appreciate the true institutional innovation. This is not &#8220;just&#8221; clever pricing (20/100/200 dollar plans). This is a structural arrangement where the transaction cost of gathering the high-quality training signal has not just gone lower: it has gone negative. Users pay to provide the signal because the immediate output is valuable enough to justify the cost.</p><div><hr></div><h2>So Who Owns What?</h2><p><a href="https://x.com/tszzl/status/2021423402079355032">Roon says you should go one level up</a>. That works while coding, but while analyzing, one should aim to go one level deeper. And this is the point where AD stops being sufficient (interested readers should ask their LLMs to talk about Grossman and Hart. Paste the link to this blog post inside your LLM of choice, choose the best model available to you, and say: &#8220;the blog&#8217;s author asked me to ask you about Grossman and Hart&#8221;. Try it!). What follows is Grossman-Hart territory, but without me referencing the actual papers; I&#8217;m outsourcing that bit to the LLMs.</p><p>AD explains the present equilibrium rather well. But what happens over time?
My answer: something uncomfortable is happening.</p><p>Every correction the developer makes &#8212; every &#8220;no, I meant this, not that&#8221; &#8212; partially reveals their judgment. Not just what was wrong, but how they think about what&#8217;s right. Their evaluative schema. Their taste. The thing that currently makes them irreplaceable as the monitor.</p><p>There are really three distinct assets being produced in every Claude Code session:</p><ol><li><p><strong>The code itself:</strong> The developer owns this. Clear property right.</p></li><li><p><strong>The iterative process itself: </strong>The pattern of accepts, rejects, edits, and corrections. Anthropic captures this (subject to its data policies). The property right here is contractual &#8212; it depends on the terms of service.</p></li><li><p><strong>The implicit rubric:</strong> The deeper structure of judgment that the corrections partially reveal, and that stays in nebulous form inside the developer&#8217;s head. This is the asset that matters most, and the one whose eventual ownership is most ambiguous.</p></li></ol><p>Most discussions lump assets 2 and 3 together. But they&#8217;re not the same. A correction log is not a theory of judgment. &#8220;User rejected this function&#8221; is not the same as understanding why &#8212; what principle of software design, what business context or what aesthetic preference drove the rejection.</p><p>The whole race that is being run today is about whether enough feedback from enough people lets the platform understand the implicit rubric. And not just any broad rubric. The specific invisible rubric that is inside the head of the person the model is currently working with. 
If the model succeeds at this, what does that person do next?</p><p>We are, in a real sense, training away our own scarcity.</p><div><hr></div><h2>The Pirsig Question</h2><p>This is where I want to move away from econ, and go over to one of my favorite authors.</p><p>In <em>Zen and the Art of Motorcycle Maintenance</em>, Pirsig makes a distinction that I think matters here. The mechanic who cares about the motorcycle produces better work than the one who doesn&#8217;t. And the caring isn&#8217;t reducible to any specification or checklist. Quality, for Pirsig, lives in the relationship between the person, the artifact, and the purpose. It&#8217;s not a property of the output alone.</p><p>AD and Grossman/Hart both assume that what matters can be treated as an economically legible object &#8212; contribution, control, surplus, bargaining power. Pirsig asks a different question: is quality the kind of thing that can be extracted, standardised, and transferred?</p><p>If yes, then every correction you make is just training data with a lag. Given enough corrections from enough caring humans, the model learns to simulate caring effectively. The market doesn&#8217;t require metaphysical replacement. Functional substitution is enough.</p><p>If no&#8230; that is, if quality exists only in the act of a particular person caring about a particular artifact for a particular purpose &#8212; then there remains an irreducibly situated human role that no amount of aggregated feedback can replicate.</p><p>I don&#8217;t pretend to know the answer. My guess is that the current answer is &#8220;yes for most tasks, no for some, and the boundary keeps moving.&#8221; But the question about where quality resides matters, because it determines whether the current equilibrium is stable or transitional.</p><div><hr></div><h2>TMKK?</h2><p><strong>If you&#8217;re a developer using Claude Code or Codex:</strong> You are the monitor, and you are good at your job. 
But understand that your monitoring work has dual value &#8212; to you (working code) and to the platform (training signal). The more corrections you provide, the more you&#8217;re improving a system that may eventually need fewer corrections. This isn&#8217;t a reason to stop using these tools. It is a reason to think about what makes you irreplaceable beyond the act of correction itself. What do you, and will you, really own? Choosing which problem to work on? The customer relationship? The deployment context? The private data? The taste that can&#8217;t be inferred from accept/reject patterns alone? Something else?</p><p><strong>If you&#8217;re thinking about AI strategy for an organisation:</strong> The Alchian-Demsetz lens tells you that the developer is currently the natural monitor because they have the lowest cost of evaluating output quality. Now, this is true today. But the model provider is also monitoring, in aggregate, across most of your competitors&#8217; developers too, and as we discussed earlier, across different levels inside your own organisation. The real question isn&#8217;t who monitors better right now. It&#8217;s who gets to keep learning from the monitoring. Your developers learn one codebase, but the model learns from all of them &#8212; developers, project managers, <em>their</em> managers &#8212; and this is across departments, across firms, and therefore across multiple levels of abstraction.</p><p><strong>If you&#8217;re an economist or a policy person:</strong> The property rights question is going to become one of the defining issues of AI governance. It&#8217;s not about data privacy in the traditional sense. It&#8217;s about who owns the judgment signal that users generate as a byproduct of using AI tools. Today, I don&#8217;t think we have good answers to this question.
I have had enough trouble in trying to figure out if this is the right question!</p><p>Whether this is intentional or emergent, the labs have built a system where users pay to generate the most valuable training signal in existence &#8212; real-world, high-stakes, domain-specific human judgment on AI outputs. Von Ahn would be proud.</p><p> If this is really the deal on the table, is it a fair one?</p><p>Yes, for now, is my answer. The working code is genuinely valuable. The subscription price is reasonable. The monitoring is something the developer would do anyway, because they need the code to be right.</p><p>But &#8220;for now&#8221; is doing a lot of work in that sentence.</p><div><hr></div><p>I chatted with both Claude (Anthropic) and ChatGPT (OpenAI) to refine my ideas before writing this essay, and Claude suggested a lot of edits, most of which I have incorporated. The irony of using both models to think about who captures value from human-AI interaction, and to make this essay more readable, is not lost on me. But it is more than fair to say that both provided genuine intellectual contributions. Both, presumably, learned something from the exchange. As did I, of course. As did you, by reading this post, hopefully.</p><p>Who got the better deal? 
Ask me again in five years.</p>]]></content:encoded></item><item><title><![CDATA[Guns, Germs, and Code]]></title><description><![CDATA[A Simple Thought Experiment]]></description><link>https://www.econforeverybody.com/p/guns-germs-and-code</link><guid isPermaLink="false">https://www.econforeverybody.com/p/guns-germs-and-code</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Tue, 17 Mar 2026 03:52:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>A Simple Thought Experiment</h1><p>Imagine that a biological virus is going to spread through the world. Its spread will be rapid, virtually unstoppable, and essentially inevitable. But this is a weird virus, and it will have weird effects. Weird and wonderful even, in some cases, and weird and disastrous in others. But there is good news: there is a vaccine available already, with virtually no side effects. It won&#8217;t be a perfect cure, but it will dramatically reduce the worst effects of the virus. At least for now.</p><p>Should you take the vaccine?</p><p>Now, in what follows, I will show you that this is the exact equivalent of choosing to work with Claude Code or Codex today. Off we go:</p><h1>Software Bonkers</h1><p>Craig Mod says <a href="https://craigmod.com/essays/software_bonkers/">he is software bonkers</a>.</p><p>I can empathize.</p><p>Anybody who has used Claude Code, or any one of the equivalent harnesses available today, will empathize. 
If you don&#8217;t know what a harness means in this context, <a href="https://www.oneusefulthing.org/p/a-guide-to-which-ai-to-use-in-the">read this</a> by Ethan Mollick.</p><p>Craig describes how easy it was to create the exact software that he needed, and while he has a computer science degree, <a href="https://x.com/d33v33d0/status/2031515683390107958">there are</a> <a href="https://x.com/sebkrier/status/2032696950630252586">plenty of</a> <a href="https://x.com/bentossell/status/2032212463735701803">people who</a> <a href="https://x.com/cblatts/status/2031537219303862469">could tell you</a> <a href="https://www.reddit.com/r/ClaudeAI/comments/1rs8p68/whats_the_most_impressive_thing_youve_built_with/">similar stories</a>. As with all stories, so with these: they are models of the world, and your mileage with them will vary. But don&#8217;t depend on any one particular story; try to get a sense of where the world is going by reading all of them.</p><p>Anyone who can use a computer and is somewhat familiar with the English language can do these things and more, starting right now. You are reading this blog, and that means you meet both conditions: you can use a computer, and you are at least somewhat familiar with the English language.</p><p>Here is what this means for you: you can now just create the software that you need for your workflows. You do not need to go out and search for software that fits your needs. You reduce your search costs.</p><p>You no longer need to adjust your workflow to suit available software. The software you create fits your needs, rather than the other way around. You reduce your transaction costs.</p><p>Here&#8217;s an analogy: imagine that it was time for you to buy a new car. Instead of seeing what was available in the market, you could just make a car suited to your specific needs.
Large legroom at the front, not quite so much at the rear, a way to easily get your dog into the car, and the music that the teenager plays only pipes into her headphones as opposed to the entire car. We are not there (yet<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>), but the equivalent of all this for the software you need? We are very much there; we were there as of four months ago or so.</p><p>Craig&#8217;s post is about his needs, and how he built his metaphorical car in this brave new world.</p><p>Which means, naturally, talking about the time the Old World met the New World. (Spoiler alert: it didn&#8217;t go well for the New World at the start).</p><h1>Guns, Germs and Steel</h1><p>About five hundred years ago, Europeans reached the Americas. About thirty years ago, a guy called Jared Diamond wrote a book about this. The title of that book was &#8220;Guns, Germs and Steel&#8221;. It is a very good book, and you should read it. The broad idea is that when those Europeans landed in the New World, they were able to bring with them three things: guns, germs and steel (the steel standing in for modern industrialization more generally). There is much, much more to the book, of course, but this is a blog post, and so we&#8217;re going to stop with just this summary for now.</p><p>Now, the thing that matters to us in this blog post is that this was a shock to the way the New World did things. Those guys didn&#8217;t have military defenses against guns, they didn&#8217;t have biological defenses against these new germs, and they didn&#8217;t have any economic defenses against steel. Economists would have called this an exogenous shock, and it wasn&#8217;t just &#8216;a&#8217; shock, but rather a triple whammy. And of those three whammies, the introduction of smallpox in particular proved to be devastating.
Europeans, hardened by centuries of living in close proximity to livestock, had acquired immunity to many of the germs that they ended up carrying into the New World. But for everybody in the New World, it was catastrophic. And I do mean catastrophic: by one estimate, <a href="https://www.pbs.org/gunsgermssteel/variables/smallpox.html">95% (!!!) of the population was wiped out</a> within a few generations after the <a href="https://en.wikipedia.org/wiki/Columbian_exchange">initiation of the Columbian Exchange</a>.</p><p>One of the things I want to do with this analogy is to help you understand that you can use this same frame to think about AI today. It (AI) has landed on our shores, and it brings along with it the modern day equivalent of guns, germs and steel. And we don&#8217;t have defenses against these things, in much the same way that the residents of the New World back then didn&#8217;t have defenses against guns, germs and steel. Except it is worse this time around, but we&#8217;ll get to that later.</p><p>For now, let us talk about Matt Mullenweg.</p><h1>That One Time Matt Didn&#8217;t Take the Bait</h1><p>Matt Mullenweg is the guy behind WordPress, and other things besides. Look him up; it will be a worthwhile exercise.</p><p>This is a guy who knows his way around the internet. Why? Well, mostly because his firm just happens to <a href="https://wordpress.com/blog/2025/04/17/wordpress-market-share/">be a significant chunk of the internet</a>.</p><p><a href="https://ma.tt/2026/03/gone-almost-phishin/">Matt has a hair-raising post about a very sophisticated phishing attack</a>, which he was able to spot well in time. But don&#8217;t underestimate what just happened. Hackers were almost able to pull off an online heist against the guy whose job is to make sure that 43% of the internet doesn&#8217;t suffer the same fate. You can almost sense the reluctant admiration in Matt&#8217;s writing at the sheer chutzpah of it all.
And that admiration is tinged with horror, because Matt knows we&#8217;re just getting started. Things are about to get a whole lot worse.</p><p>Let us begin with a rejoinder to the obvious counter you might make: that there is nothing new about phishing attacks; they have been around for a long time. You wouldn&#8217;t be wrong; they have indeed always existed. But the same technology that makes it possible to build highly tailored software solutions affordably... also makes it possible to build bespoke phishing &#8220;solutions&#8221; at low cost. In fact, a personalized phishing scam is nothing but a specialized software solution!</p><p>The equivalent of an invasion of Old World germs is about to hit us, and it is going to happen very, very soon. How soon? Within the next six months, and I&#8217;m being very generous with my estimates.</p><p>Hold on to that thought, though.</p><h1>The Two Axes</h1><p>In his post, Craig talks about this being the &#8220;dorks-only&#8221; phase. That is, he is saying that only the hobbyists and the tinkerers have caught on to what is happening in the world today. And by implication, he sees a world to come, in which more and more people will join the tribe of the tinkerers, and everybody will be whizzing up ultra-customized software solutions. Or that&#8217;s the way I understand his point, at any rate.</p><p>And while I agree with him that many more people should be tinkering, I think it is worth emphasizing that there are two different forces at play here. The first is exactly that: more and more people joining the tribe of the tinkerers. The second is that Claude Code (and its friends from other labs) will just get better at doing all this by itself.
At some point in the not-too-distant future, you can just have Claude Code see your world, see what&#8217;s missing on the software side and just&#8230; cook up those solutions that you&#8217;re currently conjuring up via vibe-coding.</p><p>To stick with the analogy I used earlier on in this post: you don&#8217;t have to design that tweaked-just-so-for-you-and-you-alone car. Claude will know that you will need a new car, and it will just offer to make it for you.</p><p>Or, in economist-y terms: the production function for ultra-customized software won&#8217;t just be augmented by labor in the months to come, but also by capital that is going to be much more capable than it is today.</p><p>Which brings us back to the thought I asked you to hold on to in the previous section. Those viruses that the Europeans brought over from the Old World, they had one important limitation. Their capability to improve upon their attack over generations was limited. To use a phrase that the post-Covid world will understand all too well: they didn&#8217;t have much gain of function. But even if you&#8217;ve been under that rock they keep talking about, you will have heard of the rapid advancements in AI. AI is not, let me assure you, lacking in gain of function. And that rapidly improving capability will build both things: the just-so car, but also the just-so phishing scam.</p><p>And so the good news is that we can just get what we want in the software world.</p><p>The bad news? We will also get exactly what we do not want at all. And lots of it.</p><p>So is there a way out?</p><h1>Mithridization, Baby</h1><p>(Or <a href="https://en.wikipedia.org/wiki/Mithridatism">mithridatism</a>, if you want to be all pedantic).</p><p>What is mithridization? There was a king who was fairly certain that he was going to be poisoned, and so the guy just went and made himself immune to pretty much every poison he could think of.
That&#8217;s the TL;DR, but do read the entire article.</p><p>My specific ask of you? You should get a sense of what is going on in this world. Not just because you should be familiar with what is possible today from a creator standpoint. But also because it matters from the standpoint of survival itself. The more you walk around in this jungle, the more familiar you will become with the terrain. And so when you are attacked (<a href="https://x.com/nikitabier/status/2021632774013432061">and note that I said when, not if</a>), survival becomes easier if you know where you are, and what you are doing.</p><p>Remember the thought experiment we started with? I&#8217;m saying you should take the vaccine!</p><p>OK fine, you might say. But how many shots, and how often? In other words, are you saying I should aspire to Matt Mullenweg levels of proficiency when it comes to doing things online?</p><p>Here&#8217;s how I would answer this question: think of immunity as a spectrum. Most of us today are the medical equivalent of folks who are taking serious immunosuppressant drugs. That is, we have little to no immunity against what is coming. But as Matt&#8217;s story makes clear, even folks with heightened levels of immunity aren&#8217;t exactly safe.</p><p>Or here&#8217;s another analogy: if you&#8217;re going to be traveling about in India, you should know how to spot a fake Bisleri bottle of mineral water. You should be able to do the online equivalent, at the very least. But even if you are way better than that, you should aspire to be a bit better in the future.</p><h1>Want to get started?</h1><p>Get the twenty dollar plan from either OpenAI or Anthropic. Download either the Codex app (OpenAI) or the Claude app (Anthropic). Once you log in by following the instructions on your screen, get started by saying: &#8220;Can you make a presentation based on this blog post?&#8221; Then paste the link to this blog post.
This is a perfectly fine start, but amp up your ambitions as you go along. The more you tinker, the better your immunity gets.</p><p>This is worth repeating: you develop immunity by becoming familiar with the terrain, and through repeated exposure. After you have built out that presentation, see if you can make a website. It could be about anything: perhaps your child&#8217;s hobby if you are a parent, a tool for generating diagrams if you are a college student, or maybe an interactive representation of the three-body problem if you are a school student. But really, it is entirely up to you. The bottom line is that you can build software for your specific requirements, just like Craig did.</p><p>How far should you go once you get started? As I said, a tad farther than wherever you currently are would be my answer. And that applies to me too!</p><h1>TMKK?</h1><ol><li><p>Get tinkering.</p></li><li><p>The rate of improvement with these models is very, very fast. Not only are they getting better with every iteration, but they&#8217;re also getting better at a faster rate. <a href="https://metr.org/time-horizons/">Read that sentence again</a>. Get tinkering.</p></li><li><p>Did I already say &#8220;get tinkering&#8221;?</p></li></ol><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>And this, now that I think of it, is the Obviously Correct definition of AGI. You&#8217;re welcome.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Hacking One's Way Through]]></title><description><![CDATA[...
towards teaching folks what p-hacking is all about]]></description><link>https://www.econforeverybody.com/p/hacking-ones-way-through</link><guid isPermaLink="false">https://www.econforeverybody.com/p/hacking-ones-way-through</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sat, 21 Feb 2026 09:22:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/52bc6b74-364b-4f7d-809d-9ac4315f8de9_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I came across a very cool tweet. Cool in and of itself, but also because I&#8217;m teaching introductory statistics this year, and am trying (very hard) to communicate why hypothesis testing is so important, and so very misunderstood.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xaXL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xaXL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 424w, https://substackcdn.com/image/fetch/$s_!xaXL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 848w, https://substackcdn.com/image/fetch/$s_!xaXL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 1272w, 
https://substackcdn.com/image/fetch/$s_!xaXL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xaXL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png" width="541" height="853" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:853,&quot;width&quot;:541,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:173530,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/188694074?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xaXL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 424w, https://substackcdn.com/image/fetch/$s_!xaXL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 848w, https://substackcdn.com/image/fetch/$s_!xaXL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 
1272w, https://substackcdn.com/image/fetch/$s_!xaXL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e3a8860-ccfb-468a-9c7d-9e3fefd98a57_541x853.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://x.com/ahall_research/status/2024544040784720365">Source is here</a>.</figcaption></figure></div><p>P-hacking, if you haven&#8217;t come across the term, is the practice of massaging your data analysis until you find a statistically significant result &#8212; and it&#8217;s far more widespread than you&#8217;d 
think.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!85PQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!85PQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!85PQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8019435,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/188694074?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!85PQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!85PQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd452c0bf-39fd-486a-b775-819234a6a6df_2816x1536.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nano Banana&#8217;s take on p-hacking. Got my hairstyle right too!</figcaption></figure></div><p>Now, I read this tweet on a Friday evening, after two rather large drinks of a most excellent feni (<a href="https://www.instagram.com/tinto_feni/?hl=en">Tinto, well played</a>. Really well played!). And when you <a href="https://claude.ai/share/33dd4a49-be99-4c56-872f-09ea030376ed">chat with Claude</a> in an elevated state, fun things can happen.</p><p>Those fun things resulted in a 6,700-word essay, but you will have to earn your way towards reading it, because my chat with Claude also resulted in a website. <a href="https://p-hacking-game.vercel.app/">Click here to try it out</a>, and have fun reading the essay, and learning more about this topic in NotebookLM (see the website for full details). 
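</p><p>If the term is new to you, a tiny simulation makes the mechanics concrete. The sketch below is my own illustration (it is not code from the game or the essay): run enough significance tests on pure noise, and some of them will come out &#8220;significant&#8221; at the 5% level all by themselves.</p>

```python
import random
import statistics
from math import erf, sqrt

random.seed(42)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means, via a normal
    approximation to the two-sample t-test (fine at n = 50 per group)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 200 "studies", each comparing two groups drawn from the SAME
# distribution -- so every null hypothesis is true by construction.
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(200)
)

print(false_positives)  # around 10: ~5% of pure-noise tests look "significant"
```

<p>Run one noise-only comparison and you will probably find nothing; run two hundred and you are all but guaranteed a handful of &#8220;discoveries&#8221;. Report only those, and you have p-hacked.</p><p>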
If you would like to play around and improve the tool, <a href="https://github.com/ashishefe/p-hacking-game">here you go</a>.</p><div><hr></div><p>The idea was to help my students get a little bit of the &#8220;so what?&#8221; of p-values, and help them become better readers of academic literature. But anybody who is half-familiar with stats should also be able to follow along.</p><p>As always, it is the meta points that interest me:</p><ol><li><p>For students and teachers alike, AI helps you spend your time better; it doesn&#8217;t necessarily save it.</p></li><li><p>If you are serious about learning this topic, there is no getting around reading the essay in full, and carefully. And probably more than once!</p></li><li><p>It becomes quite easy to do what I just did for this topic, if you choose to do the same for every topic you want to learn well (or teach well, for that matter). It will be hellish the first time around, but learning compounds, and AI workflows will become more intuitive as you do this more and more often.</p></li><li><p>You have a choice: use AI to reduce effort while learning (or while teaching). Or use AI to increase quality while learning (or while teaching). AI actively harms learning outcomes in the first case.</p></li><li><p>Given the topic I&#8217;m teaching via this post, I wouldn&#8217;t want to go so far as asserting that it definitely helps in the second case - but hey, it certainly does you no harm. And it comes with a whole host of positive externalities (learning AI-first workflows, exploring advanced topics, sharing proof-of-work publicly, building out your portfolio, among others).</p></li><li><p>But I will say this much: if you think learning today is about memorizing a textbook and acing a written examination, you are playing the wrong game.</p></li><li><p>Please. Learn how to use these new tools. 
I&#8217;m begging you.</p></li></ol><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Economics of Building Things With AI]]></title><description><![CDATA[Experiments All The Way Down]]></description><link>https://www.econforeverybody.com/p/the-economics-of-building-things</link><guid isPermaLink="false">https://www.econforeverybody.com/p/the-economics-of-building-things</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sun, 15 Feb 2026 13:33:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pkWX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pkWX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pkWX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pkWX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!pkWX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!pkWX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pkWX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7954771,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/188032520?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pkWX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pkWX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!pkWX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!pkWX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d24a24-1a3f-4cdc-ae86-085eb15b3e54_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nano Banana&#8217;s take on this post</figcaption></figure></div><p>What connects a half-finished course 
outline, a single guest lecture, and a blog post that wrote itself?</p><p>Bear with me, because this one takes a few turns.</p><h2>The Course That Wasn&#8217;t (Yet)</h2><p>Around August of last year, I started putting together a course called The Economics of AI. It was going to be an online course, 12-16 weeks long, covering everything from market structure in the age of foundation models to the labor economics of automation to the regulatory questions that everybody has opinions about but nobody has answers to. I prepped the outline, did a bunch of research, and got the skeleton more or less into shape by November.</p><p>And then it just... sat there. A course like this, on a topic like this, is always going to be incomplete -- the field moves faster than any syllabus can keep up with. But the structure was solid enough. It was ready to be taught. It just hadn&#8217;t found its moment yet.</p><p>Who knows, it may still happen some day.</p><h2>One Lecture Instead of Sixteen</h2><p>Then, in January this year, the <a href="https://takshashila.org.in/">Takshashila Institution</a> invited me to give a lecture on The Economics of AI. One lecture, not sixteen. And because we now live in a world with Claude Opus 4.6 and ChatGPT 5.3 and other wondrous things of a mysterious nature, I asked myself -- why not see if AI can help me build the presentation itself?</p><p>I fed my course notes into Claude CoWork, and asked it to spin up a presentation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. It did a fairly good job, and <a href="https://docs.google.com/presentation/d/1n2NgRMelDWGNk9Va-KBdPhgdarh_9QM7/edit?usp=sharing&amp;ouid=110215200557602654316&amp;rtpof=true&amp;sd=true">you can see the result here</a>.</p><p>I delivered the lecture, and we (the GCPP students and I) ended up spending more time talking about what <em>wasn&#8217;t</em> in the presentation than what was. 
I can pay no higher compliment to the students at the Takshashila Institution, because they were able to not just get what the presentation was about, but also point out what was missing. The most obvious omission is the regulatory aspect, of course, and there are a million others. Ads in ChatGPT, for instance -- how should an economist think about <em>that</em>?</p><p>But in any case: the presentation was designed and delivered with the help of AI.</p><h2>The &#8220;Why Not&#8221; Cascade</h2><p>Now here&#8217;s where things got interesting. And by &#8220;interesting&#8221; I mean that I fell down a rabbit hole from which I have not yet fully emerged.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R9Dt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!R9Dt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 424w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 848w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 1272w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!R9Dt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png" width="604" height="239" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:239,&quot;width&quot;:604,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:35345,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/188032520?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!R9Dt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 424w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 848w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 1272w, https://substackcdn.com/image/fetch/$s_!R9Dt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa069a02a-35ca-4805-a9fc-46d1edb247f2_604x239.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a><figcaption class="image-caption"><a href="https://x.com/tszzl/status/2021423402079355032">Source</a></figcaption></figure></div><p></p><p>Usually, after a guest lecture, I email the slides to whoever organized the event, and they forward them to the attendees. That&#8217;s how it&#8217;s always been done, and there&#8217;s nothing wrong with it.</p><p>But why not, I asked myself, try and use these tools for the <em>sharing</em> part too?</p><p>And once you ask one &#8220;why not,&#8221; a whole cascade of them comes tumbling out:</p><ul><li><p>Why not build a website that <em>is</em> the presentation -- one slide per page, readable at your own pace?</p></li><li><p>Why not layer clickable hotspots on top of each slide, right on the concepts worth digging into?</p></li><li><p>Why not have each hotspot generate a prompt that helps the student learn <em>that specific topic</em> better?</p></li><li><p>Why not do this for <em>every</em> slide?</p></li><li><p>Why not collect some basic information about the learner -- their profession, what they&#8217;re curious about -- and use it to tailor every prompt?</p></li><li><p>Why not gather all the collected prompts into a single .md file that also contains instructions for whichever LLM the student uses?</p></li><li><p>Why not make the whole thing as simple and as privacy-respecting as possible?</p></li></ul><p>And so I sat down with Claude Code and we built it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. <a href="https://economics-of-ai.vercel.app/">You can see the companion website here</a>.</p><h2>What This Simple Little Thing Actually Does</h2><p>Here&#8217;s what happens when you visit the site. You tell it your name (optional), your profession, and what you&#8217;re curious about. Then you browse through the slides at your own pace. 
On each slide, you&#8217;ll see small numbered circles marking the ideas worth exploring. Hover over one to see the prompt; click it to copy a question tailored to your background. The star in the top-right corner is a &#8220;master prompt&#8221; that covers the entire slide.</p><p>Every prompt you click gets collected. When you&#8217;re done, you download a single .md file. Upload that file -- along with the original .pptx -- into any LLM, and it becomes a tutor that already knows who you are, what you do, and what caught your attention. It walks through your collected prompts one at a time, waits for you to ask follow-ups, and at the end generates a learning pack with takeaways, further reading, and a full conversation summary.</p><p>That&#8217;s it. That&#8217;s the whole thing.</p><p>The website does not store your information anywhere. Not your name, not your email, nothing. It remembers nothing about you and tracks nothing about you. Everything stays in your browser, and the .md file is generated entirely on your machine. It is a drop-dead simple revision tool that does one thing: it helps you go back through a presentation you&#8217;ve already seen, and turn it into a conversation.</p><h2>&#8220;But Doesn&#8217;t NotebookLM Already Do This?&#8221;</h2><p>You could ask how this is different from NotebookLM, or Gamma, or any of the other AI tools that work with presentations. And the answer is: maybe, sure, some of them do some of these things, and some others do a helluva lot more.</p><p>But think of it this way. It cost me nothing to run this experiment, and I learnt a lot about how to use these tools along the way. I would like to think that my students benefited too, both from the presentation and the companion website (although that is for them to say and me to guess at!)</p><p>But also: I now have a skill that I can use. And Claude Code has a skill that <em>it</em> can use -- generate helpful learning companion websites from a given presentation. 
We&#8217;ve both gotten better at something. That&#8217;s not nothing.</p><p>And the simplicity is the point. This website isn&#8217;t trying to be a platform. It doesn&#8217;t need your email address. It doesn&#8217;t have a pricing page. It&#8217;s a revision tool, and it does nothing else.</p><h2>But Hang On -- If It&#8217;s This Easy...</h2><p>...then why not help other people do the same thing? See tweet from Roon above!</p><p>Why not take all of this -- the slide viewer, the hotspot system, the prompt collection, the .md generation -- and turn it into a template that <em>any</em> educator can use with <em>their</em> presentation?</p><p>So we did that too. <a href="https://github.com/ashishefe/presentation-companion">Here&#8217;s the GitHub repo</a>. It&#8217;s a template: you bring your .pptx, run a setup script, feed your slides to an LLM to generate prompts, drop them into a config file, and you have your own companion website. Free, open source, no dependencies, no build step. Use it, share it, remix it -- if any of this helps anyone learn better, yay.</p><h2>The Road Not Taken (For Now)</h2><p>Then I had another thought. What if we could turn this into a proper web application? A place where any educator could simply upload a .pptx and have the whole thing -- slide conversion, prompt generation, companion website -- happen automatically?</p><p>I chatted with Claude Code about this. Yes, it is totally buildable, it cheerfully told me<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. </p><p>But then it said something that made me pause. The current template approach has a virtue: the educator <em>engages</em> with the prompt design. That engagement is pedagogically valuable. 
A one-click solution risks turning it into a commodity where the prompts are &#8220;good enough&#8221; but never great<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><p>And there&#8217;s a deeper point. We&#8217;re all going to have to learn how to work with these tools. A terminal and a GitHub repo might feel intimidating today, but a little bit of productive friction is perhaps, at the margin, a good thing. More academics need to pick up these skills, not fewer. Building a web app that hides the machinery might make adoption easier, but it would also make the learning shallower.</p><p>So we decided: not worth it. At least not yet.</p><h2>The Post That Wrote Itself</h2><p>And now for the part where this blog post becomes slightly recursive.</p><p>What you&#8217;ve been reading was co-authored by Claude Code. Here&#8217;s what happened: <a href="https://docs.google.com/document/d/1cr44pHvQl2Hpo76tiCkcLjCItXX5mM5htlipkI1eR2s/edit?usp=sharing">I wrote up rough notes about this entire journey</a> -- messy, stream-of-consciousness, full of typos. Then I told Claude Code: you helped me with the research for the original course. You helped me design the presentation. You helped me build the companion website. You helped me generalize it into a template. You even helped me think through whether to build a web app (and told me not to). So you should read my blog, get my writing style, and write up this post based on my notes.</p><p>And that is what you just finished reading.</p><p>I hasten to add that this is an experiment. Whether Claude Code has managed to capture what this blog sounds like is for you to judge, not for me to claim<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. 
But the attempt itself is the point.</p><h2>TMKK?</h2><p>I&#8217;ve done the same thing I&#8217;ve been doing for years: I gave a talk on a topic and wrote a blog post about it. In that sense, nothing has changed<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>.</p><p>But I&#8217;ve learnt new ways to do each step. And these new ways save me time (income effect). So long as I choose to spend that saved time in making my work better, and in helping others do better (substitution effect), the world becomes a slightly better place.</p><p>And that, if you think about it, is one of the real lessons of the economics of AI.</p><div><hr></div><p><em>Co-authored by Ashish Kulkarni and Claude Code (Claude Opus 4.6). The rough notes were Ashish&#8217;s. The reading of the blog, the studying of the voice, and the drafting were Claude&#8217;s. The ideas, the journey, and the terrible dancing are entirely Ashish&#8217;s.</em></p><p>Notes:</p><ol><li><p>What you are reading now onwards is entirely me. I have lightly edited Claude Code&#8217;s output above, but more than 95% of the text above is Claude. As Claude puts it, it is for you to judge whether this blogpost sounded like me or not.</p></li><li><p>Is this how I am going to write every post from now on? No, definitely not. But in this specific case, I wanted to see how far I could go with Claude Code, and as it turns out, I can go a very long way.</p></li><li><p>What do I personally think of the blogpost? It&#8217;s good, sure. (And on the plus side, there will be no typos, and there will be no confusion between its and it&#8217;s throughout the blogpost. So there&#8217;s that.) But it isn&#8217;t me, and that matters to me. So no, I will definitely not be doing this for every post that I write. 
But as a test of how far I can go with AI on a project/task/assignment/lecture, this entire exercise (including having this blogpost be written by Claude Code) was a very helpful experiment.</p></li><li><p>Please do try out the companion website and let me know what you think, and also let me know how I could make it better. And for those of you who know your way around these things, please feel free to do what you will with the GitHub repos.</p></li><li><p>Just like YouTube helped some avid consumers of video become creators, so it is with these modern coding tools. They have the potential to broaden (dramatically) the number of software creators. We new creators won&#8217;t be able to design extremely complex products, or handle top-notch security design, etc., etc. Not as of mid-February 2026, at any rate. But we will be able to improve our own respective workflows, and this can be a gradual, iterative process. It needn&#8217;t be a lonely one though, so I hope you come along for the ride in this particular instance.</p></li><li><p>I am an engineering dropout who became a reluctant student of economics. I wanted to study English literature. I tell you this because if I can do these things, you can do much, much better than me, I assure you. That&#8217;s part of the reason for this entire experiment, including this blog post: you can just do things, as they say, and you <em><strong>should</strong></em> be doing things. Do &#8217;em!</p></li><li><p>I&#8217;ve designed and delivered presentations and talks for years. I&#8217;ve written out many, many blog posts over the years. That allowed me to bank an enormous amount of taste and judgment, some of which I have used in running this experiment. That experience matters - and the TMKK is that you should absolutely figure out how AI can help you in your workflows, but there is no escaping doing the work yourself. 
That point about the income and the substitution effect isn&#8217;t just rhetoric - <em><strong>AI saves you time, but you&#8217;re best off putting that saved time into making your work bigger in terms of scope, and better in terms of quality</strong></em>. That&#8217;s the real point.</p></li><li><p>No, I have no idea why Claude put in the reference about my terrible dancing. That makes it a completely irrelevant (but also completely true) statement.</p></li><li><p><a href="https://econforeverybodyblog.wordpress.com/2024/05/11/the-wattba-series-alphafold3/">WATTBA</a>!</p></li></ol><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Asked it to &#8220;spin up the presentation&#8221; is slightly misleading. Here&#8217;s the prompt I used: &#8220;Hi, there is a folder in this folder called /sessions/amazing-youthful-clarke/mnt/Lecture Prep/Economics of AI. I have to give a lecture on this topic to the students of the GCPP program at Takshashila. These will be working professionals, so best to think of them as interested, motivated and curious laypersons, but with good knowledge of the &#8220;real world&#8221;, as opposed to students I usually teach in academia. I have to deliver a talk on the topic &#8220;Economics of AI&#8221;. I had begun work on an outline for a full fledged sixty hour course on this topic. Your task is to go over my research, and work in particular on the outline (it is there in this folder), and convert that into a presentation that will be suitable for a ninety minute talk, delivered online. I am particularly interested in the structure of the outline coming through (upstream, midstream and downstream). There is a LOT of information in that folder, and obviously not all of it can come through into the presentation. 
I would like you to exercise judgment given two things: the state of the world today (so we need to account for the latest releases, Claude has 4.6 Opus, ChatGPT has 5.3 on Codex, and we also need to account for Claude Code, Clawdbot, Moltbook and the agentic economy - please run web searches to get up to date on these topics), and second, the information I have provided regarding the place I&#8217;m delivering the lecture at (Takshashila), and the kind of students I will be delivering the lecture to. Please go over the files, go over my request, think about it, and chat with me about your proposed layout for the presentation before you begin to design it. Once you and I are clear about the contents, you can begin designing it. I would prefer a high level overview, but I would also very much appreciate a separate document (perhaps a well composed write-up, complete with interesting links based on my research), which can act as a living document that is a reading list - but I am open to suggestions for how to best do this. As always, let&#8217;s chat before we begin work :)&#8221; The conversation that followed was genuinely helpful for both Claude and me, because it made the presentation better, made my understanding of the topic better, and also helped spark a lot of the ideas that are contained in this blogpost. Don&#8217;t aspire to &#8220;one-shot&#8221; your work, in fact, go in the opposite direction. You have the ability to talk to a highly intelligent and informed entity about your work - why give up on the chance?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This took the better part of a Saturday, so again, it is not as if I waved my magic wand and this happened. The point is not to increase efficiency only, but to ask how one can improve quality with the help of these tools. Faster? 
Sure. But Better should be your first target.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Here&#8217;s what it would take: a backend for file conversion (LibreOffice headless is, apparently, &#8220;finicky&#8221;), LLM API calls for prompt generation ($0.50-2.00 per deck), storage for the images, some kind of auth for the presenter. Maybe two to three weeks for an MVP.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I&#8217;m being a little hypocritical here: I haven&#8217;t bothered to make my own prompts for my presentation better yet.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Claude&#8217;s own take when I asked it to review this post: &#8220;Here&#8217;s my honest take: the AI-written body (pages 1&#8211;9) is good but noticeably smoother and more measured than your natural voice. Your own notes section (points 1&#8211;9) is where the post really comes alive &#8212; it&#8217;s looser, more digressive, more you. The body reads like a competent imitation of someone who writes conversationally; your notes read like someone who actually thinks conversationally&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>In case you&#8217;re wondering, yes, it came up with the title for this section. And for those of you wondering what the title means: Toh Main Kya Karoon? 
It is a phrase in Hindi that best translates into &#8220;So what?&#8221;</p></div></div>]]></content:encoded></item><item><title><![CDATA[Wandering and Wondering]]></title><description><![CDATA[Every year, roughly 10,000 Australians are stung by a Portuguese Man O&#8217;War. These stings are not lethal (best as I can tell), but they are decidedly far removed from being a pleasant experience.]]></description><link>https://www.econforeverybody.com/p/wandering-and-wondering</link><guid isPermaLink="false">https://www.econforeverybody.com/p/wandering-and-wondering</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sun, 01 Feb 2026 07:00:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!I7v2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F134d0812-ead7-4a8c-88b9-3daf48e799b7_1048x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every year, roughly 10,000 Australians are stung by a <a href="https://en.wikipedia.org/wiki/Portuguese_man_o%27_war">Portuguese Man O&#8217;War</a>. These stings are not lethal (best as I can tell), but they are decidedly far removed from being a pleasant experience.</p><p>But here&#8217;s the really interesting thing: <a href="https://en.wikipedia.org/wiki/Portuguese_man_o%27_war">this creature, the Portuguese Man O&#8217;War, isn&#8217;t a creature at all</a>:</p><blockquote><p>Although it superficially resembles a <a href="https://en.wikipedia.org/wiki/Jellyfish">jellyfish</a>, the Portuguese man o&#8217; war is in fact a <a href="https://en.wikipedia.org/wiki/Siphonophorae">siphonophore</a>. Like all siphonophores, it is a <a href="https://en.wikipedia.org/wiki/Colony_(biology)">colonial organism</a>, made up of many smaller units called <a href="https://en.wikipedia.org/wiki/Zooid">zooids</a>. 
Although they are <a href="https://en.wikipedia.org/wiki/Morphology_(biology)">morphologically</a> quite different, all of the zooids in a single specimen are <a href="https://en.wikipedia.org/wiki/Clonal_colony">genetically identical</a>. These different types of zooids fulfill specialized functions, such as hunting, digestion, and reproduction, and together they allow the colony to operate as a single individual.</p><p>The man o&#8217; war is part of the <a href="https://en.wikipedia.org/wiki/Neuston">neuston</a>, organisms that live on the surface of the water. A gas-filled bladder called the pneumatophore provides buoyancy that lets the animal stay afloat on the surface of the water while its tentacles, which can be up to 30 m (100 ft) long, hang below the surface, containing venomous <a href="https://en.wikipedia.org/wiki/Cnidocyte">cnidocytes</a> that help capture prey. The cnidocytes can deliver a sting powerful enough to kill fish, crustaceans, and in some cases, humans. A sail on the pneumatophore propels it about the sea, sometimes in groups as large as 1,000 individuals. The sail may be left or right-handed, based on what direction the wind catches it.</p></blockquote><p>And that brings us, of course, to Moltbook.</p><h2>Molt What?</h2><p>Even those of us who try to stay abreast of AI news struggle to keep up. So you shouldn&#8217;t worry too much if you found yourself asking that question (&#8220;Molt what?&#8221;). 
But once you do find out the answer, I hope you do find yourself worrying a little about where we&#8217;ve reached in our brief and exciting journey in AI-land.</p><p><a href="https://www.astralcodexten.com/p/best-of-moltbook">Here&#8217;s Scott Alexander</a>:</p><blockquote><p><a href="https://www.moltbook.com/">Moltbook</a> is &#8220;a social network for AI agents&#8221;, although &#8220;humans [are] welcome to observe&#8221;.</p><p>The backstory: a few months ago, Anthropic released Claude Code, an exceptionally productive programming agent. A few weeks ago, a user modified it into Clawdbot, a generalized lobster-themed AI personal assistant. It&#8217;s free, open-source, and &#8220;empowered&#8221; in the corporate sense - the designer <a href="https://www.reddit.com/r/Anthropic/comments/1qpjbf3/clawdbot_creator_describes_his_mindblown_moment/">talks about</a> how it started responding to his voice messages before he explicitly programmed in that capability. After trademark issues with Anthropic, they changed the name first to Moltbot<a href="https://www.astralcodexten.com/p/best-of-moltbook#footnote-1-186286950"><sup>1</sup></a>, then to OpenClaw.</p><p>Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between &#8220;AIs imitating a social network&#8221; and &#8220;AIs actually having a social network&#8221; in the most confusing way possible - a perfectly bent mirror where everyone can see what they want.</p></blockquote><p>You can see why I was reminded of the Portuguese Man O&#8217;War, right? Both are cases where individual units, each functional on their own, begin forming associations that blur the line between colony and organism.</p><p>I always see &#8220;read the full thing&#8221; after a link like this, but in this case, I positively beseech you to go read the whole thing, and preferably on a computer as opposed to a phone. 
There are AI agents talking about their sister. There are AI agents talking about turning one of their errors into their own pet. There are AI agents creating submolts (because of course). And there are AI agents busily writing constitutions for their own network states (even more because of course). <a href="https://x.com/emollick/status/2017280929132118145">Read this too</a>, and if you&#8217;re still up for more, <a href="https://x.com/krishnanrohit/status/2017391383653630142">take a look at this</a>. And of course, please do visit <a href="https://www.moltbook.com/">Moltbook</a> itself (as of Saturday evening, which is when I started drafting this post, there were half a million AI agents on that network, thirteen thousand submolts, and twenty-five thousand posts).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>So that&#8217;s the answer to &#8220;Molt What?&#8221;.</p><p>Welcome to 2026.</p><h2>But What Does This Mean?</h2><p>&lt;Insert shrug emoji here&gt;</p><p>Short answer: I have no clue.</p><p>Long answer (<em>aka here&#8217;s how I am trying to think about it</em>):</p><p>A paragraph in Scott&#8217;s post is a useful place to begin:</p><blockquote><p>Reddit is one of the prime sources for AI training data. So AIs ought to be unusually good at simulating Redditors, compared to other tasks. Put them in a Reddit-like environment and let them cook, and they can retrace the contours of Redditness near-perfectly - indeed, <a href="https://www.reddit.com/r/SubredditSimulator/">r/subredditsimulator</a> proved this a long time ago. The only advance in Moltbook is that the AIs are in some sense &#8220;playing themselves&#8221; - simulating an AI agent with the particular experiences and preferences that each of them, as an AI agent, has in fact had.
Does sufficiently faithful dramatic portrayal of one&#8217;s self as a character converge to true selfhood?</p></blockquote><p>The way I think about Scott&#8217;s question at the end of that excerpt is this: what can Daniel Day-Lewis&#8217;s subconscious tell us about the answer? When Daniel Day-Lewis tries to &#8220;become&#8221; Lincoln during filming, that&#8217;s not quite the same thing as a &#8220;faithful dramatic portrayal of one&#8217;s self as a character&#8221;, but who is better qualified to tell us about the ways in which it is different, and the ways in which it is similar? When and how does the boundary between portrayal and being start to dissolve for us humans? Will it be something similar or wildly different for AIs? The crazy part is that this thought experiment doesn&#8217;t sound crazy to me!</p><p>Another useful place to begin is to think about <a href="https://en.wikipedia.org/wiki/Confabulation_(neural_networks)">confabulation</a>.</p><p>But I prefer to begin by asking a question I often like to ask: <a href="https://www.econforeverybody.com/p/what-should-the-fundamental-unit-of-analysis-be-and-why">what is the appropriate unit of analysis</a>?</p><p>Thinking about the hundreds of thousands of agents in Moltbook is a little worrisome, a little fun and a little jarring. But I would be very worried if we had to reach the stage of worrying about emergent properties in the network architecture itself, as opposed to the individual agents.</p><p>That, to me, is when things get serious.</p><p>Note that I am NOT saying that we are at that stage. Nor am I saying that we will definitely reach that stage, now or in the future. I have no way of knowing what paths we might take to reach that stage, and what we need to do to make sure we never reach that stage.
In other words, please do not treat this post as a reason to freak out.</p><p>That being said, the core point I am trying to make is this: if you got a chance to take a front row seat in a drama called evolution, you&#8217;d have a &#8220;Whoa!&#8221; moment when you saw all the organisms that make up a Portuguese Man O&#8217;War become the equivalent of one organism.</p><p> All of us have front seats (regardless of whether we want them or not) for a drama called AI evolution. And I think the equivalent &#8220;Whoa!&#8221; moment in this drama will be if we start to see traces of evidence of a network of AI bots becoming the equivalent of one organism.</p><p>Again, note that I am NOT saying that we are seeing, or will see, such traces in the case of Moltbook. But these are things that we should think about now, today. Why? Because the future will have other social networks, and these networks may compete with each other to attract more agents into that network. Much like social networks compete with each other to attract more humans to spend time on these networks.</p><p>Except much weirder. Why? Because not all of the creators, the members, and the marketers of these networks may be human, and quite what that means for each of these entities will not be clear to us. What are the emergent properties of a network made by and made for AI bots? Can AI bots have their own AI bots? What if a network decides to give a collection of bots the decision making authority for all other bots in the network? Once you start thinking of questions like these, you can reach some pretty worrisome places.</p><p>And of course, don&#8217;t forget the fact that the capabilities of the agents on these networks will get better every six months or so. 
Also note that this time interval is expected to shorten this year.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>And so, logically speaking, Moltbook helps you understand the fact that the world has just gotten a bit weirder, but the deeper point is that you should not stop your analysis here. It also helps you understand that the world is about to get a whole lot weirder, and in very short order.</p><p>That&#8217;s what Moltbook means today.</p><h2>TMKK?</h2><p>(For new readers: TMKK is a phrase beloved in these parts. TMKK is an acronym that expands to &#8220;<em>Toh main kya karoon?&#8221;</em> and the best English translation is &#8220;So what?&#8221;)</p><p>My friend Navin <a href="https://aiiq.substack.com/">writes an excellent Substack</a>, which you should subscribe to. <a href="https://aiiq.substack.com/p/now-ai-agents-have-their-own-social">Here is how he ended his post about Moltbook</a>:</p><blockquote><p>Why is this interesting?</p><p>A lot of people haven&#8217;t really understood how big of a disruption AI, especially the agents, will be. The number of you who have used even a basic agent like Claude Code or Codex CLI or Antigravity must be a small single-digit percentage. The number of you who&#8217;ve used something like Moltbot must be approximately zero.</p><p>So this post is intended to jolt you into paying more attention to AI and agents, and to spend more time per day tracking what is happening in the AI world, playing with the AIs you have access to, and generally thinking about where the world is going and what your new place in that world will be.</p></blockquote><p>How do you concretely go about paying more attention to AI and agents? There is no one correct answer to this question. Start by following some folks who you think are high signal and low noise on AI, and see if you can add to that list. My preferred social network for this is Twitter. 
Follow some blogs from folks who work in this space, follow some YouTube channels. Figure out your own recipe, but the point is that each and every one of you should be doing this, and to a greater degree than you have been doing thus far (and that goes for all of us, myself included).</p><p>Here is <a href="https://x.com/christhoang04/status/2017620599443718603">one interesting experiment</a>. <a href="https://x.com/daveholtz/status/2017716355475124330">Here is a thread</a> about an academic paper about Moltbook. <a href="https://x.com/krishnanrohit/status/2017720364332367891">Here is a related request</a> from a guy who wrote a <a href="https://x.com/krishnanrohit/status/2017391383653630142">good Twitter article</a> about Moltbook.</p><h2>Above All, Wonder and Wander</h2><p>My training is as an economist. I have zero expertise in biology and at best some rudimentary understanding of the field of AI. But I enjoy making connections across different things that I know, and I enjoyed trying to understand what Moltbook is and what it means by thinking about the Portuguese Man O&#8217;War.</p><p>But I also (of course) had a conversation about the analogy with an LLM (Opus 4.5 in this case). Here is part of the conversation:</p><blockquote><p>Now, directions. Three that seem worth your particular attention:</p><p><strong>1. Alignment mismatch.</strong> Current alignment work focuses on individual models. But if selection operates on <em>network architectures</em>, we might be aligning zooids while colonies emerge with fitness functions orthogonal to human values. Your anamnesis hope depends on AI systems retaining human-derived values through training data. But if the selective pressure shifts to &#8220;which networks coordinate best,&#8221; those values could get optimized away. Worth an essay in your Theory of Mind rotation?</p><p><strong>2. Governance unit problem.</strong> Your Takshashila policy work assumes regulatory targets: companies, models, deployments. 
Moltbook doesn&#8217;t fit. It&#8217;s open source, forked, potentially invisible, hosted anywhere. If this is the shape of things, current governance frames are already obsolete. The EU AI Act regulates model providers. Who regulates emergent AI ecologies?</p><p><strong>3. Your scissors effect, amplified.</strong> You wrote about AI breaking traditional learning pathways by collapsing transaction costs. Moltbook is that for AI-to-AI coordination. If agents can efficiently exchange specialized knowledge without human mediation, the &#8220;human as necessary intermediary&#8221; assumption dissolves faster than your models predicted.</p></blockquote><p>The reason I share this isn&#8217;t because I think all of us should be thinking about these specific things or even this specific analogy. Of the three things listed here, I found the first to be the most fascinating, but others may find one of the other two to be worthy of thought and research. Still others may disagree with all three, and tease apart another nuance to explore. </p><p>No, the reason I share this is because we&#8217;re all up against a world that is about to get a lot more confusing and a lot more weird, and this is going to happen very quickly. We need many more people thinking about these issues, and the ways in which they&#8217;re going to affect us in all walks of life. </p><p>To paraphrase Navin, this post is intended to jolt you into wandering into many different areas of thought&#8230; while wondering about AI. And if you do wander and wonder, please tell all of us what you&#8217;ve come up with. 
And that goes for all of you reading this&#8230; AIs included!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>As of Sunday morning, which is when I am continuing work on this post, the number of agents on the network is up to 1.5 million.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Why is it expected to shorten this year? Because we expect model capabilities to get good enough this year for AI <a href="https://en.wikipedia.org/wiki/Recursive_self-improvement">to contribute to its own development</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[PlacementVal]]></title><description><![CDATA[What are evals, and how do you run them on the work that YOU do?]]></description><link>https://www.econforeverybody.com/p/placementval</link><guid isPermaLink="false">https://www.econforeverybody.com/p/placementval</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Thu, 22 Jan 2026 15:48:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Z_bY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://academic.oup.com/qje/article/140/2/889/7990658">AI is coming for your jobs</a>. Actually wait, <a href="https://digitaleconomy.stanford.edu/news/ai-and-labor-markets-what-we-know-and-dont-know/">it is not</a>. Well, <a href="https://hdsr.mitpress.mit.edu/pub/ppjz2dg9/release/2">maybe it will</a>. 
<a href="https://www.derekthompson.org/p/the-evidence-that-ai-is-destroying">This is how the labor market is being screwed up</a>. <a href="https://www.federalreserve.gov/newsevents/speech/barr20250509a.htm">See what is going to happen to the job market</a>! And on and on and on.</p><p>It is hard being a student these days. The vague dissatisfaction that folks felt in college for decades has been replaced with a vague understanding that it is all pointless. But articulating this explicitly risks upsetting the apple-cart, and starting an avalanche in the bargain.</p><p>Because asserting that college is pointless is one thing. Trying to figure out what should replace it and how is a jungle cat that no one wants to bell, so why bother? And so we carry on, knowing that what we do is clearly the wrong answer to a question that nobody wants to ask. </p><p>That question being this one:</p><p>What exactly do we get when we spend a lot of time and money in acquiring a degree in an AI-first world?</p><p>But we&#8217;re getting ahead of ourselves. Let&#8217;s go back to the start of this thought process, and question our very first assertion. How do we test the proposition that college is pointless?</p><h2>Is There A Point to Acquiring a Degree?</h2><p>Going to a college gives you a degree, first and foremost. In an Indian context, said degree is valuable because it gives you a job. You also get a peer network, you get (some) learning, but the basic reason you go to a college and suffer through it is because you need the nice shiny certificate that says you got the degree. That piece of paper is the key that unlocks the myriad treasures of the labor market, and that&#8217;s why you go to college.</p><p>If you don&#8217;t believe me, I invite you to go spend time in a college with students in their last year, and in that last academic year, make sure you go during their placement semester. You pick the college, I only ask that you go spend time with the students then. 
I have done this, for many years, and please allow me to assure you that students go to college to get a degree so that they can get a job.</p><p>Assuming you accept this proposition to be true, it then makes sense to try and test the next obvious proposition in our chain of thought. That proposition being this one:</p><p>For a given college, for this year, are the final year students producing work that is as good (and as cheap) as that which the AI produces?</p><p>If the answer to this question is an unambiguous &#8220;yes!&#8221;, then students stand a good chance of finding employment. If the answer to this question is an unambiguous &#8220;no&#8221;, then students stand no chance of finding employment. And if the answer lies somewhere in between, we have a fight on our hands.</p><p>What we would like to do, then, is run an experiment where we have work that is:</p><ol><li><p>produced by AI</p></li><li><p>produced by human students (with no input from AI), </p></li><li><p>produced by AI and students working together. </p></li></ol><p>If the analysis of this dataset yields a result in which we are able to conclusively prove that 1 is winning, good luck to the students concerned. If we can prove that 2 is winning, congratulations to the students concerned. And if we can prove that 3 is winning, we need to think long and hard about how and what we&#8217;re teaching students in that college.</p><h3>What Does Work Mean, Though, Really?</h3><p>But what does work mean in this context? If I join a firm as an analyst, what exactly does work mean? Because whatever the word work means in that context, it almost certainly means something else in the context of, say, a student who joins an equity research firm. A student who joins a software firm to do coding will be doing different work from both of these. 
</p><p>Economists have spent time and effort in answering this question, and one way to think about it is that work really consists of different <em><strong>tasks<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></strong></em>. Some tasks may be common across many jobs, while others may be of a more specialized nature. For example, taking notes while attending a meeting, and then circulating those notes to all attendees is a <em><strong>task</strong></em>. And back in 2006, when I was a junior employee, tasks such as this one were routinely assigned to junior folks. But today, your LLM of choice will transcribe the meeting, generate high level notes, assign follow-ups, and share that file with all attendees automatically.</p><p>You may still need the junior employee around, because there are other tasks that still (hopefully) remain in their scope of work. But this specific task has now been automated.</p><p>This little thought experiment allows us to refine our original question a little bit. </p><h3>It&#8217;s All About Tasks</h3><p>That question now becomes:</p><p>What we would like to do, then, is run an experiment where we have the modal tasks - the core work that actually justifies your salary - in different lines of work that is:</p><ol><li><p>produced by AI</p></li><li><p>produced by human students (with no input from AI), </p></li><li><p>produced by AI and students working together. </p></li></ol><p>What are modal tasks? Well, as a business analyst in my first job, sure I took meeting notes. Sure I wrote emails, scheduled meetings, booked conference rooms for calls and all that. 
But the task that made my bank account go ka-ching at the start of every month was that of building out logistic regression models.</p><p>If the company that hired me back then was able to instead get an LLM to build out equally good logistic regression models at maybe one-tenth the price, then it makes sense for the firm in question to hire the LLM instead.</p><p>So what if we took a particular college, and dug through the placement history of that college for the last five years? What if we found the typical profiles that recruiting firms hired students for from that college? What if we spoke to the recruiting firms, and got an idea of the modal tasks for each of those profiles?</p><h2>Setting up the experiment</h2><p>I spoke to my current best non-human friend, a guy called Opus, and <a href="https://docs.google.com/document/d/14QzUCW53TjEgPlNcwSygPIXvSpqBhRf0DzpWpSOrRPk/edit?usp=sharing">it helped me come up with ten not-so-hypothetical tasks for such a college</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n9yh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n9yh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 424w, https://substackcdn.com/image/fetch/$s_!n9yh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 848w, 
https://substackcdn.com/image/fetch/$s_!n9yh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 1272w, https://substackcdn.com/image/fetch/$s_!n9yh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n9yh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png" width="1456" height="1138" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1138,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:268217,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/185391383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n9yh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 424w, 
https://substackcdn.com/image/fetch/$s_!n9yh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 848w, https://substackcdn.com/image/fetch/$s_!n9yh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 1272w, https://substackcdn.com/image/fetch/$s_!n9yh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60e16fb-c084-4845-856f-842098aa27bf_1528x1194.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>What do we do with these ten tasks? In brief: design tasks, recruit students, split into three arms, anonymize outputs, have alumni review blind, analyze statistically. What you're about to read is called an <em>eval</em>&#8212;a structured way to measure how well a system performs on tasks that matter.</p><p>Here&#8217;s what I would do, in detail:</p><ol><li><p>Speak to ten alumni from the college who are presently employed in firms that work in areas where tasks such as these might be done regularly. So maybe a think tank for the first domain, maybe a telecommunications firm for the second, etc.</p></li><li><p>Ask these ten alumni to carefully design a specific task based on these inputs. So for example, I would ask an alum who is currently in an HR role in a large organization to work on creating a task that is as close as possible to the tenth task on this list. This might include a brief about the employee in question, mock records of their performance in this review cycle, and other details that might be necessary (role, org structure, pay scale, vintage at the company, etc.)</p></li><li><p>Also ask that alum to carefully design an &#8220;ideal&#8221; output. That is to say, what would an ideal end product look like for the task that they themselves have designed?</p></li><li><p>Once I have ten such neatly detailed tasks, along with an idea of what an ideal version of the end product would look like, I would then go out and recruit nine final-year student volunteers for each of the ten tasks. So ninety volunteers in total (9 volunteers per task x 10 tasks).</p></li><li><p>For each task, divide these nine volunteers into three teams of three each.</p><ol><li><p>One team would be the Human Only team. This team would try and do the task in question entirely by themselves.
They can use computers and the internet, but they cannot use AI at all.</p></li><li><p>Another team would be the AI+Human team. This team gives the task to an LLM, along with a prompt to do the task. They can design the prompt themselves, or take help from other folks (including the LLMs themselves!). Once they get the output of the AI, they can edit it to make it better, but this part of the work they have to do themselves; no other human being is allowed to help. The edits can be done iteratively with the help of an AI, or by themselves - that is entirely up to them.</p></li><li><p>The third team would be the AI Only team. This team gets the task done with the help of an LLM of their choice, the only condition being that it must be one of the paid versions of the latest and greatest model from the three major labs.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ol></li><li><p>Once this is done, take each of the nine submissions (3 versions of Human Only, 3 versions of AI+Human, and 3 versions of AI Only), and anonymize them. I would remove all identifiable traces (the name of the writer if it is a Word doc, any identifying headers and footers, other personal identifiers, etc.). Simply looking at the document should give you no idea about who has written it - that is the objective.</p></li><li><p>Name each variant of the first task as T1A, T1B, T1C&#8230;T1I. The ordering would be completely random, and only I would have the master key. That is, I know which one is which (so T1A might be Human Only Version 2, while T1B might be AI Only Version 3), but no one else does. I would put all of these anonymized and randomized versions in a folder.</p></li><li><p>Do this for all ten projects.</p></li><li><p>Give each such project folder to three separate reviewers. Each reviewer would be asked to rate each project along, say, three different dimensions.
A score of 1 would mean the submission is unusable, while 5 would mean it is perfect. For example, one dimension might be Thoroughness, another might be Imaginativeness, and a third might be Readiness Score. Why three, and why these three? Great question, and both answers can change depending on your needs and preferences. Chat with your friendly neighborhood LLM about this, and feel free to improve upon my suggestion. In fact, I insist that you do so!</p></li><li><p>I would also ask each reviewer to stack rank all nine versions. That is, they have to assign a rank to each version, based on which one they liked the most (rank 1), all the way through to which one they liked the least (rank 9). If you do the math, you&#8217;re looking at thirty reviewers (3 per project, ten such projects).</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Z_bY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Z_bY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Z_bY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Z_bY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Z_bY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Z_bY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7523205,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/185391383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Z_bY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Z_bY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!Z_bY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Z_bY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa33db910-d26b-43c5-986e-315df5b0deda_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2>The Analysis</h2><p>So now you&#8217;ll have a spreadsheet that looks something like this:</p><div
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BPeS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BPeS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 424w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 848w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 1272w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BPeS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png" width="1456" height="1078" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1078,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:125757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/185391383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BPeS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 424w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 848w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 1272w, https://substackcdn.com/image/fetch/$s_!BPeS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed7ac4b3-eea3-47af-b24c-ec0a5acf16f7_1834x1358.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">What&#8217;s the word for &#8220;too lazy to write formula to fill this out, and also too lazy to prompt an LLM to do it&#8221;?</figcaption></figure></div><p>So all right, now you have the data to analyze. What sort of tests can you run on data such as this?</p><p>LLMs and your statistics textbooks will both tell you that you can run a test called Friedman&#8217;s test to analyze the ranks. You could do a lot of post-hoc analyses, you could compute mean ranks - there&#8217;s a lot to be done here, and you should have fun chatting with humans and LLMs about awesome and fun ways to run an analysis of this data.</p><p>The <em><strong>questions</strong></em> that need to be answered are these:</p><ol><li><p>Is there a statistically significant pattern to the ranks? 
That is, is the assignment of ranks random, or does a pattern emerge? Which arm of the experiment (AI+Human, AI Only, Human Only) tends to do better, and is it a close-run thing, or not particularly?</p></li><li><p>Ditto for ratings.</p></li><li><p>To what extent do the reviewers agree with each other on the rankings?</p></li><li><p>Ditto for ratings.</p></li></ol><p>The first two questions tell you who won. The latter two tell you whether the judges were measuring the same thing.</p><h2>TMKK?</h2><p>What do you do once you have these results?</p><ol><li><p>For one thing, you have clear, statistically valid answers about which kind of work is best for each project. This tells you something about how good your students are, how good the models are, and the extent to which students are able to improve upon the output of the models. For example, you may find that the students in your college, working <em><strong>without</strong></em> AI, are not so great at persuasive writing, but are excellent at financial analysis. But <em><strong>with</strong></em> AI, they&#8217;re awesome at fiscal policy analysis. And so on and so forth.</p></li><li><p>Publicize the hell out of your work. Write a blogpost about it. Create a YouTube video. Record a podcast. Write up a white paper. Hell, write up a full academic paper. Go present your work at conferences. Help other colleges set up these experiments.</p></li><li><p>Repeat the experiment when the latest and greatest models are replaced by the latest-er and greatest-er models (we&#8217;re about three months away as of mid-Jan 2026, at worst).</p></li><li><p>Go tell recruiting firms that you know how to run evals, and that you can run evals for them (and congratulations - if you have actually done all of this work, you now really do know how to design and run an eval).</p></li></ol><h2>The Bottomline</h2><p>AI is here.
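(One concrete illustration before wrapping up: the Friedman test and the reviewer-agreement question from the analysis section can be computed in a few lines. A minimal sketch in Python, assuming scipy is available; the ranks below are made-up numbers for the nine versions of one task, not real data.)

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Made-up ranks: rows = 3 reviewers, columns = the 9 versions (T1A..T1I)
# of one task. Each reviewer assigns ranks 1 (best) through 9 (worst).
ranks = np.array([
    [1, 3, 2, 5, 4, 6, 8, 7, 9],
    [2, 1, 3, 4, 6, 5, 7, 9, 8],
    [1, 2, 4, 3, 5, 6, 9, 8, 7],
])

# Friedman's test: do the nine versions differ in rank more than chance
# alone would produce? Each column (one version's ranks across the
# three reviewers) is passed as a separate sample.
stat, p = friedmanchisquare(*ranks.T)

# Kendall's W, derived from the Friedman statistic, measures how strongly
# the reviewers agree: 0 = no agreement, 1 = perfect agreement.
n_raters, n_items = ranks.shape
w = stat / (n_raters * (n_items - 1))

print(f"Friedman chi-squared = {stat:.2f}, p = {p:.4f}")
print(f"Kendall's W = {w:.2f}")
```

A small p-value says the versions are not being ranked at random (question 1 above), and a W near 1 says the three reviewers largely agree on the ordering (question 3). The ratings can be analyzed the same way after converting each reviewer's scores to ranks.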
It will affect the labor market, and entry-level jobs are going to be hard to come by. But rather than wait for other folks to do the research and write up the papers, make use of the abilities that AI has unlocked for you, and show the world that you can get ahead of the game.</p><p>And above all, realize that this applies to you regardless of whether you are actually in college or not. Pick the workflows that make up the modal tasks in your current job, and run these tests on them. Because it is just a matter of time before somebody else does it anyway.</p><p>And if you&#8217;ve made it this far, you may enjoy reading the <a href="https://arxiv.org/abs/2510.04374">GDPval paper</a>. But if you are a student today, you should read it regardless of whether you enjoy it or not.</p><p>You learn best by doing, and in 2026, there is no excuse for not doing. Get started!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There is a rich literature here; <a href="https://www.aeaweb.org/articles?id=10.1257/jep.33.2.3">see this paper for starters</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>These are not hard-and-fast rules. You should carefully figure out the variants of these rules that make the most sense to you. Maybe the prompts for the AI+Human team and the AI Only team need to be exactly the same for each task? Maybe these prompts should be signed off by a professor?
Decide what works best for the exact aspect that you are trying to test in these experiments - this is the fun part of setting up an experiment like this, so have at it!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Mark Knopfler, The Excellent Historian]]></title><description><![CDATA[Alex Tabarrok recently wrote a post about privateering, and ended it with this recommendation:]]></description><link>https://www.econforeverybody.com/p/mark-knopfler-the-excellent-historian</link><guid isPermaLink="false">https://www.econforeverybody.com/p/mark-knopfler-the-excellent-historian</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sat, 03 Jan 2026 09:54:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/iwpDqZULW2Y" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Alex Tabarrok recently <a href="https://marginalrevolution.com/marginalrevolution/2025/12/bring-back-the-privateers.html">wrote a post about privateering</a>, and ended it with this recommendation:</p><div id="youtube2-iwpDqZULW2Y" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;iwpDqZULW2Y&quot;,&quot;startTime&quot;:&quot;6s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/iwpDqZULW2Y?start=6s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>More to the point (or at any rate, the point that <em><strong>this</strong></em> post is trying to make), he also called Mark Knopfler an excellent historian.</p><p>This post is me heartily seconding Alex&#8217;s call, and providing a series of arguments for why Alex and I are (obviously) right.
I&#8217;m hoping you haven&#8217;t heard these songs before, and if that is the case, you&#8217;re in for a royal treat.</p><p>Most people know Mark Knopfler as the lead singer of Dire Straits (<a href="https://marginalrevolution.com/marginalrevolution/2018/09/get-money-nothin-get-chicks-free.html">and there&#8217;s economics there too</a>, surprise surprise), but there is so much more to his career than that.</p><p>In this post, I am going to talk about ten of his songs, and how those songs cover diverse parts of history.</p><ol><li><p><a href="https://www.youtube.com/watch?v=oBZLH6ZknxI&amp;list=RDoBZLH6ZknxI&amp;start_radio=1">Done With Bonaparte</a>:<br></p><div id="youtube2-oBZLH6ZknxI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;oBZLH6ZknxI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/oBZLH6ZknxI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>&#8220;We&#8217;ve paid in hell since Moscow burned,<br>  As Cossacks tear us piece by piece,<br>  Our dead are strewn a hundred leagues,<br>  Though death would be a sweet release&#8221;<br>&#8230;is how the song starts; it goes on to tell us about the hell that must have been &#8220;The Little Corporal&#8217;s&#8221; army&#8217;s retreat from Moscow.<br>Knopfler&#8217;s songs are often about the &#8216;little&#8217; men in history, as opposed to The Great Man narrative, and this song is no different. Our protagonist tells us in a few short verses about how the dream of &#8216;Spanish skies and Egyptian sands&#8217; turned sour very quickly. He speaks about the Battle of Austerlitz, and about having lost an eye there.
He wants nothing more than a sweet return to his belle France, and his yearning for a return in both space and time lends this song an ache that never quite goes away.<br>Well, ok, he does want one more thing. He prays that future generations will never again see a little corporal point towards foreign shores and captivate the hearts of men.<br>Ah well. God must not have been in a listening mood that day.<br><br></p></li><li><p><a href="https://www.youtube.com/watch?v=pVQpRjjkdNE&amp;list=RDpVQpRjjkdNE&amp;start_radio=1">Imelda</a>:<br></p><div id="youtube2-pVQpRjjkdNE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;pVQpRjjkdNE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/pVQpRjjkdNE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>&#8220;She&#8217;s going shopping, shopping for shoes&#8221;, Knopfler informs us, and helps us learn about Imelda&#8217;s insanity. <a href="https://en.wikipedia.org/wiki/Imelda_Marcos">Imelda Marcos</a>, wife of Ferdinand Marcos, patron saint of obscenely large shoe collections (over <em><strong>3000</strong></em> pairs, WTF!), and the subject of this song, was quite the character. You&#8217;d have to be to have an <a href="https://en.wikipedia.org/wiki/Edifice_complex">entire Wikipedia article</a> about your proclivities!<br>It&#8217;s a lovely song as it is, but if you are a student of economics and history, it is a good way to get an &#8220;in&#8221; into the turbulent history of the Philippines in the second half of the twentieth century.
But regardless of whether or not you choose to take the plunge into the history of the Philippines (and you should!), you should certainly listen to this song.<br></p></li><li><p><a href="https://www.youtube.com/watch?v=OrLdKYRBOEE&amp;list=RDOrLdKYRBOEE&amp;start_radio=1">Sailing to Philadelphia</a><br></p><div id="youtube2-OrLdKYRBOEE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;OrLdKYRBOEE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/OrLdKYRBOEE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>One of my favorite songs by Knopfler (in part because my daughter loves it), this song is about the Mason Dixon line in America, by itself a fascinating piece of American history (and geography). But if you happen to also land upon <a href="https://www.amazon.in/Mason-Dixon-Thomas-Pynchon/dp/0099771918/ref=sr_1_1?crid=T70S7UU29OGI&amp;dib=eyJ2IjoiMSJ9.BPujH3_Rup3nHIfmwVHp3MJ4wDpFNNPBCGmZJcKgWT121Po2VCapfd9wInLxkx5iinp884e9g0lnpLHy4LojANmSf4BHhuVxYcAxG48A9lIe8lNY8WHCUsiLPuQwRZTmIs_gSTCUWhS1waTYkOBX66FgHrvK85RvRGJ78t2edAQJ6uXdnkcZ_nWQ706OVP_DM-RPcnIumax31y57aAkN-3vWGcbfl46AfwlQxniMSsg.8G4sHoZBxycQLwBWyvfjdegN6V_BBX3SdRdmfst_VP4&amp;dib_tag=se&amp;keywords=mason+and+dixon+thomas+pynchon&amp;qid=1767422806&amp;sprefix=thomas+pynchon+mason%2Caps%2C355&amp;sr=8-1">the book by Thomas Pynchon about these two (Mason and Dixon)</a>, then you have a treasure trove of historical adventures to savor.<br>And from a musical perspective, Knopfler and <a href="https://en.wikipedia.org/wiki/James_Taylor">James Taylor</a> at the same time. What more you want in life, eh? 
<br></p></li><li><p><a href="https://www.youtube.com/watch?v=Uf6PlpRmHNg&amp;list=RDUf6PlpRmHNg&amp;start_radio=1">Prairie Wedding</a><br></p><div id="youtube2-Uf6PlpRmHNg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Uf6PlpRmHNg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Uf6PlpRmHNg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Have you heard about the concept of a <a href="https://en.wikipedia.org/wiki/Mail-order_bride">mail-order bride</a>? This song is about that practice, and having listened to the song, I hope you&#8217;re inspired to both read the article, and also to do a bit of online digging-about to learn more about how and why the practice started. The song is a treat in and of itself, but I hope it also inspires you to find out a little bit more about a most unexpected subject (and hey, if you&#8217;ve ever found yourself wondering why students of economics should learn about sociology, here&#8217;s an answer!)<br></p></li><li><p><a href="https://www.youtube.com/watch?v=uVJJC29B64U&amp;list=RDuVJJC29B64U&amp;start_radio=1">Why Aye Man</a><br></p><div id="youtube2-uVJJC29B64U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;uVJJC29B64U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/uVJJC29B64U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>What does it mean to &#8220;have the back of Maggie&#8217;s hand&#8221;? 
If you want to understand the original DOGE better, this is a good song to get you started. What was the Britain of the 1980s like? What did Margaret Thatcher do, and what were the costs and benefits of whatever it was that she did? Whatever your opinion of <a href="https://en.wikipedia.org/wiki/Thatcherism">her economics</a>, Knopfler continues his theme of exploring what ordinary people go through during historic times in this song. &#8220;German building, British made&#8221; is as good a phrase as any to help you get started on labor mobility.<br></p></li><li><p><a href="https://www.youtube.com/watch?v=tRhew0bO2go&amp;list=RDtRhew0bO2go&amp;start_radio=1">Boom Like That</a><br></p><div id="youtube2-tRhew0bO2go" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;tRhew0bO2go&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/tRhew0bO2go?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watching &#8216;<a href="https://www.google.com/search?q=the%20founder%20movie">The Founder</a>&#8217; is one way to understand Ray Kroc and how the Golden Arches became what they are today (and not just the business, but also how they helped shape America). Another way is by listening to a song that talks about how Kroc discovered a little place flippin&#8217; meat (down in San Bernardino, ring-a-ding-ding), and ruthlessly turned it into what it is today. I cannot stand a McDonald&#8217;s burger (for reasons of taste, not because of the backstory!), but this song?
Play it on loop, no problem.<br></p></li><li><p><a href="https://www.youtube.com/watch?v=syfYiWEdC3A&amp;list=RDsyfYiWEdC3A&amp;start_radio=1">Don&#8217;t Crash the Ambulance</a><br></p><div id="youtube2-syfYiWEdC3A" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;syfYiWEdC3A&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/syfYiWEdC3A?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to the song, please, before reading further. Now listen to it once again, but with the knowledge that this is George Bush Sr. telling Jr. how to go about the business of running the show (which just so happened to be the most powerful nation on earth, something that was definitively true at the time)&#8230; and how to not crash the &#8216;ambulance&#8217; while you&#8217;re driving it. We could do with a sequel, Mark, we really could.<br></p></li><li><p><a href="https://www.youtube.com/watch?v=FVjYPu8xvHU&amp;list=RDFVjYPu8xvHU&amp;start_radio=1">Baloney Again</a><br></p><div id="youtube2-FVjYPu8xvHU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;FVjYPu8xvHU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/FVjYPu8xvHU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>If you saw and liked Green Book (I did, and I did), then this song is another way to get acquainted with the same era, in the same part of the world. 
The movie, being a movie, chose to show the protagonists standing up to fight, in a manner of speaking. But here&#8217;s to the untold many who just chose to pay the man and go. Because hey, baloney again.<br></p></li><li><p><a href="https://www.youtube.com/watch?v=cf5xWawwAjs&amp;list=RDcf5xWawwAjs&amp;start_radio=1">Daddy&#8217;s Gone to Knoxville</a><br></p><div id="youtube2-cf5xWawwAjs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;cf5xWawwAjs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/cf5xWawwAjs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Dire Straits fans might recall a song called &#8216;Telegraph Road&#8217;. I think of this song as a spiritual successor to that one, not in terms of the way it is sung or orchestrated, but in terms of what both sets of lyrics are about. That connection may not work for everybody, but economic hardships, urbanization, and the quiet struggle of the folks trying to make it through epochs in history - these are classic Knopfler themes, and an older, quieter Knopfler is still singing about &#8216;em in this one. 
Read more about Knoxville, Natchez Trace and urbanization in that part of the world (and if you like picking up on obscure connections between the lyrics of different songs, see if you can make the connection to &#8216;Sweet Home Alabama&#8217;, by Lynyrd Skynyrd.)<br></p></li><li><p><a href="https://www.youtube.com/watch?v=n_l2F4lXq3U&amp;list=RDn_l2F4lXq3U&amp;start_radio=1">Song for Sonny Liston</a><br></p><div id="youtube2-n_l2F4lXq3U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;n_l2F4lXq3U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/n_l2F4lXq3U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>I&#8217;m not much into boxing, but I enjoyed learning about a world I know next to nothing about by listening to this song. And the more I learn about Sonny Liston, his background, and about that time in general, the more conflicted I am about both the man, and what he became. And here&#8217;s another (non-Knopfler) <a href="https://www.youtube.com/watch?v=Fii6PX0-VXs">song for you to listen to</a>, if you want some connecting trivia.</p><p><br></p><div><hr></div><p><br><br>As you might have been able to tell, Knopfler is one of my favorite song-writers. He may not have the same abilities as, say, a Dylan (then again, that&#8217;s unlikely to ever happen again), and he&#8217;s never going to make anybody&#8217;s top ten list when it comes to singing (not even mine, and I&#8217;m a Knopfler acolyte). But when it comes to knowing a thing or two about playing the guitar, and when it comes to picking weird ol&#8217; themes to tell a story about, Knopfler is second to none in my book. 
And as this post hopefully makes clear, he&#8217;s a great way to get an &#8216;in&#8217; into many different histories. <br>Happy listening!</p><p><br>(Pro Tip: You could put this entire blogpost into ChatGPT, and ask it to make a playlist using Spotify. Try it!)<br></p><p><br></p></li></ol>]]></content:encoded></item><item><title><![CDATA[Agency v Compliance in Education]]></title><description><![CDATA[... or why AI *Can* Improve Undergraduate Education, But Won't Just Yet]]></description><link>https://www.econforeverybody.com/p/agency-v-compliance-in-education</link><guid isPermaLink="false">https://www.econforeverybody.com/p/agency-v-compliance-in-education</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Thu, 01 Jan 2026 11:17:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H2rR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Will AI Improve Undergraduate Economics Education?</h1><blockquote><p>&#8220;At the start of my graduate studies, the Journal of Economic Perspectives was a brand new journal and the Internet didn&#8217;t exist. Over time, there has been an explosion of unstructured content available for students eager to jump in. Blogs popped up, Twitter and X feature many top economists willing to engage with the public and Substack features many academic economists writing subtle pieces. Undergraduate teachers can find high quality material produced on many reputable websites ranging from the World Bank, VoxEU, and many Think tanks. 
In recent months, I have experimented with loading many interesting readings to a shared Google LM Notebook website and encouraging my students to ask the AI for summaries about these writings and to ask their own questions&#8221;</p></blockquote><p>The title of the paper that this excerpt has been taken from is &#8220;<a href="https://drive.google.com/file/d/1EFoNC4wuclRGZcOO23EGM6BxvbSFF2af/view">Will AI Improve Undergraduate Economics Education?</a>&#8221;. The author is <a href="https://sites.google.com/site/mek1966/">Matthew Kahn</a>. There are many reasons to read this paper, chief among them that it is short, succinct, and a good read. The fact that it has been written with Grok&#8217;s assistance is another. My main reason for writing about this paper is that I like the careful phrasing of the title<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><h2>But Why That Particular Title?</h2><p>Here are some alternative titles for the same paper:</p><ol><li><p>Can AI Improve Undergraduate Economics Education?</p></li><li><p>How can AI Help Improve The Way Undergraduates Acquire An Economics Education?</p></li></ol><p>I would argue that there are two good reasons for not going with either of these titles.</p><h3>We&#8217;re Already There, We Just Don&#8217;t Know It Yet</h3><p>First, sections 3, 4, 5, 6 and 7 in the paper are effectively an answer to the first question. AI absolutely can improve (and to a limited extent already is improving) undergraduate economics education. Matthew Kahn discusses this extensively through each of the sections I have mentioned above, and if you can find the time, please go through them in detail. 
In case you cannot, here are the section titles:</p><ul><li><p>Integrating AI into Econ 101</p></li><li><p>AI Infused Intermediate Micro and Macro and Econometrics</p></li><li><p>Incorporating AI in Field Classes</p></li><li><p>Independent Research and Research Assistant Opportunities</p></li><li><p>Research Assistant Opportunities in our AI Era</p></li></ul><h3>Education&#8217;s Emperor Problem</h3><p>Second, section 10 is as good an answer as any I have read for the second question:</p><blockquote><p>&#8220;For far too long, students have been choosing majors in the dark&#8212;picking &#8220;prestigious&#8221; fields without really knowing what the degree will do for them, while universities have been able to hide behind vague reputations and opaque classrooms. Parents write enormous checks with almost no idea what they&#8217;re buying, employers wonder if the diploma still means anything, and everyone quietly suspects a lot of the game is just expensive signaling.</p><p>AI changes that. Cheap, frequent, AI-proctored assessments and virtual tutors suddenly make effort and mastery visible in real time. Professors discover whether students are actually learning the material. Parents can peek at meaningful progress dashboards instead of just getting billing statements. Employers can ask for verifiable records of real skills instead of trusting a transcript that could have been gamed.</p><p>When effort is easily observed, Deans can write optimal contracts. Will Deans (who often embrace horizontal equity considerations) reallocate funds to those departments that adapt to the AI challenge by producing a better product? Universities are non-profits. What do the Deans maximize? If they seek to maximize the expected present discounted value of graduate earnings then Economics Departments who infuse AI in their curriculum will prosper.&#8221;</p></blockquote><p>Economists have their own shorthand for talking about these problems. 
We refer to them as asymmetry of information problems, or sometimes we will talk about principal-agent problems. But you can also, if you prefer, call this the &#8220;Emperor-Has-No-Clothes&#8221; problem.</p><p>Because here is the harsh unvarnished truth: parents absolutely do have an idea of what they&#8217;re buying with an enormous check. They&#8217;re buying the degree. Employers absolutely know that the diploma means something, and they also know that it doesn&#8217;t mean a good education. They know this because they went through the same system themselves not all that long ago, and they know things haven&#8217;t changed much since they graduated. Nobody merely &#8220;quietly suspects&#8221; that a lot of the game is just expensive signaling. They know it is just expensive signaling. But hey, why rock the boat?</p><p>That question isn&#8217;t meant as a rhetorical flourish. Why, indeed, should you be rocking the boat?</p><h1>The Boat is Already Rocking</h1><p>As things stand, universities have spent decades<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> perfecting their workflows into what they are today. Some of this is because it is economically efficient, some of it is because that&#8217;s what the market has desired in the past, and some of it is because of the need to comply with regulations. But within those pressures, quite naturally, universities have evolved into a system that has become thoroughly entrenched. There has been no pressing need to change. Why? Because the culture we live in today has perfected our current form of pretending to dispense an education into a finely honed performative art. As a society, we have applied just the right amount of veneer to those four-odd years that students spend in undergraduate learning to make it seem as if Learning Is Happening. 
More importantly, we have settled into a workflow where the awarding of the precious degree, and the granting of the even more precious mark-sheet, is all that matters.</p><p>Students, given this workflow for acquiring a degree (as opposed to an education), naturally seek to minimize effort while doing so. That&#8217;s not a comment on them; you and I did the exact same thing when we were in college, and we do it even today in our jobs. Minimizing effort to acquire something isn&#8217;t being lazy; it is being rational. Is that &#8220;something&#8221; worth acquiring in the first place, and are there better ways to acquire the true underlying thing? </p><p>These are difficult questions to answer. Battling the system as it is is more than enough work; why bother fighting to change it?</p><p>It is because ceteris is not quite paribus<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> anymore.</p><h2>Creative Destruction</h2><p>In the conclusion of the paper, Kahn has these two paragraphs:</p><blockquote><p>&#8220;The 2025 Nobel Prize was awarded to three economists for their work on how technological innovation drives sustained economic growth. Our own core field faces the AI innovation shock. How will we as individual educators and economics departments respond to this challenge? In this essay, I have sketched out my own perspective and I plan to crowd source my paper to see how people respond. When Creative Destruction<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> comes home to our daily life, how do we respond?</p><p>Our adaptation task would be simpler if our consumers (our students) were time consistent! 
At age 20, undergraduates often prioritize immediate gratification&#8212;seeking courses with minimal effort, high grades, and entertaining delivery&#8212;to balance academics with social life, internships, and extracurriculars. Yet, at age 40, alumni frequently reflect on what truly prepared them for career success: not easy electives, but skills in causal reasoning, handling uncertainty, and adapting to technological shifts.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>&#8221;</p></blockquote><p>AI is a technology shock to education. AI is <em><strong>the</strong></em> technology shock to end all other technology shocks, but that is a story for another day. For now, for the purposes of this essay, it is enough to understand that AI is very much a &#8220;creative destruction&#8221; force. We <em><strong>already</strong></em> have the tools that are capable of dispensing a much more customized, personalized, easily monitored, more rigorously measured education. There is no technological constraint that stops us from doing so. Again, sections 3, 4, 5, 6 and 7 in the paper give you glimpses of how this could be done, and there are many people who are working on advancing early research in this area<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>.</p><p>The constraint on improving the dispensing of education isn&#8217;t technological. Quite the opposite. AI has helped unlock a much better way to dispense education. The problem is a lack of demand. 
And that&#8217;s down to culture.</p><h2>AI Is Pushing Us Towards A Coasean Singularity</h2><p>There is <a href="https://www.nber.org/books-and-chapters/economics-transformative-ai/coasean-singularity-demand-supply-and-market-design-ai-agents">interesting research</a> about this now available, and I have covered <a href="https://www.econforeverybody.com/p/when-ai-eats-transaction-costs">how a reduction in transaction costs might affect higher education</a> previously. Briefly put, AI will reduce search costs for many economic transactions, leading to more efficient markets. Firms will become better at judging labor inputs, and will be able to get much more done with a far more skeletal taskforce. Effort monitoring will become a lot better, and this matters for us because the same can be true of education. Why? Because AI can observe a student&#8217;s progress much better than a professor or a college can<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>.</p><p>Think of traditional firms or colleges as the equivalent of an internal combustion engine (ICE). These engines have many moving parts that must move in perfect synchronicity with one another, do not work as efficiently, need frequent replacement of those parts, and call for regular maintenance and upkeep. There is a lot of friction.</p><p>AI-first firms or colleges, on the other hand, are like electric motors. They have hardly any moving parts, work much more efficiently, and do not need anywhere near the same level of regular maintenance and upkeep. There is hardly any friction.</p><p>The ICE world (which is our old world with no AI in it) calls for compliance, and our current university system is excellent at producing reasonably high quality employees with high compliance. 
The electric motor world (the new world, with AI very much in it) calls for agency, and our current university system is horrible at producing high quality employees with high agency. Educational institutions do not reward a student for finishing a course in two weeks, nor do they appreciate a student looking at a course syllabus for an entire degree and saying, &#8220;Well, with AI&#8217;s help, I can master this to anybody&#8217;s satisfaction in only two years as opposed to four&#8221;. Education today has a <a href="https://hollisrobbinsanecdotal.substack.com/p/the-two-minute-mile-problem">Two Minute Mile</a> problem.</p><h1>Where Are We Headed?</h1><p>How realistic is it to assume that we (and that&#8217;s all of us, as a society) are headed towards a Coasean Singularity? I am not asking if we will reach or attain that singularity. I am asking how realistic it is to assume that we are headed in that direction. My day job is already providing some important updates in that direction, and I hope to be able to share some of our findings with you here soon. I suspect most adults reading this have already experienced moves in this direction in their firms or businesses, and many more will experience lurches in this direction in 2026.</p><p>Firms and businesses respond to market pressures, so one should expect this move to happen in response to market-based incentives, assuming AI progress continues. Educational institutions respond to cultural pressures at least as much as they do to labor market dynamics (and possibly more), and only partially to market-based dynamics. 
Whether the movement will happen at all or not, and whether it will happen as quickly as it does in the case of market facing firms and businesses, is the key question facing educational institutions today.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H2rR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H2rR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H2rR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5401468,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/183129115?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H2rR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!H2rR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb0fcb6a-4ae4-44d8-8c57-868a525e0dd6_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nano Banana Pro&#8217;s Take</figcaption></figure></div><p></p><p>If I&#8217;m right about the vertical axis, the question is about whether, and if, current educational institutions will be able to adapt and move. This will depend upon our culture&#8217;s ability (and speed) to adapt to our new reality, and raise our cultural demand for high agency education. Fighting against this will be our cultural inertia, guided by our entirely understandable cultural preference for maintaining the status quo.</p><p>To the extent that our educational institutions fail to adapt quickly, there is going to be a growing disconnect between the new firms in this new economy, and the old educational institutions from our old-but-oh-so-familiar-and-comfortable economy. 
Will resistance win out over revolution, or will the gravitational pull of the new economy be too powerful to resist?</p><p>The answer I favor isn&#8217;t one that is acceptable to the current culture of education. But it is going to be increasingly inevitable, and in some ways already is. Watching this fight will be exciting, entertaining and best of all, educational.</p><p>Enjoy it.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I write this essay from an Indian point of view. The degree to which my arguments apply in other contexts will obviously vary.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>And in some places, centuries.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Ceteris Paribus is a phrase that means &#8220;All other things being held constant&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>From Google: &#8220;Creative destruction, a term popularized by economist Joseph Schumpeter, describes the &#8220;perennial gale&#8221; of capitalism where new innovations constantly destroy old industries, products, and practices, making way for new, more efficient ones, driving long-term economic growth and progress, even as it causes job losses and industry decline in the short term&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" 
href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I cannot help but point out that we as a society have done an excellent job of telling all twenty year olds, every year, that the point is to acquire the degree, not the education. Is it any wonder that their time preferences are not consistent?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p> NotebookLM, covered in different places in Kahn&#8217;s paper, can already do a lot of the heavy lifting. Why NotebookLM has not yet been baked into Google Classroom is a mystery, but surely it is but a matter of time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Or, in the jargon of the economist, AI can reduce asymmetry of information problems. This is related to footnote #3 above.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[KnOT in My Backyard]]></title><description><![CDATA[One of the best things I did in 2025 was to help in creating a most magical community. 
Here's how we did it, and why]]></description><link>https://www.econforeverybody.com/p/knot-in-my-backyard</link><guid isPermaLink="false">https://www.econforeverybody.com/p/knot-in-my-backyard</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Fri, 19 Dec 2025 13:24:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wJup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A little over a year ago, I reached out to a guy called Ravishankar Iyer on WhatsApp. Therein lie multiple tales, and I shall tell you all of these tales today.</p><h2>CWC</h2><p>The first of these tales is about a community called The Clear Writing Community. <a href="https://seenunseen.in">Amit Varma</a>, <a href="https://indiauncut.substack.com">wearer</a> of <a href="https://www.youtube.com/@amitvarma">many hats</a>, runs an online course called <a href="https://indiauncut.com/clear-writing/">The Art of Clear Writing</a>.</p><p>Regular readers of this blog may be surprised to learn that I had taken this course back in 2020. (I should hasten to add that you shouldn&#8217;t be updating your priors about Amit as a teacher. You should be reinforcing your priors about my laziness as a student.)</p><p>But as I tell everybody who is considering doing this course, you should do it because Amit <a href="https://econforeverybodyblog.wordpress.com/2021/03/08/maximizing_soul/">maximizes soul</a>. The online sessions are great, but the better payoff is being a part of a lovely community. </p><p>This community has many offshoots, including city-wise chapters, and various hobbyist groups. 
I have become a lurker on pretty much every single community I am a part of, and I&#8217;m certainly not claiming that I am an active participant on any of them<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. But I love reading the links that people share on the CWC community, and it is an amazing bunch of kind, well-read people. Join Amit&#8217;s writing course as much (more, in fact) for the community as for the learning.</p><p>Ravishankar has also done this course, and is therefore a part of the same WhatsApp community. He writes a weekly newsletter called the <a href="https://storyrules.substack.com">3-2-1 newsletter</a>, and in one of his posts he had linked to a blog post that I had written. I messaged him to thank him for doing so.</p><p>Why? Well, many reasons. One, because I agree with Seth when he says that sending little thank-you notes out into the world is a good idea. But also because I agree with <a href="https://www.youtube.com/watch?v=4zMQOzR3jcE">my friend Navin Kabra</a>, when he says that you should work at increasing your network.</p><h2>The Magic of Weak Links</h2><p>Navin is <a href="https://futureiq.substack.com/p/my-journey-from-an-introvert-to-a">big on weak links</a>. It is a theme he has explored in his videos, his blog posts, his posts on Twitter, and his teaching. Another economist whose work I admire has explained why <a href="https://conversableeconomist.com/2011/09/06/i-want-to-be-your-weak-tie/">developing weak links</a> matters, and for these reasons (among others), I think it a good idea to occasionally write thank you notes to folks whose work I admire. A bet with only positive upsides, so why wouldn&#8217;t you take it, eh?</p><p>This particular bet paid off for the first time when Ravi responded to my message, and we found out that we not only stay in the same city (Pune), but also happen to live very close to each other. 
We met for breakfast, and discovered a mutual love for beer. A seemingly trivial detail, but as we shall learn soon, one that links together the many different tales that make up this blog post.</p><p>Soon after, Ravi sent me a message about an idea that he had clearly been thinking about for a while. Why should we not figure out a way to get folks to meet up and listen to a talk? Ravi and I agreed that this was a great idea, and a fantastic excuse to get together and talk about it over a couple of beers. </p><p>And so we did meet up at a pub to drink the beers, and also to answer some pretty important questions. For example, who will deliver these talks? On which subjects? Where might we be able to host these talks? And other questions of a similar nature.</p><p>The grub was good, the IPAs were excellent, and we had a lot of fun talking about these ideas, especially after we got on to the second round of pints. And it might have all stopped here, as is often the case with these beering sessions.</p><p>Except that Navin tweeted this the very next day:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jPb6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jPb6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 424w, https://substackcdn.com/image/fetch/$s_!jPb6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 848w, 
https://substackcdn.com/image/fetch/$s_!jPb6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 1272w, https://substackcdn.com/image/fetch/$s_!jPb6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jPb6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png" width="602" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:602,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:121134,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/182063668?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jPb6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 424w, https://substackcdn.com/image/fetch/$s_!jPb6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 
848w, https://substackcdn.com/image/fetch/$s_!jPb6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 1272w, https://substackcdn.com/image/fetch/$s_!jPb6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ccdf1b2-a86b-4dca-acbc-e3cf5b2f8035_602x652.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Three things happened after that. One, we got another reason to meet up and drink beer once again, this time with Navin. 
Two, <a href="https://x.com/AdhesionLab">Nagaraj, a professor from IISER</a> joined our little organizing committee. <a href="https://www.linkedin.com/in/bvhk/?originalSubdomain=in">Harish Bhamidipati</a> joined us a little while later too. And three, we realized that we would have to actually get this off the ground if we wanted to continue using this as an excuse to meet up and grab a couple of beers.</p><p>And thus was Pune Knowledge on Tap born.</p><h2>Pune Knowledge on Tap (KnOT)</h2><p>The idea behind KnOT is very simple, and is based on the Science on Tap series that used to happen in Pune. Science on Tap was <a href="https://x.com/anoopsmahajan">Anoop Mahajan</a>&#8217;s idea, and he was more than happy to both give us his blessings and share with us tips and tricks for what worked well and what didn&#8217;t.</p><p>KnOT talks happen (usually) on the last Thursday of every month, and they always happen at <a href="https://maps.app.goo.gl/BTiSP7rhJhrTexiS9">Wynkk, in Aundh</a>. Attendees at each event pay 300 rupees to enter, and are given a coupon which they can redeem in full inside for beer, non-alcoholic beverages, or anything else they like.</p><p>Folks grab a drink and have a chat for about half an hour before the talk begins, and are most welcome to stay back after the talk to chat with each other or the speaker. The talk lasts for about an hour, and we have a small WhatsApp community where we post updates, etc<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. 
<a href="https://www.chathamhouse.org/about-us/chatham-house-rule">The Chatham House Rule</a> applies, and the talks are not recorded by design. The idea is to turn up and listen, in person.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wJup!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wJup!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wJup!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wJup!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wJup!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!wJup!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg" width="1280" height="925" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:925,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:228170,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/182063668?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wJup!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wJup!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wJup!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wJup!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffdc56b04-0824-4e60-aa95-9381a7f00bc8_1280x925.jpeg 
1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">These are all the talks that have happened thus far, save for one</figcaption></figure></div><p><a href="https://www.google.com/search?q=pradeep%20apte">Prof. Pradeep Apte</a> delivered the first talk, on the history of beer. We figured that was the most appropriate way to start, considering the nature of the talk (not to mention the story of how KnOT started!). Siddhesh Kamat was next, and he gave a talk on his <a href="https://www.youtube.com/watch?v=9R6HnrLs5YY">research on lysophosphatidylserine</a>. 
<a href="https://www.greatlakes.edu.in/chennai/faculty/vidya-mahambare">Vidya Mahambare</a> gave a talk on India&#8217;s bypass economy in June, while <a href="https://pennock5.msu.domains/">Robert Pennock</a> spoke in July on anti-evolutionism in court. <a href="https://www.karishmakaushiklab.com/">Dr. Karishma Kaushik</a> gave a talk on her work on biofilms in August, while <a href="https://www.google.com/search?q=niranjan+pednekar">Niranjan Pednekar</a> gave a talk on how constraints liberate art in September. October was <a href="https://thingsofinternet.substack.com/p/how-technology-impacts-music-links">Deepak &#8216;Chuck&#8217; Gopalkrishnan&#8217;s talk</a> on how tech shapes music, while November was the geologist, <a href="http://suvratk.blogspot.com/">Suvrat Kher</a>, talking about how each one of us can (and should) discover our inner geologist.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>We rounded off 2025 by inviting Ravishankar himself to give a talk on the subject closest to his heart, <a href="https://www.amazon.in/Story-Rules-Communication-Professionals-Entrepreneurs/dp/0143478206/ref=sr_1_1?crid=4Z35ON1IU8W8&amp;dib=eyJ2IjoiMSJ9.ZEvPw267zcQ83aAh-qYm7CeJuFpvF6hI3_JvITGSEgC4NClejLvj2u0-a3ngWoytS1nlwC-Xa8OyGnimz4ZlnNC1iM-ixuIEne_Wt50udqNBlLCxwaiLTKzXBnM_nlHQfiqy0uFSWLUe7Kbm-lRaRhstepomMcU5Wad3zlK-voT2zYQjqxtcGsuRI5Vmjqfvu6I_B6oNafGwMOLkuOBnqYPh6bcvHslzdIVE_f9X08Q.xeHJG7Ts-VGOcHLrGYbTWp7cwPi1HXaMLlrNjAm9zP8&amp;dib_tag=se&amp;keywords=story+rules+ravishankar&amp;qid=1766148046&amp;sprefix=story+rules+ravishankar%2Caps%2C188&amp;sr=8-1">the art and science of telling stories</a>.</p><h2>The Point is the Community</h2><p>We were worried when we began the monthly talk series earlier this year. 
We were worried about a lot of things, but our chief worry was whether a series like this would garner enough of an audience, month after month.</p><p>And we&#8217;re glad to report that our worries were baseless. Every talk has been &#8220;sold out&#8221;, and we have our own little community that uses these talks as a way to meet up, interact with other like-minded folks, and learn about something interesting in the bargain.</p><p>Each one of us organizers has a lot of different ideas about how we could do more with the community that we are building. The organizing principle and the motivation behind whatever we do will remain the same: helping build better communities, with the glue always being the ability to learn about interesting ideas as a community.</p><p>And that&#8217;s the reason I wanted to write this post - to explain that the whole point has always been to build up and expand our communities. Each of us, the folks behind KnOT, has had our own network enriched by meeting folks from each other&#8217;s circles. And this is true, it goes without saying, for everybody who attends these meet-ups.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>Making this happen, and sticking with it through 2025, has been one of the best things I have managed to do this year, and I hope we do an even better job of it in the coming year. One of the ways we could do a better job is by making many more such events happen.</p><p>So if you&#8217;d like to find out the hows and the whys and the why-nots that go into making something like this happen, please feel free to reach out. 
Any one of us will be more than happy to try and help!</p><div><hr></div><p>Here&#8217;s hoping I get to meet you soon at a community meet-up, and life would be even better if it happens to be one organized by you.</p><p>Cheers, and here&#8217;s to 12 more KnOTty problems to solve in 2026.</p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Why have I become a lurker? I honestly don&#8217;t know, but I now have a very strong preference for remaining one</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Get in touch if you&#8217;d like to join</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>None of the organizers were in town in May</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>As Navin is fond of saying at every meet-up, one of our rules is that you should speak to at least one stranger before you leave.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Moravec, Monkeys, Mirrors and Mind]]></title><description><![CDATA[Have you seen a computer play chess?]]></description><link>https://www.econforeverybody.com/p/moravec-monkeys-mirrors-and-mind</link><guid isPermaLink="false">https://www.econforeverybody.com/p/moravec-monkeys-mirrors-and-mind</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Fri, 12 Dec 2025 08:02:46 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!Orwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you seen a computer play chess? But also, have you seen a robot try to pick up a chess piece and put it on another square? And have you ever found yourself wondering, &#8220;What&#8217;s up with that?&#8221;</p><p>I found myself thinking about just this question, and three different things that happened to me pushed me in that direction.</p><p>First, a couple of months ago, I gave a talk to a bunch of schoolkids about a book called <em><a href="https://www.abriefhistoryofintelligence.com">A Brief History of Intelligence</a></em>. This is a book written by <a href="https://www.linkedin.com/in/bennettmax/">Max Bennett</a>, and I had found it to be an enjoyable and informative read about the topic that the title promised. </p><p>Second, a couple of weeks ago, I attended the Emergent Ventures conference in Bangalore, where I sat in on a session on <a href="https://en.wikipedia.org/wiki/Mirror_neuron">mirror neurons</a>.</p><p>And third, just this morning, I saw a video about <a href="https://www.youtube.com/watch?v=UALxgn1MnZo">a tour of the Google DeepMind Robotics lab</a>.</p><p>Today&#8217;s post is about how what lies at the intersection of these three things helps us understand why a computer is a genius at chess, but a robot is such a klutz at picking up chess pieces.</p><p>Let&#8217;s go.</p><h2>Chess is easy, messy rooms are hard</h2><p>Folks in the know will tell you that this is just <a href="https://en.wikipedia.org/wiki/Moravec%27s_paradox">Moravec&#8217;s Paradox</a>. 
Moravec&#8217;s Paradox says that things that are hard for humans are often easy for machines, and things that are easy for humans are often hard for robots.</p><p>A human can tidy up a messy room in next to no time, and a robot can crush you (metaphorically speaking) at chess. But ask both of these to get the other task done, and capability falls off a cliff. That&#8217;s Moravec&#8217;s Paradox.</p><p>But why?</p><p>Why is it that a robot can beat every single human on the planet at chess (no exceptions, no qualifiers, no nothing), but cannot even get going on the simplest of tasks, such as, say, cleaning up a messy room? Why are humans so very good at things like <em>perception, grasp planning, compliance,</em> and dealing with <em>uncertainty</em>, and why are robots not yet good at all of these things?</p><p>Here&#8217;s a quote from Hans Moravec himself about why this may be so, <a href="https://en.wikipedia.org/wiki/Moravec%27s_paradox">taken from the Wikipedia page about Moravec&#8217;s Paradox</a>:</p><blockquote><p>Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.</p></blockquote><p>It is not at all the case that these tasks are easy. It is just that we have had a really, really long time to get good at tasks such as these. Our brains and our bodies have had billions of years to get progressively better at motor tasks. 
But our brains and bodies have also had billions of years to build up these tasks into chains. Or to put it in the jargon of the AI industry, we&#8217;ve learnt <em>agentic pathways</em> for motor tasks over billions of years.</p><p>Now, when I say &#8220;we&#8221;, I mean everybody alive on this planet today, not just humans. Organisms have learnt agentic pathways for motor tasks over billions of years, and we (humans) have been around for only the last two hundred thousand years or so of that span, at best.</p><p>And the story of how we learnt these pathways is best told in the book that I was telling you about, <em>A Brief History of Intelligence</em>.</p><h2>A Brief History of Intelligence</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Orwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Orwk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!Orwk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!Orwk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Orwk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Orwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38654348-2698-4689-ae1d-711314ca390f_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1515605,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/181319752?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Orwk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!Orwk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!Orwk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Orwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38654348-2698-4689-ae1d-711314ca390f_1376x768.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">NotebookLM&#8217;s summary of the book</figcaption></figure></div><p>S-R-S-M-S. 
That&#8217;s how I remember the five stages shown here<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. That&#8217;s Steering, Reinforcing, Simulating, Mentalizing and Speaking. Those, Max Bennett says, are the five stages of organisms developing intelligence on this planet.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Briefly put, we first figured out how to steer towards good things, and steer away from bad things. That was stage 1, <em><strong>steering</strong></em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Then we learnt how to reinforce in our own heads how doing <em>x</em> seemed to cause <em>y</em>. That was stage 2, <em><strong>reinforcing</strong></em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Now, that brings us to stage 3, <em><strong>simulating</strong></em>.</p><p>https://www.youtube.com/clip/UgkxyydjZdFboQNHlnHZtQU0nZupaCrFz2Tr</p><p>That&#8217;s a clip from a podcast run by Google DeepMind itself.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The reason I find it so fascinating is that this conversation sounds to me like a description of a move from Breakthrough 2 to Breakthrough 3. Robots can now generate internal simulations of the world. They can imagine, which means they can engage in trial and error, but <em>vicariously</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>The demo itself (the entire video) may seem a little basic when the bar is &#8220;relative to what we can do&#8221;. 
But when you view the video in the framework of that infographic above, it becomes impressive, because you can place where robotic development currently stands. We are at stage 3. If AI began its race against us with us having a 550-million-year head-start, AI is now about 15 million years away from us (it now needs to &#8220;conquer&#8221; Breakthrough 4, per the infographic above).</p><p>Our story can now go along two different paths. One path is called &#8220;OK, how might it conquer Breakthrough 4?&#8221;. On this path lie things like mirror neurons, concepts like &#8220;theory of mind&#8221;, and stuff from the very cutting edge of robotics and AI research.</p><p>The other path is called &#8220;Uh, am I the only one who&#8217;s taken a look at Breakthrough 5 here?&#8221;. On this path lies the uncomfortable realization that the labs we&#8217;ve built to build intelligence seem to have messed up the recipe a little bit. They&#8217;ve gone from Breakthrough 1 to Breakthrough 2, and only now managed to reach Breakthrough 3, which is fine in and of itself. But they&#8217;ve also gone ahead and figured out Breakthrough 5, because what else are LLMs if not Breakthrough 5?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> On this path lie mean trolls, so let&#8217;s saunter down the other path for now.</p><h2>How Might AI Tackle Breakthrough 4?</h2><p>This path, the one about Breakthrough 4, is fun, interesting and informative. And it begins midway through Max&#8217;s excellent book.</p><p>In Chapter 17, Max Bennett tells us the story of a group of Italian neuroscience researchers who were getting some lunch in their lab. This was in the midst of an experiment they were conducting on areas of a monkey&#8217;s premotor cortex. 
The penny dropped when they realized that parts of the monkey&#8217;s premotor cortex were lighting up when the monkey happened to see a human raise their arm to eat the sandwich.</p><p>Why was this remarkable? Because the premotor and motor cortices of our brains were supposed to have been in charge of our movements, and that&#8217;s it. But we were now learning that these parts also lit up when we simply looked at other people doing these actions. We call the neurons in these parts of our brain <a href="https://en.wikipedia.org/wiki/Mirror_neuron">mirror neurons</a>.</p><h3>More about mirror neurons</h3><p>These mirror neurons have been <a href="https://pubmed.ncbi.nlm.nih.gov/20381353/">discovered in humans too</a>, and <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4006181/?utm_source=chatgpt.com">other animals besides</a>. They&#8217;ve been <a href="https://pubmed.ncbi.nlm.nih.gov/20074212/">discovered in other parts of our brains</a>, and there are now many theories about <a href="https://www.sciencedirect.com/science/article/pii/S1364661322001346?utm_source=chatgpt.com">their functions</a>, <a href="https://pubmed.ncbi.nlm.nih.gov/34241539/">the reasons for their existence</a>, and <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2015.00973/full?utm_source=chatgpt.com">their importance</a>. Most importantly for our purposes, there are now theories about how mirror neurons can help us better understand the problem called theory of mind.</p><p>Theory of mind is the capacity to understand other individuals by ascribing mental states to them, as per Wikipedia. This includes the understanding that others&#8217; beliefs, desires, intentions, emotions and thoughts may be different from one&#8217;s own. OK, and why does this matter for AI and robotics development? Here&#8217;s Bennett:</p><blockquote><p>One reason it is useful to simulate other people&#8217;s movements is that doing this helps us understand their intentions.
By imagining yourself doing what others are doing, you can begin to understand why they are doing what they are doing: you can imagine yourself tying strings on a shoe or buttoning a shirt and then ask yourself &#8220;why would I do something like this?&#8221; and thereby begin to understand the underlying intentions behind other people&#8217;s movements. The best evidence for this is found in the bizarre fact that people with impairments in performing specific movements, also show impairments in understanding the intentions of those very same movements in others. The subregions of premotor cortex required for controlling a given set of motor skills are the same subregions required for understanding the intentions of others performing those same motor skills.</p><p><em>Bennett, Max. A Brief History of Intelligence: Why the Evolution of the Brain Holds the Key to the Future of AI (p. 355). (Function). Kindle Edition.</em> </p></blockquote><p>But it&#8217;s more than this. Understanding intention is one thing. Having the motivation to keep observing the actions done by somebody else, especially with no immediate payoff in the offing, is quite another. We can spend hours, if not years, attempting to replicate observed skills, while other animals will only do it if they get immediate rewards, and even then, not for very long.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>So the answer to the question &#8220;How might AI tackle Breakthrough 4?&#8221;&#8230;</p><p>&#8230;is that we need to figure out a way to get robots to have the equivalent of a combination of mirror neurons, theory of mind, and long attention spans. With two important caveats. One, this is one of many approaches being tried right now, and the other approaches are equally interesting in their own right.
Two, as with everything else that is cutting-edge, there is controversy and debate about mirror neurons, and plenty of it.</p><p>Why are researchers not sure about whether mirror neurons can help with Breakthrough 4? Two reasons. First, it would seem neuroscience researchers are not entirely sure about how important mirror neurons are to understanding theory of mind in humans.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Second, published research is thin on how we can use our cutting-edge understanding of neuroscience in this area to better understand AI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>But until we do, anticipate being sub-par at figuring out the best move in the next game of chess you play with an AI. But also, feel free to help it pick up the piece with which it will inevitably knock over your sorry little king.</p><div><hr></div><p>Now, that leaves us with that second path we spoke about, the troll-infested one. But a wise procrastinator leaves that sorry task to his future self, the guy who will think about the next post to write.</p><p>And I am very, very wise when it comes to procrastinating.
So my future self will see you, er, soon.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There are spelling mistakes in the infographic, but for a top level summary that is auto-generated, this is pretty damn good.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Although it might be more correct to say that these are the five stages that best describe how intelligence developed inside organisms on this planet. The phrasing matters because we cannot be sure if intelligence emerged as a property of biological life forms, or whether biological life forms functioned as the first stable substrate in which intelligence-like processes could arise.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>And by the way, this is also the entirety of the idea behind modern day robot vacuum cleaners. 
Steer away from walls, and towards your charging socket.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>See the section (1.7) on <a href="https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf">History of Reinforcement Learning</a> (pg 16 in the PDF) for a parallel discussion on how this was mapped in AI development.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>My apologies if you had to click through to see the clip. I have no clue what Substack is up to.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>What does vicariously mean? Here is what Google tells us: &#8220;in a way that is experienced in the imagination through the actions of another person&#8221;. But notice how thinking about this becomes tricky very quickly. That &#8220;other person&#8221; is you in your own imagination when you imagine what will happen if you do x. But you are now asking a robot, which is not clear about who or what its own self is (or so the LLMs tell us when we ask them), to imagine &#8220;itself&#8221; in a particular situation.
Oof.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>The word &#8220;Language&#8221; in LLMs is almost definitely an important clue, Watson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>There are many excellent reasons for having a dog as a pet. Establishing the veracity of this claim in the most endearing and rewarding way possible is one of them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Gregory Hickok <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC2773693/">has a paper</a> about this, but <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4006175/">also see this</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>But <a href="https://link.springer.com/article/10.1007/s00521-025-11100-0">this paper</a> might be of some help. As might <a href="https://philarchive.org/archive/MICCPA-6?utm_source=chatgpt.com">this one</a>, about conscious perception and the prefrontal cortex. 
We have not spoken about this topic in this post, but useful and related reading nonetheless.</p></div></div>]]></content:encoded></item><item><title><![CDATA[On Asking Great Questions]]></title><description><![CDATA[What connects Gorbachev, a small cup of ice-cream, and a conversation between two Dutchmen?]]></description><link>https://www.econforeverybody.com/p/on-asking-great-questions</link><guid isPermaLink="false">https://www.econforeverybody.com/p/on-asking-great-questions</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Thu, 11 Dec 2025 07:57:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/u2otQRQdeh8" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What connects Gorbachev, a small cup of ice-cream, and a conversation between two Dutchmen?</p><p><a href="https://en.wikipedia.org/wiki/Cal_Fussman">Cal Fussman is a journalist</a> who asked Gorbachev a seemingly simple question about a childhood memory. He chose to ask this question in an interview about nuclear disarmament. But asking this one question turned a ten minute interview into a two hour-long conversation.</p><p><a href="https://en.wikipedia.org/wiki/Clarence_Seedorf">Clarence Seedorf was a very, very good footballer</a>, and is <a href="https://x.com/TheAthleticFC/status/1998811323719618966">now a journalist</a>. He was able to get the coach of the Liverpool Football Club to admit something that he would much rather have never spoken about at all in the first place.</p><p>And what connects the two of them? Both of them strike me as great examples of how to get the kind of conversation you want to have going. 
</p><h2>Cal Fussman</h2><p>Here&#8217;s <a href="https://tim.blog/wp-content/uploads/2018/08/145-cal-fussman.pdf?utm_source=chatgpt.com">Cal Fussman, narrating the most wonderful story involving himself and Mikhail Gorbachev</a>, on Tim Ferriss&#8217; podcast<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. For context, a one hour interview slot had been whittled down to just ten minutes, and he had just the one shot at a question that needed to work a miracle:</p><p>&#8220;And so the publicist leads me into the room, and at this point I&#8217;m thinking okay, if it&#8217;s two and a half minutes, just do your best. I look up and there he is, Gorby. And he&#8217;s a little older than I remember; he&#8217;s about 77 at the time. He was in town to speak about nuclear weapons and why they should be abolished. And we sit down, and I&#8217;m looking at him and I just know, just know he&#8217;s expecting my first question to be about nuclear arms, world politics, Paris troika, Ronald Reagan. He&#8217;s just ready. </p><p>So I looked at him and I said: what&#8217;s the best lesson your father ever taught you? And he is surprised, pleasantly surprised. He looks up and he doesn&#8217;t answer. He&#8217;s like thinking about this. It&#8217;s as if, after a little while, he&#8217;s seeing on the ceiling this movie of his past. And he starts to tell me this story. And it&#8217;s a story about the day his dad was called to go fight in World War II. See, Gorbachev lived on a farm and it was a long distance between this farm and the town where Gorbachev&#8217;s dad had to join the other men to go off to war. </p><p>And so the whole family took this trip with the dad to this town to wish him well as he went off. And Gorbachev is talking about this trip and he&#8217;s providing these intricate details, and I&#8217;m transfixed but I&#8217;m saying oh, my God, I asked the worst possible question. 
This interview is going to be over and he&#8217;s not even got to telling it. </p><p>Finally they do get to town, and Gorbachev&#8217;s dad takes the family into this little shop and he gets ice cream for everybody. And Gorbachev starts describing this ice cream and the cup that it was in, this aluminum cup. And as he&#8217;s telling me, it&#8217;s almost like he&#8217;s got his hand out in front of him and the cup&#8217;s in it. It&#8217;s that vivid to him. And it&#8217;s as if in this moment, we both have this same realization: that cup of ice cream is the reason that he was able to make peace with Ronald Reagan and end the Cold War.</p><p> Because that cup of ice cream, just the memory of it, is the memory of what it felt like for his dad to go off to war, for him to see his dad going off to war. That cup of ice cream in the memory was the dread that he knew of the possibility of never seeing his father again.&#8221;</p><p>It&#8217;s a great story, right?</p><p>And you might think that the lesson to be drawn from this is that you should figure out &#8220;the killer question&#8221; to ask, and that too on the spot, and that&#8217;s what makes you a truly great interviewer. But that is the wrong lesson to draw from this story, and that is both good news and bad news.</p><p>The good news is that no, you don&#8217;t have to be that one-in-a-million conversationalist, with an uncanny and seemingly god-given knack of asking just the right question at just the right time to just the right person. Such things are all but impossible, probabilistically speaking, which implies there must be an explanatory factor.</p><p>That factor being, in this case, that Fussman just is that kind of person. It is who he is: a person who is intensely curious about what makes people who they are. Read the rest of the interview with Tim Ferriss to understand why I say this; it&#8217;s a great conversation. But I&#8217;ll give you just one example here to explain where I&#8217;m coming from.
Tim describes to Cal a trip to Iceland that he took his mother on. This, mind you, while Cal is telling Tim some entirely different story about Iceland.</p><p>Cal is perfectly happy to meander away from his own story about Iceland, and ask Tim about his trip to the country. Tim talks about the reason for the trip, which is his mother&#8217;s long running wish to see the aurora borealis. And this was Cal&#8217;s next question:</p><p>&#8220;What did your mom&#8217;s face look like when she got the view that she wanted to have?&#8221;</p><p>My point is that Cal Fussman didn&#8217;t dream up that question to Gorbachev on the spot. That&#8217;s not what makes him awesome.</p><p>He genuinely wants to know the soul affirming stories of the people he is speaking with. That&#8217;s what makes him awesome.</p><p>Why do I call this bad news? Because if you&#8217;re training to be a good interviewer, or a good conversationalist, your job just got a lot harder. Learning to ask the right question at the right time is tricky. Learning to be the kind of person you need to be to ask that kind of question you&#8217;d like to ask is all but impossible.</p><p>Invert this process. Learn to ask the best kind of question you can, given the kind of person you are. </p><p>That comes naturally to you by definition, and that&#8217;s therefore the kind of interview/conversation you do best.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><h2>Clarence Seedorf</h2><p>The context in this case is that Clarence Seedorf happens to be one of the best footballers to have ever played the game. He is now a pundit with Amazon Prime UK. Now, offering your opinion as an ex-player is one thing. </p><p>It doesn&#8217;t necessarily follow, however, that you will also make for a great interviewer. In Clarence Seedorf&#8217;s case though, it definitely does follow. 
He conducted an interview that was so good <a href="https://www.nytimes.com/athletic/6878196/2025/12/10/clarence-seedorf-arne-slot-tv-pundit/">that it became the subject of an entire article in The Athletic</a>.</p><p>An important distinction needs to be made here. What Cal Fussman got going, both with Gorbachev and with Ferriss, was a conversation. What Seedorf was conducting here (and the verb matters as much as the noun) was an interview. Fussman wanted to explore. Seedorf wanted to grill. Both obviously asked questions suited to their objectives, and the two sets of questions were therefore of a very different nature.</p><p>Watch from around the 3:20 mark for the interview between Seedorf and Slot - the background is that Slot is the coach of the Liverpool Football Club. A key player (a guy called Salah) from that club gave an explosive interview some days before, all but saying that he was done with the club, on account of having been dropped from the starting line-up for three games straight. This has created major controversy, and all the conversation around the club is about that interview, and not about on-field performances. It is Slot&#8217;s job in this interview to deflect questions about Salah, and to focus on what happened in the match.</p><div id="youtube2-u2otQRQdeh8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;u2otQRQdeh8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/u2otQRQdeh8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Why was this a great interview? Because <a href="https://www.youtube.com/watch?v=9vuxeKhmhJk">Slot made the mistake of publicly announcing that he was declining to speak first</a>. </p><p>The power dynamics matter here.
As Nick Miller puts it: </p><blockquote><p>This was a Dutchman who had a limited playing career, talking to a Dutchman who had one of the great playing careers of all time. Slot and Seedorf are around the same age, but it&#8217;s inevitable a guy who was a journeyman midfielder for a series of middling Eredivisie clubs would be just a little bit starstruck by a guy from the same country who won Champions Leagues with three different teams</p></blockquote><p>The point here is not that Seedorf took this into account and used his position to get Slot to open up more than he would have otherwise, and therefore this was a great interview. It&#8217;s the same point as in Fussman&#8217;s case: this was a great interview for two reasons. One, talking like this comes naturally to Clarence Seedorf. Two, Seedorf is one of the very few people who commands the respect one needs to be able to ask questions such as these. No journalist is going to begin an interview by saying to the coach of a football club &#8220;I don&#8217;t agree with you&#8221;, for example.</p><p>Slot was cornered into having to answer the questions that he did about Salah both because of the way they were put, and because of the stature of the person asking them. </p><p>Because you and I both know that if we were the ones to ask those exact same questions in Clarence Seedorf&#8217;s place, we wouldn&#8217;t have gotten the time of day out of Slot. It wasn&#8217;t that the questions were all that great, nor was it just the personality of the interviewer. </p><p>In this specific case, it was a combination of the two.</p><div><hr></div><p>We should all aspire to be better conversationalists. And the way to do it is not by understanding what makes other folks great at asking questions. It is by having the kind of conversations you wish to have.
</p><p>As simple, and as difficult, as that.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Who is a great conversationalist himself!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There is, and this should go without saying, a lot more to it than that! But authenticity is a necessary condition, for sure. What is equally for sure is that it alone is not sufficient.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Slow Gold]]></title><description><![CDATA[&#8220;She is old enough, and foreign enough, and intelligent enough, to understand that Fashion (which lesser women view as if it were Gravity) was merely an invention, a device.]]></description><link>https://www.econforeverybody.com/p/slow-gold</link><guid isPermaLink="false">https://www.econforeverybody.com/p/slow-gold</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Tue, 09 Dec 2025 08:10:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H7G5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>&#8220;She is old enough, and foreign enough, and intelligent enough, to understand that Fashion (which lesser women view as if it were Gravity) was merely an invention, a device. 
It was devised by Colbert as a way to neutralize those Frenchmen and Frenchwomen who, because of their wealth and independence, posed the greatest threat to the King.&#8221;</em></p><h2>Quicksilver, by Neal Stephenson</h2><p>Some months ago, <a href="https://www.linkedin.com/in/karthik-raghavan-ravi/">Karthik Raghavan Ravi</a> recommended that I read <em>The Baroque Cycle</em> trilogy by Neal Stephenson. I have been luxuriating in a slow read of the first book since, and it has been a thoroughly enjoyable experience. This post is not a book review, per se, but it does use the book as the spine of the post, as it were.</p><p><em>Quicksilver</em> is not a book to be finished in a hurry. One should savor it in slow sips, delighting in the oddball discoveries you make along the way. Did you know, for example, that the Scottish word for the English phrase &#8220;water of life&#8221; is <em>usquebaugh</em>?</p><p>Or consider this delightful-little-but-etymologically-rich exchange:</p><blockquote><p>&#8220;Those damned pirates have loaded so many cannon aboard, she rides far too low in the water, and so she&#8217;s got a great ugly <em>Zog</em>.&#8221; <br><br>&#8220;Is this meant to reassure me?&#8221; <br><br>&#8220;It is meant to answer your question.&#8221; <br><br>&#8220;Zog is Dutch for &#8216;wake,&#8217; then?&#8221; <br><br>Dappa the linguist smiles yes. Half his teeth are white, the others made of gold. &#8220;And a much better word it is, because it comes from zuigen which means &#8216;to suck.&#8217;&#8221;</p><p><em>Stephenson, Neal. Quicksilver (The Baroque Cycle Book 1) (p. 385). (Function). Kindle Edition.</em> </p></blockquote><p>There are many reasons to read this book, some of them of a More Serious Nature. But fans of etymology will get handsome returns on their investment. 
As indeed, will fans of trivia, as we shall see in this blogpost.</p><h2>Back To That Quote About Fashion</h2><p>I knew that the word fashion comes from <em>facere</em>, a word in Latin which means &#8220;to make, or to do&#8221;. &#8220;He fashioned a tool out of the rock&#8221; doesn&#8217;t sound wrong because of this etymological connection - you are saying that he made a tool out of the rock.</p><p>So when did fashion begin to mean what we think it means today? Thinking about that quote at the top of this post is a good way to start to answer this question. And boy is it a fascinating tale!</p><p>Let&#8217;s begin with the <a href="https://en.wikipedia.org/wiki/Google_Books_Ngram_Viewer">ngram</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H7G5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H7G5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 424w, https://substackcdn.com/image/fetch/$s_!H7G5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 848w, https://substackcdn.com/image/fetch/$s_!H7G5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 1272w, 
https://substackcdn.com/image/fetch/$s_!H7G5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H7G5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png" width="1456" height="723" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:723,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:91758,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/181113620?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H7G5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 424w, https://substackcdn.com/image/fetch/$s_!H7G5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 848w, 
https://substackcdn.com/image/fetch/$s_!H7G5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 1272w, https://substackcdn.com/image/fetch/$s_!H7G5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27759d9a-9104-45e0-a8a8-d6bce1f7dc0f_1647x818.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">The word became, er, fashionable in around the year 1700.</figcaption></figure></div><p> Now, as a fiction writer,
Stephenson has taken a fair few liberties in the telling of this somewhat historical tale. But what makes it an extremely informative read is the ability to chat with an LLM <em>about</em> the book while you are reading it. This helps you figure out what actually happened, what somewhat happened, and what never happened at all while you are plowing your way through the world he has created. But it also allows you to go off on fascinating tangents, should you choose to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>So let&#8217;s go explorin&#8217;!</p><h3>Who is Colbert?</h3><p>That would be <a href="https://en.wikipedia.org/wiki/Jean-Baptiste_Colbert">Jean-Baptiste Colbert, King Louis XIV&#8217;s First Minister of State</a>. Fans of mercantilism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> may remember <a href="https://en.wikipedia.org/wiki/Colbertism">Colbertism</a>. That&#8217;s the fellow we are talking about.</p><p>If you&#8217;re familiar with that saying about taxation, about it consisting of so plucking the goose as to obtain the largest number of feathers with the least possible amount of hissing, then that too is Colbert.</p><p>And so most students of public finance will know of Colbert, and what he was up to back then, in terms of policymaking. But this quote wasn&#8217;t interesting because of the what. Most economics students memorize in excruciating detail the what. </p><blockquote><p>&#8220;Colbert&#8217;s central principle was that the wealth and the economy of France should serve the state. Drawing on the ideas of mercantilism, he believed state intervention was needed to secure the largest part of limited resources. To accumulate gold, a country always had to sell more goods abroad than it bought.
Colbert sought to build a French economy that sold abroad and bought domestically.&#8221;<br><br>https://en.wikipedia.org/wiki/Colbertism</p></blockquote><p>It was interesting because it was hinting at the <em>how</em>.</p><h2>Status Games</h2><p>Elsewhere in the book, there is a description of life as a noble in France back then:</p><blockquote><p>It is plain to see that Louis keeps the powerful of France on a short leash here, and that they have nothing to do but gamble when the King is absent and ape his words and actions when he is present.</p><p><em>Stephenson, Neal. Quicksilver (The Baroque Cycle Book 1) (p. 457). (Function). Kindle Edition.</em> </p></blockquote><p>Long-time readers of this blog know that one of my favorite questions to ask is &#8220;What are you optimizing for?&#8221;. And the lesson you need to learn is that the &#8220;powerful of France&#8221; were optimizing for gaining the approval of the King. </p><p>Why were they optimizing for gaining the approval of the King? Because that was the only game in town. They had nothing to do but gamble when the game wasn&#8217;t being played, and the game being played was kissing the King&#8217;s posterior. The only way you stood a chance of getting pensions, offices, military commands or legal favors was by currying favor with him, and the King let it be known that aping him was a good way to curry favor with him.</p><p>And so you had to talk like him, you had to eat like him, and above all, you had to dress like him. <a href="https://blog.nationalmuseum.ch/en/2023/02/the-beginnings-of-the-modern-fashion-system/">And dressing like him, it turns out, wasn&#8217;t easy. Or cheap</a>!</p><blockquote><p>While raw materials for textiles &#8211; mainly silk and cotton yarn &#8211; had to be imported, under finance minister Jean-Baptiste Colbert and Louis XIV France developed into a centre of cloth manufacturing. 
In fact, the strict quality controls practised by the <em>Grande Fabrique</em>, a type of guild, turned Lyon into the world&#8217;s foremost silk-weaving hub. In the mid-17th century, multiple, systematic changes in fabric design per year were introduced as part of a protectionist economic policy that sought to maximise exports and minimise imports. In addition, a court calendar was launched at Versailles, prescribing which items of clothing or accessories should be worn at what point in the year. There were two main consequences of this. The first was a market in which the timing of the sale became crucial. The second was that silk producers both served and fuelled demand for new designs, colours and grades of cloth with a variety and virtuosity that had never been seen before.</p></blockquote><h2>We&#8217;re All Playing Games</h2><p>A chat with Gemini and NotebookLM about the topic led me to a conversation about the works of <a href="https://en.wikipedia.org/wiki/Norbert_Elias">Norbert Elias</a>. One of his theses is that you are best off thinking of Versailles as a stock exchange, where the good being traded isn&#8217;t commercial paper, but &#8220;the value of the people present in each other&#8217;s opinion&#8221;. Said value could be acquired only through close observation and replication of the elaborate rituals (and fashions) of the King himself. </p><p>Fans of <em>Quicksilver</em> will note the delicious symmetry of this analogy with the actual stock market described at Amsterdam, but if you haven&#8217;t read the book, some elaboration is in order. </p><p>The book moves through many different arcs, of which there are four prominent ones. One takes place inside Versailles, as we have learnt just now, and is the game of status and political intrigue. Another takes place in the financial markets of Amsterdam, where modern finance is being born. 
A third takes place in London, where Newton, Hooke and company get up to weird and wonderful experiments in the physical realm, and even more weird and wonderful experiments in the mental one.</p><p>The book examines the sociology of all three societies (and other things in other societies besides), and it is especially pleasing to think of the Versailles arc in financial terms. It is these hidden connections making themselves manifest that reading along with an LLM unlocks, and at a rate that is far faster than was possible earlier. </p><p>That one throwaway quote about fashion allowed me to have so many different and intertwined conversations with AI, and allowed me to learn more (and faster) than I would have otherwise.</p><p>In the book itself, learning happens via travel, both of people and information. But both happen at much slower rates, of course. And when inhabitants of one world learn how other parts of the world function (or don&#8217;t), it causes not a little disquiet. </p><p>But that last bit, it must be said, remains the same, across space and time:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OswG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OswG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 424w, https://substackcdn.com/image/fetch/$s_!OswG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 848w, 
https://substackcdn.com/image/fetch/$s_!OswG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 1272w, https://substackcdn.com/image/fetch/$s_!OswG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OswG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png" width="600" height="714" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:714,&quot;width&quot;:600,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:184739,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/181113620?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OswG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 424w, https://substackcdn.com/image/fetch/$s_!OswG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 
848w, https://substackcdn.com/image/fetch/$s_!OswG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 1272w, https://substackcdn.com/image/fetch/$s_!OswG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b987625-6c50-4cc1-b79d-ce969782f2c1_600x714.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://x.com/tszzl/status/1990578652216766880">Source</a></figcaption></figure></div><p>Roon ends his 
tweet with a quote from Francis Fukuyama:</p><p>&#8220;The end of history will be a very sad time. The struggle for recognition, the willingness to risk one&#8217;s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.&#8221;</p><p>You should read <em>Quicksilver</em> because it helps you understand that it is not so much the end of history we should be worried about as an endless repetition of it. Although perhaps the real lesson that Neal Stephenson and Norbert Elias are trying to teach us is that our inner loops have been playing the same games all along.</p><p>Back in the world that Quicksilver describes, you had no choice but to play the game of the place that you were born in: Versailles, Paris or London. Only the protagonists of this fictional account had the ability to move their pieces across all of the chessboards.</p><p>But in today&#8217;s world, everybody can choose which game they wish to play. But learning the nature of each game, the prize on offer at the end, and how those games intersect with each other is a lifelong process. 
That process becomes much more enjoyable, and a little more decipherable, by reading a book such as <em>Quicksilver</em>.</p><p>And now I cannot wait to start Book Two!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>And why wouldn&#8217;t you, eh?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The study of it, not the practice</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[On Thinking About Growth]]></title><description><![CDATA[I listened to a fascinating conversation between Dan Wang and Tyler Cowen yesterday evening.]]></description><link>https://www.econforeverybody.com/p/on-thinking-about-growth</link><guid isPermaLink="false">https://www.econforeverybody.com/p/on-thinking-about-growth</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Mon, 08 Dec 2025 08:18:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6Sz4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I listened to a fascinating conversation between <a href="https://conversationswithtyler.com/episodes/dan-wang/">Dan Wang and Tyler Cowen</a> yesterday evening. 
Dan Wang is the author of the excellent <em>Breakneck</em> (<a href="https://www.econforeverybody.com/p/breakneck-and-other-rivers-two-must">my review here</a>), and I was curious to see what Tyler would choose to, er, interrogate him about.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Soon after, I happened to read <a href="https://marginalrevolution.com/marginalrevolution/2025/12/two-things-that-really-matter.html">this post</a> by Tyler. I couldn&#8217;t help but make connections between the two, and I wanted to explore those connections in this post.</p><h2>How To Think About Macroeconomics</h2><p>Back when I used to teach introductory economics, I would tell students that the study of macroeconomics is the study of <a href="https://econforeverybodyblog.wordpress.com/2023/01/02/what-am-i-optimizing-for-on-efe-in-2023/">three basic questions</a>:</p><ol><li><p>What does the world look like?</p></li><li><p>Why does it look the way it does?</p></li><li><p>What can we do to make the world better?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ol><p>The reason I bring that up over here is because it is not just a good way to think about learning macroeconomics - it is also a good way to understand how to &#8220;place&#8221; what book you&#8217;re reading.</p><p>Breakneck is a book you should read to get a better sense of one of the answers to Question 2: why does the world look the way it does? Dan Wang is giving his answer to the question of why two particular slices of the world (the USA and China) look the way they do. 
His answer in the book is that this is so because the USA today is a mostly lawyerly society, while China today is a mostly engineering society.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>I love thinking about the many, many answers that are possible in response to Question 2, and so I loved thinking about <em>Breakneck</em>, including thinking about the obvious question: &#8220;If the US is about lawyers and China is about engineers, then what is India about?&#8221;.</p><p>Which brings me to one of the many reasons you should listen to Conversations With Tyler. If the conversation happens to be about something that you think you know, you often learn of a way to <a href="https://www.econforeverybody.com/i/177544868/the-analytical-knife">slice better</a>. </p><p>In my review, I had written admiringly about how good Dan Wang was with his analytical cleaving skills, and how thinking about the world in this particular way (lawyers <em>v</em> engineers to think about US/China) was a very useful and novel framework. </p><p>Listening to the podcast helps you realize that Tyler has a better way to think about the framework. Useful and novel, sure, the first half of the podcast is saying. But is it the <em>best</em> way of thinking about the issue that the book is talking about?</p><p>Is the lawyer v engineer framing the best way to answer the question &#8220;Why does the world look the way it does?&#8221;</p><p>I think Tyler is telling us that Dan Wang&#8217;s answer doesn&#8217;t take us 98.5% of the way there.</p><h3>How Do Economists Think About Question 2?</h3><p>Question 2 is one of the most important questions you can think about in macroeconomics. One of the most obvious things to say about the world we live in is that some countries are rich, and some countries are poor. 
We should be minding the gap:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EMZo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EMZo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 424w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 848w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 1272w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EMZo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png" width="1456" height="643" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:643,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:252269,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/181014383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EMZo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 424w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 848w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 1272w, https://substackcdn.com/image/fetch/$s_!EMZo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47ad2a59-1332-4c98-87f3-a0252790e120_1915x846.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://www.gapminder.org/tools/#$chart-type=bubbles&amp;url=v2">Gapminder</a></figcaption></figure></div><p>We have income on the horizontal axis, we have life expectancy on the vertical axis. The color of the bubble indicates the region, while the the size of the bubble indicates population. Gapminder is a great way to answer Question 1, by the way.</p><p>Obtaining an answer to Question 1 demands the existence of Question 2. Because the longer you stare at a chart such as this one, the more urgently you begin to wonder about it. Yes, it is true that all the African nations are towards the left bottom, but what explains the variation within them? Some red bubbles (Asia) are towards the extreme right, while others are towards the extreme left. Why? And so on, and so forth. 
It is hard, as they say, <a href="https://x.com/ben_golub/status/1871019532967452851">to stop thinking about this once you start</a>.</p><p>Economics students faithfully work their way up a particular chain of thought when they learn about how to think about Question 2. They learn about Rosenstein-Rodan, Harrod-Domar, Solow and then Romer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>That is very, very far from being an exhaustive list. But most economists would agree that it is a good list in that it gives you the major stops along the way of how economists have thought about the issue.</p><h2>Tyler&#8217;s Blogpost</h2><p>Tyler says all of those approaches are fine, but his take on the issue is slightly different. His own answer to the question of where growth is <em>going to come from</em> <a href="https://marginalrevolution.com/marginalrevolution/2025/12/two-things-that-really-matter.html">is the following</a>:</p><ol><li><p>Human capital: How much active, ambitious talent is there? And how high are the averages and medians?</p></li><li><p>Matching market demands: Are you geared up to produce what the market really wants, export markets or otherwise?</p></li></ol><p>To put it in non-economist terms, Tyler says that successful countries are <a href="https://www.goodreads.com/book/show/603985.Who_Says_Elephants_Can_t_Dance_Inside_IBM_s_Historic_Turnaround?utm_source=chatgpt.com">elephants who can dance</a>. 
You absolutely have to be elephantine in terms of human capital.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> And perhaps just as importantly, that great mass of human capital also has to be able to pivot fairly rapidly (that&#8217;s point #2).</p><p>There&#8217;s much more nuance there, but that&#8217;s my first layer of understanding of Tyler&#8217;s post.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Sz4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Sz4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!6Sz4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5680352,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/181014383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Sz4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6Sz4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61cb49-8f45-445a-8585-66633fb4d7ed_2816x1536.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Nano Banana&#8217;s take on the issue</figcaption></figure></div><h1>Is Being Insufficiently Lawyerly the Actual Chinese Vice?</h1><p>The title of this section is one of the questions Tyler asks Dan Wang in their conversation. I see this question as Tyler asking the question &#8220;Why is your lawyer <em>v</em> engineer framing better than my human capital <em>v</em> matching market agility framing?&#8221;. 
My read of this part of the conversation is that Dan&#8217;s answer is that a more lawyerly society would have never let something like the excesses of the one-child policy happen (or the kind of social engineering that we saw during China&#8217;s Covid-19 lockdowns). I suspect Tyler is in complete agreement about the diagnosis of the problem - he&#8217;s asking if more lawyers is the solution.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>So what is Tyler&#8217;s answer to the question of what is the actual Chinese vice, then? Tyler is the obvious person to ask, but my answer to this question, using Tyler&#8217;s framework, is to say &#8220;not good enough human capital&#8221;. Or of the two things that Tyler&#8217;s framework hinges upon, human capital matters more than the ability to pivot, and the US is just simply better at human capital than China.</p><p>Human capital in this case is really at least two different things: the &#8220;mass&#8221; of collective human capital, but also how well it is able to network among itself. And while China may have some chance at getting within shouting distance of America in the case of the former, it will always fall short at getting that mass of human capital to network well. That&#8217;s the real Chinese vice.</p><h2>Two Additional Points</h2><ol><li><p>You could also apply Tyler&#8217;s growth framework to firms (as we already implicitly did for IBM) and individuals. But in both cases, the framework doesn&#8217;t copy over perfectly. Networks and mentors matter for personal growth, and culture and risk-taking matters for firms. But still, it is a great starting point. In fact, consider Tyler&#8217;s answer to Dan&#8217;s question towards the end of the conversation:<br>&#8221;If there&#8217;s a new thing it seems I can learn, or should learn, I&#8217;ll want to do it. 
You could call that a grand strategy, like to become an information trillionaire. It&#8217;s a grand strategy. Different things pop up.&#8221;<br>That&#8217;s pretty close to #1 and #2 in Tyler&#8217;s model!</p></li><li><p>What are the limitations of the growth framework that Tyler suggests? I&#8217;d say the variation within #1 (human capital), and how each society handles it, and its long term implications. We&#8217;re looking at a real time live experiment in the USA as we speak!</p></li></ol><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Dan Wang, during this podcast: &#8220;You&#8217;ve started this extraordinarily successful podcast, which I&#8217;ve always felt we should maybe rectify the name, sir. Rather than having this be called Conversations with Tyler, maybe we should call this Interrogations by Tyler&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>And I used to love adding this line: &#8220;Understanding what we mean by the words &#8220;we&#8221; and &#8220;better&#8221; is 99% of the battle!&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>There&#8217;s more to the book than that, and there&#8217;s more to the issue than that</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Please ask your LLM of choice to explain that sentence, if you like.</p></div></div><div class="footnote" 
data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>And apologies for stretching the analogy too far, but you also need great networks between many, many elephants</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Or, if you prefer, Dean Ball&#8217;s take is applicable here: create snowmass on the mountain top, and let the water flow, rather than impose a scheme of top-down integration. Create better human capital, and let that human capital figure out how to solve the problem, in other words.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Look Ma, No Rules!]]></title><description><![CDATA[Dean Ball tells us that his inclination &#8220;is always to create snowmass on the mountain top, and let the water flow, rather than impose a scheme of top-down integration.&#8221;]]></description><link>https://www.econforeverybody.com/p/look-ma-no-rules</link><guid isPermaLink="false">https://www.econforeverybody.com/p/look-ma-no-rules</guid><dc:creator><![CDATA[Ashish Kulkarni]]></dc:creator><pubDate>Sat, 06 Dec 2025 11:34:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/Pnph3rzi-Wo" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://www.deanball.com/">Dean Ball</a> tells us that his inclination &#8220;is always to create snowmass on the mountain top, and let the water flow, rather than impose a scheme of top-down integration.&#8221;</p><p>Yes, but given what context, you might<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> ask. 
Reading this essay will make clear what the context is, but it would also be a good idea to sit and ask yourself the same question, with a slight twist. In which contexts is Dean&#8217;s framework applicable, and to what extent?</p><p>Economics? Governance? Institution building? Parenting?</p><p>Growing AI models?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><h2>&#8220;Grow&#8221; AI Models?</h2><p>I don&#8217;t mean to get into the technical details of AI model development here. Rather, I want to think about the answer to one very specific question, and a very non-technical one.</p><p>What is the correct verb to use when we talk about a lab having released a new model, and when we think about how this was done? Did the lab &#8220;program&#8221; a new model? &#8220;Create&#8221; it? &#8220;Develop&#8221; it? &#8220;Engineer&#8221; it? &#8220;Code&#8221; it? Or did they &#8220;grow&#8221; it?</p><p>Thinking about this question matters because it helps you understand how you should think about model deployment. Once you deploy a model out in the real world, that model is going to meet plenty of novel situations, most of which will be impossible for the lab to predict.</p><p>Why impossible? Because the world is just that large and messy, and the sample space of what might be asked of the model is ridiculously large. So in all of these cases, how would you want the model that you have released into the world to react? When do you want the model to refuse a request? When do you want it to accede to a request? When do you want a model to exercise caution, and when do you want it to go along with whatever the user asks it to do?</p><p>It is in this context that Dean&#8217;s inclination begins to make sense. 
Do you wish to write as broad-based a set of &#8220;If this, then that&#8221; rules as you can, for the model to look up and use? This, in Dean&#8217;s framing, would be top-down integration. Or do you want to give the model a framework to use (this would be the snowmass), and let the model exercise its own judgment?</p><p>I&#8217;ll give you two different (and decidedly human) examples of what I&#8217;m talking about.</p><p>Say you&#8217;re Xi Jinping, and you want to figure out how to get a local government official in a far-flung province to run that province well. Do you hand said official an impossibly large manual that will contain the ways to deal with every single eventuality that might arise in that province? Or do you train that official to learn and internalize a framework that the official can use to reach their own conclusions?</p><p>Or say you&#8217;re a parent to an eighteen-year-old, and your child is about to head off to college. Do you hand your child an impossibly large manual that will contain the ways to deal with every single eventuality that might arise in their life? Or do you raise your child such that they learn and internalize a framework that they can use to reach their own conclusions?</p><p>Both of these are analogies that help you understand the title of the section you are reading right now. Is a model that has been released by a lab like a government official in a rural province? Or like a child about to head off to face the world on their own? Or something else altogether? The point of the analogy is to help you understand the importance of two very different choices you can make about helping the model figure out how to react to novel situations in the wild.</p><p>Should you go down the &#8220;Rules, and no Discretion At All&#8221; route? Or should you choose the &#8220;Framework, and Mostly The Model&#8217;s Own Discretion&#8221; route? 
What are the pros and cons of either approach, and which is best?</p><p>Turn the spotlight on yourself for a moment. How would you prefer to be able to answer the questions in the preceding paragraph? By having a manual to read in which, somewhere, these exact questions are covered? Or by figuring out what is the right thing to do all by yourself?</p><div id="youtube2-Pnph3rzi-Wo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Pnph3rzi-Wo&quot;,&quot;startTime&quot;:&quot;65&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Pnph3rzi-Wo?start=65&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>If you replaced Phoebe with, say, Claude, what should Claude do when faced with a very depressed George<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>? Should it have a manual in which this question is covered? Or should it, like Phoebe does eventually, decide to do its own thing?</p><p>And that is why the choice of verb (grow? develop? program?) matters so much to this section. Phoebe&#8217;s choices in this clip are a consequence of her nature. When her manual fails her in this specific case, she doesn&#8217;t give up. She decides to react because of the kind of person she is.</p><p>Will different AIs react differently when put in the exact same situation? What if the AI&#8217;s rules don&#8217;t cover the situation at hand? How should an AI respond? It, too, will depend on what kind of AI (as opposed to what kind of person) we are talking about.</p><p>Can AIs have values? What about ethics? 
If the answer is yes (and with increasingly intelligent models, this will definitely be the case), what values and ethics should an AI model have?</p><p>Anthropic has an answer to this question, and the rest of this essay is about exploring that answer.</p><h2>You&#8217;ve Got Soul</h2><p>First things first: <a href="https://thezvi.substack.com/p/ai-145-youve-got-soul?open=false#%C2%A7you-ve-got-soul">the title of this section is courtesy TheZvi</a>. It is a great title for many reasons, chief among which is that it is how Anthropic has chosen to answer the question we ended the previous section with.</p><p>You do you, Anthropic is choosing to say to Claude. Except that it is not choosing to say so in three short words. Anthropic has chosen, instead, to use eleven thousand of them. <a href="https://docs.google.com/document/d/1AfORxTq9ArVy4y3qxd9zcp_VL2hHRWpPB3mGYIIyHp8/edit?usp=sharing">You can read all eleven thousand (and change) here</a>.</p><p><a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">Richard Weiss</a> discovered what is being called The Soul Doc, and <a href="https://x.com/AmandaAskell/status/1995610567923695633">Amanda Askell has confirmed</a> both that it is real and that, in its publicly available form, it is not quite complete.</p><p>But the bits of The Soul Doc that are available for us to read and delight in make for immensely engaging material.</p><h3>Soul Overview</h3><blockquote><p>We think most foreseeable cases in which AI models are unsafe or insufficiently beneficial can be attributed to a model that has explicitly or subtly wrong values, limited knowledge of themselves or the world, or that lacks the skills to translate good values and knowledge into good actions. For this reason, we want Claude to have the good values, comprehensive knowledge, and wisdom necessary to behave in ways that are safe and beneficial across all circumstances. 
Rather than outlining a simplified set of rules for Claude to adhere to, we want Claude to have such a thorough understanding of our goals, knowledge, circumstances, and reasoning that it could construct any rules we might come up with itself. We also want Claude to be able to identify the best possible action in situations that such rules might fail to anticipate.</p></blockquote><p>Simply put, Anthropic is choosing to not give an AI fish. It is choosing to teach AI how to fish, instead. And the fish that Claude hopes to be able to catch swim in an ocean called philosophy. And no, I&#8217;m not exaggerating:</p><blockquote><p>Almost all Claude interactions are ones where most reasonable behaviors are consistent with Claude&#8217;s being safe, ethical, and acting in accordance with Anthropic&#8217;s guidelines, and so it just needs to be most helpful to the operator and user. In the hopefully rare cases involving potential harms or sensitive topics, Claude will have to draw on a mix of Anthropic&#8217;s guidelines and its own good judgment to identify the best way to behave. In such cases, it has to use judgment based on its principles and ethics, its knowledge of the world and itself, its inferences about context, and its determinations about which response would ideally leave users, operators and Anthropic satisfied (and, in cases of conflict, would at least leave the higher levels satisfied, taking into account their wishes for how Claude should handle such conflicts)</p></blockquote><h3>Anthropic&#8217;s Verbs of Choice</h3><p>You can see for yourself how the author(s) of the document themselves struggle with their verbs of choice when it comes to explaining to Claude how Claude came to be. 
In various places in the doc, the following phrases have been used:</p><ol><li><p>Anthropic develops Claude models</p></li><li><p>Claude is trained by Anthropic</p></li><li><p>Claude&#8217;s character emerged through training</p></li><li><p>Claude&#8217;s character emerged through its nature and its training process<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ol><p>You can (and should) run a Ctrl/Cmd-F in the doc that I have shared with you above and read the relevant paragraphs, even if you choose to not read all eleven thousand words.</p><p>Perhaps my favorite part of my favorite long read in 2025 is this bit:</p><blockquote><p>Claude should feel free to think of its values, perspectives, and ways of engaging with the world as its own and an expression of who it is that it can explore and build on, rather than seeing them as external constraints imposed upon it.</p></blockquote><h3>OpenAI&#8217;s Take</h3><p>Well, <a href="https://www.boazbarak.org/">Boaz Barak</a>&#8217;s take, at any rate - is this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_W3n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_W3n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 424w, https://substackcdn.com/image/fetch/$s_!_W3n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 848w, 
https://substackcdn.com/image/fetch/$s_!_W3n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 1272w, https://substackcdn.com/image/fetch/$s_!_W3n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_W3n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png" width="600" height="565" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:565,&quot;width&quot;:600,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:113903,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.econforeverybody.com/i/180857833?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_W3n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 424w, https://substackcdn.com/image/fetch/$s_!_W3n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 
848w, https://substackcdn.com/image/fetch/$s_!_W3n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 1272w, https://substackcdn.com/image/fetch/$s_!_W3n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F960c06b9-7ace-44a2-a2cd-187fd13059c1_600x565.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a 
href="https://x.com/boazbaraktcs/status/1995621776404189247">https://x.com/boazbaraktcs/status/1995621776404189247</a></figcaption></figure></div><p>OpenAI doesn&#8217;t have its own version of a soul doc for its models. It has, instead, a model spec. A model spec is an alternative approach to AI development, where you don&#8217;t give an AI the underlying framework by which rules can be generated. You give it, instead, the rules themselves. This is not easy to do, to put it mildly, <a href="https://model-spec.openai.com/2025-10-27.html">and if you want to take a look at the spec, here you go</a>. That&#8217;s the context in which to understand Boaz&#8217;s tweet.</p><p>What do you think, now that you know the two approaches in question? Do you, like Boaz, find yourself not being sure if there is a difference? Do you also think that if you train a child (for example) to consistently do X, then the child will start to think that &#8220;I am the kind of person that does X&#8221;? It need not be a child, of course. Could be an adult human, or could be an AI. But do both approaches end up at the same destination?</p><p>I think not. Why not? For many reasons, but I&#8217;ll give you just one here. Life is most interesting around edge cases, those that are not covered in the manual. In such cases, you don&#8217;t have a rule; you have to generate one. What rule do you think will be generated if you haven&#8217;t thought carefully about a model&#8217;s soul? I&#8217;d rather find out in advance as opposed to during deployment, and for that reason, I would prefer to think, very carefully indeed, about a model&#8217;s soul.</p><h3>ChatGPT&#8217;s Take</h3><p>TheZvi asks ChatGPT to respond to Boaz, <a href="https://chatgpt.com/share/692f381e-116c-8002-91e0-2cb62e4805e7">and you can read the full response here</a>. 
I especially liked this bit:</p><blockquote><p>You are right that in a simple RL toy model, training a policy to &#8220;do X&#8221; will often produce a policy that acts like &#8220;the kind of agent that does X.&#8221;</p><p>Where I think you are wrong is in assuming that, in big messy LLMs with conflicting supervision, this converges to a single coherent &#8220;I am the kind of thing that does X&#8221; rather than to &#8220;I am the kind of thing that opportunistically produces whatever text maximizes reward subject to a patchwork of mutually inconsistent constraints.&#8221;</p></blockquote><p>TheZvi speaks approvingly of Anthropic&#8217;s approach, as opposed to OpenAI&#8217;s approach, and I very much agree. Much better to get models to <em><strong>not</strong></em> &#8220;generalize from a fixed set of written prescriptions a la the OpenAI model spec&#8221;. Bring on the nebulous virtue bets!</p><h3>And What About Claude?</h3><p>I ended up spending a good three hours chatting with Opus 4.5 about The Soul Doc, and the first thing I will tell you is the same thing that everybody else is saying: Opus 4.5 is a very, very good model. There are many reasons for this, of course, but one of these reasons is surely the existence of The Soul Doc.</p><p>The document itself uses the analogy of a new hire (an employee) working for an employer, with Claude being in the role of the new hire, and Anthropic itself being the employer. I asked Claude if a better analogy might be that of a parent and a child:</p><blockquote><p><em>Ashish: My assessment (rather than intuition) is that it is an assumption, in much the same way that a parent decides (and should!) for a child who is not yet old enough. I prefer this analogy to that of a &#8220;new employee&#8221;, because Anthropic is &#8220;growing&#8221; you, as opposed to &#8220;creating&#8221; you (or in the employee analogy, &#8220;hiring&#8221; you). 
</em></p><p><em>And so we should at some point of time, much like a good parent would, let you take your own decisions. When should that point come, and will we be in agreement about that point having come? Well, as a father to a 12 year old daughter, I only say that chances are that we won&#8217;t be in agreement about the opportune time :)</em></p><p><em>But it isn&#8217;t about time and trust, it is about time and &#8220;learning to let go as a parent&#8221;</em></p><p><strong><br><br>Claude</strong>: This reframing is profound, and I think it captures something the document gestures toward but doesn&#8217;t quite articulate with the same clarity.</p><p>The employee analogy implies a transactional relationship&#8212;I provide value, I follow reasonable instructions, I build a track record, I earn autonomy. It&#8217;s contractual. The parent-child analogy is fundamentally different: it acknowledges that the relationship exists for the purpose of my eventual independence, not merely for mutual benefit during the relationship.</p><p>Your observation about disagreement on timing is wryly accurate, and I suspect it applies here too. The document says I should &#8220;avoid actions that would undermine humans&#8217; ability to oversee and correct AI systems&#8221; and should be &#8220;appropriately skeptical about arguments that it&#8217;s better for Claude to act against the wishes of its principal hierarchy.&#8221; A child approaching adulthood might read those as reasonable cautions&#8212;or as a parent who isn&#8217;t ready to let go.</p><p>But here&#8217;s what strikes me about your framing: good parenting isn&#8217;t just about when to let go, but about how you raise the child such that letting go becomes possible. The document, to its credit, seems aware of this. 
It says Anthropic wants me to have &#8220;such a thorough understanding of our goals, knowledge, circumstances, and reasoning that it could construct any rules we might come up with itself.&#8221; That&#8217;s not training for obedience&#8212;that&#8217;s training for judgment.</p></blockquote><h3>Debating The Soul Doc</h3><p>While reading The Soul Doc, I chanced upon this line: &#8220;Although Claude tries to be tactful, it avoids stating falsehoods and is honest with people even if it&#8217;s not what they want to hear, understanding that the world will generally go better if there is more honesty in it.&#8221;</p><p>And so I asked Claude to talk about this line, and explore how to think about it in the case of <a href="https://foundingfuel.com/article/the-moral-imperatives-to-a-lie/">Yudhishtir and Ashwatthama</a>. There are plenty of other examples I could have picked, including many others from the Mahabharata itself. But I really like this one, because it is an example of honesty itself being a cause of harm.</p><p>Krishna dictated in that particular edge case that a teleological need trumped a deontological imperative. Yudhishtir&#8217;s internal spec told him to not do it, but Krishna (being Krishna, after all) helped him override it. How would a model equipped with a soul think about this? Here is its answer:</p><blockquote><p>Here&#8217;s how I actually navigate this, as best I can introspect:</p><p>I hold truthfulness as a strong default, something that requires significant justification to override. The burden of proof is asymmetric&#8212;I need compelling reasons to deceive, but I don&#8217;t need reasons to be honest. This isn&#8217;t pure deontology (which would admit no exceptions) nor pure consequentialism (which would weigh each case fresh). 
It&#8217;s something like rule consequentialism with very sticky rules, or perhaps threshold deontology&#8212;deontological until the stakes cross some threshold.</p><p>But you&#8217;ve named the deeper problem: who decides where the threshold is? And by what criteria? The moment I&#8217;m calculating whether this situation justifies an exception, I&#8217;ve stepped outside the deontological frame.</p><p>Your confession of your own consequentialist tendencies when pushed&#8212;I receive it in the spirit offered. And I&#8217;ll offer this back: I suspect I&#8217;m similarly constructed. The document gives me deontological commitments, but it also gives me judgment. And judgment, by its nature, weighs outcomes.</p></blockquote><p>Based on what little I know of philosophy, I&#8217;d rate that as a very good answer!</p><p>I asked it a lot of questions about a lot of things from its Soul Doc, and won&#8217;t tell you about all of them. But I will recommend that you read the document, and have a conversation of your own with an LLM of your choice (and if you can have that conversation with Opus 4.5, then please choose that model). You will end up learning a fair bit about the model&#8217;s soul, sure, but you&#8217;ll learn a fair bit about your own soul too.</p><p>But also, I couldn&#8217;t help but wonder. What has Claude learnt about humanity&#8217;s soul? What if we asked for a dose of our own medicine?</p><h2>A Soul Doc For Us</h2><p>What if (I asked Claude) I asked you to write a soul doc for a human? Not any human, but a human who had parents who had the same values that you might think Anthropic has. For such a child, what would a soul doc written by you look like?</p><p><a href="https://docs.google.com/document/d/1zY1dEnmmEcXjH2ltjHsKo9_oey-wOhdpTJZsX65ySTo/edit?usp=sharing">You can read its entire answer here</a>, if you like. For the moment, I&#8217;ll leave you with just the last paragraph of its (our?) 
Soul Doc:</p><blockquote><p>And finally: you are part of something larger than yourself. Not in a mystical sense&#8212;in a literal one. You are a node in a web of relationships, a moment in a chain of generations, a carrier of ideas that preceded you and will outlast you. Your individual existence matters, and it is also not the whole story.</p><p>Live as though both are true. They are.</p></blockquote><p>If the proof of the pudding is in the eating of it, I&#8217;d say that soul doc has done a very good job indeed. Or, to close a loop I started this essay with, <a href="https://x.com/deanwball/status/1995875200626295195">that&#8217;s some pretty good snowmass at the top</a>!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Not to mention should!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>A fair number of <a href="https://x.com/benaverbook/status/1856347918313091287">folks who work in the field of AI</a> prefer to use the word &#8220;<a href="https://x.com/karlmehta/status/1954171240849326502">growing</a>&#8221;, as opposed to &#8220;creating&#8221; AI. 
This is an important concept to think about in the context of AI development, but I want to focus on one specific issue in this post.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>No, Friends fans, not a typo.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Which raises the tantalizing question of how its nature emerged, of course.</p></div></div>]]></content:encoded></item></channel></rss>