Trust, Integration and Taste: The Holy Trinity of Human Careers in The Age of AI
Here's a simple question for you to think about, and please answer it before you go on to read the rest of this post:
How much better and cheaper will the paid-for modal AI model be in, say, 2030, compared to today?
When I say "paid-for modal", what I mean is the "not free" and the "most popular" model in 2030. Let's say that model today is the 4o model from ChatGPT.
There are many ways to answer this question, and none of them is going to be 100% correct. AI pessimists will have a different answer from AI optimists, and while everybody reading this has thought at least informally about this question, your ability to answer it depends on your level of expertise and familiarity with it. And that's fine; the point is not accuracy, the point is for you to have a definitive answer to this question.
Me, personally, I'd say about 100 times better than what we have now, at, say, half of what it costs today. Squeal with outrage at my guesstimate all you like, and I might squeal with outrage at yours (so there). All I'm asking you to do is have your own estimate. You don't need to defend it, explain it, or change it. But you do need to have one ready.
Got it? OK, great. Now tell me: what does the existence of this model in 2030 mean for employment prospects in 2030?
How does one think about coming up with the answer to this question?
The Framework Matters, Not the Answer
Note that the question I have asked has been worded very carefully. I do not think it is possible to come up with "the" answer to this question, because we are talking about a five-year horizon into the future, and about a technology that updates monthly, if not weekly, and does so at a significant rate of improvement.
But it is not the answer I am interested in wrestling with. I am interested in wrestling with the framework we should be using to answer this question.
And one possible framework was suggested in a nice write-up in The New York Times, titled "22 New Jobs A.I. Could Give You", by Robert Capps. I liked the article for a variety of reasons, but chief among these was the fact that Robert Capps takes as given that AI deployment will be far and wide by 2030. I happen to agree with that hypothesis, in part because of my answer to the question we started today's post with.
But now the question becomes this one: in such a scenario, what is likely to increase demand for human inputs?
Capps gives us three answers, which I'm going to dub the holy trinity of job searches in the age of AI:
Trust
Integration
Taste
Trust, Integration and Taste
Trust
It is not going to be enough, Capps says, to just deploy AI.
This is true for a number of reasons. First, what Ethan Mollick refers to as the jagged frontier of AI. AI capabilities don't expand and improve smoothly, and there are a lot of reports of some capabilities actually becoming worse with model updates (while others improve, of course). It would be reasonable to expect this trend to continue, so it will not be a case of deploying the new model and kicking back on the sofa. Someone has to pay close attention to what is going on: what is working and what is not, what got broken and needs fixing, and what might break and should therefore be dealt with ASAP.
When I teach statistics, I often tell my students that their most underrated skill as a statistician is being able to speak the English language well. If you tell me, for example, that you've run an experiment and rejected the null because the p-value is 0.003, I'll be (mostly) fine with it. But that's because I speak Statisticese.
Not everybody does! And so you have to, as a statistician, also be able to translate your work so that ordinary mortals can understand just what the hell is going on. This will be applicable to a lot of the things that AI will do in the future as well - you will need, Capps says, AI translators, just like you need statistics translators today. Statistics translators also "do" statistics, whereas AI translators will leave the doing to AIs entirely. But the job of translating into human-speak goes up in value, especially when done well. Or so we hope, of course.
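To make the translator's job concrete, here is a minimal, illustrative Python sketch. It is my own example, not anything from Capps's article: it computes a p-value with a simple permutation test, and then renders the verdict in plain English instead of Statisticese. The phrasings are one possible translation, not the canonical one.

```python
import random
import statistics

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        # Re-split the shuffled data and see how often chance alone
        # produces a difference at least as large as the observed one.
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_iter

def translate(p, alpha=0.05):
    """Turn 'we rejected the null at p = ...' into ordinary English."""
    if p < alpha:
        return (f"If there were truly no difference, a result this extreme "
                f"would show up only about {p:.1%} of the time, so the data "
                f"are hard to square with 'no difference'.")
    return (f"A result this extreme would show up about {p:.1%} of the time "
            f"even with no real difference, so the data are inconclusive.")
```

The statistician's value here is the second function, not the first: `translate(0.003)` says what the number means without asking the listener to speak Statisticese.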
Capps also talks about related jobs, including trust authenticators, trust directors and AI ethicists (and others besides). The point with these jobs is to not focus on the title, but rather on the function, the key idea being that the work that AI does will need to be audited and verified.
And on a related note, this section also has my favorite job from the future: sin-eaters:
In a number of fields, from law to architecture, A.I. will be able to do much of the basic work customers need, from writing a contract to designing a house. But at some point, a human, perhaps even a certified one, needs to sign off on this work. You might call this new role a legal guarantor: someone who provides the culpability that the A.I. cannot. Ethan Mollick, a professor at the Wharton School of Business and the author of “Co-Intelligence: Living and Working With A.I.,” refers to such jobs as the “sin eaters” for A.I. — the final stop in the responsibility chain.
Integration
It's all well and good for a senior leader to wave their hands and say "Let there be AI". But it will take a lot more than seven days and seven nights, and there's going to be no Garden of Eden waiting on the other side. Which tasks, in which functions, in which departments does AI replace, and how exactly? Who decides, on what basis, and how does the implementation proceed?
Expect AI plumbers to be in demand soon, Capps says, and their job will be to "snake the pipes of the entire system". You will also have AI assessors, whose job it will be to assess the "fit" of the AI for the task, job or person in question. This is already happening, by the way, but expect it to accelerate even more rapidly in the coming years:
Moderna, the mRNA vaccine maker, has merged its tech and HR departments. The idea is that over the next few years, the roles of the humans and AI in the organization are likely to see a lot of overlap and hence the decisions regarding them cannot be split across different parts of the organization.
Moderna’s HR chief, Tracey Franklin, has been promoted to the role of “Chief People and Digital Technology Officer”:
Franklin said she is redesigning teams across the company based on what work is best done by people versus what can be automated with technology, including the tech it leverages from a partnership with AI giant OpenAI. Roles are being created, eliminated and reimagined as a result, she said.
Again, please read the whole thing; Capps mentions other roles that might come about in this area as well.
Taste
Capps speaks about how he can imagine a future in which AI will write the entire piece instead of him. His job, Capps hypothesizes, will be to select the inputs, provide guidance on style and phrasing, and finalize and approve the output. He writes, with a memorable turn of phrase, about being the author of the piece, rather than the writer.
https://www.youtube.com/shorts/akcSX81KOv4
Capps references this clip in his write-up, but I think watching the clip itself makes the point much more powerfully. The point being that sure, Veo 3 can come up with twenty different clips, all of them very, very good. But who chooses which clip actually makes it into the final product? On what basis?
How do I know that this clip here is better than that clip there? You may not be the creator of the clip, but is the Chooser of the Best Clip an even more important job? Can writers become article designers? Can singers become song choosers?
This isn't mentioned in the article, but can they help other people develop taste better? In a world with infinite choices and limited time, developing taste itself becomes a very important thing. Is Rick Rubin's job in the future going to be helping other people develop Rick Rubin-like capabilities?
Taste Tutors does have a nice ring to it. No?
Going Meta
Given that AI deployment for existing jobs is a near-certainty, where will we see an increase in demand for humans to be on the job?
Like I said, we are not looking for an answer to the question about employment prospects for humans in 2030. We are looking for a framework that can help each of us come up with an answer. Since we could not come up with the answer itself, we went beyond our brief and tried to come up with a framework instead.
But this ability, that of going beyond (going meta) our brief in a good way, is not an ability that is unique to us. AIs may well develop this ability in the years to come!
When you're faced with twenty different audio clips that have been generated by AI, and asked to choose the best one, you decide which is best based on some benchmark in your head. You may not be able to immediately answer questions about that benchmark, because verbalizing why you like something can often be quite difficult. But this benchmarking, if you think about it, must be happening; for how else are you able to decide which is best?
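Here is a toy Python sketch of that idea (the clips, the features, and the weights are entirely invented): choosing "the best" of several AI-generated clips amounts to maximizing some scoring function, even when the chooser never writes that function down.

```python
# Hypothetical clips, each described by made-up quality features in [0, 1].
clips = [
    {"id": "clip_a", "pacing": 0.9, "clarity": 0.6, "novelty": 0.4},
    {"id": "clip_b", "pacing": 0.7, "clarity": 0.8, "novelty": 0.7},
    {"id": "clip_c", "pacing": 0.5, "clarity": 0.9, "novelty": 0.9},
]

# The "benchmark in your head", written out as explicit (invented) weights.
weights = {"pacing": 0.2, "clarity": 0.5, "novelty": 0.3}

def score(clip):
    """Weighted sum of the features: one possible taste function."""
    return sum(weights[k] * clip[k] for k in weights)

# Choosing the best clip is just argmax over the taste function.
best = max(clips, key=score)
```

An AI that "goes meta" in the sense of this section would be one that infers `weights` from your past choices, rather than being handed them.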
Can AIs go meta and learn about this benchmark? Can they learn about and adopt and adapt human value systems, as opposed to just being guided by them?
And if yes, whither all those jobs from the kingdoms of Trust, Integration and Taste?
This only goes to prove that no good can come from going meta. No?