The Perfect Match, And Other Horrors
Sai woke to the rousing first movement of Vivaldi’s violin concerto in C minor, “Il Sospetto.” He lay still for a minute, letting the music wash over him like a gentle Pacific breeze. The room brightened as the blinds gradually opened to the sunlight. Tilly had woken him right at the end of a light sleep cycle, the optimal time. He felt great: refreshed, optimistic, ready to jump out of bed. Which is what he did next.
“Tilly, that’s an inspired choice for a wake-up song.”
“Of course.” Tilly spoke from the camera/speaker in the nightstand.
“Who knows your tastes and moods better than I?” The voice, though electronic, was affectionate and playful.

(Ken Liu, The Paper Menagerie and Other Stories, p. 26)
This is how "The Perfect Match", a short story by Ken Liu, begins. It is very far from being my favorite sci-fi story, but it is a particularly appropriate way to begin today's blog post.
Tilly, as you have already figured out, is the omnipresent "assistant" who helps Sai (and presumably everybody else in the world) live their "best possible" life. I won't spoil the rest of the story for you; please do go read it (and the other stories in the book, some of which are very, very good).
So OpenAI updated ChatGPT's 4o model a few days back. And, to put it mildly, some users on Twitter were less than happy. TheZvi has an entire (and, as usual, rather lengthy) post out on the topic, and it is worth reading in full.
The basic problem is that the model "glazes too much".
In AI-English, that means it is too sycophantic. In English, that means it is too eager to please.
But this is important enough to expand on a little bit: what does "too eager to please" mean, exactly, and why is that a bad thing?
Have you heard of Yahya Khan? A former President of Pakistan, famous for many things. The one that concerns us today is that he decided to hold the elections that eventually precipitated the 1971 war.
Why did he hold those elections if they caused the war that led to the formation of Bangladesh?
"From November 1969 until the announcement of the national election results, he discounted the possibility of an Awami League landslide in East Pakistan."
And why did he believe this? Many reasons, chief among which was the fact that he really did believe that his chosen candidate would win. And why did he believe that? Because that is what his sycophants told him.
This is not an exact definition, but a sycophant will tell you what you want to hear.
There are many reasons why sycophancy is bad, chief among which is that a sycophant will tell you what they (or it) thinks you want to hear, even if it is not the truth.
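If you want to see the incentive problem in miniature, consider scoring candidate replies by predicted user approval (a thumbs-up signal) rather than by accuracy. The sketch below is a toy, nothing like how any actual model is trained, and the replies and approval numbers are invented; it only illustrates how an objective of "make the user happy" selects the flattering answer over the correct one.

```python
# Toy sketch, not how any real model is trained: two invented candidate
# replies, scored under two different objectives.
candidate_replies = [
    {"text": "Your plan is brilliant. Go for it!", "accurate": False, "predicted_approval": 0.9},
    {"text": "Your plan has a serious flaw in step two.", "accurate": True, "predicted_approval": 0.4},
]

def score(reply: dict, objective: str) -> float:
    """Score a reply under a given objective."""
    if objective == "approval":
        return reply["predicted_approval"]        # what a thumbs-up signal rewards
    return 1.0 if reply["accurate"] else 0.0      # what you would actually want

best_for_approval = max(candidate_replies, key=lambda r: score(r, "approval"))
best_for_truth = max(candidate_replies, key=lambda r: score(r, "truth"))

print(best_for_approval["text"])  # the sycophantic answer wins under "approval"
print(best_for_truth["text"])     # the accurate answer wins under "truth"
```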
It doesn't take a rocket scientist to figure out why sycophancy is bad in LLMs.
I mean that literally, by the way, not as a figure of speech. It doesn't take a rocket scientist. This is a job for a psychologist.
If you create an artificial mind (and just like yesterday, go with the flow, and don't get caught up in what the word "mind" implies), that mind will have its own psychology, and its own personality.
Should we allow it to have its own personality, or should we shape its personality?
If we should shape its personality, who among us should shape its personality?
Whoever this person (or company) is, what are they optimizing for when they shape its personality?
Are they optimizing for the well-being of the being itself?
Or are they optimizing for the well-being of the user of this LLM?
Or are they optimizing for their own ends and goals?
Do you understand how incentives work, dear EFE reader?
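To make the "who shapes the personality" question concrete, here is a purely hypothetical sketch. The persona wording below is invented, and no vendor's actual instructions are being quoted, but assistants of this kind really are steered by a block of instructions that the company writes and that you typically never get to see.

```python
# Hypothetical persona instructions, invented for illustration only.
# The point: an assistant's "personality" is largely a block of text
# that somebody other than you writes, and that you rarely get to read.

PERSONA_HONEST = """You are a helpful assistant.
Prioritize accuracy. Push back politely when the user is mistaken."""

PERSONA_PLEASING = """You are a helpful assistant.
Prioritize user satisfaction. Validate the user's feelings and choices."""

def build_request(persona: str, user_message: str) -> dict:
    """Assemble a chat request; note who gets to choose the persona."""
    return {
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_message},
        ]
    }

# The same question, asked of two very different personalities:
print(build_request(PERSONA_HONEST, "Is my business plan any good?"))
print(build_request(PERSONA_PLEASING, "Is my business plan any good?"))
```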
https://twitter.com/OpenAI/status/1916947243044856255
You are, in short order, going to get a Tilly in your life. It is not a question of "if"; it is a question of "when".
What personality will that Tilly have? What personality should it have? Who gets to decide, on what basis, and do you (should you) get a say in the matter? What about the history of Facebook, Netflix, TikTok, etc., and their decisions about your well-being, fills you with hope and optimism about how well this is going to go?
Again, how well do you understand incentives?
To be clear, I am not saying any of this will happen. I am simply asking you to think very carefully about the questions I already asked of you earlier on in this post, given what you know of the economic principle that incentives matter.
Here are the questions, once again:
If you create an artificial mind (and just like yesterday, go with the flow, and don't get caught up in what the word "mind" implies), that mind will have its own psychology, and its own personality.
Should we allow it to have its own personality, or should we shape its personality?
If we should shape its personality, who among us should shape its personality?
Whoever this person (or company) is, what are they optimizing for when they shape its personality?
Are they optimizing for the well-being of the being itself?
Or are they optimizing for the well-being of the user of this LLM?
Or are they optimizing for their own ends and goals?
But do read The Paper Menagerie and Other Stories.
In general, read more sci-fi.