Stanislaw Lem on professional use of LLMs
Systematic pretend compliance
In the Eleventh Voyage of Ijon Tichy, by Stanislaw Lem, the protagonist is sent to a planet that is apparently undergoing a robot rebellion. He needs to avoid detection, so they give him a robot suit as a disguise, and instruct him not to breathe or eat [1] while on his mission, because robots don’t. Tichy remonstrates:
“You must be mad,” I said. “How can I not breathe? I’ll suffocate!” “A misunderstanding. Obviously you are allowed to breathe, but do it quietly. No sighs, no panting or puffing, no deep inhalation—keep everything inaudible, and for the love of God don’t sneeze. That would be the end of you.”
If you’ve read the story, you’ll remember the twist. If not, suffice it to say that the story is a thinly veiled satire on Polish society under Soviet-style communism. Systematic pretend compliance is the order of the day, for everyone. I’m surprised that the story escaped the censor’s pencil. But then, the censor was a member of Polish society too.
Lem has been dead for nearly 20 years, and generative AI was under the radar until 2022, so he can’t really have an opinion about the professional use of LLMs. But he does have an opinion on systematic pretend compliance. It happens when following the official line on something breaks down in the face of “actually existing realities” [2].
Here’s some disguised and anonymized customer commentary on an LLM system. It refers to a product that competes with my employer’s. But that’s not my point. Similar criticisms could be leveled against many LLM products.
“XYZ LLM professional product is little more than a thin wrapper around ChatGPT with a lightly edited system prompt. I tried the same query on XYZ and ChatGPT. The results are nearly identical. Very disappointing. I’m not inclined to use XYZ until it offers something genuinely better. Right now it’s not worth the money.”
“Our firm has banned us from using ChatGPT for our professional work. So no-one does. Some of the junior people are still using XYZ. I’m guessing this is because it is a good substitute for ChatGPT, not because it is better.”
Seen through the lens of systematic pretend compliance, the second comment invites suspicion. People are surely tempted to circumvent the ban, and some of them are most likely not really in compliance. I bet “no-one does” is untrue.
Do we believe the author when they attribute XYZ use to “junior people”? Are they doing what people do when they tell the doctor, “My friend has an embarrassing problem …”? Perhaps the author is still using XYZ (or ChatGPT) and doesn’t want to say so. Perhaps the author is pretending to be a robot, but the robot suit is slipping.
Does this mean that we should be content with products that are thin wrappers around ChatGPT? No. Obviously a product that properly engages with professional norms would be preferable. But navigating an internal ban is a valid reason for using an alternative product, even if the extra functionality is thin. That use case alone is enough to justify the existence of such products. Lem’s reminder about human nature still matters.
[1] Or defecate.
[2] Yep, a reference to “really existing socialism”. The Warsaw Pact regimes intended this as a positive term, but only the heavily ironic negative meaning survives. A lesson for us all. https://en.wikipedia.org/wiki/Real_socialism