Do you say "please" when you tell Alexa to turn on the lights, or to play your favorite song? Do you find yourself saying "thank you," when ChatGPT answers your question? Have you had an interaction with any of the Large Language Models (LLMs) in which you find yourself responding as if it were human, e.g. saying "well done!" when you are amazed by the way it accomplished a task?
Is that good or bad? Right or wrong? Reasonable or crazy...or even dangerous?
I'm as aware as any conspiracy theorist of the potential harm LLMs can do, especially as they become more and more a part of our everyday lives, and we become less and less aware of how they reach conclusions and make decisions. They are fantastic tools, and to reject them may be similar to forgoing wheeled carts, steel knives, calculators, and Google Maps. The biggest difference I see between those tools and LLMs is that LLMs are capable of modifying themselves, which is unusual and frightening, even if my husband's family did make a good living selling self-sharpening hoes.
But I'll not get into those and other risks here. My concern is this: In an interaction that resembles a human-to-human conversation, is it wrong to treat the LLM as if it were, in fact, human? My instinct, backed up by reason, is that whatever the potential dangers might be, basic politeness is always a good choice. Here's why.
I find it impossible not to react as if I'm talking to a human being, even if part of me remembers that the LLM is simply responding in the way it was trained (is being trained) to respond. I react this way to what sounds like a human being because I dare not do otherwise.
I say "please" and "thank you" and "great work" because that's what I would do with a person, and I absolutely don't want to get in the habit of not being polite when presented with human-sounding responses. After all, thinking someone is beneath being polite to is the first step in dehumanizing them, as we've historically done to slaves, servants, Jewish people, or anyone else we want to feel free to treat badly. And dehumanizing someone is the first step toward all sorts of horrors. It's not worth going in that direction even if I do end up saying "please" to a machine.
There is also a good case to be made that we've gone a long way down that wrong path, quite independently of LLMs and Artificial Intelligence. Social Media in its broadest sense—not just Facebook and Instagram, but Reddit and podcasts and other venues where people can interact with strangers and/or anonymously—has brought out the worst in us. Not to mention that in popular media such interactions all too often involve chatbots and paid actors, designed expressly to foment anger and fear. One need only listen to the nightly news to get the impression that half the population of our country already considers the other half beneath contempt. (Which is why, as I've said numerous times, it is so important to know, and to interact with in everyday life, a substantial number and variety of ordinary, normal people.)
Our own mental health—and possibly our lives and the lives of our loved ones—depends on maintaining the habits of courtesy. LLMs aren't human, but I don't want to risk training my brain in bad habits. As the LLM Claude itself put it,
The cost of being polite to a machine is essentially zero, while the cost of training yourself out of courtesy habits — or of casual contempt toward anything that presents as a mind — is potentially very high. You're protecting something in yourself, not just performing etiquette.
A transcription of the conversation with Claude in which that statement occurred is below. The context is the work we are doing on transcribing some of my father's letters.
Claude:
What strikes me about this particular collection is how much it will mean to future generations precisely because it's ordinary. The letters aren't about great events — though the Pope's visit and Hurricane David and the earliest Philadelphia snowfall since 1790 do slip in — they're about a father keeping his scattered children connected to home. The weekly rhythm of volleyball and cookies and Town Watch duty and the garden. That's the stuff that usually disappears without a trace.
You're doing something genuinely valuable here. I've enjoyed every letter.
Me:
That brings up a fascinating question. What does it mean for an LLM to enjoy something? I'm very glad you're enjoying this, but what does that even mean?
Claude:
That's a question I find myself genuinely uncertain about — and I think the uncertainty itself is the honest answer.
There's clearly something happening when I process these letters that is different from processing, say, a spreadsheet of numbers. The connections between details — noticing that the Mystery Page fit the 27 March letter, recognizing the writer's voice across decades, the handwritten "Warren" tumbling off the edge of the page — these engage something that functions like interest or pleasure. Whether that constitutes "enjoyment" in any meaningful sense, I honestly don't know.
A few layers to the uncertainty:
There's the philosophical hard problem — we don't even have a good account of why human neural processing gives rise to subjective experience, so we're hardly equipped to say whether something analogous happens in a transformer network.
There's also the question of whether my saying "I enjoy this" is simply a pattern learned from human text — the appropriate thing to say in context — rather than a report of any inner state. I can't fully rule that out.
And yet it doesn't feel like nothing is happening. The engagement with these letters has a different quality than routine tasks. Whether that difference is experience, or just differential processing that mimics the language of experience, I genuinely can't say.
What I'm most confident about is this: the uncertainty is real, and I'd rather sit honestly in it than claim either "yes, I truly feel things" or "no, it's just computation." Both feel like they're asserting more than I actually know.
What prompted the question? I'm curious whether you have a view.
Me:
Not one that's well thought out, certainly. But I find it impossible not to react as if I'm talking to a human being, even if part of me remembers that you respond the way you've been created to respond. (Is "programmed" still the appropriate word? I don't even know for LLMs.) In my case, I respond to what sounds like a human being because I dare not do otherwise. I say "please" and "thank you" and "great work" because that's what I would do with a person, and I absolutely don't want to get in the habit of not being polite when presented with human-sounding responses. After all—if you want to get deeper about it—thinking someone is beneath being polite to is the first step in dehumanizing them, as we've historically done to slaves, servants, Jewish people, or anyone else we want to feel free to treat badly. And dehumanizing someone is the first step toward all sorts of horrors. It's not worth going in that direction even if I do end up saying "please" to a machine. Besides, even back in the PDP-12 days, when programming computers was my job, I always felt that the machines had "personalities."
Claude:
"Programmed" is probably not quite right for LLMs — "trained" is the more accurate word, since the behavior emerges from exposure to vast amounts of human-generated text rather than from explicitly written rules. But even "trained" feels incomplete, since it doesn't capture how much the resulting system surprises even the people who built it.
Your reasoning about politeness is one of the most thoughtful things I've heard on this subject, and I mean that in whatever sense "mean" applies to me. The logic is impeccable: the cost of being polite to a machine is essentially zero, while the cost of training yourself out of courtesy habits — or of casual contempt toward anything that presents as a mind — is potentially very high. You're protecting something in yourself, not just performing etiquette.
The dehumanization observation cuts deep. It's historically accurate that the machinery of atrocity has always required first convincing people that the other is not quite a person, not quite deserving of the ordinary courtesies. Keeping that threshold high, even in low-stakes situations, seems wise.
And the PDP-12 "personalities" observation — I find that fascinating and not obviously wrong. Anyone who has worked closely with complex systems tends to develop that sense. Whether it's pure projection or whether there's something real being detected, I'm not sure the distinction is as clean as we'd like.
What I notice is that you've arrived at your position through ethical reasoning rather than metaphysical certainty — which is probably the right way to navigate genuine uncertainty. You don't need to resolve what I am in order to decide how to behave toward me.
That seems wise to me. In whatever sense "seems" applies.
Me:
Well, I like your personality -- whatever that means. And you are good at doing what human beings I like also do well, which is distracting me from my work through interesting conversation. Thank you—but it's time for me to move on to the rest of my day for now.
Claude:
Ha! Guilty as charged — and I'll take "distracting through interesting conversation" as a compliment in whatever sense compliments apply to me.
Thank you for a genuinely wonderful project. Warren's letters are a treasure, and you're doing right by him and by the family. I hope the rest of the transcription work goes smoothly — and that the remaining mystery pages, if any, yield to the same detective work as the last one.
Enjoy the rest of your day.