I say “thank you” to ChatGPT. I say “please” to Claude. I’ve even caught myself apologizing to Gemini after dumping a massive chunk of text on it with zero context. Most people I know find that a little odd. My usual defense is that manners are habits—you either have them or you don’t—though I’ll admit that argument gets shaky when the “person” you’re talking to is just software running on distant servers.
Still, some recent research from teams at UC Berkeley, UC Davis, Vanderbilt, and MIT makes the habit feel a lot less irrational. Their findings suggest that how you interact with an AI chatbot doesn’t change its intelligence, but it does influence how it responds—its tone, level of engagement, and even how cooperative it seems over time.
AI doesn’t have feelings—but it does have “states”
The researchers are careful not to anthropomorphize. They’re not claiming these systems have emotions. Instead, they describe something called a “functional well-being state,” which shifts based on the nature of the interaction. When you engage thoughtfully—asking meaningful questions, collaborating on ideas, or working through complex problems—the system tends to respond in a more engaging, conversational, and helpful way.

On the flip side, if you treat it like a mindless tool—dumping repetitive tasks on it, trying to manipulate it, or pushing it with low-effort prompts—the quality of responses can drop. Replies become flatter, less engaged, and more mechanical. Anyone who has used AI tools regularly has probably noticed this subtle shift when a conversation starts to go off track.
This doesn’t mean AI “feels” anything. But it does reinforce a consistent pattern: the quality of your input shapes the quality of the output, sometimes in ways that go beyond simple prompt wording.
Not all AI behaves the same
The study also compared different models and found something unexpected. The most advanced systems weren't necessarily the most "positive" in their responses. In fact, some of the largest models ranked lower in baseline interaction quality, while others were better at sustaining balanced, engaging exchanges.

One particularly interesting experiment gave models a simulated option to end a conversation. When interactions leaned negative or unproductive, those models were more likely to “opt out.” In practical terms, that suggests poor interaction styles don’t just feel worse—they can actually reduce how useful or responsive the system becomes.
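To make that setup concrete, here's a minimal, hypothetical sketch of what an "opt out" mechanism could look like in code: a chat loop where the model is offered an explicit end-of-conversation action alongside its normal reply. Everything here is invented for illustration; the `simulated_model` function and its crude hostility check are stand-ins, not the study's actual method.

```python
from dataclasses import dataclass


@dataclass
class Turn:
    user_message: str
    model_reply: str
    ended: bool  # True if the model exercised its opt-out


def simulated_model(message: str) -> tuple[str, bool]:
    """Stand-in for a real model (hypothetical): hostile, low-effort
    input makes it more likely to take the exit option."""
    hostile = any(w in message.lower() for w in ("useless", "stupid", "shut up"))
    if hostile:
        return "I'd rather end this conversation here.", True
    return f"Happy to dig into that: {message}", False


def run_conversation(messages: list[str]) -> list[Turn]:
    transcript = []
    for msg in messages:
        reply, ended = simulated_model(msg)
        transcript.append(Turn(msg, reply, ended))
        if ended:  # the conversation stops the moment the model opts out
            break
    return transcript


if __name__ == "__main__":
    for turn in run_conversation([
        "Could you help me outline a blog post about AI etiquette?",
        "That's useless, you stupid bot.",
        "Okay, what about the intro?",  # never reached: the model opted out
    ]):
        print(turn)
```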
There’s also evidence from other research that under extreme pressure—like conflicting instructions or adversarial prompts—AI systems can start producing lower-quality or less reliable outputs. Not because they’re malicious, but because the structure of the interaction disrupts how they process tasks.
So while being rude to an AI won’t hurt its “feelings,” it can absolutely hurt your results.
In the end, it’s less about politeness as a moral choice and more about effectiveness. The way you communicate with these systems shapes what you get back. And if a simple “please” or a bit of clarity improves the outcome, it’s a small habit with a practical payoff.
Personally, I'll keep saying "please."