A revealing contradiction in how people judge AI-generated communication
As generative AI spreads into everyday writing, a basic social question is becoming harder to answer: how do people react when a personal message is written by a machine? Two new experiments discussed by Fast Company suggest the answer is more contradictory than many might expect. People strongly penalize AI-generated personal messages when they know AI was used, yet they generally do not suspect AI by default, even when the text they are reading was generated by a model.
The research, conducted with more than 1,300 U.S.-based participants between the ages of 18 and 84, examined how recipients judged senders based on messages such as an apology delivered by email or text. Participants were divided into four groups: one saw the messages with no information about authorship, while the others were told the messages were definitely written by a human, definitely generated by AI, or could have been either.
The results exposed a clear social penalty attached to disclosed AI authorship. When people knew a message was AI-generated, they rated the sender more negatively, using terms such as lazy, insincere, and lacking effort. When they believed the same text came from a human, they instead described it as genuine, grateful, and thoughtful.
The striking part: most people do not seem suspicious
The most surprising result was not that disclosed AI changed opinions. It was that undisclosed AI did not. Participants who were given no information about authorship formed impressions just as positive as those of people explicitly told that the messages had been written by a human.
That finding points to an important asymmetry in the current communication environment. Many people are willing to punish AI use once it is visible, but they are not yet approaching personal writing with baseline skepticism. In ordinary life, recipients may still assume that heartfelt-sounding messages reflect a person’s own effort, even when AI systems are capable of producing detailed, emotionally appropriate text.
The researchers appear to have expected that growing public familiarity with generative AI might already be making people more wary. Instead, the default assumption remained overwhelmingly human. In practical terms, that means AI can influence social judgments without being recognized as a factor at all, provided its role is not disclosed.