A revealing contradiction in how people judge AI-generated communication
As generative AI spreads into everyday writing, a basic social question is becoming harder to answer: how do people react when a personal message is written by a machine? Two new experiments discussed by Fast Company suggest the answer is more contradictory than many might expect. People strongly penalize AI-generated personal messages when they know AI was used, yet they generally do not suspect AI by default, even when the text they are reading was generated by a model.
The research, conducted with more than 1,300 U.S.-based participants between the ages of 18 and 84, examined how recipients judged senders based on messages such as an apology delivered by email or text. Participants were divided into four groups: one saw the messages with no information about authorship, while the others were told the messages had definitely been written by a human, had definitely been generated by AI, or could have come from either.
The results exposed a clear social penalty attached to disclosed AI authorship. When people knew a message was AI-generated, they rated the sender more negatively, using terms such as lazy, insincere, and low-effort. When they believed the same text came from a human, they instead described it as genuine, grateful, and thoughtful.
The striking part: most people do not seem suspicious
The most surprising result was not that disclosed AI changed opinions. It was that undisclosed AI did not. Participants who were given no information about authorship formed impressions just as positive as those of people explicitly told that the messages had been written by a human.
That finding points to an important asymmetry in the current communication environment. Many people are willing to punish AI use once it is visible, but they are not yet approaching personal writing with baseline skepticism. In ordinary life, recipients may still assume that heartfelt-sounding messages reflect a person’s own effort, even when AI systems are capable of producing detailed, emotionally appropriate text.
The researchers appear to have expected that growing public familiarity with generative AI might already be making people more wary. Instead, the default assumption remained overwhelmingly human. In practical terms, that means AI can influence social judgments without being recognized as a factor at all, provided its role is not disclosed.
Frequent AI users were not much different
The researchers then tested whether familiarity with generative AI changed how participants responded. They compared heavy users, light users, and people who rarely or never use the technology. Here too, the results undercut a common assumption.
Frequent users did penalize disclosed AI use slightly less than infrequent users. But they were not notably more skeptical when authorship was left unspecified. Even people who use generative AI every other day tended to assume the messages were written by a person. The same basic pattern held across usage groups: disclosure triggered a negative reaction, but the absence of disclosure generally preserved positive impressions.
That matters because it suggests exposure alone may not be enough to change social norms around AI-written communication. People can use these tools themselves and still fail to account for them when evaluating the messages they receive. The habit of assuming human authorship appears to remain strong, at least for now.
Why the social penalty matters
The study’s implications reach beyond academic curiosity. Personal and professional relationships are shaped by how people interpret written effort. A thoughtful apology, a warm thank-you note, a carefully composed update, or a tactful work message can all influence how the sender is perceived. Recipients often treat the time and care reflected in a message as evidence of sincerity, authenticity, or competence.
If AI-generated text is judged more harshly when disclosed, then people who use AI may face a reputational cost once that use becomes visible. At the same time, if undisclosed AI messages continue to receive the same positive reception as human-written ones, the technology can quietly reshape interpersonal communication without corresponding changes in expectations.
That creates a new tension. Individuals may have practical reasons to use AI, especially for difficult or emotionally sensitive messages. But the social meaning of doing so remains unsettled. The experiments suggest that many recipients still read personal writing through an older lens, one in which message quality is assumed to reflect direct human labor.
A disclosure problem with no settled norm
The findings also raise a more complicated policy and etiquette question: should people disclose AI assistance in personal communication? The research summarized by Fast Company does not answer that question directly, but it does show the cost of disclosure in current social conditions. Once readers know AI was involved, they view the sender less favorably, even when the text itself is unchanged.
That is a difficult foundation on which to build norms of transparency. If disclosure damages perception but non-disclosure goes unnoticed, people are given a strong incentive to stay silent about AI involvement. Over time, that may widen the gap between how messages are produced and how they are interpreted.
It may also complicate workplace communication, dating, friendships, and other settings where written messages carry emotional or reputational weight. The stronger the models become, the easier it will be to produce convincing text at scale. But the study suggests that social expectations have not caught up with that technical reality.
What this says about the next phase of AI adoption
The most important takeaway is that AI use in writing is not only a technical issue. It is a social one. The technology can already generate messages that many readers receive positively. Yet once its involvement is revealed, the exact same message can be judged as less sincere. That gap is likely to shape how AI is adopted in day-to-day communication.
For now, the public seems to be in an unstable transition period. People know AI exists and many use it themselves, but they still often interpret personal writing as if it comes directly from another person. Until that assumption changes, AI-assisted communication will continue to create mismatches between production and perception.
That is why these experiments matter. They suggest the next phase of generative AI will not be defined only by what models can write, but by whether social norms, disclosure standards, and expectations of authenticity evolve fast enough to meet what the tools are already doing.
Key takeaways
- Participants judged disclosed AI-written personal messages more negatively than identical messages believed to be human-written.
- When authorship was not disclosed, most people assumed the message came from a person and responded positively.
- Even frequent AI users were not much more skeptical by default, suggesting social norms lag behind technical capability.
This article is based on reporting by Fast Company. Originally published on fastcompany.com.