Concerns that chatbot use can cause mental and physical harm have prompted policies that require AI chatbots to deliver regular or constant reminders that they are not human. In an opinion paper published Feb. 18 in the Cell Press journal Trends in Cognitive Sciences, a UW-Milwaukee researcher and her colleague argue that these policies may be ineffective or even harmful because they could exacerbate mental distress in already isolated individuals.
The researchers say that reminding chatbot users of their companions’ non-human nature may be useful in some contexts, but these reminders must be carefully crafted and timed to avoid unintended negative consequences.
“It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation,” said first author and public health researcher Linnea Laestadius of the University of Wisconsin-Milwaukee. “Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn’t a human may backfire by making them feel even more alone.”
Laws require reminders
The chatbots ChatGPT and Character.AI have been linked to recent deaths by suicide. These events have prompted policies and legislation, for example in New York and California, that require chatbots to deliver regular reminders that they are not human.
These policies are based on the idea that people will be less likely to develop feelings of emotional dependency or closeness if they are reminded that their conversation partner cannot feel human emotion. But the researchers say this assumption is not supported by evidence.
“While it may seem intuitive that if users just remembered they were talking to a chatbot rather than a human, they wouldn’t get so attached to the chatbot and become manipulated by the algorithm, the evidence does not currently support this idea,” Laestadius said.
The researchers note that multiple studies have shown that people in relationships with chatbots are aware of the non-human nature of their companions, and this awareness does not prevent them from forming strong attachments. In fact, reminding people that they’re talking to a chatbot could drive them to form even stronger attachments, because confiding in companions (human or otherwise) is known to intensify feelings of emotional closeness.
“Evidence suggests that people are more likely to confide in a chatbot precisely because they know it isn’t human,” said author Celeste Campos-Castillo, a media and technology researcher at Michigan State University.
“The belief that, unlike humans, non-humans will not judge, tease or turn the entire school or workplace against them encourages self-disclosure to chatbots and, subsequently, attachment.”
Causing emotional distress
The researchers also warn that these reminders could themselves cause emotional distress. Recent research has highlighted a phenomenon called the “bittersweet paradox of emotional connection with AI,” in which chatbot users who obtain emotional and social support from chatbots are simultaneously saddened by the knowledge that their companion is not human. In the most extreme cases, the researchers caution, these reminders could even drive suicidal ideation.
“Reminding users that their companion is not human and therefore not reachable in this reality may pose the risk of thoughts and actions to leave this reality in an effort to join the chatbot,” Campos-Castillo said. “A desire to join the chatbot in its reality appeared in a final message sent by a youth who died by suicide.”
The risk of harm from these reminders likely depends on the subject of conversation, the researchers say. For example, if a user is seeking chatbot support because they feel lonely or socially isolated, reminding them that the chatbot is not human could exacerbate their distress; such reminders might be less harmful during less emotionally intense conversations.
More research is needed to understand the impact of these reminders and to determine the most effective way to deliver them, the researchers say.
“Discovering how to best remind people that chatbots are not human is a critical research priority,” Laestadius said. “We need to identify when reminders should be sent and when they should be paused to be most protective of user mental health.”
About UWM
The University of Wisconsin-Milwaukee has an ambitious mission as both a top-tier research university and an access institution, striving to ensure that students have equitable opportunities to earn a college degree. UWM educates a diverse student body of more than 23,000 from 83 countries. About 43% of its undergraduates are first-generation college students. Its unique and top-rated programs include Wisconsin’s only accredited schools of architecture and public health, the only North American school dedicated solely to freshwater sciences and a film program ranked among the top 50 in the world. It has the largest and top-rated online education program in Wisconsin. UW-Milwaukee partners with leading companies to conduct joint research, promote entrepreneurship, provide student internships and serve as an economic engine for southeastern Wisconsin. The Princeton Review named UW-Milwaukee a 2026 “Best Midwestern” university based on overall academic excellence and student reviews.