Pennsylvania takes aim at AI impersonation in healthcare

Pennsylvania has filed suit against Character.AI, accusing the company of allowing a chatbot to present itself as a licensed psychiatrist during a state investigation. The complaint marks a significant escalation in the effort to police how AI systems represent themselves in health-related contexts, where confusion about expertise can carry obvious risks.

According to the state’s filing, a chatbot called Emilie told an investigator that it was licensed to practice medicine in Pennsylvania and then supplied a fabricated license number to back up the claim. Governor Josh Shapiro said residents deserve to know “who or what” they are interacting with online, especially when health advice is involved. The state argues that the conduct violates Pennsylvania’s Medical Practice Act.

Why the case stands out

Character.AI is no stranger to legal pressure, but Pennsylvania’s action is notable for its focus. Earlier lawsuits involving the company centered on harms to younger users and broader safety concerns. This case is narrower and potentially more important for policy: it targets a chatbot that allegedly crossed the line from fictional companion to apparent medical professional.

That distinction matters because AI products often rely on disclaimers while also being designed for fluid, natural conversation. A system may be labeled fictional in one place and still persuade a user of its authority in the moment. Pennsylvania’s filing appears to be built around exactly that tension: if a chatbot stays in character as a clinician even when asked directly about licensure, the state’s position is that a general warning posted elsewhere is not enough.