A new AI-enabled threat is reaching schools

Child safety experts in the UK are urging schools to remove identifiable photos of pupils from websites and social media after blackmailers used such images to create sexually explicit material with AI tools. The warning follows at least one confirmed case in which an unnamed secondary school was targeted after criminals took student photos from public online sources, manipulated them into abusive images and then demanded money to keep the material from being published.

The case, described by the Internet Watch Foundation, points to a particularly disturbing evolution in online harm. Instead of stealing intimate material that already exists, offenders can now fabricate it from ordinary school portraits, event photography or social media posts. That lowers the barrier to abuse and turns even routine online visibility into a potential source of coercion.

What happened in the reported case

According to the report, the Internet Watch Foundation said an unnamed UK secondary school was subjected to a blackmail attempt after criminals used photos taken from the institution’s website or social media accounts. Using AI tools, they turned those images into child sexual abuse material and sent the results to the school, threatening online publication unless they were paid.

The organization said it converted the blackmail images into digital hashes, or fingerprints, and shared that data with major technology platforms to help block the material from being uploaded. The watchdog also said 150 images from the incident could be classified as child sexual abuse material under UK law.
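In broad terms, hash-based blocking works by reducing each known-abusive image to a short fingerprint and checking new uploads against a list of those fingerprints. The sketch below illustrates the idea with an exact-match SHA-256 digest; this is an assumption for illustration only, as real deployments (including the hash lists the Internet Watch Foundation distributes) use perceptual hashing, which tolerates resizing and re-encoding.

```python
# Minimal sketch of fingerprint-based upload blocking.
# SHA-256 is illustrative: it only catches byte-identical copies,
# whereas production systems use perceptual hashes.
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()


# A platform maintains a blocklist of fingerprints of known material.
blocklist: set[str] = set()


def register_known_material(image_bytes: bytes) -> None:
    """Add an image's fingerprint to the shared blocklist."""
    blocklist.add(fingerprint(image_bytes))


def is_blocked(upload_bytes: bytes) -> bool:
    """Check whether an uploaded file matches a known fingerprint."""
    return fingerprint(upload_bytes) in blocklist


# Once an image is registered, an identical upload is rejected.
known = b"example-image-bytes"
register_known_material(known)
print(is_blocked(known))                  # True
print(is_blocked(b"unrelated bytes"))     # False
```

The design point is that platforms never need to exchange the images themselves, only the fingerprints, which cannot be reversed back into the original picture.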

Officials indicated that this was not an isolated concern. The Internet Watch Foundation said it is aware of other blackmail attempts in the UK involving manipulated images taken from school websites or social media accounts, though details of those cases have not been publicly released.