Harassment at Machine Scale
Online harassment is not new, but artificial intelligence is transforming its nature, scale, and impact in ways that existing legal and platform frameworks are struggling to address. A new investigation reveals how readily available AI tools are being weaponized to create synthetic intimate imagery of real people, generate personalized abuse at volumes no human could produce, and orchestrate coordinated harassment campaigns that evade automated detection systems.
The convergence of generative AI capabilities with harassment tactics represents what researchers describe as a phase change in online abuse. Previous forms of harassment required human effort for each instance — writing threatening messages, manually editing images, or coordinating groups of real people. AI removes these labor constraints, enabling a single individual to produce thousands of pieces of harassing content tailored to specific targets.
Deepfake Imagery Drives the Crisis
The most immediately harmful application is the creation of non-consensual intimate imagery using face-swapping and image generation tools. Despite platform policies prohibiting such content and the removal of the most egregious generation tools from major AI platforms, open-source alternatives have proliferated. These tools require minimal technical skill to operate and can produce convincing synthetic images from a handful of publicly available photographs.
Victims of deepfake image abuse report severe psychological harm, including anxiety, depression, and social withdrawal. The damage is compounded by the difficulty of removal: once synthetic images are created and distributed, they can be copied and re-uploaded indefinitely. Legal recourse varies dramatically by jurisdiction; some states and countries have enacted specific deepfake abuse laws, while others have no applicable statutes at all.