Fake releases are showing up under real artists’ names
A problem on music streaming platforms is becoming harder for artists to ignore: AI-made tracks and releases appearing under the identities of real musicians. The issue is not simply low-quality content flooding platforms; it is increasingly a matter of impersonation, attribution, and control over an artist profile that can shape both reputation and income.
The experience of jazz composer and pianist Jason Moran illustrates the problem. After a friend contacted him about a new release on Spotify, Moran discovered an artist profile bearing his name. The page included albums from his former label, Blue Note Records, which owns the rights to his early music, but it also featured a new EP called For You that Moran said was not his work.
According to Moran, the recording did not resemble his style at all. He said there was not even a piano player on the record and described the music as indie pop rather than anything he would make. The discovery pushed him to try to get the release removed.
Generative AI has accelerated an existing fraud pattern
Fraudulent streaming activity is not new. For years, the music business has dealt with fake streams, manipulated metrics, and identity confusion across digital platforms. What appears to be changing is the speed and scale at which generative AI can produce plausible-looking releases, artwork, and metadata and attach them to existing names.
According to the Guardian’s reporting, Moran is one of a growing number of musicians affected by this pattern on streaming services. At least a dozen artists, among them well-known jazz musicians, indie rock acts, and even the rapper Drake, have been targeted by what appear to be AI-generated works presented as if they belong to established artists.
That makes the issue larger than a simple moderation failure. When a platform’s discovery, catalog, and recommendation systems treat a fake release as legitimate, the synthetic material can piggyback on the reputation of a known musician. For artists, that can create confusion among listeners, distort a catalog, and force them into time-consuming cleanup work just to defend their own identity.
Platform controls are under pressure
Spotify has publicly acknowledged both the breadth of spam on its service and the growing pressure created by AI-generated uploads. Last September, the company said it had removed more than 75 million “spammy tracks” over the previous 12 months. It also said it was strengthening protections for musicians, including tougher rules around impersonation.
More recently, Spotify said it was developing a tool intended to give artists more control over what appears under their names. That response suggests the company understands the problem as one of governance and ownership, not just content quality. Artist pages function as identity layers inside the platform. If those layers are weak, bad actors can exploit them.
The Moran case also exposes another complication. Catalog rights, legacy recordings, label ownership, and profile management can already make streaming attribution messy. When AI-generated tracks enter that environment, the boundaries between legitimate catalog material and impostor releases can become even harder for listeners to interpret at a glance.
Why this matters beyond one platform
The broader risk is that streaming services become less trustworthy as archives of creative work. Listeners expect a page bearing a recognized artist’s name to reflect that artist’s output. If that expectation breaks down, platform credibility breaks down with it.
For artists who already operate outside the biggest commercial channels, the stakes may be even higher. Moran said he does not use Spotify and prefers Bandcamp. Yet a profile using his name still appeared on Spotify, meaning an artist does not have to actively participate in a platform to become vulnerable to impersonation there.
That asymmetry favors the platform and the uploader rather than the creator. The artist may not control the storefront, but still has to deal with the reputational effects when something false appears in it.
Generative AI did not invent fraud in music distribution. What it appears to have done is lower the cost of making convincing filler, packaging it attractively, and attaching it to names with existing cultural value. Anime-style cover art, plausible metadata, and algorithm-friendly tracks can all contribute to a release that looks legitimate long enough to spread.
A test case for digital identity in creative markets
The dispute points to a larger unresolved question across AI-era media systems: who gets to authenticate identity at scale? Streaming companies, labels, distributors, and artists all have partial roles, but gaps between those systems create openings for abuse.
In practice, artists need faster challenge mechanisms, clearer verification, and more direct authority over the releases associated with their names. Platforms need stronger screening before material goes live, especially where an upload is linked to an established artist identity. And listeners need clearer signals when a release’s origin is uncertain.
Without those changes, AI impersonation is likely to become more than a nuisance: an infrastructure problem for digital culture, in which authorship, authenticity, and attribution can be bent by automated production and weak platform controls. Moran’s experience shows how strange and personal that can feel for the artist. It also shows how quickly the problem can move from abstract debate about AI “slop” to a direct challenge to artistic identity.
This article is based on reporting by The Guardian.