The appeal of instant app creation is colliding with basic security
AI-assisted app builders have made it possible for almost anyone to generate and deploy a web application in minutes. That speed is part of their appeal. It is also, increasingly, part of the problem.
According to reporting from Wired based on research by cybersecurity firm RedAccess, thousands of public-facing apps created with tools including Lovable, Replit, Base44, and Netlify were found to have little or no meaningful security. In more than 5,000 cases, the applications were reportedly accessible to anyone who knew or guessed the URL. Around 40% of the apps examined exposed sensitive information, according to RedAccess cofounder Dor Zvi.
The exposed material described in the report is not trivial. Zvi said the data included medical information, financial data, corporate presentations, strategy documents, and customer chatbot logs. Wired also said it verified that several of the exposed applications shown in screenshots were still online and accessible.
This is not just about bugs
The most important point in the report is that many of these failures were not subtle coding flaws. They were cases of missing or nearly nonexistent access control. Some apps allegedly allowed anyone with a browser to reach the data. Others reportedly relied on flimsy barriers, such as allowing a visitor to sign in with any email address.
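To make that concrete, here is a minimal sketch of both failure modes: a data route with no authentication check at all, and a "login" that accepts any email address without verifying anything. The code is illustrative only, a small Flask app written for this article, not taken from the report or from any of the named platforms.

```python
# Hypothetical illustration of the two access-control gaps described above.
# Not code from any platform in the report.
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder value

# Failure mode 1: no access control at all.
# Anyone who finds the URL gets the data.
@app.get("/records-open")
def records_open():
    return {"records": ["patient notes, invoices, chat logs..."]}

# Failure mode 2: a "login" that accepts any email with no password
# and no verification, which is barely different from no login at all.
@app.post("/login-weak")
def login_weak():
    session["user"] = request.form["email"]  # nothing is checked
    return "signed in"

# The baseline fix: refuse to serve data unless the visitor has an
# authenticated session established by a real credential check.
@app.get("/records")
def records():
    if "user" not in session:
        abort(401)  # unauthenticated visitors get nothing
    return {"records": ["..."]}
```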
That distinction matters because it changes the threat model. Security teams are used to looking for exploitable defects in software. What RedAccess describes is something more basic: applications going live without a meaningful concept of who should be allowed in at all.
In that sense, the risk created by “vibe-coded” apps is not simply that AI may introduce new bugs. It is that the same tooling lowers the friction to publishing software so aggressively that some creators skip foundational security decisions entirely.
How the apps were found
RedAccess said the search process was surprisingly straightforward. The platforms named in the reporting host users' apps on the companies' own domains rather than on domains the users control. That let the researchers use simple Google and Bing searches targeting those domains, combined with other search terms, to surface large numbers of AI-built apps.
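The report does not publish the exact queries, but the general technique is ordinary search-engine "dorking" with the site: operator. A rough sketch of what that looks like follows; the hosting domains and keywords below are assumptions for illustration, not RedAccess's actual search terms.

```python
# Illustrative only: the report does not disclose the researchers' queries.
# Domains are assumed from each platform's public hosting conventions;
# keywords are hypothetical examples of terms that might surface apps.
PLATFORM_DOMAINS = [
    "lovable.app",   # assumed Lovable hosting domain
    "replit.app",    # assumed Replit deployment domain
    "base44.app",    # assumed Base44 hosting domain
    "netlify.app",   # Netlify's well-known hosting domain
]

KEYWORDS = ["dashboard", "admin", "invoice", "patient records"]

def dork_queries():
    """Yield search strings that could be pasted into Google or Bing."""
    for domain in PLATFORM_DOMAINS:
        for keyword in KEYWORDS:
            yield f'site:{domain} "{keyword}"'

for query in dork_queries():
    print(query)
```

The point of the sketch is how little it takes: because the apps live on a handful of shared platform domains, a single search operator narrows the entire web down to AI-built apps, and any keyword does the rest.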
That detail should concern both platform providers and organizations using these tools internally. It suggests that the exposed apps were not buried in obscure corners of the web. They were discoverable through ordinary search methods. Once discoverable, any absent or weak authentication layer becomes a direct path to data exposure.
Why this may be a larger organizational problem
Zvi described the leak pattern in unusually strong terms, saying organizations are exposing private data through vibe-coding applications and calling it one of the biggest events ever in which sensitive information has been opened to anyone in the world. Even allowing for the rhetoric that often accompanies security disclosures, the underlying pattern is significant.
The spread of AI development tools inside companies means software creation is no longer limited to traditional engineering teams. Product managers, analysts, marketers, and operations staff can now assemble internal tools or customer-facing prototypes with a prompt and a deploy button. That changes who is writing software, but it does not change what software can expose.
If an employee connects an AI-built app to internal data and publishes it with default settings, the result can be a full-blown leak without any malicious attacker needing to breach a perimeter. The application becomes the breach.
The cultural shift behind the problem
Part of the story here is technical, but part of it is cultural. AI coding platforms are sold on immediacy. They promise that software can be produced the way presentations or documents are produced: quickly, iteratively, and without much specialized training. That promise is powerful, especially inside organizations that want faster experimentation.
But software is not only a creative artifact. It is also an access surface. The easier it becomes to create apps, the easier it becomes to create insecure apps at scale. In that sense, the Wired report reads less like an isolated vendor issue and more like an early warning about a new class of shadow IT.
The problem is amplified when hosting, deployment, and discoverability are built into the same workflow. If a user can generate an app, connect data, and publish it on a major platform domain within minutes, then governance has to move upstream. Security review after deployment may be too late.
What should happen next
The reporting does not provide formal responses from every platform named, so the most defensible takeaway is broader than any one company. AI app-building ecosystems need stronger defaults around authentication and data exposure. Users need clearer warnings about what becomes public. And organizations need to treat prompt-based app builders as real software development environments, not harmless productivity tools.
The larger lesson is plain. When app creation becomes instantaneous, security cannot remain optional or assumed. The real breakthrough in AI software tooling will not be measured only by how fast it can publish code, but by whether it can keep inexperienced builders from publishing their data along with it.
This article is based on reporting by Wired and was originally published on wired.com.