Last week, Israeli cybersecurity firm RedAccess dropped a bombshell that should concern every business owner, CTO, and founder who has been tempted by the promise of “build an app in 60 seconds.” Their research uncovered 380,000 publicly accessible assets built with popular AI coding platforms — Lovable, Base44, Replit, and Netlify — with roughly 5,000 containing sensitive corporate data visible to anyone with a browser.
This is not a theoretical risk. This is patient records, financial data, and internal corporate documents sitting on the open web, indexed by Google, waiting to be found.
TL;DR
- RedAccess found 380,000 publicly accessible assets built with AI vibe-coding platforms, ~5,000 containing sensitive corporate data
- Exposed data includes medical records, bank financials, Fortune 500 internal documents, and clinical trial details — all indexed by search engines
- Default-public privacy settings and the absence of any security review are the root causes — not AI code generation itself
- Platform CEOs are deflecting responsibility, calling public access “expected behaviour” — this is the new S3 bucket crisis
- Businesses need professional security review, proper access controls, and governance frameworks before any AI-generated app touches production data
What RedAccess Actually Found
The scale of exposure is staggering. RedAccess researchers discovered applications leaking:
- Medical records from long-term care facilities, including patient conversations and staff schedules
- Financial information from a Brazilian bank — unredacted and publicly accessible
- Fortune 500 internal documents that were never meant to leave the corporate network
- Clinical trial details from a UK health company, potentially violating GDPR and clinical data regulations
- Full customer service transcripts for a cabinet supplier — names, addresses, complaints, all in the open
- Shipping vessel schedules for a logistics company — commercially sensitive operational data
Every single one of these applications was built by someone who thought they were solving a problem quickly. Instead, they created a compliance nightmare.
How Did We Get Here?
The root cause is painfully simple: these platforms default to public access. When an employee spins up an internal tool on Lovable or Base44, that application is publicly accessible unless they manually change the privacy settings. Worse still, many of these apps are indexed by Google, meaning a well-crafted search query is all it takes to find them.
This is the new S3 bucket crisis — except instead of misconfigured cloud storage, we have entire applications with business logic, database connections, and user data sitting wide open. And unlike S3 bucket misconfigurations, which typically require some technical knowledge to create, vibe-coded apps can be deployed by anyone in the organisation with a browser and a credit card.
RedAccess CEO Dor Zvi put it bluntly: employees without cybersecurity training are deploying production tools without company permission or access controls, at unprecedented scale and speed.
The Platform Response Is Telling
Perhaps the most concerning aspect of this story is how the platforms responded:
- Replit CEO Amjad Masad argued that “public apps being accessible on the internet is expected behaviour” and that privacy settings can be changed with a single click
- Wix (which owns Base44) claimed RedAccess “deliberately withheld URLs” needed for verification
- Lovable stated they are “actively investigating” the reported exposures
The “it’s the user’s fault” defence is remarkably similar to what we heard from AWS in the early days of S3 bucket leaks. It took years of high-profile breaches and regulatory pressure before cloud providers started implementing better defaults and guardrails. We are at the very beginning of that same cycle with AI coding platforms.
Why This Matters More Than You Think
The vibe-coding data leak is not just a security story — it is a governance story. Consider what is actually happening in organisations right now:
- Shadow IT on steroids: Employees are building and deploying applications without IT oversight. This has always been a problem, but AI coding tools have reduced the barrier from “someone who can code” to “anyone with an idea”
- No security review process: These apps bypass every control your organisation has — code review, penetration testing, access management, data classification. They go straight from prompt to production
- Regulatory exposure: If your organisation handles personal data under GDPR, HIPAA, or any similar regulation, an employee exposing patient records through a public Lovable app is a reportable breach. The fines do not care that it was “just a quick internal tool”
- No audit trail: Most of these platforms provide minimal logging. When (not if) a breach is discovered, you will struggle to determine what data was exposed, for how long, and who accessed it
What Your Business Should Do Right Now
If your organisation has not already addressed this, here is a practical checklist:
Immediate Actions
- Audit your exposure: Search for your company name, domain, and key terms on these platforms. You may be surprised by what you find
- Establish a policy: Make it clear that deploying any application — AI-generated or otherwise — that handles company or customer data requires IT approval
- Block or monitor: Consider whether your network controls should flag or restrict access to vibe-coding platforms until governance is in place
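As a starting point for the audit step, the search can be semi-automated with simple `site:`-restricted queries. The sketch below is illustrative only: the hosting domains listed are assumptions about where each platform publishes apps, so verify the actual publishing domains yourself before relying on the results.

```python
# Sketch: generate search-engine "dork" queries to audit your organisation's
# exposure on vibe-coding platforms. The domains below are ASSUMPTIONS about
# each platform's publishing domain -- verify them before use.

# Assumed publishing domains (hypothetical; check each platform's docs)
PLATFORM_DOMAINS = [
    "lovable.app",
    "base44.app",
    "replit.app",
    "netlify.app",
]

def audit_queries(company_terms: list[str]) -> list[str]:
    """Build one site-restricted search query per (platform, term) pair."""
    return [
        f'site:{domain} "{term}"'
        for domain in PLATFORM_DOMAINS
        for term in company_terms
    ]

if __name__ == "__main__":
    # Run each printed query manually in a search engine
    for query in audit_queries(["Acme Corp", "acme.com"]):
        print(query)
```

Running the queries by hand keeps a human in the loop — a hit does not prove a leak, but it tells you exactly which deployments to investigate first.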
Longer-Term Governance
- Create an approved tools list: If employees need rapid prototyping tools, give them sanctioned options with proper security configurations
- Security review for AI-generated code: Any AI-generated application that touches real data needs the same security review as hand-written code. Full stop
- Training: Help employees understand that “easy to build” does not mean “safe to deploy.” A 30-minute security awareness session could prevent the next data leak
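One cheap piece of the security-review step can be automated: scanning AI-generated source for obviously hardcoded credentials before anything ships. The sketch below is a minimal illustration, not a substitute for a real review or a dedicated secret scanner — the patterns are deliberately rough.

```python
# Sketch: a minimal pre-deployment check for AI-generated code. Flags the
# most obvious hardcoded-credential shapes; a real review would pair a
# dedicated scanner with manual inspection.
import re

# Rough, illustrative patterns -- not an exhaustive credential ruleset
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_source(source: str) -> list[str]:
    """Return the names of any credential patterns found in the source text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(source)]

if __name__ == "__main__":
    snippet = 'api_key = "sk-live-abcdef1234567890"'
    print(scan_source(snippet))  # flags the hardcoded key
```

A check like this belongs in whatever gate sits between “prompt” and “production” — a CI step, a pre-commit hook, or the IT approval process described above.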
The Bigger Picture: Speed vs. Security
Vibe coding is not going away, nor should it. The ability to rapidly prototype and test ideas is genuinely valuable. But there is a world of difference between building a prototype to test a concept and deploying an application that handles patient records or financial data.
The organisations that will navigate this well are the ones that draw a clear line between experimentation and production. Prototype freely, but the moment real data enters the picture, professional development practices must apply — proper authentication, access controls, security testing, and deployment governance.
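The core discipline once real data is involved is default-deny access control — the exact opposite of the default-public behaviour that caused this leak. The sketch below illustrates the principle in framework-agnostic Python; `VALID_TOKENS` is a hypothetical stand-in for a real identity provider, not a production auth system.

```python
# Sketch: default-deny access control. Every request is rejected unless an
# explicit credential check passes -- the inverse of "public by default".
# VALID_TOKENS is a hypothetical stand-in for a real identity provider.
VALID_TOKENS = {"token-for-alice", "token-for-bob"}

def authorise(request_headers: dict[str, str]) -> bool:
    """Allow a request only when it carries a known bearer token."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False  # no credentials presented: deny by default
    return auth.removeprefix("Bearer ") in VALID_TOKENS

if __name__ == "__main__":
    print(authorise({}))                                           # denied
    print(authorise({"Authorization": "Bearer token-for-alice"}))  # allowed
```

The design point is that the deny path is the code path requiring no configuration at all — an unconfigured app leaks nothing, rather than everything.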
At REPTILEHAUS, we have been helping businesses build secure, production-ready applications for years. The irony of the vibe-coding crisis is that it is making the case for professional development stronger than ever. When the cost of getting it wrong is a GDPR fine, a data breach notification, or your company’s internal documents on the front page of Hacker News, “move fast and break things” stops being clever and starts being negligent.
If your team is exploring AI-assisted development and needs help establishing proper security governance — or if you need a production-grade version of something that started as a vibe-coded prototype — get in touch. We specialise in turning good ideas into secure, scalable applications.
📷 Photo by Jake Walker on Unsplash