Insights · Apr 1, 2026 · 3 min read

The "Vibecoding" Trap: Why Your AI-Built App Is Probably Wide Open

Explore the risks of relying on AI to build apps, highlighting security vulnerabilities common in AI-generated code that leave your app exposed.

Written by

Idir Ouhab Meskine

Updated Apr 1, 2026

The recent buzz around the Claude Code CLI "leak" (which Anthropic later clarified was an intended release) has brought a topic to the surface that I’ve been obsessed with since February.

We are living in the era of "vibecoding", where if you can describe it, you can build it. It’s an incredible time to be alive, but my recent research suggests we might be building on a foundation of sand.

The Experiment

To see how deep the "AI magic" goes, I challenged 10 people in my circle to build and launch a project using only AI guidance. I didn't give them technical hints. I just acted as a sounding board. The group was a perfect cross-section of the current tech landscape:

  • 2 Senior Developers.
  • 3 Hobbyists with some basic JavaScript/HTML knowledge.
  • 5 Complete beginners who had never seen a line of code.

The Results

Nine of the 10 managed to showcase a fully functional platform. As a fan of what AI can do, I was genuinely impressed. People who couldn't write a "Hello World" script six months ago were suddenly showing me live, interactive apps.

Then, I tried to break them.

I’m not a professional security researcher. I’m just someone who spent my younger years tinkering with the web when it was much less secure. But using incredibly basic "script kiddie" techniques, I managed to hack 7 out of the 9 projects.

How I Did It

The exploits weren't sophisticated. For most of them, I simply:

  1. Opened the browser's developer tools.
  2. Refreshed the page and intercepted a token from an existing API call.
  3. Used Postman to target the same URL.
  4. Swapped a POST request for a GET request.

Just like that, I was able to pull "user" data (thankfully, they used seed data). In one particularly alarming case, I even managed to gain full access to their Supabase account.
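The method-swap flaw above can be sketched in a few lines. This is a toy reconstruction, not any of the actual projects: the `/api/users` route, the token value, and the seed data are all hypothetical, and the server is deliberately written with the vulnerability — it answers any HTTP method as long as some token is attached.

```python
# A sketch of the method-swap probe against a deliberately vulnerable toy API.
# All names (the /api/users route, the token value) are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SEED_USERS = [{"id": 1, "email": "alice@example.com"}]

class VulnerableAPI(BaseHTTPRequestHandler):
    """Serves /api/users to ANY method as long as a token header is present —
    the exact flaw that made the GET-for-POST swap work."""

    def _serve(self):
        if self.path == "/api/users" and self.headers.get("Authorization"):
            body = json.dumps(SEED_USERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(401)
            self.end_headers()

    do_GET = do_POST = _serve  # no method restriction: the vulnerability

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), VulnerableAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "attack": replay a token captured from dev tools, but as a GET.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/api/users",
    headers={"Authorization": "Bearer captured-from-devtools"},
    method="GET",
)
with urllib.request.urlopen(req) as resp:
    leaked = json.loads(resp.read())
print(leaked)  # the seed-user records come back over a plain GET
server.shutdown()
```

Nothing here is exotic: no exploit framework, no fuzzing, just a replayed header and a different verb. That is the whole "script kiddie" toolkit the seven broken projects fell to.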

Why the Pros Stayed Safe

The two senior developers were the only ones I couldn't crack (at least not with my skill set). It wasn't because their "vibes" were better; it was because they knew the invisible rules of the road. Their projects included:

  • Timed/Expiring tokens.
  • Strict HTTP method blocking (preventing a GET where only a POST should live).
  • Server-side validation that didn't expose keys in the HTML.
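The first two of those protections can be sketched as plain server-side checks. This is a minimal illustration under assumed names (`issue_token`, `check_request`, a 15-minute TTL) — real projects would lean on their framework's routing and auth middleware rather than hand-rolling this, but the logic is the same.

```python
# A sketch of two of the checks above: strict HTTP method enforcement and
# expiring tokens. Names and the 15-minute TTL are illustrative assumptions.
TOKEN_TTL_SECONDS = 900  # tokens go stale after 15 minutes

def issue_token(value: str, now: float) -> dict:
    """Attach an expiry timestamp when the token is issued."""
    return {"value": value, "expires_at": now + TOKEN_TTL_SECONDS}

def check_request(method: str, allowed_method: str,
                  token: dict, now: float) -> tuple:
    """Reject the wrong verb outright, then reject stale tokens."""
    if method != allowed_method:
        return (405, "method not allowed")  # a GET where only POST should live
    if now >= token["expires_at"]:
        return (401, "token expired")       # a replayed token stops working
    return (200, "ok")

token = issue_token("abc123", now=1000.0)
print(check_request("GET", "POST", token, now=1001.0))   # (405, 'method not allowed')
print(check_request("POST", "POST", token, now=1001.0))  # (200, 'ok')
print(check_request("POST", "POST", token, now=2000.0))  # (401, 'token expired')
```

With just these two checks in place, my intercept-and-replay trick dies twice: the swapped GET is refused before the token is even looked at, and a captured token only works until it expires.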

The Takeaway

I still believe vibecoding is one of the most powerful shifts in history. But functionality does not equal security.

If you are building things for the real world—especially if you're handling customer data—you have to remember that compliance isn't optional. Things like cookie banners and data encryption are "boring," so we often forget to ask the AI to include them.

Don't just ask the AI to "build it." Ask it to "secure it."

Next time you're about to push to production, try asking your assistant:

"What security measures do I need to implement if I’m dealing with Personally Identifiable Information (PII)?"

It only takes one simple question to turn a "vibe" into a professional-grade application.

Topics

ai security, app development, cybersecurity, ai vulnerabilities, software risks, vibecoding, artificial intelligence, app security, coding risks, data protection
