
Vibe Coding Produces 45% Insecure Code: The Hangover Is Real
"Vibe coding" was Collins Dictionary's Word of the Year 2025. The idea is seductive: describe what you want, let AI write the code, don't look too closely at the output. Ship it.
We use AI every day. We wrote about it. But after a year of reviewing AI-generated code in production codebases, we need to talk about what's actually in that code — because 45% of it fails basic security tests.
The Numbers Are Ugly
Veracode's 2025 GenAI Code Security Report found that 45% of AI-generated code contains security vulnerabilities. Not style issues. Not lint warnings. Actual exploitable flaws.
Here's the breakdown of what we see most often:
| Vulnerability | Frequency | Severity |
|---|---|---|
| Missing input validation | Very common | Medium–High |
| SQL/NoSQL injection | Common | Critical |
| Missing authentication checks | Common | Critical |
| Hardcoded secrets in examples | Occasional | Critical |
| Path traversal in file operations | Occasional | High |
| Missing rate limiting | Very common | Medium |
These aren't theoretical. We've caught every single one of these in AI-generated code on real client projects.
What AI-Generated Vulnerabilities Actually Look Like
1. The Missing Auth Check
This is the most dangerous pattern because it looks completely correct:
```typescript
// AI-generated: clean, typed, works perfectly
export async function GET(
  request: NextRequest,
  { params }: { params: { id: string } }
) {
  const order = await prisma.order.findUnique({
    where: { id: params.id },
    include: { items: true, customer: true },
  });

  if (!order) {
    return NextResponse.json({ error: "Not found" }, { status: 404 });
  }

  return NextResponse.json(order);
}
```
See the problem? There's no authentication. No authorization. Any user can fetch any order by ID — including other customers' orders with their personal data. The AI generated a perfectly working endpoint that's a GDPR violation waiting to happen.
```typescript
// What it should be:
export async function GET(
  request: NextRequest,
  { params }: { params: { id: string } }
) {
  const session = await auth();
  if (!session?.user) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const order = await prisma.order.findUnique({
    where: { id: params.id, userId: session.user.id }, // scoped to user
    include: { items: true }, // no customer PII leak
  });

  if (!order) {
    return NextResponse.json({ error: "Not found" }, { status: 404 });
  }

  return NextResponse.json(order);
}
```
We caught this exact pattern in a client review. The AI had generated 14 API routes. Three of them had no auth checks. One exposed customer email addresses. This was in a PR that "looked good" at first glance.
2. The Confident SQL Injection
AI models have learned enough to avoid string concatenation + SQL most of the time. But they still produce subtle injection vectors:
```typescript
// AI-generated: builds the query string by hand "for convenience"
export async function searchProducts(query: string) {
  const products = await prisma.$queryRawUnsafe(`
    SELECT * FROM products
    WHERE name ILIKE '%${query}%'
    ORDER BY created_at DESC
  `);
  return products;
}
```

At a glance this looks like Prisma's safe tagged template, but `$queryRawUnsafe` takes a plain string: JavaScript interpolates `${query}` before Prisma ever sees it, so nothing is parameterized. A user searching for `%'; DROP TABLE products; --` has a very bad day ahead of them — or rather, you do.
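A sketch of the fix, assuming the same Prisma setup: use `$queryRaw`'s tagged template, which binds every interpolation as a parameter, and escape LIKE wildcards so user input can't widen the pattern. The helper name is ours, and `prisma` is passed in to keep the sketch self-contained:

```typescript
// Escape LIKE/ILIKE metacharacters so user input can't act as wildcards.
export function toLikePattern(input: string): string {
  return "%" + input.replace(/[\\%_]/g, (ch) => "\\" + ch) + "%";
}

// Safe version: $queryRaw's tagged template binds ${...} as a parameter,
// so the search term is always data, never SQL.
export async function searchProducts(prisma: any, query: string) {
  return prisma.$queryRaw`
    SELECT * FROM products
    WHERE name ILIKE ${toLikePattern(query)}
    ORDER BY created_at DESC
  `;
}
```

Parameterization stops injection; the escaping stops a user from smuggling `%` or `_` into the pattern and matching everything.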
3. The Oversharing Response
```typescript
// AI-generated: returns the full user object
const user = await prisma.user.findUnique({ where: { id } });
return NextResponse.json(user);
// Includes: passwordHash, resetToken, internalNotes, stripeCustomerId
```
AI defaults to returning everything. It doesn't know which fields are sensitive because it doesn't understand your business context. A select clause isn't optional — it's a security boundary.
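The fix is an explicit allow-list. With Prisma that means a `select` clause; the same idea as a pure serializer, using illustrative field names from the example above:

```typescript
// With Prisma, the boundary lives in the query:
//   prisma.user.findUnique({ where: { id }, select: { id: true, email: true, name: true } })
// The same allow-list as a pure function:
type User = {
  id: string;
  email: string;
  name: string;
  passwordHash: string;
  resetToken: string | null;
  internalNotes: string;
  stripeCustomerId: string;
};

export function toPublicUser(user: User): { id: string; email: string; name: string } {
  // Allow-list: a new column stays private until someone adds it here on purpose
  return { id: user.id, email: user.email, name: user.name };
}
```

An allow-list fails safe: a deny-list leaks every field someone forgets to exclude.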
Why Vibe Coding Makes This Worse
Traditional AI-assisted coding has a human reading every generated line. Vibe coding explicitly discourages that. "Don't read the code, just see if it works." This creates three compounding problems:
- No review filter. Vulnerabilities that a developer would catch in a normal PR flow pass straight through.
- False confidence. The code works in the happy path, so it must be fine. Security flaws rarely show up in manual testing.
- Volume. Vibe coding produces more code faster. More code = more attack surface = more places for vulnerabilities to hide.
If AI generates 100 functions and 45% have security issues, that's 45 vulnerable functions. A human writing those 100 functions over weeks would catch most issues during the writing process itself. Speed without review is just faster vulnerability creation.
Our AI Code Review Checklist
After catching too many issues in AI-generated code, we built a review checklist. Every AI-generated PR gets checked against this before it can merge:
Authentication & Authorization
- Every endpoint checks authentication
- Data is scoped to the authenticated user (no IDOR)
- Admin-only routes have role checks
- API keys are not hardcoded anywhere
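The first three items compress into one pure guard. A sketch with illustrative types, not a drop-in for any particular auth library:

```typescript
// Sketch: authentication, user scoping (IDOR), and role checks in one place.
type Session = { userId: string; role: "user" | "admin" } | null;

export function canReadOrder(session: Session, orderUserId: string): boolean {
  if (!session) return false;                 // every endpoint checks authentication
  if (session.role === "admin") return true;  // admin routes rely on role, not ID
  return session.userId === orderUserId;      // data scoped to the authenticated user
}
```

Centralizing the check means a forgotten auth line in one route becomes a forgotten function call, which is much easier to grep for.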
Input Validation
- All user inputs are validated and sanitized
- File uploads check type, size, and content
- Query parameters are parsed with Zod or equivalent
- No raw SQL with string interpolation
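For the query-parameter item, a Zod schema is the usual tool; here is a dependency-free sketch of the same "parse, don't trust" rule for a hypothetical `?page=` parameter:

```typescript
// Parse, don't trust: clamp a ?page= query parameter to a safe integer range.
export function parsePage(raw: string | null): number {
  const n = Number(raw);
  // Reject NaN, floats, zero/negatives, and absurdly large values
  if (!Number.isInteger(n) || n < 1 || n > 10_000) return 1;
  return n;
}
```

The upper bound matters too: an unbounded page or limit parameter is an easy denial-of-service lever.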
Data Exposure
- Response objects use `select`, never return full models
- Error messages don't leak internal details
- Logs don't contain PII or secrets
- Stack traces are hidden in production
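A sketch of the logging item: redact sensitive-looking keys before anything reaches the log sink. The key pattern below is a starting point, not a complete PII list:

```typescript
// Redact values whose keys look sensitive before logging the object.
const SENSITIVE_KEY = /pass(word)?|token|secret|key|email|phone/i;

export function redactForLog(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, SENSITIVE_KEY.test(k) ? "[REDACTED]" : v])
  );
}
```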
Business Logic
- Edge cases are handled (empty arrays, null values, concurrent access)
- Rate limiting is applied to public endpoints
- Monetary calculations use `Decimal`, not `float`
- State transitions are validated (can't go from "refunded" to "pending")
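The state-transition item is cheap to enforce with an explicit transition table. A sketch using hypothetical order states matching the checklist's example:

```typescript
// Allowed order-state transitions; anything not listed is rejected.
const TRANSITIONS: Record<string, readonly string[]> = {
  pending: ["paid", "cancelled"],
  paid: ["shipped", "refunded"],
  shipped: ["refunded"],
  refunded: [],   // terminal: no way back to "pending"
  cancelled: [],
};

export function canTransition(from: string, to: string): boolean {
  return TRANSITIONS[from]?.includes(to) ?? false;
}
```

A table like this also makes the valid lifecycle reviewable in one screenful, which a scattering of `if` statements never is.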
Infrastructure
- Environment variables are used for configuration
- CORS is configured explicitly, not `*`
- Sensitive headers are set (CSP, HSTS, X-Frame-Options)
- Dependencies are pinned and audited
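The header item as code: a dependency-free map you could apply in middleware or at the reverse proxy. The values are reasonable defaults, not a policy; tighten the CSP per app:

```typescript
// Baseline security headers from the checklist; a real CSP needs per-app tuning.
export function securityHeaders(): Record<string, string> {
  return {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
  };
}
```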
Where AI-Generated Code Is Safe
We're not saying "don't use AI." We use it daily. But we've learned which tasks are safe to delegate and which aren't:
| Safe for AI | Needs Human Review | Never Delegate |
|---|---|---|
| UI components | API routes | Auth logic |
| Prisma schemas | Database queries | Payment processing |
| Type definitions | Form validation | Security rules |
| Test boilerplate | Error handling | Data deletion |
| CSS/Tailwind | File operations | Encryption |
| README drafts | Email templates | Access control |
The pattern is simple: AI handles structure, humans handle trust boundaries.
The Bottom Line
We love AI-assisted development. It makes us measurably faster. But "vibe coding" — shipping AI code without reading it — is negligence, not innovation.
The hangover is real. We've caught SQL injections, auth bypasses, and data leaks in AI-generated code that passed surface-level review. Every single time, the code looked clean. It was typed. It had good variable names. It just happened to be exploitable.
Read the code. Review the code. Especially when a machine wrote it.

