
OWASP Top 10 in AI-Generated Code: The Vulnerabilities Your AI Keeps Writing
AI coding assistants like GitHub Copilot, Cursor, and ChatGPT have fundamentally changed how developers build software. You can scaffold an entire SaaS in a weekend. But there is a problem that nobody talks about enough: these tools are trained on millions of lines of code that include some of the most common security mistakes in existence.
The OWASP Top 10 is the industry-standard list of the most critical web application security risks. When we analyzed code generated by popular AI assistants across hundreds of PreBreach scans, we found that AI-generated codebases consistently introduce vulnerabilities from at least five of the top ten categories — often in the very first prompt response.
This is not a theoretical risk. These are the exact patterns we see every week in real applications built by real developers shipping real products.
Why AI Code Generators Write Insecure Code
Before we walk through each category, it helps to understand why this happens. AI code generators optimize for functional correctness — making code that works. Security is a non-functional requirement that the models treat as secondary. The training data contains far more examples of insecure code than secure code, simply because most open-source code on GitHub was not written with security as a primary concern.
When you prompt an AI to "create a user authentication system," it will give you something that authenticates users. Whether it resists credential stuffing, implements proper session management, or follows the principle of least privilege is another question entirely.
A01: Broken Access Control
The #1 vulnerability on the OWASP Top 10 — and the one AI generators get wrong most often.
The AI Pattern
Ask any AI assistant to build a CRUD API, and you will almost certainly get something like this:
// AI-generated Next.js API route
export async function GET(
  request: Request,
  { params }: { params: { id: string } }
) {
  const user = await db.user.findUnique({
    where: { id: params.id },
  });
  return Response.json(user);
}

export async function DELETE(
  request: Request,
  { params }: { params: { id: string } }
) {
  await db.user.delete({
    where: { id: params.id },
  });
  return Response.json({ success: true });
}
Notice what is missing? There is no authentication check. There is no authorization check. Any unauthenticated user can read or delete any other user's data just by changing the ID in the URL.
This also shows up in more subtle forms:
// AI-generated "protected" route — but only checks if logged in
export async function PATCH(request: Request, { params }: { params: { id: string } }) {
  const session = await getServerSession();
  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // BUG: Checks authentication but not authorization
  // Any logged-in user can modify any other user's data
  const updated = await db.user.update({
    where: { id: params.id },
    data: await request.json(),
  });
  return Response.json(updated);
}
Why the AI Does This
AI models complete prompts with the most statistically likely code. Most example code in tutorials, Stack Overflow answers, and open-source repos does not include authorization checks because the examples are meant to demonstrate functionality, not production security. The model learns that API routes fetch data by ID — and that is exactly what it generates.
The Fix
Every data-access operation must verify that the requesting user has permission to perform that specific action on that specific resource:
export async function PATCH(request: Request, { params }: { params: { id: string } }) {
  const session = await getServerSession();
  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Verify the user owns this resource
  const existing = await db.user.findUnique({
    where: { id: params.id },
  });
  if (existing?.ownerId !== session.user.id) {
    return Response.json({ error: 'Forbidden' }, { status: 403 });
  }

  const updated = await db.user.update({
    where: { id: params.id },
    data: await request.json(),
  });
  return Response.json(updated);
}
PreBreach's AI agents specifically test for IDOR (Insecure Direct Object Reference) vulnerabilities by attempting to access resources belonging to different user sessions. This is one of the first things our scan catches.
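One way to make the ownership check hard to forget is to centralize it in a small guard helper that every route reuses, instead of re-writing the comparison inline each time. A minimal sketch (the Resource shape and helper names are illustrative, not from any framework):

```typescript
// Illustrative types; adapt to your own models.
type Resource = { id: string; ownerId: string };

class AccessError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

// Throws unless the resource exists and belongs to the requesting user.
// Using the same response for "missing" and "not yours" avoids leaking
// which IDs exist in the database.
function assertOwnership(resource: Resource | null, userId: string): Resource {
  if (!resource || resource.ownerId !== userId) {
    throw new AccessError(403, 'Forbidden');
  }
  return resource;
}
```

A route handler can then fetch, call `assertOwnership`, and map the thrown status to the HTTP response in one shared error handler.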
A02: Cryptographic Failures
Hardcoded secrets, plaintext storage, and broken crypto — all AI favorites.
The AI Pattern
AI code generators routinely hardcode secrets directly into source files:
// AI-generated database connection
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  'https://abcdefgh.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6...'
);

// AI-generated JWT implementation
import jwt from 'jsonwebtoken';

const SECRET = 'my-super-secret-key-123';

export function createToken(userId: string) {
  return jwt.sign({ userId }, SECRET, { expiresIn: '7d' });
}

// AI-generated password storage
async function createUser(email: string, password: string) {
  await db.user.create({
    data: {
      email,
      password, // Stored in plaintext
    },
  });
}
Why the AI Does This
Training data is full of tutorials and example code where hardcoded values make the example easier to follow. The model also lacks context about your deployment environment, so it cannot know to reference environment variables specific to your setup. For password storage, many older code examples predate the widespread adoption of bcrypt and argon2.
The Fix
Never commit secrets to source code. Use environment variables and validate their presence at startup:
// Validated environment variables
const supabaseUrl = process.env.SUPABASE_URL;
const supabaseKey = process.env.SUPABASE_SERVICE_ROLE_KEY;

if (!supabaseUrl || !supabaseKey) {
  throw new Error('Missing required Supabase environment variables');
}

const supabase = createClient(supabaseUrl, supabaseKey);

// Proper password hashing
import bcrypt from 'bcrypt';

async function createUser(email: string, password: string) {
  const hashedPassword = await bcrypt.hash(password, 12);
  await db.user.create({
    data: {
      email,
      password: hashedPassword,
    },
  });
}
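A related cryptographic failure worth watching for in generated code: comparing secrets such as API keys or reset tokens with ===, which can leak information through response timing. Node's standard crypto module provides a constant-time comparison; a small wrapper might look like this:

```typescript
import { timingSafeEqual } from 'crypto';

// Constant-time string comparison for secrets. A plain === short-circuits
// on the first differing character, which a patient attacker can measure.
function secretsMatch(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  // timingSafeEqual requires equal lengths; unequal lengths cannot match anyway
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}
```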
A03: Injection
String concatenation in queries is the oldest vulnerability in the book — and AI still writes it.
The AI Pattern
Despite decades of awareness, AI generators still produce code vulnerable to SQL injection, NoSQL injection, and command injection:
// AI-generated search endpoint
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const query = searchParams.get('q');

  // SQL injection via string interpolation
  const results = await db.$queryRawUnsafe(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  return Response.json(results);
}

// AI-generated MongoDB query
app.get('/api/users', async (req, res) => {
  const { username } = req.query;
  // NoSQL injection — attacker can pass { "$gt": "" } as username
  const user = await User.findOne({ username });
  res.json(user);
});

// AI-generated file processing
import { exec } from 'child_process';

app.post('/api/convert', (req, res) => {
  const { filename } = req.body;
  // Command injection — filename could contain "; rm -rf /"
  exec(`convert ${filename} output.pdf`, (error, stdout) => {
    res.json({ success: !error });
  });
});
Why the AI Does This
String interpolation is the most concise way to build dynamic queries, and conciseness is what AI models optimize for. Parameterized queries require more boilerplate, and the model often chooses the shorter path. For NoSQL injection, many developers (and AI models) do not realize that passing unsanitized objects to MongoDB queries is dangerous.
The Fix
Always use parameterized queries and validate input types:
// Parameterized query
import { Prisma } from '@prisma/client';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const query = searchParams.get('q') || '';
  const results = await db.$queryRaw(
    Prisma.sql`SELECT * FROM products WHERE name LIKE ${`%${query}%`}`
  );
  return Response.json(results);
}

// Type-validated MongoDB query
import { z } from 'zod';

const querySchema = z.object({
  username: z.string().min(1).max(100),
});

app.get('/api/users', async (req, res) => {
  const parsed = querySchema.safeParse(req.query);
  if (!parsed.success) {
    return res.status(400).json({ error: 'Invalid input' });
  }
  const user = await User.findOne({ username: parsed.data.username });
  res.json(user);
});
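One caveat the parameterized query still leaves open: parameterization stops injection, but user input placed inside a LIKE pattern can still contain the wildcards % and _. If that matters for your search, escape them first (this sketch assumes backslash as the escape character; pair it with an explicit ESCAPE clause where your SQL dialect requires one):

```typescript
// Escape LIKE wildcards so user input matches literally.
// Backslash itself must be escaped too, or it could neutralize the escaping.
function escapeLikePattern(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => '\\' + ch);
}
```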
A04: Insecure Design
AI generates features. It does not generate threat models.
The AI Pattern
AI assistants build what you ask for — literally. Ask for a login form, and you get a login form. You do not get rate limiting, account lockout, CAPTCHA, or brute-force protection:
// AI-generated login endpoint — no rate limiting
export async function POST(request: Request) {
  const { email, password } = await request.json();
  const user = await db.user.findUnique({ where: { email } });
  if (!user || !(await bcrypt.compare(password, user.password))) {
    return Response.json({ error: 'Invalid credentials' }, { status: 401 });
  }
  const token = generateToken(user.id);
  return Response.json({ token });
}

// AI-generated password reset — no token expiration or single-use enforcement
export async function POST(request: Request) {
  const { email } = await request.json();
  const resetToken = crypto.randomUUID();
  await db.passwordReset.create({
    data: { email, token: resetToken },
    // No expiresAt field
  });
  await sendEmail(email, `Reset your password: /reset?token=${resetToken}`);
  return Response.json({ success: true });
}

// AI-generated file upload — no size or type restrictions
export async function POST(request: Request) {
  const formData = await request.formData();
  const file = formData.get('file') as File;
  // No file size check, no type validation, no malware scanning;
  // file.name is attacker-controlled, so the path is open to traversal
  const buffer = await file.arrayBuffer();
  await writeFile(`./uploads/${file.name}`, Buffer.from(buffer));
  return Response.json({ url: `/uploads/${file.name}` });
}
Why the AI Does This
Insecure design is not about a specific code bug — it is about missing security controls at the architecture level. AI models respond to what you ask. If your prompt does not mention rate limiting, the model will not include it. Security-by-design requires anticipating abuse scenarios, and current AI models do not proactively think adversarially.
The Fix
Add rate limiting and abuse prevention as standard infrastructure:
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(5, '15 m'), // 5 attempts per 15 minutes
});

export async function POST(request: Request) {
  const ip = request.headers.get('x-forwarded-for') || 'unknown';
  const { success } = await ratelimit.limit(ip);
  if (!success) {
    return Response.json(
      { error: 'Too many attempts. Try again later.' },
      { status: 429 }
    );
  }

  const { email, password } = await request.json();
  // ... rest of login logic
}
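The same idea can be prototyped without Redis. A minimal in-memory fixed-window sketch for local development (not a substitute for a shared store in multi-instance deployments, since each instance would count separately):

```typescript
// Fixed-window rate limiter kept entirely in process memory.
// Returns a function that reports whether a given key is still under the limit.
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();

  return function allow(key: string, now: number = Date.now()): boolean {
    const entry = hits.get(key);
    // Start a fresh window if none exists or the old one has expired
    if (!entry || now >= entry.resetAt) {
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Usage: const allow = createRateLimiter(5, 15 * 60 * 1000); then call allow(ip) at the top of the handler and return a 429 when it yields false.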
A05: Security Misconfiguration
Debug mode in production, default credentials, and overly permissive CORS — the classics.
The AI Pattern
AI-generated configuration code routinely ships with development-friendly defaults that are dangerous in production:
// AI-generated CORS configuration
app.use(cors({
  origin: '*', // Allows any website to make requests
  credentials: true,
}));

// AI-generated Next.js config with all headers disabled
const nextConfig = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          { key: 'Access-Control-Allow-Origin', value: '*' },
        ],
      },
    ];
  },
};

// AI-generated Express setup
const app = express();
app.use(express.json());
app.use(morgan('dev')); // Verbose logging in production
app.set('x-powered-by', true); // Leaks framework info

// Debug error handler that leaks stack traces
app.use((err, req, res, next) => {
  res.status(500).json({
    error: err.message,
    stack: err.stack, // Full stack trace sent to client
  });
});
# AI-generated Dockerfile
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
# Running as root, no .dockerignore, copies everything including .env
CMD ["node", "server.js"]
Why the AI Does This
Permissive configurations make code "just work" during development. The AI's training data is predominantly development-oriented code where broad CORS, verbose error messages, and debug logging are normal. Production hardening is usually described in deployment guides, not in the code samples that form the model's training data.
The Fix
Set restrictive defaults and use environment-specific configuration:
// Production-ready CORS
const allowedOrigins = process.env.ALLOWED_ORIGINS?.split(',') || [];

app.use(cors({
  origin: (origin, callback) => {
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true,
}));

// Environment-aware error handling
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({
    error: process.env.NODE_ENV === 'production'
      ? 'Internal server error'
      : err.message,
  });
});
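Misconfiguration also includes headers that are simply absent. A small helper returning a conservative baseline set (the values shown are common defaults; tune Content-Security-Policy to your app before enabling it):

```typescript
// Baseline security response headers to attach to every response.
function securityHeaders(): Record<string, string> {
  return {
    'Strict-Transport-Security': 'max-age=63072000; includeSubDomains', // force HTTPS
    'X-Content-Type-Options': 'nosniff',        // disable MIME sniffing
    'X-Frame-Options': 'DENY',                  // block clickjacking via framing
    'Referrer-Policy': 'no-referrer',           // do not leak URLs to third parties
    'Content-Security-Policy': "default-src 'self'", // restrict resource origins
  };
}
```

In Express this can be applied with a one-line middleware that calls res.set(securityHeaders()), or replaced wholesale by the maintained helmet package.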
PreBreach checks for misconfigured CORS headers, exposed stack traces, missing security headers, and other configuration issues as part of every scan.
A07: Identification and Authentication Failures
Weak JWTs, missing session management, and broken auth flows.
The AI Pattern
AI generators frequently produce authentication code with critical weaknesses:
// AI-generated JWT with weak configuration
import jwt from 'jsonwebtoken';

export function createToken(user: User) {
  return jwt.sign(
    {
      id: user.id,
      email: user.email,
      role: user.role, // Role stored in token — can be manipulated if secret is weak
    },
    'secret', // Weak secret
    {
      expiresIn: '30d', // Token valid for 30 days with no refresh mechanism
    }
  );
}

export function verifyToken(token: string) {
  // No algorithm restriction — vulnerable to algorithm confusion attacks
  return jwt.verify(token, 'secret');
}

// AI-generated session handling
app.post('/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await authenticate(email, password);
  if (user) {
    req.session.userId = user.id;
    // Session not regenerated after login — session fixation vulnerability
    // No session timeout configuration
    // No concurrent session control
    res.json({ success: true });
  }
});

// AI-generated "remember me" functionality
app.post('/login', async (req, res) => {
  const user = await authenticate(req.body.email, req.body.password);
  if (user) {
    // Storing user ID in a plain cookie — trivially forgeable
    res.cookie('userId', user.id, { maxAge: 30 * 24 * 60 * 60 * 1000 });
    res.json({ success: true });
  }
});
Why the AI Does This
Secure authentication is one of the hardest things to implement correctly, and the training data contains far more simple (broken) examples than production-grade implementations. JWTs in particular are frequently demonstrated in tutorials with intentionally simple secrets and configurations so readers can focus on the concept rather than the security.
The Fix
Use established auth libraries instead of rolling your own. If you must implement JWTs:
import jwt from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET;
if (!JWT_SECRET || JWT_SECRET.length < 32) {
  throw new Error('JWT_SECRET must be set and at least 32 characters');
}

export function createToken(userId: string) {
  return jwt.sign(
    { sub: userId }, // Minimal claims — fetch role from DB on each request
    JWT_SECRET,
    {
      algorithm: 'HS256',
      expiresIn: '15m', // Short-lived access token
      issuer: 'prebreach.com',
    }
  );
}

export function verifyToken(token: string) {
  return jwt.verify(token, JWT_SECRET, {
    algorithms: ['HS256'], // Explicitly restrict algorithm
    issuer: 'prebreach.com',
  });
}
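And if a "remember me" cookie really must carry a value like a user ID, it should at minimum be signed so the client cannot forge it. A sketch using Node's built-in crypto (helper names are illustrative; a maintained session library handles this and much more for you):

```typescript
import { createHmac } from 'crypto';

// Append an HMAC so any tampering with the value invalidates the cookie.
// base64url digests never contain '.', so the separator is unambiguous.
function signValue(value: string, secret: string): string {
  const sig = createHmac('sha256', secret).update(value).digest('base64url');
  return `${value}.${sig}`;
}

// Returns the original value, or null if the signature does not verify.
// (A constant-time comparison would be stricter still.)
function verifySignedValue(signed: string, secret: string): string | null {
  const dot = signed.lastIndexOf('.');
  if (dot < 0) return null;
  const value = signed.slice(0, dot);
  return signValue(value, secret) === signed ? value : null;
}
```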
Better yet, use a proven auth provider like Clerk, Auth0, or NextAuth.js. PreBreach's 24 custom Nuclei templates include specific checks for JWT misconfigurations in modern stacks.
A08: Software and Data Integrity Failures
No dependency auditing, no subresource integrity, no supply chain awareness.
The AI Pattern
AI assistants recommend packages without considering their security posture:
// AI-generated package.json — popular but potentially vulnerable dependencies
{
  "dependencies": {
    "express": "^4.18.0",
    "lodash": "^4.17.0",              // History of prototype pollution vulnerabilities
    "moment": "^2.29.0",              // Deprecated, known ReDoS vulnerabilities
    "request": "^2.88.0",             // Deprecated since 2020
    "serialize-javascript": "^3.0.0"  // XSS vulnerability in older versions
  }
}

// AI-generated script loading — no integrity checks
export default function Layout({ children }) {
  return (
    <html>
      <head>
        {/* No subresource integrity hash — CDN compromise = XSS */}
        <script src="https://cdn.jsdelivr.net/npm/some-library@3.0.0/dist/lib.min.js" />
        <link href="https://cdn.example.com/styles.css" rel="stylesheet" />
      </head>
      <body>{children}</body>
    </html>
  );
}
# AI-generated CI pipeline — no security scanning
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm install
      - run: npm run build
      - run: npm run deploy
      # No npm audit, no dependency scanning, no SAST
Why the AI Does This
AI models recommend packages based on popularity in their training data, not on current security status. A package that was widely used in 2022 might have known vulnerabilities in 2026. The models also lack awareness of the npm advisory database, CVE reports, or deprecation notices published after their training cutoff.
The Fix
Add dependency security to your workflow:
// package.json scripts
{
  "scripts": {
    "audit": "npm audit --audit-level=high",
    "audit:fix": "npm audit fix",
    "deps:check": "npx depcheck",
    "postinstall": "npm audit --audit-level=high"
  }
}

# CI pipeline with security scanning
name: Deploy
on:
  push:
    branches: [main]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm audit --audit-level=high
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3
  deploy:
    needs: security
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - run: npm run deploy
The Compounding Effect
The real danger is not any single vulnerability category in isolation. It is that AI code generators introduce vulnerabilities from multiple categories simultaneously in the same codebase. A typical AI-generated SaaS might have:
- No authorization checks on API routes (A01)
- Hardcoded Supabase keys in client code (A02)
- String-interpolated database queries (A03)
- No rate limiting on login or signup (A04)
- Wildcard CORS and verbose error messages (A05)
- Weak JWT configuration (A07)
- Outdated dependencies with known CVEs (A08)
Each of these alone is exploitable. Together, they create an attack surface that an automated scanner or a motivated attacker can compromise in minutes.
What You Should Do
1. Treat AI-generated code as untrusted input. Review every suggestion with a security lens. If the AI wrote a database query, check for injection. If it wrote an API route, check for authorization.
2. Use security linters and static analysis. Tools like ESLint security plugins, Semgrep, and CodeQL can catch many of these patterns automatically.
3. Test your application from the attacker's perspective. This is what penetration testing does — and it is why we built PreBreach. Our 8 AI agents specifically target these OWASP Top 10 categories using 24 custom Nuclei templates designed for modern stacks like Next.js, Supabase, and Vercel.
4. Scan before you ship. Running a PreBreach scan takes less time than fixing a breach. With plans starting at $29/month, there is no reason to ship blind.
The AI assistant that helped you build your app is not going to help you secure it. That requires a different kind of AI — one trained to think like an attacker, not a developer. That is exactly what PreBreach does.
Run your first scan today and find out what your AI coding assistant left behind.

