Over four parts, we've covered the full stack of web security fundamentals: XSS and injection, CSRF and rate limiting, CORS and security headers, session attacks, SSRF, logging, and container hardening.
Part 5 closes the series with the security patterns most developers never implement — until they get burned. These aren't obscure edge cases. OAuth misconfiguration is one of the most common sources of account takeover bugs. WebSocket security is almost never addressed. CSP is implemented just enough to satisfy a security scanner, then forgotten. Getting these right is the difference between an app that's hardened and one that's merely defended.
1. OAuth / OIDC Implementation Mistakes
What it is: OAuth 2.0 and OpenID Connect are the backbone of "Sign in with Google / GitHub / Apple" flows. The protocol is well-designed, but the devil is in the implementation. Most OAuth vulnerabilities aren't flaws in the spec — they're implementation shortcuts that developers take when they don't fully understand why each step exists.
Missing the state parameter
The state parameter in the OAuth flow is a CSRF token for your authorization request. Without it, an attacker can trick a user into binding their account to the attacker's identity — a login CSRF attack.
// BAD — initiating OAuth without a state parameter
app.get('/auth/github', (req, res) => {
  const authUrl = new URL('https://github.com/login/oauth/authorize')
  authUrl.searchParams.set('client_id', process.env.GITHUB_CLIENT_ID!)
  authUrl.searchParams.set('redirect_uri', 'https://example.com/auth/callback')
  authUrl.searchParams.set('scope', 'read:user')
  // No state — attacker can craft a malicious authorization link
  res.redirect(authUrl.toString())
})

app.get('/auth/callback', async (req, res) => {
  const { code } = req.query
  // No state validation — just exchanges the code
  const token = await exchangeCodeForToken(code as string)
  req.session.userId = token.userId
  res.redirect('/dashboard')
})
// GOOD — generate and verify state on every OAuth flow
import { randomBytes } from 'crypto'

app.get('/auth/github', (req, res) => {
  const state = randomBytes(32).toString('hex')
  req.session.oauthState = state // store in session before redirecting
  const authUrl = new URL('https://github.com/login/oauth/authorize')
  authUrl.searchParams.set('client_id', process.env.GITHUB_CLIENT_ID!)
  authUrl.searchParams.set('redirect_uri', 'https://example.com/auth/callback')
  authUrl.searchParams.set('scope', 'read:user')
  authUrl.searchParams.set('state', state)
  res.redirect(authUrl.toString())
})

app.get('/auth/callback', async (req, res) => {
  const { code, state } = req.query
  // Reject the callback if state doesn't match
  if (!state || state !== req.session.oauthState) {
    return res.status(400).json({ error: 'Invalid state parameter' })
  }
  delete req.session.oauthState // consume the nonce — single use
  const token = await exchangeCodeForToken(code as string)
  req.session.userId = token.userId
  res.redirect('/dashboard')
})
Trusting the redirect_uri without validation
Authorization codes and tokens are delivered to the redirect_uri. If your OAuth provider allows wildcard redirect URIs, or if your server doesn't validate the redirect URI strictly, an attacker can register a URL that captures the code.
// BAD — constructing redirect_uri dynamically from user input
app.get('/auth/github', (req, res) => {
  const returnTo = req.query.returnTo as string
  const authUrl = new URL('https://github.com/login/oauth/authorize')
  authUrl.searchParams.set('client_id', process.env.GITHUB_CLIENT_ID!)
  // Attacker sets returnTo=https://attacker.com — code gets delivered there
  authUrl.searchParams.set(
    'redirect_uri',
    `https://example.com/auth/callback?next=${returnTo}`,
  )
  res.redirect(authUrl.toString())
})
// GOOD — redirect_uri is a hardcoded constant, never user-supplied
const OAUTH_REDIRECT_URI = 'https://example.com/auth/callback'

app.get('/auth/github', (req, res) => {
  // Store the post-login destination separately in the session
  if (req.query.returnTo) {
    req.session.postLoginRedirect = req.query.returnTo as string
  }
  const authUrl = new URL('https://github.com/login/oauth/authorize')
  authUrl.searchParams.set('client_id', process.env.GITHUB_CLIENT_ID!)
  authUrl.searchParams.set('redirect_uri', OAUTH_REDIRECT_URI)
  authUrl.searchParams.set('state', req.session.oauthState!)
  res.redirect(authUrl.toString())
})

app.get('/auth/callback', async (req, res) => {
  // Validate state (as above)...
  const token = await exchangeCodeForToken(req.query.code as string)
  req.session.userId = token.userId
  // Validate the post-login redirect: relative paths only, and reject
  // protocol-relative URLs like //attacker.com, which browsers treat as absolute
  const next = req.session.postLoginRedirect
  const safeRedirect =
    next && next.startsWith('/') && !next.startsWith('//') ? next : '/dashboard'
  delete req.session.postLoginRedirect
  res.redirect(safeRedirect)
})
Accepting ID token claims without signature verification
When using OIDC, you receive an ID token — a JWT signed by the provider. Skipping signature verification means an attacker can craft a token with arbitrary claims.
// BAD — decoding the JWT without verifying the signature
import jwt from 'jsonwebtoken'

app.post('/auth/verify', (req, res) => {
  const { idToken } = req.body
  // jwt.decode does NOT verify the signature — never use for auth
  const claims = jwt.decode(idToken) as { sub: string; email: string }
  req.session.userId = claims.sub
  res.json({ success: true })
})
// GOOD — verify the signature against the provider's public keys
import { createRemoteJWKSet, jwtVerify } from 'jose'

const JWKS = createRemoteJWKSet(
  // Google's JWKS endpoint — the jwks_uri listed in its OIDC discovery document
  new URL('https://www.googleapis.com/oauth2/v3/certs'),
)

app.post('/auth/verify', async (req, res) => {
  try {
    const { idToken } = req.body
    const { payload } = await jwtVerify(idToken, JWKS, {
      issuer: 'https://accounts.google.com',
      audience: process.env.GOOGLE_CLIENT_ID,
    })
    // payload is now verified — claims are trustworthy
    req.session.userId = payload.sub
    res.json({ success: true })
  } catch {
    res.status(401).json({ error: 'Invalid token' })
  }
})
Key takeaway: Always include a cryptographically random `state` parameter. Hardcode your `redirect_uri` — never derive it from user input. Verify ID token signatures using the provider's JWKS endpoint, and validate the `iss` and `aud` claims. The OAuth spec has guards for every attack; the mistake is not using them.
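As a supplementary sketch (the helper names here are my own, not from the series), the generate-and-compare logic can be factored into small pure functions, using Node's `crypto.timingSafeEqual` so the state comparison doesn't leak timing information the way `!==` can in principle:

```typescript
import { randomBytes, timingSafeEqual } from 'crypto'

// Generate a fresh state value for an outgoing authorization request
function newOAuthState(): string {
  return randomBytes(32).toString('hex')
}

// Constant-time comparison of the callback's state against the stored one.
// Returns false for missing values or any length mismatch.
function stateMatches(received: unknown, stored: unknown): boolean {
  if (typeof received !== 'string' || typeof stored !== 'string') return false
  const a = Buffer.from(received)
  const b = Buffer.from(stored)
  if (a.length !== b.length) return false
  return timingSafeEqual(a, b)
}
```

In the callback handler, `stateMatches(req.query.state, req.session.oauthState)` then replaces the plain `!==` check.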
2. WebSocket Security
What it is: WebSockets bypass most of the security controls developers put on HTTP endpoints. There's no automatic CORS enforcement, no middleware chain, and the connection stays open — a single unauthenticated or unvalidated connection is a persistent foothold.
No authentication on connection
// BAD — WebSocket server accepts any connection, no auth check
import { WebSocketServer } from 'ws'

const wss = new WebSocketServer({ port: 8080 })

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    // Who is this? We have no idea.
    broadcastToRoom(data)
  })
})
// GOOD — authenticate before the connection is accepted
import { createServer } from 'http'
import { WebSocketServer } from 'ws'
import { parse } from 'url'
import { verifySessionToken } from './auth'

const httpServer = createServer(app) // the Express app from earlier
const wss = new WebSocketServer({ noServer: true })

// Intercept the HTTP upgrade request — reject unauthenticated connections
httpServer.on('upgrade', async (req, socket, head) => {
  try {
    const { query } = parse(req.url!, true)
    // Token can come from the query string (since WS doesn't support custom
    // headers easily) or from the session cookie. Note: query-string tokens
    // can end up in server logs — prefer the cookie where possible.
    const token = query.token as string
    const user = await verifySessionToken(token)
    if (!user) {
      socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n')
      socket.destroy()
      return
    }
    wss.handleUpgrade(req, socket, head, (ws) => {
      // Attach the verified user to the socket before emitting
      ;(ws as any).user = user
      wss.emit('connection', ws, req)
    })
  } catch {
    socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n')
    socket.destroy()
  }
})
No origin validation
Unlike HTTP requests, WebSocket connections don't enforce CORS. A malicious page on another domain can open a WebSocket to your server and leverage the user's active cookies.
// BAD — accepts connections from any origin
const wss = new WebSocketServer({ port: 8080 })
// No origin check — any website can connect using the victim's cookies

// GOOD — validate the Origin header during the upgrade handshake
const ALLOWED_ORIGINS = new Set([
  'https://example.com',
  'https://app.example.com',
])

httpServer.on('upgrade', (req, socket, head) => {
  const origin = req.headers.origin
  if (!origin || !ALLOWED_ORIGINS.has(origin)) {
    socket.write('HTTP/1.1 403 Forbidden\r\n\r\n')
    socket.destroy()
    return
  }
  // Proceed with auth check and upgrade...
})
No input validation on messages
WebSocket messages arrive as raw strings or buffers. Without validation, your message handlers are as vulnerable as an unvalidated HTTP endpoint — command injection, prototype pollution, oversized payloads.
// BAD — parsing and trusting raw message content directly
wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    const message = JSON.parse(data.toString()) // can throw, can be anything
    db.insert({ content: message.text }) // unvalidated input into the database
  })
})
// GOOD — validate every message with a schema, set size limits
import { z } from 'zod'
import { WebSocketServer } from 'ws'

const MAX_MESSAGE_BYTES = 64 * 1024 // 64KB

const ChatMessageSchema = z.object({
  type: z.enum(['chat', 'ping']),
  roomId: z.string().uuid(),
  text: z.string().min(1).max(2000),
})

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    // Reject oversized messages before parsing
    if (Buffer.byteLength(data as Buffer) > MAX_MESSAGE_BYTES) {
      ws.close(1009, 'Message too large')
      return
    }
    let parsed: unknown
    try {
      parsed = JSON.parse(data.toString())
    } catch {
      ws.close(1003, 'Invalid JSON')
      return
    }
    const result = ChatMessageSchema.safeParse(parsed)
    if (!result.success) {
      ws.send(JSON.stringify({ error: 'Invalid message format' }))
      return
    }
    // result.data is now typed and validated
    handleChatMessage((ws as any).user, result.data)
  })
})
Key takeaway: Authenticate during the HTTP upgrade — reject unauthenticated sockets before the WebSocket connection is established. Validate the `Origin` header to prevent cross-site WebSocket hijacking. Treat every incoming message as untrusted input: parse defensively, validate with a schema, and enforce size limits.
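Because a WebSocket stays open indefinitely, one further hardening step worth considering is per-connection rate limiting of messages, in the spirit of the HTTP rate limiting covered earlier in the series. A minimal token-bucket sketch (the class name and the capacity/refill numbers are illustrative assumptions, not from the series):

```typescript
// One bucket per connection: allows short bursts, caps sustained throughput
class MessageRateLimiter {
  private tokens: number
  private last: number

  constructor(
    private capacity = 10, // burst size
    private refillPerSec = 5, // sustained messages per second
    now = Date.now(),
  ) {
    this.tokens = capacity
    this.last = now
  }

  // Returns true if this message is within budget, false if it should be dropped
  allow(now = Date.now()): boolean {
    const elapsed = (now - this.last) / 1000
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec)
    this.last = now
    if (this.tokens < 1) return false
    this.tokens -= 1
    return true
  }
}
```

A handler would create one limiter per connection and check `limiter.allow()` at the top of the message callback, closing the socket (e.g. with code 1008) on repeated violations.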
3. Content Security Policy (CSP) Done Right
What it is: CSP is a response header that tells the browser which sources of content are trusted. It's the last line of defense against XSS — even if an attacker injects a <script> tag, CSP can prevent the browser from executing it. But most CSP deployments are either too permissive to stop anything, or too restrictive and immediately broken by developers adding unsafe-inline to make things work.
The permissive CSP that stops nothing
// BAD — policies that are too wide to provide real protection
const badPolicies = [
  // Allows any inline script and eval — completely negates script-src
  "script-src 'self' 'unsafe-inline' 'unsafe-eval'",
  // Wildcard: allows scripts from any subdomain you might have
  "script-src 'self' *.example.com cdn.example.com",
  // Allows data: URIs for scripts — common XSS vector
  "script-src 'self' data:",
]
// Particularly dangerous: unsafe-inline + unsafe-eval together
// This is equivalent to having no CSP at all for scripts
Report-only mode: the right way to roll out CSP
The challenge with CSP is that a strict policy will break your app until every inline script and external dependency is accounted for. Content-Security-Policy-Report-Only lets you observe violations without enforcing them.
// GOOD — start in report-only mode, collect violations, then enforce
import express from 'express'

const app = express()

// Phase 1: Report-only — understand what would break
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    [
      "default-src 'self'",
      "script-src 'self'",
      "style-src 'self' 'unsafe-inline'", // tighten after audit
      "img-src 'self' data: https:",
      "font-src 'self'",
      "connect-src 'self'",
      "frame-ancestors 'none'",
      // report-uri is deprecated in favor of report-to, but still widely supported
      'report-uri /csp-violations',
    ].join('; '),
  )
  next()
})

// Collect violation reports to understand what the policy breaks
app.post(
  '/csp-violations',
  express.json({ type: 'application/csp-report' }),
  (req, res) => {
    const report = req.body['csp-report']
    console.warn({
      event: 'csp.violation',
      blockedUri: report['blocked-uri'],
      violatedDirective: report['violated-directive'],
      documentUri: report['document-uri'],
    })
    res.status(204).send()
  },
)
Nonce-based CSP: the right way to allow inline scripts
If your app needs inline scripts (common with server-side rendering), the correct approach is nonces — not unsafe-inline.
// GOOD — generate a per-request nonce, attach it to both the header and the script tag
import { randomBytes } from 'crypto'

app.use((req, res, next) => {
  const nonce = randomBytes(16).toString('base64')
  res.locals.nonce = nonce // available to templates
  res.setHeader(
    'Content-Security-Policy',
    [
      "default-src 'self'",
      `script-src 'self' 'nonce-${nonce}'`,
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
      "connect-src 'self'",
      "frame-ancestors 'none'",
      "base-uri 'self'",
      "form-action 'self'",
    ].join('; '),
  )
  next()
})

// In your HTML template — the nonce ties this specific script to the policy
// <script nonce="<%= nonce %>">
//   window.__INITIAL_STATE__ = { ... }
// </script>
//
// Any injected script without the correct nonce is blocked by the browser,
// even if the attacker can inject raw HTML
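As the directive list grows, assembling the header from a map can keep it readable and make per-request values like the nonce easy to splice in. A small sketch (the `buildCsp` helper is hypothetical, not part of any library):

```typescript
// Build a CSP header value from a directive map. Valueless directives
// (e.g. upgrade-insecure-requests) are emitted as the bare name.
function buildCsp(directives: Record<string, string[]>): string {
  return Object.entries(directives)
    .map(([name, values]) => (values.length ? `${name} ${values.join(' ')}` : name))
    .join('; ')
}
```

Usage: `buildCsp({ "default-src": ["'self'"], "script-src": ["'self'", `'nonce-${nonce}'`] })` yields the same string you would otherwise join by hand.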
For Next.js apps, nonces can be set in middleware:
// GOOD — Next.js middleware generates a nonce for every request
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  // Middleware can run on the Edge runtime, where Node's crypto module
  // isn't available — use the Web Crypto API instead
  const nonce = Buffer.from(crypto.randomUUID()).toString('base64')
  const cspHeader = [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}' 'strict-dynamic'`,
    "style-src 'self' 'unsafe-inline'",
    "img-src 'self' blob: data:",
    "font-src 'self'",
    "connect-src 'self'",
    "frame-ancestors 'none'",
    "base-uri 'self'",
    "form-action 'self'",
  ].join('; ')
  const requestHeaders = new Headers(request.headers)
  requestHeaders.set('x-nonce', nonce)
  const response = NextResponse.next({ request: { headers: requestHeaders } })
  response.headers.set('Content-Security-Policy', cspHeader)
  return response
}
Key takeaway: `unsafe-inline` and `unsafe-eval` nullify your CSP for scripts — they exist only to keep broken apps working, not for new code. Use `Content-Security-Policy-Report-Only` first to understand what your policy would block. Then move to enforcement using nonces for inline scripts. The `strict-dynamic` keyword lets trusted scripts load further scripts without expanding the allowlist.
Closing: Five Parts, One Practice
Five parts. Over forty code examples. XSS, SQL injection, broken auth, CSRF, rate limiting, dependency scanning, CORS, security headers, SSRF, session attacks, logging, Docker hardening, OAuth, WebSockets, and CSP.
If it feels like a lot — it is. But there's a pattern across every one of these vulnerabilities:
They happen when developers trust input they shouldn't trust, or skip validation steps because nothing broke during development.
XSS is trusting HTML from users. SQL injection is trusting query fragments. OAuth login CSRF is trusting a callback URL without a nonce. WebSocket hijacking is trusting a connection without checking its origin. The security bugs change — the underlying failure is the same.
The checklist below is a final consolidation. Use it as a pre-launch review, a code review checklist, or a reference when something goes wrong.
Final Series Checklist
Input & Output
- All user-supplied HTML is escaped or rendered in sandboxed contexts (Part 1)
- All database queries use parameterized statements or an ORM (Part 1)
- File uploads validate MIME type, extension, and size (Part 2)
- Incoming data is validated against a strict schema (Zod, Joi, etc.) (Part 3)
- All WebSocket messages are validated and size-limited (Part 5)
Authentication & Sessions
- Passwords hashed with bcrypt, scrypt, or Argon2 (Part 1)
- Session ID regenerated immediately after login (Part 4)
- Short rolling session TTL with httpOnly, Secure, SameSite cookies (Parts 1 & 4)
- OAuth flows include a `state` parameter (Part 5)
- OAuth `redirect_uri` is hardcoded, never user-supplied (Part 5)
- OIDC ID tokens verified against provider JWKS (Part 5)
Network & Protocol
- CORS allowlist with explicit origins — no `origin: '*'` on authenticated routes (Part 3)
- CSRF protection via SameSite cookies or CSRF tokens (Part 2)
- Security headers set: HSTS, X-Content-Type-Options, X-Frame-Options (Part 3)
- Open redirects blocked — post-login destinations validated (Part 3)
- User-supplied URLs blocked from internal IP ranges (SSRF) (Part 4)
- WebSocket origin validated during HTTP upgrade (Part 5)
- WebSocket connections authenticated before upgrade completes (Part 5)
Secrets & Configuration
- No secrets in source code, no environment files committed to git (Part 2)
- Docker containers run as non-root user (Part 4)
- Container images scanned for CVEs in CI (Part 4)
- CSP header set — at minimum with `frame-ancestors 'none'` (Parts 3 & 5)
- CSP uses nonces instead of `unsafe-inline` for scripts (Part 5)
Visibility
- Failed logins, permission denials, and admin actions logged (Part 4)
- Alerts configured for anomalous activity (brute force, mass failures) (Part 4)
- CSP violation reporting configured to surface injection attempts (Part 5)
- Dependencies audited regularly — `npm audit`, Dependabot (Part 2)
Security isn't a checklist you complete — it's a practice you maintain. The threat landscape shifts. New vulnerabilities are discovered in packages you already trust. Protocols you implement correctly today can be misconfigured by the next engineer to touch the code.
What carries you through that is not any single fix, but the habit of asking: who controls this input, what happens if I trust it, and what's the worst case if I'm wrong?
Ask that question consistently, and most of these vulnerabilities never get written in the first place.
This post was written with the assistance of AI to help articulate the author's own views, knowledge, and experiences.