Part 1 secured the basics — XSS, SQL injection, weak passwords, data exposure. Part 2 went after the overlooked layers: CSRF, rate limiting, file uploads, secrets management. Part 3 tackled the invisible mistakes — CORS, security headers, open redirects, clickjacking, IDOR.
Part 4 steps back from the application code entirely. You can harden every endpoint, validate every input, and ship perfect business logic — and still get breached if your sessions leak, your server makes unsafe outbound requests, you can't see what's happening, or your Docker container runs as root.
The attack surface is bigger than your code. Let's look at the parts most developers never think about.
1. Session Fixation & Session Hijacking
What it is: Two related attacks on user sessions. Session fixation happens before login — an attacker sets a known session ID in the victim's browser, the user logs in, and now the attacker has a valid authenticated session. Session hijacking happens after login — the attacker steals an existing session token through XSS, network interception, or log leakage.
Both attacks share the same underlying problem: sessions are treated as permanent identifiers instead of ephemeral credentials.
Vulnerable code
// BAD — session ID stays the same after login
// If the attacker planted this session ID before login, they're now authenticated
app.post('/login', async (req, res) => {
  const user = await authenticate(req.body)
  if (!user) return res.status(401).json({ error: 'Invalid credentials' })
  req.session.userId = user.id // same session ID as the anonymous session
  res.json({ success: true })
})
// BAD — session has no expiry and no binding to the client
// A stolen token is valid forever, from anywhere
req.session.userId = user.id
// No TTL, no IP check, no user-agent check
Fixed code
// GOOD — regenerate session ID after successful authentication
app.post('/login', async (req, res) => {
  const user = await authenticate(req.body)
  if (!user) return res.status(401).json({ error: 'Invalid credentials' })
  // Destroy the pre-login session, create a fresh one
  req.session.regenerate((err) => {
    if (err) return res.status(500).json({ error: 'Session error' })
    req.session.userId = user.id
    req.session.ip = req.ip
    req.session.userAgent = req.headers['user-agent']
    req.session.createdAt = Date.now()
    req.session.save((saveErr) => {
      if (saveErr) return res.status(500).json({ error: 'Session error' })
      res.json({ success: true })
    })
  })
})
// GOOD — validate session integrity on each request
function validateSession(req: Request, res: Response, next: NextFunction) {
  if (!req.session.userId) return next() // not authenticated
  const ipMismatch = req.session.ip && req.session.ip !== req.ip
  const agentMismatch =
    req.session.userAgent && req.session.userAgent !== req.headers['user-agent']
  if (ipMismatch || agentMismatch) {
    // Session token is being used from a different client — likely stolen
    req.session.destroy(() => {})
    return res.status(401).json({ error: 'Session invalid' })
  }
  next()
}
app.use(validateSession)
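The login handler above stores `createdAt`, but the validation middleware never checks it. A small helper can enforce an absolute lifetime on top of the rolling TTL, so even an actively used session eventually expires. This is a sketch; the 8-hour cap is an assumption, so pick a value that fits your app:

```typescript
// Sketch: cap total session age regardless of activity.
// The 8-hour lifetime here is an assumed value, not a recommendation.
const ABSOLUTE_SESSION_LIFETIME_MS = 8 * 60 * 60 * 1000

function isSessionExpired(createdAt: number, now: number = Date.now()): boolean {
  // The rolling maxAge handles inactivity; this handles long-lived sessions
  return now - createdAt > ABSOLUTE_SESSION_LIFETIME_MS
}
```

Call it inside `validateSession` alongside the IP and user-agent checks, destroying the session when it returns true.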
For session store configuration, set a short TTL and rolling expiry:
// GOOD — short-lived sessions with rolling expiry
app.use(
  session({
    secret: process.env.SESSION_SECRET!,
    resave: false,
    saveUninitialized: false,
    rolling: true, // reset TTL on each request
    cookie: {
      httpOnly: true,
      secure: true,
      sameSite: 'strict',
      maxAge: 30 * 60 * 1000, // 30 minutes of inactivity → expire
    },
  }),
)
Key takeaway: Call `req.session.regenerate()` immediately after login — not optional. Bind sessions to IP and user-agent (note that strict IP binding can log out legitimate users whose IP changes mid-session, e.g. on mobile networks). Set a short rolling TTL. A stolen token should be useless after a few minutes of inactivity.
2. Server-Side Request Forgery (SSRF)
What it is: Your server accepts a URL from the user and makes an HTTP request to it. The attacker provides an internal URL — your cloud metadata endpoint, an internal admin panel, a database management interface — and your server dutifully fetches it for them.
This is one of the most dangerous vulnerabilities in cloud environments. The AWS EC2 metadata endpoint at http://169.254.169.254/latest/meta-data/ returns IAM credentials with no authentication required (under IMDSv1; IMDSv2 mitigates this by requiring a session token header). If your server will fetch any URL, an attacker can use it to retrieve the instance's IAM role credentials, which are often enough to compromise much of your AWS account.
Vulnerable code
// BAD — fetches any URL the user supplies
app.post('/api/preview', async (req, res) => {
  const { url } = req.body
  const response = await fetch(url)
  const html = await response.text()
  res.json({ preview: html })
  // Attacker sends: { "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials/" }
  // Response contains AWS access key, secret key, session token
})
// BAD — URL validation that's easy to bypass
function isAllowed(url: string): boolean {
  return !url.includes('169.254') && !url.includes('localhost')
  // Bypasses: http://[::1]/, http://0177.0.0.1/, http://2130706433/
  // Any IP representation that avoids the string match
}
String matching against known bad values is not enough. IP addresses have many representations, and DNS rebinding lets an attacker register a hostname that resolves to a public IP during validation but switches to 127.0.0.1 when the actual request fires.
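To see why the string check fails, note that the decimal form from the comment above is just loopback in disguise. A quick illustrative sketch that decodes a 32-bit integer into dotted-quad form:

```typescript
// 2130706433 is 127.0.0.1 written as a single 32-bit integer (0x7F000001).
// The string contains neither "localhost" nor "169.254", yet it reaches
// the local machine when used as a URL host.
function decimalToDotted(ip: number): string {
  return [24, 16, 8, 0].map((shift) => (ip >>> shift) & 0xff).join('.')
}

console.log(decimalToDotted(2130706433)) // "127.0.0.1"
```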
Fixed code
// GOOD — resolve the hostname first, then check the IP
import { URL } from 'url'
import dns from 'dns/promises'
import ipRangeCheck from 'ip-range-check'
const PRIVATE_RANGES = [
  '10.0.0.0/8',
  '172.16.0.0/12',
  '192.168.0.0/16',
  '127.0.0.0/8',
  '169.254.0.0/16', // link-local / AWS metadata
  '::1/128',
  'fc00::/7',
]
async function isSafeUrl(urlString: string): Promise<boolean> {
  try {
    const url = new URL(urlString)
    // Only allow http and https
    if (!['http:', 'https:'].includes(url.protocol)) return false
    // Resolve the hostname to an IP address
    const { address } = await dns.lookup(url.hostname)
    // Reject private and reserved IP ranges
    if (ipRangeCheck(address, PRIVATE_RANGES)) return false
    return true
  } catch {
    return false
  }
}
app.post('/api/preview', async (req, res) => {
  const { url } = req.body
  if (!(await isSafeUrl(url))) {
    return res.status(400).json({ error: 'URL not allowed' })
  }
  // Note: fetch re-resolves the hostname, so a DNS-rebinding attacker could
  // still swap records between the check and the request. For full protection,
  // connect to the IP address validated above instead of re-resolving.
  const response = await fetch(url)
  const html = await response.text()
  res.json({ preview: html })
})
If your use case allows it, a domain allowlist is even stronger:
// GOOD — allowlist is the most restrictive and safest option
const ALLOWED_DOMAINS = ['github.com', 'example.com']
function isAllowedDomain(urlString: string): boolean {
  try {
    const { hostname } = new URL(urlString)
    return ALLOWED_DOMAINS.some(
      (d) => hostname === d || hostname.endsWith(`.${d}`),
    )
  } catch {
    return false
  }
}
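Exercising the allowlist against a few hypothetical inputs shows it resists lookalike hosts; the helper is reproduced here so the sketch is self-contained:

```typescript
const ALLOWED_DOMAINS = ['github.com', 'example.com']

function isAllowedDomain(urlString: string): boolean {
  try {
    const { hostname } = new URL(urlString)
    // Exact match, or a true subdomain (the leading dot blocks lookalikes)
    return ALLOWED_DOMAINS.some(
      (d) => hostname === d || hostname.endsWith(`.${d}`),
    )
  } catch {
    return false
  }
}

console.log(isAllowedDomain('https://api.github.com/repos')) // true: subdomain
console.log(isAllowedDomain('https://github.com.evil.com/')) // false: lookalike
console.log(isAllowedDomain('not a url')) // false: unparseable
```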
Key takeaway: Never fetch user-supplied URLs without validation. Resolve hostnames before making requests — check the resolved IP, not the string. Block all private IP ranges, including `169.254.0.0/16`. In cloud environments, SSRF can escalate to full account compromise.
3. Security Logging & Monitoring
What it is: When a breach happens, the first question is always "how long has this been going on?" Most apps can't answer that. They log application errors, not security events. There's no record of who tried to log in, from where, how often, or what they accessed.
Without security logging, you're defending blind. The attacker is already inside and you won't know until a user reports something wrong — or you see it in the news.
Vulnerable code
// BAD — generic error logging, no security context
app.post('/login', async (req, res) => {
  const user = await authenticate(req.body)
  if (!user) {
    return res.status(401).json({ error: 'Invalid credentials' })
    // No record: who tried? from where? how many times? with what email?
  }
  res.json({ token: generateToken(user) })
})
app.get('/api/admin/users', requireAdmin, async (req, res) => {
  const users = await db.user.findMany()
  res.json(users)
  // No record: which admin accessed this? when? what did they download?
})
Fixed code
// GOOD — structured security event logging
import pino from 'pino'
const logger = pino({ level: 'info' })
app.post('/login', async (req, res) => {
  const { email } = req.body
  const user = await authenticate(req.body)
  if (!user) {
    logger.warn({
      event: 'auth.login.failed',
      email,
      ip: req.ip,
      userAgent: req.headers['user-agent'],
      timestamp: new Date().toISOString(),
    })
    return res.status(401).json({ error: 'Invalid credentials' })
  }
  logger.info({
    event: 'auth.login.success',
    userId: user.id,
    ip: req.ip,
    timestamp: new Date().toISOString(),
  })
  res.json({ token: generateToken(user) })
})
app.delete('/api/admin/users/:id', requireAdmin, async (req, res) => {
  const targetId = req.params.id
  await db.user.delete({ where: { id: targetId } })
  // Admin actions must always be logged
  logger.info({
    event: 'admin.user.deleted',
    actorId: req.user.id,
    targetUserId: targetId,
    ip: req.ip,
    timestamp: new Date().toISOString(),
  })
  res.json({ success: true })
})
What to always log:
| Event | Required fields |
|---|---|
| Failed login | email, IP, user-agent, timestamp |
| Successful login | userId, IP, timestamp |
| Permission denied | userId, resource, action, timestamp |
| Rate limit triggered | IP, endpoint, timestamp |
| Admin action | actorId, action, targetId, timestamp |
| Password change/reset | userId, IP, timestamp |
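One way to keep those required fields consistent is a small wrapper that stamps every event. This sketch logs plain JSON lines with `console.log` so it stands alone; in a real app you would swap in the pino logger from the examples above:

```typescript
// Sketch: one helper so the timestamp is never forgotten.
// The event-name convention ("auth.login.failed" etc.) follows the examples above.
type SecurityEvent = { event: string } & Record<string, unknown>

function logSecurityEvent(e: SecurityEvent): SecurityEvent & { timestamp: string } {
  const record = { ...e, timestamp: new Date().toISOString() }
  console.log(JSON.stringify(record)) // swap for logger.warn(record) with pino
  return record // returned so callers and tests can inspect what was logged
}
```

Usage: `logSecurityEvent({ event: 'authz.permission.denied', userId, resource, action })` (the field names here are assumptions; use whatever your table of required fields dictates).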
Logging the events is only half the equation. You also need alerts for anomalies:
// GOOD — track failed login attempts and alert on suspicious patterns
const FAILED_LOGIN_THRESHOLD = 10
const WINDOW_SECONDS = 300 // 5 minutes
async function checkLoginAnomaly(ip: string): Promise<void> {
  const key = `failed_logins:${ip}`
  const count = await redis.incr(key)
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS)
  }
  if (count >= FAILED_LOGIN_THRESHOLD) {
    logger.error({
      event: 'security.alert.brute_force',
      ip,
      failedAttempts: count,
      window: `${WINDOW_SECONDS}s`,
      timestamp: new Date().toISOString(),
    })
    // Trigger alert: PagerDuty, Slack webhook, email — whatever you use
    await sendSecurityAlert(
      `Brute force suspected from ${ip}: ${count} failed logins in ${WINDOW_SECONDS}s`,
    )
  }
}
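If Redis isn't available (single-process apps, local development, tests), the same windowed counter can be sketched in memory. Note this assumed variant doesn't survive restarts and doesn't work across multiple instances, which is why the Redis version is preferable in production:

```typescript
// Sketch: in-memory fixed-window counter for failed logins per IP.
// Single-process only; state is lost on restart.
const failedLogins = new Map<string, { count: number; windowStart: number }>()
const FAILED_LOGIN_THRESHOLD = 10
const WINDOW_MS = 300_000 // 5 minutes

// Returns true when the threshold is reached and an alert should fire
function recordFailedLogin(ip: string, now: number = Date.now()): boolean {
  const entry = failedLogins.get(ip)
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    failedLogins.set(ip, { count: 1, windowStart: now }) // start a new window
    return false
  }
  entry.count++
  return entry.count >= FAILED_LOGIN_THRESHOLD
}
```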
For log storage, structured logs (JSON) shipped to a centralized service beat flat text files. Datadog, Grafana Loki, and CloudWatch all support querying structured fields — you can ask "show me all logins from IPs that had 5+ failures in the last hour" in seconds.
Key takeaway: Log all security events with structured data — not just errors. Set up alerts for brute force attempts, permission violations, and unusual admin activity. If you can't answer "who accessed this and when" within 5 minutes of a reported incident, your logging is insufficient.
4. Docker Container Security
What it is: Your Docker container is part of your attack surface. A misconfigured image can expose secrets, run privileged processes, or make lateral movement trivial if the container is compromised. Most developers treat the Dockerfile as an afterthought — a way to get the app running, not a security boundary.
Running as root
# BAD — default Node images run as root
# If an attacker escapes the app, they have root inside the container
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
# GOOD — create and use a non-root user
FROM node:20-alpine
WORKDIR /app
# Alpine's busybox adduser takes short flags (-S system account, -G group)
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app package*.json ./
RUN npm ci --omit=dev
COPY --chown=app:app . .
USER app
CMD ["node", "server.js"]
The `--chown=app:app` on `COPY` is important — without it, files are copied as root and the non-root user can't read them.
Unpinned base images
# BAD — "latest" means your build is not reproducible
# A new base image release could introduce breaking changes or new vulnerabilities
FROM node:latest
# BAD — major version only, still unpredictable
FROM node:20
# GOOD — pinned to a specific version, reproducible and auditable
FROM node:20.11-alpine3.20
Alpine variants are significantly smaller than the full Debian-based images (40MB vs 400MB+). Smaller images mean smaller attack surface — fewer installed packages means fewer potential vulnerabilities.
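Multi-stage builds shrink the final image further by leaving compilers and dev dependencies behind in a throwaway stage. A sketch (the stage layout, the `dist` output path, and the npm scripts are assumptions about your project):

```dockerfile
# Build stage: dev dependencies and build tooling live only here
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production deps only, non-root user
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app package*.json ./
RUN npm ci --omit=dev
COPY --from=build --chown=app:app /app/dist ./dist
USER app
CMD ["node", "dist/server.js"]
```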
Leaking secrets through the build context
# BAD — copies everything, including .env files and credentials
COPY . .
Without a .dockerignore, your entire project directory goes into the build context — including .env files, SSH keys, editor configs, and anything else sitting in the root.
# .dockerignore — GOOD
.env
.env.*
.env.local
*.pem
*.key
.git
.github
node_modules
*.log
Also, never embed secrets as ENV or ARG in the Dockerfile itself — they're visible in docker inspect and embedded in image layers:
# BAD — secret baked into the image, visible in docker history
ENV DATABASE_URL=postgres://user:password@prod-db/mydb
# GOOD — inject at runtime via environment variables
# docker run -e DATABASE_URL=... myapp:latest
# or use Docker secrets / Kubernetes secrets
CMD ["node", "server.js"]
Not scanning images for vulnerabilities
Your base image pulls in dozens of OS packages. Those packages have CVEs. Without scanning, you ship known vulnerabilities into production and only find out when someone exploits them.
# Scan your image before deploying — both tools are free
docker scout cves myapp:latest
# or
trivy image myapp:latest
# Integrate into CI: fail the build on HIGH/CRITICAL CVEs
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
Set this up as a CI step that runs on every build. A failing scan should block the deploy — not generate a report nobody reads.
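As one possible shape for that CI step, here is a hypothetical GitHub Actions workflow using the `aquasecurity/trivy-action` wrapper (workflow and image names are placeholders; check the action's README for its current inputs):

```yaml
# .github/workflows/image-scan.yml (hypothetical example)
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image (fail on HIGH/CRITICAL)
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          exit-code: '1'
          severity: HIGH,CRITICAL
```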
Key takeaway: Run containers as non-root. Pin base image versions. Add a `.dockerignore` to exclude secrets and unnecessary files. Inject secrets at runtime, never bake them into images. Scan images in CI before every deploy.
Part 4 Checklist
Before deploying your next feature — does your session, server, observability, and container layer hold up?
- `req.session.regenerate()` called immediately after login
- Sessions bound to IP and user-agent, validated on every request
- Short rolling session TTL (30 minutes or less of inactivity)
- No user-supplied URLs fetched without IP validation (SSRF)
- Private IP ranges blocked: `10.x`, `172.16-31.x`, `192.168.x`, `127.x`, `169.254.x`
- DNS resolution checked before making outbound requests
- Failed logins logged with email, IP, user-agent, timestamp
- Admin actions logged with actor, target, and timestamp
- Alerts configured for brute force attempts and permission violations
- Docker containers run as non-root user
- Base images pinned to specific versions (not `latest`)
- `.dockerignore` excludes `.env` files, keys, and credentials
- Secrets injected at runtime — never baked into the image
- Container images scanned for CVEs in CI before deploy
Four parts in. The surface keeps expanding — and so does the checklist. But every item you cross off is a door you've closed.
There are still entire categories we haven't touched. Some of the most interesting vulnerabilities don't live in your HTTP handlers or your Dockerfile at all — they live in the trust relationships between your services, your tokens, and your users.
This post was written with the assistance of AI to help articulate the author's own views, knowledge, and experiences.