security · backend · nodejs · typescript

Web Security Mistakes Every Developer Makes — Part 2

CSRF, rate limiting, dependency vulnerabilities, insecure file uploads, and secrets management — the next set of security basics you need to know.

February 20, 2026 · 11 min read

Part 1 covered the fundamentals: XSS, SQL injection, broken authentication, and sensitive data exposure. If you haven't read it, start there — those four mistakes alone account for the majority of real-world breaches.

Part 2 covers the next layer. These vulnerabilities are slightly less obvious than SQL injection, but equally dangerous. They're the ones that slip through code review because they look fine in isolation — the endpoint works, the feature ships, and the attack surface quietly opens up.


1. CSRF (Cross-Site Request Forgery)

What it is: An attacker tricks an already-authenticated user's browser into making an unwanted request to your app. The user is logged in to your site. The attacker lures them to a malicious page. That page fires a POST to your API using the user's cookies — because the browser attaches them automatically.

Vulnerable code

// BAD — accepts POST with no CSRF protection
// User is logged in via cookie-based auth
app.post('/api/transfer', async (req, res) => {
  await transferMoney(req.user.id, req.body.to, req.body.amount)
  res.json({ success: true })
})

The attacker's page barely needs JavaScript — a plain HTML form, auto-submitted with one line of script, does the job:

<!-- Attacker's page — auto-submits on load -->
<form action="https://yourapp.com/api/transfer" method="POST">
  <input type="hidden" name="to" value="attacker-account" />
  <input type="hidden" name="amount" value="10000" />
</form>
<script>
  document.forms[0].submit()
</script>

The victim visits the page, the form submits, the money moves. Their session cookie is sent automatically by the browser.

Fixed code

// GOOD — Option 1: SameSite cookies (modern default, simplest)
res.cookie('token', jwt, {
  httpOnly: true,
  secure: true,
  sameSite: 'strict', // browser will NOT send this cookie from cross-origin requests
  maxAge: 86400000,
})

// GOOD — Option 2: CSRF token (traditional, works for older browser support)
import { doubleCsrf } from 'csrf-csrf'

const { doubleCsrfProtection, generateToken } = doubleCsrf({
  getSecret: () => requireEnv('CSRF_SECRET'), // requireEnv is defined in section 5
  cookieName: '__csrf',
  cookieOptions: { sameSite: 'strict', secure: true },
})

app.use(doubleCsrfProtection)

app.post('/api/transfer', async (req, res) => {
  // csrf-csrf validates the token automatically — invalid token = 403
  await transferMoney(req.user.id, req.body.to, req.body.amount)
  res.json({ success: true })
})

// GOOD — Option 3: Origin/Referer header check (defense in depth)
const ALLOWED_ORIGINS = ['https://yourapp.com']

app.use((req, res, next) => {
  if (
    req.method !== 'GET' &&
    !ALLOWED_ORIGINS.includes(req.headers.origin ?? '')
  ) {
    return res.status(403).json({ error: 'Forbidden' })
  }
  next()
})

sameSite: 'strict' is the simplest modern fix — the browser won't include the cookie on any cross-site request, including the auto-submitted form above. Be aware that 'strict' also drops the cookie on top-level navigation from other sites, so users following an external link arrive logged out; 'lax' allows those GET navigations while still blocking cross-site POSTs. Combine either with Origin header validation for defense in depth.

Key takeaway: Set sameSite: 'strict' on all auth cookies. For legacy support or extra safety, add CSRF tokens. Never rely on secret URLs or body-only checks.
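If you're curious what the library is doing, the double-submit pattern reduces to a few lines of HMAC crypto. A minimal hand-rolled sketch, assuming HMAC-SHA256 tokens and a server-side secret — use a maintained library like csrf-csrf in production, since this omits cookie binding and rotation:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'crypto'

// Token = random nonce + HMAC of that nonce, so the server can verify it
// without storing per-session state.
function createCsrfToken(secret: string): string {
  const nonce = randomBytes(16).toString('hex')
  const mac = createHmac('sha256', secret).update(nonce).digest('hex')
  return `${nonce}.${mac}`
}

function verifyCsrfToken(secret: string, token: string): boolean {
  const [nonce, mac] = token.split('.')
  if (!nonce || !mac) return false
  const expected = createHmac('sha256', secret).update(nonce).digest('hex')
  if (mac.length !== expected.length) return false
  // Constant-time comparison to avoid leaking the MAC byte by byte
  return timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
}
```

The server sets the token in a cookie and expects it echoed back in a header or form field. An attacker's page can't read your cookies cross-origin, so it can't echo the token.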


2. Missing Rate Limiting

What it is: No limit on how many requests a client can send means your login endpoint accepts 10,000 password attempts per minute. That's credential stuffing and brute force on a silver platter.

Vulnerable code

// BAD — no rate limiting on login
app.post('/api/login', async (req, res) => {
  const { email, password } = req.body
  const user = await findUserByEmail(email)
  if (!user) return res.status(401).json({ error: 'Invalid credentials' })

  const valid = await bcrypt.compare(password, user.passwordHash)
  if (!valid) return res.status(401).json({ error: 'Invalid credentials' })

  // An attacker can try tens of thousands of passwords here
  // Nothing stops them
  issueSession(res, user)
})

Fixed code

// GOOD — rate limit by IP, backed by Redis for distributed environments
async function checkRateLimit(
  key: string,
  maxAttempts: number,
  windowSeconds: number,
): Promise<boolean> {
  const count = await redis.incr(key)
  if (count === 1) await redis.expire(key, windowSeconds)
  return count <= maxAttempts
}

app.post('/api/login', async (req, res) => {
  const ipKey = `login:ip:${req.ip}`
  const emailKey = `login:email:${req.body.email}`

  // Rate limit by IP (stops broad brute force)
  if (!(await checkRateLimit(ipKey, 20, 60))) {
    return res
      .status(429)
      .json({ error: 'Too many attempts. Try again later.' })
  }

  // Rate limit by account (stops distributed credential stuffing)
  if (!(await checkRateLimit(emailKey, 5, 300))) {
    return res
      .status(429)
      .json({ error: 'Account temporarily locked. Try again in 5 minutes.' })
  }

  const { email, password } = req.body
  const user = await findUserByEmail(email)
  if (!user) return res.status(401).json({ error: 'Invalid credentials' })

  const valid = await bcrypt.compare(password, user.passwordHash)
  if (!valid) return res.status(401).json({ error: 'Invalid credentials' })

  issueSession(res, user)
})

Rate limiting only the login route isn't enough. Apply it to registration, password reset, OTP verification, and any public endpoint that accepts user input. For APIs, 100 requests per minute per key is a reasonable default — adjust based on actual usage patterns.

If you're using Express, express-rate-limit with a Redis store handles the boilerplate. For edge runtimes, Upstash's rate limiting library works well without maintaining a Redis instance.
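Whichever library you pick, the underlying fixed-window logic is small. A minimal in-memory sketch of a reusable limiter factory so the same policy can be stamped onto login, registration, password reset, and so on — illustrative only, since a single-process Map won't survive restarts or scale across instances the way the Redis version above does:

```typescript
type Limiter = (key: string, now?: number) => boolean

// Returns a function that answers "is this key still under its limit?"
// using a fixed window that resets windowMs after the first hit.
function makeLimiter(maxAttempts: number, windowMs: number): Limiter {
  const windows = new Map<string, { start: number; count: number }>()
  return (key, now = Date.now()) => {
    const w = windows.get(key)
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 }) // new window
      return true
    }
    w.count += 1
    return w.count <= maxAttempts
  }
}

// One policy per endpoint class, mirroring the limits in the text
const loginLimiter = makeLimiter(5, 5 * 60 * 1000)   // 5 per 5 minutes
const publicApiLimiter = makeLimiter(100, 60 * 1000) // 100 per minute
```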

Key takeaway: Rate limit by IP and by account. Login: 5 attempts per 5 minutes. All public endpoints: set a ceiling. Use Redis so limits survive restarts and scale across instances.


3. Dependency Vulnerabilities

What it is: Your app is only as secure as its weakest dependency. A typical Node.js project pulls in hundreds of transitive packages. Any one of them can carry a known CVE — or worse, a malicious update pushed by a compromised maintainer.

The problem

# Your package.json looks clean, but what about everything it depends on?
npm audit

# found 12 vulnerabilities (3 critical, 5 high, 4 moderate)
#
# critical: Prototype pollution in lodash < 4.17.21
# critical: ReDoS in path-to-regexp < 0.1.12
# high:     Redirect to UNIX socket in got < 11.8.5

These aren't hypothetical. Prototype pollution in lodash and the path-to-regexp ReDoS vulnerability both affected real production applications.

Fixed approach

# Check what you're running
npm audit

# Fix what's safe to auto-fix
npm audit fix

# Review what needs manual attention
npm audit fix --force  # bumps major versions — verify nothing breaks

# In your CI pipeline (GitHub Actions example)
- name: Security audit
  run: npm audit --audit-level=high
  # Fails the build on high or critical vulnerabilities

Automated tools go further than manual auditing:

  • GitHub Dependabot — free, creates PRs automatically when vulnerabilities are published
  • Snyk — deeper analysis, monitors continuously, integrates into CI
  • Socket.dev — detects supply chain attacks (compromised maintainer accounts, malicious package updates)

A few habits that reduce your exposure:

  • Use a lockfile. package-lock.json or pnpm-lock.yaml pins exact versions. Without it, npm install can pull a different (potentially compromised) version tomorrow than it did today.
  • Review what you install. Check download counts, last publish date, number of maintainers. A package with one maintainer, last updated in 2019, and 50 weekly downloads is a risk.
  • Don't install packages for trivial tasks. left-pad famously broke the internet when it was unpublished. If the utility fits in 5 lines, write it yourself.
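In CI, pair the lockfile with npm ci rather than npm install — npm ci installs exactly the versions the lockfile pins and fails loudly if package.json and package-lock.json have drifted, where npm install would silently rewrite the lockfile:

```shell
# GitHub Actions step (same style as the audit step above)
- name: Install dependencies
  run: npm ci
  # Installs exactly what package-lock.json pins; errors out
  # if package.json and the lockfile disagree
```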

Key takeaway: Run npm audit in CI and fail on high/critical. Enable Dependabot. Use lockfiles. Treat your dependency tree as part of your attack surface.


4. Insecure File Uploads

What it is: Accepting file uploads without validation lets attackers upload executable scripts, malformed files that trigger parser vulnerabilities, or filenames designed to escape your upload directory.

Vulnerable code

// BAD — accepts anything, saves with the user-provided filename
import multer from 'multer'

const upload = multer({ dest: 'uploads/' })

app.post('/upload', upload.single('file'), (req, res) => {
  const filePath = `uploads/${req.file!.originalname}`
  fs.renameSync(req.file!.path, filePath)
  res.json({ url: `/uploads/${req.file!.originalname}` })
})

// Attack vectors:
// originalname = "../../../etc/passwd"        → path traversal
// originalname = "shell.php"                  → executable upload
// file size = 4GB                             → disk exhaustion
// mimetype header = "image/png", actual = PE  → type confusion

Fixed code

// GOOD — validate type, limit size, generate a safe filename
import { nanoid } from 'nanoid'
import fs from 'fs'
import path from 'path'
import multer from 'multer'

const ALLOWED_TYPES = new Set([
  'image/jpeg',
  'image/png',
  'image/webp',
  'application/pdf',
])
const MAX_FILE_SIZE = 5 * 1024 * 1024 // 5MB

const upload = multer({
  dest: 'tmp/',
  limits: { fileSize: MAX_FILE_SIZE },
})

app.post('/upload', upload.single('file'), (req, res) => {
  const file = req.file
  if (!file) return res.status(400).json({ error: 'No file provided' })

  // Validate MIME type against allowlist
  if (!ALLOWED_TYPES.has(file.mimetype)) {
    fs.unlinkSync(file.path) // clean up temp file
    return res.status(400).json({ error: 'File type not allowed' })
  }

  // Never use the original filename — generate a random one
  const ext = path
    .extname(file.originalname)
    .toLowerCase()
    .replace(/[^.a-z0-9]/g, '')
  const safeName = `${nanoid()}${ext}`
  const finalPath = path.join('uploads', safeName) // path.join prevents traversal

  fs.renameSync(file.path, finalPath)
  res.json({ url: `/uploads/${safeName}` })
})

Two additional layers worth adding:

Validate actual file content, not just the header. Browsers send the Content-Type header, but attackers can set it to anything. Use a library like file-type to read the magic bytes from the file itself and confirm it matches what was claimed.
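file-type covers dozens of formats; to make the idea concrete, here's a hand-rolled sketch of magic-byte sniffing for the types our allowlist accepts — illustrative only, since real PNG/JPEG detection has more edge cases than a prefix check:

```typescript
// Known leading-byte signatures (assumed subset, for illustration)
const MAGIC: Record<string, number[]> = {
  'image/png': [0x89, 0x50, 0x4e, 0x47],       // \x89PNG
  'image/jpeg': [0xff, 0xd8, 0xff],            // JPEG SOI marker
  'application/pdf': [0x25, 0x50, 0x44, 0x46], // %PDF
}

// Inspect the actual bytes instead of trusting the Content-Type header
function sniffMime(firstBytes: Buffer): string | null {
  for (const [mime, sig] of Object.entries(MAGIC)) {
    if (sig.every((byte, i) => firstBytes[i] === byte)) return mime
  }
  return null
}

// Reject when the claimed MIME type and the sniffed type disagree
function contentMatchesClaim(firstBytes: Buffer, claimed: string): boolean {
  return sniffMime(firstBytes) === claimed
}
```

Read the first few bytes of the uploaded temp file and run them through a check like this before moving the file into place.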

Serve uploads from a separate domain or CDN bucket. If an attacker does get a malicious file uploaded, serving it from cdn.yourapp.com instead of yourapp.com means it can't access your app's cookies or localStorage — a critical isolation boundary.

Key takeaway: Whitelist file types. Limit file size at the framework level. Generate random filenames — never use the original. Serve user-uploaded content from a separate origin.


5. Secrets Management

What it is: API keys, database passwords, and signing secrets hardcoded in source code or accidentally committed to git. Once a secret hits a git commit, it's compromised — even if you delete it in the next commit. Git history is forever.

Vulnerable code

// BAD — hardcoded secrets ship in your source code
import Stripe from 'stripe'
import { Client } from 'pg'

const stripe = new Stripe('sk_live_AbCdEfGhIjKlMnOpQrStUvWx')

const db = new Client({
  host: 'prod-db.internal',
  password: 'hunter2_production_2024',
})

This appears in every developer's clone, every CI run, every Docker image layer, and git history for the lifetime of the project.

Fixed code

// GOOD — environment variables, validated at startup
import Stripe from 'stripe'
import { Client } from 'pg'

function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing required environment variable: ${name}`)
  return value
}

const stripe = new Stripe(requireEnv('STRIPE_SECRET_KEY'))

const db = new Client({
  host: requireEnv('DB_HOST'),
  password: requireEnv('DB_PASSWORD'),
})

Failing loudly at startup (throw new Error) is better than failing silently with undefined — you catch misconfiguration immediately instead of at runtime when a user hits a payment flow.
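One step further than requireEnv: validate the whole environment in one pass, so a misconfigured deploy reports every missing variable at once instead of failing on the first. A sketch — the env parameter is injectable purely to make the logic testable, and defaults to process.env:

```typescript
// Collect all missing variables before throwing, so one deploy
// surfaces every configuration problem rather than one at a time.
function loadConfig(
  required: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = required.filter((name) => !env[name])
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(', ')}`,
    )
  }
  return Object.fromEntries(required.map((name) => [name, env[name]!]))
}

// Call once at startup, before anything else initializes:
// const config = loadConfig(['STRIPE_SECRET_KEY', 'DB_HOST', 'DB_PASSWORD'])
```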

Beyond environment variables:

# Scan for secrets before they're committed
# .git/hooks/pre-commit or CI step
npx trufflehog filesystem . --only-verified
# or
git-secrets --scan

# If you use GitHub Actions, Gitleaks runs as a workflow step:
# - uses: gitleaks/gitleaks-action@v2

The hierarchy for secret storage in production:

  1. Platform-managed secrets — GitHub Actions Secrets, Vercel Environment Variables, AWS SSM Parameter Store. Encrypted at rest, never in plaintext on disk.
  2. Secrets manager — AWS Secrets Manager, HashiCorp Vault. Adds rotation, audit logs, and fine-grained access control.
  3. .env files — acceptable for local development only. Add .env, .env.local, and .env.*.local to .gitignore before your first commit.

If a secret is committed: rotate it immediately. Don't just delete the file — the secret is in git history and may have been indexed already by GitHub's secret scanning or other tools. Treat it as compromised the moment it hit a remote.

Key takeaway: All secrets in environment variables, never in source code. Validate at startup. Scan for secrets in CI. If it's in git history, rotate it now.


Part 2 Checklist

Before shipping any feature that involves state-changing requests, public endpoints, file handling, or external service credentials:

  • Auth cookies set sameSite: 'strict' (or 'lax' minimum)
  • CSRF token or Origin header validation on all state-changing endpoints
  • Rate limiting on login, registration, password reset, and all public endpoints
  • Rate limits enforced by account and by IP
  • npm audit runs in CI — build fails on high/critical vulnerabilities
  • Dependabot or Renovate configured for automated dependency updates
  • File uploads: MIME type allowlist, size limit, random filename generation
  • User-uploaded files served from a separate origin or CDN
  • No hardcoded secrets — all credentials in environment variables
  • .env files in .gitignore, secret scanning in CI

What's Next

Security isn't about being paranoid. It's about being disciplined. The vulnerabilities in Part 1 and Part 2 aren't exotic attack vectors — they're the standard checklist attackers run through when probing a new target. Get these right, and you've closed the door on the vast majority of opportunistic attacks.

Coming in Part 3: CORS misconfiguration, HTTP security headers, logging and monitoring for security events, and how to structure access control so it doesn't become the next thing you have to fix in production.


This post was written with the assistance of AI to help articulate the author's own views, knowledge, and experiences.