Design reviews are theatre.
Make them bite.

"We'll threat-model at the design review." Then nobody does. Or someone produces a STRIDE matrix that lives in Confluence and never gets tested. That's not security — it's a ceremony.

Here is what the "S" in SSDLC looks like when attack paths are the output: ten clusters, design-time interruptions, and a review that actually fails when a step has no control.

1. Ten clusters, one shared vocabulary

Every cyber threat exploits one of ten generic vulnerabilities. That's the whole taxonomy. A developer does not need all of it at once — only enough to stop arguing about what kind of bug this is and start arguing about where it sits in an attack path.

#    Cluster                Generic vulnerability
#1   Abuse of Functions     Scope of designed functionality
#2   Exploiting Server      Server-side code flaws
#3   Exploiting Client      Client-side code flaws
#4   Identity Theft         Weak identity–credential binding
#5   Man in the Middle      Insufficient transit protection
#6   Flooding Attack        Finite resource capacity
#7   Malware                Code-execution capability (FEC)
#8   Physical Attack        Physical accessibility
#9   Social Engineering     Human psychology
#10  Supply Chain Attack    Third-party trust reliance

Full definitions live on the 10 clusters page. The canonical source is the v2.1 white paper.

Two rules that bite developers most
  • One step = one cluster. Don't label a single action with two clusters. If two feel like they apply, you have a sequence — write it as #X → #Y.
  • #2 vs #3 turns on where the flaw executes, not on two different flaw types. A buffer overflow in a request parser is #2; the same bug in a PDF reader is #3. Same CWE, different cluster.
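The execution-locus rule is mechanical enough to write down. A minimal Python sketch; the function and argument names are ours, not part of the taxonomy:

```python
# Hedged sketch: the #2 vs #3 split depends on where the flawed code
# executes, not on the CWE. Names here are illustrative, not canonical.
def exec_cluster(executes_on: str) -> int:
    """Map a code-flaw exploit to cluster #2 or #3 by execution locus."""
    if executes_on == "server":
        return 2  # Exploiting Server: the flaw runs server-side
    if executes_on == "client":
        return 3  # Exploiting Client: same flaw class, client-side locus
    raise ValueError("execution locus must be 'server' or 'client'")

# Same buffer overflow, different cluster:
assert exec_cluster("server") == 2  # request parser
assert exec_cluster("client") == 3  # PDF reader
```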

2. Threat modeling that produces attack paths

A design review passes only when every step in a plausible attack path has a named interruption — a control, a check, a circuit-breaker. No interruption, no pass. This is the output the review owes the team: not a matrix, not a diagram — a sequence with holes that somebody actually plugged.

Common sequences worth reviewing on any web product:

  • Phishing to execution: #9 → #3 → #7
  • Credential theft to privileged misuse to execution: #9 → #4 → #1 → #7
  • Dependency compromise to execution on install: #10 ||[dev][@Vendor→@Org]|| → #7
  • Abuse-based amplification to volume exhaustion: #1 → #6
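Written as data, each sequence is just an ordered list of single-cluster steps. A Python sketch under assumed names (the dict keys and `path` helper are ours; trust-boundary annotations such as ||[dev][@Vendor→@Org]|| are deliberately elided here, since their semantics live in the white paper):

```python
# Sketch, assumed names: an attack path is an ordered list of cluster IDs,
# each step carrying exactly one cluster (rule: one step = one cluster).
VALID_CLUSTERS = set(range(1, 11))  # the ten generic vulnerabilities

def path(*clusters: int) -> list[int]:
    """Build a path, rejecting any step outside the ten clusters."""
    assert all(c in VALID_CLUSTERS for c in clusters), "unknown cluster"
    return list(clusters)

PATHS = {
    "phishing_to_execution": path(9, 3, 7),
    "cred_theft_to_priv_misuse": path(9, 4, 1, 7),
    "dependency_compromise": path(10, 7),  # boundary crossing elided
    "abuse_amplification": path(1, 6),
}
```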

The design-review interruption table

For each step in the path, record the control that stops it. Empty cells are the review's output — they are the risk.

Step  Interruption            Status
#9    MFA + phishing sim      ✔
#4    device-bound tokens     ✔
#1    step-up on admin APIs   ✘ gap
#7    EDR allow-list          ✔
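The pass/fail rule is small enough to execute. A Python sketch with the table's contents; the function name is ours, not canonical:

```python
# Sketch of the review gate: a path passes only if every step has a
# named interruption; steps without one are the findings.
def review_findings(path, interruptions):
    """Return the cluster IDs in `path` that lack a named interruption."""
    return [step for step in path if not interruptions.get(step)]

interruptions = {
    9: "MFA + phishing sim",
    4: "device-bound tokens",
    1: None,  # gap: no step-up on admin APIs yet
    7: "EDR allow-list",
}

# #9 → #4 → #1 → #7 fails the review on #1: exactly the table's gap row.
assert review_findings([9, 4, 1, 7], interruptions) == [1]
```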

More patterns: attack-path examples. The ||…|| notation for trust-boundary crossings is defined in the white paper §5.

3. Your SDLC, phase by phase

Each phase asks the same question — "which generic vulnerabilities are we addressing now, and how do we know?" — and produces cluster-tagged deliverables. The detailed per-phase tables live on the phase-by-phase lifecycle reference.

4. Pick your path

Four focused deep-dives. Go where your current pain is.

5. Canonical references

Try it Monday

Pick one plausible attack path for your product (start with #9 → #4 → #1 → #7). Walk it through your next design review. For every step without a named interruption, you have a finding. That's threat modeling doing work.