Blog / Framework & Concepts

From Asset Profiles to Causal Taxonomy: How TLCTC v2.0 Extends What OCTAVE Started

Bridging the gap between OCTAVE's asset-centric evaluations and TLCTC's cause-oriented cyber threat taxonomy.

Bernhard Kreinz
TLCTC Blog — Updated 2026/02/21

Original: 2025/06/28 — Updated: 2026/02/21 (revised for TLCTC v2.0 and OCTAVE FORTE).

The cybersecurity industry has a language problem. Terms like "threat," "risk," and "vulnerability" mean different things across organizations, standards, and frameworks — a phenomenon the TLCTC framework calls semantic diffusion. This problem isn't abstract: it causes real operational failures when incident reports, risk registers, and control matrices use the same words with different meanings.

OCTAVE and TLCTC both attempt to bring structure to threat assessment. They approach the problem from different angles, and understanding what each does well — and where each falls short — reveals how cause-oriented categorization can strengthen any risk management process, including one built on OCTAVE.

Figure 1: The structural difference between OCTAVE's asset/outcome focus and TLCTC's formal causal paths.

OCTAVE: The Organizational Perspective

The OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) methodology, developed by Carnegie Mellon University's Software Engineering Institute beginning in 1999, was genuinely pioneering. At a time when security assessment meant vulnerability scanning and penetration testing, OCTAVE insisted that organizational context matters — that the people who understand business operations should drive risk evaluation, not just the technical team.

OCTAVE has evolved through several variants. OCTAVE-S adapted the methodology for smaller organizations. OCTAVE Allegro streamlined the process around information assets specifically. OCTAVE FORTE, released in 2020, represents the most recent evolution, designed for tighter integration with enterprise risk management frameworks. All variants share the same philosophical core: start with what matters to the organization, identify threats to those assets, and build protection strategies accordingly.

This asset-centric orientation is OCTAVE's genuine strength. It ensures organizational buy-in, produces actionable security strategies, and — critically — connects security investment to business value. After more than two decades, OCTAVE remains actively taught by SEI and widely deployed.

However, OCTAVE's treatment of threat categorization itself reveals structural limitations that become increasingly problematic as organizations face modern, multi-stage attacks.

Where OCTAVE's Threat Model Runs Into Trouble

OCTAVE builds threat profiles around critical assets. The methodology identifies threats through knowledge elicitation — workshops where employees describe what could go wrong with their critical assets. This produces useful organizational insight, but the resulting threat descriptions tend to be event-centric rather than cause-oriented.

Consider what happens in practice. An OCTAVE assessment might produce a threat profile stating: "An external actor could gain unauthorized access to the customer database, resulting in data exposure." This is a perfectly reasonable threat description for asset protection planning. But it conflates several distinct questions that TLCTC v2.0 deliberately separates: How would unauthorized access occur? Through a server-side code exploit (#2)? Through stolen credentials (#4)? Through social engineering of an employee who then grants access (#9→#4)? Through a compromised third-party integration (#10)? Each of these represents a fundamentally different generic vulnerability with different controls.

OCTAVE does not force this decomposition. The methodology's threat categories tend to emerge from the organizational context rather than from a systematic analysis of how compromise happens. This creates several downstream problems.

  • First, inconsistent categorization across assessments. Two OCTAVE teams within the same organization can describe the same threat differently, because the methodology provides no canonical taxonomy for the threats themselves. What one team calls "unauthorized access" another might call "data breach" or "system compromise" — and all three are describing outcomes, not causes.
  • Second, difficulty mapping threats to specific controls. If a threat profile says "malicious code execution," the control response depends entirely on how that code executes. Malware delivered through a phishing email (#9→#7) requires security awareness training and email filtering. Malware that exploits a browser vulnerability (#3→#7) requires patching and endpoint protection. A supply chain compromise that delivers trusted-but-malicious updates (#10→#7) requires vendor assessment and integrity verification. OCTAVE's threat profiles rarely distinguish these mechanisms with enough precision for targeted control selection.
  • Third, no representation of attack sequences. Modern attacks are multi-stage. The attacker doesn't simply "gain unauthorized access." They phish an employee (#9), who clicks a link that exploits a browser vulnerability (#3), which drops malware (#7), which harvests credentials (acquisition, mapped to the enabling cluster #7), which are then applied to access the database (#4 application), which leads to data exfiltration (Loss of Confidentiality). OCTAVE has no native notation for representing this causal chain, and without it, defenders cannot identify where in the sequence their controls are effective — or where they have structural gaps.
  • Fourth, and most subtly, OCTAVE can blur the boundary between control failure and threat. When an OCTAVE team identifies "unpatched servers" as a threat, they are actually describing a control gap (failure to maintain patch currency), not a threat. The threat is #2 Exploiting Server — the exploitation of code flaws in server-side software. The unpatched state is the condition that makes the threat exploitable. TLCTC v2.0 codifies this distinction as Axiom V: control failure is control-risk, not a threat category. Mixing the two in a risk register makes it impossible to distinguish between improving controls and addressing new threats.

TLCTC v2.0: Cause-Oriented Threat Classification

The Top Level Cyber Threat Clusters framework, now at version 2.0, takes a fundamentally different approach. Rather than starting with assets and asking "what could go wrong?", TLCTC starts with a systematic decomposition of IT system architecture and asks: "what are the distinct generic vulnerabilities that enable all cyber threats?"

The answer, derived through axiomatic reasoning, is ten. Not because ten is a convenient number, but because when you decompose the attack surface of any IT system — its software functions, its code implementations (server-side and client-side), its identity mechanisms, its communication channels, its capacity limits, its execution capabilities, its physical components, its human operators, and its third-party dependencies — you arrive at exactly ten distinct categories of root weakness. Each defines a threat cluster.
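As a reading aid, the ten clusters and the generic vulnerability behind each can be sketched as a small lookup table. The cluster names follow the numbering used throughout this article; the one-line vulnerability summaries are paraphrases for illustration, not official definitions.

```python
# The ten TLCTC clusters, keyed by strategic number (#1..#10).
# Names follow this article's usage; summaries are illustrative paraphrases.
CLUSTERS = {
    1: ("Abuse of Functions", "legitimate software functionality misused"),
    2: ("Exploiting Server", "flaws in server-side code implementations"),
    3: ("Exploiting Client", "flaws in client-side code implementations"),
    4: ("Identity Theft", "weaknesses in identity and credential mechanisms"),
    5: ("Man in the Middle", "exposed communication channels"),
    6: ("Flooding Attack", "finite capacity limits"),
    7: ("Malware", "the ability to execute foreign code"),
    8: ("Physical Attack", "accessible physical components"),
    9: ("Social Engineering", "human operators as decision-makers"),
    10: ("Supply Chain Attack", "trust placed in third-party dependencies"),
}

def cluster_name(n: int) -> str:
    """Render a cluster in the strategic '#X Name' notation."""
    return f"#{n} {CLUSTERS[n][0]}"
```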

The Ten Axioms

TLCTC v2.0 is built on ten explicit axioms organized into four groups.

  • Scope Axioms (I–II) establish what the framework covers. Axiom I states that generic IT assets are universal — sector labels like "healthcare" or "finance" do not create distinct threat classes. Axiom II establishes the client-server model as the universal interaction abstraction for networked systems.
  • Separation Axioms (III–V) draw hard boundaries. Axiom III: threats are causes, not outcomes — "ransomware" is not a threat category. Axiom IV: threats are not threat actors — "APT29" is not a cluster. Axiom V: control failure is not a threat — "unpatched servers" is a control gap, not a threat.
  • Classification Axioms (VI–VIII) define how assignment works. Axiom VI: every attack step exploits exactly one generic vulnerability and maps to exactly one cluster. Axiom VII: attack vectors are defined by the initial generic vulnerability targeted. Axiom VIII: each cluster encompasses operational sub-threats, separating a stable Strategic Management Layer from an evolving Operational Security Layer.
  • Sequence Axioms (IX–X) handle real-world complexity. Axiom IX: clusters chain into attack paths with velocity annotations (Δt). Axiom X: credentials exhibit dual operational nature — acquisition maps to the enabling cluster, application always maps to #4 Identity Theft.

The Formal Classification Grammar

V2.0 introduces nine R-* classification rules — formalized decision logic that eliminates ambiguity in cluster assignment. R-EXEC determines whether foreign code execution occurred (distinguishing #1 from #7 from #2/#3). R-CRED enforces the credential lifecycle separation. R-ROLE determines server vs client classification. R-ABUSE identifies function misuse without implementation flaws. R-FLOOD distinguishes capacity exhaustion from implementation-defect crashes. R-MITM separates position acquisition from position exploitation. R-SUPPLY applies the third-party trust test. R-PHYSICAL and R-HUMAN handle their respective bridge clusters.

These rules, combined with a tie-breaker hierarchy and a minimal classification procedure, ensure that any two analysts presented with the same evidence will assign the same cluster. This repeatability is something OCTAVE's threat profiling cannot guarantee by design — because OCTAVE's threat descriptions emerge from organizational context rather than formal grammar.
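To make the flavor of that decision logic concrete, here is a toy classifier applying a subset of the rules (R-CRED, R-EXEC, R-ROLE, R-ABUSE). The evidence fields, predicates, and rule ordering are illustrative assumptions, not the normative tie-breaker hierarchy.

```python
def classify_step(evidence: dict) -> int:
    """Toy cluster assignment applying a subset of the R-* rules.
    Evidence keys and rule order are illustrative, not normative."""
    # R-CRED: applying stolen credentials always maps to #4 Identity Theft,
    # regardless of how the credentials were acquired.
    if evidence.get("credential_application"):
        return 4
    # R-EXEC: did foreign code execute?
    if evidence.get("foreign_code_executed"):
        # R-ROLE: a code flaw on the server side is #2, on the client side #3;
        # execution of self-contained malicious code is #7 Malware.
        if evidence.get("exploited_flaw") == "server":
            return 2
        if evidence.get("exploited_flaw") == "client":
            return 3
        return 7
    # R-ABUSE: legitimate function misused without an implementation flaw.
    if evidence.get("function_misuse"):
        return 1
    raise ValueError("insufficient evidence for cluster assignment")
```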

Attack Velocity: The Missing Dimension

Perhaps the most consequential addition in v2.0 is Attack Velocity (Δt) — the time interval between adjacent attack steps. This isn't just temporal metadata. Velocity determines which types of controls are structurally viable.

V2.0 defines four Velocity Classes:

  • VC-1 (Strategic): Days to months between steps. Typical of APT dwell time, slow supply chain compromises. Defense mode: log retention, threat hunting, strategic monitoring. Human-dependent controls are effective.
  • VC-2 (Tactical): Hours between steps. Typical of phishing campaigns followed by manual reconnaissance. Defense mode: SIEM alerting, analyst triage, guided response. Human controls still work, but need good tooling.
  • VC-3 (Operational): Minutes between steps. Typical of automated exploitation, lateral movement scripts. Defense mode: SOAR/EDR automation, rapid containment, prebuilt playbooks. Purely human response becomes structurally insufficient at this speed.
  • VC-4 (Real-Time): Seconds to milliseconds. Typical of flooding attacks, wormable exploits. Defense mode: architectural controls, circuit breakers, rate limits, automatic isolation. No human can respond in time — controls must be engineered into the architecture.

This produces a derived metric: the Detection Coverage Score (DCS = MTTD / Δt). When DCS < 1.0, the defender detects faster than the attacker transitions — the control is effective. When DCS > 1.0, the attacker wins the race — the control is structurally insufficient regardless of how well it performs within its own parameters.
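In code, the velocity class lookup and the DCS check reduce to a few lines. The Δt cutoffs below are an illustrative reading of the VC descriptions above (note that the article treats a 30-minute gap as VC-2); the viability check implements DCS = MTTD / Δt directly.

```python
def velocity_class(delta_t_s: float) -> str:
    """Map a step interval (seconds) to a velocity class.
    Cutoffs are illustrative; a 30-minute gap lands in VC-2 here."""
    if delta_t_s >= 86_400:   # days to months
        return "VC-1"
    if delta_t_s >= 1_800:    # tens of minutes to hours
        return "VC-2"
    if delta_t_s >= 60:       # minutes
        return "VC-3"
    return "VC-4"             # seconds to milliseconds

def dcs(mttd_s: float, delta_t_s: float) -> float:
    """Detection Coverage Score: DCS = MTTD / Δt (lower is better)."""
    return mttd_s / delta_t_s

def control_is_viable(mttd_s: float, delta_t_s: float) -> bool:
    """DCS < 1.0: the defender detects before the attacker's next step."""
    return dcs(mttd_s, delta_t_s) < 1.0
```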

The velocity dimension is directly relevant to OCTAVE implementations. An OCTAVE risk assessment might identify "credential theft" as a high-priority threat and recommend "implement MFA" as a control. But the effectiveness of that control depends entirely on the attack velocity. Against a VC-2 phishing campaign where the attacker manually uses stolen credentials hours later, MFA is highly effective. Against a VC-4 real-time session hijacking attack, MFA on initial authentication is irrelevant because the attacker captures the authenticated session, not the password. TLCTC's velocity analysis reveals this structural reality; OCTAVE's threat profiles do not.

Domain Boundaries and Topology

V2.0 introduces the Domain Boundary Operator: ||[context][@Source→@Target]||. This notation explicitly marks where an attack path crosses responsibility spheres — from a vendor's environment into an organization's environment, from the physical domain into the cyber domain, from an external party into the internal network.
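As a data structure, the operator might be represented like this; the field reading (context, source sphere, target sphere) is an interpretation of the notation, not an official schema.

```python
from dataclasses import dataclass

@dataclass
class DomainBoundary:
    """The ||[context][@Source→@Target]|| operator as a record.
    Field names are an illustrative reading of the notation."""
    context: str   # e.g. "supply-chain"
    source: str    # responsibility sphere the step leaves, e.g. "@Vendor"
    target: str    # sphere it enters, e.g. "@Org"

    def __str__(self) -> str:
        return f"||[{self.context}][{self.source}→{self.target}]||"
```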

Clusters are also classified by topology: internal clusters (#1 through #7) operate within the software domain's attack surface, while bridge clusters (#8 Physical Attack, #9 Social Engineering, #10 Supply Chain) inherently cross between different responsibility spheres. This distinction matters for control mapping: bridge clusters require controls that span organizational boundaries, not just technical controls within a single domain.

OCTAVE does acknowledge organizational boundaries through its asset-based approach, but it has no formal notation for expressing where an attack crosses from one sphere of responsibility to another — information that is critical for determining who owns which control in the defense chain.

Data Risk Events: Separating Effects from Causes

TLCTC v2.0 enforces a strict separation between threats (causes) and outcomes (effects) through Data Risk Events (DREs). Every outcome is recorded as one or more of three types: Loss of Confidentiality (C), Loss of Integrity (I), or Loss of Accessibility/Availability (A).

The notation #2 + [DRE: C] means "exploitation of a server-side code flaw resulted in unauthorized data exposure." The cluster identifies the cause; the DRE records the effect. This separation is normative — DREs must never be used as threat categories, and threat clusters must never be assigned based on outcomes.
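The separation can be made explicit in data: the cause and its effects live in different fields, and the effect field is never an input to classification. A minimal sketch (field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AttackStep:
    """One classified step: the cluster records the cause, DREs the effects.
    Per the v2.0 separation, `dres` is never an input to cluster assignment."""
    cluster: int                                   # 1..10
    dres: list[str] = field(default_factory=list)  # subset of {"C", "I", "A"}

    def notation(self) -> str:
        if not self.dres:
            return f"#{self.cluster}"
        return f"#{self.cluster} + [DRE: {', '.join(self.dres)}]"

# "#2 + [DRE: C]": a server-side exploit causing unauthorized data exposure.
step = AttackStep(cluster=2, dres=["C"])
```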

This matters because the same outcome can result from entirely different causes. "Customer data was exposed" (Loss of Confidentiality) might result from #9→#4 (phishing leading to credential use), from #2 (SQL injection exploiting a server flaw), from #5 (man-in-the-middle intercepting unencrypted traffic), or from #8 (physical theft of a server). Each requires fundamentally different controls. Collapsing them into the same outcome category — as OCTAVE's threat profiles tend to do — obscures the causal mechanism that defenders need to address.

Machine-Readable Intelligence: The Three-Layer JSON Architecture

V2.0 specifies a standardized JSON architecture for machine-readable threat intelligence, organized in three layers. Layer 1 (Framework) contains the stable, universal definitions — cluster definitions, generic vulnerabilities, axioms, rules. This rarely changes. Layer 2 (Reference Data) contains semi-static organizational context — responsibility sphere definitions, domain boundary configurations. Layer 3 (Instances) contains dynamic, incident-specific data — attack paths, velocity observations, DRE annotations, evidence references.

This architecture is designed for STIX/TAXII compatibility, enabling standardized threat intelligence sharing with cluster-level categorization that any recipient can interpret consistently — because the cluster definitions in Layer 1 are universal and immutable.
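A minimal Layer 3 instance might look like the following sketch. The field names and structure here are assumptions for illustration, not the published TLCTC JSON schema; only the cluster numbers are fixed by the framework's Layer 1.

```python
import json

# Hypothetical Layer 3 (instance) record; field names are illustrative.
incident = {
    "layer": 3,
    "attack_path": [
        {"cluster": 9, "delta_t_s": 0},
        {"cluster": 7, "delta_t_s": 1800, "dre": ["C"]},
        {"cluster": 4, "delta_t_s": 300},
    ],
    "dre_summary": ["C"],
    "evidence_refs": ["siem-case-0001"],  # placeholder identifier
}

serialized = json.dumps(incident, indent=2)
```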

OCTAVE produces organizational knowledge and protection strategies, but it does not output machine-readable threat intelligence in a standardized format. For organizations that need to share threat information across boundaries — with ISACs, with regulators, with partner organizations — this is a significant gap that TLCTC's JSON architecture directly addresses.

How TLCTC v2.0 Strengthens OCTAVE Implementations

The most productive framing is not "TLCTC replaces OCTAVE" but rather "TLCTC provides the threat taxonomy that OCTAVE needs but doesn't have." Organizations already using OCTAVE can integrate TLCTC at specific points to resolve the structural limitations described above.

Sharpen Threat Profiles with Cluster Classification

When OCTAVE's knowledge elicitation process produces threat descriptions, classify each one against the ten TLCTC clusters. "An external actor could gain unauthorized access to the customer database" becomes a set of possible attack paths: #9→#4 (phishing to credential use), #2 (server exploit), #10→#7 (supply chain to malware), etc. Each path identifies a distinct generic vulnerability requiring distinct controls.

This decomposition transforms vague threat profiles into precise, actionable cause-chains while preserving OCTAVE's organizational context and asset-centric prioritization.
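A sketch of that decomposition, with a hypothetical cluster-to-control catalog (the control names are shorthand for illustration, not a normative mapping):

```python
# One OCTAVE-style threat statement, decomposed into candidate TLCTC paths.
THREAT = "An external actor could gain unauthorized access to the customer database"

CANDIDATE_PATHS = [
    [9, 4],   # phishing (#9) -> stolen credentials applied (#4)
    [2],      # direct server-side code exploit (#2)
    [10, 7],  # compromised third party (#10) -> malware delivery (#7)
]

# Hypothetical cluster-to-control catalog, for illustration only.
CONTROL_CATALOG = {
    2: "patch management",
    4: "MFA / credential monitoring",
    7: "endpoint protection",
    9: "awareness training + mail filtering",
    10: "vendor assessment",
}

def controls_needed(path: list[int]) -> list[str]:
    """Each cluster in a path names a distinct generic vulnerability,
    so each contributes its own control requirement."""
    return [CONTROL_CATALOG[c] for c in path]
```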

Add Velocity Analysis to Risk Prioritization

OCTAVE's risk prioritization is based on asset criticality and impact. TLCTC's velocity classes add a structural dimension: not just "how bad would it be?" but "can our controls actually respond in time?" A threat with catastrophic impact but VC-1 velocity (weeks of dwell time) might be lower operational priority than a moderate-impact threat at VC-4 velocity (milliseconds) where human-dependent controls are structurally useless.

Calculate DCS for each identified threat-control pair: DCS = MTTD / Δt. Any pair where DCS > 1.0 represents a structural gap that no amount of process improvement can fix — only architectural change or automation.

Use the Bow-Tie Model for Control Placement

OCTAVE produces protection strategies. TLCTC's Bow-Tie model provides the structural framework for placing those controls correctly: preventive controls on the cause side (mapping to NIST CSF 2.0 IDENTIFY, PROTECT), detective and reactive controls around the central event and on the effect side (DETECT, RESPOND, RECOVER), and governance controls spanning the entire structure (GOVERN).

This ensures that OCTAVE's protection strategies aren't just lists of controls, but are positioned within a causal model that shows exactly where each control acts — and where gaps remain.

Enable Cross-Organizational Comparability

One of OCTAVE's historical limitations is that assessments are organization-specific. Because threat descriptions emerge from local context, comparing risk postures across organizations, business units, or time periods is difficult. TLCTC's standardized taxonomy enables direct comparison: "Organization A has effective controls against #2 (server exploits) but structural gaps against #9→#4 (phishing to credential use) at VC-3 velocity" is a statement that any organization can interpret consistently.

Map to NIST CSF 2.0 with Precision

TLCTC v2.0 maps each cluster to the six NIST CSF 2.0 functions — GOVERN, IDENTIFY, PROTECT, DETECT, RESPOND, and RECOVER — creating a control matrix that specifies which CSF categories apply to which clusters. This enables OCTAVE implementations to generate outputs that directly align with CSF 2.0 compliance requirements without the "significant customization effort" that has historically been required.

The mapping also includes the KRI/KCI/KPI indicator hierarchy: Key Risk Indicators at the risk event layer, Key Control Indicators at the control objectives layer (both technical state metrics and procedural performance metrics), and velocity-adjusted targets derived from DCS. This provides the measurement framework that OCTAVE identifies as important but does not define.

Key Differences at a Glance

| Dimension | OCTAVE (incl. FORTE) | TLCTC v2.0 |
| --- | --- | --- |
| Primary question | "What assets need protection and from what?" | "What generic vulnerabilities enable all cyber threats?" |
| Control effectiveness | Qualitative assessment | DCS = MTTD / Δt (structural viability metric) |
| Organizational boundaries | Acknowledged via asset ownership | Formal domain boundary operators: ‖[context][@Source→@Target]‖ |
| Machine readability | Not designed for automated sharing | Three-layer JSON architecture, STIX/TAXII compatible |
| Standards integration | Standalone methodology, custom integration required | Native mapping to NIST CSF 2.0 (6 functions), MITRE ATT&CK, CWE, STIX |
| Naming convention | N/A | Two-layer: strategic (#X) and operational (TLCTC-XX.YY) |
| Axiom count | None (principles-based) | 10 explicit axioms in 4 groups |
| Classification rules | Informal guidance | 9 formalized R-* rules |

Practical Example: The Same Incident, Two Lenses

Consider a ransomware incident where an employee receives a phishing email, clicks a link that delivers malware, the malware harvests credentials, the attacker uses those credentials to move laterally, escalates privileges, and encrypts file shares.

OCTAVE lens: Threat profile for the "Customer Records" asset identifies "external actor — malicious code execution leading to data loss" as a high-priority threat. Protection strategy recommends: employee training, endpoint protection, backup procedures.

TLCTC v2.0 lens: Attack path notation:

TLCTC Syntax
#9 →[Δt=0s] #7 + [DRE: C Acquisition] →[Δt=30m] #4(use) →[Δt=5m] #4(application) →[Δt=2m] #1 →[Δt=1m] (#1 + #7) + [DRE: A]

This reveals:

  • The #9→#7 transition is VC-4 (instant — user clicks and malware executes). No human control can intervene here. Controls must be preventive (email filtering, sandboxing) or architectural (application whitelisting blocking #7).
  • The #7(acquisition)→#4 transition is VC-2 (30 minutes). SIEM/EDR alerting can potentially catch the credential harvesting if detection is tuned for this pattern.
  • The #4(application)→#1→(#1+#7) transitions are VC-3 (minutes). Automated response (SOAR playbooks, EDR containment) is required — human SOC analyst triage at this speed produces DCS > 1.0.
  • Two distinct DREs: Loss of Confidentiality (credential exfiltration) and Loss of Accessibility (file encryption). These trigger different regulatory and recovery processes.
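Under an assumed 15-minute MTTD (an illustrative figure, not taken from the incident), the per-transition viability check can be run mechanically over this path:

```python
# Transitions of the path above as (label, Δt in seconds).
TRANSITIONS = [
    ("#9->#7", 0),          # click to malware execution: instant
    ("#7->#4(use)", 1800),  # 30 minutes
    ("#4(use)->#4(application)", 300),
    ("#4(application)->#1", 120),
    ("#1->(#1+#7)", 60),
]

ASSUMED_MTTD_S = 900  # hypothetical 15-minute mean time to detect

def structural_gaps(transitions, mttd_s):
    """Transitions where DCS = MTTD/Δt >= 1.0 (or Δt = 0): the defender
    cannot detect before the attacker's next step, so preventive or
    architectural controls are required instead of detective ones."""
    gaps = []
    for label, dt in transitions:
        if dt == 0 or mttd_s / dt >= 1.0:
            gaps.append(label)
    return gaps
```

With these assumptions, only the 30-minute #7→#4 transition is winnable by detection; every other transition demands controls that act without a human in the loop, which matches the bullet-point analysis above.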

The OCTAVE assessment identifies that something bad could happen and recommends reasonable controls. The TLCTC analysis identifies exactly where in the causal chain each control acts, whether each control can structurally keep pace with the attack velocity, and which specific generic vulnerabilities need to be addressed at each step.

Evolution, Not Replacement

OCTAVE deserves credit for pioneering the organizational perspective in security risk assessment at a time when the field was purely technical. Its insistence that business context matters, that employees should drive the process, and that assets should anchor the analysis remains valuable. OCTAVE FORTE's 2020 evolution toward enterprise risk management integration shows that the methodology continues to adapt.

But OCTAVE was never designed to be a threat taxonomy. It was designed to be a risk evaluation methodology that happens to identify threats along the way. The threats it identifies inherit the imprecision of natural language and organizational context — which is fine for building protection strategies, but insufficient for precise control mapping, cross-organizational comparison, machine-readable intelligence sharing, and temporal analysis of attack viability.

TLCTC v2.0 provides the causal taxonomy that sits beneath any risk evaluation methodology. It answers a question that OCTAVE never set out to answer but that every organization eventually needs answered: "What are the distinct, fundamental ways our systems can be compromised?" By integrating TLCTC's cause-based clusters, velocity analysis, and formal notation into an OCTAVE-style assessment process, organizations gain the precision of a scientific taxonomy without losing the organizational grounding that makes OCTAVE effective.

This integration reflects the maturation of cybersecurity as a discipline — from intuitive, experience-based threat identification toward formal, axiom-driven categorization that enables consistent analysis across incidents, organizations, and time.

The TLCTC framework is released under Creative Commons Attribution 4.0 International (CC BY 4.0). The OCTAVE methodology is a service mark of Carnegie Mellon University.
