“Zero Trust” and “Defense in Depth” are strategic decisions — GOVERN-level directives that constrain the entire control architecture. But a strategy requires a diagnosis: what threats are we addressing? This article decomposes Zero Trust through the TLCTC framework and Bow-Tie model, showing that without a causal threat taxonomy, these strategic labels are structurally empty — prescriptions without a diagnosis. Along the way, we map NIST SP 800-207’s tenets to specific generic vulnerabilities and reveal why the cybersecurity industry’s most prominent strategic concepts cannot be operationalized without the foundational layer it has never built.
Ask five CISOs what “Zero Trust” means and you will get six answers. One says it’s “never trust, always verify.” Another points to a ZTNA product. A third describes a network architecture. A fourth invokes micro-segmentation. The fifth says it’s a philosophy, not a product.
None of them are wrong. All of them are incomplete. And that incompleteness is not a communication problem — it is a structural problem. “Zero Trust” has no fixed position in any causal risk model. It floats across threat types, control functions, and Bow-Tie positions without being anchored to any one of them. In TLCTC terms, this is a textbook case of semantic diffusion: a label that has lost its structural identity because the industry never gave it one in the first place.
Let’s fix that by decomposing it.
Zero Trust is Not a Threat
“Zero Trust” does not name a generic vulnerability. It does not correspond to any of the 10 clusters. You cannot write an attack path that says #ZeroTrust → [SRE] — that notation is meaningless, because there is no exploitable weakness called “Zero Trust.” It fails the most basic TLCTC test: what was the cause-side condition that the attacker exploited?
This immediately tells us something important. Zero Trust is not a cause-side concept. It does not answer the question the framework is built to answer: “What generic vulnerability was exploited?” So whatever it is, it lives outside the 10 clusters.
Zero Trust is Not a Single Control
This is where the semantic diffusion becomes visible. When vendors say “we deliver Zero Trust,” they typically bundle products that address fragments of multiple clusters — primarily #4 (Identity Theft) via continuous authentication, #1 (Abuse of Functions) via least-privilege enforcement, and #5 (Man in the Middle) via encrypted micro-segments. Some add #3 (Exploiting Client) via browser isolation and #7 (Malware) via execution restrictions.
But they rarely make this causal decomposition explicit. The buyer receives a product that “implements Zero Trust” without knowing which generic vulnerabilities are actually being reduced. In the TLCTC × NIST CSF matrix (the 10×6 control objective structure), “Zero Trust” does not occupy a single cell. It scatters across multiple rows and columns simultaneously — which is precisely why it feels blurry. It is a label applied to a pattern that spans several structural positions, and nobody draws the map.
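A minimal sketch makes the scatter concrete. The bundle below is a hypothetical vendor offering; the cell assignments simply restate the cluster fragments listed above and are illustrative, not a canonical TLCTC mapping.

```python
# Hypothetical sketch: where a typical "Zero Trust" bundle lands in the
# TLCTC (10 clusters) x NIST CSF (6 functions) matrix. Control names and
# cell assignments are illustrative assumptions, not a vendor mapping.

ZT_BUNDLE = {
    "continuous authentication": [("#4 Identity Theft", "PROTECT")],
    "least-privilege enforcement": [("#1 Abuse of Functions", "PROTECT")],
    "encrypted micro-segmentation": [("#5 Man in the Middle", "PROTECT")],
    "browser isolation": [("#3 Exploiting Client", "PROTECT")],
    "execution restrictions": [("#7 Malware", "PROTECT")],
    "UEBA": [("#4 Identity Theft", "DETECT")],
}

def occupied_cells(bundle):
    """Return the distinct (cluster, CSF function) cells a bundle touches."""
    return sorted({cell for cells in bundle.values() for cell in cells})

cells = occupied_cells(ZT_BUNDLE)
print(f"'Zero Trust' occupies {len(cells)} cells, not one:")
for cluster, function in cells:
    print(f"  {function:<8} {cluster}")
```

Running the sketch shows the label spread across six distinct cells of the matrix, which is the structural meaning of "it feels blurry."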
What Zero Trust Actually Is: A GOVERN-Level Strategic Decision
Through the TLCTC lens, “Zero Trust” is best understood as a strategic decision that belongs in the GOVERN (GV) function of the NIST CSF 2.0 framework. Specifically, it is a risk appetite statement that constrains the entire control architecture: minimize the blast radius of any single cluster realization by eliminating implicit trust at every domain boundary.
This matters because GOVERN is not optional decoration. NIST CSF 2.0 added GOVERN as the sixth function precisely because strategy must precede control selection. GV.RM (Risk Management Strategy) requires the organization to identify and prioritize threats before selecting controls. A cyber strategy — any cyber strategy — must answer the question: what are we defending against?
And this is where the industry’s approach to Zero Trust breaks down. Most organizations adopt ZT as a control architecture — they buy ZTNA products, deploy micro-segmentation, enforce MFA — without ever producing a causal threat analysis that justifies these specific controls. They jump from GOVERN directly to PROTECT, skipping the diagnostic step entirely. In TLCTC terms: they prescribe without diagnosing. The strategy sounds sophisticated, but it has no structural foundation — no explicit mapping from generic vulnerabilities to control objectives.
A legitimate Zero Trust strategy would say: “We have identified clusters #1, #4, #5, #7, and #10 as our primary cause-side threats, and LoC/LoI as our primary consequence-side risks. Our ZT architecture addresses these through the following control objectives per cluster, under PROTECT and DETECT on the cause side, and RESPOND and RECOVER on the consequence side.” — Nobody says this. Because nobody has the taxonomy to say it.
When you operationalize the GOVERN intent correctly, it decomposes into concrete control objectives — both PROTECT (prevention gates) and DETECT (continuous sensors) — across specific clusters. The meta-principle also extends to the consequence side (RESPOND and RECOVER), a dimension most analyses overlook.
Decomposing ZT into Causal Control Objectives
Here is how each cluster-specific ZT control objective works:
- Against #4 (Identity Theft): Every access request must be authenticated and authorized at the point of use — no inherited trust from network position. The generic vulnerability is “weak identity management,” and the ZT directive says: make identity verification continuous, not perimeter-gated. DETECT adds UEBA and impossible-travel detection to spot credential misuse.
- Against #1 (Abuse of Functions): Enforce least-privilege and just-in-time access so that even a legitimately authenticated identity can only invoke the minimum necessary functions. This narrows the generic vulnerability “scope of legitimate functions” per-session. Behavioral analytics serve as the DETECT layer for anomalous function usage.
- Against #5 (Man in the Middle): Encrypt everything end-to-end, even inside the “trusted” network, because the architecture no longer assumes any communication path is inherently safe. This addresses “unprotected communication path” by treating every segment as potentially hostile.
- Against #3 (Exploiting Client): Browser isolation, application sandboxing, and Remote Browser Isolation (RBI) products all reduce the client-side code flaw attack surface. This is a ZT control that is rarely labeled as such, but it embodies the same principle: don’t trust the client environment.
- Against #7 (Malware): Application whitelisting, code signing enforcement, and sandboxed execution environments directly address #7’s generic vulnerability — “designed code execution capabilities.” A ZT architecture that locks down identity but allows arbitrary code execution has a structural gap in cluster #7.
- Against #8 (Physical Attack): Being “inside the building” or “on the corporate LAN” no longer grants access elevation. The generic vulnerability (physical accessibility) still exists, but ZT revokes the implicit trust traditionally derived from physical proximity.
- Against #10 (Supply Chain): The “never trust, always verify” principle applies explicitly at ||[@Vendor→@Org]|| domain boundaries. SBOM validation, vendor posture assessment, and ZTNA enforcement at third-party access points all address “third-party trust dependencies.” This is arguably where “Zero Trust” is most literal: the trust being revoked is the implicit trust placed in the supply chain.
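The decomposition above can be held as data, which immediately exposes which clusters receive no direct ZT control objective. The objective phrasing below is a paraphrase of the list above, not canonical TLCTC text.

```python
# Sketch of the per-cluster ZT decomposition as data. Cluster numbers
# follow the article; objective wording is an illustrative paraphrase.

ALL_CLUSTERS = {f"#{i}" for i in range(1, 11)}

ZT_OBJECTIVES = {
    "#1": "least-privilege / just-in-time access per session",
    "#3": "browser isolation and client sandboxing",
    "#4": "continuous authentication at the point of use",
    "#5": "end-to-end encryption on every path",
    "#7": "allowlisting, code signing, sandboxed execution",
    "#8": "no access elevation from physical proximity",
    "#10": "verification at every vendor domain boundary",
}

# Clusters for which ZT prescribes no direct cause-side objective.
uncovered = sorted(ALL_CLUSTERS - ZT_OBJECTIVES.keys(),
                   key=lambda c: int(c[1:]))
print("Clusters with no direct ZT control objective:", uncovered)
```

The query surfaces #2, #6, and #9 as uncovered, which is exactly the coverage-gap pattern a GOVERN-level review should make explicit.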
Domain Boundaries and Lateral Movement
The real structural value of Zero Trust becomes visible in TLCTC attack path notation. Consider a typical post-compromise lateral movement sequence:
#9 → #4 → #1 ||[admin][@Org→@Org(Admin)]|| → #1
In a traditional perimeter model, once the attacker passes the first domain boundary, subsequent → transitions encounter minimal resistance. The umbrella control assumes everything inside is trusted. Zero Trust attempts to insert both a PROTECT gate and a DETECT sensor at every arrow in the attack path — not just at the perimeter boundary @External→@Org(Perimeter).
In TLCTC language, this is the systematic tightening of ||[domain boundary]|| enforcement at every responsibility sphere transition. Traditional perimeter security is an umbrella control with a very wide scope assumption — “everything inside the perimeter is trusted.” Zero Trust says: reduce the scope assumption of every umbrella control to near-zero, and compensate with more granular local controls.
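A toy model of the "gate at every arrow" idea, using the lateral-movement path above (domain-boundary annotations dropped for brevity; the gate placements are assumptions for illustration):

```python
# Minimal sketch: the perimeter model gates only the first transition of
# an attack path, while Zero Trust gates every transition. The path is
# the article's lateral-movement example; gate placement is hypothetical.

path = ["#9", "#4", "#1", "#1"]  # #9 -> #4 -> #1 -> #1
transitions = list(zip(path, path[1:]))

# Perimeter model: only the initial boundary crossing is gated.
perimeter_gates = {transitions[0]}
# Zero Trust model: a PROTECT gate and DETECT sensor at every arrow.
zt_gates = set(transitions)

ungated = [t for t in transitions if t not in perimeter_gates]
print("Perimeter model leaves ungated:", ungated)
print("Zero Trust leaves ungated:     ",
      [t for t in transitions if t not in zt_gates])
```

The point of the sketch is structural, not quantitative: under the perimeter assumption, two of the three arrows meet no control at all.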
The Hidden Half: Zero Trust on the Consequence Side
Everything above covers the cause side (left side) of the Bow-Tie — PROTECT and DETECT controls that target specific clusters before system compromise. But this is only half the picture.
Zero Trust’s “assume breach” philosophy equally drives consequence-side controls — and the industry rarely acknowledges that these sit in a structurally different position.
The flagship example: data-at-rest encryption. This is universally listed as a core ZT pillar. But in TLCTC terms, encrypting stored data does not prevent any cluster from being exploited — it does not stop #4, #1, or any other cause-side threat. What it does is mitigate Loss of Confidentiality (LoC) after system compromise has already occurred. It sits on the right side of the Bow-Tie, in the RESPOND/RECOVER functions. It is a consequence-side control, not a preventive one.
This is why ZT is semantically irreducible: it conflates cause-side controls (continuous auth → PROTECT against #4) with consequence-side controls (encryption at rest → mitigate LoC after SRE) under the same label — without noticing they operate in different structural positions in the Bow-Tie.
Other consequence-side ZT controls include: network micro-segmentation to contain blast radius after compromise (RESPOND), automated isolation of compromised workloads (RESPOND), and immutable infrastructure patterns for rapid recovery (RECOVER). These are structurally valid controls — but they do not reduce generic vulnerabilities. They reduce the impact of successful exploitation. Calling them “Zero Trust controls” alongside cause-side controls like MFA is exactly the kind of category mixing that produces semantic diffusion.
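The category mixing can be made explicit by tagging each control with its Bow-Tie position. The control list and assignments below follow the analysis above and are illustrative, not exhaustive.

```python
# Sketch: tagging "Zero Trust" controls with their Bow-Tie position.
# Assignments follow the article's analysis; the list is illustrative.

BOWTIE_POSITION = {
    "MFA / continuous auth": ("cause", "PROTECT"),
    "UEBA": ("cause", "DETECT"),
    "data-at-rest encryption": ("consequence", "RESPOND/RECOVER"),
    "micro-segmentation (containment)": ("consequence", "RESPOND"),
    "immutable infrastructure": ("consequence", "RECOVER"),
}

by_side = {"cause": [], "consequence": []}
for control, (side, _function) in BOWTIE_POSITION.items():
    by_side[side].append(control)

# One label, two structurally different positions: semantic diffusion.
print("Cause side:       ", by_side["cause"])
print("Consequence side: ", by_side["consequence"])
```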
NIST SP 800-207 Through the TLCTC Lens
NIST SP 800-207 defines seven tenets of Zero Trust Architecture. When we map them to the TLCTC framework, something instructive happens: the tenets scatter across multiple clusters and CSF functions, confirming that even NIST’s own formalization cannot be pinned to a single structural position.
| SP 800-207 Tenet | TLCTC Cluster(s) | CSF Function | BT Side |
|---|---|---|---|
| 1. All data sources and computing services are resources | #1, #8 | ID, GV | Both |
| 2. All communication is secured regardless of network location | #5 | PR | Cause |
| 3. Access to resources is granted per-session | #4, #1 | PR | Cause |
| 4. Access determined by dynamic policy | #4, #1 | PR, DE | Cause |
| 5. Enterprise monitors integrity of all assets | Cross-cluster | DE, ID | Both |
| 6. Authentication/authorization are dynamic and strictly enforced | #4 | PR, DE | Cause |
| 7. Enterprise collects maximum state information | Cross-cluster | DE, RS | Both |
The mapping reveals two things. First, tenets 3, 4, and 6 are variations of the same structural idea: reduce #4 and #1 generic vulnerabilities through continuous, contextual access enforcement. NIST needed three tenets to express what TLCTC captures in two cluster × function assignments. Second, tenets 5 and 7 are not cluster-specific at all — they are DETECT and RESPOND directives that apply across the entire 10×6 matrix. This is the “GOVERN” character of Zero Trust made visible: it constrains how controls are deployed everywhere, rather than specifying a control for a particular location.
What Zero Trust Does Not Address
The decomposition also reveals coverage gaps that are rarely discussed in ZT literature:
- #9 (Social Engineering): Zero Trust architectures have essentially no direct control against #9. You cannot “always verify” a human whose psychological factors are being exploited. Phishing awareness training and procedural verification are the controls for #9’s generic vulnerability — and they are not part of any ZT architecture specification. This is a structural blind spot, not a marketing omission.
- #2 (Exploiting Server): Server-side code imperfections (buffer overflows, SQL injection, logic flaws) are addressed by secure coding and patching, not by ZT architectural controls. ZT can limit the blast radius after a #2 exploitation (consequence side), but it does not reduce the generic vulnerability itself.
- #6 (Flooding Attack): DDoS mitigation is capacity planning, rate limiting, and CDN distribution — none of which follow from the “never trust, always verify” principle. Resource exhaustion sits outside ZT’s structural reach.
These gaps do not make Zero Trust a flawed strategy — they show it was never intended to be a comprehensive threat coverage strategy. It is a design constraint on a subset of the threat landscape, and the TLCTC decomposition makes visible exactly which subset.
The Deeper Problem: Strategy Without a Threat Taxonomy
Zero Trust is not the only strategic concept suffering from this structural vacancy. Defense in Depth has the same problem — and the same cause.
Defense in Depth says: deploy multiple layers of controls so that no single failure leads to compromise. Zero Trust says: assume every layer might fail and verify at every transition. Both are valid GOVERN-level directives. Both constrain how controls are deployed across the architecture. And both are routinely adopted as “strategies” without ever specifying what threats the layers or verification points are meant to address.
This is the structural gap the cybersecurity industry has normalized. A CISO presents a “Defense in Depth strategy” with seven layers of controls. A board approves a “Zero Trust transformation roadmap.” But neither document contains a threat taxonomy. Neither maps controls to generic vulnerabilities. Neither specifies which clusters are addressed at which layer, or which attack paths the architecture is designed to interrupt. The strategy has no diagnosis underneath it.
Control-first regulation is logically incoherent — a prescription without a diagnosis. The same logic applies to control-first strategy. “We implement Zero Trust” and “We practice Defense in Depth” are prescriptions. The diagnosis — “these are the generic vulnerabilities we are reducing, mapped to these clusters, under these CSF functions” — is almost universally absent. Not because organizations are lazy, but because the industry has never provided a stable, cause-based threat taxonomy to populate the diagnosis with.
This is the fundamental circularity: NIST CSF 2.0 correctly places GOVERN first and requires threat identification (ID.RA) before control selection (PR). But ID.RA has no standardized threat taxonomy to reference. MITRE ATT&CK provides techniques, not a strategic threat classification. STRIDE mixes causes with outcomes. ENISA’s threat landscape categories change year to year. So organizations skip the diagnostic step and jump straight to strategic labels — “Zero Trust,” “Defense in Depth,” “Assume Breach” — that sound like strategies but are structurally empty without the threat taxonomy that would give them operational meaning.
TLCTC fills that gap. With 10 clusters mapped to generic vulnerabilities, a GOVERN-level decision like “Zero Trust” can be decomposed into specific control objectives per cluster per CSF function. Without it, the strategy remains a label. This is not a theoretical concern — it is the reason two organizations can both claim to “implement Zero Trust” while having entirely different control architectures, covering entirely different threat clusters, with no way to compare or audit them.
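A sketch of what "comparable and auditable" means in practice: two hypothetical organizations, both claiming to implement Zero Trust, with invented cluster coverage sets. The set difference is the audit finding.

```python
# Illustrative sketch: two organizations both "implement Zero Trust".
# Coverage sets are invented; the point is that cluster-level coverage
# makes the two architectures comparable at all.

org_a = {"#1", "#4", "#5"}    # e.g. ZTNA + MFA + micro-segmentation
org_b = {"#4", "#7", "#10"}   # e.g. MFA + allowlisting + vendor ZTNA

def by_number(clusters):
    """Sort cluster labels numerically (#1 .. #10)."""
    return sorted(clusters, key=lambda c: int(c[1:]))

shared = by_number(org_a & org_b)
only_a = by_number(org_a - org_b)
only_b = by_number(org_b - org_a)

print("Shared coverage:   ", shared)
print("Only org A covers: ", only_a)
print("Only org B covers: ", only_b)
```

Without the cluster vocabulary, both organizations report the same sentence ("we implement Zero Trust") while the comparison above is impossible to even state.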
Why the Blurriness is Structurally Inevitable
The reason “Zero Trust” can never be semantically sharp is now clear. It is not positioned at any single node in the Bow-Tie. It is a meta-principle — a GOVERN-level directive that constrains how PROTECT and DETECT controls are deployed across multiple clusters simultaneously, and how RESPOND and RECOVER controls limit damage after compromise. It spans both sides of the pivot point. In TLCTC terminology:
- It is not a threat (not a cause-side noun).
- It is not a single control (not one cell in the 10×6 matrix).
- It is not an outcome (not a consequence-side concept).
- It is not even confined to one side of the Bow-Tie — it straddles the SRE pivot.
It is a design constraint on the control architecture — a statement about how seriously you take domain boundaries, responsibility spheres, and the assumption that any → transition in an attack path might succeed.
Practical Consequence: The Questions That Should Be Asked
“If a CISO says ‘we’re implementing Zero Trust,’ the TLCTC-informed follow-up question is: Which clusters does this target, under which CSF functions, with what control objectives — and are you addressing both sides of the Bow-Tie?”
If they cannot answer that, they are buying a label, not executing a strategy. The same test applies to Defense in Depth: which clusters does each layer address? Where are the structural gaps? Which attack paths can still traverse all layers without encountering a relevant control?
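The Defense in Depth test can be sketched the same way: for each layer, which clusters does it address, and which clusters meet no layer at all? The three layers and their cluster assignments below are hypothetical.

```python
# Sketch of the layer-coverage test for Defense in Depth. Layers and
# their cluster assignments are hypothetical examples.

layers = {
    "perimeter firewall": {"#5", "#6"},
    "EDR": {"#7"},
    "IAM / MFA": {"#4"},
}

all_clusters = {f"#{i}" for i in range(1, 11)}
covered = set().union(*layers.values())
gaps = sorted(all_clusters - covered, key=lambda c: int(c[1:]))

print("Clusters no layer addresses:", gaps)
```

Seven layers on a slide can still leave most of the 10 clusters structurally unaddressed; the query above is the one-line audit that exposes it.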
These are not unreasonable questions. They are the questions that GOVERN requires and that any mature risk management framework assumes have been answered before control selection begins. The fact that most organizations cannot answer them is not a failure of those organizations — it is a failure of the cybersecurity industry to provide the foundational taxonomy that makes the answers possible.
This is the Kreinz Thesis in its sharpest form: the industry has spent decades building control frameworks, compliance checklists, and strategic labels on top of a foundation that skips threat identification entirely. Zero Trust is the most visible symptom. Defense in Depth is another. Every regulation that mandates controls without identifying threats is another. The prescription is elaborate. The diagnosis is missing.
TLCTC exists to provide that diagnosis. With it, “Zero Trust” stops being a marketing term and becomes an auditable, decomposable architectural constraint — one that can be mapped to specific generic vulnerabilities, evaluated for coverage gaps, and compared across organizations. Without it, “Zero Trust” remains what it has been for the last decade: a label that means whatever the person using it needs it to mean.
References
- Kreinz, B. Top Level Cyber Threat Clusters (TLCTC), White Paper V2.0. tlctc.net
- NIST SP 800-207, Zero Trust Architecture (August 2020)
- NIST Cybersecurity Framework 2.0 (February 2024)
Opinions are the author’s own. Cite TLCTC properly when re-using definitions. Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).