
The Logical Foundations of TLCTC

Why TLCTC is not a new logical model — but a domain-specific application of established scientific principles to a field that has stubbornly resisted formalization.

Bernhard Kreinz
TLCTC v2.0
Abstract

Three epistemological approaches to cyber threat categorization exist. Empirical: observe attacks and catalog what happened — this is MITRE ATT&CK. Heuristic: propose categories by intuition and professional experience — this is STRIDE. Analytical: derive categories from the structural properties of the thing being attacked — this is TLCTC.

The distinction matters. Empirical frameworks grow with every new observation but never reach structural closure. Heuristic frameworks are useful but unfalsifiable — you cannot test whether the categories are correct. Only an analytical framework can claim completeness and mutual exclusivity by construction, because the categories are not chosen but derived.

TLCTC does not invent new logic. It imports proven principles from safety engineering, systems theory, classical logic, and philosophy of science — and applies them to the one engineering discipline that somehow skipped the formalization step.

The Entry Point: Two Languages, One Framework

Every security organization already operates in two layers without knowing it. When a board member asks "What are our top threats?" and a SOC analyst asks "What triggered this alert?" — they are not asking the same question at different zoom levels. They are asking structurally different questions that require structurally different vocabularies.

TLCTC makes this explicit. The Strategic Layer (#1 through #10, plain language, management-facing) and the Operational Layer (TLCTC-XX.YY notation, machine-readable, engineering-facing) are two interfaces into the same framework. This is not a TLCTC invention — it is how every mature engineering discipline works. Medicine has "heart problem" and ICD-10 codes. Electrical engineering has circuit theory and Maxwell's equations. Cybersecurity is the one field that never made this separation.
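The two interfaces can be sketched as one lookup with two renderings. This is a minimal sketch, not the framework's normative definition: the cluster names below follow common TLCTC presentations (the article itself ties #4 to identity, #9 to human psychology, #10 to third-party trust), and the XX.YY reading as cluster.sub-pattern is an illustrative assumption.

```python
# One framework, two interfaces. Cluster names and the XX.YY semantics are
# assumptions for illustration, not quoted from the normative TLCTC text.

STRATEGIC = {4: "Identity Theft", 9: "Social Engineering", 10: "Supply Chain Attack"}

def operational_id(cluster: int, sub: int) -> str:
    """Engineering-facing, machine-readable code: TLCTC-XX.YY."""
    return f"TLCTC-{cluster:02d}.{sub:02d}"

def strategic_label(code: str) -> str:
    """Management-facing label recovered from the same identifier."""
    cluster = int(code.removeprefix("TLCTC-").split(".")[0])
    return f"#{cluster} {STRATEGIC[cluster]}"
```

Same framework, two renderings: `operational_id(9, 1)` yields the machine-readable `TLCTC-09.01`, and `strategic_label("TLCTC-09.01")` recovers the board-facing `#9 Social Engineering`.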

That recognition is the entry door. What follows is why it holds up.

Nine Principles — One Descent

Principle I: Causal Separation

The Bow-Tie model — proven in aviation, petrochemical, and nuclear safety — separates causes from consequences with a central pivot event: Loss of Control. TLCTC applies this to cybersecurity. Threats live on the cause side. Data breaches, ransomware, downtime live on the consequence side. The pivot is System Compromise. Without this boundary, you cannot distinguish prevention from damage limitation — and the entire control mapping becomes structurally ambiguous.
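The cause/consequence boundary can be made concrete in a few lines. A minimal sketch, assuming nothing beyond what the paragraph states: the pivot event and the example causes and consequences come from the article; the data structure itself is illustrative, not a normative TLCTC artifact.

```python
from dataclasses import dataclass, field

# Minimal Bow-Tie sketch: causes (threat clusters) left of the pivot,
# consequences right of it. Example entries follow the article.

@dataclass
class BowTie:
    causes: list  # threat clusters, e.g. "#9", "#7"
    pivot: str = "System Compromise (Loss of Control)"
    consequences: list = field(
        default_factory=lambda: ["data breach", "ransomware impact", "downtime"]
    )

def control_side(goal: str) -> str:
    """Prevention acts before the pivot; damage limitation acts after it."""
    return "cause side" if goal == "prevention" else "consequence side"

bt = BowTie(causes=["#9", "#7"])
```

The point of the structure is the boundary itself: a control either keeps the pivot from occurring (`control_side("prevention")`) or limits damage once it has (`control_side("damage limitation")`), never ambiguously both.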

Tags: Bow-Tie Model · Loss of Control · Cause vs. Consequence · Safety Engineering · NIST CSF Functions · Axiom III

Principle II: Actor–Threat Separation

The threat intelligence industry is organized around actors: APT28, Lazarus Group, FIN7. TLCTC says: irrelevant for classification. A phishing email exploits human psychology (#9) regardless of whether the sender is a nation-state or a teenager. The generic vulnerability does not change with the attacker's biography. Actors are metadata — operationally important, structurally irrelevant. If your classification changes depending on who is attacking, you are classifying actors, not threats.
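The rule can be expressed as a function signature. A sketch only: the method-to-cluster pairs are the ones the article itself uses (phishing → #9, SQL injection → #2, keylogger → #7); the function shape is illustrative.

```python
# Actor-agnostic classification sketch: the actor travels as metadata but is
# never consulted. Method→cluster pairs are those stated in the article.

METHOD_TO_CLUSTER = {"phishing": 9, "sql_injection": 2, "keylogger": 7}

def classify(method: str, actor: str = "unknown") -> int:
    """Return the threat cluster; 'actor' deliberately plays no role."""
    return METHOD_TO_CLUSTER[method]
```

The design choice is the whole argument: if `actor` ever appeared on the right-hand side of the return, the function would be classifying actors, not threats.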

Tags: Axiom V · Actor-Agnostic · Generic Vulnerability · Threat Intelligence · APT Attribution

Principle III: The Thought Experiment

TLCTC does not catalog observed attacks. It imagines the entire IT landscape as a single object and asks: in how many fundamentally distinct ways can this object be exploited? The answer falls out of the object's inherent properties — its designed functionality, its server-side code, its client-side code, its identity mechanisms, its communication channels, its finite capacity, its code execution capability, its physical form, its human operators, its third-party dependencies. Ten aspects, ten generic vulnerabilities, ten clusters. Not by convention — by construction. This is analytical decomposition, the same method physics uses to derive conservation laws from symmetry properties.
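The derivation can be tabulated: the ten aspects listed above, each yielding one generic vulnerability and one cluster. The aspect wording is taken from the paragraph; the cluster names follow common TLCTC presentations and should be checked against the framework's normative text rather than read as quoted from it.

```python
# Ten intrinsic aspects → ten clusters. Aspects are quoted from the article;
# cluster names are an assumption based on common TLCTC presentations.

ASPECT_TO_CLUSTER = {
    "designed functionality":    (1,  "Abuse of Functions"),
    "server-side code":          (2,  "Exploiting Server"),
    "client-side code":          (3,  "Exploiting Client"),
    "identity mechanisms":       (4,  "Identity Theft"),
    "communication channels":    (5,  "Man in the Middle"),
    "finite capacity":           (6,  "Flooding Attack"),
    "code execution capability": (7,  "Malware"),
    "physical form":             (8,  "Physical Attack"),
    "human operators":           (9,  "Social Engineering"),
    "third-party dependencies":  (10, "Supply Chain Attack"),
}
```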

Tags: Analytical Decomposition · Intrinsic Properties · Systematic Derivation · Completeness by Construction · Noether's Theorem (analogy)

Principle IV: Partition Logic

The ten clusters form a proper partition over the adversarial cyber threat space: exhaustive (no gaps) and mutually exclusive (no overlaps). Every atomic attack step maps to exactly one cluster. This is classical set-theoretic logic applied to a domain that has been operating without it. The practical consequence is decisive: if your categories overlap, your control mappings become ambiguous — you cannot determine which control addresses which threat. If they have gaps, threats fall through unaddressed. The partition eliminates both failure modes.
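Both failure modes reduce to one check. A sketch in plain set logic, using the article's own step-to-cluster examples; the checker is illustrative, not a TLCTC tool.

```python
# Partition-check sketch: exhaustive (every step classified) and mutually
# exclusive (exactly one cluster per step). 0 clusters = gap, >1 = overlap.

def is_partition(steps, classify) -> bool:
    return all(len(classify(s)) == 1 for s in steps)

MAPPING = {"phishing email": {9}, "sql injection": {2}, "keylogger drop": {7}}
lookup = lambda s: MAPPING.get(s, set())
```

`is_partition(MAPPING, lookup)` holds for the mapped steps; feed it an unmapped step and the gap is detected, feed it a step mapped to two clusters and the overlap is detected. Ambiguous control mapping is exactly the `>1` branch.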

Tags: Axiom VI · Set Theory · Mutual Exclusivity · Exhaustive Coverage · Disjoint Sets · Control Mapping

Principle V: Attack Paths as Causal Sequences

Attack paths and event chains are established concepts — Kill Chain, MITRE ATT&CK sequences, attack trees. But every existing implementation sequences outcomes or techniques, not causes. A Kill Chain says "Delivery → Exploitation → Installation" — phase labels, not causal categories. TLCTC sequences generic vulnerabilities: #9 → #7 → #1 → #4. Each node identifies which structural weakness was exploited at that step. That is what makes control selection deterministic — at each position in the chain, you know exactly which class of defense applies.
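The deterministic control selection the paragraph describes can be sketched directly. The chain #9 → #7 → #1 → #4 is the article's; the control-class names are illustrative placeholders, not TLCTC-mandated controls.

```python
# Attack-path sketch: a causal sequence of generic vulnerabilities, each node
# naming which structural weakness was exploited. Control names are
# illustrative assumptions.

ATTACK_PATH = [9, 7, 1, 4]  # the chain given in the text

CONTROL_CLASS = {
    9: "awareness training / mail filtering",
    1: "least privilege on designed functions",
    7: "execution control / endpoint detection",
    4: "credential binding / MFA",
}

def controls_for(path):
    """One class of defense per chain node, determined by the node's cluster."""
    return [CONTROL_CLASS[c] for c in path]
```

Because each node is a cause category rather than a phase label, `controls_for(ATTACK_PATH)` is a lookup, not a judgment call.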

Tags: Attack Path Notation · Enabling Cluster · Directed Sequences · Causal Chain · Kill Chain (contrast) · Attack Velocity Δt

Principle VI: System Risk Event vs. Data Risk Event

Two categorically different event types occur during an attack, and the industry conflates them constantly. A System Risk Event is system compromise — the Bow-Tie pivot, Loss of Control. A Data Risk Event is a consequence: Loss of Confidentiality, Integrity, Accessibility, or Availability. "We had a breach" — is that system compromise or data loss? Which caused which? TLCTC enforces the distinction. Critically, Data Risk Events can occur during the attack path, not just at the end. Credential theft mid-sequence produces a Loss of Confidentiality right there — the data was already lost, even before system compromise occurs.
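The distinction is easy to encode as two types rather than one. A minimal sketch: the event names and the mid-path credential-theft example follow the article; the timeline positions are illustrative.

```python
from dataclasses import dataclass

# Two categorically different event types. A DRE can occur mid-path,
# before the System Risk Event (the Bow-Tie pivot) ever happens.

@dataclass
class SystemRiskEvent:
    description: str = "System Compromise (Loss of Control)"

@dataclass
class DataRiskEvent:
    loss_type: str  # "LoC", "LoI", "LoAc", or "LoAv"
    step: int       # position in the attack path

timeline = [
    DataRiskEvent("LoC", step=1),  # credential theft: confidentiality lost here
    SystemRiskEvent(),             # the pivot comes later, if at all
]
```

"We had a breach" is ambiguous precisely because it names neither type; the two-type model forces the distinction at the point of recording.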

Tags: DRE Notation · LoC / LoI / LoAc / LoAv · System Compromise · Bow-Tie Pivot · Consequence-Side Controls

Principle VII: The Dual Nature of Credentials

This is where most intuitions break — and where the partition proves itself under pressure. A password is always the same string. But its operational role flips. During acquisition, the credential is data — something stolen, intercepted, copied. The cluster is determined by the method: #9 for phishing, #7 for a keylogger, #2 for SQL injection. During application, the credential is a system element — a key being turned. That is always #4. Same object, different generic vulnerability, different cluster, different controls. If the framework survives this stress test — and it does — the partition holds.
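Context-dependent typing is straightforward to state as code. A sketch built only from the rules in the paragraph: acquisition classifies by method (phishing #9, keylogger #7, SQL injection #2), application is always #4.

```python
# Dual nature of credentials: the same string classifies by phase.
# Acquisition → cluster of the method used; application → always #4.

ACQUISITION_METHOD_CLUSTER = {"phishing": 9, "keylogger": 7, "sql_injection": 2}

def credential_cluster(phase: str, method: str = "") -> int:
    if phase == "acquisition":  # the credential is data being stolen
        return ACQUISITION_METHOD_CLUSTER[method]
    if phase == "application":  # the credential is a key being turned
        return 4
    raise ValueError(f"unknown phase: {phase}")
```

Note that `credential_cluster("application")` takes no method argument at all: how the credential was obtained is irrelevant once it is being used.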

Tags: Axiom X · Context-Dependent Typing · Acquisition vs. Application · R-CRED Rule · #4 Identity Theft

Principle VIII: Falsifiability

This is the differentiator. No other cybersecurity framework specifies the conditions under which its own classifications would be wrong. TLCTC does. Every classification can be tested: "Is this really #10? Remove the third-party trust relationship — does the attack still work? If yes, it is not #10." This is Popperian demarcation applied to threat taxonomy — the boundary test is a counterfactual conditional. A framework that cannot be tested cannot be scientific. STRIDE is unfalsifiable. ATT&CK is extensible but has no built-in verification mechanism. TLCTC has both.
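The counterfactual boundary test can be written out as executable logic. A sketch of the #10 test quoted above; modeling an attack as a set of required conditions is an illustrative assumption.

```python
# Counterfactual boundary test: remove the third-party trust relationship
# and re-evaluate. If the attack still works, the #10 classification is
# falsified. 'required' models what the attack depends on (an assumption).

def still_works(required: set, removed: str) -> bool:
    return removed not in required

def is_cluster_10(required: set) -> bool:
    """#10 only if removing third-party trust breaks the attack."""
    return not still_works(required, "third-party trust")
```

A genuine supply-chain attack (`{"third-party trust", "code execution"}`) passes the test; an attack that survives the removal (`{"human error"}`) is thereby shown not to be #10. The test does not prove a classification right so much as specify how it would be wrong.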

Tags: Karl Popper · Boundary Tests · Counterfactual Logic · Demarcation Criterion · Kreinz Thesis · #10 Supply Chain Test

Principle IX: Technological Invariance

Every networked interaction — regardless of protocol, architecture, or technology stack — reduces to a requester-responder dyad. TLCTC calls this the Universal Interaction Model. It is an invariance claim in the formal sense: the structural relationship holds across all transformations of the underlying technology. Cloud, IoT, OT, AI-driven systems — the ten generic vulnerabilities do not change. This is what gives the framework longevity. A taxonomy that must be revised every time a new platform emerges was never a taxonomy — it was a snapshot.
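The invariance claim can be shown in miniature. A sketch only: the protocols and endpoints below are ordinary examples chosen for illustration, not a list taken from TLCTC.

```python
from dataclasses import dataclass

# Requester–responder dyad: the protocol varies freely; the two-role
# structure does not. Example endpoints/protocols are illustrative.

@dataclass(frozen=True)
class Interaction:
    requester: str
    responder: str
    protocol: str  # transforms across technologies; the dyad is invariant

DYADS = [
    Interaction("browser", "web server", "HTTPS"),
    Interaction("sensor", "broker", "MQTT"),
    Interaction("client app", "model API", "gRPC"),
]
```

Three stacks, one structure: every entry reduces to the same requester-responder relation, which is what lets the ten generic vulnerabilities survive each new platform unchanged.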

Tags: Axiom II · Client-Server Abstraction · Invariance · Technology-Agnostic · Systems Theory · Universal Interaction Model

Full Circle

Return to the opening image. The board member and the SOC analyst are asking different questions — but those questions now have a shared structural foundation. The two-layer architecture holds because underneath it, causal separation provides direction, the thought experiment provides derivation, the partition provides clarity, and falsifiability provides verification.

None of these principles are new. Bow-Tie logic is decades old. Set-theoretic partitioning is classical mathematics. Falsifiability is twentieth-century philosophy of science. Systems-theoretic invariance is standard engineering.

The question was never whether these principles could be applied to cybersecurity. The question is why it took this long for someone to do it.