The Problem PASTA Solves — and the One It Doesn't
The Process for Attack Simulation and Threat Analysis (PASTA), developed by Tony UcedaVélez and Marco Morana, remains one of the most thorough threat modeling methodologies available. Its seven-stage process bridges business objectives with technical analysis, forcing cross-functional collaboration between executives, architects, and security engineers. No other methodology so deliberately connects what the business values to what an attacker might do.
But PASTA has an architectural gap that becomes visible under operational pressure: it tells you how to find threats without giving you a standardized language to name them. Each PASTA engagement can produce a different vocabulary for the same underlying attack mechanism. Two teams running PASTA on similar applications may identify the same threat but describe it in incompatible terms — "credential compromise" vs. "authentication bypass" vs. "session hijacking" — labels that all point to the same generic vulnerability yet are impossible to aggregate, compare, or trend.
This is where the Top Level Cyber Threat Clusters (TLCTC) framework, Version 2.0, provides what PASTA structurally lacks: a cause-oriented, axiomatic taxonomy of exactly 10 threat clusters, each anchored to a single generic vulnerability. TLCTC doesn't compete with PASTA. It completes it.
What follows is a stage-by-stage walkthrough of PASTA's methodology, showing precisely where and how TLCTC integration transforms each stage from process into precision.
Stage I — Define the Objectives
What PASTA Does
Stage I establishes the business context: business objectives, security requirements, compliance mandates, and data classification. This is PASTA's strategic foundation — the stage where security decisions get linked to what the organization actually cares about.
Where the Semantic Gap Opens
PASTA Stage I asks teams to define security objectives, but provides no standardized structure for expressing what threatens those objectives. Teams typically produce statements like "protect customer data from unauthorized access" or "ensure service availability." These are valid goals, but they describe desired outcomes without identifying the causal mechanisms that could defeat them.
The consequence: two organizations with identical risk profiles produce incomparable Stage I outputs because their threat language is ad hoc.
How TLCTC Integrates
TLCTC anchors Stage I objectives to specific generic vulnerabilities. Instead of abstract goal statements, teams map their assets against TLCTC's 10 clusters and ask: which of these generic vulnerabilities apply to our scope?
A web application handling payment data, for example, is structurally exposed to:
- #1 Abuse of Functions — the application's legitimate query and administrative functions can be turned against it
- #2 Exploiting Server — server-side code flaws could allow unintended data-to-code transitions
- #3 Exploiting Client — client-side code flaws in the browser context
- #4 Identity Theft — credentials protecting user and admin sessions
- #9 Social Engineering — operators and users are human
- #10 Supply Chain — third-party libraries, payment processor APIs, CDN dependencies
This isn't a threat list — it's a structural audit of the generic vulnerabilities present in the system's architecture. The result is a Stage I output that is comparable across engagements, aggregatable across business units, and directly mappable to control strategies.
TLCTC adds to Stage I: A standardized threat scope checklist derived from generic vulnerabilities rather than ad hoc brainstorming.
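The structural audit above can be sketched as data. This is a minimal illustration, not an official TLCTC artifact: the six cluster names are the ones this article applies to the payment example (the framework defines 10 in total), and the trait-to-cluster triggers are assumptions introduced here for demonstration.

```python
# Stage I scoping sketch: which generic vulnerabilities apply to this scope?
# Cluster names are the six used in this article; trait triggers are
# illustrative assumptions, not an official TLCTC mapping.
TLCTC_CLUSTERS = {
    1: "Abuse of Functions",
    2: "Exploiting Server",
    3: "Exploiting Client",
    4: "Identity Theft",
    9: "Social Engineering",
    10: "Supply Chain",
}

TRAIT_TO_CLUSTERS = {  # hypothetical structural traits of the system
    "server_side_code": [2],
    "client_rendering": [3],
    "authenticated_sessions": [4],
    "admin_functions": [1],
    "human_operators": [9],
    "third_party_dependencies": [10],
}

def scope_clusters(traits):
    """Return the sorted applicable cluster IDs for a system's traits."""
    applicable = set()
    for trait in traits:
        applicable.update(TRAIT_TO_CLUSTERS.get(trait, []))
    return sorted(applicable)

# The payment web application exhibits all six traits:
payment_app = list(TRAIT_TO_CLUSTERS)
print(scope_clusters(payment_app))  # [1, 2, 3, 4, 9, 10]
```

Because the checklist is data rather than prose, two business units running the same scoping produce byte-identical, aggregatable outputs.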
Stage II — Define the Technical Scope
What PASTA Does
Stage II documents the technical architecture: system components, dependencies, network topology, data flows, API endpoints, and the overall attack surface. The goal is a comprehensive technical map of what needs protecting.
Where the Semantic Gap Opens
PASTA excels at enumerating components but lacks a vocabulary for annotating trust boundaries and cross-domain transitions. Where does your organization's responsibility end and a vendor's begin? Where does a client-side context become a server-side context? These boundaries are architecturally critical because attacks that cross them often change character — a vulnerability that's manageable within your domain may become catastrophic when it crosses into a third-party trust relationship.
How TLCTC Integrates
TLCTC V2.0 introduces the Domain Boundary Operator, a formal notation for marking where an attack path crosses a responsibility sphere:
||[context][@Source → @Target]||
Applied to Stage II, this means every trust boundary in the architecture diagram gets an explicit annotation. The payment application's architecture might include:
||[api][@OurApp → @PaymentProcessor]|| — API boundary to payment vendor
||[cdn][@OurApp → @CDNProvider]|| — content delivery dependency
||[browser][@Server → @ClientBrowser]|| — server-to-client context shift
||[lib][@OurApp → @OpenSourceLib]|| — third-party library trust
Each boundary marks a potential #10 Supply Chain entry point and a shift in Responsibility Sphere — the operational concept that determines who owns which controls. This transforms Stage II from a flat component inventory into a semantically annotated architecture where trust transitions are explicit and auditable.
TLCTC adds to Stage II: Domain Boundary Operators and Responsibility Spheres that make trust transitions explicit in the architecture documentation.
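A small helper makes the annotation mechanical. This is a sketch assuming the exact bracket layout shown above; the boundary names come from this section's payment example.

```python
def domain_boundary(context, source, target):
    """Render a Domain Boundary Operator in the notation used above:
    ||[context][@Source → @Target]||"""
    return f"||[{context}][@{source} → @{target}]||"

# The payment application's four trust boundaries from this section:
boundaries = [
    domain_boundary("api", "OurApp", "PaymentProcessor"),
    domain_boundary("cdn", "OurApp", "CDNProvider"),
    domain_boundary("browser", "Server", "ClientBrowser"),
    domain_boundary("lib", "OurApp", "OpenSourceLib"),
]
print(boundaries[0])  # ||[api][@OurApp → @PaymentProcessor]||
```

Generating the operators programmatically keeps the architecture diagram's trust annotations consistent across revisions and reviewers.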
Stage III — Application Decomposition and Analysis
What PASTA Does
Stage III decomposes the application into its functional components: user roles, permissions, data entry points, assets, trust levels, and the data flows between them. This is where the application's internal logic becomes visible for threat analysis.
Where the Semantic Gap Opens
Decomposition produces data flow diagrams (DFDs) and interaction models, but without a systematic way to annotate what kind of threat applies at each interaction point. Every data entry point is potentially vulnerable — but vulnerable to what, exactly? PASTA leaves this to analyst judgment, which introduces inconsistency.
How TLCTC Integrates
Each interaction point in the decomposed application maps to one or more TLCTC generic vulnerabilities based on what structurally occurs there:
- Where data enters and gets processed by server-side code — the generic vulnerability of code flaws in server-side software is present → relevant cluster: #2 Exploiting Server
- Where the application returns data that the client renders — the generic vulnerability of code flaws in client-side software is present → relevant cluster: #3 Exploiting Client
- Where authentication occurs — the generic vulnerability of insufficient protection of identity credentials is present → relevant cluster: #4 Identity Theft
- Where the application calls external APIs — the generic vulnerability of organizational dependencies on third-party components is present → relevant cluster: #10 Supply Chain
- Where the application performs actions on behalf of users — the generic vulnerability of insufficient restriction on scope of legitimate functionality is present → relevant cluster: #1 Abuse of Functions
This annotation transforms the DFD from a technical diagram into a threat-annotated architecture. Every data flow and entry point now carries an explicit causal label, not an outcome-based guess.
A critical distinction TLCTC enforces at this stage: the difference between data staying data (#1 Abuse of Functions) and data transitioning to code (#2/#3 Exploiting Server/Client). A SQL query input field that allows data extraction is #1. The same field, if it permits command execution through a code flaw like xp_cmdshell, becomes #2. This isn't a semantic nicety — it determines whether your control strategy targets function restriction or code-level patching.
TLCTC adds to Stage III: Cause-based threat annotation of every entry point and data flow, enforcing the data-stays-data vs. data-becomes-code distinction.
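The annotation step can be captured as a simple lookup table. The cluster assignments below mirror the ones this article uses in its consolidated Stage III summary (login form → #4, search → #1/#2, document upload → #3/#7, API calls → #10); the dictionary structure itself is an illustrative sketch.

```python
# Threat-annotated entry points for the running payment-app example.
# Assignments follow this article's Stage III summary; the structure is
# an illustrative sketch, not a TLCTC-mandated format.
ENTRY_POINT_CLUSTERS = {
    "login_form":      [4],      # credentials at stake -> #4 Identity Theft
    "search_function": [1, 2],   # data stays data (#1) vs. code flaw (#2)
    "document_upload": [3, 7],   # client exploit (#3) or malware delivery (#7)
    "api_calls":       [10],     # third-party dependency -> #10 Supply Chain
}

def clusters_for(entry_point):
    """Which generic vulnerabilities apply at this entry point?"""
    return ENTRY_POINT_CLUSTERS.get(entry_point, [])

print(clusters_for("search_function"))  # [1, 2]
```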
Stage IV — Threat Analysis
What PASTA Does
Stage IV is PASTA's core analytical stage: identifying credible threats based on threat intelligence, analyst insight, known attack patterns, and attack trees. Teams build threat libraries and model how adversaries might approach the system.
Where the Semantic Gap Opens
This is where PASTA's lack of standardized taxonomy bites hardest. Threat libraries are built from diverse sources — MITRE ATT&CK techniques, vendor advisories, internal intelligence — each using different terminology. The result is what TLCTC's diagnostic framework calls semantic diffusion: the same underlying attack mechanism described in incompatible terms across sources, making aggregation and trending impossible.
A phishing attack that delivers a malicious document exploiting a PDF reader vulnerability is simultaneously:
- MITRE ATT&CK: T1566.001 (Spearphishing Attachment) → T1203 (Exploitation for Client Execution)
- Kill Chain: Delivery → Exploitation
- STRIDE: Spoofing + Elevation of Privilege
- Common parlance: "phishing attack," "zero-day exploit," "malware infection"
Each description captures a different facet. None identifies the root cause. All are correct. None are compatible.
How TLCTC Integrates
TLCTC provides the unifying causal layer. The same scenario decomposes into an unambiguous attack path:
#9 →[~24h] #3 →[LoC] #4 →[~5m] #1 →[LoC]
Reading this notation:
- #9 Social Engineering — an attacker psychologically manipulates a user into opening a malicious document. The generic vulnerability exploited: human psychological susceptibilities and trust.
- →[~24h] — approximately 24 hours pass (Attack Velocity Δt) between the phishing delivery and the user opening the document. This Velocity Class (VC-1, strategic pace) tells defenders that human-speed controls like awareness training and email filtering are structurally viable at this edge.
- #3 Exploiting Client — the malicious document triggers an unintended data-to-code transition in the PDF reader (client-side code flaw). The exploit executes code through a software vulnerability — this is not #7 Malware, because the code execution occurs through an unintended bug, not through the system's designed execution capability.
- →[LoC] — the exploit harvests credentials stored on the system. This is a Data Risk Event: Loss of Confidentiality. The credentials are acquired here — they are data being exfiltrated, not yet used for impersonation.
- #4 Identity Theft — the attacker uses the stolen credentials to authenticate as the legitimate user. The generic vulnerability exploited: insufficient protection of identity credentials. Note the dual nature of credentials: acquisition was a consequence (LoC) of #3; use is always #4.
- →[~5m] — approximately 5 minutes between credential use and the next action (VC-3, operational pace). At this velocity, automated detection becomes critical — human response alone is too slow.
- #1 Abuse of Functions — the attacker, now authenticated, abuses the application's legitimate administrative functions to extract sensitive data. Data stays data throughout. No foreign code executes.
This single notation encodes the causal mechanism at every step, the temporal dynamics between steps, and the data risk events produced — all in a format that is identical regardless of which team, which tool, or which organization documents it.
TLCTC adds to Stage IV: A universal, cause-oriented attack path notation with velocity annotations that replaces semantic diffusion with precision.
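Because the notation is fixed, it is also machine-parseable. The following sketch infers a token grammar from the examples in this article — clusters written `#N` and edges written `→[...]` carrying either a Δt like `~24h` or a data risk event — and splits a path into structured tokens. The grammar is reverse-engineered here, not quoted from a TLCTC specification.

```python
import re

# Token grammar inferred from this article's examples: clusters "#N" and
# edges "→[...]" (either a Δt like "~24h" or a data risk event like "LoC").
TOKEN = re.compile(r"#(\d+)|→\[([^\]]+)\]")

def parse_attack_path(path):
    """Split a TLCTC attack path string into an alternating list of
    ('cluster', n) and ('edge', label) tokens."""
    tokens = []
    for m in TOKEN.finditer(path):
        if m.group(1) is not None:
            tokens.append(("cluster", int(m.group(1))))
        else:
            tokens.append(("edge", m.group(2)))
    return tokens

path = "#9 →[~24h] #3 →[LoC] #4 →[~5m] #1"
print(parse_attack_path(path))
# [('cluster', 9), ('edge', '~24h'), ('cluster', 3), ('edge', 'LoC'),
#  ('cluster', 4), ('edge', '~5m'), ('cluster', 1)]
```

Any two tools implementing this grammar reconstruct the identical step sequence from the same string — the determinism the notation is designed to provide.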
Stage V — Weakness and Vulnerability Analysis
What PASTA Does
Stage V correlates identified threats with known vulnerabilities in the application — CVEs, CWEs, misconfigurations, and design weaknesses. The goal is to determine which threats from Stage IV can actually be realized given the system's specific weaknesses.
Where the Semantic Gap Opens
PASTA correlates threats to vulnerabilities, but without a causal taxonomy, the correlation logic is implicit. Teams map CVE-2024-XXXX to "the phishing threat" without a rigorous framework for explaining why that CVE enables that specific attack step.
Additionally, PASTA's outcome-oriented framing often conflates the vulnerability (the weakness) with the consequence (what happens). "Data breach vulnerability" tells you nothing about whether the mechanism is function abuse, a code exploit, credential theft, or a supply chain compromise — each of which requires fundamentally different controls.
How TLCTC Integrates
TLCTC enforces a strict separation:
- Generic Vulnerability (what's structurally present) → maps to a specific cluster
- Specific Weakness (CVE/CWE) → the concrete instance of that generic vulnerability
- Data Risk Event (what happens when exploited) → LoC, LoI, LoAc, or LoAv
For Stage V, this means every CVE and CWE gets mapped to its corresponding TLCTC cluster based on what generic vulnerability it represents:
- CVE in a PDF parser (memory corruption enabling code execution in client software) → #3 Exploiting Client — generic vulnerability: code flaws in client-side software
- CWE-89 SQL Injection → #1 Abuse of Functions if data stays data (SELECT-based extraction), or #2 Exploiting Server if the injection enables code execution (xp_cmdshell, INTO OUTFILE → web shell)
- CWE-798 Hard-coded Credentials → #4 Identity Theft — generic vulnerability: insufficient protection of identity credentials
The four Data Risk Events in TLCTC V2.0 replace the traditional CIA triad with operationally precise categories:
- Loss of Confidentiality (LoC) — unauthorized information disclosure
- Loss of Integrity (LoI) — unauthorized modification of data
- Loss of Accessibility (LoAc) — data exists but cannot be used (e.g., ransomware encryption — the system is operational, files are inaccessible)
- Loss of Availability (LoAv) — system or service is down entirely (e.g., DDoS, caused exclusively by #6 Flooding Attack)
The LoAc/LoAv distinction is not academic. Ransomware causes Loss of Accessibility — the system remains available, but data is encrypted and unusable. A DDoS attack causes Loss of Availability — the system itself is down. Conflating these leads to fundamentally wrong control strategies: backups address LoAc, capacity scaling addresses LoAv. Different causes, different events, different controls.
TLCTC adds to Stage V: Causal CVE/CWE mapping to generic vulnerabilities and the four-part Data Risk Event model (LoC/LoI/LoAc/LoAv) that replaces outcome-blurred CIA with operational precision.
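The four-event model can be encoded directly, pairing each event with the mitigation family it calls for. The LoAc → backups and LoAv → capacity pairings come from the text above; the LoC and LoI pairings are illustrative assumptions added for symmetry.

```python
# The four Data Risk Events and the mitigation family each one calls for.
# LoAc->backups and LoAv->capacity follow the text above; the LoC and LoI
# control pairings are illustrative assumptions.
RISK_EVENTS = {
    "LoC":  ("Loss of Confidentiality", "DLP, encryption at rest"),
    "LoI":  ("Loss of Integrity",       "signing, immutable audit logs"),
    "LoAc": ("Loss of Accessibility",   "backups, tested restore"),
    "LoAv": ("Loss of Availability",    "capacity scaling, DDoS absorption"),
}

def mitigation_for(event):
    """Name the event and the control family that actually addresses it."""
    name, controls = RISK_EVENTS[event]
    return f"{name}: {controls}"

print(mitigation_for("LoAc"))  # Loss of Accessibility: backups, tested restore
print(mitigation_for("LoAv"))  # Loss of Availability: capacity scaling, DDoS absorption
```

Keeping the event-to-control pairing explicit in data is what prevents the ransomware/DDoS conflation described above from leaking into the mitigation plan.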
Stage VI — Attack Modeling and Simulation
What PASTA Does
Stage VI is PASTA's centerpiece: simulating realistic attacks by mapping viable attack paths through the vulnerabilities identified in Stage V. Teams build attack trees, test exploit viability, and determine which paths actually lead to compromise.
Where the Semantic Gap Opens
Attack trees in PASTA are powerful but organization-specific. Two teams modeling the same attack scenario produce different tree structures with different node labels. The resulting attack simulations cannot be compared, shared as threat intelligence, or aggregated across an industry.
More critically, PASTA's attack trees lack a temporal dimension. They show that an attack path exists but not how fast it executes. A path that takes weeks gives defenders fundamentally different intervention opportunities than one that completes in seconds.
How TLCTC Integrates
TLCTC V2.0 transforms PASTA Stage VI in two ways.
First: Standardized Attack Path Notation. Every attack simulation result encodes as a universally readable sequence:
#9 →[~24h] #3 →[LoC] #4 →[~5m] #1 →[LoC]
This notation is deterministic. Any TLCTC-trained analyst reading this path reconstructs the same attack: social engineering delivers an exploit for a client-side code flaw, which produces credential exposure (LoC), followed by credential use (identity theft), then function abuse for data exfiltration (LoC). The vocabulary is fixed. The interpretation is unambiguous.
Second: Attack Velocity (Δt) and Velocity Classes. TLCTC V2.0 annotates every edge in the attack path with the time interval between steps, organized into four Velocity Classes:
| Velocity Class | Time Range | Implication |
|---|---|---|
| VC-1 (Strategic) | Days to months | Human-dependent controls viable. Awareness training, manual review, threat hunting. |
| VC-2 (Tactical) | Hours to days | Blended controls needed. Human oversight with automated alerting. |
| VC-3 (Operational) | Minutes to hours | Automated detection required. SOC analyst triage, automated containment triggers. |
| VC-4 (Machine-speed) | Seconds or less | Only automated controls viable. Human-in-the-loop is structurally impossible. |
In our running example:
- #9 →[~24h] — VC-1: email filtering and awareness training are structurally viable
- #3 →[LoC] — the exploit's internal execution is VC-4 (machine-speed), but it's a single atomic step, not an inter-step interval
- #4 →[~5m] — VC-3: automated detection (SIEM correlation, behavioral analytics) is required; waiting for a human analyst is too slow
- #1 →[LoC] — the final data extraction may occur at VC-3 or VC-4 depending on method
Velocity annotations answer a question PASTA's attack trees cannot: which controls are structurally capable of intervening at each point in the chain? A control that requires 30 minutes of human analysis is useless at a VC-4 edge. This insight directly informs control investment decisions in Stage VII.
TLCTC adds to Stage VI: Deterministic attack path notation and velocity-classified temporal annotations that transform attack simulations into shareable, velocity-aware threat intelligence.
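The velocity logic above can be sketched as two small functions. The numeric cutoffs (one day, one hour, one minute) are assumptions chosen to match the table's qualitative ranges, not values defined by TLCTC V2.0.

```python
# Δt classifier for the Velocity Classes in the table above. Numeric cutoffs
# are assumptions matching the qualitative ranges "days+", "hours",
# "minutes", "seconds or less".
def velocity_class(delta_t_seconds):
    if delta_t_seconds >= 86_400:   # days to months -> strategic
        return "VC-1"
    if delta_t_seconds >= 3_600:    # hours to days -> tactical
        return "VC-2"
    if delta_t_seconds >= 60:       # minutes to hours -> operational
        return "VC-3"
    return "VC-4"                   # seconds or less -> machine-speed

def control_viable(edge_delta_t_seconds, control_response_seconds):
    """A control can intervene only if it responds within the edge's Δt."""
    return control_response_seconds <= edge_delta_t_seconds

print(velocity_class(24 * 3600))        # VC-1: the ~24h phishing edge
print(velocity_class(5 * 60))           # VC-3: the ~5m credential-use edge
print(control_viable(5 * 60, 30 * 60))  # False: 30-min human triage is too slow
```

The last line makes the Stage VII investment argument computable: a control whose response time exceeds an edge's Δt is structurally incapable of intervening there, regardless of its quality.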
Stage VII — Risk and Impact Analysis
What PASTA Does
Stage VII closes the loop: analyzing residual risk, defining countermeasures, and prioritizing mitigations based on business impact. PASTA's risk-centric approach ensures that countermeasure investment aligns with what the business values most.
Where the Semantic Gap Opens
PASTA Stage VII prescribes risk reduction but provides no standardized structure for mapping countermeasures to specific attack mechanisms. Teams determine mitigations based on their own interpretation of which controls address which threats — again, an analyst-dependent process that produces non-comparable outputs.
How TLCTC Integrates
TLCTC provides two structures that transform Stage VII from ad hoc mitigation into systematic control mapping.
The Bow-Tie Model separates every risk scenario into three lanes:
    THREAT CLUSTERS              EVENT                 DATA RISK EVENTS
    (Causes)                     (Critical Moment)     (Effects)

    #9, #3, #4, #1  ──────────>  Compromise  ───────>  LoC, LoI, LoAc, LoAv
                                      │
              ← PREVENTION            │            MITIGATION →
         (controls aligned            │            (controls aligned
          to clusters)                │             to consequences)
Left side: preventive controls target specific generic vulnerabilities (cluster by cluster). Right side: mitigative controls target specific data risk events. This separation prevents the common mistake of conflating "prevent ransomware" (meaningless — ransomware is an outcome) with "prevent #9 entry, detect #7 execution, recover from LoAc" (actionable — each element maps to specific controls).
The Cluster-NIST CSF Matrix maps each TLCTC cluster against NIST CSF 2.0 functions (Govern, Identify, Protect, Detect, Respond, Recover), producing a 10×6 control mapping. Not every cell carries equal weight — the matrix is deliberately sparse for some combinations and dense for others. For our running example:
| Cluster in Path | Protect | Detect | Respond |
|---|---|---|---|
| #9 Social Engineering | Email filtering, awareness training, URL sandboxing | Phishing report analysis, email anomaly detection | Quarantine mailbox, block sender domain |
| #3 Exploiting Client | Client patching, sandboxing, application hardening | EDR behavioral monitoring, exploit detection signatures | Isolate endpoint, forensic imaging |
| #4 Identity Theft | MFA, credential rotation, privileged access management | Impossible travel detection, anomalous login alerting | Session termination, forced password reset |
| #1 Abuse of Functions | Least privilege, function scoping, query parameterization | Data access anomaly detection, DLP monitoring | Access revocation, audit trail review |
Every row maps to a specific generic vulnerability. Every control is justified by the causal mechanism it addresses. Nothing is guessed. Nothing is outcome-labeled.
TLCTC adds to Stage VII: The Bow-Tie model's cause/consequence separation and a cluster-specific control matrix that produces auditable, repeatable mitigation strategies.
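The Bow-Tie's two lanes translate naturally into two differently keyed lookup tables: prevention keyed by threat cluster (cause), mitigation keyed by data risk event (effect). The control names below are drawn from the matrix and bow-tie discussion in this section; the data layout itself is an illustrative sketch.

```python
# Bow-Tie sketch for the running example. Prevention keys on clusters
# (causes); mitigation keys on data risk events (effects). Control names
# come from this section; the structure is illustrative.
PREVENTION = {   # left lane: cluster -> preventive controls
    9: ["email filtering", "awareness training"],
    3: ["client patching", "sandboxing"],
    4: ["MFA", "privileged access management"],
    1: ["least privilege", "query parameterization"],
}
MITIGATION = {   # right lane: data risk event -> mitigative controls
    "LoC": ["breach notification", "forensic investigation"],
}

def controls_for_path(clusters, events):
    """Collect the prevention and mitigation lanes for one attack path."""
    prevent = [c for k in clusters for c in PREVENTION.get(k, [])]
    mitigate = [c for e in events for c in MITIGATION.get(e, [])]
    return prevent, mitigate

prevent, mitigate = controls_for_path([9, 3, 4, 1], ["LoC"])
print(prevent[:2])  # ['email filtering', 'awareness training']
```

The two keys enforce the separation the text argues for: no control can be filed under an outcome label like "ransomware", only under a cause (cluster) or an effect (event).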
The Running Example: Complete Integration
Here is the full PASTA + TLCTC analysis for our phishing-to-data-exfiltration scenario, consolidated:
- PASTA Stage I (Objectives): Protect payment card data and customer PII. Applicable TLCTC clusters: #1, #2, #3, #4, #9, #10.
- PASTA Stage II (Technical Scope): Web application with payment API integration. Domain boundaries: ||[api][@OurApp → @PaymentProcessor]||, ||[browser][@Server → @ClientBrowser]||.
- PASTA Stage III (Decomposition): Entry points annotated — login form (#4), search function (#1/#2), document upload (#3/#7), API calls (#10).
- PASTA Stage IV (Threat Analysis) — TLCTC Attack Path:
#9 →[~24h] #3 →[LoC] #4 →[~5m] #1 →[LoC]
- PASTA Stage V (Vulnerability Analysis): CVE-2025-XXXX in PDF renderer → #3; CWE-862 Missing Authorization on admin endpoint → #1; Weak session tokens → #4.
- PASTA Stage VI (Attack Simulation): Path validated. Velocity analysis shows VC-1 entry (human-speed, trainable), VC-3 lateral movement (requires automated detection), VC-3/VC-4 exfiltration (requires automated DLP).
- PASTA Stage VII (Risk & Impact):
- Bow-Tie analysis:
- Left (Prevention): email filtering (#9), client patching (#3), MFA (#4), least privilege (#1)
- Center (Detection): EDR (#3→LoC), SIEM correlation (#4 anomalous login), DLP (#1→LoC)
- Right (Mitigation): incident response, breach notification, forensic investigation
- Data Risk Events: LoC at credential acquisition (via #3), LoC at data exfiltration (via #1).
What Changes When PASTA Speaks TLCTC
The integration isn't cosmetic. It produces measurable operational improvements:
- Comparability. Two teams running PASTA on similar applications produce outputs that can be compared, because the threat vocabulary is fixed. "#9 → #3 → #4 → #1" means the same thing in every organization.
- Aggregation. Enterprise-wide threat trending becomes possible. If 60% of incidents across business units start with #9 (Social Engineering), that's a board-level insight derived from standardized data — not from manually reconciling incompatible threat libraries.
- Velocity-informed investment. Velocity annotations answer the question executives actually need answered: "Can our current controls respond fast enough?" A VC-4 edge where only VC-2 controls exist is a quantifiable gap, not a qualitative risk rating.
- Machine-readable intelligence. TLCTC V2.0's JSON architecture means every PASTA output can be encoded in a standardized schema for automated ingestion by SIEMs, threat intelligence platforms, and incident response tools. The attack path isn't a slide deck — it's structured data.
- Causal clarity. "We were hit by ransomware" becomes "Entry via #9 Social Engineering, execution via #7 Malware, encryption via #1 Abuse of Functions, resulting in Loss of Accessibility." Each element maps to a specific control failure. Each control failure maps to a specific generic vulnerability. The causal chain is complete and actionable.
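To make the machine-readability point concrete, here is one way the running attack path could round-trip through JSON. This schema — the field names and layout — is a hypothetical illustration, not TLCTC V2.0's official JSON architecture.

```python
import json

# Hypothetical JSON encoding of the running attack path. The schema (field
# names, nesting) is an illustrative assumption, not the official TLCTC one.
attack_path = {
    "notation": "#9 →[~24h] #3 →[LoC] #4 →[~5m] #1 →[LoC]",
    "steps": [
        {"cluster": 9, "name": "Social Engineering"},
        {"edge": {"delta_t": "~24h", "velocity_class": "VC-1"}},
        {"cluster": 3, "name": "Exploiting Client"},
        {"edge": {"event": "LoC"}},
        {"cluster": 4, "name": "Identity Theft"},
        {"edge": {"delta_t": "~5m", "velocity_class": "VC-3"}},
        {"cluster": 1, "name": "Abuse of Functions"},
        {"edge": {"event": "LoC"}},
    ],
}

encoded = json.dumps(attack_path, ensure_ascii=False, indent=2)
decoded = json.loads(encoded)
assert decoded["steps"][0]["cluster"] == 9  # round-trips for SIEM ingestion
```

Once the path lives in structured form, a SIEM or threat intelligence platform can aggregate, diff, and trend attack paths the same way it handles any other telemetry.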
Conclusion: PASTA Provides the Process, TLCTC Provides the Precision
PASTA is a well-designed threat modeling methodology. Its seven-stage process ensures that threat analysis stays connected to business objectives, technical reality, and risk-based decision making. That process value doesn't diminish.
What TLCTC adds is the semantic infrastructure PASTA was designed to carry but never received: a fixed, cause-oriented taxonomy where every threat maps to exactly one generic vulnerability, every attack path encodes temporal dynamics, every data risk event is precisely categorized, and every control links to a specific causal mechanism.
PASTA tells you how to think about threats.
TLCTC tells you what to call them.
Together, they produce threat models that are not just thorough — they are standardized, comparable, velocity-aware, and machine-readable. That's the operational evolution threat modeling has been waiting for.
The TLCTC framework is available at tlctc.net under CC BY 4.0 licensing. PASTA was developed by Tony UcedaVélez and Marco Morana, detailed in "Risk Centric Threat Modeling" (Wiley, 2015).