Chapter 8

Platforms, Privacy, Vaults, Execution Surfaces

Paulo H Leocadio

Introduction

As artificial intelligence systems transition from passive analytical tools to agentic participants in cybersecurity operations, the primary source of risk shifts from model capability to execution context. The question is no longer whether an AI system can reason effectively about a threat, but where, how, and under what constraints that reasoning can materialize into action.

Contemporary discourse on agentic AI concentrates on model architecture (e.g., Transformers, diffusion pipelines, reinforcement learning, or prompt chaining) while treating the underlying platform as a neutral substrate. This assumption is incorrect. Platforms encode governance decisions through identity boundaries, execution privileges, persistence models, and observability constraints. These properties ultimately determine whether autonomy remains bounded or escalates into systemic exposure.

This chapter reframes platforms, privacy mechanisms, secret vaults, and execution surfaces as control infrastructure rather than auxiliary services. In operational cybersecurity environments, these elements function as the final authority layer between cognition and consequence. Models may generate hypotheses, plans, or recommended actions, but platforms determine what can be executed, with what authority, for how long, and under what auditability guarantees.

Privacy, in this context, is not merely a compliance obligation. It is a stability requirement. Persistent context accumulation, uncontrolled memory retention, and opaque data reuse introduce feedback loops that amplify misalignment and undermine forensic accountability. Systems that cannot enforce forgetting cannot be governed reliably, regardless of model sophistication.

Similarly, vaults and secrets management systems must be understood as trust boundaries, not storage conveniences.
By externalizing authority (e.g., credentials, signing keys, privileged tokens) away from cognitive components, architectures preserve a critical separation between reasoning and power. This separation enables revocation, replay, and audit, even under adversarial or degraded conditions.

Execution surfaces represent the narrow interfaces where intent becomes impact. Poorly defined or overly permissive execution paths create nonlinear risk escalation, particularly in automated or semi-automated response scenarios. Constraining these surfaces through simulation, scope limitation, and policy mediation is essential to preventing cascading failure.

The central argument of this chapter is that safe agentic behavior does not emerge from smarter models, but from stricter infrastructure. Autonomy must be shaped by platform-enforced boundaries that are inspectable, revocable, and independently governed. In cybersecurity operations, where errors propagate at machine speed and stakes are measured in real-world damage, this distinction is not theoretical.

By examining platforms, privacy controls, vault architectures, and execution surfaces as interlocking components of a cognitive control plane, this chapter establishes the infrastructural preconditions required for deploying agentic AI systems responsibly in security-critical environments.

8.1 Platforms as cognitive substrates

Platforms have long been recognized as active participants in system behavior rather than neutral execution environments, particularly through their enforcement of identity, isolation, and privilege boundaries (Saltzer and Schroeder 1975, Lampson 2004). Large-scale cloud architectures further encode governance decisions by centralizing control planes that regulate execution, persistence, and observability independently of application logic (Armbrust, et al. 2010, Burns, et al. 2016).
Containerization and orchestration frameworks formalize these constraints by separating workload description from execution authority, enabling reproducible and policy-governed runtime behavior (Merkel 2014, Pahl 2015).

In security-critical systems, execution context is as consequential as algorithmic capability, as privilege expansion, lateral movement, and persistence failures frequently originate at the platform layer rather than within application code (NIST Joint Task Force 2020). Observability infrastructures, such as distributed tracing and structured telemetry, transform platforms into control instruments by enabling deterministic replay, forensic reconstruction, and post hoc accountability (Sigelman, et al. 2010, Google SRE 2016). Conversely, platforms optimized primarily for elasticity or developer convenience often collapse identity, execution, and persistence into a single operational plane, increasing systemic exposure under automation (AWS 2024, Microsoft 2023).

These findings support the architectural position that agentic behavior is bounded not only by model design but also by platform-enforced constraints on authority, visibility, and actionability. As autonomy increases, the platform increasingly functions as a cognitive substrate, defining not only where computation occurs, but under what conditions reasoning may safely transition into action (Leveson 2012, NIST 2023).

The platform is not a neutral substrate; it is a measurable attack surface. To move beyond qualitative descriptions of security, we define the Execution Attack Surface \(\left(A_{s}\right)\) as a quantifiable ratio of exposure:

\begin{equation}
A_{s}=\frac{\sum \text{Authorized System Calls}}{\sum \text{Total Available Kernel Interfaces}}\nonumber
\end{equation}

By calculating the \(A_{s}\) ratio for a containerized agent, we establish a hard limit on agency.
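The ratio can be computed mechanically from a workload's system-call allowlist. A minimal sketch in Python, where the allowlist contents and the size of the kernel interface are illustrative assumptions rather than measured values:

```python
# Sketch: estimating the Execution Attack Surface (A_s) of a containerized
# agent from its seccomp-style syscall allowlist. The total syscall count
# and the example allowlist are illustrative assumptions.
TOTAL_KERNEL_SYSCALLS = 450  # assumed approximate size of the kernel interface

def attack_surface(authorized_syscalls: set, total: int = TOTAL_KERNEL_SYSCALLS) -> float:
    """Ratio of authorized system calls to the total available kernel interface."""
    return len(authorized_syscalls) / total

# A minimal allowlist for a read-mostly inference workload (hypothetical).
allowlist = {"read", "write", "close", "mmap", "munmap", "exit_group",
             "futex", "clock_gettime", "openat", "fstat"}

a_s = attack_surface(allowlist)
print(f"A_s = {a_s:.3f}")  # 10 / 450 ≈ 0.022
```

Under these assumed numbers the workload stays well inside the conservative over-privilege bound used in this section, and the check can be run as an admission-time gate rather than a post-incident audit.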
As established by Saltzer and Schroeder (1975), the protection of information requires that every program and user operate with the minimum set of privileges necessary to complete their jobs. In this architecture, an agent with an \(A_{s}>0.05\) is considered over-privileged. The threshold is intentionally conservative, reflecting the principle of minimal kernel exposure for autonomous workloads rather than an empirically fixed constant.

Platforms are not neutral deployment environments. They encode assumptions about identity, persistence, isolation, observability, and control, which directly shape the behavior of autonomous and semi-autonomous systems (Saltzer and Schroeder 1975, Lampson 2004). In cognitive defense architectures, the platform serves as the substrate on which reasoning, action, and auditability are constrained (Armbrust, et al. 2010, Burns, et al. 2016). What an agent is allowed to do is therefore inseparable from where it is allowed to exist (Leveson 2012, NIST 2023).

Cloud platforms designed for elastic, multi-tenant operation (such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform) offer elastic compute, managed services, and operational convenience at scale (Armbrust, et al. 2010, AWS 2024, Microsoft 2023, Google SRE 2016). These characteristics are necessary but insufficient for agentic systems operating in security-critical environments (Leveson 2012). Elasticity describes how resources expand and contract; it does not describe how authority is bounded (Saltzer and Schroeder 1975, Lampson 2004). For cognitive defense, the decisive property of a platform is whether it supports bounded execution with enforceable guarantees (NIST Joint Task Force 2020, Miller, Yee and Shapiro 2003).

A platform suitable for agentic cybersecurity must make constraints explicit rather than implicit (Burns, et al. 2016, Levan 2024, The Kubernetes Authors 2026).
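In orchestrated environments, making constraints explicit means declaring them in the workload specification itself. A hypothetical Kubernetes Pod fragment illustrates the pattern; the names and image are placeholders, and the fields shown are standard Kubernetes security settings, not a complete hardening profile:

```yaml
# Hypothetical Pod fragment: constraints declared explicitly rather than
# assumed implicitly. Names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: analyst-agent
spec:
  serviceAccountName: agent-readonly     # identity scoped to this workload
  automountServiceAccountToken: false    # no ambient credentials
  containers:
    - name: reasoner
      image: registry.example/agent:1.2  # placeholder image reference
      securityContext:
        allowPrivilegeEscalation: false  # no lateral privilege expansion
        readOnlyRootFilesystem: true     # persistence is not an ambient default
        capabilities:
          drop: ["ALL"]                  # minimal kernel exposure
```

Each field corresponds to one of the constraints discussed in this section: identity scoping, bounded privilege, and governed persistence become reviewable configuration rather than runtime behavior.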
Identity must be scoped to execution contexts rather than embedded in application logic (NIST Joint Task Force 2020, Miller, Yee and Shapiro 2003). Isolation must separate reasoning processes from action surfaces to prevent lateral privilege expansion (Saltzer and Schroeder 1975, Lampson 2004). Persistence must be optional and governed, not an ambient default (Carlini, et al. 2021, Shokri, et al. 2017). Observability must be continuous and structured, enabling the reconstruction of decisions after the fact rather than retrospective interpretation (Sigelman, et al. 2010, Google SRE 2016).

True isolation requires the hardware itself to become a party to the security contract. Protecting sensitive inference workloads increasingly requires cryptographic isolation at the hardware level. Trusted Execution Environments (TEEs) provide this capability by encrypting memory and restricting host-level inspection during model execution (Skyflow Inc. 2023).

```yaml
# Canon Specification: Sanctuary Execution Policy
execution_surface_policy:
  isolation_level: "Hardware-Encrypted-TEE"  # Cryptographic hardware isolation
  secret_provider: "Vault-Sidecar"           # No local secret storage
  network_egress: "Restricted-VLAN-Only"     # No public internet access
  persistence: "None-Ephemeral-Only"         # Disk wipes on task completion
```

In a production environment (e.g., Google Cloud's Confidential GKE), this materializes as encrypted memory at the silicon level. The agent's thought process remains opaque even to the host operating system.
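A policy of this kind is only meaningful if it is enforced before a workload is admitted. A minimal sketch of such a pre-launch guard, assuming the policy has already been parsed into a dictionary; the enforcement hook and field values mirror the specification above but the admission mechanism itself is hypothetical:

```python
# Sketch: a pre-launch guard that rejects agent workloads whose execution
# policy has drifted from the canonical specification. The admission hook
# is hypothetical; field names mirror the policy above.
REQUIRED_POLICY = {
    "isolation_level": "Hardware-Encrypted-TEE",
    "secret_provider": "Vault-Sidecar",
    "network_egress": "Restricted-VLAN-Only",
    "persistence": "None-Ephemeral-Only",
}

def admit(policy: dict) -> bool:
    """Admit a workload only if every control matches the canonical policy."""
    violations = [k for k, v in REQUIRED_POLICY.items() if policy.get(k) != v]
    if violations:
        print("rejected:", ", ".join(violations))
        return False
    return True

# A drifted configuration (local persistence enabled) is rejected.
admit({**REQUIRED_POLICY, "persistence": "Local-Disk"})
```

The design choice is deliberate: policy violations fail closed at admission time, so a misconfigured agent never reaches the execution surface at all.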
This requirement is formalized in the Execution Surface Policy above. Within this framing, cognitive defense systems require platforms that can:

- Enforce identity-scoped execution, ensuring that actions are always attributable to a defined role, policy, or trust domain (NIST Joint Task Force 2020).
- Provide strong isolation between reasoning and action, preventing cognitive components from directly exercising authority (Saltzer and Schroeder 1975, Miller, Yee and Shapiro 2003).
- Support deterministic replay and forensic reconstruction, allowing decisions to be examined under the same constraints in which they were made (Sigelman, et al. 2010, Doshi-Velez and Kim 2017).
- Treat telemetry as a first-class control signal rather than a debugging artifact, closing the loop between observation and governance (Google SRE 2016, Leveson 2012).

Platforms optimized primarily for throughput, latency reduction, or cost efficiency tend to blur these boundaries (AWS 2024, Microsoft 2023). Convenience abstractions often collapse identity, execution, and persistence into a single operational plane, increasing the difficulty of containment once autonomy is introduced (NIST Joint Task Force 2020). In contrast, platforms that expose fine-grained controls over execution context, privilege boundaries, and auditability enable architectures in which autonomy is contained by design rather than corrected after failure (Leveson 2012, NIST 2023).

In this sense, platforms do not merely host cognitive systems; they define the conditions under which cognition can safely operate (Armbrust, et al. 2010, Leveson 2012). The substrate precedes the agent (Lampson 2004).

Figure 8.1 depicts a layered, platform-mediated constraint architecture that separates cognition from authority. Identity, execution, and persistence are managed by infrastructure-level control planes rather than being embedded within agent logic.
Execution privileges, identity scope, and persistence are enforced at the platform level, which enables bounded autonomy and auditability.
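The separation of cognition from authority described above can be sketched as a mediating layer: the agent proposes an action, and a platform-level mediator checks the action against the role's scope, executes only if permitted, and records a structured audit event that supports later reconstruction. All role names, action names, and the event schema are hypothetical:

```python
# Sketch: platform-level mediation between reasoning and action. The agent
# never holds execution authority; the mediator checks scope and emits a
# structured audit event for every proposal. All names are illustrative.
import time

AUTHORIZED_SCOPE = {"quarantine_host", "revoke_token"}  # per-role action allowlist
AUDIT_LOG = []  # structured events enabling post hoc reconstruction

def mediate(role: str, action: str, target: str) -> bool:
    """Decide whether a proposed action may execute, and log the decision."""
    allowed = action in AUTHORIZED_SCOPE
    AUDIT_LOG.append({
        "ts": time.time(),
        "role": role,
        "action": action,
        "target": target,
        "allowed": allowed,
    })
    return allowed  # the mediator, not the agent, holds execution authority

mediate("responder-agent", "quarantine_host", "host-17")  # in scope: allowed
mediate("responder-agent", "wipe_disk", "host-17")        # out of scope: denied
```

Because every proposal, allowed or denied, produces an audit record at the infrastructure layer, the decision trail exists independently of the agent's own state, which is what makes replay and forensic reconstruction possible.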