For more than half a century, computing has been understood through the stored-program abstraction: programs encode explicit control flow, and machines execute them deterministically. This model has shaped hardware architecture, programming languages, and software engineering practice. Contemporary AI systems, particularly foundation models and agentic architectures, operate in ways that increasingly strain this abstraction. Behavior is no longer determined solely by fixed code paths; it is conditioned by high-level intent specifications interpreted by large-scale learned models at runtime. This article argues that we are witnessing a shift in computational emphasis, and introduces Intent-Conditioned Computation (ICC) as a complementary abstraction for understanding it. In ICC, intent serves as the primary interface, models function as general-purpose policy engines, and execution unfolds as a probabilistic, context-sensitive process. Crucially, ICC does not replace classical computing: modern AI systems stratify the two paradigms into complementary layers, with deterministic infrastructure handling reproducibility-critical tasks while intent-conditioned models handle flexible, context-sensitive reasoning. We ground this argument in a concrete open-source case study, the ArchHarness enterprise architecture tool, which demonstrates within a single workflow how stored-program and intent-conditioned computation coexist, divide labor, and interact. We discuss implications for system design, evaluation, and governance, and identify five open research directions for the AI community.
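The layering described above can be illustrated with a minimal sketch. All names here are hypothetical (this is not the ArchHarness API): an intent-conditioned planner, stubbed in place of a real foundation-model call, proposes an action, and a deterministic stored-program layer validates and applies it so that reproducibility guarantees live entirely in classical code.

```python
import json

def plan_change(intent: str, model=None) -> dict:
    """Intent-conditioned layer: a learned model maps a high-level intent
    to a proposed action. Stubbed here with a fixed proposal; a real
    system would invoke a foundation model at this point."""
    if model is not None:
        return model(intent)
    return {"action": "add_component", "name": "audit-log", "reason": intent}

def apply_change(proposal: dict, registry: dict) -> dict:
    """Stored-program layer: deterministic validation and execution.
    The same proposal always yields the same result, so this layer
    carries the reproducibility-critical guarantees."""
    allowed = {"add_component", "remove_component"}
    if proposal.get("action") not in allowed:
        raise ValueError(f"rejected action: {proposal.get('action')}")
    updated = dict(registry)
    updated[proposal["name"]] = {"reason": proposal["reason"]}
    return updated

registry = {}
proposal = plan_change("ensure every service emits audit events")
registry = apply_change(proposal, registry)
print(json.dumps(registry))
```

The division of labor is the point: swapping the planner for a genuinely stochastic model changes which proposals arrive, but never how a given proposal is validated and applied.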
Foundation models are increasingly deployed as execution engines mediating the mapping from symbolic inputs to concrete actions. Traditional metaphors that treat models as programs, interpreters, or knowledge bases fail to account for their stochastic, latent, and training-dependent execution behavior. We propose Probabilistic Program Execution Semantics (PPES), a formal framework in which foundation models are interpreted as probabilistic interpreters: runtime programs correspond to prompts, execution states include both observable outputs and hidden latent variables, and transitions are governed by a learned Markov kernel K_θ defined in the Giry framework of measurable spaces. Execution is resolved at inference time, a distinct semantic phase absent from classical computing. PPES provides a rigorous account of key phenomena in foundation-model-based systems. We develop small-step probabilistic semantics formulated as a sub-probability transition measure (not a set-theoretic relation), define probabilistic reachability over execution traces, and prove four structural theorems: almost-sure termination, trace monotonicity, Lipschitz continuity of the semantics under parameter perturbation (in total variation, not KL divergence), and the Markov property of the latent state. We also establish a notion of ε-observational equivalence on prompts and prove a congruence theorem for sequential composition. Version note. This preprint (v2) revises v1 in four respects: (1) K_θ is now formally defined as a Giry-style Markov kernel with explicit measurability conditions; (2) the small-step rule is reformulated to avoid placing a sampling operation in an SOS premise; (3) the smoothness property uses total variation distance with an explicit Lipschitz condition; (4) all four structural properties are now stated as theorems with complete proofs. An extended theoretical development appears in [4]; engineering implications and the ArchHarness case study appear in [5].
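A toy sketch can make the PPES picture concrete. The kernel below is not a foundation model but a biased coin, and all names are illustrative assumptions: states pair an observable output with a hidden latent component, a Markov kernel samples successor states at inference time, and probabilistic reachability over traces is estimated by Monte Carlo rather than computed from the formal semantics.

```python
import random

def kernel(state, theta, rng):
    """Toy Markov kernel K_theta: given the current execution state
    (observable output, hidden latent), sample one successor state.
    A real foundation model would induce this kernel via its sampler."""
    output, latent = state
    token = "a" if rng.random() < theta else "b"
    return (output + token, latent + 1)

def run(prompt, theta, steps, seed):
    """Inference-time execution: the prompt is the runtime program,
    resolved into a trace by iterating the kernel. Different seeds
    yield different traces from the same prompt."""
    rng = random.Random(seed)
    state = (prompt, 0)
    trace = [state]
    for _ in range(steps):
        state = kernel(state, theta, rng)
        trace.append(state)
    return trace

def reach_prob(prompt, theta, steps, event, n=2000):
    """Monte Carlo estimate of probabilistic reachability: the
    probability that some state along the trace satisfies `event`."""
    hits = sum(
        any(event(s) for s in run(prompt, theta, steps, seed))
        for seed in range(n)
    )
    return hits / n

# Probability that the output ever ends in "aa" within four steps.
p = reach_prob("prompt:", theta=0.5, steps=4,
               event=lambda s: s[0].endswith("aa"))
print(round(p, 2))
```

For a fair coin and four steps the exact reachability is 0.5 (eight of the sixteen length-4 strings contain "aa"), so the estimate should land near that value; the sketch deliberately keeps the latent component trivial, whereas in PPES it is where the Markov property of the hidden state does its work.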