As generative AI lowers the marginal cost of producing polished scientific prose, institutions increasingly rely on AI-detection systems to preserve authorship norms and epistemic standards. Yet detection and evasion are strategic complements: stronger detection induces stronger evasion, and stylometric policing can redirect effort away from truth-directed verification toward presentation-directed laundering. Building on the verification-budget perspective, we develop an infinite-horizon repeated game between a generator and a detector. The generator chooses an AI-use rate, verification effort, and evasion effort; the detector chooses screening and audit intensities. Our central result is a structural distortion theorem: in the hidden-use equilibrium, tighter stylometric detection increases evasion effort but does not increase verification effort. An arms race in detection therefore cannot, by itself, purchase credibility. We then characterize a disclosure equilibrium sustained by random audits and future penalties. In that equilibrium, institutions regulate not raw AI use but under-verified AI use, coupling admissible AI assistance to a verification schedule. This yields three theoretical implications. First, there are parameter regions in which positive AI reliance is socially optimal, so blanket bans and hard "AI-rate" caps are inefficient. Second, random-audit disclosure dominates stylometric policing whenever the social harm of unreliability exceeds the author's private expected sanction and actors are sufficiently patient. Third, provenance and verification are normatively separable from truth: machine-generated text may be truth-evaluable, but responsibility for its use must be assigned to human and institutional actors. We apply these results to policy and ethics, arguing for process-based governance, contestable provenance systems, and a shift from purity norms to accountable verification.
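To make the distortion mechanism concrete, the following is a minimal numerical sketch of the generator's best responses under illustrative functional forms chosen here purely for exposition (the general model does not impose them): detection probability p(d, e) = d·exp(−e) varies with detection intensity d and evasion effort e but is independent of verification effort v, which enters only through a residual-unreliability term exp(−v). All parameter values are likewise illustrative.

```python
# Minimal numerical sketch of the structural distortion result, under ASSUMED
# functional forms chosen for exposition: detection probability p(d, e) is
# independent of verification effort v, which affects only the reliability term.
import numpy as np

B, F, H = 1.0, 2.0, 1.5   # illustrative: benefit of AI use, sanction, harm of unreliability
c_e, c_v = 1.0, 1.0       # illustrative quadratic effort-cost coefficients
a = 1.0                   # AI-use rate held fixed for this comparative static

def generator_payoff(v, e, d):
    """Assumed stage payoff: AI benefit minus effort costs, expected sanction,
    and expected unreliability cost. Note that v never enters the detection term."""
    p = d * np.exp(-e)    # stylometric detection probability
    q = np.exp(-v)        # chance an unverified error survives
    return B * a - 0.5 * c_v * v**2 - 0.5 * c_e * e**2 - a * p * F - a * q * H

grid = np.linspace(0.0, 3.0, 601)   # common effort grid (step 0.005)
for d in (0.2, 0.5, 0.8):           # sweep detection intensity
    payoff = generator_payoff(grid[:, None], grid[None, :], d)  # rows: v, cols: e
    i_v, i_e = np.unravel_index(np.argmax(payoff), payoff.shape)
    print(f"d={d:.1f}  evasion e*={grid[i_e]:.2f}  verification v*={grid[i_v]:.2f}")
```

Under these parameter values the sweep yields optimal evasion e* rising (approximately 0.30, 0.57, 0.75) while optimal verification v* stays fixed near 0.72: tighter detection buys evasion, not verification. The driver is the assumed separability of v from p(d, e); the theorem establishes the analogous comparative static in the full equilibrium, where d is itself a best response.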
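The patience condition behind the disclosure equilibrium has the usual repeated-game form. In generic notation introduced here only for illustration, let g > 0 be the one-shot gain from hiding rather than disclosing AI use, μ the audit probability, F the sanction on detected hidden use, ΔV the per-period continuation-value loss imposed after detection, and δ the common discount factor. Disclosure is then incentive-compatible when
\[
g \;\le\; \mu F \;+\; \frac{\delta}{1-\delta}\,\mu\,\Delta V,
\]
which holds for any μ > 0 and ΔV > 0 once δ is close enough to 1. This is the sense in which random audits and future penalties, rather than stylometric screening, sustain disclosure.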