The growing reliance on AI-generated content across industries necessitates robust methods for controlling language model outputs to ensure quality, relevance, and adherence to ethical guidelines. This research introduces a novel game-theoretic framework that provides a structured approach to controllable text generation, enabling strategic steering of language model outputs through adaptive prompt interventions. The study employed the Mistral language model, using concepts of Nash equilibrium and feedback loops to dynamically adjust prompt strategies and optimize the balance among content alignment, diversity, and coherence. Experimental results demonstrated that different prompt strategies distinctly influenced the generated text, with direct prompts enhancing relevance and interrogative prompts promoting creative expression. Case studies further illustrated the framework's practical applications, showcasing its adaptability across varied text generation tasks. A comparative analysis against traditional control methods highlighted the superiority of the game-theoretic approach in achieving high-quality, controlled outputs. These findings demonstrate the framework's potential to enhance AI-driven content generation, with significant implications for human-AI collaboration, automated content creation, and the ethical deployment of AI technologies.
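The equilibrium-seeking feedback loop described above can be sketched in miniature. The snippet below is a hypothetical illustration only, not the paper's implementation: it assumes a two-strategy prompt set (direct vs. interrogative) with made-up alignment/diversity/coherence scores, and uses a multiplicative-weights update, a standard dynamic for approximating equilibrium play in repeated games, to settle on a mixed strategy over prompt types.

```python
import math

# Hypothetical prompt strategies and placeholder quality scores; in practice
# these scores would come from evaluating the language model's actual output.
BASE_SCORES = {
    #                (alignment, diversity, coherence)
    "direct":        (0.9,       0.4,       0.8),  # strong relevance
    "interrogative": (0.6,       0.9,       0.7),  # more creative variety
}
WEIGHTS = (0.5, 0.3, 0.2)  # assumed trade-off between the three criteria


def payoff(strategy, probs):
    """Score a strategy given the current mixed strategy `probs`.

    The diversity benefit shrinks as a strategy is overused, so neither
    pure strategy dominates and the dynamics settle on a mixture.
    """
    align, div, coh = BASE_SCORES[strategy]
    div *= 1.0 - probs[strategy]  # congestion effect on diversity
    w_align, w_div, w_coh = WEIGHTS
    return w_align * align + w_div * div + w_coh * coh


def equilibrium_mix(rounds=200, eta=0.1):
    """Feedback loop: multiplicative-weights update toward equilibrium."""
    probs = {s: 1.0 / len(BASE_SCORES) for s in BASE_SCORES}
    for _ in range(rounds):
        raw = {s: p * math.exp(eta * payoff(s, probs)) for s, p in probs.items()}
        total = sum(raw.values())
        probs = {s: w / total for s, w in raw.items()}  # renormalize
    return probs
```

Under these assumed scores the loop converges to a mixture that favors direct prompts (their alignment payoff is higher) while retaining a substantial share of interrogative prompts for diversity, mirroring the alignment-versus-creativity trade-off the abstract describes.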