Current trends in AR application customization focus on addressing individual user needs, leveraging advances in sensing technologies such as biosensors and eye trackers alongside the evolution of generative AI models. These technologies enable the creation of dynamic, personalized content, including video and images, generated according to user-specific models and rules. To address this evolving landscape, this research proposes a framework for Context-Aware AR Content Generation designed for cultural scenarios. The framework incorporates biosensors to monitor emotional state, eye tracking to assess visual focus, and geospatial data to account for location and time. By integrating these inputs with generative AI, the framework not only generates and adapts culturally relevant AR content but also uses AR to present historical context about content elements on the user's device. The framework aims to minimize the need for extensive user prompts and to improve accessibility for individuals who find textual or voice commands challenging.
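To make the prompt-minimization idea concrete, the context fusion step could be sketched as follows. This is a minimal illustration, not the framework's actual implementation: the field names (`emotion`, `gaze_target`, `location`, `local_time`) and the `build_generation_prompt` function are hypothetical stand-ins for the biosensor, eye-tracking, and geospatial inputs described above.

```python
from dataclasses import dataclass

# Hypothetical context record combining the three sensing modalities
# described in the framework; all names are illustrative assumptions.
@dataclass
class UserContext:
    emotion: str      # emotional state inferred from biosensor signals
    gaze_target: str  # object of visual focus from eye tracking
    location: str     # geospatial place name
    local_time: str   # time of day at the user's location

def build_generation_prompt(ctx: UserContext) -> str:
    """Fuse sensed context into a single prompt for a generative model,
    so the user does not have to type or speak a request."""
    return (
        f"Generate culturally relevant AR content about '{ctx.gaze_target}' "
        f"near {ctx.location} at {ctx.local_time}, including brief "
        f"historical context, with a tone suited to a {ctx.emotion} viewer."
    )

ctx = UserContext(emotion="curious", gaze_target="temple gate",
                  location="Kyoto", local_time="dusk")
print(build_generation_prompt(ctx))
```

The assembled prompt would then be passed to a generative model, with the resulting content rendered as an AR overlay on the user's device.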