Large Language Models (LLMs) have shown nascent Theory of Mind (ToM) capabilities, yet they still struggle with complex high-order belief reasoning in multi-agent scenarios. Existing enhancement methods are often limited to specific psychological states, require costly fine-tuning, or lack structured mechanisms for systematically tracking hierarchical beliefs. This paper introduces Cognitive Trace-guided ToM Reasoning (CToM-R), a purely prompt-based method designed to strengthen LLMs' high-order ToM capabilities. CToM-R guides LLMs through structured "cognitive path tracing" in a multi-stage process: scene parsing and agent modeling, event-driven belief updating, recursive belief inference and hierarchy construction, and final answer generation. By explicitly simulating and externalizing each agent's mental states at every step, CToM-R turns implicit ToM reasoning into an interpretable, step-by-step process that supports arbitrary depths of inference. We evaluate CToM-R extensively on leading ToM benchmarks, including Hi-ToM, ToMChallenges, and OpenToM, using state-of-the-art LLMs in zero-shot and few-shot settings. CToM-R consistently achieves state-of-the-art performance, excelling in particular on tasks that require complex high-order reasoning, and outperforms both existing prompt-only and fine-tuned baselines. Ablation studies confirm the critical role of structured tracing, while human evaluations highlight its superior interpretability. This work offers a cost-effective, generalizable, and highly effective paradigm for advancing LLMs toward more robust and transparent high-order Theory of Mind.
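The four-stage pipeline described above can be sketched as a simple chain of prompts, where each stage's output is appended to a growing "cognitive trace" that conditions the next stage. This is an illustrative sketch only: the stage names, prompt templates, and the `llm` callable are assumptions for exposition, not the paper's exact prompts.

```python
# Illustrative sketch of CToM-R-style staged prompting.
# Stage names and prompt templates are hypothetical, not the paper's actual ones.

STAGES = [
    ("scene_parsing",
     "List the agents, objects, and locations in the story:\n{story}"),
    ("belief_update",
     "For each event in order, state how every agent's beliefs change, "
     "noting who observed the event:\n{story}\n{prior}"),
    ("recursive_inference",
     "Build the nested belief hierarchy up to order {order} "
     "(e.g. 'A believes that B believes ...'):\n{prior}"),
    ("answer",
     "Using the belief hierarchy above, answer the question:\n{question}\n{prior}"),
]

def ctom_r(llm, story, question, order=3):
    """Run the stages sequentially; each stage sees the accumulated trace.

    `llm` is any callable mapping a prompt string to a response string
    (a placeholder for a real model API).
    """
    trace = []
    prior = ""
    for name, template in STAGES:
        prompt = template.format(story=story, question=question,
                                 prior=prior, order=order)
        output = llm(prompt)
        trace.append((name, output))
        prior += f"\n[{name}]\n{output}"  # externalize the cognitive trace
    return trace[-1][1], trace
```

Because the trace is carried forward as plain text, every intermediate belief attribution is inspectable, which is the interpretability property the abstract emphasizes.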