This paper explores the embodied intelligence capabilities of GPT-4o, focusing on its performance in three key areas: embodied reasoning, embodied manipulation, and embodied navigation. To our knowledge, this is the first systematic evaluation of GPT-4o on embodied intelligence tasks. As an advanced multimodal model, GPT-4o demonstrates significant potential on complex tasks by integrating visual and linguistic understanding, with particularly noteworthy image-understanding abilities. In the embodied reasoning section, we evaluate GPT-4o's performance in perception, spatial reasoning, temporal reasoning, planning, and causal reasoning; the findings reveal how the model extracts information from images to make informed inferences. The embodied manipulation section analyzes its application to object understanding, environmental perception, and task planning, showcasing the model's adaptability in dynamic environments. The embodied navigation section examines GPT-4o's ability to process navigation instructions, perform map reasoning, infer trajectories, and predict actions, indicating its effectiveness on navigation tasks. In summary, GPT-4o shows remarkable progress in the field of embodied intelligence. Future work aims to further strengthen its multimodal interaction and its adaptability to complex environments, laying a foundation for higher levels of artificial intelligence. Through continued research and optimization, GPT-4o is poised to play a greater role in practical applications of embodied intelligence.