Given a picture classified by an AI model as a Persian cat, users may ask questions such as, "What are the contributions of the eyes and ears to the classification result?" or "Which features contribute the most?". Existing computational XAI methods have achieved success in explaining AI outputs, but they cannot directly help users answer such questions. In this paper, we propose a new approach that addresses this challenge by visually exploring XAI explanations through intuitive features. Our method computes the contributions of predefined semantic features (e.g., eyes, ears, body) to individual images and encodes them in a quantitative representation. This approach provides an easily interpretable explanation of model classifications and enables convenient visual exploration over multiple images. We have also developed a visualization prototype that allows users to efficiently explore, filter, and compare image groups based on the quantitative representation of the semantic features. We demonstrate the effectiveness of this approach in providing an intuitive and scalable way to interpret XAI results through a set of example scenarios and expert feedback.