Meta-learning, or "learning to learn", enables machines to acquire general priors with minimal supervision and rapidly adapt to new tasks. Unlike traditional AI methods that approach each task from scratch using a fixed learning algorithm, meta-learning refines the learning algorithm itself through experience across various tasks, enhancing transferability and generalization. This is especially valuable when data collection is difficult or costly, allowing for effective learning from task sequences while reducing the dependency on extensive target domain data. Consequently, meta-learning has emerged as a promising field in machine learning. Although existing surveys provide valuable insights into meta-learning, they often present methods and applications in isolation and lack coverage of the latest advancements. Given the rapid growth of the field, a comprehensive survey is both necessary and challenging. Moreover, meta-learning algorithms often remain disconnected, with no unified framework to explain how they facilitate "learning to learn". This survey seeks to bridge that gap by systematizing meta-learning research, offering a thorough overview of strategies to enhance understanding. Additionally, the paper reviews over thirty representative meta-learning methods across models, tasks, and applications, analyzing their characteristics and challenges. To illustrate method performance, we evaluate more than fifteen models on six problems spanning thirteen scenarios, emphasizing the importance of selecting appropriate meta-learning approaches for practical applications.