Federated Learning (FL) is a distributed machine learning paradigm that enables collaborative model training across decentralized devices while preserving data privacy. By addressing critical challenges in privacy, scalability, and data ownership, it offers a promising approach for applications in healthcare, IoT, and finance. However, practical FL deployments face several efficiency bottlenecks, including communication overhead, system and data heterogeneity, and security vulnerabilities. This paper provides a comprehensive survey of state-of-the-art techniques for enhancing the efficiency of FL. To address communication constraints, we explore model compression methods such as pruning, quantization, and tensor decomposition. Strategies for mitigating data and system heterogeneity, including personalized FL and resource-aware training, are discussed alongside advances in privacy-preserving mechanisms such as differential privacy and secure aggregation. We also examine scalability solutions, including hierarchical and decentralized FL, to enable large-scale deployment. The survey concludes by highlighting open challenges and emerging opportunities in FL, offering insights into future research directions for building efficient and robust federated systems.