Diffusion models have emerged as a powerful paradigm in generative modeling, grounded in well-established probabilistic and stochastic principles. These models operate by gradually transforming complex data distributions into simple priors via a forward diffusion process and then learning to reverse this transformation. Despite their success in producing high-quality, diverse outputs, diffusion models face significant computational challenges, most notably the cost of training at scale and the many iterative denoising steps required for sampling. This paper explores the theoretical foundations of diffusion models, detailing their formulation, training objectives, and connection to stochastic differential equations. We survey key techniques for improving their efficiency, including optimized noise scheduling, architectural innovations, accelerated sampling strategies, and hybrid training approaches. We further examine the transformative impact of diffusion models across domains such as image and video synthesis, natural language processing, scientific research, healthcare, and entertainment. While these advances underscore the potential of diffusion models to drive innovation across industries, challenges remain in computational scalability, domain-specific adaptation, and ethical deployment. We conclude by discussing future research directions aimed at addressing these limitations and unlocking the full potential of diffusion models. This work highlights the evolving landscape of generative modeling and its far-reaching applications, establishing diffusion models as a cornerstone of modern artificial intelligence.
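For concreteness, the forward and reverse processes referenced above can be written in the standard discrete-time (DDPM-style) notation; the symbols below (noise schedule $\beta_t$, horizon $T$, model parameters $\theta$) are the conventional choices in the literature rather than notation fixed by this paper:
\[
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right), \qquad t = 1, \dots, T,
\]
with the learned reverse transition
\[
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right).
\]
In the continuous-time view, the forward process is the stochastic differential equation $\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w$, whose time reversal depends on the score $\nabla_x \log p_t(x)$ that the model is trained to approximate.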