As modern firefighting environments grow increasingly complex, traditional equipment is often insufficient to meet the challenges posed by high-risk situations. To enhance firefighters’ situational awareness and decision-making capabilities, this study develops a multi-channel data fusion model that integrates multiple information streams, including image, electrocardiogram (ECG), pulse, and posture data. The model applies temporal and spatial synchronization analysis and combines deep learning algorithms, namely Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), with Kalman filtering and Bayesian estimation to optimize data processing. Furthermore, weighted averaging methods, adaptive weight-adjustment mechanisms, and blockchain technology are employed to ensure transparency and security in the adjustment of model weights. Experimental results demonstrate that the proposed model significantly improves data fusion accuracy and robustness and shortens response time, thereby enhancing both the efficiency and safety of firefighting operations. This research provides effective support for intelligent management in the firefighting domain, strengthening the decision-making capabilities and situational awareness of firefighting commanders.
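To make the fusion step concrete, the sketch below illustrates one of the techniques the abstract names, Kalman filtering, applied to two of the physiological channels mentioned (ECG-derived and pulse-derived heart rate). This is a minimal, hypothetical example, not the authors' implementation: the scalar random-walk state model, the channel noise variances, and all function and variable names are illustrative assumptions.

```python
# Hypothetical sketch: fusing two noisy heart-rate channels with a 1-D Kalman
# filter. The random-walk state model and noise variances are assumptions,
# not parameters taken from the paper.
import numpy as np

def kalman_fuse(ecg_hr, pulse_hr, r_ecg=4.0, r_pulse=9.0, q=0.5):
    """Fuse two measurement streams of the same scalar state (heart rate, bpm).

    r_ecg, r_pulse : assumed measurement-noise variances per channel
    q              : assumed process-noise variance (random-walk model)
    """
    x, p = ecg_hr[0], 1.0           # initial state estimate and its variance
    fused = np.empty(len(ecg_hr))
    for t in range(len(ecg_hr)):
        # Predict: under a random walk the mean carries over, variance grows.
        p = p + q
        # Update with each channel in turn; the Kalman gain weights a
        # measurement by its reliability (small r gives a large gain).
        for z, r in ((ecg_hr[t], r_ecg), (pulse_hr[t], r_pulse)):
            k = p / (p + r)
            x = x + k * (z - x)
            p = (1.0 - k) * p
        fused[t] = x
    return fused

# Usage: two noisy views of the same slowly rising heart rate.
rng = np.random.default_rng(0)
true_hr = np.linspace(80, 120, 200)
ecg = true_hr + rng.normal(0, 2, 200)
pulse = true_hr + rng.normal(0, 3, 200)
print(kalman_fuse(ecg, pulse)[-5:])   # fused estimate tracks the trend
```

Processing the two channels as sequential scalar updates is equivalent to a joint vector update here and keeps the example short; the noisier pulse channel simply receives a smaller gain, which is the same reliability-weighting idea behind the weighted averaging the abstract describes.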