Fully polarimetric range-Doppler (RD) radar images are rich in polarimetric scattering information, but when used directly as input to detection networks they often yield suboptimal feature representations because of inherent coherent speckle noise and inter-channel redundancy. To address this limitation, this paper proposes a data fusion method based on Pauli decomposition that integrates polarimetric features and establishes a mapping from physical scattering attributes to visual semantic features. Specifically, the multi-polarimetric RD data are decomposed via Pauli decomposition into surface, volume, and dihedral scattering components, which are then mapped to a high-contrast three-channel RGB image. Simulation results show that, compared with conventional fully polarimetric imagery, the three-channel RGB image suppresses background noise by 23.2 dB, improves the signal-to-noise ratio (SNR) of corner reflectors by 7.9 dB, and raises the target-to-background contrast by a factor of approximately 3.5. Further experiments with the YOLOv8 model demonstrate that the proposed method effectively mitigates feature masking in environments with a high clutter-to-signal ratio: both ship targets and corner reflectors exhibit consistent improvements of 2–3\% in detection precision (P) and mean average precision at IoU=0.5 (mAP50), confirming the effectiveness of fusing physical polarimetric features with deep learning-based visual representations.
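The Pauli-to-RGB mapping described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes the four complex scattering channels are available as NumPy arrays, uses the standard Pauli basis (odd-bounce, even-bounce, volume), and applies a percentile-based log-magnitude stretch whose parameters are chosen here purely for illustration.

```python
import numpy as np

def pauli_rgb(s_hh, s_hv, s_vh, s_vv):
    """Map fully polarimetric RD data to a Pauli RGB image (illustrative sketch)."""
    # Standard Pauli basis components:
    # odd-bounce (surface), even-bounce (dihedral), volume scattering.
    a = (s_hh + s_vv) / np.sqrt(2)  # surface scattering
    b = (s_hh - s_vv) / np.sqrt(2)  # dihedral scattering
    c = (s_hv + s_vh) / np.sqrt(2)  # volume scattering

    def stretch(x):
        # Log-magnitude with 1st/99th-percentile clipping for contrast;
        # the percentile choice is an assumption, not from the paper.
        mag = 20.0 * np.log10(np.abs(x) + 1e-12)
        lo, hi = np.percentile(mag, [1, 99])
        return np.clip((mag - lo) / (hi - lo + 1e-12), 0.0, 1.0)

    # Conventional Pauli colouring: R = dihedral, G = volume, B = surface.
    return np.dstack([stretch(b), stretch(c), stretch(a)])
```

The resulting `(H, W, 3)` array in `[0, 1]` can be fed to a detector such as YOLOv8 like any ordinary RGB image.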