Camouflage is an adaptive strategy that allows organisms and objects to blend into their surrounding environments, thereby evading detection by predators or adversaries. Current artificial methods, including manual painting, computer-aided techniques, and deep learning approaches, face significant challenges related to manual intervention, scalability, and generalization across diverse, non-calibrated scenes. In this paper, we propose a novel few-shot diffusion-based method for camouflage pattern generation, called Camouflage Diffusion (CamoDiff). Our method consists of two distinct stages: meta-learning and few-shot learning. In the meta-learning stage, our method employs a diffusion-based architecture to generate camouflage patterns from noise, thereby eliminating manual intervention and enabling scalable production. In the few-shot learning stage, our approach integrates a guidance mean absolute error (GMAE) loss, which allows the generated camouflage to adapt to new environments with minimal retraining, regardless of viewpoint. We also introduce a comprehensive camouflage dataset and generation tool, which can serve as benchmarks for future research. Experimental results demonstrate that CamoDiff outperforms existing state-of-the-art methods in camouflage pattern generation across multiple datasets and metrics.
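
For intuition, the following is a minimal sketch of how a GMAE-style objective could drive few-shot adaptation, assuming GMAE is an L1 (mean absolute error) penalty between a generated pattern and an environment guidance image; the single-step denoiser call, loop structure, function names, and hyperparameters are illustrative assumptions, not the paper's actual training procedure.

```python
import torch
import torch.nn.functional as F

def gmae_loss(generated: torch.Tensor, guidance: torch.Tensor) -> torch.Tensor:
    """Guidance mean absolute error: L1 distance between a generated
    camouflage pattern and an environment guidance image (illustrative)."""
    return F.l1_loss(generated, guidance, reduction="mean")

def few_shot_adapt(denoiser, guidance_batch, steps=100, lr=1e-4):
    """Hypothetical few-shot stage: fine-tune a meta-learned denoiser on a
    handful of guidance images taken from the new environment."""
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        noise = torch.randn_like(guidance_batch)   # start from pure noise
        pattern = denoiser(noise)                  # generate a camouflage pattern
        loss = gmae_loss(pattern, guidance_batch)  # pull it toward the environment
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return denoiser
```

In this sketch the meta-learned denoiser provides the generative prior, and only the lightweight GMAE-guided fine-tuning runs when the method is deployed in a new scene.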