This letter introduces an attention-based unsupervised generative adversarial network, named Illu-GAN, for low-light camera image enhancement. Many images captured by camera sensors suffer from inadequate lighting conditions or over-/under-exposure. Previous low-light enhancement methods are mostly based on supervised learning and heavily rely on paired data. Existing unsupervised methods utilize only information in the spatial domain and take no account of frequency information, which leads to results of poor quality. In this letter, we propose a new network, Illu-GAN, which incorporates features from both the frequency and spatial domains through a Wavelet Transform module to better guide the enhancement. Extensive experiments on various benchmark datasets demonstrate, both qualitatively and quantitatively, that Illu-GAN improves on state-of-the-art methods, generating more natural and less noisy enhanced images with better generalisation ability.
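As a rough illustration of the frequency-plus-spatial idea (a minimal sketch, not the authors' Wavelet Transform module), the snippet below shows how a single-level Haar wavelet transform could expose frequency sub-bands that are then fused with a spatial convolution branch; the names `haar_dwt`, `WaveletFusion`, `freq_conv`, and `spatial_conv` are illustrative assumptions.

```python
# Hypothetical sketch of wavelet-based frequency/spatial feature fusion.
import torch
import torch.nn as nn

def haar_dwt(x):
    """Single-level 2-D Haar DWT on an NCHW tensor; returns the
    low-frequency sub-band (LL) and the stacked high-frequency
    sub-bands (LH, HL, HH), each at half spatial resolution."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, torch.cat([lh, hl, hh], dim=1)

class WaveletFusion(nn.Module):
    """Fuses wavelet-domain (frequency) features with spatial features."""
    def __init__(self, channels):
        super().__init__()
        # Frequency branch: mix the 4 sub-bands (4 * channels) back to `channels`.
        self.freq_conv = nn.Conv2d(4 * channels, channels, kernel_size=1)
        # Spatial branch: strided conv to match the half-resolution sub-bands.
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                      stride=2, padding=1)

    def forward(self, x):
        ll, high = haar_dwt(x)                              # frequency sub-bands
        freq = self.freq_conv(torch.cat([ll, high], dim=1))  # frequency features
        spatial = self.spatial_conv(x)                        # spatial features
        return freq + spatial                                 # simple additive fusion

# Usage example (assumed 64-channel feature map):
# feats = WaveletFusion(64)(torch.randn(1, 64, 128, 128))   # -> (1, 64, 64, 64)
```

The additive fusion here is only one possible design choice; concatenation followed by a convolution, or an attention-weighted combination, would serve the same purpose of letting frequency information guide the spatial features.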