Shunjie Zhang — Public Documents (2)
Gaussian Low-pass Channel Attention Convolution Network for RF Fingerprinting
Shunjie Zhang
and 4 more
September 08, 2022
Radio frequency (RF) fingerprinting is a challenging and important technique for identifying individual wireless devices. Recent work has applied deep learning-based classifiers to ADS-B signals without relying on aircraft ID information. However, classical deep learning methods struggle to achieve high accuracy when recognizing RF signals. This letter proposes a Gaussian Low-pass Channel Attention Convolution Network (GLCA-Net), in which a Gaussian Low-pass Channel Attention module (GLCAM) is designed to extract low-frequency fingerprint features. In particular, within GLCAM we design a Frequency-Convolutional Global Average Pooling (F-ConvGAP) module that helps the channel attention mechanism learn channel weights in the frequency domain. Experimental results on a dataset of large-scale real-world ADS-B signals show that our method achieves an accuracy of 92.08%, which is 6.21% higher than a conventional convolutional neural network.
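The abstract does not give the internals of GLCAM or F-ConvGAP, but the idea it describes — low-pass filtering in the frequency domain followed by frequency-domain pooling that produces per-channel attention weights — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (the Gaussian mask width `sigma`, the use of magnitude averaging as the pooling step, and the sigmoid gating are all assumptions, not the paper's exact design):

```python
import numpy as np

def gaussian_lowpass_channel_attention(x, sigma=0.1):
    """Hedged sketch of a GLCAM-style block (details are assumptions).

    Steps:
      1. Apply a Gaussian low-pass mask to each channel's spectrum,
         emphasising the low-frequency fingerprint content.
      2. Average the retained spectral magnitudes per channel -- a
         stand-in for the paper's F-ConvGAP pooling step.
      3. Squash the pooled values through a sigmoid to get channel
         weights, as in standard channel attention.

    x: array of shape (channels, length), one signal per channel.
    Returns re-weighted features of the same shape.
    """
    c, n = x.shape
    freqs = np.fft.fftfreq(n)                      # normalized frequency axis
    mask = np.exp(-(freqs ** 2) / (2 * sigma ** 2))  # Gaussian low-pass mask
    spectrum = np.fft.fft(x, axis=1)
    filtered = spectrum * mask                     # suppress high frequencies
    pooled = np.abs(filtered).mean(axis=1)         # one scalar per channel
    weights = 1.0 / (1.0 + np.exp(-pooled))        # sigmoid channel weights
    return x * weights[:, None]
```

Because the pooled magnitudes are non-negative, the sigmoid weights fall in [0.5, 1), so the block attenuates channels whose low-frequency content is weak rather than amplifying any channel.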
An ADS-B Signal Poisoning Method based on U-Net
Tianhao Wu
and 3 more
September 13, 2022
Automatic dependent surveillance-broadcast (ADS-B) is widely used due to its low cost and high precision, and deep learning methods have achieved high performance on ADS-B signal classification. However, recent studies have shown that deep learning networks are highly sensitive and vulnerable to small amounts of noise. We propose an ADS-B signal poisoning method based on U-Net, which generates poisoned signals. We designate one ADS-B signal classification network as the attacked network and another as the protected network. When poisoned signals are fed into these two well-performing classification networks, they are recognized incorrectly by the attacked network while being classified correctly by the protected network. We further propose an Attack-Protect-Similar loss to achieve a “triple win”: poor performance for the attacked network, good performance for the protected network, and poisoned signals that remain similar to the unpoisoned ones. Experimental results show that the attacked network classifies poisoned signals with only 1.55% accuracy, while the protected network's accuracy is maintained at 99.38%.
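The "triple win" objective described above combines three competing terms. The exact form of the Attack-Protect-Similar loss is not given in the abstract, so the following NumPy sketch is an assumption: it pairs a cross-entropy-style attack term (rewarding low attacked-network confidence in the true class), a standard cross-entropy protect term, and a mean-squared similarity term, with hypothetical weights `alpha`, `beta`, `gamma`:

```python
import numpy as np

def attack_protect_similar_loss(p_attacked, p_protected, y_true,
                                x_poisoned, x_clean,
                                alpha=1.0, beta=1.0, gamma=1.0):
    """Hedged sketch of a triple-objective loss in the spirit of the
    letter's Attack-Protect-Similar loss (exact form is an assumption).

    p_attacked, p_protected: predicted class probabilities of the two
        networks for one sample (1-D arrays summing to 1).
    y_true: index of the true class.
    x_poisoned, x_clean: the poisoned signal and its clean original.
    """
    eps = 1e-12
    # Attack term: low when the attacked network misclassifies the sample
    l_attack = -np.log(1.0 - p_attacked[y_true] + eps)
    # Protect term: ordinary cross-entropy for the protected network
    l_protect = -np.log(p_protected[y_true] + eps)
    # Similarity term: keep the poisoned signal close to the clean one
    l_similar = np.mean((x_poisoned - x_clean) ** 2)
    return alpha * l_attack + beta * l_protect + gamma * l_similar
```

Minimizing this total drives the generator toward signals that fool the attacked network, stay recognizable to the protected network, and deviate little from the originals; the weights trade those goals off against each other.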