Where can distinguishing features be extracted in an image for visibility estimation?
Han Wang, Nantong University (Corresponding Author: hanwang@ntu.edu.cn)
Jia Li Liu, Nantong University
Ke Cheng Shen, Nantong University
Quan Shi, Nantong University

Abstract

Standard convolution struggles to provide effective fog features for visibility estimation tasks because of its fixed grid kernel structure. In this paper, a multiscale deformable convolution model (MDCM) is proposed to extract features that effectively sample discriminative characteristics of the atmospheric region in a foggy image. Moreover, to enhance performance, we use RGB-IR image pairs as observations and design a multimodal visibility range classification network based on the MDCM. Experimental results show that both the robustness and accuracy of visibility estimation improve by more than 30% compared with standard convolutional neural networks (CNNs).
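
To illustrate the general idea behind a multiscale deformable convolution block and RGB-IR fusion, the following is a minimal sketch in PyTorch using torchvision.ops.DeformConv2d. It is not the authors' MDCM implementation: the class name, dilation rates, channel counts, and concatenation-based fusion are illustrative assumptions, shown only to clarify how learned sampling offsets let the kernel adapt to atmospheric regions at several scales.

```python
# Minimal sketch (not the paper's code) of a multiscale deformable convolution
# block, assuming PyTorch and torchvision.ops.DeformConv2d are available.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class MultiscaleDeformableBlock(nn.Module):
    """Parallel deformable-convolution branches at several dilation rates,
    concatenated into one multiscale feature map (hypothetical layout)."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.offset_convs = nn.ModuleList()
        self.deform_convs = nn.ModuleList()
        for d in dilations:
            # Each branch predicts its own sampling offsets:
            # 2 values (dx, dy) per position of a 3x3 kernel -> 18 channels.
            self.offset_convs.append(
                nn.Conv2d(in_ch, 18, kernel_size=3, padding=d, dilation=d)
            )
            self.deform_convs.append(
                DeformConv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = []
        for offset_conv, deform_conv in zip(self.offset_convs, self.deform_convs):
            offset = offset_conv(x)              # learned sampling locations
            feats.append(deform_conv(x, offset)) # deformable sampling of x
        return self.fuse(torch.cat(feats, dim=1))


# Assumed usage: an RGB-IR image pair processed by two such blocks and fused by
# channel concatenation before a visibility-range classifier head.
rgb = torch.randn(1, 3, 224, 224)
ir = torch.randn(1, 1, 224, 224)
rgb_feat = MultiscaleDeformableBlock(3, 32)(rgb)
ir_feat = MultiscaleDeformableBlock(1, 32)(ir)
fused = torch.cat([rgb_feat, ir_feat], dim=1)    # shape (1, 64, 224, 224)
```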