Abstract:
Although most quantitative indices associated with deep-learning-based super-resolution reconstruction of single remote sensing images have improved significantly, the perceptual improvement visible to the human eye is often not obvious. Previous methods construct low-resolution images by downsampling, which discards information from the original images. To avoid this problem, we acquire high- and low-resolution remote sensing image pairs at different scales and use them as the training dataset, thereby avoiding the loss of original image information caused by downsampling. We build a generative adversarial network (GAN) super-resolution model on deep residual blocks so that the model can better learn prior information, improving both the quality of the generated images and the efficiency of the algorithm. We also add spatial position information between image features to the contextual loss function, which reduces image artifacts caused by feature-matching errors. In addition, we add a relativistic discriminator that evaluates the relative authenticity of generated images, further optimizing the super-resolution results. Experimental results on the NWPU-RESISC45 dataset verify that the proposed method substantially improves the PSNR (peak signal-to-noise ratio), SSIM (structural similarity), and AG (average gradient) indicators, and visual inspection shows that the network produces super-resolution results of good perceptual quality.
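To make the loss modification concrete, the sketch below shows one plausible way to add spatial position information to a contextual loss, in the spirit of contextual bilateral (CoBi) style losses. This is a minimal illustration, not the authors' implementation: the function name, the weight `w_spatial`, the bandwidth `h`, and the cosine-distance formulation are all assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a contextual loss whose
# feature-matching distance is penalized by the spatial distance between
# feature positions, so that matches far apart on the grid are discouraged.
import torch
import torch.nn.functional as F

def contextual_loss_with_position(feat_x, feat_y, w_spatial=0.1, h=0.5):
    """feat_x, feat_y: (N, C, H, W) feature maps from a pretrained network."""
    n, c, hh, ww = feat_x.shape
    # Flatten to (N, H*W, C) and L2-normalize so dot products give cosine sim.
    fx = F.normalize(feat_x.flatten(2).transpose(1, 2), dim=-1)
    fy = F.normalize(feat_y.flatten(2).transpose(1, 2), dim=-1)
    d_feat = 1.0 - torch.bmm(fx, fy.transpose(1, 2))        # (N, HW, HW)

    # Pairwise distances between normalized (x, y) grid coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, hh, device=feat_x.device),
        torch.linspace(0, 1, ww, device=feat_x.device), indexing="ij")
    pos = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (HW, 2)
    d_pos = torch.cdist(pos, pos).unsqueeze(0)               # (1, HW, HW)

    # Combined distance: feature dissimilarity plus weighted spatial distance.
    d = d_feat + w_spatial * d_pos
    # Normalize per row and convert distances to contextual affinities.
    d_norm = d / (d.min(dim=-1, keepdim=True).values + 1e-5)
    w = torch.softmax((1.0 - d_norm) / h, dim=-1)
    # For each target feature, keep its best-matching source feature.
    cx = w.max(dim=1).values.mean(dim=-1)                    # (N,)
    return -torch.log(cx + 1e-5).mean()
```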
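The "relative authenticity" evaluation reads like a relativistic average discriminator of the kind popularized by ESRGAN; the sketch below shows that standard formulation under that assumption. The function names and the logit-based BCE form are illustrative, not taken from the paper.

```python
# Minimal sketch (assumed ESRGAN-style relativistic average discriminator):
# the discriminator predicts whether a real image is *relatively* more
# realistic than the average fake, instead of scoring images in isolation.
import torch
import torch.nn.functional as F

def relativistic_d_loss(d_real, d_fake):
    """d_real, d_fake: raw discriminator logits for real/generated batches."""
    real_rel = d_real - d_fake.mean()   # how much more real than the avg fake
    fake_rel = d_fake - d_real.mean()   # how much more fake than the avg real
    loss_real = F.binary_cross_entropy_with_logits(
        real_rel, torch.ones_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_rel, torch.zeros_like(fake_rel))
    return (loss_real + loss_fake) / 2

def relativistic_g_loss(d_real, d_fake):
    # The generator sees the labels swapped: fakes should look relatively real.
    real_rel = d_real - d_fake.mean()
    fake_rel = d_fake - d_real.mean()
    loss_real = F.binary_cross_entropy_with_logits(
        real_rel, torch.zeros_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_rel, torch.ones_like(fake_rel))
    return (loss_real + loss_fake) / 2
```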