Single Image Super-resolution Based on Reversible Network

CHEN Guojun, YANG Jieming, GE Hongwei

Citation: CHEN Guojun, YANG Jieming, GE Hongwei. Single Image Super-resolution Based on Reversible Network[J]. INFORMATION AND CONTROL, 2021, 50(5): 602-608, 615. DOI: 10.13976/j.cnki.xk.2021.0463. CSTR: 32166.14.xk.2021.0463


Details
    About the authors:

    CHEN Guojun (1982-), male, Ph.D. candidate, associate professor. His research interests include big data, deep learning, artificial intelligence, and image processing.

    YANG Jieming (1995-), male, Ph.D. candidate. His research interests include deep learning, image processing, and multi-object tracking.

    GE Hongwei (1967-), male, Ph.D., professor, doctoral supervisor. His research interests include artificial intelligence and pattern recognition, image processing and analysis, information management and data mining, and embedded systems.

    Corresponding author:

    YANG Jieming, 572785530@qq.com

  • CLC number: TP391

Single Image Super-resolution Based on Reversible Network

  • Abstract:

    A single image super-resolution algorithm based on a reversible network is proposed in this paper. Recently proposed single image super-resolution models based on deep neural networks (DNNs) define the objective function, and update the model parameters, using the difference between the generated super-resolution image and the corresponding high-resolution image. These models exploit only the dependence of the high-resolution image on the low-resolution image in the forward super-resolution process, and do not establish the mutual dependence between the two. The proposed super-resolution reversible network (SRRevnet) uses a neural network with a reversible structure to establish this mutual dependence: it projects low-resolution and high-resolution images into each other's image space, and then uses the error feedback from the two projections to optimize the mutual mapping between the low-resolution and high-resolution image spaces. Because the model is reversible, the super-resolution process can be optimized from both the forward and the backward directions. To our knowledge, this paper is the first to use a neural network with a reversible structure for single image super-resolution. Experiments show that the proposed model achieves excellent results on super-resolution benchmark datasets.
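    To make the reversibility idea concrete, the sketch below shows a minimal affine coupling layer of the kind used in flow-based models such as NICE, real-NVP, and Glow ([21]-[23]), whose inverse exists in closed form; error feedback can then be applied to both mapping directions. This is only an illustration of the general mechanism under assumed layer sizes (the class name AffineCoupling and the small convolutional subnet are placeholders), not the exact SRRevnet block.

```python
# Sketch of an affine coupling layer (NICE / real-NVP style) with an exact
# closed-form inverse; channel counts and the subnet are illustrative only.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Predict per-pixel scale (log_s) and shift (t) for the second half
        # of the channels from the first half, which passes through unchanged.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)      # keep scales positive and stable
        return torch.cat([x1, x2 * s + t], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([y1, (y2 - t) / s], dim=1)  # exact inverse of forward

# Round-trip check: inverse(forward(x)) recovers x up to floating-point error.
block = AffineCoupling(channels=8)
x = torch.randn(2, 8, 16, 16)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```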

  • Figure 1. Flowchart of the proposed method

    Figure 2. Preprocessing based on bicubic interpolation

    Figure 3. The squeeze and unsqueeze operations on the data

    Figure 4. Model structure and data flow

    Figure 5. The flow block in Glow and the proposed reversible block

    Figure 6. Forward propagation and inverse propagation of the affine coupling layer

    Figure 7. Visual comparison on the Set5 dataset

    Figure 8. Visual comparison on the Set14 dataset

    Figure 9. Visual comparison on the BSD100 dataset
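    The squeeze/unsqueeze step of Figure 3 corresponds to the standard space-to-depth rearrangement used in flow-based models: 2x2 spatial blocks are folded into channels and unfolded back without losing information, so the step is itself invertible. A minimal sketch, assuming PyTorch's pixel_unshuffle/pixel_shuffle as the concrete implementation (the paper's exact element ordering may differ):

```python
# Lossless squeeze (space-to-depth) and unsqueeze (depth-to-space);
# this only illustrates the principle, not the paper's exact layout.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)                  # e.g. a 3-channel 8x8 patch
squeezed = F.pixel_unshuffle(x, 2)           # squeeze:   (1, 3, 8, 8) -> (1, 12, 4, 4)
restored = F.pixel_shuffle(squeezed, 2)      # unsqueeze: (1, 12, 4, 4) -> (1, 3, 8, 8)
assert torch.equal(restored, x)              # pure rearrangement, hence invertible
```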

    Table 1. Quantitative comparison on Set5 (PSNR in dB)

    Set5   nearest   bicubic   Glasner   ScSR      SRCNN     Kim       SelfExSR   VPGF      SRRevnet (ours)
    PSNR   26.4897   28.5799   28.9269   29.1958   30.1230   30.1238   30.3740    30.5100   30.7918
    SSIM   0.7648    0.8218    0.8333    0.8393    0.8626    0.8661    0.8732     0.8632    0.8832

    Table 2. Quantitative comparison on Set14 (PSNR in dB)

    Set14  nearest   bicubic   Glasner   ScSR      SRCNN     Kim       SelfExSR   VPGF      SRRevnet (ours)
    PSNR   24.4933   25.7895   26.1994   25.7637   26.3549   25.2391   26.7684    27.23     28.13
    SSIM   0.6797    0.7195    0.7349    0.7405    0.7567    0.7610    0.77046    0.7486    0.7895

    Table 3. Quantitative comparison on BSD100 (PSNR in dB)

    BSD100 nearest   bicubic   Glasner   ScSR      SRCNN     Kim       SelfExSR   VPGF      SRRevnet (ours)
    PSNR   25.0926   25.9918   26.1919   26.6215   26.7065   26.7257   26.8781    27.01     27.6902
    SSIM   0.6534    0.6837    0.6922    0.7160    0.7196    0.7214    0.7298     0.7105    0.7379
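    For reference, the PSNR values in Tables 1-3 follow the standard definition PSNR = 10·log10(MAX^2 / MSE), with MAX the peak pixel value, and SSIM is computed as in [13]. A minimal PSNR sketch (assuming 8-bit images; whether evaluation uses the full RGB image or the luminance channel follows the compared papers and is not shown here):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (in dB) between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```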
  • [1]

    Irani M, Peleg S. Improving resolution by image registration[J]. CVGIP: Graphical Models and Image Processing, 1991, 53(3): 231-239. doi: 10.1016/1049-9652(91)90045-L

    [2]

Stark H, Oskoui P. High-resolution image recovery from image plane arrays, using convex projections[J]. Journal of the Optical Society of America A: Optics and Image Science, 1989, 6(11): 1715-1726.

    [3]

    Chang H, Yeung D Y, Xiong Y. Super-resolution through neighbor embedding[C]//IEEE Computer Society Conference on Computer Vision & Pattern Recognition. Piscataway, USA: IEEE, 2004: 275-282.

    [4]

    Gao X B, Zhang K B, Tao D C, et al. Image super-resolution with sparse neighbor embedding[J]. IEEE Transactions on Image Processing, 2012, 21(7): 3194-3205. doi: 10.1109/TIP.2012.2190080

    [5]

    Freeman W T, Pasztor E C, Carmichael O T. Learning low-level vision[J]. International Journal of Computer Vision, 2000, 40(1): 25-47. doi: 10.1023/A:1026501619075

    [6]

Polatkan G, Zhou M, Carin L, et al. A Bayesian nonparametric approach to image super-resolution[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 37(2): 346-358.

    [7]

    Timofte R, De Smet V, Van Gool L. A+: Adjusted anchored neighborhood regression for fast super-resolution[C]//Asian Conference on Computer Vision. Berlin, Germany: Springer, 2014: 111-126.

    [8]

    Hu Y, Wang N, Tao D, et al. SERF: A simple, effective, robust, and fast image super-resolver from cascaded linear regression[J]. IEEE Transactions on Image Processing, 2016, 25(9): 4091-4102. doi: 10.1109/TIP.2016.2580942

    [9]

Wang H J, Gao X B, Zhang K B, et al. Single-image super-resolution using active-sampling Gaussian process regression[J]. IEEE Transactions on Image Processing, 2015, 25(2): 935-948.

    [10]

    Yang J C, Wright J, Huang T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861-2873. doi: 10.1109/TIP.2010.2050625

    [11]

    He L, Qi H, Zaretzki R. Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2013: 345-352.

    [12]

    Schulter S, Leistner C, Bischof H. Fast and accurate image upscaling with super-resolution forests[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 3791-3799.

    [13]

Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. doi: 10.1109/TIP.2003.819861

    [14]

Dong C, Chen C L, He K, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.

    [15]

Dong C, Chen C L, Tang X. Accelerating the super-resolution convolutional neural network[C]//European Conference on Computer Vision. Berlin, Germany: Springer, 2016: 391-407.

    [16]

    Shi W, Caballero J, Huszar F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]//IEEE Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2016: 1874-1883.

    [17]

    Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]//IEEE Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2016: 1646-1654.

    [18]

    Kim J, Lee J K, Lee K M. Deeply-recursive convolutional network for image super-resolution[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 1637-1645.

    [19]

    Ledig C, Wang Z, Shi W, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//IEEE Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2017: 105-114.

    [20]

    Maclaurin D, Duvenaud D, Adams R. Gradient-based hyperparameter optimization through reversible learning[C]//International Conference on Machine Learning. Berlin, Germany: Springer, 2015: 2113-2122.

    [21]

Dinh L, Krueger D, Bengio Y. NICE: Non-linear independent components estimation[EB/OL]. (2015-04-10)[2019-12-15]. https://arxiv.org/abs/1410.8516.

    [22]

Dinh L, Sohl-Dickstein J, Bengio S. Density estimation using real NVP[EB/OL]. (2017-02-27)[2019-12-20]. https://arxiv.org/abs/1605.08803.

    [23]

    Kingma D P, Dhariwal P. Glow: Generative flow with invertible 1×1 convolutions[C]//Advances in Neural Information Processing Systems. Red Hook, USA: Curran Associates, Inc., 2018: 10215-10224.

    [24]

Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[EB/OL]. (2015-03-02)[2019-11-29]. https://arxiv.org/abs/1502.03167.

    [25]

Grover A, Dhar M, Ermon S. Flow-GAN: Combining maximum likelihood and adversarial learning in generative models[EB/OL]. (2018-01-03)[2019-06-20]. https://arxiv.org/abs/1705.08868.

    [26]

    Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//International Conference on Neural Information Processing Systems. Berlin, Germany: Springer, 2014: 2672-2680.

    [27]

    Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding[C]//British Machine Vision Conference. London, UK: Cambridge, 2012: 135.1-135.10.

    [28]

Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations[C]//International Conference on Curves and Surfaces. Berlin, Germany: Springer, 2010: 711-730.

    [29]

    Martin D, Fowlkes C, Tal D, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics[C]//IEEE International Conference on Computer Vision. Piscataway, USA: IEEE, 2001: 416-423.

    [30]

    Kim K I, Kwon Y. Single-image super-resolution using sparse regression and natural image prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(6): 1127-1133. doi: 10.1109/TPAMI.2010.25

    [31]

    Glasner D, Bagon S, Irani M. Super-resolution from a single image[C]//IEEE 12th International Conference on Computer Vision. Piscataway, USA: IEEE, 2009: 349-356.

    [32]

    Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 5197-5206.

    [33]

    Wang Z, Chen B, Zhang H, et al. Variational probabilistic generative framework for single image super-resolution[J]. Signal Processing, 2019, 156: 92-105. doi: 10.1016/j.sigpro.2018.10.004

Publication history
  • Received: 2020-10-08
  • Accepted: 2021-01-14
  • Published online: 2021-10-19
  • Issue date: 2021-10-19
