Medical Image Fusion Algorithm Adopting Generative Adversarial Network to Explore Latent Space

Abstract: To address the problem that existing medical image fusion methods lose latent feature information of the source images, which degrades the fusion result, an image fusion algorithm that adopts a generative adversarial network (GAN) to explore the latent space (GAN-ELS) is proposed to improve the quality of fused computed tomography (CT) and T2-weighted magnetic resonance imaging (MR-T2) images. First, the algorithm fully realizes unsupervised learning of the feature distributions of CT and MR-T2 images through the feature-disentanglement learning and multi-resolution hierarchical style control of an adversarial network improved from StyleGAN during training. Then, on the basis of the trained generator, the latent feature space in which the target fused image lies is explored according to the registered source images and the corresponding fused images produced by current mainstream fusion methods. Finally, high-quality fused images with rich semantic information are obtained. Experiments on the Whole Brain Atlas dataset of Harvard Medical School show that, compared with five well-performing mainstream fusion methods, the images fused by GAN-ELS score higher on structural similarity (SSIM), normalized mutual information (QMI), peak signal-to-noise ratio (PSNR), normalized mean squared error (NMSE), and other indicators, achieving better fusion quality.
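The latent-space exploration described above amounts to a GAN-inversion-style search: with the trained generator frozen, a latent code is optimized so that the generated image stays consistent with the registered CT and MR-T2 sources and with an initial fusion produced by an existing method. The sketch below illustrates this idea only; the `generator` interface, the simple L1 terms, the loss weights, and all parameter names are assumptions rather than the paper's actual objective.

```python
# Minimal sketch of latent-space exploration with a frozen, pre-trained
# StyleGAN-style generator (hypothetical interface: generator(w) -> image).
import torch
import torch.nn.functional as F

def explore_latent_space(generator, ct, mr_t2, init_fusion,
                         steps=500, lr=0.05, w_dim=512,
                         lambda_ct=1.0, lambda_mr=1.0, lambda_init=0.5):
    """Optimize a latent code so the generated image stays close to both
    registered source images and to an initial fusion from a baseline method."""
    device = ct.device
    w = torch.randn(1, w_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        fused = generator(w)  # candidate fused image G(w)
        loss = (lambda_ct * F.l1_loss(fused, ct)                # keep CT structure
                + lambda_mr * F.l1_loss(fused, mr_t2)           # keep MR-T2 soft-tissue detail
                + lambda_init * F.l1_loss(fused, init_fusion))  # stay near the baseline fusion
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return generator(w)
```

For the quantitative comparison, the metrics named in the abstract can be approximated with scikit-image's reference implementations plus one common NMSE definition; `evaluate_fusion` is a hypothetical helper, and the paper's exact metric formulations may differ (fusion results are typically scored against each registered source image).

```python
# Hedged evaluation helper; the metric definitions here are stand-ins.
import numpy as np
from skimage.metrics import (structural_similarity, peak_signal_noise_ratio,
                             normalized_mutual_information)

def evaluate_fusion(fused, reference):
    """Score a fused image against one reference image (e.g., a registered
    source); both are 2-D float arrays scaled to [0, 1]."""
    nmse = np.sum((fused - reference) ** 2) / np.sum(reference ** 2)
    return {
        "SSIM": structural_similarity(reference, fused, data_range=1.0),
        "QMI":  normalized_mutual_information(reference, fused),
        "PSNR": peak_signal_noise_ratio(reference, fused, data_range=1.0),
        "NMSE": float(nmse),
    }
```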

     
