To address the poor fusion quality caused by the loss of latent features of source images in existing medical image fusion methods, a fusion algorithm based on a generative adversarial network that explores the latent space (GAN-ELS) is proposed to improve the quality of fused computed tomography (CT) and T2-weighted magnetic resonance (MR-T2) images. First, during training, an improved GAN based on StyleGAN fully learns the feature distributions of CT and MR-T2 images in an unsupervised manner through feature-disentanglement learning and multi-resolution hierarchical style control. Then, with the trained generator fixed, the latent feature space of the target fusion image is explored, guided by the registered source images and the corresponding fused image produced by a current mainstream fusion method. Finally, high-quality fused images with rich semantic information are obtained. Experimental results on the Whole Brain Atlas dataset of Harvard Medical School show that, compared with five well-performing mainstream fusion methods, the images fused by GAN-ELS achieve better scores on structural similarity (SSIM), qualitative mutual information (QMI), peak signal-to-noise ratio (PSNR), normalized mean squared error (NMSE), and other indicators, demonstrating better fusion quality.
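The latent-space exploration step described above can be illustrated with a minimal sketch: given a frozen, trained generator and a guide image (e.g. a fusion produced by an existing method), search for the latent code whose generated output best matches the guide. The sketch below is purely illustrative and assumes a toy linear generator standing in for the trained StyleGAN-based generator; the function and variable names (`generate`, `guide`, `w`) are hypothetical, not from the paper.

```python
import numpy as np

# Toy latent-space exploration: gradient descent on the latent code w of a
# frozen generator so that generate(w) approximates a guide fusion image.
rng = np.random.default_rng(0)
latent_dim, img_dim = 8, 16
G = rng.standard_normal((img_dim, latent_dim))   # stand-in generator weights (frozen)

def generate(w):
    # Hypothetical generator: maps a latent code to a flattened "image".
    return G @ w

# Pretend the guide is a fused image produced by an existing fusion method.
guide = generate(rng.standard_normal(latent_dim))

w = np.zeros(latent_dim)                          # initial latent code
lr = 0.01
for _ in range(1000):
    residual = generate(w) - guide                # reconstruction error
    grad = G.T @ residual                         # gradient of 0.5 * ||G w - guide||^2
    w -= lr * grad                                # gradient-descent update on the latent code

final_error = float(np.linalg.norm(generate(w) - guide))
print(final_error)
```

In the actual method the generator is a deep network and the objective would also include terms tying the result to the registered source images, but the structure of the search (optimize a latent code against a fixed generator) is the same.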