Editor's Introduction
In clinical practice, low-dose X-ray computed tomography (LDCT) is desirable because it reduces the radiation dose delivered to the patient. However, the unavoidable and strong quantum noise often degrades the quality of LDCT images. Deep learning (DL)-based LDCT denoising techniques, adopted from the computer vision field, have been widely applied. Yet despite the strong denoising power of DL models, researchers have found that the denoised images suffer from reduced resolution, which lowers their clinical value. Researchers from the Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, published an article titled "Deep High-Resolution Network for Low-Dose X-Ray CT Denoising" in the Journal of Artificial Intelligence for Medical Sciences (eISSN 2666-1470), in which they developed a more effective denoiser by introducing the high-resolution network (HRNet).
Key Points
X-ray computed tomography (CT) is widely used in the clinic to image patients' internal organs for diagnosis. However, the radiation dose involved in an X-ray CT scan poses a potential health concern, as it may induce genetic damage, cancer, and other diseases. It is therefore clinically paramount to follow the as-low-as-reasonably-achievable (ALARA) principle when setting the radiation dose level of a CT scan. One of the main ways to lower the dose is to reduce the exposure at each projection angle. However, lower exposure inevitably introduces strong quantum noise into the CT images, reducing their clinical value. Denoising algorithms are thus essential for improving the quality of low-dose CT (LDCT) images. Research in this area has developed considerably and can be roughly grouped into three categories: projection-domain denoising, image-domain denoising, and regularized iterative reconstruction. In this study, we focus on image-domain denoising, since we seek to improve the quality of the LDCT images directly.
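The link between exposure and quantum noise described above can be illustrated with a small simulation. This is my own sketch, not code from the paper: low-dose scans are commonly emulated by injecting Poisson noise into the projection data, where the incident photon count `I0` per detector bin stands in for the exposure level.

```python
import numpy as np

def add_quantum_noise(projections, I0, seed=None):
    """Add Poisson (quantum) noise to CT line integrals.

    projections: array of line integrals p = ln(I0 / I).
    I0: incident photon count per bin; smaller I0 means lower dose
        and therefore stronger quantum noise.
    """
    rng = np.random.default_rng(seed)
    # Expected transmitted photon counts under the Beer-Lambert law.
    counts = I0 * np.exp(-np.asarray(projections, dtype=float))
    # Photon detection is a Poisson process.
    noisy_counts = rng.poisson(counts).astype(float)
    # Avoid log(0) for bins that detected no photons.
    noisy_counts = np.clip(noisy_counts, 1.0, None)
    # Convert back to noisy line integrals.
    return np.log(I0 / noisy_counts)

# Lowering I0 (i.e., the exposure per projection angle) yields noisier data.
p = np.full((64, 64), 2.0)                      # uniform line integrals
low_dose = add_quantum_noise(p, I0=1e3, seed=0)
high_dose = add_quantum_noise(p, I0=1e5, seed=0)
print(low_dose.std() > high_dose.std())          # True: less dose, more noise
```

The relative Poisson fluctuation scales as one over the square root of the photon count, which is why halving the exposure more than visibly degrades the image.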
The goal of our LDCT denoising task is to suppress noise effectively while preserving resolution as much as possible, since noise and resolution are two key metrics of CT image quality. When designing a CNN architecture for LDCT denoising, it is therefore important to extract high-quality low-level and high-level features and to fuse them effectively. A more advanced network architecture can extract and fuse low-level and high-level features more effectively and thereby improve the quality of the denoised CT images. We developed a more effective denoiser by introducing the high-resolution network (HRNet). HRNet consists of multiple sub-network branches that extract multi-scale features, which are then fused together, substantially improving the quality of the resulting features and hence the denoising performance.
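The multi-scale fusion idea can be sketched without any deep learning framework. The following is a deliberately simplified, framework-free illustration of HRNet-style sum-based fusion between two scales (the real network uses learned strided convolutions and upsampling over four branches); none of the helper names come from the authors' code.

```python
import numpy as np

def downsample2x(x):
    # 2x2 average pooling, standing in for a learned strided convolution.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    # Nearest-neighbor upsampling, standing in for learned upsampling.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse(high_res, low_res):
    """Sum-based exchange between a fine branch and a coarse branch:
    each branch receives the other's information at its own scale."""
    new_high = high_res + upsample2x(low_res)    # context into the fine branch
    new_low = low_res + downsample2x(high_res)   # detail into the coarse branch
    return new_high, new_low

high = np.arange(16.0).reshape(4, 4)   # fine-scale feature map
low = np.ones((2, 2))                  # coarse-scale feature map
h2, l2 = fuse(high, low)
print(h2.shape, l2.shape)              # (4, 4) (2, 2): each scale is preserved
```

The key property, visible even in this toy version, is that the high-resolution representation is maintained throughout rather than being recovered from a bottleneck, which is what HRNet's design aims for.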
Figure 1. HRNet architecture. Four branches extract features at different scales. During forward propagation, features at different scales are gradually fused together. To enable the fusion indicated by the dashed connections, the preceding features are first downsampled or upsampled according to their relative sizes so that sum-based feature fusion is valid. Finally, if needed, the features from all scales are upsampled to the original scale and concatenated to obtain the final features. The last layer is a predictor that outputs the denoised image. Every convolutional layer consists of three operators: convolution, instance normalization, and the rectified linear unit. The final predictor consists of two operators: convolution and the rectified linear unit.
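Two of the three operators in each convolutional layer of Figure 1, instance normalization and the rectified linear unit, are simple enough to write out directly. This is an illustrative NumPy version for intuition, not the authors' implementation (which would use a deep learning framework's built-in layers).

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of a single image to zero mean, unit variance.
    x: (C, H, W) feature map for one instance. Unlike batch normalization,
    the statistics are computed per image, not across a batch."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    # Rectified linear unit: clip negatives to zero.
    return np.maximum(x, 0.0)

feat = np.random.default_rng(0).normal(5.0, 2.0, size=(3, 8, 8))
out = relu(instance_norm(feat))
print(out.min())  # 0.0: all negatives are clipped by the ReLU
```

Instance normalization is a natural choice here because CT intensity statistics vary from slice to slice, so per-image normalization avoids coupling unrelated slices in a batch.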
Figure 2. Quantitative results for each slice in the validation dataset. The x-axis is the slice index; the y-axes are RMSE (left) and SSIM (right). The solid red, dashed green, and dashed blue curves correspond to LDCT, HRNet, and U-Net, respectively. The slices of the two patients in the validation dataset are separated by the dashed black vertical line.
Figure 3. Noise analysis. (a) LDCT; (b) noise removed by the HRNet-based denoiser; (c) Fourier domain of the removed noise; (d) cosine correlations among the added noise, the removed noise, and the target noise; (e) high-frequency component of the removed noise; (f) Fourier domain with the low-frequency region removed. The display windows are [−160, 240] HU for image (a), [−50, 50] HU for images (b) and (e), and [10^4, 10^5] for images (c) and (f).
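Figure 3(d) compares noise realizations via cosine correlation. A minimal sketch of that similarity measure between two noise images follows; the helper name is my own, and the random images here merely stand in for actual noise maps.

```python
import numpy as np

def cosine_correlation(a, b):
    """Cosine similarity between two images, flattened to vectors:
    1.0 for identical noise patterns, near 0 for unrelated ones."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
noise_a = rng.normal(size=(16, 16))
noise_b = rng.normal(size=(16, 16))              # independent realization
print(round(cosine_correlation(noise_a, noise_a), 6))   # 1.0
print(abs(cosine_correlation(noise_a, noise_b)) < 0.3)  # True: nearly orthogonal
```

A high correlation between the removed noise and the added noise, as reported in the figure, indicates that the denoiser is subtracting the right pattern rather than arbitrary texture.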
Conclusions: The goal of any CT denoiser is to suppress noise as much as possible while preserving anatomical detail as much as possible. DL-based denoisers achieve state-of-the-art denoising performance by automatically extracting image features to suppress noise. However, feature quality depends heavily on the model architecture. For a denoising task, both low-level and high-level features are important: the former are essential for preserving detail, while the latter are essential for exploiting large-scale contextual information to suppress noise effectively. The encoder-decoder architecture is effective for high-level feature extraction but lacks low-level information. U-Net improves low-level feature quality to some extent via skip connections, but it still cannot deliver the high- and low-level feature quality needed to faithfully restore details.
The experimental results show that HRNet removes noise effectively while preserving fine anatomical structures. In some cases, the HRNet results are even better than the NDCT images. This may be because the denoiser removes not only the added simulated noise but also noise inherited from the target NDCT images.
In summary, in this work we introduced an HRNet-based denoiser to improve the quality of LDCT images. Because HRNet extracts high-quality low-level and high-level features simultaneously, it can suppress noise effectively while preserving details. Compared with a U-Net-based denoiser, HRNet produces higher-resolution images. Quantitative experiments show that the HRNet-based denoiser improves RMSE/SSIM from 113.80/0.550 (LDCT) to 55.24/0.745, outperforming the U-Net-based denoiser (59.87/0.712).
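For reference, the two metrics quoted above can be computed as follows. This is an illustrative re-implementation, not the authors' evaluation code: RMSE is standard, while the SSIM here is the single-window (global) form; published SSIM values typically use a sliding Gaussian window, and the `data_range` value is an assumption.

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between two images."""
    diff = np.asarray(pred, float) - np.asarray(target, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def ssim_global(pred, target, data_range=4096.0):
    """Single-window structural similarity (Wang et al. form, one window
    over the whole image). data_range is the assumed intensity span."""
    x = np.asarray(pred, float)
    y = np.asarray(target, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

target = np.zeros((8, 8))
print(rmse(target, target), ssim_global(target, target))  # 0.0 1.0
```

Lower RMSE and higher SSIM against the NDCT reference are better, which is the direction of the improvements reported above.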
Original Article
T. Bai, D. Nguyen, B. Wang, S. Jiang, "Deep High-Resolution Network for Low-Dose X-Ray CT Denoising", Journal of Artificial Intelligence for Medical Sciences, 2021, DOI: 10.2991/jaims.d.210428.001.
Read the original English article:
https://www.atlantis-press.com/journals/jaims/125956172
About the Journal
The Journal of Artificial Intelligence for Medical Sciences (JAIMS, eISSN 2666-1470) is an international, rigorously peer-reviewed, open-access journal publishing research on the theory, methods, and applications of artificial intelligence across all interdisciplinary areas of medicine, healthcare, and the life sciences. The editorial team particularly welcomes original research articles, comprehensive reviews, communications, and perspectives that offer new insights into medical diagnosis, drug development, patient care, precision therapy, and related fields, supported by machine/deep learning, data science, natural language processing (NLP), and other techniques.
JAIMS is led by founding Editor-in-Chief Prof. Zhisheng Huang of Vrije Universiteit Amsterdam, the Netherlands, with an inaugural editorial board of 36 leading scholars from eight countries, and aims to become a preferred venue and open-science platform for medical artificial intelligence. Authors retain the copyright of their articles, and no fees are charged. Submissions are welcome.
Copyright notice: This content was translated and edited by the Atlantis Press China office. The translated content is for reference only; the original English version prevails. For reprint permission, please leave a comment or contact xin.guo@atlantis-press.com.
Atlantis Press is a global open-access publishing brand in science, technology, and medicine (STM), founded in Paris, France, in 2006, with offices in Paris, Amsterdam, Beijing, Zhengzhou, and Hong Kong. Our mission is to support the advancement of scientific, technical, and medical research by enabling the research community and society at large to disseminate and exchange knowledge more effectively. To date, the Atlantis Press digital content platform hosts more than 140,000 open-access articles, free to download and read, generating more than 25 million downloads per year. Atlantis Press is part of Springer Nature.