File(s) under embargo

Reason: 12 month publisher embargo

9 month(s), 23 day(s) until file(s) become available

MIDCAN: A multiple input deep convolutional attention network for Covid-19 diagnosis based on chest CT and chest X-ray

journal contribution
posted on 14.09.2021, 09:39 by YD Zhang, Z Zhang, X Zhang, SH Wang
Background
COVID-19 had caused 3.34 million deaths as of 13 May 2021, and it continues to cause new confirmed cases and deaths every day.

Method
This study investigated whether fusing chest CT with chest X-ray can improve AI diagnosis performance. Data harmonization is employed to build a homogeneous dataset. We create an end-to-end multiple-input deep convolutional attention network (MIDCAN) using the convolutional block attention module (CBAM). One input of our model receives the 3D chest CT image, and the other input receives the 2D chest X-ray image. In addition, multiple-way data augmentation is used to generate synthetic data for the training set, and Grad-CAM is used to produce explainable heatmaps.
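The exact MIDCAN configuration is specified in the paper; as a rough illustration only, the PyTorch sketch below shows a generic two-input network with a CBAM block (channel attention followed by spatial attention). The branch depths, channel widths, fusion by concatenation, and the choice to apply CBAM only in the X-ray branch are simplifying assumptions for brevity, not the authors' design.

```python
# Minimal sketch of a two-input network with CBAM-style attention (PyTorch).
# Layer sizes, depths, and the fusion strategy are illustrative assumptions,
# not the exact MIDCAN architecture.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module (channel + spatial attention), 2D variant."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: conv over concatenated channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)          # channel attention
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa                                             # spatial attention

class TwoInputNet(nn.Module):
    """Two branches (3D chest CT, 2D chest X-ray) fused before a shared classifier."""
    def __init__(self, num_classes=2):
        super().__init__()
        # 3D branch for CT volumes, shape (B, 1, D, H, W)
        self.ct_branch = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # 2D branch for X-rays, shape (B, 1, H, W), with CBAM on its feature maps
        self.xray_conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.xray_cbam = CBAM(32)
        self.xray_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, ct, xray):
        f_ct = self.ct_branch(ct)
        f_xr = self.xray_pool(self.xray_cbam(self.xray_conv(xray)))
        return self.classifier(torch.cat([f_ct, f_xr], dim=1))

# Example usage with toy tensors
model = TwoInputNet()
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 2])
```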

Results
The proposed MIDCAN achieves a sensitivity of 98.10±1.88%, a specificity of 97.95±2.26%, and an accuracy of 98.02±1.35%.
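The reported metrics are the standard confusion-matrix ratios; a minimal sketch of how they are computed (with placeholder counts, not the paper's confusion matrix) is shown below.

```python
# Standard confusion-matrix metrics; the counts used here are placeholders only.
def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

print(metrics(tp=98, fp=2, tn=98, fn=2))  # illustrative counts, not the paper's data
```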

Conclusion
Our MIDCAN method provides better results than eight state-of-the-art approaches. We demonstrate that using multiple modalities achieves better results than either modality alone, and that CBAM helps improve diagnosis performance.

Funding

Medical Research Council Confidence in Concept Award, UK (MC_PC_17171)

Hope Foundation for Cancer Research, UK (RM60G0680)

British Heart Foundation Accelerator Award, UK

Sino-UK Industrial Fund, UK (RP202G0289)

Global Challenges Research Fund (GCRF), UK (P202PF11)

Royal Society International Exchanges Cost Share Award, UK (RP202G0230)

History

Citation

Pattern Recognition Letters Volume 150, October 2021, Pages 8-16

Author affiliation

School of Informatics

Version

AM (Accepted Manuscript)

Published in

Pattern Recognition Letters

Volume

150

Pagination

8 - 16

Publisher

Elsevier

ISSN

0167-8655

Acceptance date

23/06/2021

Copyright date

2021

Available date

14/07/2022

Language

eng