
KAUST Smart-Health

Featured Research

An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors

Abstract

Tremendous efforts have been made to improve diagnosis and treatment of COVID-19, but knowledge on long-term complications is limited. In particular, a large portion of survivors has respiratory complications, but currently, experienced radiologists and state-of-the-art artificial intelligence systems are not able to detect many abnormalities from follow-up computerized tomography (CT) scans of COVID-19 survivors. Here we propose Deep-LungParenchyma-Enhancing (DLPE), a computer-aided detection (CAD) method for detecting and quantifying pulmonary parenchyma lesions on chest CT. Through proposing a number of deep-learning-based segmentation models and assembling them in an interpretable manner, DLPE removes irrelevant tissues from the perspective of pulmonary parenchyma, and calculates the scan-level optimal window, which considerably enhances parenchyma lesions relative to the lung window. Aided by DLPE, radiologists discovered novel and interpretable lesions from COVID-19 inpatients and survivors, which were previously invisible under the lung window. Based on DLPE, we removed the scan-level bias of CT scans, and then extracted precise radiomics from such novel lesions. We further demonstrated that these radiomics have strong predictive power for key COVID-19 clinical metrics on an inpatient cohort of 1,193 CT scans and for sequelae on a survivor cohort of 219 CT scans. Our work sheds light on the development of interpretable medical artificial intelligence and showcases how artificial intelligence can discover medical findings that are beyond sight.
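To make the idea of a "scan-level optimal window" more concrete, the sketch below contrasts a fixed lung window with a per-scan window recomputed from the parenchyma voxels that remain after irrelevant tissues have been masked out. It is a minimal illustration only: the percentile-based rule, the function names (apply_window, scan_level_window), and the synthetic volume and mask are assumptions for demonstration, not the DLPE formulation described in the paper.

```python
import numpy as np

# Illustrative sketch: a fixed lung window versus a per-scan window derived
# from masked parenchyma voxels. The percentile rule below is a hypothetical
# stand-in for DLPE's scan-level window computation.

def apply_window(ct_hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map Hounsfield units into [0, 1] using a display window (level, width)."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((ct_hu - lo) / (hi - lo), 0.0, 1.0)

def scan_level_window(ct_hu: np.ndarray, parenchyma_mask: np.ndarray):
    """Derive a per-scan window from the intensity range of masked parenchyma."""
    voxels = ct_hu[parenchyma_mask > 0]
    lo, hi = np.percentile(voxels, [1, 99])        # hypothetical robust range
    return (lo + hi) / 2.0, hi - lo                # (level, width)

# Example with synthetic data standing in for a real scan and segmentation mask
ct = np.random.normal(-700.0, 150.0, size=(64, 128, 128))   # fake HU volume
mask = np.ones_like(ct, dtype=np.uint8)                      # fake parenchyma mask

lung_view = apply_window(ct, level=-600.0, width=1500.0)     # standard lung window
level, width = scan_level_window(ct, mask)
enhanced_view = apply_window(ct, level, width)               # narrower, per-scan window
```

Because the per-scan window is much narrower than the fixed lung window, small intensity differences within the parenchyma occupy a larger share of the displayed grey-level range, which is the intuition behind the enhancement that makes subvisual lesions apparent to radiologists.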

Image credit: 2021 KAUST; Heno Hwang

Image credit: 2022 KAUST; Ivan Gromicho

References

Zhou, L., Meng, X., Huang, Y., Kang, K., Zhou, J., Chu, Y., Li, H., Xie, D., Zhang, J., Yang, W., Bai, N., Zhao, Y., Zhao, M., Wang, G., Carin, L., Xiao, X., Yu, K., Qiu, Z. & Gao, X. An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors. Nature Machine Intelligence (2022).

Principal investigator

Xin Gao

Professor, Computer Science;
Interim Director, Computational Bioscience Research Center (CBRC);
Deputy Director, Smart Health Initiative;
Lead, Structural and Functional Bioinformatics (SFB) Group

First author

Longxi Zhou, Ph.D. student