
Development of attenuation correction methods using deep learning in brain-perfusion single-photon emission computed tomography

Bibliographic Details
Published in: Medical Physics (Lancaster), 2021-08, Vol. 48 (8), p. 4177-4190
Main Authors: Murata, Taisuke, Yokota, Hajime, Yamato, Ryuhei, Horikoshi, Takuro, Tsuneda, Masato, Kurosawa, Ryuna, Hashimoto, Takuma, Ota, Joji, Sawada, Koichi, Iimori, Takashi, Masuda, Yoshitada, Mori, Yasukuni, Suyari, Hiroki, Uno, Takashi
Format: Article
Language: English
Summary: Purpose: Computed tomography (CT)-based attenuation correction (CTAC) in single-photon emission computed tomography (SPECT) is highly accurate, but it requires hybrid SPECT/CT instruments and additional radiation exposure. To obtain attenuation correction (AC) without additional CT images, a deep learning method that generates pseudo-CT images has previously been reported, but it is limited by the cross-modality transformation, which introduces misalignment and modality-specific artifacts. This study aimed to develop a deep learning-based approach that is trained on non-attenuation-corrected (NAC) images and CTAC-based images to yield AC images in brain-perfusion SPECT. The study also investigated whether the proposed approach is superior to conventional Chang's AC (ChangAC). Methods: In total, 236 patients who underwent brain-perfusion SPECT were randomly divided into two groups: a training group (189 patients; 80%) and a test group (47 patients; 20%). Two models were constructed, one using an Autoencoder (AutoencoderAC) and one using a U-Net (U-NetAC). The ChangAC, AutoencoderAC, and U-NetAC approaches were compared with CTAC using qualitative analysis (visual evaluation) and quantitative analysis (normalized mean squared error [NMSE] and the percentage error in each brain region). Statistical analyses were performed using the Wilcoxon signed-rank test and Bland-Altman analysis. Results: U-NetAC had the highest visual evaluation score. The NMSE results for U-NetAC were the lowest, followed by AutoencoderAC and ChangAC (P
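The abstract evaluates the generated AC images against the CTAC reference with the normalized mean squared error (NMSE). A minimal sketch of one common NMSE convention (squared error normalized by the energy of the reference image) is shown below; the paper's exact normalization may differ, and the function name and arguments are illustrative only.

```python
import numpy as np

def nmse(ac_image, ref_image):
    """Normalized mean squared error between a predicted AC image and the
    CTAC reference.

    Uses one common convention: sum of squared differences divided by the
    sum of squared reference values. The paper's exact definition of NMSE
    is not given in this record and may differ.
    """
    ac = np.asarray(ac_image, dtype=float)
    ref = np.asarray(ref_image, dtype=float)
    return float(np.sum((ac - ref) ** 2) / np.sum(ref ** 2))

# Identical images yield zero error; a lower NMSE means the model's AC
# image is closer to the CTAC reference.
print(nmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```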
ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.15016