Fast and accurate 3D Compton cone projections on GPU using CUDA

Bibliographic Details
Main Authors: Cui, J., Chinn, G., Levin, C. S.
Format: Conference Proceeding
Language: English
Subjects: GPU
Description
Summary: We present a fast and accurate method for reconstructing single photons detected by a Compton camera using 3D cone projection operations formulated to run on a graphics processing unit (GPU) with the compute unified device architecture (CUDA) framework. With these projection operations, the image quality and accuracy of modalities such as positron emission tomography (PET) can be improved by incorporating Compton scatter events. We also use Monte Carlo simulation to produce a model of the blurring effects caused by the limited energy and spatial resolution of the detectors, which improves the quality and accuracy of the reconstructed images. The blur model is then incorporated into the cone projections on a cone-by-cone basis. Our method overcomes challenges such as compute-thread divergence, and exploits GPU capabilities such as shared memory and texture memory. Unique challenges of projecting cones, compared with projecting lines, are also addressed for the GPU. The projection operations are integrated into a list-mode ordered subsets expectation maximization (OSEM) framework to reconstruct images from a Compton camera. The algorithm with the blurring model achieves an average 17.3% improvement in contrast-to-noise ratio (CNR) over images reconstructed without the blurring model. The whole reconstruction algorithm takes 2.2 seconds per iteration to process 50,000 cones in a 96×96×32 image on an NVIDIA GeForce GTX 480 GPU, including forward projection, backprojection, and the multiplicative update. On a single core of a state-of-the-art central processing unit (CPU), the same task with the same level of accuracy in blur modeling takes 3.1 hours. Images generated by the CPU and GPU implementations of the same blurring model are virtually identical, with a root-mean-squared (RMS) deviation of 0.01%.
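The list-mode OSEM framework the abstract mentions can be illustrated with a minimal CPU sketch. This is not the paper's GPU implementation: the function name is hypothetical, and a small dense matrix stands in for the cone projections (in the actual method, each row would be a blurred 3D cone projected through the image volume). It shows the three steps the abstract times per iteration: forward projection, backprojection of the ratio, and the multiplicative update.

```python
import numpy as np

def list_mode_osem_update(image, cones, sensitivity):
    """One list-mode OSEM update over a subset of detected events.

    image       : current activity estimate (flattened voxel array)
    cones       : one system-matrix row per detected event; here a dense
                  stand-in for the sparse blurred cone projections
    sensitivity : per-voxel sum of system-matrix rows (normalization)
    """
    back = np.zeros_like(image)
    for c in cones:                       # each detected Compton event
        fp = c @ image                    # forward projection along the cone
        if fp > 0:
            back += c / fp                # backproject the ratio
    return image * back / sensitivity     # multiplicative update

# Toy example: 4-voxel image, 3 "cones" (random stand-in data)
rng = np.random.default_rng(0)
cones = rng.random((3, 4))
sens = cones.sum(axis=0)
img = np.ones(4)
img = list_mode_osem_update(img, cones, sens)
```

In the GPU version described by the paper, the per-event loop is what is parallelized with CUDA, which is where the thread-divergence and shared/texture-memory considerations mentioned in the abstract arise.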
ISSN: 1082-3654, 2577-0829