Towards optimal deep fusion of imaging and clinical data via a model‐based description of fusion quality
Published in: Medical Physics (Lancaster), 2023-06, Vol. 50 (6), p. 3526-3537
Main Authors: , , , , , , , ,
Format: Article
Language: English
Summary:

Background
Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non‐trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than captured by individual data sources. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real‐world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver.
Purpose
To develop a method for measuring optimal data fusion quality in deep learning problems utilizing both imaging data and clinical data.
Methods
Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function, whose distribution takes the form of the classical Gibbs measure. The features of the FCL are then modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source bias. To minimize this source bias and optimize CNN performance, we implement a vector‐growing encoding scheme called positional encoding, where low‐dimensional clinical data are transcribed into a rich feature space that complements high‐dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then apply our approach to patient data, optimizing the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet‐152 model that incorporates both images and blood markers as input. These two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework.
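The positional encoding step described above can be sketched as follows. This is a minimal illustration assuming a transformer-style sinusoidal encoding; the marker values, the encoding dimension, and the 2048-dimensional imaging vector (a stand-in for a pooled ResNet-152 output) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def positional_encoding(value, dim=32, max_scale=10000.0):
    """Sinusoidal encoding of one scalar clinical value.

    Expands a low-dimensional measurement (e.g. a blood marker) into a
    `dim`-dimensional vector so it can be fused with imaging features
    on a more even footing.
    """
    i = np.arange(dim // 2)
    freqs = 1.0 / (max_scale ** (2 * i / dim))  # geometric frequency ladder
    angles = value * freqs
    enc = np.empty(dim)
    enc[0::2] = np.sin(angles)  # even slots: sine components
    enc[1::2] = np.cos(angles)  # odd slots: cosine components
    return enc

def encode_clinical(markers, dim=32):
    """Encode each scalar marker and concatenate into one feature vector."""
    return np.concatenate([positional_encoding(m, dim) for m in markers])

# Hypothetical blood-marker values (not from the paper's dataset)
markers = [1.2, 0.7, 3.4]
clinical_features = encode_clinical(markers, dim=32)   # 3 * 32 = 96 dims
imaging_features = np.random.randn(2048)               # stand-in CNN features
fused = np.concatenate([imaging_features, clinical_features])
print(fused.shape)  # (2144,)
```

The design intent is that each scalar grows into a bounded, multi-frequency vector, so a handful of clinical values is no longer dwarfed by thousands of imaging features in the fused layer.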
Results
Numerical validation of our approach confirmed that the probability density function of a fused feature space converges to a source‐specific probability density function when source data are improperly fused. Our numerical results demonstrate tha…
ISSN: 0094-2405, 2473-4209
DOI: 10.1002/mp.16181