SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

SCI Publications

2022


A. Busatto, J.A. Bergquist, L.C. Rupp, B. Zenger, R.S. MacLeod. “Unexpected Errors in the Electrocardiographic Forward Problem,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Previous studies have compared recorded torso potentials with electrocardiographic forward solutions from a pericardial cage. In this study, we introduce new comparisons of the forward solutions from the sock and cage with each other and with respect to the measured potentials on the torso. The forward problem of electrocardiographic imaging is expected to achieve high levels of accuracy since it is mathematically well posed. However, unexpectedly high residual errors remain between the computed and measured torso signals in experiments. A possible source of these errors is the limited spatial coverage of the cardiac sources in most experiments; most capture potentials only from the ventricles. To resolve the relationship between spatial coverage and the accuracy of the forward simulations, we combined two methods of capturing cardiac potentials using a 240-electrode sock and a 256-electrode cage, both surrounding a heart suspended in a 192-electrode torso tank. We analyzed beats from three pacing sites and calculated the RMSE, spatial correlation, and temporal correlation. We found that the forward solutions using the sock as the cardiac source were poorer than those obtained from the cage. In this study, we explore the differences in forward solution accuracy using the sock and the cage and suggest some possible explanations for these differences.
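
The comparison metrics named here are standard; as a concrete illustration, a minimal NumPy sketch of how RMSE, spatial correlation, and temporal correlation between computed and measured torso potentials might be computed (array names, shapes, and the electrode-by-time layout are assumptions, not the authors' code):

```python
import numpy as np

def compare_torso_potentials(computed, measured):
    """Compare forward-computed and measured torso potentials.

    Both arrays are (n_electrodes, n_timesteps); names and shapes are
    illustrative assumptions, not the paper's implementation.
    """
    rmse = np.sqrt(np.mean((computed - measured) ** 2))

    # Spatial correlation: correlate the electrode maps at each time step.
    spatial = np.array([
        np.corrcoef(computed[:, t], measured[:, t])[0, 1]
        for t in range(computed.shape[1])
    ])

    # Temporal correlation: correlate the time signals at each electrode.
    temporal = np.array([
        np.corrcoef(computed[e, :], measured[e, :])[0, 1]
        for e in range(computed.shape[0])
    ])

    return rmse, spatial.mean(), temporal.mean()
```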



N. Cheng, O.A. Malik, Y. Xu, S. Becker, A. Doostan, A. Narayan. “Quadrature Sampling of Parametric Models with Bi-fidelity Boosting,” Subtitled “arXiv:2209.05705v1,” 2022.

ABSTRACT

Least squares regression is a ubiquitous tool for building emulators (a.k.a. surrogate models) of problems across science and engineering for purposes such as design space exploration and uncertainty quantification. When the regression data are generated using an experimental design process (e.g., a quadrature grid) involving computationally expensive models, or when the data size is large, sketching techniques have shown promise to reduce the cost of the construction of the regression model while ensuring accuracy comparable to that of the full data. However, random sketching strategies, such as those based on leverage scores, lead to regression errors that are random and may exhibit large variability. To mitigate this issue, we present a novel boosting approach that leverages cheaper, lower-fidelity data of the problem at hand to identify the best sketch among a set of candidate sketches. This in turn specifies the sketch of the intended high-fidelity model and the associated data. We provide theoretical analyses of this bi-fidelity boosting (BFB) approach and discuss the conditions the low- and high-fidelity data must satisfy for a successful boosting. In doing so, we derive a bound on the residual norm of the BFB sketched solution relating it to its ideal, but computationally expensive, high-fidelity boosted counterpart. Empirical results on both manufactured and PDE data corroborate the theoretical analyses and illustrate the efficacy of the BFB solution in reducing the regression error, as compared to the non-boosted solution.
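
As an illustration of the boosting idea described above, here is a hedged Python sketch: draw several candidate row sketches, solve each sketched least-squares problem on the cheap low-fidelity data, keep the sketch with the smallest residual on the full low-fidelity problem, and reuse it for the high-fidelity regression. Uniform row sampling stands in for the paper's leverage-score sketches, and all names and sizes are illustrative:

```python
import numpy as np

def bfb_select_sketch(A_lo, b_lo, n_rows, n_candidates, rng):
    """Choose, among random candidate row sketches, the one whose
    sketched low-fidelity solution has the smallest residual on the
    full low-fidelity problem (the BFB selection step, simplified)."""
    m = A_lo.shape[0]
    best_rows, best_res = None, np.inf
    for _ in range(n_candidates):
        rows = rng.choice(m, size=n_rows, replace=False)
        x, *_ = np.linalg.lstsq(A_lo[rows], b_lo[rows], rcond=None)
        res = np.linalg.norm(A_lo @ x - b_lo)
        if res < best_res:
            best_res, best_rows = res, rows
    return best_rows

# The winning rows then specify which expensive high-fidelity samples
# to generate and regress on (illustrative data).
rng = np.random.default_rng(0)
A_lo = rng.normal(size=(500, 10))
b_lo = A_lo @ rng.normal(size=10) + 0.1 * rng.normal(size=500)
rows = bfb_select_sketch(A_lo, b_lo, n_rows=30, n_candidates=20, rng=rng)
```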



H. Csala, S.T.M. Dawson, A. Arzani. “Comparing different nonlinear dimensionality reduction techniques for data-driven unsteady fluid flow modeling,” In Physics of Fluids, AIP Publishing, 2022.
DOI: https://doi.org/10.1063/5.0127284

ABSTRACT

Computational fluid dynamics (CFD) is known for producing high-dimensional spatiotemporal data. Recent advances in machine learning (ML) have introduced a myriad of techniques for extracting physical information from CFD. Identifying an optimal set of coordinates for representing the data in a low-dimensional embedding is a crucial first step toward data-driven reduced-order modeling and other ML tasks. This is usually done via principal component analysis (PCA), which gives an optimal linear approximation. However, fluid flows are often complex and have nonlinear structures, which cannot be discovered or efficiently represented by PCA. Several unsupervised ML algorithms have been developed in other branches of science for nonlinear dimensionality reduction (NDR), but have not been extensively used for fluid flows. Here, four manifold learning and two deep learning (autoencoder)-based NDR methods are investigated and compared to PCA. These are tested on two canonical fluid flow problems (laminar and turbulent) and two biomedical flows in brain aneurysms. The data reconstruction capabilities of these methods are compared, and the challenges are discussed. The temporal vs spatial arrangement of data and its influence on NDR mode extraction is investigated. Finally, the modes are qualitatively compared. The results suggest that using NDR methods would be beneficial for building more efficient reduced-order models of fluid flows. All NDR techniques resulted in smaller reconstruction errors for spatial reduction. Temporal reduction was a harder task; nevertheless, it resulted in physically interpretable modes. Our work is one of the first comprehensive comparisons of various NDR methods in unsteady flows.
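
To make the comparison protocol concrete, a small scikit-learn sketch contrasting linear PCA with a nonlinear method on a snapshot matrix follows; KernelPCA stands in for the paper's manifold-learning and autoencoder methods, and the synthetic data is only a placeholder for CFD snapshots:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Snapshot matrix: rows are time snapshots, columns are spatial points.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)[:, None]
X = np.sin(t * np.arange(1, 51)) + 0.01 * rng.normal(size=(200, 50))

for name, model in [
    ("PCA", PCA(n_components=5)),
    ("KernelPCA", KernelPCA(n_components=5, kernel="rbf",
                            fit_inverse_transform=True)),
]:
    Z = model.fit_transform(X)           # low-dimensional embedding
    X_rec = model.inverse_transform(Z)   # back-projection to full space
    err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
    print(f"{name}: relative reconstruction error {err:.3e}")
```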



H. Dai, M. Bauer, P.T. Fletcher, S.C. Joshi. “Deep Learning the Shape of the Brain Connectome,” Subtitled “arXiv preprint arXiv:2203.06122, 2022,” 2022.

ABSTRACT

To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. We show for the first time how one can leverage deep neural networks to estimate a Riemannian metric of the brain that can accommodate fiber crossings and is a natural modeling tool to infer the shape of the brain from DWMRI. Our method achieves excellent performance in geodesic-white-matter-pathway alignment and tackles the long-standing issue in previous methods: the inability to recover the crossing fibers with high fidelity.



M. Dorier, Z. Wang, U. Ayachit, S. Snyder, R. Ross, M. Parashar. “Colza: Enabling Elastic In Situ Visualization for High-performance Computing Simulations,” In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), IEEE, pp. 538-548. 2022.
DOI: 10.1109/IPDPS53621.2022.00059

ABSTRACT

In situ analysis and visualization have grown increasingly popular for enabling direct access to data from high-performance computing (HPC) simulations. As a simulation progresses and interesting physical phenomena emerge, however, the data produced may become increasingly complex, and users may need to dynamically change the type and scale of in situ analysis tasks being carried out and consequently adapt the amount of resources allocated to such tasks. To date, none of the production in situ analysis frameworks offer such an elasticity feature, and for good reason: the assumption that the number of processes could vary during run time would force developers to rethink software and algorithms at every level of the in situ analysis stack. In this paper we present Colza, a data staging service with elastic in situ visualization capabilities. Colza relies on the widely used ParaView Catalyst in situ visualization framework and enables elasticity by replacing MPI with a custom collective communication library based on the Mochi suite of libraries. To the best of our knowledge, this work is the first to enable elastic in situ visualization capabilities for HPC applications on top of existing production analysis tools.



S. Fang, A. Narayan, R.M. Kirby, S. Zhe. “Bayesian Continuous-Time Tucker Decomposition,” In Proceedings of the 39th International Conference on Machine Learning, 2022.

ABSTRACT

Tensor decomposition is a dominant framework for multiway data analysis and prediction. Although practical data often contain timestamps for the observed entries, existing tensor decomposition approaches overlook or under-use this valuable time information. They either drop the timestamps or bin them into crude steps, hence ignoring the temporal dynamics within each step, or use simple parametric time coefficients. To overcome these limitations, we propose Bayesian Continuous-Time Tucker Decomposition (BCTT). We model the tensor-core of the classical Tucker decomposition as a time-varying function and place a Gaussian process prior on it to flexibly estimate all kinds of temporal dynamics. In this way, our model maintains interpretability while remaining flexible enough to capture various complex temporal relationships between the tensor nodes. For efficient and high-quality posterior inference, we use the stochastic differential equation (SDE) representation of temporal GPs to build an equivalent state-space prior, which avoids huge kernel matrix computation and sparse/low-rank approximations. We then use Kalman filtering, RTS smoothing, and conditional moment matching to develop a scalable message-passing inference algorithm. We show the advantage of our method in simulation and several real-world applications.
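
The model structure (a time-varying Tucker core contracted with static factor matrices) can be sketched in a few lines of NumPy; the GP prior on the core and the SDE/Kalman inference are the paper's contribution and are not reproduced here. All names and ranks below are illustrative:

```python
import numpy as np

def tucker_entry(core_t, factors, idx):
    """Value of a Tucker model at entry idx, with the core evaluated
    at the entry's timestamp. BCTT places a GP prior on t -> W(t) and
    infers it with an SDE/Kalman scheme; here core_t is simply given."""
    u1, u2, u3 = (U[i] for U, i in zip(factors, idx))
    return np.einsum('abc,a,b,c->', core_t, u1, u2, u3)

rng = np.random.default_rng(0)
factors = [rng.normal(size=(I, 3)) for I in (20, 15, 10)]  # U_k: (I_k, R)
core_at_t = rng.normal(size=(3, 3, 3))  # stands in for a sampled W(t)
print(tucker_entry(core_at_t, factors, (4, 7, 2)))
```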



R. Faust, C. Scheidegger, K. Isaacs, W.Z. Bernstein, M. Sharp, C. North. “Interactive Visualization for Data Science Scripts,” In 2022 IEEE Visualization in Data Science (VDS), IEEE, pp. 37-45. 2022.

ABSTRACT

As the field of data science continues to grow, so does the need for adequate tools to understand and debug data science scripts. Current debugging practices fall short when applied to a data science setting, due to the exploratory and iterative nature of analysis scripts. Additionally, computational notebooks, the preferred scripting environment of many data scientists, present additional challenges to understanding and debugging workflows, including the non-linear execution of code snippets. This paper presents Anteater, a trace-based visual debugging method for data science scripts. Anteater automatically traces and visualizes execution data with minimal analyst input. The visualizations illustrate execution and value behaviors that aid in understanding the results of analysis scripts. To maximize the number of workflows supported, we present prototype implementations in both Python and Jupyter. Finally, to demonstrate Anteater’s support for analysis understanding tasks, we provide two usage scenarios on real-world analysis scripts.
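
Anteater's implementation is not shown in the abstract, but the underlying idea of trace-based value collection can be sketched with Python's standard sys.settrace hook; everything below (function names, the traced variable) is illustrative, not the tool's API:

```python
import sys

def trace_values(fn, var_names, *args):
    """Run fn(*args), recording the values of selected local variables
    at each executed line -- a simplified stand-in for Anteater's
    automatic execution tracing."""
    history = {v: [] for v in var_names}

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            for v in var_names:
                if v in frame.f_locals:
                    history[v].append(frame.f_locals[v])
        return tracer

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, history

def running_total(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

_, hist = trace_values(running_total, ["total"], [1.0, 2.0, 4.0])
print(hist["total"])   # value of `total` at each traced line
```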



A. Ferrero, B. Knudsen, D. Sirohi, R. Whitaker. “A Pathologist-Informed Workflow for Classification of Prostate Glands in Histopathology,” In Medical Optical Imaging and Virtual Microscopy Image Analysis, Springer Nature Switzerland, pp. 53-62. 2022.
DOI: 10.1007/978-3-031-16961-8_6

ABSTRACT

Pathologists diagnose and grade prostate cancer by examining tissue from needle biopsies on glass slides. The cancer's severity and risk of metastasis are determined by the Gleason grade, a score based on the organization and morphology of prostate cancer glands. For diagnostic work-up, pathologists first locate glands in the whole biopsy core, and---if they detect cancer---they assign a Gleason grade. This time-consuming process is subject to errors and significant inter-observer variability, despite strict diagnostic criteria. This paper proposes an automated workflow that follows pathologists' modus operandi, isolating and classifying multi-scale patches of individual glands in whole slide images (WSI) of biopsy tissues using distinct steps: (1) two fully convolutional networks segment epithelium versus stroma and gland boundaries, respectively; (2) a classifier network separates benign from cancer glands at high magnification; and (3) an additional classifier predicts the grade of each cancer gland at low magnification. Altogether, this process provides a gland-specific approach for prostate cancer grading that we compare against other machine-learning-based grading methods.



M. Grant, M. R. Kunz, K. Iyer, L. I. Held, T. Tasdizen, J. A. Aguiar, P. P. Dholabhai. “Integrating atomistic simulations and machine learning to design multi-principal element alloys with superior elastic modulus,” In Journal of Materials Research, Springer International Publishing, pp. 1-16. 2022.

ABSTRACT

Multi-principal element high-entropy alloys (HEAs) are an emerging class of materials that have found applications across the board. Owing to the multitude of possible candidate alloys, exploration and compositional design of HEAs for targeted applications is challenging since it necessitates a rational approach to identify compositions exhibiting enriched performance. Here, we report an innovative framework that integrates molecular dynamics and machine learning to explore a large chemical-configurational space for evaluating elastic modulus of equiatomic and non-equiatomic HEAs along primary crystallographic directions. Vital thermodynamic properties and machine learning features have been incorporated to establish fundamental relationships correlating Young’s modulus with Gibbs free energy, valence electron concentration, and atomic size difference. In HEAs, as the number of elements increases …



J. Gu, P. Davis, G. Eisenhauer, W. Godoy, A. Huebl, S. Klasky, M. Parashar, N. Podhorszki, F. Poeschel, J. Vay, L. Wan, R. Wang, K. Wu. “Organizing Large Data Sets for Efficient Analyses on HPC Systems,” In Journal of Physics: Conference Series, Vol. 2224, No. 1, IOP Publishing, pp. 012042. 2022.

ABSTRACT

Upcoming exascale applications could introduce significant data management challenges due to their large sizes, dynamic work distribution, and involvement of accelerators such as graphics processing units (GPUs). In this work, we explore the performance of reading and writing operations involving one such scientific application on two different supercomputers. Our tests showed that the Adaptable Input and Output System (ADIOS) was able to achieve speeds over 1TB/s, a significant fraction of the peak I/O performance on Summit. We also demonstrated that the querying functionality in ADIOS could effectively support common selective data analysis operations, such as conditional histograms. In tests, this query mechanism was able to reduce the execution time by a factor of five. More importantly, the ADIOS data management framework allows us to achieve these performance improvements with only a minimal amount …



M. Han, S. Sane, C. R. Johnson. “Exploratory Lagrangian-Based Particle Tracing Using Deep Learning,” In Journal of Flow Visualization and Image Processing, Begell, 2022.
DOI: 10.1615/JFlowVisImageProc.2022041197

ABSTRACT

Time-varying vector fields produced by computational fluid dynamics simulations are often prohibitively large and pose challenges for accurate interactive analysis and exploration. To address these challenges, reduced Lagrangian representations have been increasingly researched as a means to improve scientific time-varying vector field exploration capabilities. This paper presents a novel deep neural network-based particle tracing method to explore time-varying vector fields represented by Lagrangian flow maps. In our workflow, in situ processing is first utilized to extract Lagrangian flow maps, and deep neural networks then use the extracted data to learn flow field behavior. Using a trained model to predict new particle trajectories offers a fixed small memory footprint and fast inference. To demonstrate and evaluate the proposed method, we perform an in-depth study of performance using a well-known analytical data set, the Double Gyre. Our study considers two flow map extraction strategies, the impact of the number of training samples and integration durations on efficacy, evaluates multiple sampling options for training and testing, and informs hyperparameter settings. Overall, we find our method requires a fixed memory footprint of 10.5 MB to encode a Lagrangian representation of a time-varying vector field while maintaining accuracy. For post hoc analysis, loading the trained model costs only two seconds, significantly reducing the burden of I/O when reading data for visualization. Moreover, our parallel implementation can infer one hundred locations for each of two thousand new pathlines in 1.3 seconds using one NVIDIA Titan RTX GPU.
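
A minimal sketch of the workflow on the Double Gyre: integrate seed particles through the analytical velocity field to extract flow-map samples, then fit a small network to map start positions to end positions. scikit-learn's MLPRegressor stands in for the paper's deep network, and all parameters are placeholders:

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

# Unsteady Double Gyre velocity field (standard analytical benchmark).
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def velocity(t, xy):
    x, y = xy
    a = eps * np.sin(om * t)
    f = a * x**2 + (1 - 2 * a) * x
    df = 2 * a * x + (1 - 2 * a)
    return [-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
            np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * df]

# Extract Lagrangian flow-map samples: seed particles, integrate over
# one interval, and record (start, end) pairs.
rng = np.random.default_rng(2)
starts = rng.uniform([0, 0], [2, 1], size=(500, 2))
ends = np.array([solve_ivp(velocity, (0.0, 1.0), s).y[:, -1]
                 for s in starts])

# Learn the flow map; a trained model then predicts new pathline
# segments with a fixed, small memory footprint.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(starts, ends)
```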



M. Han, T.M. Athawale, D. Pugmire, C.R. Johnson. “Accelerated Probabilistic Marching Cubes by Deep Learning for Time-Varying Scalar Ensembles,” In 2022 IEEE Visualization and Visual Analytics (VIS), IEEE, pp. 155-159. 2022.
DOI: 10.1109/VIS54862.2022.00040

ABSTRACT

Visualizing the uncertainty of ensemble simulations is challenging due to the large size and multivariate and temporal features of ensemble data sets. One popular approach to studying the uncertainty of ensembles is analyzing the positional uncertainty of the level sets. Probabilistic marching cubes is a technique that performs Monte Carlo sampling of multivariate Gaussian noise distributions for positional uncertainty visualization of level sets. However, the technique suffers from high computational time, making interactive visualization and analysis impossible to achieve. This paper introduces a deep-learning-based approach to learning the level-set uncertainty for two-dimensional ensemble data with a multivariate Gaussian noise assumption. We train the model using the first few time steps from time-varying ensemble data in our workflow. We demonstrate that our trained model accurately infers uncertainty in level sets for new time steps and is up to 170X faster than the original probabilistic model with serial computation and 10X faster than the original parallel computation.
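
The Monte Carlo step that the trained network replaces can be sketched directly: for one 2D cell with a multivariate Gaussian model of its corner values, the level-crossing probability is estimated by sampling. This is a simplified stand-in for the probabilistic marching cubes computation; names and numbers are illustrative:

```python
import numpy as np

def crossing_probability(mean, cov, isovalue, n_samples=1000, rng=None):
    """Monte Carlo estimate of the probability that a level set crosses
    a 2D cell, given a multivariate Gaussian model of its 4 corner
    values (the per-cell step the paper's network learns to replace)."""
    rng = rng or np.random.default_rng()
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    above = samples > isovalue
    # The cell is crossed whenever the corners disagree about the isovalue.
    crossed = above.any(axis=1) & ~above.all(axis=1)
    return crossed.mean()

# Example: 4 correlated corner values near the isovalue.
mean = np.array([0.9, 1.1, 1.0, 0.95])
cov = 0.05 * np.eye(4) + 0.025 * np.ones((4, 4))
print(crossing_probability(mean, cov, isovalue=1.0))
```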



J.D. Hogue, R.M. Kirby, A. Narayan. “Dimensionality Reduction in Deep Learning via Kronecker Multi-layer Architectures,” Subtitled “arXiv:2204.04273,” 2022.

ABSTRACT

Deep learning using neural networks is an effective technique for generating models of complex data. However, training such models can be expensive when networks have large model capacity resulting from a large number of layers and nodes. For training in such a computationally prohibitive regime, dimensionality reduction techniques ease the computational burden, and allow implementations of more robust networks. We propose a novel type of such dimensionality reduction via a new deep learning architecture based on fast matrix multiplication of a Kronecker product decomposition; in particular our network construction can be viewed as a Kronecker product-induced sparsification of an "extended" fully connected network. Analysis and practical examples show that this architecture allows a neural network to be trained and implemented with a significant reduction in computational time and resources, while achieving a similar error level compared to a traditional feedforward neural network.
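
The core trick, applying a Kronecker-product weight matrix without ever forming it, follows from the identity (A kron B) vec(X) = vec(B X A^T); below is a NumPy sketch under the assumption of a plain Kronecker layer (the paper's architecture has additional structure):

```python
import numpy as np

def kron_layer(X, A, B):
    """Apply the weight matrix (A kron B) to a batch of inputs without
    forming it, using vec(B X A^T) with column-major (Fortran) vec."""
    p, q = A.shape[1], B.shape[1]
    out = []
    for x in X:
        Xm = x.reshape(q, p, order='F')                # un-vec
        out.append((B @ Xm @ A.T).reshape(-1, order='F'))
    return np.stack(out)

# Check against the explicit Kronecker product on toy sizes.
rng = np.random.default_rng(3)
A, B = rng.normal(size=(6, 4)), rng.normal(size=(5, 3))
x = rng.normal(size=(1, 12))                           # 12 = 4 * 3
assert np.allclose(kron_layer(x, A, B)[0], np.kron(A, B) @ x[0])
```

The factored form costs two small matrix multiplies instead of one large one, which is where the claimed reduction in computational time and parameters comes from.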



J.K. Holmen. “Portable, Scalable Approaches For Improving Asynchronous Many-Task Runtime Node Use,” School of Computing, University of Utah, 2022.

ABSTRACT

This research addresses node-level scalability, portability, and heterogeneous computing challenges facing asynchronous many-task (AMT) runtime systems. These challenges have arisen due to increasing socket/core/thread counts and diversity among supported architectures on current and emerging high-performance computing (HPC) systems. This places greater emphasis on thread scalability and on the simultaneous use of diverse architectures to maximize node use, and it is complicated by architecture-specific programming models.

To reduce the exposure of application developers to such challenges, AMT programming models have emerged to offer a runtime-based solution. These models overdecompose a problem into many fine-grained tasks to be scheduled and executed by an underlying runtime to improve node-level concurrency. However, task execution granularity challenges remain, and it is unclear where and how shared memory programming models should be used within an AMT model to improve node use. This research aims to ease these design decisions with consideration for performance portability layers (PPLs), which provide a single interface to multiple shared memory programming models.
The contribution of this research is the design of a task scheduling approach for portably improving node use when extending AMT runtime systems to many-core and heterogeneous HPC systems with shared memory programming models. The success of this approach is shown through the portable adoption of a performance portability layer, Kokkos, within Uintah, a representative AMT runtime system. The resulting task scheduler enables the scheduling and execution of portable, fine-grained tasks across processors and accelerators simultaneously with flexible control over task execution granularity. A collection of experiments on current many-core and heterogeneous HPC systems is used to validate this approach and inform design recommendations. Among the resulting recommendations are approaches for easing the adoption of a heterogeneous MPI+PPL task scheduling approach in an asynchronous many-task runtime system and for easing the indirect adoption of a performance portability layer in large legacy codebases.



J.K. Holmen, D. Sahasrabudhe, M. Berzins. “Porting Uintah to Heterogeneous Systems,” In Proceedings of the Platform for Advanced Scientific Computing Conference (PASC22), ACM, 2022. Best Paper Award.

ABSTRACT

The Uintah Computational Framework is being prepared to make portable use of forthcoming exascale systems, initially the DOE Aurora system through the Aurora Early Science Program. This paper describes the evolution of Uintah to be ready for such architectures. A key part of this preparation has been the adoption of the Kokkos performance portability layer in Uintah. The sheer size of the Uintah codebase has made it imperative to have a representative benchmark. The design of this benchmark and the use of Kokkos within it is discussed. This paper complements recent work with additional details and new scaling studies run at 24x the scale of earlier studies. Results are shown for two benchmarks executing workloads representative of typical Uintah applications. These results demonstrate single-source portability across the DOE Summit and NSF Frontera systems with good strong-scaling characteristics. The challenge of extending this approach to anticipated exascale systems is also considered.



Y. Ishidoya, E. Kwan, D. J. Dosdall, R. S. MacLeod, L. Navaravong, B. A. Steinberg, T. J. Bunch, R. Ranjan. “Short-Term Natural Course of Esophageal Thermal Injury After Ablation for Atrial Fibrillation,” In Journal of Cardiovascular Electrophysiology, Wiley, 2022.
DOI: 10.1111/jce.15553

ABSTRACT

Purpose
To provide insight into the short-term natural history of esophageal thermal injury (ETI) after radiofrequency catheter ablation (RFCA) for atrial fibrillation (AF) by esophagogastroduodenoscopy (EGD).

Methods
We screened patients who underwent RFCA for AF and EGD based on esophageal late gadolinium enhancement (LGE) in post-ablation MRI. Patients with ETI diagnosed by EGD were included. We defined the severity of ETI according to the Kansas City classification (KCC): type 1: erythema; type 2: ulcers (2a: superficial; 2b: deep); type 3: perforation (3a: perforation; 3b: perforation with atrioesophageal fistula). Repeated EGD was performed within 1-14 days after the last EGD, when recommended and feasible, until definite signs of healing (visible reduction in size without deepening of the ETI, or complete resolution) were observed.
Results
ETI was observed in 62 of 378 patients who underwent EGD after RFCA. Of these 62 patients with ETI, 21% (13) were type 1, 50% (31) were type 2a, and 29% (18) were type 2b at the initial EGD. All esophageal lesions except one type 2b lesion that developed into an atrioesophageal fistula (AEF) showed signs of healing on repeated EGD studies within 14 days after the procedure. The type 2b lesion that developed into an AEF instead showed an increase in size and ulcer deepening on repeat EGD 8 days after the procedure.
Conclusion
We found that all ETIs that did not progress to AEF showed signs of healing within 14 days after the procedure, and that worsening ETI may be an early warning sign of developing esophageal perforation.



Y. Ishidoya, E. Kwan, D. J. Dosdall, R. S. MacLeod, L. Navaravong, B. A. Steinberg, T. J. Bunch, R. Ranjan. “Shorter Distance Between The Esophagus And The Left Atrium Is Associated With Higher Rates Of Esophageal Thermal Injury After Radiofrequency Ablation,” In Journal of Cardiovascular Electrophysiology, Wiley, 2022.
DOI: 10.1111/jce.15554

ABSTRACT

Background
Esophageal thermal injury (ETI) is a known and potentially serious complication of catheter ablation for atrial fibrillation. We aimed to evaluate the distance between the esophagus and the left atrial posterior wall (LAPW) and its association with esophageal thermal injury.

Methods
A retrospective analysis of 73 patients who underwent esophagogastroduodenoscopy (EGD) after LA radiofrequency catheter ablation for symptomatic atrial fibrillation and pre-ablation magnetic resonance imaging (MRI) was used to identify the minimum distance between the inner lumen of the esophagus and the ablated atrial endocardium (pre-ablation atrial esophageal distance; pre-AED) and occurrence of ETI. Parameters of ablation index (AI, Visitag Surpoint) were collected in 30 patients from the CARTO3 system and compared to assess if ablation strategies and AI further impacted risk of ETI.
Results
Pre-AED was significantly larger in patients without ETI than those with ETI (5.23 ± 0.96 mm vs 4.31 ± 0.75 mm, p < 0.001). Pre-AED showed high accuracy for predicting ETI with the best cutoff value of 4.37 mm. AI was statistically comparable between Visitag lesion markers with and without associated esophageal late gadolinium enhancement (LGE) detected by post-ablation MRI in the low-power long-duration ablation group (LPLD, 25-40W for 10 to 30 s, 393.16 [308.62, 408.86] versus 406.58 [364.38, 451.22], p = 0.16) and high-power short-duration group (HPSD, 50W for 5-10 s, 336.14 [299.66, 380.11] versus 330.54 [286.21, 384.71], p = 0.53), respectively.
Conclusion
Measuring the distance between the LA and the esophagus in pre-ablation LGE-MRI could be helpful in predicting ETI after LAPW ablation.



K. Iyer, A. Morris, B. Zenger, K. Karnath, B.A. Orkild, O. Korshak, S. Elhabian. “Statistical Shape Modeling of Biventricular Anatomy with Shared Boundaries,” Subtitled “arXiv:2209.02706v1,” 2022.

ABSTRACT

Statistical shape modeling (SSM) is a valuable and powerful tool to generate a detailed representation of complex anatomy that enables quantitative analysis and the comparison of shapes and their variations. SSM applies mathematics, statistics, and computing to parse the shape into a quantitative representation (such as correspondence points or landmarks) that will help answer various questions about the anatomical variations across the population. Complex anatomical structures have many diverse parts with varying interactions or intricate architecture. For example, the heart is a four-chambered organ with several shared boundaries between chambers. Coordinated and efficient contraction of the chambers of the heart is necessary to adequately perfuse end organs throughout the body. Subtle shape changes within these shared boundaries of the heart can indicate potential pathological changes that lead to uncoordinated contraction and poor end-organ perfusion. Early detection and robust quantification could provide insight into ideal treatment techniques and intervention timing. However, existing SSM approaches fall short of explicitly modeling the statistics of shared boundaries. In this paper, we present a general and flexible data-driven approach for building statistical shape models of multi-organ anatomies with shared boundaries that captures morphological and alignment changes of individual anatomies and their shared boundary surfaces throughout the population. We demonstrate the effectiveness of the proposed methods using a biventricular heart dataset by developing shape models that consistently parameterize the cardiac biventricular structure and the interventricular septum (shared boundary surface) across the population data.
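
The classical point-distribution-model step underlying SSM can be sketched with PCA over flattened correspondence points; the synthetic shapes below are placeholders, and the paper's shared-boundary handling and correspondence optimization are not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA

# Each training shape: M correspondence points in 3D, flattened to one
# row of the data matrix.
rng = np.random.default_rng(4)
n_shapes, n_points = 40, 256
base = rng.normal(size=(n_points, 3))
shapes = np.stack([(base + 0.05 * rng.normal(size=(n_points, 3))).ravel()
                   for _ in range(n_shapes)])

ssm = PCA(n_components=5).fit(shapes)
mean_shape = ssm.mean_.reshape(n_points, 3)
# Shape variation along mode 1: mean + 2 * sqrt(lambda_1) * v_1.
mode1 = ssm.mean_ + 2 * np.sqrt(ssm.explained_variance_[0]) * ssm.components_[0]
mode1_points = mode1.reshape(n_points, 3)
```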



M.H. Jensen, S. Joshi, S. Sommer. “Discrete-Time Observations of Brownian Motion on Lie Groups and Homogeneous Spaces: Sampling and Metric Estimation,” In Algorithms, Vol. 15, No. 8, 2022.
ISSN: 1999-4893
DOI: 10.3390/a15080290

ABSTRACT

We present schemes for simulating Brownian bridges on complete and connected Lie groups and homogeneous spaces. We use this to construct an estimation scheme for recovering an unknown left- or right-invariant Riemannian metric on the Lie group from samples. We subsequently show how pushing forward the distributions generated by Brownian motions on the group results in distributions on homogeneous spaces that exhibit a non-trivial covariance structure. The pushforward measure gives rise to new non-parametric families of distributions on commonly occurring spaces such as spheres and symmetric positive definite tensors. We extend the estimation scheme to fit these distributions to homogeneous space-valued data. We demonstrate both the simulation schemes and estimation procedures on Lie groups and homogeneous spaces, including SPD(3) = GL+(3)/SO(3) and S^2 = SO(3)/SO(2).
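
As a flavor of the geometric simulation involved, here is a NumPy/SciPy sketch of Brownian motion on SO(3) via exponential-map increments, pushed forward to S^2 by acting on a base point; this illustrates the setting only and is not the paper's bridge-sampling or metric-estimation scheme:

```python
import numpy as np
from scipy.linalg import expm

# Basis of the Lie algebra so(3) (infinitesimal rotations).
E = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def brownian_motion_SO3(n_steps, dt, rng):
    """Simulate Brownian motion on SO(3) by right-multiplying random
    Lie-algebra increments through the exponential map."""
    R = np.eye(3)
    path = [R]
    for _ in range(n_steps):
        xi = rng.normal(scale=np.sqrt(dt), size=3)    # increment in so(3)
        R = R @ expm(np.einsum('i,ijk->jk', xi, E))   # step on the group
        path.append(R)
    return path

path = brownian_motion_SO3(100, 1e-2, np.random.default_rng(5))
# Pushing forward to S^2 = SO(3)/SO(2): act on a base point.
sphere_path = [R @ np.array([0.0, 0.0, 1.0]) for R in path]
```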



X. Jiang, Z. Li, R. Missel, Md. Zaman, B. Zenger, W. W. Good, R. S. MacLeod, J. L. Sapp, L. Wang. “Few-Shot Generation of Personalized Neural Surrogates for Cardiac Simulation via Bayesian Meta-learning,” In Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022, Springer Nature Switzerland, pp. 46-56. 2022.
ISBN: 978-3-031-16452-1
DOI: 10.1007/978-3-031-16452-1_5

ABSTRACT

Clinical adoption of personalized virtual heart simulations faces challenges in model personalization and expensive computation. While an ideal solution is an efficient neural surrogate that at the same time is personalized to an individual subject, the state-of-the-art is either concerned with personalizing an expensive simulation model, or learning an efficient yet generic surrogate. This paper presents a completely new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning (metaPNS). Instead of learning a single neural surrogate, we pursue the process of learning a personalized neural surrogate using a small amount of context data from a subject, in a novel formulation of few-shot generative modeling underpinned by: 1) a set-conditioned neural surrogate for cardiac simulation that, conditioned on subject-specific context data, learns to generate query simulations not included in the context set, and 2) a meta-model of amortized variational inference that learns to condition the neural surrogate via simple feed-forward embedding of context data. At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual, achieving -- for the first time -- personalization and surrogate construction for expensive simulations in one end-to-end learning framework. Synthetic and real-data experiments demonstrated that metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally-optimized cardiac simulation models, at a fraction of computation.