SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

SCI Publications

2023


B. Zhang, H. Manoochehri, M.M. Ho, F. Fooladgar, Y. Chong, B. Knudsen, D. Sirohi, T. Tasdizen. “CLASS-M: Adaptive stain separation-based contrastive learning with pseudo labeling for histopathological image classification,” Subtitled “arXiv:2312.06978v2,” 2023.

ABSTRACT

Histopathological image classification is a critical task in medical image analysis. Because labeled data for model training are expensive to obtain, semi-supervised learning methods have been proposed to reduce the need for extensively labeled datasets. In this work, we propose a model for semi-supervised classification tasks on digital histopathological Hematoxylin and Eosin (H&E) images. We call the new model Contrastive Learning with Adaptive Stain Separation and MixUp (CLASS-M). Our model consists of two main parts: contrastive learning between adaptively stain-separated Hematoxylin and Eosin images, and pseudo-labeling using MixUp. We compare our model with other state-of-the-art models on clear cell renal cell carcinoma (ccRCC) datasets from our institution and The Cancer Genome Atlas Program (TCGA). We demonstrate that our CLASS-M model has the best performance on both datasets. We also analyze the contributions of the different parts of our model.
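
As a rough illustration of the two ingredients named above, the sketch below pairs stain-separated views in an InfoNCE-style contrastive loss and builds MixUp pseudo-labels from confident predictions. All function names, thresholds, and architectural details are assumptions for illustration, not the authors’ implementation.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z_h, z_e, temperature=0.1):
        """InfoNCE-style loss pairing Hematoxylin and Eosin embeddings."""
        z_h = F.normalize(z_h, dim=1)
        z_e = F.normalize(z_e, dim=1)
        logits = z_h @ z_e.t() / temperature      # (B, B) similarity matrix
        targets = torch.arange(z_h.size(0))       # matching pairs on the diagonal
        return F.cross_entropy(logits, targets)

    def mixup_pseudo_label(model, x_unlabeled, alpha=0.75, threshold=0.95):
        """Keep confident pseudo-labels, then MixUp both images and labels."""
        with torch.no_grad():
            probs = F.softmax(model(x_unlabeled), dim=1)
            conf, pseudo = probs.max(dim=1)
            keep = conf > threshold               # confidence filtering
        x = x_unlabeled[keep]
        y = F.one_hot(pseudo[keep], probs.size(1)).float()
        lam = torch.distributions.Beta(alpha, alpha).sample()
        perm = torch.randperm(x.size(0))
        x_mix = lam * x + (1 - lam) * x[perm]     # convex combination of images
        y_mix = lam * y + (1 - lam) * y[perm]     # ...and of soft labels
        return x_mix, y_mix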



N. Zhou, G. Scorzelli, J. Luettgau, R.R. Kancharla, J. Kane, R. Wheeler, B. Croom, B. Newell, V. Pascucci, M. Taufer. “Orchestration of materials science workflows for heterogeneous resources at large scale,” In The International Journal of High Performance Computing Applications, Sage, 2023.

ABSTRACT

In the era of big data, materials science workflows need to handle large-scale data distribution, storage, and computation. Any of these areas can become a performance bottleneck. We present a framework for analyzing internal material structures (e.g., cracks) to mitigate these bottlenecks. We demonstrate the effectiveness of our framework for a workflow performing synchrotron X-ray computed tomography reconstruction and segmentation of a silica-based structure. Our framework provides a cloud-based, cutting-edge solution to challenges such as growing intermediate and output data and heavy resource demands during image reconstruction and segmentation. Specifically, our framework efficiently manages data storage and scales up compute resources on the cloud. The multi-layer software structure of our framework includes three layers. A top layer uses Jupyter notebooks and serves as the user interface. A middle layer uses Ansible for resource deployment and for managing the execution environment. A low layer provides resource management and job scheduling on heterogeneous nodes (i.e., GPU and CPU nodes). At the core of this layer, Kubernetes supports resource management, and Dask enables large-scale job scheduling for heterogeneous resources. The broader impact of our work is four-fold: through our framework, we hide the complexity of the cloud’s software stack from the user, who would otherwise need expertise in cloud technologies; we manage job scheduling efficiently and in a scalable manner; we enable resource elasticity and workflow orchestration at a large scale; and we facilitate moving the study of nonporous structures, which has wide applications in engineering and scientific fields, to the cloud. While we demonstrate the capability of our framework for a specific materials science application, it can be adapted for other applications and domains because of its modular, multi-layer architecture.
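
The division of labor in the lower layers can be pictured with a minimal Dask sketch, assuming a scheduler has already been deployed (e.g., by Kubernetes) and that GPU workers were started with a resource tag such as --resources "GPU=1". The function bodies, scheduler address, and file paths are illustrative, not the framework’s actual API.

    from dask.distributed import Client

    def reconstruct(slice_path):
        """CPU-bound tomographic reconstruction of one slice (stub)."""
        ...

    def segment(volume):
        """GPU-bound segmentation of a reconstructed volume (stub)."""
        ...

    client = Client("tcp://scheduler:8786")       # scheduler address assumed

    # Route reconstruction to any worker, but require a GPU resource for
    # segmentation so it lands only on GPU nodes.
    recon = [client.submit(reconstruct, p) for p in ["slice_000.h5"]]
    masks = [client.submit(segment, r, resources={"GPU": 1}) for r in recon]
    results = client.gather(masks)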


2022


J. Adams, N. Khan, A. Morris, S. Elhabian. “Spatiotemporal Cardiac Statistical Shape Modeling: A Data-Driven Approach,” Subtitled “arXiv preprint arXiv:2209.02736,” 2022.

ABSTRACT

Clinical investigations of anatomy’s structural changes over time could greatly benefit from population-level quantification of shape, or spatiotemporal statistical shape modeling (SSM). Such a tool enables characterizing patient organ cycles or disease progression in relation to a cohort of interest. Constructing shape models requires establishing a quantitative shape representation (e.g., corresponding landmarks). Particle-based shape modeling (PSM) is a data-driven SSM approach that captures population-level shape variations by optimizing landmark placement. However, it assumes cross-sectional study designs and hence has limited statistical power in representing shape changes over time. Existing methods for modeling spatiotemporal or longitudinal shape changes require predefined shape atlases and pre-built shape models that are typically constructed cross-sectionally. This paper proposes a data-driven approach inspired by the PSM method to learn population-level spatiotemporal shape changes directly from shape data. We introduce a novel SSM optimization scheme that produces landmarks that are in correspondence both across the population (inter-subject) and across time series (intra-subject). We apply the proposed method to 4D cardiac data from atrial-fibrillation patients and demonstrate its efficacy in representing the dynamic change of the left atrium. Furthermore, we show that our method outperforms an image-based approach for spatiotemporal SSM with respect to a generative time-series model, the Linear Dynamical System (LDS). An LDS fit using a spatiotemporal shape model optimized via our approach provides better generalization and specificity, indicating that it accurately captures the underlying time dependency.
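
The LDS-based evaluation mentioned at the end can be illustrated with a small sketch: fit a transition matrix to per-time-point shape vectors by least squares and measure the one-step prediction error. The array shapes and random stand-in data are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 25, 64                     # time points, flattened landmark dimension
    X = rng.standard_normal((T, d))   # stand-in for per-time-point shape vectors

    # Least-squares fit of the transition matrix A in x_{t+1} ≈ A x_t.
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)

    # One-step-ahead prediction error as a simple generalization proxy.
    pred = X[:-1] @ A
    print("mean one-step error:", np.linalg.norm(pred - X[1:]) / (T - 1))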



M. Alirezaei, T. Tasdizen. “Adversarially Robust Classification by Conditional Generative Model Inversion,” Subtitled “arXiv preprint arXiv:2201.04733,” 2022.

ABSTRACT

Most adversarial attack defense methods rely on obfuscating gradients. These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks that either do not use the gradient or that approximate and use the corrected gradient. Defenses that do not obfuscate gradients, such as adversarial training, exist, but these approaches generally make assumptions about the attack, such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers, which allows an adversary to find small perturbations in the input space that lead to large changes in the output space. On the other hand, a generative model is typically a low-to-high-dimensional mapping. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, which was shown to generate obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared to naturally trained, feed-forward classifiers.
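
A minimal sketch of the inversion-based classification described above: for each candidate class, optimize a latent code so the conditional generator reproduces the query image, then report the class with the smallest reconstruction loss. The generator G, its call signature, and the optimizer settings are assumptions.

    import torch

    def classify_by_inversion(G, x, num_classes, latent_dim, steps=200, lr=0.05):
        """Return the class whose conditional generation best matches x."""
        losses = []
        for y in range(num_classes):
            z = torch.zeros(1, latent_dim, requires_grad=True)
            label = torch.tensor([y])
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):                 # invert G for this class
                opt.zero_grad()
                loss = ((G(z, label) - x) ** 2).mean()
                loss.backward()
                opt.step()
            losses.append(loss.item())             # final reconstruction error
        return int(torch.tensor(losses).argmin())  # closest-generating class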



M. Alirezaei, Q.C. Nguyen, R. Whitaker, T. Tasdizen. “Multi-Task Classification for Improved Health Outcome Prediction Based on Environmental Indicators,” In IEEE Access, 2022.
DOI: 10.1109/ACCESS.2023.3295777

ABSTRACT

The influence of the neighborhood environment on health outcomes has been widely recognized in various studies. Google Street View (GSV) images offer a unique and valuable tool for evaluating neighborhood environments on a large scale. By annotating the images with labels indicating the presence or absence of certain neighborhood features, we can develop classifiers that automatically analyze and evaluate the environment. However, labeling GSV images on a large scale is a time-consuming and labor-intensive task. Considering these challenges, we propose using a multi-task classifier to improve training a classifier with limited supervised GSV data. Our multi-task classifier utilizes readily available, inexpensive online images collected from Flickr as a related classification task. The hypothesis is that a classifier trained on multiple related tasks is less likely to overfit to small amounts of training data and generalizes better to unseen data. We leverage the power of multiple related tasks to improve the classifier’s overall performance and generalization capability. Here we show that, with the proposed learning paradigm, predicted labels for GSV test images are more accurate. Across different environment indicators, the accuracy, F1 score, and balanced accuracy increase by up to 6% in the multi-task learning framework compared to its single-task counterpart. The enhanced accuracy of the predicted labels obtained through the multi-task classifier contributes to a more reliable and precise regression analysis determining the correlation between predicted built environment indicators and health outcomes. The R² values calculated for different health outcomes improve by up to 4% using indicators detected with multi-task learning.
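
The multi-task setup can be sketched as a shared backbone with one classification head per task (GSV indicators and the auxiliary Flickr task), trained jointly so the small GSV set benefits from the related task. The layer sizes below are placeholders, not the paper’s architecture.

    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, n_gsv_classes, n_flickr_classes):
            super().__init__()
            self.backbone = nn.Sequential(        # shared representation
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.gsv_head = nn.Linear(32, n_gsv_classes)
            self.flickr_head = nn.Linear(32, n_flickr_classes)

        def forward(self, x, task="gsv"):
            feats = self.backbone(x)
            head = self.gsv_head if task == "gsv" else self.flickr_head
            return head(feats)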



E.E. Anstadt, W. Tao, E. Guo, L. Dvoracek, M.K. Bruce, P.J. Grosse, L. Wang, L. Kavan, R. Whitaker, J.A. Goldstein. “Quantifying the Severity of Metopic Craniosynostosis Using Unsupervised Machine Learning,” In Plastic and Reconstructive Surgery, November, 2022.

ABSTRACT

Background: 

Quantifying the severity of head shape deformity and establishing a threshold for operative intervention remains challenging in patients with Metopic Craniosynostosis (MCS). This study combines 3D skull shape analysis with an unsupervised machine-learning algorithm to generate a quantitative shape severity score (CMD) and provide an operative threshold score.

Methods: 

Head computed tomography (CT) scans from subjects with MCS and normal controls (age 5-15 months) were used for objective 3D shape analysis using ShapeWorks software and in a survey for craniofacial surgeons to rate head-shape deformity and report whether they would offer surgical correction based on head shape alone. An unsupervised machine-learning algorithm was developed to quantify the degree of shape abnormality of MCS skulls compared to controls.

Results: 

124 CTs were used to develop the model; 50 (24% MCS, 76% controls) were rated by 36 craniofacial surgeons, with an average of 20.8 ratings per skull. The interrater reliability was high (ICC=0.988). The algorithm performed accurately and correlated closely with the surgeons’ assigned severity ratings (Spearman’s correlation coefficient r=0.817). The median CMD for affected skulls was 155.0 (IQR 136.4-194.6, maximum 231.3). Skulls with ratings ≥150.2 were highly likely to be offered surgery by the experts in this study.

Conclusions: 

This study describes a novel metric to quantify the head shape deformity associated with metopic craniosynostosis and contextualizes the results using clinical assessments of head shapes by craniofacial experts. This metric may be useful in supporting clinical decision making around operative intervention as well as in describing outcomes and comparing patient populations across centers.
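
The unsupervised algorithm itself is not specified in this abstract, so the following is only a generic sketch of one way to score shape abnormality against a control population: model the control skulls with PCA and take the Mahalanobis distance in the reduced space. All names and the choice of PCA are assumptions.

    import numpy as np

    def severity_score(controls, subject, n_modes=10):
        """controls: (N, d) flattened control shapes; subject: (d,) shape vector."""
        mean = controls.mean(axis=0)
        _, S, Vt = np.linalg.svd(controls - mean, full_matrices=False)
        var = S[:n_modes] ** 2 / (len(controls) - 1)      # variance per PCA mode
        coeffs = (subject - mean) @ Vt[:n_modes].T        # project onto modes
        return float(np.sqrt(np.sum(coeffs ** 2 / var)))  # Mahalanobis distance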



A. Arzani, K.W. Cassel, R.M. D'Souza. “Theory-guided physics-informed neural networks for boundary layer problems with singular perturbation,” In Journal of Computational Physics, 2022.
DOI: 10.1016/j.jcp.2022.111768

ABSTRACT

Physics-informed neural networks (PINNs) are a recent trend in scientific machine learning research and the modeling of differential equations. Despite progress in PINN research, large gradients and highly nonlinear patterns remain challenging to model. Thin boundary layer problems are prominent examples of large gradients that commonly arise in transport problems. In this study, a boundary-layer PINN (BL-PINN) is proposed to enable the solution of thin boundary layers by treating them as a singular perturbation problem. Inspired by classical perturbation theory and asymptotic expansions, BL-PINN is designed to replicate the procedure of singular perturbation theory. Namely, different parallel PINN networks are defined to represent different orders of approximation to the boundary layer problem in the inner and outer regions. On different benchmark problems (forward and inverse), BL-PINN shows superior performance compared to the traditional PINN approach and produces accurate results where the classical PINN approach could not provide meaningful solutions. BL-PINN also demonstrates significantly better results than other extensions of PINN, such as the extended PINN (XPINN) approach. The natural incorporation of the perturbation parameter in BL-PINN makes it possible to evaluate parametric solutions without retraining. BL-PINN demonstrates an example of how classical mathematical theory can be used to guide the design of deep neural networks for solving challenging problems.
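
For context, the classical matched-asymptotics structure that BL-PINN is described as replicating has the following standard textbook form (not quoted from the paper itself), with one parallel network per order k:

    \[
      u_{\text{out}}(x) \sim \sum_{k=0}^{K} \epsilon^{k}\, u_k(x), \qquad
      u_{\text{in}}(\xi) \sim \sum_{k=0}^{K} \epsilon^{k}\, U_k(\xi), \qquad
      \xi = \frac{x}{\epsilon},
    \]
    \[
      \lim_{x \to 0} u_{\text{out}}(x) \;=\; \lim_{\xi \to \infty} u_{\text{in}}(\xi)
      \quad \text{(matching in the overlap region).}
    \]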



T. M. Athawale, D. Maljovec, L. Yan, C. R. Johnson, V. Pascucci, B. Wang. “Uncertainty Visualization of 2D Morse Complex Ensembles Using Statistical Summary Maps,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 28, No. 4, pp. 1955-1966. April, 2022.
ISSN: 1077-2626
DOI: 10.1109/TVCG.2020.3022359

ABSTRACT

Morse complexes are gradient-based topological descriptors with close connections to Morse theory. They are widely applicable in scientific visualization as they serve as important abstractions for gaining insights into the topology of scalar fields. Data uncertainty inherent to scalar fields due to randomness in their acquisition and processing, however, limits our understanding of Morse complexes as structural abstractions. We, therefore, explore uncertainty visualization of an ensemble of 2D Morse complexes that arises from scalar fields coupled with data uncertainty. We propose several statistical summary maps as new entities for quantifying structural variations and visualizing positional uncertainties of Morse complexes in ensembles. Specifically, we introduce three types of statistical summary maps – the probabilistic map, the significance map, and the survival map – to characterize the uncertain behaviors of gradient flows. We demonstrate the utility of our proposed approach using wind, flow, and ocean eddy simulation datasets.
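
The probabilistic map can be pictured with a small sketch: label every grid point of each ensemble member by the local maximum that its steepest-ascent path reaches, then record how often each point keeps its most frequent label across the ensemble. The discrete ascent rule below is a simplification of a true Morse complex computation.

    import numpy as np

    def ascend_labels(field):
        """Label each cell of a 2D field by the local maximum it ascends to."""
        h, w = field.shape
        padded = np.pad(field, 1, constant_values=-np.inf)
        labels = np.empty((h, w), dtype=int)

        def step(i, j):               # steepest-ascent neighbor (or itself)
            window = padded[i:i + 3, j:j + 3]
            di, dj = np.unravel_index(np.argmax(window), (3, 3))
            return i + di - 1, j + dj - 1

        for i in range(h):
            for j in range(w):
                ci, cj = i, j
                while step(ci, cj) != (ci, cj):   # follow ascent to a maximum
                    ci, cj = step(ci, cj)
                labels[i, j] = ci * w + cj        # flat index of the maximum
        return labels

    ensemble = [np.random.default_rng(s).random((32, 32)) for s in range(20)]
    all_labels = np.stack([ascend_labels(f) for f in ensemble])
    modal = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, all_labels)
    prob_map = (all_labels == modal).mean(axis=0)  # agreement with modal label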



J. Baker, E. Cherkaev, A. Narayan, B. Wang. “Learning POD of Complex Dynamics Using Heavy-ball Neural ODEs,” Subtitled “arXiv:2202.12373,” 2022.

ABSTRACT

Proper orthogonal decomposition (POD) enables substantial reduction of the order of complex dynamical systems while maintaining a high degree of accuracy in modeling the underlying dynamics. Advances in machine learning algorithms enable learning POD-based dynamics from data and making accurate and fast predictions of dynamical systems. In this paper, we leverage the recently proposed heavy-ball neural ODEs (HBNODEs) [Xia et al. NeurIPS, 2021] for learning data-driven reduced-order models (ROMs) in the POD context, in particular for learning the dynamics of time-varying coefficients generated by POD analysis of training snapshots obtained by solving full-order models. HBNODE enjoys several practical advantages for learning POD-based ROMs with theoretical guarantees, including 1) HBNODE can learn long-term dependencies effectively from sequential observations, and 2) HBNODE is computationally efficient in both training and testing. We compare HBNODE with other popular ROMs on several complex dynamical systems, including the von Kármán Street flow, the Kurganov-Petrova-Popov equation, and the one-dimensional Euler equations for fluids modeling.
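
The heavy-ball construction augments the ODE state with a momentum variable m, so the learned vector field f drives dx/dt = m and dm/dt = -γm + f(x). Below is a minimal sketch with a fixed-step Euler integrator standing in for a proper ODE solver; the network width, damping γ, and step counts are illustrative.

    import torch
    import torch.nn as nn

    class HeavyBallODE(nn.Module):
        def __init__(self, dim, gamma=0.5):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                                   nn.Linear(64, dim))
            self.gamma = gamma

        def forward(self, x0, t_steps=50, dt=0.02):
            x, m = x0, torch.zeros_like(x0)
            for _ in range(t_steps):              # explicit Euler integration
                x = x + dt * m
                m = m + dt * (-self.gamma * m + self.f(x))
            return x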



J. Baker, H. Xia, Y. Wang, E. Cherkaev, A. Narayan, L. Chen, J. Xin, A. L. Bertozzi, S. J. Osher, B. Wang. “Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs,” Subtitled “arXiv preprint arXiv:2204.08621,” 2022.

ABSTRACT

Learning neural ODEs often requires solving very stiff ODE systems, primarily using explicit adaptive step size ODE solvers. These solvers are computationally expensive, requiring the use of tiny step sizes for numerical stability and accuracy guarantees. This paper considers learning neural ODEs using implicit ODE solvers of different orders leveraging proximal operators. The proximal implicit solver consists of inner-outer iterations: the inner iterations approximate each implicit update step using a fast optimization algorithm, and the outer iterations solve the ODE system over time. The proximal implicit ODE solver guarantees superiority over explicit solvers in numerical stability and computational efficiency. We validate the advantages of proximal implicit solvers over existing popular neural ODE solvers on various challenging benchmark tasks, including learning continuous-depth graph neural networks and continuous normalizing flows.
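
The inner-outer structure can be sketched for implicit Euler: each outer step solves x_{n+1} = x_n + h f(x_{n+1}) by running inner gradient iterations on the equivalent proximal objective φ(y) = ½‖y − x_n‖² − h F(y) with f = ∇F. The stiff linear test problem and all step counts below are illustrative, not the paper’s solver.

    import numpy as np

    A = np.array([[-100.0, 0.0], [0.0, -1.0]])    # stiff linear vector field
    f = lambda x: A @ x

    def proximal_implicit_euler(x0, h=0.05, outer=100, inner=50, lr=0.005):
        x = x0
        for _ in range(outer):                    # outer: march the ODE in time
            y = x.copy()
            for _ in range(inner):                # inner: solve the implicit step
                grad = (y - x) - h * f(y)         # gradient of the proximal objective
                y = y - lr * grad
            x = y
        return x

    print(proximal_implicit_euler(np.array([1.0, 1.0])))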



W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders. “Estimating and using information in inverse problems,” Subtitled “arXiv:2208.09095,” 2022.

ABSTRACT

For inverse problems one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.

Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate its usefulness in one of these areas – how to choose the discretization mesh for the function to be reconstructed – using numerical experiments.
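
One way to formalize such a quantity, offered here only as an assumption since the paper’s exact definition may differ: with a Gaussian posterior over the coefficients of a basis expansion of the unknown function, the pointwise posterior variance yields an information density by reciprocal:

    \[
      \operatorname{Var}[f(x)] = \sum_{i,j} \varphi_i(x)\, C^{\mathrm{post}}_{ij}\, \varphi_j(x),
      \qquad
      I(x) \propto \frac{1}{\operatorname{Var}[f(x)]},
    \]

so regions of small posterior variance (e.g., near detectors or along ray paths) carry high information.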



J. A. Bergquist, J. Coll-Font, B. Zenger, L. C. Rupp, W. W. Good, D. H. Brooks, R. S. MacLeod. “Reconstruction of cardiac position using body surface potentials,” In Computers in Biology and Medicine, Vol. 142, pp. 105174. 2022.
DOI: 10.1016/j.compbiomed.2021.105174

ABSTRACT

Electrocardiographic imaging (ECGI) is a noninvasive technique to assess the bioelectric activity of the heart which has been applied to aid in the clinical diagnosis and management of cardiac dysfunction. ECGI is built on mathematical models that take into account several patient-specific factors, including the position of the heart within the torso. Errors in the localization of the heart within the torso, as might arise due to natural changes in heart position from respiration or changes in body position, contribute to errors in ECGI reconstructions of the cardiac activity, thereby reducing the clinical utility of ECGI. In this study, we present a novel method for the reconstruction of cardiac geometry utilizing noninvasively acquired body surface potential measurements. Our geometric correction method simultaneously estimates the cardiac position over a series of heartbeats by leveraging an iterative approach which alternates between estimating the cardiac bioelectric source across all heartbeats and then estimating cardiac positions for each heartbeat. We demonstrate that our geometric correction method is able to reduce geometric error and improve ECGI accuracy in a wide range of testing scenarios. We examine the performance of our geometric correction method using different activation sequences, ranges of cardiac motion, and body surface electrode configurations. We find that after geometric correction the resulting ECGI solution accuracy is improved and the variability of the ECGI solutions between heartbeats is substantially reduced.
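
The alternating scheme can be sketched as a skeleton: estimate the cardiac source for every beat given the current geometry, then re-estimate each beat’s heart position by minimizing the torso-potential residual. The Tikhonov solver, the candidate-position search, and all names below are assumptions, not the authors’ implementation.

    import numpy as np

    def estimate_source(A, torso_potentials, lam=1e-3):
        """Tikhonov-regularized inverse solution for one forward matrix A."""
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                               A.T @ torso_potentials)

    def candidate_positions(p, delta=1.0):
        """Axis-aligned perturbations of the current position (illustrative)."""
        offsets = np.concatenate([np.eye(3) * delta, -np.eye(3) * delta,
                                  np.zeros((1, 3))])
        return [p + o for o in offsets]

    def correct_geometry(torso_beats, forward_matrix_at, positions, n_iters=10):
        for _ in range(n_iters):
            # Step 1: estimate sources for all beats at the current positions.
            sources = [estimate_source(forward_matrix_at(p), b)
                       for p, b in zip(positions, torso_beats)]
            # Step 2: re-estimate each beat's position by residual minimization.
            positions = [min(candidate_positions(p),
                             key=lambda q: np.linalg.norm(
                                 forward_matrix_at(q) @ s - b))
                         for p, s, b in zip(positions, sources, torso_beats)]
        return positions, sources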



M. Berzins. “Energy conservation and accuracy of some MPM formulations,” In Computational Particle Mechanics, 2022.
DOI: 10.1007/s40571-021-00457-3

ABSTRACT

The success of the Material Point Method (MPM) in solving many challenging problems nevertheless raises some open questions regarding the fundamental properties of the method, such as time integration accuracy and energy conservation. Traditional MPM time integration methods are often based upon the symplectic Euler method or staggered central differences. This raises the question of how best to ensure energy conservation in explicit time integration for MPM. Two approaches are used here: one is to extend the symplectic Euler method (Cromer Euler) to provide better energy conservation, and the second is to use a potentially more accurate symplectic method, namely the widely used Stormer-Verlet method. The Stormer-Verlet method is shown to have locally third-order accuracy of energy conservation in time, in contrast to the second-order accuracy in energy conservation of the symplectic Euler methods that are used in many MPM calculations. It is shown that there is an extension to the symplectic Euler stress-last method that provides energy conservation comparable with the Stormer-Verlet method. This extension is referred to as TRGIMP and also has third-order accuracy in energy conservation. When the interactions between space and time errors are studied, it is seen that spatial errors may dominate in computed quantities such as displacement and velocity. This connection between the local errors in space and time is made explicit mathematically and explains the observed results that displacement and velocity errors are very similar for both methods. The observed and theoretically predicted third-order energy conservation accuracy and computational costs are demonstrated on a standard MPM test example.
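
For reference, the two explicit integrators compared here, in their standard forms with position x, velocity v, acceleration a(x), and step h (textbook statements, not equations quoted from the paper):

    \[
      \text{Symplectic (Cromer) Euler:}\qquad
      v_{n+1} = v_n + h\, a(x_n), \qquad
      x_{n+1} = x_n + h\, v_{n+1}.
    \]
    \[
      \text{Stormer-Verlet:}\qquad
      v_{n+1/2} = v_n + \tfrac{h}{2}\, a(x_n), \qquad
      x_{n+1} = x_n + h\, v_{n+1/2}, \qquad
      v_{n+1} = v_{n+1/2} + \tfrac{h}{2}\, a(x_{n+1}).
    \]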



M. Berzins. “Computational Error Estimation for The Material Point Method,” In Computational Particle Mechanics, Springer, 2022.
DOI: 10.1007/s40571-022-00530-5

ABSTRACT

A common feature of many methods in computational mechanics is that there is often a way of estimating the error in the computed solution. The situation for computational mechanics codes based upon the Material Point Method is very different, in that there has been comparatively little work on computable error estimates for these methods. This work introduces such an approach for the Material Point Method. Although it has been observed that spatial errors may dominate temporal ones at stable time steps, recent work has made the sources and forms of the different MPM errors more precise. There is then a need for computable estimates of these different errors in the Material Point Method. Estimates of the different spatial errors in the Material Point Method are constructed based upon nodal derivatives of the different physical variables in MPM. These derivatives are estimated using standard difference approximations calculated on the background mesh. The use of these estimates of the spatial error makes it possible to measure the growth of errors over time. A number of computational experiments illustrate the performance of the computed error estimates. As the key feature of the approach is the calculation of derivatives on the regularly spaced background mesh, the extension to calculating derivatives, and hence to error estimates for higher-dimensional problems, is clearly possible.
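
The core ingredient, nodal derivatives estimated with standard difference approximations on the regular background mesh, fits in a few lines; the grid spacings and the interior-only stencil below are illustrative assumptions.

    import numpy as np

    def nodal_gradient(u, dx=1.0, dy=1.0):
        """Central-difference gradient of a 2D nodal field u on a regular mesh."""
        du_dx = np.zeros_like(u)
        du_dy = np.zeros_like(u)
        du_dx[1:-1, :] = (u[2:, :] - u[:-2, :]) / (2 * dx)   # interior nodes only
        du_dy[:, 1:-1] = (u[:, 2:] - u[:, :-2]) / (2 * dy)
        return du_dx, du_dy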



J.A. Bergquist, L.C. Rupp, A. Busatto, B. Orkild, B. Zenger, W. Good, J. Coll-Font, A. Narayan, J. Tate, D. Brooks, R.S. MacLeod. “Heart Position Uncertainty Quantification in the Inverse Problem of ECGI,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Electrocardiographic imaging (ECGI) is a clinical and research tool for noninvasive diagnosis of cardiac electrical dysfunction. The position of the heart within the torso is both an input to and a common source of error in ECGI. Many studies have sought to improve cardiac localization accuracy; however, few have quantitatively examined the effects of uncertainty in the position of the heart within the torso. Recently developed uncertainty quantification (UQ) tools enable the robust application of UQ to ECGI reconstructions. In this study, we developed an ECGI formulation which, for the first time, directly incorporates uncertainty in the heart position. The result is an ECGI solution that is robust to variation in heart position. Using data from two Langendorff experimental preparations, each with 120 heartbeats distributed across three activation sequences, we found that as heart position uncertainty increased above ±10 mm, the quality of the ECGI solution degraded. However, even at large heart position uncertainty (±40 mm), our novel UQ-ECGI formulation produced reasonable solutions (root mean squared error < 1 mV, spatial correlation > 0.6, temporal correlation > 0.75).



J.D. Blum, J. Beiriger, C. Kalmar, R.A. Avery, S. Lang, D.F. Villavisanis, L. Cheung, D.Y. Cho, W. Tao, R. Whitaker, S.P. Bartlett, J.A. Taylor, J.A. Goldstein, J.W. Swanson. “Relating Metopic Craniosynostosis Severity to Intracranial Pressure,” In The Journal of Craniofacial Surgery, 2022.
DOI: 10.1097/SCS.0000000000008748

ABSTRACT

Purpose:

A subset of patients with metopic craniosynostosis are noted to have elevated intracranial pressure (ICP). However, it is not known whether the propensity for elevated ICP is influenced by the severity of metopic cranial dysmorphology.

Methods:

Children with nonsyndromic single-suture metopic synostosis were prospectively enrolled and underwent optical coherence tomography to measure optic nerve head morphology. Preoperative head computed tomography scans were assessed for endocranial bifrontal angle as well as scaled metopic synostosis severity score (MSS) and cranial morphology deviation score determined by CranioRate, an automated severity classifier.
Results:

Forty-seven subjects were enrolled between 2014 and 2019, at an average age of 8.5 months at preoperative computed tomography and 11.8 months at the index procedure. Fourteen patients (29.7%) had elevated optical coherence tomography parameters suggestive of elevated ICP at the time of surgery. Ten patients (21.3%) had been diagnosed with developmental delay, eight of whom demonstrated elevated ICP. There were no significant associations between measures of metopic severity and ICP. Metopic synostosis severity score and endocranial bifrontal angle were inversely correlated, as expected (r=−0.545, P<0.001). A negative correlation was noted between MSS and formally diagnosed developmental delay (r=−0.387, P=0.008). Likewise, negative correlations between age at procedure and both MSS and cranial morphology deviation were observed (r=−0.573, P<0.001 and r=−0.312, P=0.025, respectively).
Conclusions:

Increased metopic severity was not associated with elevated ICP at the time of surgery. Patients who underwent later surgical correction showed milder phenotypic dysmorphology with an increased incidence of developmental delay.



P.T. Bremer, G. Tourassi, W. Bethel, K. Gaither, V. Pascucci, W. Xu. “Report for the ASCR Workshop on Visualization for Scientific Discovery, Decision-Making, and Communication,” DOE, 2022.



M. K. Bruce, W. Tao, J. Beiriger, C. Christensen, M. J. Pfaff, R. Whitaker, J. A. Goldstein. “3D Photography to Quantify the Severity of Metopic Craniosynostosis,” In The Cleft Palate-Craniofacial Journal, SAGE Publications, 2022.

ABSTRACT

Objective

This study aims to determine the utility of 3D photography for evaluating the severity of metopic craniosynostosis (MCS) using a validated, supervised machine learning (ML) algorithm.

Design/Setting/Patients

This single-center retrospective cohort study included patients who were evaluated at our tertiary care center for MCS from 2016 to 2020 and underwent both head CT and 3D photography within a 2-month period.
Main Outcome Measures

The analysis method builds on our previously established ML algorithm for evaluating MCS severity using skull shape from CT scans. In this study, we regress the model to analyze 3D photographs and correlate the severity scores from both imaging modalities.
Results

14 patients met inclusion criteria, 64.3% male (n = 9). The mean age in years at 3D photography and CT imaging was 0.97 and 0.94, respectively. Ten patient images were obtained preoperatively, and 4 patients did not require surgery. The severity prediction of the ML algorithm correlates closely when comparing the 3D photographs to CT bone data (Spearman correlation coefficient [SCC] r = 0.75; Pearson correlation coefficient [PCC] r = 0.82).

Conclusion

The results of this study show that 3D photography is a valid alternative to CT for evaluation of head shape in MCS. Its use will provide an objective, quantifiable means of assessing outcomes in a rigorous manner while decreasing radiation exposure in this patient population.



A. Busatto, J.A. Bergquist, L.C. Rupp, B. Zenger, R.S. MacLeod. “Unexpected Errors in the Electrocardiographic Forward Problem,” In Computing in Cardiology, Vol. 49, 2022.

ABSTRACT

Previous studies have compared recorded torso potentials with electrocardiographic forward solutions from a pericardial cage. In this study, we introduce new comparisons of the forward solutions from the sock and cage with each other and with respect to the measured potentials on the torso. The forward problem of electrocardiographic imaging is expected to achieve high levels of accuracy since it is mathematically well posed. However, unexpectedly high residual errors remain between the computed and measured torso signals in experiments. A possible source of these errors is the limited spatial coverage of the cardiac sources in most experiments; most capture potentials only from the ventricles. To resolve the relationship between spatial coverage and the accuracy of the forward simulations, we combined two methods of capturing cardiac potentials using a 240-electrode sock and a 256-electrode cage, both surrounding a heart suspended in a 192-electrode torso tank. We analyzed beats from three pacing sites and calculated the RMSE, spatial correlation, and temporal correlation. We found that the forward solutions using the sock as the cardiac source were less accurate than those obtained from the cage. In this study, we explore the differences in forward solution accuracy between the sock and the cage and suggest some possible explanations for these differences.
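
For concreteness, the three comparison metrics named above can be written in one common formulation (conventions vary, so treat these definitions as assumptions), with each signal stored as a (leads, time) array:

    import numpy as np

    def rmse(a, b):
        """Root mean squared error over all leads and time points."""
        return float(np.sqrt(np.mean((a - b) ** 2)))

    def spatial_correlation(a, b):
        """Mean Pearson correlation across leads at each time instant."""
        return float(np.mean([np.corrcoef(a[:, t], b[:, t])[0, 1]
                              for t in range(a.shape[1])]))

    def temporal_correlation(a, b):
        """Mean Pearson correlation over time for each lead."""
        return float(np.mean([np.corrcoef(a[i], b[i])[0, 1]
                              for i in range(a.shape[0])]))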



N. Cheng, O.A. Malik, Y. Xu, S. Becker, A. Doostan, A. Narayan. “Quadrature Sampling of Parametric Models with Bi-fidelity Boosting,” Subtitled “arXiv:2209.05705v1,” 2022.

ABSTRACT

Least squares regression is a ubiquitous tool for building emulators (a.k.a. surrogate models) of problems across science and engineering for purposes such as design space exploration and uncertainty quantification. When the regression data are generated using an experimental design process (e.g., a quadrature grid) involving computationally expensive models, or when the data size is large, sketching techniques have shown promise to reduce the cost of the construction of the regression model while ensuring accuracy comparable to that of the full data. However, random sketching strategies, such as those based on leverage scores, lead to regression errors that are random and may exhibit large variability. To mitigate this issue, we present a novel boosting approach that leverages cheaper, lower-fidelity data of the problem at hand to identify the best sketch among a set of candidate sketches. This in turn specifies the sketch of the intended high-fidelity model and the associated data. We provide theoretical analyses of this bi-fidelity boosting (BFB) approach and discuss the conditions the low- and high-fidelity data must satisfy for successful boosting. In doing so, we derive a bound on the residual norm of the BFB sketched solution, relating it to its ideal, but computationally expensive, high-fidelity boosted counterpart. Empirical results on both manufactured and PDE data corroborate the theoretical analyses and illustrate the efficacy of the BFB solution in reducing the regression error compared to the non-boosted solution.
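
A minimal sketch of the boosting loop as described: draw several candidate row sketches, score each using only the cheap low-fidelity data, and spend the expensive high-fidelity evaluations on the winner alone. All sizes, noise levels, and the uniform row sampling are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 2000, 20, 100                  # full data size, features, sketch size
    Phi = rng.standard_normal((n, d))        # design matrix (e.g., quadrature basis)
    w = rng.standard_normal(d)
    y_lo = Phi @ w + 0.05 * rng.standard_normal(n)   # cheap low-fidelity data
    y_hi = Phi @ w + 0.01 * rng.standard_normal(n)   # expensive high-fidelity data

    def lo_fi_residual(rows):
        """Fit on the sketched LF rows, score the fit on all LF data."""
        c, *_ = np.linalg.lstsq(Phi[rows], y_lo[rows], rcond=None)
        return np.linalg.norm(Phi @ c - y_lo)

    # Boosting: choose the best of several candidate sketches with LF data only...
    candidates = [rng.choice(n, size=k, replace=False) for _ in range(10)]
    best = min(candidates, key=lo_fi_residual)
    # ...then solve the high-fidelity regression on that sketch alone.
    c_hi, *_ = np.linalg.lstsq(Phi[best], y_hi[best], rcond=None)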