SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

SCI Publications

2018


Y. Yu, R.M. Kirby, G.E. Karniadakis. “Spectral Element and hp Methods,” In Encyclopedia of Computational Mechanics Second Edition, John Wiley & Sons, Ltd, pp. 1--43. 2018.

ABSTRACT

Spectral/hp element methods provide high-order discretization, which is essential in the long-time integration of advection–diffusion systems and for capturing dynamic instabilities in solids. In this chapter, we review the main formulations for simulations of incompressible and compressible viscous flows as well as for solid mechanics and present several examples with some emphasis on fluid–structure interactions and interfaces. The first generation of (nodal) spectral elements was limited to relatively simple geometries and smooth solutions. However, the new generation of spectral/hp elements, consisting of both nodal and modal forms, can handle very complex geometries using unstructured grids and can capture strong shocks by employing discontinuous Galerkin methods. New flexible formulations allow simulations of multiphysics problems including extremely complex geometries and multiphase flows. Several implementation strategies have also been developed on the basis of multilevel parallel algorithms that allow dynamic p-refinement at constant wall-clock time. After three decades of intense development, spectral element and hp methods are mature and efficient enough to be used effectively in applications of industrial complexity. They provide the capabilities that standard finite element and finite volume methods do, but, in addition, they exhibit high-order accuracy and error control.
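
For orientation only (a standard construction from the spectral/hp literature, not an excerpt from the chapter), a one-dimensional modal spectral/hp expansion on an element Ω^e with local coordinate ξ ∈ [−1, 1] can be written as

\[
  u^{\delta}(x)\big|_{\Omega^e} \;=\; \sum_{p=0}^{P} \hat{u}^{\,e}_{p}\, \phi_{p}\big(\xi(x)\big),
  \qquad
  \phi_{0} = \tfrac{1-\xi}{2}, \quad
  \phi_{P} = \tfrac{1+\xi}{2}, \quad
  \phi_{p} = \tfrac{1-\xi}{2}\,\tfrac{1+\xi}{2}\, P^{1,1}_{p-1}(\xi) \;\; (0 < p < P),
\]

where the P^{1,1}_{p-1} are Jacobi polynomials. Raising P within fixed elements is p-refinement; subdividing the elements themselves is h-refinement.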



Y.Y. Yu, S.Y. Elhabian, R.T. Whitaker. “Clustering With Pairwise Relationships: A Generative Approach,” In CoRR, 2018.

ABSTRACT

Semi-supervised learning (SSL) has become important in current data analysis applications, where the amount of unlabeled data is growing exponentially and user input remains limited by logistics and expense. Constrained clustering, as a subclass of SSL, makes use of user input in the form of relationships between data points (e.g., pairs of data points belonging to the same class or different classes) and can remarkably improve the performance of unsupervised clustering by reflecting user-defined knowledge of the relationships between particular data points. Existing algorithms incorporate such user input heuristically, as either hard constraints or soft penalties that are separate from any generative or statistical aspect of the clustering model; this results in formulations that are suboptimal and not sufficiently general. In this paper, we propose a principled, generative approach to probabilistically model, without ad hoc penalties, the joint distribution given by user-defined pairwise relations. The proposed model accounts for general underlying distributions without assuming a specific form and relies on expectation-maximization for model fitting. For distributions in a standard form, the proposed approach results in a closed-form solution for the updated parameters.
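
As a loose illustration of the generative treatment of pairwise relations (a minimal sketch under our own assumptions, not the authors' formulation), a must-link pair in a Gaussian mixture can be given a joint responsibility over components in the E-step instead of being handled through an ad hoc penalty:

    import numpy as np
    from scipy.stats import multivariate_normal

    def must_link_responsibility(x_i, x_j, weights, means, covs):
        """Joint responsibility r_k = p(z_i = z_j = k | x_i, x_j) for a
        must-link pair under a Gaussian mixture (illustrative only; the
        paper's model is not restricted to a fixed parametric form)."""
        r = np.array([
            weights[k]
            * multivariate_normal.pdf(x_i, mean=means[k], cov=covs[k])
            * multivariate_normal.pdf(x_j, mean=means[k], cov=covs[k])
            for k in range(len(weights))
        ])
        return r / r.sum()  # feeds the usual M-step updates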



V. Zala, V. Shankar, S.P. Sastry, R.M. Kirby. “Curvilinear Mesh Adaptation Using Radial Basis Function Interpolation and Smoothing,” In Journal of Scientific Computing, Springer Nature, pp. 1--22. April, 2018.
DOI: 10.1007/s10915-018-0711-0

ABSTRACT

We present a new iterative technique based on radial basis function (RBF) interpolation and smoothing for the generation and smoothing of curvilinear meshes from straight-sided or other curvilinear meshes. Our technique approximates the coordinate deformation maps in both the interior and boundary of the curvilinear output mesh by using only scattered nodes on the boundary of the input mesh as data sites in an interpolation problem. Thanks to a new iterative algorithm based on modifying the RBF shape parameter, our technique produces high-quality meshes in the deformed domain even when the deformation maps are singular. Due to the use of RBF interpolation, our technique is applicable to both 2D and 3D curvilinear mesh generation without significant modification.
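
A minimal sketch of the boundary-driven interpolation step, using SciPy's RBFInterpolator under our own assumptions (the helper name and parameters below are illustrative, and the iterative shape-parameter modification is not reproduced):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def deform_interior(boundary_in, boundary_out, interior_in, epsilon=1.0):
        """Approximate the coordinate deformation map from boundary nodes only.

        boundary_in  : (n, d) boundary nodes of the input (e.g. straight-sided) mesh
        boundary_out : (n, d) corresponding nodes on the curved target boundary
        interior_in  : (m, d) interior nodes of the input mesh, d = 2 or 3
        """
        # Interpolate the boundary displacement field with a Gaussian RBF;
        # epsilon is the shape parameter an iterative scheme would adjust.
        rbf = RBFInterpolator(boundary_in, boundary_out - boundary_in,
                              kernel='gaussian', epsilon=epsilon)
        return interior_in + rbf(interior_in)  # deformed interior nodes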



J.L. Zitnay, S.P. Reese, G. Tran, N. Farhang, R.D. Bowles, J.A. Weiss. “Fabrication of dense anisotropic collagen scaffolds using biaxial compression,” In Acta Biomaterialia, Vol. 65, Elsevier BV, pp. 76--87. Jan, 2018.
DOI: 10.1016/j.actbio.2017.11.017

ABSTRACT

We developed a new method to manufacture dense, aligned, and porous collagen scaffolds using biaxial plastic compression of type I collagen gels. Using a novel compression apparatus that constricts like an iris diaphragm, low density collagen gels were compressed to yield a permanently densified, highly aligned collagen material. Micro-porosity scaffolds were created using hydrophilic elastomer porogens that can be selectively removed following biaxial compression, with porosity modulated by using different porogen concentrations. The resulting scaffolds exhibit collagen densities that are similar to native connective tissues (∼10% collagen by weight), pronounced collagen alignment across multiple length scales, and an interconnected network of pores, making them highly relevant for use in tissue culture, the study of physiologically relevant cell-matrix interactions, and tissue engineering applications. The scaffolds exhibited highly anisotropic material behavior, with the modulus of the scaffolds in the fiber direction over 100 times greater than the modulus in the transverse direction. Adipose-derived mesenchymal stem cells were seeded onto the biaxially compressed scaffolds with minimal cell death over seven days of culture, along with cell proliferation and migration into the pore spaces. This fabrication method provides new capabilities to manufacture structurally and mechanically relevant cytocompatible scaffolds that will enable more physiologically relevant cell culture studies. Further improvement of manufacturing techniques has the potential to produce engineered scaffolds for direct replacement of dense connective tissues such as meniscus and annulus fibrosus.


2017


M. Adamaszek, H. Adams, E. Gasparovic, M. Gommel, E. Purvine, R. Sazdanovic, B. Wang, Y. Wang, L. Ziegelmeier. “Vietoris-Rips and Cech Complexes of Metric Gluings,” In CoRR, 2017.

ABSTRACT

We study Vietoris-Rips and Cech complexes of metric wedge sums and metric gluings. We show that the Vietoris-Rips (resp. Cech) complex of a wedge sum, equipped with a natural metric, is homotopy equivalent to the wedge sum of the Vietoris-Rips (resp. Cech) complexes. We also provide generalizations for certain metric gluings, i.e. when two metric spaces are glued together along a common isometric subset. As our main example, we deduce the homotopy type of the Vietoris-Rips complex of two metric graphs glued together along a sufficiently short path. As a result, we can describe the persistent homology, in all homological dimensions, of the Vietoris-Rips complexes of a wide class of metric graphs.
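
In symbols, the wedge-sum statement from the abstract reads: for a metric wedge sum X ∨ Y equipped with its natural metric and any scale r > 0,

\[
  \mathrm{VR}(X \vee Y; r) \;\simeq\; \mathrm{VR}(X; r) \vee \mathrm{VR}(Y; r),
  \qquad
  \mathrm{Cech}(X \vee Y; r) \;\simeq\; \mathrm{Cech}(X; r) \vee \mathrm{Cech}(Y; r),
\]

where ≃ denotes homotopy equivalence and the wedge on the right-hand side is the usual wedge of simplicial complexes.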



M. Berzins, D. A. Bonnell, Jr. Cizewski, K. M. Heeger, A.J.G. Hey, C. J. Keane, B. A. Ramsey, K. A. Remington, J.L. Rempe. “Department of Energy, Advanced Scientific Computing Advisory Committee (ASCAC), Subcommittee on LDRD Review Final Report,” May, 2017.



M. Berzins. “Nonlinear Stability of the MPM Method,” In V International Conference on Particle-based Methods – Fundamentals and Applications. PARTICLES 2017, Edited by P. Wriggers, M. Bischoff, E. Oñate, D.R.J. Owen, & T. Zohdi, pp. 671--682. 2017.

ABSTRACT

The Material Point Method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. The nonlinear nature of MPM makes it necessary to use a full nonlinear stability analysis to determine a stable timestep. The stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable timestep bound for a model problem. This bound is contrasted against a traditional CFL bound.
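
For context, the traditional CFL-type restriction referred to above (the familiar linear bound, not the nonlinear bound derived in the paper) has the form

\[
  \Delta t \;\le\; C\,\frac{h}{c + |v|_{\max}},
\]

where h is the background grid spacing, c the material wave speed, |v|_max the largest particle speed, and C a safety factor; the paper contrasts its nonlinear stability bound against a bound of this form.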



A. Bhaduri, Y. He, M.D. Shields, L. Graham-Brady, R.M. Kirby. “Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis,” In CoRR, 2017.

ABSTRACT

The presence of a high-dimensional stochastic parameter space with discontinuities poses major computational challenges in analyzing and quantifying the effects of the uncertainties in a physical system. In this paper, we propose a stochastic collocation method with adaptive mesh refinement (SCAMR) to deal with high-dimensional stochastic systems with discontinuities. Specifically, the proposed approach uses a generalized polynomial chaos (gPC) expansion with a Legendre polynomial basis and solves for the gPC coefficients using the least-squares method. It also implements an adaptive mesh (element) refinement strategy that checks for abrupt variations in the output, based on the second-order gPC approximation error, to track discontinuities or non-smoothness. In addition, the proposed method involves a criterion for checking possible dimensionality reduction and, consequently, the decomposition of the full-dimensional problem into a number of lower-dimensional subproblems. Specifically, this criterion checks all the existing interactions between input dimensions of a specific problem based on the high-dimensional model representation (HDMR) method, and therefore automatically provides the subproblems that involve only interacting dimensions. The efficiency of the approach is demonstrated using both smooth and non-smooth function examples with input dimensions up to 300, and the approach is compared against other existing algorithms.
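
A minimal one-dimensional sketch of the Legendre-basis, least-squares ingredient (our own illustration; it omits the adaptive refinement, discontinuity tracking, and HDMR decomposition described above):

    import numpy as np
    from numpy.polynomial.legendre import legvander, legval

    def gpc_least_squares(samples, outputs, degree):
        """Fit 1D gPC coefficients in a Legendre basis by least squares.

        samples : (n,) parameter samples on [-1, 1] (uniform density)
        outputs : (n,) corresponding model evaluations
        degree  : highest Legendre degree retained in the expansion
        """
        V = legvander(samples, degree)             # columns P_0(x), ..., P_degree(x)
        coeffs, *_ = np.linalg.lstsq(V, outputs, rcond=None)
        return coeffs                              # evaluate surrogate via legval

    # Example: surrogate for a smooth response in a single stochastic dimension
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 200)
    y = np.exp(0.5 * x) + 0.1 * np.sin(4.0 * x)
    c = gpc_least_squares(x, y, degree=8)
    print(np.max(np.abs(legval(x, c) - y)))        # small residual for smooth y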



A. Bhatele, J. Yeom, N. Jain, C. J. Kuhlman, Y. Livnat, K. R. Bisset, L. V. Kale, M. V. Marathe. “Massively Parallel Simulations of Spread of Infectious Diseases over Realistic Social Networks,” In 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), May, 2017.
DOI: 10.1109/ccgrid.2017.141

ABSTRACT

Controlling the spread of infectious diseases in large populations is an important societal challenge. Mathematically, the problem is best captured as a certain class of reaction-diffusion processes (referred to as contagion processes) over appropriate synthesized interaction networks. Agent-based models have been successfully used in the recent past to study such contagion processes. We describe EpiSimdemics, a highly scalable, parallel code written in Charm++ that uses agent-based modeling to simulate disease spreads over large, realistic, co-evolving interaction networks. We present a new parallel implementation of EpiSimdemics that achieves unprecedented strong and weak scaling on different architectures — Blue Waters, Cori and Mira. EpiSimdemics achieves five times greater speedup than the second fastest parallel code in this field. This unprecedented scaling is an important step to support the long term vision of real-time epidemic science. Finally, we demonstrate the capabilities of EpiSimdemics by simulating the spread of influenza over a realistic synthetic social contact network spanning the continental United States (∼280 million nodes and 5.8 billion social contacts).



L. Bos, A. Narayan, N. Levenberg, F. Piazzon. “An Orthogonality Property of the Legendre Polynomials,” In Constructive Approximation, Vol. 45, No. 1, pp. 65--81. Feb, 2017.
ISSN: 0176-4276, 1432-0940
DOI: 10.1007/s00365-015-9321-3

ABSTRACT

We give a remarkable additional orthogonality property of the classical Legendre polynomials on the real interval [−1,1]: polynomials up to degree n from this family are mutually orthogonal under the arcsine measure weighted by the degree-n normalized Christoffel function.
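
Read literally, and up to the paper's choice of normalizing constants, the property states that for Legendre polynomials P_j, P_k with 0 ≤ j ≠ k ≤ n,

\[
  \int_{-1}^{1} P_j(x)\, P_k(x)\, \frac{(n+1)\,\lambda_n(x)}{\pi\sqrt{1-x^2}}\, dx \;=\; 0,
  \qquad
  \lambda_n(x) \;=\; \Bigg(\sum_{m=0}^{n} \frac{2m+1}{2}\, P_m(x)^2\Bigg)^{-1},
\]

where dx/(π√(1−x²)) is the arcsine measure, λ_n is the Christoffel function of the Legendre weight, and (n+1)λ_n is its degree-n normalization.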



A. Brown, B. Wang. “Sheaf-Theoretic Stratification Learning,” In CoRR, 2017.

ABSTRACT

In this paper, we investigate a sheaf-theoretic interpretation of stratification learning. Motivated by the work of Alexandroff (1937) and McCord (1978), we aim to redirect efforts in the computational topology of triangulated compact polyhedra to the much more computable realm of sheaves on partially ordered sets. Our main result is the construction of stratification learning algorithms framed in terms of a sheaf on a partially ordered set with the Alexandroff topology. We prove that the resulting decomposition is the unique minimal stratification for which the strata are homogeneous and the given sheaf is constructible. In particular, when we choose to work with the local homology sheaf, our algorithm gives an alternative to the local homology transfer algorithm given in Bendich et al. (2012) and the cohomology stratification algorithm given in Nanda (2017). We envision that our sheaf-theoretic algorithm could give rise to a larger class of stratifications beyond homology-based stratification. This approach also points toward future applications of sheaf theory in the study of topological data analysis by illustrating the utility of the language of sheaf theory in generalizing existing algorithms.



J. Cates, L. Nevell, S. I. Prajapati, L. D. Nelon, J. Y. Chang, M. E. Randolph, B. Wood, C. Keller, R. T. Whitaker. “Shape analysis of the basioccipital bone in Pax7-deficient mice,” In Scientific Reports, Vol. 7, No. 1, Springer Nature, Dec, 2017.
DOI: 10.1038/s41598-017-18199-9

ABSTRACT

We compared the cranial base of newborn Pax7-deficient and wildtype mice using a computational shape modeling technology called particle-based modeling (PBM). We found systematic differences in the morphology of the basioccipital bone, including a broadening of the basioccipital bone and an antero-inferior inflection of its posterior edge in the Pax7-deficient mice. We show that the Pax7 cell lineage contributes to the basioccipital bone and that the location of the Pax7 lineage correlates with the morphology most affected by Pax7 deficiency. Our results suggest that the Pax7-deficient mouse may be a suitable model for investigating the genetic control of the location and orientation of the foramen magnum, and changes in the breadth of the basioccipital.



M. Chen, G. Grinstein, C. R. Johnson, J. Kennedy, M. Tory. “Pathways for Theoretical Advances in Visualization,” In IEEE Computer Graphics and Applications, IEEE, pp. 103--112. July, 2017.

ABSTRACT

More than a decade ago, Chris Johnson proposed the "Theory of Visualization" as one of the top research problems in visualization. Since then, there have been several theory-focused events, including three workshops and three panels at IEEE Visualization (VIS) Conferences. Together, these events have produced a set of convincing arguments.



J. Docampo-Sánchez, J.K. Ryan, M. Mirzargar, R.M. Kirby. “Multi-Dimensional Filtering: Reducing the Dimension Through Rotation,” In SIAM Journal on Scientific Computing, Vol. 39, No. 5, SIAM, pp. A2179--A2200. Jan, 2017.
DOI: 10.1137/16m1097845

ABSTRACT

Over the past few decades there has been a strong effort toward the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters for discontinuous Galerkin (DG) methods, designed to increase the smoothness and improve the convergence rate of the DG solution through this postprocessor. These advantages can be exploited during flow visualization, for example, by applying the SIAC filter to DG data before streamline computations [M. Steffen, S. Curtis, R. M. Kirby, and J. K. Ryan, IEEE Trans. Vis. Comput. Graphics, 14 (2008), pp. 680--692]. However, introducing these filters in engineering applications can be challenging since a tensor product filter grows in support size as the field dimension increases, becoming computationally expensive. As an alternative, [D. Walfisch, J. K. Ryan, R. M. Kirby, and R. Haimes, J. Sci. Comput., 38 (2009), pp. 164--184] proposed a univariate filter implemented along the streamline curves. Until now, this technique remained a numerical experiment. In this paper we introduce the line SIAC filter and explore how the orientation, structure, and filter size affect the order of accuracy and global errors. We present theoretical error estimates showing how line filtering preserves the properties of traditional tensor product filtering, including smoothness and improvement in the convergence rate. Furthermore, numerical experiments are included, exhibiting how these filters achieve the same accuracy at significantly lower computational costs, becoming an attractive tool for the scientific visualization community.
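
Schematically (a standard description of SIAC postprocessing, not an excerpt from the paper), the filtered solution is a convolution of the DG field with a compactly supported kernel built from B-splines:

\[
  u^{\star}(x) \;=\; \int_{\mathbb{R}} K_{H}(x-y)\, u_h(y)\, dy,
  \qquad
  K_{H}(x) \;=\; \frac{1}{H} \sum_{\gamma} c_{\gamma}\, \psi^{(\ell)}\!\Big(\frac{x}{H} - x_{\gamma}\Big),
\]

where ψ^(ℓ) is a B-spline of order ℓ, the coefficients c_γ are chosen so that the kernel reproduces polynomials, and H scales the kernel support. A tensor-product filter applies such a kernel in every coordinate direction, whereas the line filter introduced in the paper applies a single univariate kernel along a chosen orientation through the multidimensional field, which is what keeps its computational cost low.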



E. Erdil, M.U. Ghani, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. “Nonparametric joint shape and feature priors for image segmentation,” In IEEE Transactions on Image Processing, Vol. 26, No. 11, IEEE, pp. 5312--5323. Nov, 2017.
DOI: 10.1109/tip.2017.2728185

ABSTRACT

In many image segmentation problems involving limited and low-quality data, employing statistical prior information about the shapes of the objects to be segmented can significantly improve the segmentation result. However, defining probability densities in the space of shapes is an open and challenging problem, especially if the object to be segmented comes from a shape density involving multiple modes (classes). Existing techniques in the literature estimate the underlying shape distribution by extending the Parzen density estimator to the space of shapes. In these methods, the evolving curve may converge to a shape from a wrong mode of the posterior density when the observed intensities provide very little information about the object boundaries. In such scenarios, employing both shape- and class-dependent discriminative feature priors can aid the segmentation process. Such features may involve, e.g., intensity-based, textural, or geometric information about the objects to be segmented. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed by Parzen density estimation. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on a variety of synthetic and real data sets from several fields involving multimodal shape densities. Experimental results demonstrate the potential of the proposed method.
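
As a rough illustration of the Parzen (kernel density) construction, a joint shape-and-feature prior can be estimated from training examples as a product-kernel density (a sketch under our own assumptions about the vector representations; normalization constants are omitted):

    import numpy as np

    def parzen_joint_prior(shape_vec, feat_vec, train_shapes, train_feats,
                           sigma_shape=1.0, sigma_feat=1.0):
        """Kernel (Parzen) estimate of a joint shape/feature prior, up to
        normalization.  Shapes and features are assumed to be fixed-length
        vectors (hypothetical representation, e.g. flattened signed-distance
        maps and feature descriptors)."""
        d_shape = np.linalg.norm(train_shapes - shape_vec, axis=1)
        d_feat = np.linalg.norm(train_feats - feat_vec, axis=1)
        k = np.exp(-0.5 * (d_shape / sigma_shape) ** 2) \
          * np.exp(-0.5 * (d_feat / sigma_feat) ** 2)
        return k.mean()  # average of Gaussian kernels centered at training pairs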



M. Feiszli, A. Narayan. “Numerical Computation of Weil-Petersson Geodesics in the Universal Teichmüller Space,” In SIAM Journal on Imaging Sciences, Vol. 10, No. 3, SIAM, pp. 1322--1345. Jan, 2017.
DOI: 10.1137/15M1043947

ABSTRACT

We propose an optimization algorithm for computing geodesics on the universal Teichmüller space T(1) in the Weil-Petersson (WP) metric. Another realization for T(1) is the space of planar shapes, modulo translation and scale, and thus our algorithm addresses a fundamental problem in computer vision: compute the distance between two given shapes. The identification of smooth shapes with elements on T(1) allows us to represent a shape as a diffeomorphism on S1. Then given two diffeomorphisms on S1 (i.e., two shapes we want to connect with a flow), we formulate a discretized WP energy and the resulting problem is a boundary-value minimization problem. We numerically solve this problem, providing several examples of geodesic flow on the space of shapes, and verifying mathematical properties of T(1). Our algorithm is more general than the application here in the sense that it can be used to compute geodesics on any other Riemannian manifold.



M. Foote, P. Sabouri, A. Sawant, S. Joshi. “Rank Constrained Diffeomorphic Density Motion Estimation for Respiratory Correlated Computed Tomography,” In Graphs in Biomedical Image Analysis, Computational Anatomy and Imaging Genetics, Springer International Publishing, pp. 177--185. 2017.
DOI: 10.1007/978-3-319-67675-3_16

ABSTRACT

Motion estimation of organs in a sequence of images is important in numerous medical imaging applications. The focus of this paper is the analysis of 4D Respiratory Correlated Computed Tomography (RCCT) Imaging. It is hypothesized that the quasi-periodic, breathing-induced motion of organs in the thorax can be represented by deformations spanning a very low-dimensional subspace of the full infinite-dimensional space of diffeomorphic transformations. This paper presents a novel motion estimation algorithm that includes a low-rank constraint on the motion between the different phases of the RCCT images. Low-rank deformation solutions are necessary for efficient statistical analysis and improved treatment planning and delivery. Although the application focus of this paper is RCCT, the algorithm is quite general and applicable to various motion estimation problems in medical imaging.



K. Furmanova, S. Gratzl, H. Stitz, T. Zichner, M. Jaresova, M. Ennemoser, A. Lex, M. Streit. “Taggle: Scalable Visualization of Tabular Data through Aggregation,” In CoRR, 2017.

ABSTRACT

Visualization of tabular data, for both presentation and exploration purposes, is a well-researched area. Although effective visual presentations of complex tables are supported by various plotting libraries, creating such tables is a tedious process and requires scripting skills. In contrast, interactive table visualizations that are designed for exploration purposes either operate at the level of individual rows, where large parts of the table are accessible only via scrolling, or provide a high-level overview that often lacks context-preserving drill-down capabilities. In this work we present Taggle, a novel visualization technique for exploring and presenting large and complex tables that are composed of individual columns of categorical or numerical data and homogeneous matrices. The key contribution of Taggle is the hierarchical aggregation of data subsets, for which the user can also choose suitable visual representations. The aggregation strategy is complemented by the ability to sort hierarchically, such that groups of items can be flexibly defined by combining categorical stratifications, and by rich data selection and filtering capabilities. We demonstrate the usefulness of Taggle for interactive analysis and presentation of complex genomics data for the purpose of drug discovery.



E. Ghafoori, E.G. Kholmovski, S. Thomas, J. Silvernagel, N. Angel, N. Hu, D.J. Dosdall, R.S. MacLeod, R. Ranjan. “Characterization of Gadolinium Contrast Enhancement of Radiofrequency Ablation Lesions in Predicting Edema and Chronic Lesion Size,” In Circulation: Arrhythmia and Electrophysiology, Vol. 10, No. 11, Ovid Technologies (Wolters Kluwer Health), pp. e005599. Oct, 2017.
DOI: 10.1161/circep.117.005599

ABSTRACT

Background Magnetic resonance imaging (MRI) has been used to acutely visualize radiofrequency ablation lesions, but its accuracy in predicting chronic lesion size is unknown. The main goal of this study was to characterize different areas of enhancement in late gadolinium enhancement MRI done immediately after ablation to predict acute edema and chronic lesion size.

Methods and Results In a canine model (n=10), ventricular radiofrequency lesions were created using a ThermoCool SmartTouch (Biosense Webster) catheter. All animals underwent MRI (late gadolinium enhancement and T2-weighted edema imaging) immediately after ablation and after 1, 2, 4, and 8 weeks. Edema, microvascular obstruction, and enhanced volumes were identified in MRI and normalized to chronic histological volume. Immediately after contrast administration, the microvascular obstruction region was 3.2±1.1 times larger than the chronic lesion volume in acute MRI. Even 60 minutes after contrast administration, edema was 8.7±3.31 times and the enhanced area 6.14±2.74 times the chronic lesion volume. An exponential fit to the microvascular obstruction volume was found to be the best predictor of chronic lesion volume at 26.14 minutes (95% prediction interval, 24.35–28.11 minutes) after contrast injection. The edema volume in late gadolinium enhancement correlated well with edema volume in T2-weighted MRI with an R2 of 0.99.

Conclusion The microvascular obstruction region on acute late gadolinium enhancement images acquired 26.1 minutes after contrast administration can accurately predict the chronic lesion volume. We also show that T1-weighted MRI images acquired immediately after contrast injection accurately depict edema resulting from radiofrequency ablation.



S. Ghimire, J. Dhamala, J. Coll-Font, J. D. Tate, M. S. Guillem, D. H. Brooks, R. S. MacLeod, L. Wang. “Overcoming Barriers to Quantification and Comparison of Electrocardiographic Imaging Methods: A Community-Based Approach,” In Computing in Cardiology, Vol. 44, 2017.

ABSTRACT

There has been a recent upsurge in the development of electrocardiographic imaging (ECGI) methods, along with a significant increase in clinical application. To better assess the state-of-the-art, enable reliable progress, and facilitate clinical adoption, it is important to be able to compare results in a comprehensive manner, scientifically and clinically. However, studies vary in modeling choices, computational methods, validation mechanisms and metrics, and clinical applications, making unified evaluation and comparison of ECGI a critical challenge.

This paper describes initial results of a project to address this challenge via a community-based approach organized by the Consortium for Electrocardiographic Imaging (CEI). We detail different aspects of this collective effort including a data sharing repository, a platform for comparison of different algorithms and modeling approaches on the same datasets, several active workgroups and progress made along these directions. We also summarize the results from groups participating in this collaboration and contributing solutions by applying their methods to the same dataset for comparison.