SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

SCI Publications

2011


J.T. Oden, O. Ghattas, J.L. King, B.I. Schneider, K. Bartschat, F. Darema, J. Drake, T. Dunning, D. Estep, S. Glotzer, M. Gurnis, C.R. Johnson, D.S. Katz, D. Keyes, S. Kiesler, S. Kim, J. Kinter, G. Klimeck, C.W. McCurdy, R. Moser, C. Ott, A. Patra, L. Petzold, T. Schlick, K. Schulten, V. Stodden, J. Tromp, M. Wheeler, S.J. Winter, C. Wu, K. Yelick. “Cyber Science and Engineering: A Report of the National Science Foundation Advisory Committee for Cyberinfrastructure Task Force on Grand Challenges,” Note: NSF Report, 2011.

ABSTRACT

This document contains the findings and recommendations of the NSF Advisory Committee for Cyberinfrastructure Task Force on Grand Challenges addressed by advances in Cyber Science and Engineering. The term Cyber Science and Engineering (CS&E) is introduced to describe the intellectual discipline that brings together core areas of science and engineering, computer science, and computational and applied mathematics in a concerted effort to use cyberinfrastructure (CI) for scientific discovery and engineering innovation; CS&E is computational and data-based science and engineering enabled by CI. The report examines a host of broad issues faced in addressing the Grand Challenges of science and technology and explores how they can be met by advances in CI. Included in the report are recommendations for new programs and initiatives that will expand the portfolio of the Office of Cyberinfrastructure and that will be critical to advances in all areas of science and engineering that rely on CI.



Y. Pan, W.-K. Jeong, R.T. Whitaker. “Markov surfaces: A probabilistic framework for user-assisted three-dimensional image segmentation,” In Computer Vision and Image Understanding, Vol. 115, No. 10, pp. 1375--1383. 2011.

ABSTRACT

This paper presents Markov surfaces, a probabilistic algorithm for user-assisted segmentation of elongated structures in 3D images. The 3D segmentation problem is formulated as a path-finding problem, where path probabilities are described by Markov chains. Users define points, curves, or regions on 2D image slices, and the algorithm connects these user-defined features in a way that respects the underlying elongated structure in the data. Transition probabilities in the Markov model are derived from intensity matches and interslice correspondences, which are generated by a slice-to-slice registration algorithm. Bézier interpolations between paths are applied to generate smooth surfaces. Subgrid accuracy is achieved by linear interpolation of image intensities and the interslice correspondences. Experimental results on synthetic and real data demonstrate that Markov surfaces can segment regions that are defined by texture, nearby context, and motion. A parallel implementation on a graphics processor, a streaming parallel architecture, makes the method interactive for 3D data.
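
The path-finding formulation lends itself to a short sketch. Below is a minimal, illustrative dynamic program in the Viterbi style for 1-D slices, assuming a toy transition model built from intensity similarity only; the paper's actual model also folds in interslice registration correspondences, and all names here are hypothetical.

import numpy as np

def transition_probs(slice_a, slice_b, sigma=10.0):
    # Toy transition model: probability of stepping from pixel i on one
    # slice to pixel j on the next, from intensity similarity alone
    # (the paper additionally uses interslice registration matches).
    diff = slice_a[:, None] - slice_b[None, :]
    p = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
    return p / p.sum(axis=1, keepdims=True)  # rows sum to 1 (Markov rows)

def most_probable_path(slices, start, end):
    # Viterbi-style dynamic program: the most probable Markov-chain path
    # from a user-marked pixel on the first slice to one on the last.
    log_best = np.full(slices[0].shape[0], -np.inf)
    log_best[start] = 0.0
    back = []
    for k in range(len(slices) - 1):
        log_t = np.log(transition_probs(slices[k], slices[k + 1]) + 1e-12)
        scores = log_best[:, None] + log_t
        back.append(scores.argmax(axis=0))  # best predecessor per pixel
        log_best = scores.max(axis=0)
    path = [end]                            # backtrack from the end pixel
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

# Example: five 64-pixel slices, user marks pixel 10 on the first slice
# and pixel 12 on the last:
#   most_probable_path([np.random.rand(64) for _ in range(5)], 10, 12)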



V. Pascucci, X. Tricoche, H. Hagen, J. Tierny (Eds.). “Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications,” Mathematics and Visualization, Springer, 2011.
ISBN: 978-3642150135



T. Peterka, R. Ross, A. Gyulassy, V. Pascucci, W. Kendall, H.-W. Shen, T.-Y. Lee, A. Chaudhuri. “Scalable Parallel Building Blocks for Custom Data Analysis,” In Proceedings of the 2011 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV), pp. 105--112. October, 2011.
DOI: 10.1109/LDAV.2011.6092324

ABSTRACT

We present a set of building blocks that provide scalable data movement capability to computational scientists and visualization researchers for writing their own parallel analysis. The set includes scalable tools for domain decomposition, process assignment, parallel I/O, global reduction, and local neighborhood communication: tasks that are common across many analysis applications. The global reduction is performed with a new algorithm, described in this paper, that efficiently merges blocks of analysis results into a smaller number of larger blocks. The merging is configurable in the number of blocks that are reduced in each round, the number of rounds, and the total number of resulting blocks. We highlight the use of our library in two analysis applications: parallel streamline generation and parallel Morse-Smale topological analysis. The first case uses an existing local neighborhood communication algorithm, whereas the latter uses the new merge algorithm.
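
To make the configurable merge reduction concrete, here is a minimal serial sketch of the round structure, assuming a user-supplied associative combiner; the library itself performs these rounds with message passing across processes, which is omitted here.

from functools import reduce

def merge_reduction(blocks, k_per_round, merge):
    # Round-based merge reduction: each round merges groups of k blocks
    # into one, so rounds with factors [k1, k2, ...] shrink B blocks to
    # B / (k1 * k2 * ...).  `merge` is any associative combiner.
    for k in k_per_round:
        blocks = [reduce(merge, blocks[i:i + k])
                  for i in range(0, len(blocks), k)]
    return blocks

# Example: reduce 8 per-process histograms in two rounds (4-way, then 2-way).
def merge_hist(x, y):
    return {key: x.get(key, 0) + y.get(key, 0) for key in set(x) | set(y)}

print(merge_reduction([{"bin0": 1}] * 8, [4, 2], merge_hist))  # one block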



S. Philip, B. Summa, P.-T. Bremer, and V. Pascucci. “Parallel Gradient Domain Processing of Massive Images,” In Proceedings of the 2011 Eurographics Symposium on Parallel Graphics and Visualization, pp. 11--19. 2011.

ABSTRACT

Gradient domain processing remains a particularly computationally expensive technique even for relatively small images. When images become massive in size, giga- or terapixel, these problems become particularly troublesome, and the best serial techniques take on the order of hours or days to compute a solution. In this paper, we provide a simple framework for parallel gradient domain processing. Specifically, we provide a parallel out-of-core method for the seamless stitching of gigapixel panoramas in a parallel MPI environment. Unlike existing techniques, the framework provides a straightforward implementation, maintains strict control over the required/allocated resources, and makes no assumptions about the speed of convergence to an acceptable image. Furthermore, the approach shows good weak/strong scaling from several to hundreds of cores and runs on a variety of hardware.
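
Seamless stitching in the gradient domain ultimately amounts to solving a Poisson equation whose right-hand side is the divergence of the target gradient field. The sketch below shows only that serial, in-core kernel (Jacobi iteration, with periodic boundaries via np.roll for brevity); the paper's contribution is carrying this computation out-of-core and in parallel under MPI.

import numpy as np

def poisson_jacobi(gx, gy, iters=500):
    # Recover an image u whose gradients match (gx, gy) by Jacobi
    # iteration on the Poisson equation lap(u) = div(g).  Serial and
    # in-core; a stand-in for the core solve, not the paper's method.
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]   # backward difference of gx
    div[1:, :] += gy[1:, :] - gy[:-1, :]   # backward difference of gy
    u = np.zeros_like(gx)
    for _ in range(iters):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = (nb - div) / 4.0               # lap(u) ~ nb - 4u = div
    return u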



S. Philip, B. Summa, P.-T. Bremer, V. Pascucci. “Hybrid CPU-GPU Solver for Gradient Domain Processing of Massive Images,” In Proceedings of the 2011 International Conference on Parallel and Distributed Systems (ICPADS), pp. 244--251. 2011.

ABSTRACT

Gradient domain processing is a computationally expensive image processing technique. Its use for processing massive images, giga or terapixels in size, can take several hours with serial techniques. To address this challenge, parallel algorithms are being developed to make this class of techniques applicable to the largest images available with running times that are more acceptable to the users. To this end we target the most ubiquitous form of computing power available today, which is small or medium scale clusters of commodity hardware. Such clusters are continuously increasing in scale, not only in the number of nodes, but also in the amount of parallelism available within each node in the form of multicore CPUs and GPUs. In this paper we present a hybrid parallel implementation of gradient domain processing for seamless stitching of gigapixel panoramas that utilizes MPI, threading and a CUDA based GPU component. We demonstrate the performance and scalability of our implementation by presenting results from two GPU clusters processing two large data sets.
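
The hybrid layout can be illustrated with a small sketch of one node's work split, assuming tiles arrive from an MPI layer that is omitted; solve_tile_cpu and solve_tile_gpu are hypothetical placeholders (in the paper the GPU path is a CUDA kernel; here it simply falls back to the CPU so the sketch runs anywhere).

from concurrent.futures import ThreadPoolExecutor
import numpy as np

def solve_tile_cpu(tile):
    # Placeholder per-tile kernel; a real solver would run many sweeps.
    return tile - tile.mean()

def solve_tile_gpu(tile):
    # In the paper this is a CUDA kernel; the CPU fallback keeps the
    # sketch runnable without a GPU.
    return solve_tile_cpu(tile)

def process_node(tiles, gpu_fraction=0.5):
    # One node's share of tiles: a fraction is handed to the GPU stream
    # while a CPU thread pool works through the rest concurrently.  The
    # MPI layer that distributes tiles across nodes is omitted.
    split = int(len(tiles) * gpu_fraction)
    gpu_tiles, cpu_tiles = tiles[:split], tiles[split:]
    with ThreadPoolExecutor() as pool:
        gpu_future = pool.submit(
            lambda: [solve_tile_gpu(t) for t in gpu_tiles])
        cpu_results = list(pool.map(solve_tile_cpu, cpu_tiles))
        return gpu_future.result() + cpu_results

# process_node([np.random.rand(256, 256) for _ in range(8)])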



T.A. Quinn, S. Granite, M.A. Allessie, C. Antzelevitch, C. Bollensdorff, G. Bub, R.A.B. Burton, E. Cerbai, P.S. Chen, M. Delmar, D. DiFrancesco, Y.E. Earm, I.R. Efimov, M. Egger, E. Entcheva, M. Fink, R. Fischmeister, M.R. Franz, A. Garny, W.R. Giles, T. Hannes, S.E. Harding, P.J. Hunter, G. Iribe, J. Jalife, C.R. Johnson, R.S. Kass, I. Kodama, G. Koren, P. Lord, V.S. Markhasin, S. Matsuoka, A.D. McCulloch, G.R. Mirams, G.E. Morley, S. Nattel, D. Noble, S.P. Olesen, A.V. Panfilov, N.A. Trayanova, U. Ravens, S. Richard, D.S. Rosenbaum, Y. Rudy, F. Sachs, F.B. Sachse, D.A. Saint, U. Schotten, O. Solovyova, P. Taggart, L. Tung, A. Varró, P.G. Volders, K. Wang, J.N. Weiss, E. Wettwer, E. White, R. Wilders, R.L. Winslow, P. Kohl. “Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): Standardised reporting for model reproducibility, interoperability, and data sharing,” In Progress in Biophysics and Molecular Biology, Vol. 107, No. 1, Elsevier, pp. 4--10. October, 2011.
DOI: 10.1016/j.pbiomolbio.2011.07.001
PubMed Central ID: PMC3190048

ABSTRACT

Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists that facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work.

Keywords: Minimum Information Standard; Cardiac electrophysiology; Data sharing; Reproducibility; Integration; Computational modelling



W. Reich, D. Schneider, C. Heine, A. Wiebel, G. Chen, G. Scheuermann. “Combinatorial Vector Field Topology in 3 Dimensions,” In Topological Methods in Data Analysis and Visualization II: Theory, Algorithms, and Applications, Mathematics and Visualization, Springer, pp. 47--59. 2011.
DOI: 10.1007/978-3-642-23175-9_4

ABSTRACT

In this paper, we present two combinatorial methods for processing 3-D steady vector fields, both of which use graph algorithms to extract features from the underlying vector field. Combinatorial approaches are known to be less sensitive to noise than extracting individual trajectories. Both methods are straightforward extensions of an existing 2-D technique to 3-D fields. We observed that the first technique can generate overly coarse results, and we therefore present a second method that works with the same concepts but produces more detailed results. We evaluate our methods on a CFD simulation of a gas furnace chamber. Finally, we discuss several possibilities for categorizing the invariant sets with respect to the flow.
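
A common combinatorial construction, sketched below under the assumption that face-flow directions between neighboring cells have been precomputed, builds a directed graph over grid cells and reads candidate invariant sets off its strongly connected components. This illustrates the general approach, not the authors' exact algorithm; the names are hypothetical.

import networkx as nx

def invariant_set_candidates(cells, flow_crossings):
    # Directed graph with an edge a -> b whenever the vector field points
    # from cell a into neighboring cell b across their shared face; the
    # nontrivial strongly connected components are candidate invariant
    # sets (fixed points, periodic orbits, and so on).
    G = nx.DiGraph()
    G.add_nodes_from(cells)
    G.add_edges_from(flow_crossings)  # precomputed face-flow directions
    return [comp for comp in nx.strongly_connected_components(G)
            if len(comp) > 1
            or G.has_edge(next(iter(comp)), next(iter(comp)))]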



P. Rosen, V. Popescu, K. Hayward, C. Wyman. “Non-Pinhole Approximations for Interactive Rendering,” In IEEE Computer Graphics and Applications, Vol. 99, 2011.



P. Rosen, V. Popescu. “An Evaluation of 3-D Scene Exploration Using a Multiperspective Image Framework,” In The Visual Computer, Vol. 27, No. 6-8, Springer-Verlag New York, Inc., pp. 623--632. 2011.
DOI: 10.1007/s00371-011-0599-2
PubMed ID: 22661796
PubMed Central ID: PMC3364594

ABSTRACT

Multiperspective images (MPIs) show more than what is visible from a single viewpoint and are a promising approach for alleviating the problem of occlusions. We present a comprehensive user study that investigates the effectiveness of MPIs for 3-D scene exploration. A total of 47 subjects performed searching, counting, and spatial orientation tasks using both conventional and multiperspective images. We use a flexible MPI framework that allows trading off disocclusion power for image simplicity. The framework also allows rendering MPIs at interactive rates, which enables investigating interactive navigation and dynamic 3-D scenes. The results of our experiments show that MPIs can greatly outperform conventional images. For searching, subjects performed on average 28% faster using an MPI. For counting, accuracy was on average 91% using MPIs as compared to 42% for conventional images.

Keywords: Interactive 3-D scene exploration, Navigation, Occlusions, User study, Visual interfaces



N. Sadeghi, M.W. Prastawa, P.T. Fletcher, J.H. Gilmore, W. Lin, G. Gerig. “Statistical Growth Modeling of Longitudinal DT-MRI for Regional Characterization of Early Brain Development,” In Proceedings of the 2012 IEEE International Symposium on Biomedical Imaging (ISBI), pp. 1507--1510. 2012.
DOI: 10.1109/ISBI.2012.6235858

ABSTRACT

A population growth model that represents the growth trajectories of individual subjects is critical to the study and understanding of neurodevelopment. This paper presents a framework for jointly estimating and modeling individual and population growth trajectories and for determining significant regional differences in growth pattern characteristics, applied to longitudinal neuroimaging data. We use nonlinear mixed-effects modeling where temporal change is modeled by the Gompertz function. The Gompertz function uses intuitive parameters related to delay, rate of change, and expected asymptotic value, all descriptive measures that can answer clinical questions related to growth. Our proposed framework combines nonlinear modeling of individual trajectories, population analysis, and testing for regional differences. We apply this framework to the study of early maturation in white matter regions as measured with diffusion tensor imaging (DTI). Regional differences between anatomical regions of interest that are known to mature differently are analyzed and quantified. Experiments with image data from a large ongoing clinical study show that our framework provides descriptive, quantitative information on growth trajectories that can be directly interpreted by clinicians. To our knowledge, this is the first longitudinal analysis of growth functions to explain the trajectory of early brain maturation as it is represented in DTI.
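
For concreteness, here is the standard three-parameter Gompertz form and a toy single-subject fit on synthetic data, assuming SciPy's curve_fit; the paper's nonlinear mixed-effects machinery, which pools individual and population parameters, is substantially richer, and the variable names below are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asymptote, delay, speed):
    # y(t) = asymptote * exp(-delay * exp(-speed * t)):
    # `asymptote` is the expected mature value, `delay` shifts onset,
    # and `speed` sets the rate of change.
    return asymptote * np.exp(-delay * np.exp(-speed * t))

# Toy fit on synthetic fractional-anisotropy values for one subject.
ages = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])             # years
fa = gompertz(ages, 0.55, 1.8, 2.0) + 0.01 * np.random.randn(ages.size)
params, _ = curve_fit(gompertz, ages, fa, p0=(0.5, 1.0, 1.0))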

Keywords: NA-MIC



B. Salter, B. Wang, M. Sadinski, S. Ruhnau, V. Sarkar, J. Hinkle, Y. Hitchcock, K. Kokeny, S. Joshi. “WE-E-BRC-06: Comparison of Two Methods of Contouring Internal Target Volume on Multiple 4DCT Data Sets from the Same Subjects: Maximum Intensity Projection and Combination of 10 Phases,” In Medical Physics, Vol. 38, No. 6, pp. 3820. 2011.



R. Samuel, H.J. Sant, F. Jiao, C.R. Johnson, B.K. Gale. “Microfluidic laminate-based phantom for diffusion tensor-magnetic resonance imaging,” In Journal of Micromechanics and Microengineering, Vol. 21, No. 9, 095027. 2011.
DOI: 10.1088/0960-1317/21/9/095027



M. Schott, A.V.P. Grosset, T. Martin, V. Pegoraro, S.T. Smith, C.D. Hansen. “Depth of Field Effects for Interactive Direct Volume Rendering,” In Computer Graphics Forum, Vol. 30, No. 3, Edited by H. Hauser, H. Pfister, and J.J. van Wijk, Wiley-Blackwell, pp. 941--950. June, 2011.
DOI: 10.1111/j.1467-8659.2011.01943.x

ABSTRACT

In this paper, we propose a method for computing depth of field effects in interactive direct volume rendering; such effects have previously been shown to aid observers in the depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions.
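
As a rough illustration of the idea (not the paper's incremental filter), the sketch below blurs each slice in proportion to its distance from the focal plane before standard front-to-back compositing; the actual technique accumulates the filtering incrementally from slice to slice, avoiding this per-slice cost.

import numpy as np
from scipy.ndimage import gaussian_filter

def dof_composite(slices, alphas, depths, focal_depth, blur_scale=2.0):
    # Front-to-back compositing where each scalar slice is blurred in
    # proportion to its distance from the focal plane, approximating the
    # growth of the circle of confusion away from the in-focus depth.
    color = np.zeros_like(slices[0])
    alpha = np.zeros_like(slices[0])
    for c, a, z in zip(slices, alphas, depths):
        sigma = blur_scale * abs(z - focal_depth)  # circle of confusion
        c_b = gaussian_filter(c, sigma) if sigma > 0 else c
        a_b = gaussian_filter(a, sigma) if sigma > 0 else a
        color += (1.0 - alpha) * a_b * c_b         # standard over operator
        alpha += (1.0 - alpha) * a_b
    return color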



M. Schulz, J.A. Levine, P.-T. Bremer, T. Gamblin, V. Pascucci. “Interpreting Performance Data Across Intuitive Domains,” In International Conference on Parallel Processing, Taipei, Taiwan, IEEE, pp. 206--215. 2011.
DOI: 10.1109/ICPP.2011.60



M. Seyedhosseini, A.R.C. Paiva, T. Tasdizen. “Multi-scale Series Contextual Model for Image Parsing,” SCI Technical Report, No. UUSCI-2011-004, SCI Institute, University of Utah, 2011.



M. Seyedhosseini, A.R.C. Paiva, T. Tasdizen. “Fast AdaBoost training using weighted novelty selection,” In Proc. IEEE Intl. Joint Conf. on Neural Networks, San Jose, CA, USA, pp. 1245--1250. August, 2011.

ABSTRACT

In this paper, a new AdaBoost learning framework, called WNS-AdaBoost, is proposed for training discriminative models. The proposed approach significantly speeds up the learning process of adaptive boosting (AdaBoost) by reducing the number of data points. For this purpose, we introduce the weighted novelty selection (WNS) sampling strategy and combine it with AdaBoost to obtain an efficient and fast learning algorithm. WNS selects a representative subset of data, thereby reducing the number of data points onto which AdaBoost is applied. In addition, WNS associates a weight with each selected data point such that the weighted subset approximates the distribution of all the training data. This ensures that AdaBoost can be trained efficiently and with minimal loss of accuracy. The performance of WNS-AdaBoost is first demonstrated in a classification task. Then, WNS is employed in a probabilistic boosting-tree (PBT) structure for image segmentation. Results in these two applications show that the training time using WNS-AdaBoost is greatly reduced at the cost of only a few percent in accuracy.
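
A plausible reading of the sampling step, sketched here with hypothetical names: greedily keep a point only if it lies farther than a radius from everything kept so far, and let each kept point's weight count the points it absorbs. Details such as per-class handling may differ from the paper's WNS.

import numpy as np

def weighted_novelty_selection(X, radius):
    # Greedy subset selection: keep a point only if it is farther than
    # `radius` from every point kept so far; each kept point's weight
    # counts the training points it absorbs, so the weighted subset
    # approximates the full data distribution.
    kept_idx, weights = [], []
    for i, x in enumerate(X):
        if kept_idx:
            d = np.linalg.norm(X[kept_idx] - x, axis=1)
            j = int(d.argmin())
            if d[j] <= radius:
                weights[j] += 1.0
                continue
        kept_idx.append(i)
        weights.append(1.0)
    return np.array(kept_idx), np.array(weights)

# The reduced weighted set plugs into any booster that accepts per-sample
# weights, e.g. scikit-learn:
#   idx, w = weighted_novelty_selection(X, radius=0.5)
#   AdaBoostClassifier().fit(X[idx], y[idx], sample_weight=w)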



M. Seyedhosseini, R. Kumar, E. Jurrus, R. Giuly, M. Ellisman, H. Pfister, T. Tasdizen. “Detection of Neuron Membranes in Electron Microscopy Images using Multi-scale Context and Radon-like Features,” In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011, Lecture Notes in Computer Science (LNCS), Vol. 6891, pp. 670--677. 2011.
DOI: 10.1007/978-3-642-23623-5_84



F. Shi, D. Shen, P.-T. Yap, Y. Fan, J.-Z. Cheng, H. An, L.L. Wald, G. Gerig, J.H. Gilmore, W. Lin. “CENTS: Cortical Enhanced Neonatal Tissue Segmentation,” In Human Brain Mapping, Vol. 32, No. 3, Note: ePub 5 Aug 2010, pp. 382--396. March, 2011.
DOI: 10.1002/hbm.21023
PubMed ID: 20690143



M. Steinberger, M. Waldner, M. Streit, A. Lex, D. Schmalstieg. “Context-Preserving Visual Links,” In IEEE Transactions on Visualization and Computer Graphics (InfoVis '11), Vol. 17, No. 12, 2011.

ABSTRACT

Evaluating, comparing, and interpreting related pieces of information are tasks that are commonly performed during visual data analysis and in many kinds of information-intensive work. Synchronized visual highlighting of related elements is a well-known technique used to assist this task. An alternative approach, which is more invasive but also more expressive, is visual linking, in which line connections are rendered between related elements. In this work, we present context-preserving visual links as a new method for generating visual links. The method specifically aims to fulfill the following two goals: first, visual links should minimize the occlusion of important information; second, links should visually stand out from the surrounding information while minimizing visual interference. We employ an image-based analysis of visual saliency to determine the important regions in the original representation. A consequence of the image-based approach is that our technique is application-independent and can be employed in a large number of visual data analysis scenarios in which the underlying content cannot or should not be altered. We conducted a controlled experiment which indicates that users can find linked elements in complex visualizations more quickly and with greater subjective satisfaction than in complex visualizations in which plain highlighting is used. Context-preserving visual links were perceived as visually more attractive than traditional visual links that do not account for the context information.
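
One way to realize saliency-aware link routing, assuming a precomputed per-pixel saliency map, is a shortest-path search that treats saliency as traversal cost; the sketch below uses plain Dijkstra on a 4-connected grid and is an illustration of the principle rather than the authors' renderer.

import heapq
import numpy as np

def route_link(saliency, start, goal):
    # Dijkstra on the 4-connected pixel grid with per-pixel saliency as
    # traversal cost, so the link path skirts salient screen content.
    h, w = saliency.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                     # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + saliency[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal            # walk back to the start pixel
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]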