SCI Publications
2021
D. Dai, Y. Epshteyn, A. Narayan.
Hyperbolicity-Preserving and Well-Balanced Stochastic Galerkin Method for Two-Dimensional Shallow Water Equations, In SIAM Journal on Scientific Computing, Vol. 43, No. 2, Society for Industrial and Applied Mathematics, pp. A929-A952. 2021.
Stochastic Galerkin formulations of the two-dimensional shallow water systems parameterized with random variables may lose hyperbolicity, and hence change the nature of the original model. In this work, we present a hyperbolicity-preserving stochastic Galerkin formulation by carefully selecting the polynomial chaos approximations to the nonlinear terms $(q^x)^2/h$, $q^x q^y/h$, and $(q^y)^2/h$ in the shallow water equations. We derive a sufficient condition to preserve the hyperbolicity of the stochastic Galerkin system which requires only a finite collection of positivity conditions on the stochastic water height at selected quadrature points in parameter space. Based on our theoretical results for the stochastic Galerkin formulation, we develop a corresponding well-balanced hyperbolicity-preserving central-upwind scheme. We demonstrate the accuracy and the robustness of the new scheme on several challenging numerical tests.
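For reference, a minimal statement of the deterministic two-dimensional shallow water system underlying the stochastic Galerkin formulation, written in terms of the water height h, discharges q^x = hu and q^y = hv, bottom topography B, and gravitational constant g (standard notation assumed here, not quoted from the paper):

\[
\begin{aligned}
h_t + q^x_x + q^y_y &= 0,\\
q^x_t + \Big(\tfrac{(q^x)^2}{h} + \tfrac{g}{2}h^2\Big)_x + \Big(\tfrac{q^x q^y}{h}\Big)_y &= -g h B_x,\\
q^y_t + \Big(\tfrac{q^x q^y}{h}\Big)_x + \Big(\tfrac{(q^y)^2}{h} + \tfrac{g}{2}h^2\Big)_y &= -g h B_y.
\end{aligned}
\]

The three nonlinear flux terms above are the ones whose polynomial chaos approximations must be chosen carefully to retain hyperbolicity.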
D. Dai, Y. Epshteyn, A. Narayan.
Non-Dissipative and Structure-Preserving Emulators via Spherical Optimization, Subtitled arXiv:2108.12053, 2021.
Approximating a function with a finite series, e.g., involving polynomials or trigonometric functions, is a critical tool in computing and data analysis. The construction of such approximations via now-standard approaches like least squares or compressive sampling does not ensure that the approximation adheres to certain convex linear structural constraints, such as positivity or monotonicity. Existing approaches that ensure such structure are norm-dissipative, and this can have a deleterious impact when applying these approaches, e.g., when numerically solving partial differential equations. We present a new framework that enforces such structure on approximations via optimization and is simultaneously norm-preserving. This results in a conceptually simple convex optimization problem on the sphere, but the feasible set for such problems can be very complex. We establish well-posedness of the optimization problem through results on spherical convexity and design several spherical-projection-based algorithms to numerically compute the solution. Finally, we demonstrate the effectiveness of this approach through several numerical examples.
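As a rough, self-contained illustration of the norm-preserving idea (a toy projected iteration, not the spherical optimization algorithm of the paper; all names and tolerances here are invented), the sketch below alternates between correcting positivity violations of a polynomial fit and rescaling the coefficients back to their original Euclidean norm, which for an orthonormal basis would match the approximant's L2 norm:

```python
import numpy as np

def norm_preserving_positive_fit(x, y, degree=6, n_iter=200):
    """Toy sketch: enforce pointwise nonnegativity of a polynomial fit
    while preserving the Euclidean norm of its coefficient vector.
    Illustrative only; not the paper's algorithm."""
    V = np.vander(x, degree + 1)                  # Vandermonde design matrix
    c, *_ = np.linalg.lstsq(V, y, rcond=None)     # unconstrained least squares
    target_norm = np.linalg.norm(c)               # the "sphere" radius to preserve
    check = np.linspace(x.min(), x.max(), 400)    # positivity check points
    Vc = np.vander(check, degree + 1)
    for _ in range(n_iter):
        vals = Vc @ c
        i = np.argmin(vals)
        if vals[i] >= 0:                          # feasible: all check points nonnegative
            break
        g = Vc[i]                                 # row giving the most violated constraint
        c = c - (vals[i] / (g @ g)) * g           # project out that violation
        c = c * (target_norm / np.linalg.norm(c))  # snap back to the sphere
    return c

# usage: fit noisy samples of a nonnegative function
x = np.linspace(0, 1, 50)
y = np.abs(np.sin(6 * x)) + 0.05 * np.random.randn(50)
coeffs = norm_preserving_positive_fit(x, y)
```

The point of the toy is only that feasibility (positivity) and norm preservation can be pursued together; the paper formulates this rigorously as a convex problem on the sphere.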
E. Deelman, A. Mandal, A. P. Murillo, J. Nabrzyski, V. Pascucci, R. Ricci, I. Baldin, S. Sons, L. Christopherson, C. Vardeman, R. F. da Silva, J. Wyngaard, S. Petruzza, M. Rynge, K. Vahi, W. R. Whitcup, J. Drake, E. Scott.
Blueprint: Cyberinfrastructure Center of Excellence, Subtitled arXiv, 2021.
In 2018, NSF funded an effort to pilot a Cyberinfrastructure Center of Excellence (CI CoE or Center) that would serve the cyberinfrastructure (CI) needs of the NSF Major Facilities (MFs) and large projects with advanced CI architectures. The goal of the CI CoE Pilot project (Pilot) effort was to develop a model and a blueprint for such a CoE by engaging with the MFs, understanding their CI needs, understanding the contributions the MFs are making to the CI community, and exploring opportunities for building a broader CI community. This document summarizes the results of community engagements conducted during the first two years of the project and describes the identified CI needs of the MFs. To better understand MFs' CI, the Pilot has developed and validated a model of the MF data lifecycle that follows the data generation and management within a facility and gained an understanding of how this model captures the fundamental stages that the facilities' data passes through from the scientific instruments to the principal investigators and their teams, to the broader collaborations and the public. The Pilot also aimed to understand what CI workforce development challenges the MFs face while designing, constructing, and operating their CI and what solutions they are exploring and adopting within their projects. Based on the needs of the MFs in the data lifecycle and workforce development areas, this document outlines a blueprint for a CI CoE that will learn about and share the CI solutions designed, developed, and/or adopted by the MFs, provide expertise to the largest NSF projects with advanced and complex CI architectures, and foster a …
T.P. Driscoll, T.C. Bidone, S.J. Ahn, A. Yu, A. Groisman, G.A. Voth, M.A. Schwartz.
Integrin-Based Mechanosensing through Conformational Deformation, In Biophysical Journal, 2021.
DOI: https://doi.org/10.1016/j.bpj.2021.09.010
Conversion of integrins from low to high affinity states, termed activation, is important in biological processes including immunity, hemostasis, angiogenesis and embryonic development. Integrin activation is regulated by large-scale conformational transitions from closed, low affinity states to open, high affinity states. While it has been suggested that substrate stiffness shifts the conformational equilibrium of integrin and governs its unbinding, here we address the role of integrin conformational activation in cellular mechanosensing. Comparison of WT vs. activating mutants of integrin αVβ3 shows that activating mutants shift cell spreading, FAK activation, traction stress, and force on talin toward high stiffness values at lower stiffness. Although all activated integrin mutants showed equivalent binding affinity for soluble ligands, the β3 S243E mutant showed the strongest shift in mechanical responses. To understand this behavior, we used coarse-grained computational models derived from molecular level information. The models predicted that wild type integrin αVβ3 displaces under force, and that activating mutations shift the required force toward lower values, with S243E showing the strongest effect. Cellular stiffness sensing thus correlates with computed effects of force on integrin conformation. Together, these data identify a role for force-induced integrin conformational deformation in cellular mechanosensing.
A. Dubey, M. Berzins, C. Burstedde, M. L. Norman, D. Unat, M. Wahib.
Structured Adaptive Mesh Refinement Adaptations to Retain Performance Portability With Increasing Heterogeneity, In Computing in Science & Engineering, Vol. 23, No. 5, pp. 62-66. 2021.
ISSN: 1521-9615
DOI: 10.1109/MCSE.2021.3099603
Adaptive mesh refinement (AMR) is an important method that enables many mesh-based applications to run at effectively higher resolution within limited computing resources by allowing high resolution only where really needed. This advantage comes at a cost, however: greater complexity in the mesh management machinery and challenges with load distribution. With the current trend of increasing heterogeneity in hardware architecture, AMR presents an orthogonal axis of complexity. The usual techniques necessary to obtain reasonable performance, such as asynchronous communication and hierarchy management for parallelism and memory, are very challenging to reason about with AMR. Different groups working with AMR are bringing different approaches to this challenge. Here, we examine the design choices of several AMR codes and the degree to which the demands placed on them by their users influence these choices.
M. D. Foote, P. E. Dennison, P. R. Sullivan, K. B. O'Neill, A. K. Thorpe, D. R. Thompson, D. H. Cusworth, R. Duren, S. Joshi.
Impact of scene-specific enhancement spectra on matched filter greenhouse gas retrievals from imaging spectroscopy, In Remote Sensing of Environment, Vol. 264, Elsevier, pp. 112574. 2021.
Matched filter techniques have been widely used for retrieval of greenhouse gas enhancements from imaging spectroscopy datasets. While multiple algorithmic techniques and refinements have been proposed, the greenhouse gas target spectrum used for concentration enhancement estimation has remained largely unaltered since the introduction of quantitative matched filter retrievals. The magnitude of retrieved methane and carbon dioxide enhancements, and thereby integrated mass enhancements (IME) and estimated flux of point-source emitters, is heavily dependent on this target spectrum. Current standard use of molecular absorption coefficients to create unit enhancement target spectra does not account for absorption by background concentrations of greenhouse gases, solar and sensor geometry, or atmospheric water vapor absorption. We introduce geometric and atmospheric parameters into the generation of scene-specific unit enhancement spectra to provide target spectra that are compatible with all greenhouse gas retrieval matched filter techniques. Specifically, we use radiative transfer modeling to model four parameters that are expected to change between scenes: solar zenith angle, column water vapor, ground elevation, and sensor altitude. These parameter values are well defined, with low variation within a single scene. A benchmark dataset consisting of ten AVIRIS-NG airborne imaging spectrometer scenes was used to compare IME retrieved using a matched filter algorithm. For methane plumes, IME resulting from use of standard, generic enhancement spectra varied from −22 to +28.7% compared to scene-specific enhancement spectra. Due to differences in spectral shape between the generic and scene-specific enhancement spectra, differences in methane plume IME were linked to surface spectral characteristics in addition to geometric and atmospheric parameters. IME differences were much larger for carbon dioxide plumes, with generic enhancement spectra producing integrated mass enhancements −76.1 to −48.1% compared to scene-specific enhancement spectra. Fluxes calculated from these integrated enhancements would vary by the same percentages, assuming equivalent wind conditions. Methane and carbon dioxide IME were most sensitive to changes in solar zenith angle and ground elevation. We introduce an interpolation approach that can efficiently generate scene-specific unit enhancement spectra for given sets of parameters. Scene-specific target spectra can improve confidence in greenhouse gas retrievals and flux estimates across collections of scenes with diverse geometric and atmospheric conditions.
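For context, the quantitative matched filter referenced above estimates a per-pixel concentration enhancement from the measured radiance x, the background mean μ, the background covariance Σ, and the unit enhancement target spectrum t (the quantity this work makes scene-specific); a standard form of the estimator is

\[
\hat{\alpha}(x) \;=\; \frac{(x - \mu)^\top \Sigma^{-1} t}{t^\top \Sigma^{-1} t},
\]

so any change in the target spectrum t propagates directly into the retrieved enhancements and, hence, into integrated mass enhancements and fluxes.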
K. Gadhave, J. Görtler, Z. Cutler, C. Nobre, O. Deussen, M. Meyer, J.M. Phillips, A. Lex.
Predicting intent behind selections in scatterplot visualizations, In Information Visualization, Vol. 20, No. 4, pp. 207-228. 2021.
DOI: 10.1177/14738716211038604
Predicting and capturing an analyst’s intent behind a selection in a data visualization is valuable in two scenarios: First, a successful prediction of a pattern an analyst intended to select can be used to auto-complete a partial selection which, in turn, can improve the correctness of the selection. Second, knowing the intent behind a selection can be used to improve recall and reproducibility. In this paper, we introduce methods to infer an analyst’s intents behind selections in data visualizations, such as scatterplots. We describe intents based on patterns in the data, and identify algorithms that can capture these patterns. Upon an interactive selection, we compare the selected items with the results of a large set of computed patterns, and use various ranking approaches to identify the best pattern for an analyst’s selection. We store annotations and the metadata to reconstruct a selection, such as the type of algorithm and its parameterization, in a provenance graph. We present a prototype system that implements these methods for tabular data and scatterplots. Analysts can select a prediction to auto-complete partial selections and to seamlessly log their intents. We discuss implications of our approach for reproducibility and reuse of analysis workflows. We evaluate our approach in a crowd-sourced study, where we show that auto-completing selections improves accuracy, and that we can accurately capture pattern-based intent.
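A simplified sketch of the ranking step follows; the Jaccard similarity used here is an illustrative choice of overlap measure, not necessarily the ranking used in the paper:

```python
def jaccard(selection, pattern):
    """Jaccard similarity between two sets of item ids."""
    s, p = set(selection), set(pattern)
    return len(s & p) / len(s | p) if s | p else 0.0

def rank_intents(selection, candidate_patterns):
    """Rank precomputed patterns (e.g., clusters, outliers, ranges)
    by how well they explain a partial selection."""
    scored = [(name, jaccard(selection, items))
              for name, items in candidate_patterns.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# usage: three hypothetical patterns computed on a scatterplot
patterns = {"cluster_0": {1, 2, 3, 4, 5}, "outliers": {42, 99}, "x_range_high": {4, 5, 6, 7}}
print(rank_intents({2, 3, 4, 5}, patterns))   # cluster_0 ranks first
```

The top-ranked pattern is what a system would offer as an auto-completion of the partial selection and record as the inferred intent.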
K. Gadhave, Z.T. Cutler, A. Lex.
Reusing Interactive Analysis Workflows, Subtitled OSF Preprints, 2021.
Interactive visual analysis has many advantages, but has the disadvantage that analysis processes and workflows cannot be easily stored and reused, which is in contrast to scripted analysis workflows using a programming language such as Python. In this paper, we introduce methods to semantically capture workflows in interactive visualization systems for different interactions such as selections, filters, categorizing/grouping, labeling, and aggregation. We design these workflows to be robust to updates in the dataset by capturing the semantics of underlying interactions, and, hence, they can be applied to updated datasets. We demonstrate this specification using a prototype that visualizes the data, shows interaction provenance, and allows generating workflows from this provenance. Finally, we introduce a Python library that can consume the workflow and apply it to the datasets, providing a seamless bridge between computational workflows and interactive visualization tools. We demonstrate our techniques using our UI prototype and Jupyter notebooks.
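A minimal sketch of what consuming such a semantic workflow might look like, assuming a hypothetical list of interaction records replayed with plain pandas (the record format and field names are invented for illustration and are not the paper's library API):

```python
import pandas as pd

def apply_workflow(df, workflow):
    """Replay a captured interaction workflow against a (possibly updated)
    DataFrame. The 'kind' values and fields below are hypothetical."""
    for step in workflow:
        if step["kind"] == "filter":
            df = df.query(step["expr"])
        elif step["kind"] == "label":
            df = df.assign(label=step["value"])
        elif step["kind"] == "aggregate":
            df = df.groupby(step["by"], as_index=False).agg(step["agg"])
    return df

# usage: the same captured workflow applies cleanly after the dataset is updated
workflow = [
    {"kind": "filter", "expr": "price > 100"},
    {"kind": "aggregate", "by": "category", "agg": {"price": "mean"}},
]
df = pd.DataFrame({"category": ["a", "a", "b"], "price": [90, 150, 200]})
print(apply_workflow(df, workflow))
```

Because the steps are stored semantically (a filter expression, a grouping column) rather than as row indices, they remain meaningful when the underlying data change.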
W. W. Good, B. Zenger, J. A. Bergquist, L. C. Rupp, K. K. Gillette, M. A.F. Gsell, G. Plank, R. S. MacLeod.
Quantifying the spatiotemporal influence of acute myocardial ischemia on volumetric conduction velocity, In Journal of Electrocardiology, Vol. 66, Churchill Livingstone, pp. 86-94. 2021.
Introduction
Acute myocardial ischemia occurs when coronary perfusion to the heart is inadequate, which can perturb the highly organized electrical activation of the heart and can result in adverse cardiac events including sudden cardiac death. Ischemia is known to influence the ST and repolarization phases of the ECG, but it also has a marked effect on propagation (QRS); however, studies investigating propagation during ischemia have been limited.
W. W. Good, K. Gillette, B. Zenger, J. Bergquist, L. C. Rupp, J. D. Tate, D. Anderson, M. Gsell, G. Plank, R. S. Macleod.
Estimation and validation of cardiac conduction velocity and wavefront reconstruction using epicardial and volumetric data, In IEEE Transactions on Biomedical Engineering, IEEE, 2021.
DOI: 10.1109/TBME.2021.3069792
Objective: In this study, we have used whole heart simulations parameterized with large animal experiments to validate three techniques (two from the literature and one novel) for estimating epicardial and volumetric conduction velocity (CV). Methods: We used an eikonal-based simulation model to generate ground truth activation sequences with prescribed CVs. Using the sampling density achieved experimentally, we examined the accuracy with which we could reconstruct the wavefront, and then examined the robustness of three CV estimation techniques to reconstruction-related error. We examined triangulation-based, inverse-gradient-based, and streamline-based techniques for estimating CV across the surface and within the volume of the heart. Results: The reconstructed activation times agreed closely with simulated values, with 50-70% of the volumetric nodes and 97-99% of the epicardial nodes within 1 ms of the ground truth. We found close agreement between the CVs calculated using reconstructed versus ground truth activation times, with differences in the median estimated CV on the order of 3-5% volumetrically and 1-2% superficially, regardless of which technique was used. Conclusion: Our results indicate that the wavefront reconstruction and CV estimation techniques are accurate, allowing us to examine changes in propagation induced by experimental interventions such as acute ischemia, ectopic pacing, or drugs. Significance: We implemented, validated, and compared the performance of a number of CV estimation techniques. The CV estimation techniques implemented in this study produce accurate, high-resolution CV fields that can be used to study propagation in the heart experimentally and clinically.
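As a small illustration of the inverse-gradient idea mentioned above (a generic sketch on a regular grid, not the authors' implementation on cardiac meshes): conduction velocity is the reciprocal of the spatial gradient magnitude of activation time.

```python
import numpy as np

def inverse_gradient_cv(activation_times, spacing):
    """Estimate conduction velocity from activation times on a regular grid
    using CV = 1 / |grad t|. Generic sketch only; the paper's data live on
    irregular cardiac meshes and also use triangulation- and
    streamline-based estimators."""
    grads = np.gradient(activation_times, spacing)      # dt/dx along each axis (ms/mm)
    slowness = np.sqrt(sum(g ** 2 for g in grads))       # |grad t|
    return 1.0 / np.maximum(slowness, 1e-9)              # avoid division by zero

# usage: a planar wave traveling along x at 0.5 mm/ms, sampled on a 1 mm grid
x = np.arange(50.0)
times = np.tile(x / 0.5, (20, 1))                # activation time grows 2 ms per mm
print(inverse_gradient_cv(times, 1.0).mean())    # ~0.5 mm/ms
```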
A. A. Gooch, S. Petruzza, A. Gyulassy, G. Scorzelli, V. Pascucci, L. Rantham, W. Adcock, C. Coopmans.
Lessons learned towards the immediate delivery of massive aerial imagery to farmers and crop consultants, In Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VI, Vol. 11747, International Society for Optics and Photonics, pp. 22 -- 34. 2021.
DOI: 10.1117/12.2587694
In this paper, we document lessons learned from using ViSOAR Ag Explorer™ in the fields of Arkansas and Utah in the 2018-2020 growing seasons. Our insights come from creating software with fast reading and writing of 2D aerial image mosaics for platform-agnostic collaborative analytics and visualization. We currently enable stitching in the field on a laptop without the need for an internet connection. The full resolution result is then available for instant streaming visualization and analytics via Python scripting. While our software, ViSOAR Ag Explorer™ removes the time and labor software bottleneck in processing large aerial surveys, enabling a cost-effective process to deliver actionable information to farmers, we learned valuable lessons with regard to the acquisition, storage, viewing, analysis, and planning stages of aerial data surveys. Additionally, with the ultimate goal of stitching thousands of images in minutes on board a UAV at the time of data capture, we performed preliminary tests for on-board, real-time stitching and analysis on USU AggieAir sUAS using lightweight computational resources. This system is able to create a 2D map while flying and allow interactive exploration of the full resolution data as soon as the platform has landed or has access to a network. This capability further speeds up the assessment process on the field and opens opportunities for new real-time photogrammetry applications. Flying and imaging over 1500-2000 acres per week provides up-to-date maps that give crop consultants a much broader scope of the field in general as well as providing a better view into planting and field preparation than could be observed from field level. Ultimately, our software and hardware could provide a much better understanding of weed presence and intensity or lack thereof.
W. W. Good, B. Zenger, J. A. Bergquist, L. C. Rupp, K. Gillette, N. Angel, D. Chou, G. Plank, R. S. MacLeod.
Combining endocardial mapping and electrocardiographic imaging (ECGI) for improving PVC localization: A feasibility study, In Journal of Electrocardiology, 2021.
ISSN: 0022-0736
DOI: https://doi.org/10.1016/j.jelectrocard.2021.08.013
Introduction
Accurate reconstruction of cardiac activation wavefronts is crucial for clinical diagnosis, management, and treatment of cardiac arrhythmias. Furthermore, reconstruction of activation profiles within the intramural myocardium has long been impossible because electrical mapping was only performed on the endocardial surface. Recent advancements in electrocardiographic imaging (ECGI) have made endocardial and epicardial activation mapping possible. We propose a novel approach to use both endocardial and epicardial mapping in a combined approach to reconstruct intramural activation times.
Objective
To implement and validate a combined epicardial/endocardial intramural activation time reconstruction technique.
Methods
We used 11 simulations of ventricular activation paced from sites throughout the myocardial wall and extracted endocardial and epicardial activation maps at approximate clinical resolution. From these maps, we interpolated the activation times through the myocardium using thin-plate-spline radial basis functions. We evaluated activation time reconstruction accuracy using root-mean-squared error (RMSE) of activation times and the percent of nodes within 1 ms of the ground truth.
Results
Reconstructed intramural activation times showed an RMSE and percentage of nodes within 1 ms of the ground truth simulations of 3 ms and 70%, respectively. In the worst case, the RMSE and percentage of nodes were 4 ms and 60%, respectively.
Conclusion
We showed that a simple, yet effective combination of clinical endocardial and epicardial activation maps can accurately reconstruct intramural wavefronts. Furthermore, we showed that this approach provided robust reconstructions across multiple intramural stimulation sites.
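A minimal sketch of the interpolation step described above, using SciPy's thin-plate-spline radial basis interpolator on synthetic surface samples (the geometry here is a toy stand-in for the endocardial/epicardial point sets, not the study's data):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# toy stand-ins: activation times sampled at scattered "surface" locations
rng = np.random.default_rng(0)
surface_pts = rng.uniform(-1, 1, size=(200, 3))                 # endo + epi sample locations
surface_times = 10 + 25 * np.linalg.norm(surface_pts, axis=1)   # synthetic activation times (ms)

# thin-plate-spline RBF fit to the combined surface samples
rbf = RBFInterpolator(surface_pts, surface_times, kernel="thin_plate_spline")

# evaluate the fit at intramural query points between the surfaces
intramural_pts = rng.uniform(-1, 1, size=(50, 3))
intramural_times = rbf(intramural_pts)
print(intramural_times[:5])
```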
J. K. Holmen, D. Sahasrabudhe, M. Berzins.
A Heterogeneous MPI+PPL Task Scheduling Approach for Asynchronous Many-Task Runtime Systems, In Proceedings of the Practice and Experience in Advanced Research Computing 2021 on Sustainability, Success and Impact (PEARC21), ACM, 2021.
Asynchronous many-task runtime systems and MPI+X hybrid parallelism approaches have shown promise for helping manage the increasing complexity of nodes in current and emerging high performance computing (HPC) systems, including those for exascale. The increasing architectural diversity, however, poses challenges for large legacy runtime systems emphasizing broad support for major HPC systems. Performance portability layers (PPL) have shown promise for helping manage this diversity. This paper describes a heterogeneous MPI+PPL task scheduling approach for combining these promising solutions with additional consideration for parallel third party libraries facing similar challenges to help prepare such a runtime for the diverse heterogeneous systems accompanying exascale computing. This approach is demonstrated using a heterogeneous MPI+Kokkos task scheduler and the accompanying portable abstractions [15] implemented in the Uintah Computational Framework, an asynchronous many-task runtime system, with additional consideration for hypre, a parallel third party library. Results are shown for two challenging problems executing workloads representative of typical Uintah applications. These results show performance improvements up to 4.4x when using this scheduler and the accompanying portable abstractions [15] to port a previously MPI-Only problem to Kokkos::OpenMP and Kokkos::CUDA to improve multi-socket, multi-device node use. Good strong-scaling to 1,024 NVIDIA V100 GPUs and 512 IBM POWER9 processors is also shown using MPI+Kokkos::OpenMP+Kokkos::CUDA at scale.
J. K. Holmen, D. Sahasrabudhe, M. Berzins, A. Bardakoff, T. J. Blattner, W. Keyrouz.
Uintah+Hedgehog: Combining Parallelism Models for End-to-End Large-Scale Simulation Performance, Scientific Computing and Imaging Institute, 2021.
The complexity of heterogeneous nodes near and at exascale has increased the need for “heroic” programming efforts. To accommodate this complexity, significant investment is required for codes not yet optimizing for low-level architecture features (e.g., wide vector units) and/or running at large-scale. This paper describes ongoing efforts to combine two codes, Hedgehog and Uintah, lying at both extremes to ease programming efforts. The end goals of this effort are (1) to combine the two codes to make an asynchronous many-task runtime system specializing in both node-level and large-scale performance and (2) to further improve the accessibility of both with portable abstractions. A prototype adopting Hedgehog in Uintah and a prototype extending Hedgehog to support MPI+X hybrid parallelism are discussed. Results achieving ∼60% of NVIDIA V100 GPU peak performance for a distributed DGEMM problem are shown for a naive MPI+Hedgehog implementation before any attempt to optimize for performance.
Authors' note: This is a refereed but unpublished report that was submitted to, reviewed for, and accepted in revised form for a presentation of the same material at the HiPar Workshop at Supercomputing 21.
Z. Houmani, D. Balouek-Thomert, E. Caron, M. Parashar.
Enabling microservices management for Deep Learning applications across the Edge-Cloud Continuum, In SBAC-PAD 2021 - IEEE 33rd International Symposium on Computer Architecture and High Performance Computing, October, 2021.
Deep Learning has shifted the focus of traditional batch workflows to data-driven feature engineering on streaming data. In particular, the execution of Deep Learning workflows presents expectations of near-real-time results with user-defined acceptable accuracy. Meeting the objectives of such applications across heterogeneous resources located at the edge of the network, the core, and in-between requires managing trade-offs between the accuracy and the urgency of the results. However, current data analysis rarely manages the entire Deep Learning pipeline along the data path, making it complex for developers to implement strategies in real-world deployments. Driven by an object detection use case, this paper presents an architecture for time-critical Deep Learning workflows by providing a data-driven scheduling approach to distribute the pipeline across Edge to Cloud resources. Furthermore, it adopts a data management strategy that reduces the resolution of incoming data when potential trade-off optimizations are available. We illustrate the system's viability through a performance evaluation of the object detection use case on the Grid'5000 testbed. We demonstrate that in a multiuser scenario, with a standard frame rate of 25 frames per second, the system speeds up data analysis by up to 54.4% compared to a Cloud-only-based scenario with an analysis accuracy higher than a fixed threshold.
X. Huang, P. Klacansky, S. Petruzza, A. Gyulassy, P.T. Bremer, V. Pascucci.
Distributed merge forest: a new fast and scalable approach for topological analysis at scale, In Proceedings of the ACM International Conference on Supercomputing, pp. 367-377. 2021.
Topological analysis is used in several domains to identify and characterize important features in scientific data, and is now one of the established classes of techniques of proven practical use in scientific computing. The growth in parallelism and problem size tackled by modern simulations poses a particular challenge for these approaches. Fundamentally, the global encoding of topological features necessitates inter-process communication that limits their scaling. In this paper, we extend a new topological paradigm to the case of distributed computing, where the construction of a global merge tree is replaced by a distributed data structure, the merge forest, trading slower individual queries on the structure for faster end-to-end performance and scaling. Empirically, the queries that are most negatively affected also tend to have limited practical use. Our experimental results demonstrate the scalability of both the merge forest construction and the parallel queries needed in scientific workflows, and contrast this scalability with the two established alternatives that construct variations of a global tree.
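For readers unfamiliar with the underlying structure, the following sketch builds a (join) merge tree of a scalar field on a graph with the standard sequential union-find sweep; it is a generic illustration, not the distributed merge forest of the paper.

```python
def merge_tree(values, edges):
    """Join tree of a scalar field: values[v] is the scalar at vertex v,
    edges is a list of (u, v) pairs. Returns parent[v], the vertex where
    v's component merges into a lower one (roots point to themselves).
    Standard sequential union-find sweep."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    uf, parent = {}, {}

    def find(v):                          # union-find with path halving
        while uf[v] != v:
            uf[v] = uf[uf[v]]
            v = uf[v]
        return v

    for v in sorted(range(len(values)), key=lambda i: values[i], reverse=True):
        uf[v] = v                         # sweep vertices from high to low value
        for n in adj.get(v, []):
            if n in uf:                   # neighbor already swept (higher value)
                r = find(n)
                if r != v:
                    parent[r] = v         # neighbor's component merges at v
                    uf[r] = v
    for v in list(uf):
        parent.setdefault(find(v), find(v))
    return parent

# usage: a small 1D scalar field treated as a path graph
vals = [1.0, 3.0, 2.0, 5.0, 0.5]
print(merge_tree(vals, [(0, 1), (1, 2), (2, 3), (3, 4)]))
# maxima at vertices 1 and 3 merge at the saddle vertex 2, then descend to the root
```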
M. H. Jensen, S. Joshi, S. Sommer.
Bridge Simulation and Metric Estimation on Lie Groups, Subtitled arXiv preprint arXiv:2106.03431, 2021.
We present a simulation scheme for simulating Brownian bridges on complete and connected Lie groups. We show how this simulation scheme leads to absolute continuity of the Brownian bridge measure with respect to the guided process measure. This result generalizes the Euclidean result of Delyon and Hu to Lie groups. We present numerical results of the guided process in the Lie group $\mathrm{SO}(3)$. In particular, we apply importance sampling to estimate the metric on $\mathrm{SO}(3)$ using an iterative maximum likelihood method.
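For orientation, the Euclidean construction of Delyon and Hu that this work generalizes replaces the intractable conditioned diffusion by a guided process with an added pulling term (standard notation assumed here, not quoted from the paper): to simulate a bridge of $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ from $x_0$ to $v$ at time $T$, one simulates

\[
dY_t = b(Y_t)\,dt \;-\; \frac{Y_t - v}{T - t}\,dt \;+\; \sigma(Y_t)\,dW_t, \qquad Y_0 = x_0,
\]

and corrects expectations by the likelihood ratio between the bridge measure and the guided-process measure; the absolute continuity of these two measures is the property the paper establishes on complete, connected Lie groups.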
M. Højgaard Jensen, L. Hilgendorf, S. Joshi, S. Sommer.
Bridge Simulation on Lie Groups and Homogeneous Spaces with Application to Parameter Estimation, Subtitled arXiv:2112.00866, 2021.
X. Jiang, J. C. Font, J. A. Bergquist, B. Zenger, W. W. Good, D. H. Brooks, R. S. MacLeod, L. Wang.
Deep Adaptive Electrocardiographic Imaging with Generative Forward Model for Error Reduction, In Functional Imaging and Modeling of the Heart: 11th International Conference, Vol. 12738, Springer Nature, pp. 471. 2021.
Accuracy of estimating the heart’s electrical activity with Electrocardiographic Imaging (ECGI) is challenging due to using an error-prone physics-based model (forward model). While obtaining better results than traditional numerical methods that follow the underlying physics, modern deep learning approaches ignore the physics behind the electrical propagation in the body and do not allow the use of patient-specific geometry. We introduce a deep-learning-based ECGI framework capable of understanding the underlying physics, aware of geometry, and adjustable to patient-specific data. Using a variational autoencoder (VAE), we uncover the forward model’s parameter space, and when solving the inverse problem, these parameters will be optimized to reduce the errors in the forward model. In both simulation and real data experiments, we demonstrated the ability of the presented framework to provide accurate reconstruction of the heart’s electrical potentials and localization of the earliest activation sites.
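For context, a generic statement of the ECGI problem discussed above (the notation, in particular the parameter vector $\theta$, is an annotation added here and not taken from the paper): body-surface potentials y are related to cardiac potentials x through a geometry-dependent forward operator A, and the classical physics-based inverse is a regularized least-squares fit,

\[
y = A_\theta x + \varepsilon, \qquad \hat{x} = \arg\min_x \; \|A_\theta x - y\|_2^2 + \lambda \|Rx\|_2^2,
\]

where R is a regularization operator (e.g., the identity for zeroth-order Tikhonov) and $\lambda$ the regularization weight. The framework above can be read as learning a low-dimensional space for $\theta$ with a VAE and adjusting those parameters while solving the inverse problem.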
C. R. Johnson.
Translational computer science at the scientific computing and imaging institute, In Journal of Computational Science, Vol. 52, pp. 101217. 2021.
ISSN: 1877-7503
DOI: https://doi.org/10.1016/j.jocs.2020.101217
The Scientific Computing and Imaging (SCI) Institute at the University of Utah evolved from the SCI research group, started in 1994 by Professors Chris Johnson and Rob MacLeod. Over time, research centers funded by the National Institutes of Health, Department of Energy, and State of Utah significantly spurred growth, and SCI became a permanent interdisciplinary research institute in 2000. The SCI Institute is now home to more than 150 faculty, students, and staff. The history of the SCI Institute is underpinned by a culture of multidisciplinary, collaborative research, which led to its emergence as an internationally recognized leader in the development and use of visualization, scientific computing, and image analysis research to solve important problems in a broad range of domains in biomedicine, science, and engineering. A particular hallmark of SCI Institute research is the creation of open source software systems, including the SCIRun scientific problem-solving environment, Seg3D, ImageVis3D, Uintah, ViSUS, Nektar++, VisTrails, FluoRender, and FEBio. At this point, the SCI Institute has made more than 50 software packages broadly available to the scientific community under open-source licensing and supports them through web pages, documentation, and user groups. While the vast majority of academic research software is written and maintained by graduate students, the SCI Institute employs several professional software developers to help create, maintain, and document robust, tested, well-engineered open source software. The story of how and why we worked, and often struggled, to make professional software engineers an integral part of an academic research institute is crucial to the larger story of the SCI Institute’s success in translational computer science (TCS).