SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

The NVIDIA Corporation, the worldwide leader in visual computing technologies, has renewed the University of Utah's recognition as a CUDA Center of Excellence, a milestone that marks the continuation of a significant partnership between the two organizations that began in 2008.

NVIDIA® CUDA™ technology is an award-winning C compiler and software development kit (SDK) for developing computing applications on GPUs. Its inclusion in the University of Utah's curriculum is a clear indicator of the impact that many-core parallel computing is having on the high-performance computing industry. One of twenty-two centers, the University of Utah was the second school, after the University of Illinois at Urbana-Champaign, to be recognized as a CUDA Center of Excellence. Over 50 other schools and universities now include CUDA technology in their computer science curricula or in their research.
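For readers unfamiliar with the programming model, the following is a minimal sketch of the kind of CUDA C program the SDK supports: a kernel that adds two vectors on the GPU. The kernel name, array size, and launch configuration are illustrative only and are not drawn from any University of Utah course material.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements; the grid is sized so that
// every element of the input arrays is handled by exactly one thread.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}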

The center, led by Professors Chris Johnson and Charles Hansen, includes the work of several faculty members:

Dr. Chris Johnson: Uncertainty Visualization
Dr. Chuck Hansen: Visualization

Collaborators

Dr. Martin Berzins: Scalable parallel computing and computational algorithms
Dr. Mike Kirby: Large‐scale scientific computing and visualization
Dr. Mary Hall: Performance optimization
Dr. Ross Whitaker: Image Processing and Visualization
Dr. Valerio Pascucci: Extreme data management, analysis and visualization
Dr. Paul Rosen: Software Performance Visualization

As an NVIDIA CUDA Center of Excellence, the SCI Institute at the University of Utah has access to a new 32-node GPU cluster named "Kepler", an invaluable resource in preparing to run at larger scale on Titan. Each node consists of 4 servers, each with 64 GB of RAM, two 8-core Intel Xeon E5-2660 CPUs at 2.20 GHz, two NVIDIA K20 GPUs (one per CPU), and two Mellanox FDR InfiniBand Connect-IB cards (one per CPU). The Mellanox cards are dual-ported, with each port providing 56 Gb/s of bandwidth, 1 microsecond MPI latency, and 130 million MPI messages per second. The cluster has a total of 128 InfiniBand connections for RDMA between all of the GPUs, so every GPU in the cluster can communicate with every other GPU over its own IB connection.
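As a rough illustration of how an application might exploit that layout, the sketch below binds each MPI rank to one of the GPUs on its node and passes device pointers directly to MPI calls, the pattern that a CUDA-aware MPI installation with GPUDirect RDMA over InfiniBand enables. The buffer size and rank pairing are illustrative assumptions, not code from any Kepler or Titan job.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Bind each MPI rank to one of the GPUs visible on its node
    // (on a node with two K20s this alternates between device 0 and 1).
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > 0)
        cudaSetDevice(rank % ndev);

    // Allocate a buffer directly in GPU memory.
    const int n = 1 << 20;
    float *dbuf;
    cudaMalloc(&dbuf, n * sizeof(float));
    cudaMemset(dbuf, 0, n * sizeof(float));

    // With a CUDA-aware MPI, the device pointer can be handed straight to
    // MPI; the transfer then moves GPU-to-GPU over InfiniBand via RDMA
    // instead of being staged through host memory.
    if (rank == 0 && size > 1) {
        MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats directly into GPU memory\n", n);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}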
