20th High Performance Computing Symposium
(HPC 2012)

part of the
SCS Spring Simulation Multiconference (SpringSim'12)
in cooperation with ACM/SIGSIM




Paper submission due: January 7, 2012
Notification of acceptance: January 27, 2012
Revised manuscript due: February 10, 2012
Symposium: March 26-29, 2012


The 2012 Spring Simulation Multiconference will feature the 20th High Performance Computing Symposium (HPC 2012), devoted to the impact of high performance computing and communications on computer simulations.

Advances in multicore and many-core architectures, networking, high-end computers, large data stores, and middleware capabilities are ushering in a new era of high performance parallel and distributed simulations. Along with these new capabilities come new challenges in computing and system modeling. The goal of HPC 2012 is to encourage innovation in high performance computing and communication technologies and to promote synergistic advances in modeling methodologies and simulation. It will promote the exchange of ideas and information between universities, industry, and national laboratories about new developments in system modeling, high performance computing and communication, and scientific computing and simulation. Topics of interest include:

  • high-performance/large-scale application case studies,
  • GPUs for general-purpose computation (GPGPU),
  • multi-core and many-core computing,
  • power-aware computing,
  • cloud, distributed, and grid computing,
  • asynchronous numerical methods and programming,
  • hybrid system modeling and simulation,
  • large-scale visualization and data management,
  • tools and environments for coupling parallel codes,
  • parallel algorithms and architectures,
  • high-performance software tools,
  • resilience at the simulation level,
  • component technologies for high-performance computing.


Prospective authors are invited to submit full papers (up to 8 pages, double column format) on topics related to the areas listed above. Submissions will be evaluated on relevance, technical quality, and exposition. Papers must not have appeared before (or be pending) in a journal or conference with published proceedings, nor may they be under review or submitted to another forum during the HPC 2012 review process. All accepted papers will be published in the proceedings as regular papers and indexed by Odysci. Papers should be submitted electronically using the paper submission system.

Papers must use the SCS format (formatting instructions are available from SCS).

At least one author of an accepted paper must register for the symposium and must present the paper at the symposium.


HPC 2012 will feature the following invited talks:

Designing Multiple-Fault Tolerant RAIDS: Using Graphs and Hypergraphs

by Narsingh Deo, University of Central Florida

Accelerating linear system solutions on new parallel architectures

by Marc Baboulin, Inria Saclay -- Île-de-France and Université Paris-Sud

Exascale Algorithms for Synthesizing Parameters of Computational Models

by Sumit Kumar Jha, University of Central Florida


HPC 2012 will feature the following tutorials:

Ingredients for good parallel performance on multicore-based systems

by Georg Hager, Gerhard Wellein, and Jan Treibig, University of Erlangen-Nuremberg, Germany
This tutorial covers program optimization techniques for multicore processors and the systems built from them. It concentrates on the dominant parallel programming paradigms, MPI and OpenMP.

The presenters begin with an architectural overview of multicore processors, pointing out peculiarities such as shared vs. separate caches, bandwidth bottlenecks, and ccNUMA characteristics. They show typical performance features such as synchronization overhead, intranode MPI bandwidths and latencies, ccNUMA locality, and bandwidth saturation (in cache and memory) in order to pinpoint the influence of system topology and thread affinity on the performance of typical parallel programming constructs. Multiple ways of probing system topology and establishing affinity, either by explicit coding or with separate tools, are demonstrated. Finally, they elaborate on programming techniques that help establish optimal parallel memory access patterns and/or cache reuse, with an emphasis on leveraging shared caches to improve performance.

Some related materials can be found on Georg Hager's blog; particularly entertaining is the presenters' "Fooling the Masses" talk.


by Karl Rupp, Vienna University of Technology, Austria
This tutorial introduces ViennaCL, a free, open-source linear algebra library based on OpenCL. The library provides simple, high-level access to the vast computing resources of parallel architectures such as GPUs and multi-core CPUs, and focuses primarily on common linear algebra operations (BLAS levels 1, 2, and 3) and on the solution of large systems of equations by means of iterative methods with optional preconditioners.

First, the architecture and basic usage of ViennaCL are explained. Examples demonstrate that GPU-based implementations are obtained with a high level of convenience while preserving the abstraction offered by C++. The simple integration of custom OpenCL compute kernels is demonstrated by example. Finally, applications of ViennaCL to the numerical solution of partial differential equations in general, and to the simulation of structural mechanics and microelectronics in particular, are presented; for these, performance improvements of about one order of magnitude over single-core implementations are achieved.


The symposium proceedings will be published in hard copy and on CD-ROM through SCS and may also appear in the ACM Digital Library.


At least one paper from each symposium will be chosen for a Best Paper Award, which will be recognized in an awards ceremony before a plenary lecture.


General Chair: Gary Howell, North Carolina State University
General Vice-Chair: Fang (Cherry) Liu, Ames Laboratory
Program Chair: Steven Seidel, Michigan Technological University
Program Vice-Chair: Rhonda Phillips, MIT Lincoln Laboratory
Publicity Chair: Karl Rupp, TU Wien


Marc Baboulin, Inria Saclay -- Île-de-France and Université Paris-Sud
Narsingh Deo, University of Central Florida
Julien Langou, University of Colorado Denver
Beth Plale, Indiana University
William Shoaff, Florida Institute of Technology
Masha Sosonkina, Ames Laboratory and Iowa State University
Niraj Srivastava, Raytheon Corporation
Will Thacker, Winthrop University
Layne Watson, Virginia Polytechnic Institute


Aron Ahmadia, King Abdullah University
Alex Aravind, University of Northern British Columbia, Canada
Eric Aubanel, University of New Brunswick, Canada
Sanjukta Bhowmick, University of Nebraska
Brett Bode, Ames Laboratory
Ali Butt, Virginia Polytechnic Institute
Bin Cao, Teradata Corporation
Haiyang Cheng, Willamette University
Jing-Ru C. "Ruth" Cheng, U.S. Army Engineer Research and Development Center
Jose C. Cunha, Universidade Nova de Lisboa
Nahid Emad, Université de Versailles Saint-Quentin-en-Yvelines, France
Samantha Foley, Oak Ridge National Laboratory
Gillian K. Groves, Raytheon Company
Phil Hammonds, RTSync Corporation
Azzam Haidar, University of Tennessee
Joshua Hursey, Oak Ridge National Laboratory
Jim Jones, Florida Tech
Michael Mascagni, Florida State University
Gabriel Mateescu, Leibniz-Rechenzentrum, Germany
John Michalakes, University Corporation for Atmospheric Research
Lois Curfman McInnes, Argonne National Laboratory
Jose Moreira, IBM Thomas J. Watson Research Center
Saeid Nooshabadi, Michigan Technological University
Suely Oliveira, University of Iowa
Thomas Oppe, U.S. Army Engineer Research and Development Center
Christian Perez, INRIA/ENS Lyon, France
Thomas Rauber, University of Bayreuth, Germany
Jill Reese, Mathworks
Cal Ribbens, Virginia Polytechnic Institute
Gudula Ruenger, Technical University of Chemnitz, Germany
Alan Stewart, Queen's University, Belfast, UK
William A. Ward, CSC, NASA Greenbelt
Robert White, North Carolina State University
Pak Chung Wong, Pacific Northwest National Laboratory
Qin Xin, Université catholique de Louvain, Belgium
Ping Yang, Pacific Northwest National Lab