High Performance Computing

Introduction to Parallel Computing and the NCSU Linux Cluster

  • To be on the HPC mailing list

    Send mail to
    mj2@lists.ncsu.edu
    
    with the one line
    subscribe hpc
    
    in the body.
  • Spring 2017

    This free short course will meet for 5 hours total (over 2 sessions) on the Thursday and Friday mornings of spring break (Thursday, March 9, 9:30 to 12:00, and Friday, March 10, 9:30 to 12:00). The two-session short course will be held in the AFH-108 OIT Training Lab. Parking permits are not required during break. AFH is on Avent Ferry Rd., across from the Mission Valley Mall, in the lower building south of the high-rise dorms. Here's a link to an online campus map: Avent Ferry Technology Center

    If you e-mail me (Gary Howell, gary_howell@ncsu.edu) in advance, I can be sure there's enough space for you. The first day of the class covers how to use the HPC cluster, with examples of how to compile and run simple jobs. The second day is designed to help users port the codes they need.

    Class notes for the first day can be downloaded at Intro to MPI. Sample codes can be downloaded from sample.tar.gz. See also Scientific Computing.

    The second day's course materials are at Using Configure and Make, with Lab Day 2. After December, federal regulations will allow HPC staff to port only code that we can update promptly (within 30 days). Since our current update cycle is closer to two years, many users will need to learn to port their own codes. In particular, this module will focus on how to find libraries in nonstandard locations, verify they are the needed libraries, and link to them; a sketch of that compile-and-link workflow follows.
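
    As a rough illustration of that workflow, here is a minimal sketch. The install prefix, library name, and commands below are hypothetical placeholders, not the actual locations or wrappers on the cluster: a tiny C program that calls one CBLAS routine from a library installed under a nonstandard prefix, with the compile, verify, and run steps shown as comments.

      /*
       * link_demo.c -- minimal sketch of linking to a library in a
       * nonstandard location.  The prefix /usr/local/apps/myblas is a
       * hypothetical placeholder; substitute the real install prefix.
       *
       * Compile and link (paths are assumptions, not cluster defaults):
       *   cc -I/usr/local/apps/myblas/include link_demo.c \
       *      -L/usr/local/apps/myblas/lib -lcblas -o link_demo
       *
       * Verify that the intended shared library was picked up:
       *   ldd ./link_demo | grep cblas
       *
       * The run-time loader also needs to find it:
       *   export LD_LIBRARY_PATH=/usr/local/apps/myblas/lib:$LD_LIBRARY_PATH
       *   ./link_demo
       */
      #include <stdio.h>
      #include <cblas.h>   /* CBLAS prototypes, found via the -I path above */

      int main(void)
      {
          double x[3] = {1.0, 2.0, 3.0};
          double y[3] = {4.0, 5.0, 6.0};

          /* dot product computed by the library we linked against */
          double d = cblas_ddot(3, x, 1, y, 1);

          printf("dot product = %f (expect 32.0)\n", d);
          return 0;
      }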

    Graduate students, postdocs, faculty and staff who are likely to use parallel computation in research projects or theses are particularly invited.

    Before class starts, students who do not already have a Blade Center account are encouraged to have their advisors request accounts for them, so that they have permanent accounts. Faculty can request accounts for themselves and for their students online at http://www.ncsu.edu/itd/hpc/About/Contact.php

  • Fall 2016

    This free short course will meet for 4 hours total (over 2 sessions) on the Thursday and Friday of fall break (Thursday, October 6, 1:30 to 3:30 PM, and Friday, October 7, 1:30 to 3:30 PM). The two-session short course will be held in the AFH-110 OIT Training Lab. Parking permits are not required during break. AFH is on Avent Ferry Rd., across from the Mission Valley Mall, in the lower building south of the high-rise dorms. Here's a link to an online campus map: Avent Ferry Technology Center

    If you e-mail me (Gary Howell, gary_howell@ncsu.edu) in advance I can be sure there's enough space for you.

    Class notes can be downloaded at Intro to MPI. Sample codes can be downloaded from sample.tar.gz. See also Scientific Computing.

    Graduate students, postdocs, faculty and staff who are likely to use parallel computation in research projects or theses are particularly invited. Before class starts, students who do not already have a Blade Center account are encouraged to have their advisors request accounts for them, so that they have permanent accounts. Faculty can request accounts for themselves and for their students online at http://www.ncsu.edu/itd/hpc/About/Contact.php

    The NC State Linux cluster is an IBM Blade Center with around ten thousand cores available for high performance computing. This short course introduces the use of the machines, starting with how to log on and submit jobs.

    A focus is on how to compile and link to MPI (Message Passing Interface), the standard library for message passing parallel computation. Calls to MPI are embedded in Fortran, C, or C++ codes, enabling many processors to work together.

    Session 1. How to log into the HPC machines and submit jobs. Why to use parallel computation. Some simple MPI commands and example programs. The last half of the time will be spent getting an example code to run. A version of the lab is Lab 1.
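
    As a rough sketch of the kind of program used in Session 1 (a generic illustration, not the actual example from sample.tar.gz): each MPI rank identifies itself, and rank 0 collects a short message from every other rank with MPI_Send/MPI_Recv. The mpicc and mpirun commands in the comments are typical MPI wrappers; the exact commands and batch submission steps on the cluster may differ.

      /*
       * hello_mpi.c -- a generic "hello from each rank" example.
       * Typical build/run commands (the cluster's wrappers may differ):
       *   mpicc hello_mpi.c -o hello_mpi
       *   mpirun -np 4 ./hello_mpi     (or submitted through the batch queue)
       */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          int rank, size;
          MPI_Init(&argc, &argv);                 /* start up MPI */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my rank, 0..size-1 */
          MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

          if (rank != 0) {
              /* every nonzero rank sends its rank number to rank 0 */
              MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          } else {
              printf("Rank 0 of %d started\n", size);
              for (int src = 1; src < size; src++) {
                  int r;
                  MPI_Recv(&r, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  printf("Rank 0 heard from rank %d\n", r);
              }
          }

          MPI_Finalize();                         /* shut down MPI */
          return 0;
      }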

    Session 2. MPI collective communications. These can be simple and efficient. Considerations in efficient parallel computation. Running some more codes. The lab is Lab 2.
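
    A minimal sketch of a collective operation (again a generic illustration, not the course's Lab 2 code): each rank computes a partial sum, and a single MPI_Reduce call combines the pieces on rank 0, replacing an explicit send/receive loop.

      /*
       * reduce_demo.c -- sum of 1..N computed in parallel, with MPI_Reduce
       * combining the per-rank partial sums on rank 0.
       */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          const long N = 1000000;               /* illustrative problem size */
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* each rank sums its own strided share of 1..N */
          long local = 0;
          for (long i = rank + 1; i <= N; i += size)
              local += i;

          /* combine the partial sums onto rank 0 in one collective call */
          long total = 0;
          MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("sum 1..%ld = %ld (expect %ld)\n",
                     N, total, N * (N + 1) / 2);

          MPI_Finalize();
          return 0;
      }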

    Some additional materials online show how to use OpenMP to speed computations on multi-core computers. OpenMP parallelization is often fairly straightforward; a small sketch follows below. OpenMP OpenMP2 OpenMP3
    On the blade center, most blades have two motherboards. RAM is more easily accessible from one or the other of the motherboards (NUMA, or Non-Uniform Memory Access). For OpenMP to scale across both motherboards effectively, some more advanced tricks are needed. See for example Tutorial from HPC2012 by Georg Hager, Gerhard Wellein, and Jan Treibig, University of Erlangen-Nuremberg, Germany.
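
    As a small sketch of the straightforward case (a generic example, not taken from the linked notes): an OpenMP parallel loop with a reduction. The first-touch initialization in the comments is one of the standard NUMA tricks discussed in the tutorial above; the compiler flags and thread count are typical values, not cluster-specific settings.

      /*
       * omp_demo.c -- OpenMP parallel loop with a reduction.
       * Typical build/run (flags are typical, not cluster-specific):
       *   gcc -fopenmp omp_demo.c -o omp_demo
       *   OMP_NUM_THREADS=8 ./omp_demo
       */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void)
      {
          const long n = 10000000;
          double *a = malloc(n * sizeof(double));

          /* "First touch": initializing in a parallel loop places each page
           * of RAM near the thread (and NUMA domain) that will later use it. */
          #pragma omp parallel for
          for (long i = 0; i < n; i++)
              a[i] = 1.0 / (double)(i + 1);

          /* parallel sum; reduction(+:sum) combines the per-thread totals */
          double sum = 0.0;
          #pragma omp parallel for reduction(+:sum)
          for (long i = 0; i < n; i++)
              sum += a[i];

          printf("max threads = %d, sum = %f\n", omp_get_max_threads(), sum);
          free(a);
          return 0;
      }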

  • Spring 2016

    This free short course will meet for 4 hours total (over 2 sessions) on the Thursday and Friday of spring break (Thursday, March 10, 9:30 to 11:30 AM, and Friday, March 11, 9:30 to 11:30 AM). The two-session short course will be held in ITTC C in the D.H. Hill Library. Parking permits are not required during break. It's a bit tricky to find ITTC C, so ask at the front desk.

    If you e-mail me (Gary Howell, gary_howell@ncsu.edu) in advance I can be sure there's enough space for you.

    Class notes can be downloaded at Intro to MPI. Sample codes can be downloaded from sample.tar.gz. See also Scientific Computing.

    Graduate students, postdocs, faculty and staff who are likely to use parallel computation in research projects or theses are particularly invited. Before class starts, students who do not already have a Blade Center account are encouraged to have their advisors request accounts for them, so that they have permanent accounts. Faculty can request accounts for themselves and for their students online at http://www.ncsu.edu/itd/hpc/About/Contact.php

    The NC State Linux cluster is an IBM Blade Center with around ten thousand cores available for high performance computing. This short course introduces the use of the machines, starting with how to log on and submit jobs.

    A focus is on how to compile and link to MPI (Message Passing Interface), the standard library for message passing parallel computation. Calls to MPI are embedded in Fortran, C, or C++ codes, enabling many processors to work together.

    Session 1. How to log into the HPC machines and submit jobs. Why to use parallel computation. Some simple MPI commands and example programs. The last half of the time will be spent getting an example code to run. A version of the lab is Lab 1.

    Session 2. MPI collective communications. These can be simple and efficient. Considerations in efficient parallel computation. Running some more codes. The lab is Lab 2.

    Some additional materials online show how to use OpenMP to speed computations on multi-core computers. OpenMP parallelization is often fairly straightforward. OpenMP OpenMP2 OpenMP3
    On the blade center, most blades have two motherboards. RAM is more easily accessible from one or the other of the motherboards (NUMA, or Non-Uniform Memory Access). For OpenMP to scale across both motherboards effectively, some more advanced tricks are needed. See for example Tutorial from HPC2012 by Georg Hager, Gerhard Wellein, and Jan Treibig, University of Erlangen-Nuremberg, Germany.

  • Spring 2015 CSC302 -- Numerical Analysis

    CSC302


    Some previous courses introduce parallel debugging, profiling, and OpenMP (shared memory programming). See Previous Courses [Previous courses and links to class notes]

Last modified: March 10 2017 10:10:21.