Henry2 Linux cluster

The HPC cluster, named Henry2, is an Intel Xeon-based Linux cluster. Compute nodes span several generations of Intel Xeon processors, primarily in dual-socket blade servers. Henry2 also integrates a number of large-memory compute nodes and nodes with attached GPUs.

Work is scheduled on the cluster by the LSF queuing system. Various queues are available to all HPC accounts; submitted jobs are placed in queues based on how long they will run and how many processor cores they request.
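
As a minimal sketch (the job script contents and program name are illustrative assumptions, not Henry2 defaults), a batch job is described in a shell script with #BSUB directives:

    #!/bin/bash
    #BSUB -n 16          # request 16 processor cores
    #BSUB -W 24:00       # request 24 hours of wall-clock time
    #BSUB -o out.%J      # write stdout to out.<jobID>
    #BSUB -e err.%J      # write stderr to err.<jobID>
    ./my_program         # illustrative executable name

and submitted with bsub < submit.sh (assuming the script is saved as submit.sh); LSF then routes the job to a queue whose core-count and run-time limits cover the request.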

Availability of resources can be monitored using the Cluster Status pages.

News, updates, outages, and maintenance schedules are posted on the HPC website.

A list of available software packages and installed applications is also published on the HPC website.

Computing resources available to all accounts

For distributed memory jobs:

  • up to 400 cores for up to 2 hours
  • up to 256 cores for up to 2 days
  • up to 56 cores for up to 4 days
  • up to 16 cores for up to 15 days
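
A distributed memory job runs across multiple nodes, typically under MPI. A sketch within the 256-core, 2-day tier above (the executable name is an illustrative assumption, and the mpirun line assumes an LSF-aware MPI build):

    #!/bin/bash
    #BSUB -n 256              # 256 cores, within the 2-day tier
    #BSUB -W 48:00            # 2 days of wall-clock time
    #BSUB -o mpi_out.%J
    mpirun ./my_mpi_program   # an LSF-aware MPI reads the allocated host list from LSF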

For shared memory jobs:

  • up to 24 cores for up to 8 days
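
A shared memory job must keep all of its cores on a single node, which LSF expresses with a span resource requirement. A sketch for an OpenMP-style run at the 24-core, 8-day limit (the program name is an illustrative assumption):

    #!/bin/bash
    #BSUB -n 24
    #BSUB -W 192:00              # 8 days of wall-clock time
    #BSUB -R "span[hosts=1]"     # place all 24 cores on one node
    export OMP_NUM_THREADS=24    # one OpenMP thread per allocated core
    ./my_openmp_program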

For GPU jobs:

  • GPUs of various models for up to 10 days

For guidance on querying LSF to display the current resource limits available on each queue, see the LSF FAQ.
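
How GPUs are requested is site-specific; the following is a sketch assuming a cluster whose LSF (version 10.1 or later) accepts the -gpu option (the program name is an illustrative assumption; consult the LSF FAQ above for the exact resource strings Henry2 expects):

    #!/bin/bash
    #BSUB -n 1
    #BSUB -W 240:00        # 10 days of wall-clock time
    #BSUB -gpu "num=1"     # request one GPU (LSF 10.1+ syntax)
    ./my_gpu_program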

Storage resources available to all accounts

  • 1 GB of home directory space per account
    The home directory is for source code, scripts, and small executables.
  • 10 TB of scratch space per project
    Scratch space is temporary space for running applications and working with large data.
  • 1 TB of mass storage space per project
    Mass storage is long-term storage for files not being actively used.
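
A common pattern that follows from these roles is to stage work through scratch: keep the executable and scripts in the home directory, run in scratch, and copy results to mass storage when done. A sketch (every path here is an illustrative assumption; the actual mount points are site-specific):

    #!/bin/bash
    #BSUB -n 16
    #BSUB -W 24:00
    SCRATCH=/scratch/$USER/run_$LSB_JOBID    # illustrative scratch path; LSB_JOBID is set by LSF
    mkdir -p "$SCRATCH" && cd "$SCRATCH"
    cp ~/input.dat .                         # small input staged from home
    ~/my_program input.dat                   # illustrative executable kept in home
    cp results.dat /mass/$USER/              # illustrative mass storage path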

Partner Compute Nodes

If existing compute resources are inadequate for the needs of a project, additional compute nodes can be purchased under the HPC Partner Program.
