henry2 Linux Cluster

The HPC Cluster, named henry2, runs the Linux operating system on compute nodes that typically have two Intel Xeon multi-core processors. Some nodes also have Nvidia GPUs.

Work is scheduled on the cluster using a queuing system (LSF). Various queues are available to all HPC accounts. Submitted jobs are placed in queues based on how long they will run and how many processor cores they will use. The following queue resources are available to all accounts:

Computing Resources Available to All Accounts

For distributed memory jobs:

  • up to 400 cores for up to 2 hours
  • up to 256 cores for up to 2 days
  • up to 56 cores for up to 4 days
  • up to 16 cores for up to 15 days

For shared memory jobs:

  • up to 24 cores for up to 8 days

For GPU jobs:

  • GPUs of various models for up to 10 days
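As an illustration, a distributed memory job fitting the 2-hour limit above might be submitted with an LSF batch script such as the following sketch. The `#BSUB` directives shown are standard LSF options; the executable name `my_app` and the script contents are placeholders, and cluster-specific details (modules to load, MPI launcher) will vary.

```shell
#!/bin/bash
#BSUB -n 16        # request 16 cores (within the up-to-400-cores, 2-hour tier)
#BSUB -W 2:00      # wall-clock limit of 2 hours (hh:mm)
#BSUB -o out.%J    # standard output file; %J expands to the job ID
#BSUB -e err.%J    # standard error file

mpirun ./my_app    # my_app is a placeholder for your MPI executable
```

The script would be submitted with `bsub < script.sh`; LSF then places the job in an appropriate queue based on the cores (`-n`) and walltime (`-W`) requested.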

Storage Resources Available to All Accounts

  • 1 GB of home directory space per account
    space for startup scripts, source code, and executables for small applications
  • 10 TB of scratch space per project
    temporary space for data used by running applications
  • 1 TB of mass storage space per project
    long-term storage for files not being actively used

Copyright © 2018 · Office of Information Technology · NC State University · Raleigh, NC 27695