Frequently Asked Questions

What is HPC?

NC State University High-Performance Computing (HPC) is part of the initiative to provide state-of-the-art support for research and academic computing at NC State. HPC provides NC State students and faculty with entry- and medium-level high-performance computing facilities for research and education, along with consulting support and scientific workflow support.

Who may use HPC?

HPC projects are available to all NC State faculty members on request, and faculty members may authorize any number of Unity IDs for access to HPC resources under their project; however, a Unity ID can be under only one project at any given time.

Students without a faculty advisor should contact HPC Staff to discuss options for obtaining temporary access for the purposes of learning HPC.

There is no charge from OIT for HPC projects.

Who should use HPC?

HPC is used when the compute resources necessary to perform a computation in a reasonable amount of time exceed standard local computing resources.

What are prerequisites to using HPC?

Before advancing to HPC, a user should be comfortable with basic computing concepts (e.g., hardware, software, file systems) and have a general awareness of networking and security. A user must know, or commit to learning, the basics of Linux, and must learn how to properly use the HPC system through the web resources, the training events, and by contacting HPC staff for additional guidance.
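As a sense of what "the basics of Linux" means in practice, the sketch below shows a few everyday shell commands a new user should be comfortable with. The directory and file names here are purely illustrative.

```shell
# Create a working directory and move into it.
mkdir -p ~/hpc_sandbox
cd ~/hpc_sandbox

# Create a small text file and inspect it.
echo "hello from the shell" > notes.txt
cat notes.txt        # print the file's contents
ls -l                # list files with sizes, permissions, and timestamps

# To learn more about any command, read its manual page, e.g.:
#   man ls            (press q to quit the pager)
```

Comfort with commands like these, plus a text editor such as nano or vim, is usually enough to start working through the HPC training materials.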

How does one request access to HPC?

For research projects, NC State faculty members can request an HPC project by clicking the Get Access button at the bottom of the page. Faculty project owners may add additional Unity IDs to an existing project using the research computing web application.

To request access for instructional use, send email to HPC Support with the course number and section. Class rolls will be pulled from Registration and Records, and the Unity IDs of registered students will be enabled for access.

Students without a faculty advisor should contact HPC Support to discuss options for obtaining temporary access for the purposes of learning HPC.

New projects and access for new Unity IDs are created once daily at 6:30 p.m.

Reading and accepting the HPC AUP is a requirement for gaining access to Henry2.

This video on the HPC Acceptable Use Policy explains some of the technical details of the AUP including the difference between login nodes and compute nodes, and it discusses some of the actions that would result in violating the AUP.

HPC Acceptable Use Policy

OIT's HPC services are provided to support the core university mission of instruction, research, and engagement. All use of HPC services must be under or in support of a project directed by a faculty member. The faculty member directing each project is ultimately responsible for ensuring that HPC services are used appropriately by all accounts associated with their project. Students wanting to use HPC resources for academic purposes who are not working with a faculty member may request access through an arrangement with the NC State Libraries.

OIT's HPC services are provided on shared resources. To ensure equitable access for all HPC projects, certain restrictions on acceptable use are necessary. All use of HPC services must adhere to the following guidelines. Accounts repeatedly violating these guidelines will have their HPC access disabled.

  1. Access to the HPC Linux cluster (Henry2) must be via the Secure Shell protocol (SSH) to the respective login nodes, or via other interfaces intended for direct access (e.g., web portals). All access to compute nodes must go through the job scheduler LSF; direct access to compute nodes is not permitted.
    • A maximum number of concurrent login sessions will be enforced on login nodes.
    • SSH sessions that have been idle for an extended period will be automatically disconnected.
    • These limits will be adjusted as necessary to manage login node resources and to comply with applicable security standards.
  2. The purpose of a login node is to provide access to the cluster via SSH and to prepare for running a program (e.g., editing files, compiling, and submitting batch jobs). Processes that use a non-trivial amount of compute or memory resources must not be run on any of the shared login nodes. These processes may be run via LSF or on a dedicated login node reserved via HPC-VCL. Processes running on login nodes that have used significant CPU time or that are using significant memory resources will be terminated without notice.
  3. Scratch file systems (/share*, /gpfs_share) are intended as data storage for running jobs and for files under active analysis. Using techniques to protect files from the automated purge of the scratch file systems is not permitted. Files on these file systems may be deleted at any time and are not backed up; never keep the only copy of important files there.
  4. To the extent feasible, jobs run on HPC resources are expected to make efficient use of the resources.
  5. Resource requests for jobs should be as accurate as possible, even if accurate requests result in longer queue waiting times (e.g., a job that requires the majority of a node's memory should request exclusive use of the node, even though this will likely increase its time waiting in the queue).
  6. Compute nodes which have lost contact with LSF for more than a few hours or which are unreachable from the console will be rebooted without notice.
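To illustrate the workflow the guidelines above describe — connect to a login node by SSH, then hand all compute work to LSF — here is a minimal sketch of an LSF batch script and its submission. The queue-less directives shown (`-n`, `-W`, `-x`, `-o`) are standard LSF `bsub` options; the core count, wall-clock limit, program name, and the login hostname are illustrative assumptions, not site-specific values.

```shell
# On the cluster (reached via something like: ssh unityid@<login-node>),
# write a batch script with LSF #BSUB directives. All values are examples.
cat > myjob.sh <<'EOF'
#!/bin/bash
#BSUB -n 8              # request 8 cores
#BSUB -W 2:00           # wall-clock limit of 2 hours
#BSUB -x               # exclusive use of the node (see guideline 5)
#BSUB -o myjob.%J.out   # write output to a file named after the job ID
./my_program            # placeholder for the actual computation
EOF

# The script would then be submitted to the scheduler with:
#   bsub < myjob.sh
cat myjob.sh
```

Running the computation this way — rather than launching `./my_program` directly on a login node — keeps heavy CPU and memory use on the compute nodes, where guideline 2 requires it.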