HPC Acceptable Use Policy
OIT's HPC services are provided to support the core university mission of instruction, research, and engagement. All use of HPC services must be under, or in support of, a project directed by a faculty member. The faculty member directing each project is ultimately responsible for ensuring that HPC services are used appropriately by all accounts associated with their project.
OIT's HPC services are provided on shared resources. To ensure equitable access for all HPC projects, certain restrictions on acceptable use are necessary. All use of HPC services must adhere to the following guidelines. Accounts that repeatedly violate these guidelines will have their HPC access disabled.
- Access to the HPC Linux cluster (henry2) must be via ssh to the respective login nodes, or via other interfaces intended for direct access (e.g., web portals). All access to compute nodes must be via LSF; direct access to compute nodes is not permitted.
- A maximum number of concurrent login sessions is enforced on the login nodes.
- ssh sessions that have been idle for an extended period will be automatically disconnected.
- These limits will be adjusted as necessary to manage login node resources and to comply with applicable security standards.
- Processes that use significant compute or memory resources (more than a few minutes of CPU time, or more than 100 MB of memory) must not be run on any of the shared login nodes. Such processes should instead be run via LSF or on a dedicated login node reserved through VCL. Processes on shared login nodes that have consumed significant CPU time, or that are using significant memory, will be terminated without notice.
- Scratch file systems (/share*, /gpfs_share) are intended as data storage for running jobs or for data under active analysis. Using techniques to shield files from the automated purge of scratch file systems is not permitted. Files on these file systems may be deleted at any time and are not backed up; do not keep the only copy of important files there.
- To the extent feasible, jobs run on HPC resources are expected to make efficient use of those resources.
- Resource requests for jobs should be as accurate as possible, even if such requests result in longer queue waiting times (e.g., jobs that require the majority of a node's memory should use the exclusive option, even though this will likely increase the time the job waits in the queue).
- Compute nodes that have lost contact with LSF for more than a few hours, or that are unreachable from the console, will be rebooted without notice.
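As an illustration of the guidelines above, a batch job might be submitted through LSF with explicit, accurate resource requests. This is a minimal sketch only: the wall-clock limit, core count, memory figure, and application binary below are illustrative assumptions, not site defaults, so consult the henry2 how-to page for the values appropriate to your project.

```shell
#!/bin/bash
# Sketch of an LSF batch script with explicit resource requests.
# All values here are illustrative assumptions, not site defaults.
#BSUB -n 8                   # number of cores/tasks requested
#BSUB -W 120                 # wall-clock limit in minutes
#BSUB -R "rusage[mem=4000]"  # memory actually needed, in MB
                             # (memory units are site-configurable)
#BSUB -x                     # exclusive node use: request only when the
                             # job needs the majority of a node's memory
#BSUB -o out.%J              # stdout file (%J expands to the job ID)
#BSUB -e err.%J              # stderr file

# The computational work runs under LSF on compute nodes,
# never directly on the shared login nodes.
./my_simulation              # hypothetical application binary
```

A script like this would be submitted from a login node with `bsub < jobscript.sh`; the work itself then runs on compute nodes allocated by LSF, consistent with the access rules above.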
For details on logging in to HPC resources, using LSF, specifying resource requirements, etc., visit the how-to page for the henry2 cluster or the how-to page for the sam cluster. Information on available storage options is also available on the getting started with HPC storage page.
Last modified: February 01 2018 15:16:51.