Compute Nodes

Partners purchase compatible HPC compute nodes. OIT houses the nodes in a secure campus data center with appropriate power, cooling, and networking. Partner nodes share software licenses and storage infrastructure with the other HPC cluster nodes.

A dedicated queue is created for the partner with access to the quantity of compute resources that the partner added to the cluster. The partner, and any other Unity IDs the partner specifies, have access to the dedicated queue. Queue parameters are set according to the partner's needs.

In addition to the dedicated queue, the partner's project receives increased priority in all queues through a fairshare scheduling methodology.

Compute resources not actively in use by the partner are made available to other NC State HPC projects for short-duration jobs.
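The scheduling arrangement described above (a dedicated partner queue, fairshare-weighted priority in all queues, and backfill of idle partner nodes by short jobs) can be sketched in scheduler configuration. The fragment below is purely illustrative and assumes a Slurm-style scheduler; the actual cluster may use a different scheduler, and all partition, account, and node names here are hypothetical.

```
# slurm.conf fragment (illustrative only; names and values are hypothetical)

# Cluster-wide: fairshare-weighted multifactor priority, and allow
# higher-priority-tier partitions to preempt lower-tier ones.
PriorityType=priority/multifactor
PriorityWeightFairshare=100000
PreemptType=preempt/partition_prio
PreemptMode=REQUEUE

# Dedicated partition: spans the nodes the partner added, restricted to
# the partner's account, at the higher priority tier.
PartitionName=partner_lab Nodes=node[101-108] AllowAccounts=partner_lab PriorityTier=2 MaxTime=14-00:00:00 State=UP

# Backfill partition over the same nodes: open to all projects for
# short-duration jobs, requeued when the partner needs the resources.
PartitionName=backfill Nodes=node[101-108] AllowAccounts=ALL PriorityTier=1 MaxTime=04:00:00 State=UP
```

Under fairshare weighting, a project's priority in every queue reflects its usage relative to its contributed share, which is one common way to implement the "increased priority in all queues" behavior described above.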


Storage

Additional storage capacity in the shared HPC storage system can be leased at either an annual rate or a life-of-the-array rate.

Partner storage allocations are not over-allocated: a terabyte of partner storage allocation is provisioned with a terabyte of usable storage in the storage system, dedicated for use by the partner.

HPC Partner storage is accessible only from the HPC cluster and is intended to provide the space needed to make use of the HPC compute resources.


Advantages

The HPC compute node Partner Program offers compelling advantages for both the faculty partner and the university.

Partner Advantages (services provided by university)
  • secure space
  • power (including UPS and diesel generator)
  • cooling
  • rack (including rack power distribution)
  • network infrastructure (including message passing network for distributed memory nodes)
  • system administration and maintenance
  • priority access to additional compute resources
  • access to shared storage and file systems
  • access to university licensed software (compilers, debuggers, optimized math libraries, performance analyzers, ...)
  • system and computational science support from HPC staff
University Advantages
  • multiplies resources provided by university HPC investment
  • increased HPC resource utilization yields more efficient use of university-wide research computing dollars
  • scaling benefits reduce university-wide cost of HPC facilities (a few large power and cooling units vs. many small power and cooling units)
  • scaling benefits reduce university-wide cost of HPC system support (the incremental system administration and maintenance load for compatible hardware is very small; it takes nearly the same work to operate a 100-processor cluster as it does an 8-processor cluster)
Copyright © 2018 · Office of Information Technology · NC State University · Raleigh, NC 27695