3 May 2020 - Python modules have been removed
28 April 2020 - rtx2080 GPU node is up.
25 April 2020 - rtx2080 GPU node is down for repairs.
25 April 2020 - Extended Maintenance Weekend
21 April 2020 - rtx2080, gtx1080, p100 GPU nodes up
13 April 2020 - Python modules will be removed
12 March 2020 - LSF on Henry2
11 March 2020 - LSF on Henry2
10 March 2020 - Cooling issue in the Data Center
1-2 February 2020 - OIT quarterly extended maintenance
27 January 2020 - /ncsu/volume1 maintenance
25 January 2020 - Software and module updates
22 January 2020 - Henry2 Network Interruption
3 October 2019 - cmake update and mpi module rename
23 September 2019 - Change in requesting memory resources
23 September 2019 - New bsub GPU syntax
23 September 2019 - GPU usage Update
15 September 2019 - LSF Update
08 September 2019 - LSF Update
24 June 2019 - jhl* compute nodes network disruption
10 April 2019 - CLC Genomics Server Update
22 March 2019 - compute nodes jhl025-jhl028 unavailable
3 February 2019 7pm - /gpfs_share partition is back on-line
3 February 2019 10am - /gpfs_partners partition is back on-line
1 February 2019 7pm - /gpfs_share and /gpfs_partners partitions are off-line
22 January 2019 - NFS exports unavailable & VCL-HPC reservations disabled
02-03 January 2019 - jhl* compute nodes unavailable
01 December 2018 - Gurobi License
30 November - 3 December 2018 - /rsstu is not available on the HPC cluster
19 October 2018 - Henry2 OS upgrade
22 September 2018 - Henry2 core Ethernet switch maintenance
30 July 2018 - Monthly Maintenance on login nodes
9 July 2018 - Maintenance on login01, login02, login03 (.hpc.ncsu.edu)
2 July 2018 - Maintenance on login04.hpc.ncsu.edu
27 June 2018 - Web Application Broken
23 June 2018 - New Top Level HPC Web Pages
1 June 2018 - VMD 1.9.3
module load vmd/1.9.3
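For scripted, non-interactive use, VMD can also be run in text mode; a minimal sketch, where analysis.tcl is a placeholder name for your own Tcl script:
module load vmd/1.9.3
# -dispdev text runs VMD without opening a graphical display
vmd -dispdev text -e analysis.tcl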
1 June 2018 - Amber 18
module load PrgEnv-intel/2017.1.132
module load amber/18
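As an illustrative sketch only (the resource values and input file names md.in, prmtop, and inpcrd are placeholders, not an official recipe), a parallel Amber 18 run under LSF might look like:
#!/bin/bash
#BSUB -n 16                  # MPI ranks; placeholder value
#BSUB -W 2:00                # wall-clock limit; placeholder value
#BSUB -o amber.%J.out
module load PrgEnv-intel/2017.1.132
module load amber/18
# pmemd.MPI is Amber's parallel MD engine
mpirun pmemd.MPI -O -i md.in -p prmtop -c inpcrd -o md.out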
1 June 2018 - PGI 18.4
module load PrgEnv-pgi
Older versions can be selected by specifying the version explicitly, e.g.
module load PrgEnv-pgi/18.1
9 May 2018 - Remote Desktop connection with HPC-VCL
9 April 2018 - /share, /gpfs_common, /gpfs_backup
22 March 2018 - New Abaqus Version
module load abaqus
will set up the environment to use Abaqus 2018. Run the command abaqus to invoke Abaqus.
Previous versions can be accessed using
module load abaqus/2016
module load abaqus/6.13-2
and the target of the abaqus command will be adjusted accordingly.
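As a hedged illustration (the job name, input deck model.inp, and resource values are placeholders), an Abaqus batch job might look like:
#!/bin/bash
#BSUB -n 4                   # CPU count; placeholder value
#BSUB -W 1:00                # wall-clock limit; placeholder value
#BSUB -o abaqus.%J.out
module load abaqus
# "interactive" keeps the abaqus driver in the foreground until the analysis completes
abaqus job=myjob input=model.inp cpus=4 interactive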
18 March 2018 - New Portland Group Compiler Version
module load PrgEnv-pgi
will set up the environment to use the new version.
[edsills@login01 ~]$ module load PrgEnv-pgi
[edsills@login01 ~]$ module list
Currently Loaded Modulefiles:
  1) pgi/18.1                          3) openmpi/2.1.2/2018
  2) netcdf/4.5.0/openmpi-2.1.2/2018   4) PrgEnv-pgi/18.1
[edsills@login01 ~]$ which pgf90
/usr/local/pgi/linux86-64/18.1/bin/pgf90
[edsills@login01 ~]$ which mpif77
/usr/local/pgi/linux86-64/2018/mpi/openmpi-2.1.2/bin/mpif77
Older versions can be selected by specifying the version explicitly, e.g.
module load PrgEnv-pgi/16.7
We strongly recommend not using any version older than 15.1.
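To confirm the environment is set up, one can compile a trivial test with the compiler and MPI wrapper shown above; hello.f90 and mpi_hello.f are placeholder source files:
pgf90 -O2 -o hello hello.f90          # PGI Fortran compiler
mpif77 -O2 -o mpi_hello mpi_hello.f   # Open MPI wrapper from the PGI environment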
03-04 March 2018 - Network Switch Reboot
Please plan job submissions to avoid having critical jobs running this weekend.
04 January 2018 - LSF Update
There should be no impact on running jobs. New job submissions and new job starts will be temporarily disabled while the upgrade is in progress. The interruption to new jobs is expected to last less than 4 hours.
01 December 2017 - New LSF Resource Definitions
26 October 2017 - Gaussian 16.a03 Installed
22 August 2017 - Shared file system upgrade/change completed
NOTE: You no longer have read permission in /gpfs_common and thus cannot run ls there. To access your data on the old shared file system, you need to type the full path to your own subdirectory, such as /gpfs_common/old_share/your-user-name, when you cd on the login nodes or when you provide a folder name in WinSCP.
The new shared file system you can access is
/share/your-group-name
where your-group-name is the first group listed when you type the command "groups". You can cd into /share/your-group-name and use the command
mkdir your-user-name
to create your own subdirectory, where your-user-name is your HPC username. Store data and run jobs in that subdirectory.
Each group has a 10 TB quota for its group directory /share/your-group-name. As before, /share is not backed up, and files that have not been accessed recently are automatically deleted (the purge is currently set to remove files that have not been accessed for 30 days).
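Putting those steps together, a minimal sketch (substitute your actual group and user names):
groups                      # the first group listed is your-group-name
cd /share/your-group-name
mkdir your-user-name        # create your personal subdirectory
cd your-user-name           # store data and run jobs here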
21 August 2017 - Henry2 Cluster Unavailable
Since many jobs would be impacted anyway, the outage will also be used to move /gpfs_common and /gpfs_backup to new hardware. The originally announced maintenance window is therefore being extended to 12 hours, but this avoids the need for another interruption to running jobs in the near future.
A new organization of /share, /share1, and /share2 will be implemented on the new storage, providing additional scratch quota to all HPC projects.
Job scripts will need to be changed to reflect the new directory structure.
11 August 2017 - Henry2 /home quotas
5 July 2017 - Henry2 Cluster Unavailable
Jobs running at 8am will be lost. Queues will be disabled over the July 4th holiday to minimize the number of lost jobs.
27 June 2017 - Henry2 /home and /usr/local
23 April 2017 - Network maintenance
Queues will be paused Friday, April 21 around 5pm to reduce the number of jobs running Sunday afternoon. Jobs that try to access storage during the Sunday maintenance will likely fail.
6 April 2017 - Henry2 logins