EEMCS-HPC specific resources (GPU), features and partitions

resources

The generic consumable resources (GRES) are requested with the --gres option, for example for a single GPU:

#SBATCH --gres=gpu:1 

Keep in mind that for GPUs you need to load the module of the required CUDA version!

Once you request GPU resources, the scheduler sets the environment variable CUDA_VISIBLE_DEVICES for your job.

It points to the GPU(s) assigned to your job; use only those and no others!
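A minimal single-GPU job script could look like the sketch below; the CUDA module name, time limit and file names are assumptions, so adapt them to your situation (check module avail for the exact CUDA module on the cluster).

#!/bin/bash
#SBATCH --job-name=gpu-test          # job name shown in the queue
#SBATCH --gres=gpu:1                 # request one GPU
#SBATCH --time=00:10:00              # wall-clock limit (assumed value)
#SBATCH --output=gpu-test-%j.out     # output file, %j expands to the job id

module load cuda                     # exact module name/version is cluster specific, see "module avail"

# The scheduler has already set CUDA_VISIBLE_DEVICES to the GPU(s) assigned to this job.
echo "Assigned GPU(s): $CUDA_VISIBLE_DEVICES"
nvidia-smi                           # show the assigned GPU(s)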

Some GPU boards are fitted with NVLink modules, which lets you effectively double the GPU memory and computing power. If you request two GPUs with NVLink, you need to force socket binding using the following option:

#SBATCH --sockets-per-node=1 
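For instance, a sketch of a two-GPU NVLink job (time limit and module name are assumptions):

#!/bin/bash
#SBATCH --gres=gpu:2                 # request two GPUs
#SBATCH --sockets-per-node=1         # bind to one socket so the NVLink-coupled pair is used
#SBATCH --time=01:00:00              # wall-clock limit (assumed value)

module load cuda                     # exact module name/version is cluster specific
nvidia-smi topo -m                   # print the GPU topology; NVLink-connected pairs show up as NV links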

features (constraint)

The available features include the GPU type (e.g. a40).

For example, to force the job onto A40 GPUs only:

#SBATCH --constraint=a40 
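The constraint is combined with the GPU request itself, as in this minimal sketch:

#SBATCH --gres=gpu:1                 # request one GPU ...
#SBATCH --constraint=a40             # ... and require it to be on an a40 node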

partitions

The HPC/SLURM cluster contains multiple common partitions:

Partition name   Nodes                               Available to
main             ctit080..91                         All
debug            all                                 admin
dmb              ctit084..085, 88, 92, hpc-node07    eemcs-dmb
ram              ctit086, ctit089                    eemcs-ram
bdsi             ctit087                             bms-bdsi
mia              ctit090..91, 93..94, hpc-node05     eemcs-mia
am               hpc-node01..04                      eemcs-(dmmp/macs/mast/mia/mms/sor/stat)
mia-pof          hpc-node06                          eemcs-mia & tnw-pof
students         hpc-node08                          eemcs-students

The main partition is the default partition and submits jobs to any of the nodes. The debug partition is for testing purposes only. Access to the remaining partitions is limited to the funders during the first year of investment; their nodes can be reached using the corresponding funder partition.

Specifying multiple partitions is also possible. For example:

#SBATCH --partition=main,dmb
#SBATCH --partition=main,am,mia
#SBATCH --partition=main,students
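Putting it together, a sketch of a job that may start on either main or dmb (resource values, module name and application command are assumptions):

#!/bin/bash
#SBATCH --partition=main,dmb         # the scheduler uses whichever partition can start the job first
#SBATCH --gres=gpu:1                 # request one GPU
#SBATCH --time=00:30:00              # wall-clock limit (assumed value)

module load cuda                     # exact module name/version is cluster specific
srun python train.py                 # hypothetical application command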

See the EEMCS-HPC Hardware page for all the partition definitions.