EEMCS-HPC specific resources (GPU), features and partitions

resources

Generic consumable resources are requested with the --gres option. For example, to request one GPU:

#SBATCH --gres=gpu:1 

Keep in mind that for GPUs you need to load the module of the required CUDA version!

Once you request one or more GPUs, the scheduler will set the environment variable CUDA_VISIBLE_DEVICES for your job.

This variable points to the GPU(s) assigned to your job; use only those and no others!
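
Putting these pieces together, a minimal GPU job script could look like the sketch below. The job name, time limit, and CUDA module version are assumptions; check module avail for what is actually installed on the cluster.

#!/bin/bash
#SBATCH --job-name=gpu-test        # hypothetical job name
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --time=00:10:00            # illustrative time limit

# Load the required CUDA version; the module name below is an
# assumption, check `module avail` for the installed versions.
module load cuda/12.1

# The scheduler sets CUDA_VISIBLE_DEVICES to the GPU(s) assigned to this job.
echo "Assigned GPU(s): $CUDA_VISIBLE_DEVICES"

# Run your GPU application here.
nvidia-smi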

Some GPU boards are fitted with NVLink modules, which allow you to double the GPU memory and computing power. If you request two NVLink-connected GPUs, you need to force socket binding using the following option:

#SBATCH --sockets-per-node=1 
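
For example, a two-GPU NVLink request could be sketched as follows (whether a node actually has an NVLink pair depends on the hardware; see the EEMCS-HPC Hardware page):

#SBATCH --gres=gpu:2            # request a pair of GPUs
#SBATCH --sockets-per-node=1    # keep both GPUs on the same socket, as required for NVLink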

features (constraint)

Nodes are tagged with features that you can select using the --constraint option. For example, to force A40 GPUs only:

#SBATCH --constraint=a40 
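
To find out which features are defined on which nodes, you can query SLURM directly; this is standard sinfo usage rather than anything cluster-specific:

sinfo -o "%20N %f"    # list node names and the features available on them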

partitions

The HPC/SLURM cluster contains multiple common partitions:

Partition name  Nodes                                         Details  Available to
main            ctit[080-094],caserta,hpc-node[01-07]                  All
debug           ctit[080-094],caserta,hpc-node[01-12,14-19]           admin

As well as multiple additional partitions:

Partition name  Nodes                                Details  Available to
am              hpc-node[01-04]                               eemcs-(dmmp/macs/mast/mia/mms/sor/stat)
bdsi            ctit087                                       bms-bdsi
bmpi            hpc-node[15-16]                      gpu      tnw-bmpi
bss             hpc-node15                           cpu      eemcs-bss
dmb             ctit[084-085,092],hpc-node[07,09]             eemcs-dmb
mia             ctit[090-091,093-094],hpc-node05              eemcs-mia
mia-pof         hpc-node06                                    eemcs-mia & tnw-pof
tfe             hpc-node[16-19]                      cpu      et-tfe
ps              hpc-node[11-12,14]                            eemcs-ps
ram             ctit[086,089]                                 eemcs-ram
students        hpc-node08                                    eemcs-students

The main partition is the default; jobs submitted to it can run on any of its nodes. The debug partition is for testing purposes only.

Access to the additional partitions is limited to the funders during the first year of the investment; their nodes can be reached using the funder partitions listed above.
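
To inspect the partitions and their nodes from the command line, standard SLURM tooling can be used:

sinfo -s    # one-line summary of each partition and its nodes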

Specifying multiple partitions is also possible. For example:

#SBATCH --partition=main,dmb
#SBATCH --partition=main,am,mia
#SBATCH --partition=main,students
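
When several partitions are listed, SLURM places the job in whichever of them can start it earliest. A minimal sketch of such a job script (job name, time limit, and command are placeholders):

#!/bin/bash
#SBATCH --job-name=multi-part       # hypothetical job name
#SBATCH --partition=main,dmb        # whichever partition starts the job first is used
#SBATCH --time=01:00:00             # illustrative time limit

# SLURM_JOB_PARTITION holds the partition the job ended up in.
echo "Running in partition: $SLURM_JOB_PARTITION"
srun ./my_program                   # placeholder for your actual command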

See the EEMCS-HPC Hardware page for all the partition definitions.