A step-by-step guide to using COMSOL Multiphysics in batch mode on the HPC/Slurm cluster.
mkdir ~/comsol
cd ~/comsol
cp /deepstore/software/examples/run-comsol.sbatch .
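The example script maintained on the cluster is authoritative and may differ; as a minimal sketch, a batch script of this kind could look like the following, where the module name comsol/v5.6, the model file model.mph, and the requested resources are assumptions for illustration:

#!/bin/bash
#SBATCH --job-name=comsol-batch
#SBATCH --cpus-per-task=2
#SBATCH --time=1-0:0:0

# Module name is an assumption; check 'module avail comsol' for installed versions.
module load comsol/v5.6

# Run COMSOL in batch mode; model.mph is a placeholder for your own model file.
comsol batch -np ${SLURM_CPUS_PER_TASK} -inputfile model.mph -outputfile model-solved.mph -batchlog model.log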
sbatch run-comsol.sbatch
A step-by-step guide to using the COMSOL Multiphysics Server on the HPC/Slurm cluster.
mkdir ~/comsol
cd ~/comsol
cp /deepstore/software/examples/run-comsol-server.sbatch .
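Again, the example script on the cluster is authoritative; a rough sketch of what such a server script might contain, assuming a module named comsol/v5.6, is shown below. The --output directive matches the comsol-server-<job-id>.log naming used later in this guide (%j expands to the job id):

#!/bin/bash
#SBATCH --job-name=comsol-server
#SBATCH --output=comsol-server-%j.log
#SBATCH --cpus-per-task=2
#SBATCH --time=1-0:0:0

# Module name is an assumption; check 'module avail comsol'.
module load comsol/v5.6

# Start the COMSOL server; it reports the port it listens on in the log.
comsol mphserver -np ${SLURM_CPUS_PER_TASK}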
Set a server password using the comsolpasswd command; the password will be stored in ~/.comsol/<version>/login.properties.
comsolpasswd
sbatch run-comsol-server.sbatch
The command will print a <job-id>; wait until the job is actually running. Use this <job-id> to check the contents of the comsol-server-<job-id>.log file. This file will contain the details required to connect to the COMSOL server using your local COMSOL client.
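For example, with job id 164362 (a placeholder), you can check the job state and follow the log file like this:

squeue -j 164362                  # shows the job state (PD = pending, R = running)
tail -f comsol-server-164362.log  # follow the log until the connection details appear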
Use your local COMSOL client to connect to your COMSOL server, using the server name, port, and username listed in the log file. The content will probably look like this:
Adding Matlab r2019a
Adding Comsol MultiPhysics v5.3a
Server : ctit083.ewi.utwente.nl
Port : 8805
Username : laanstragj
COMSOL Multiphysics server 5.3a (Build: 348) started listening on port 8805
Use the console command 'close' to exit the program
The last two lines of the log file confirm that the COMSOL server is running.
When you are finished, stop the server by cancelling the job:
scancel <job-id>
A step-by-step guide to using COMSOL Multiphysics interactively on the HPC/Slurm cluster.
This method uses the sinteractive wrapper; whether a session starts immediately depends on the availability of the requested resources!
Load the Slurm utils software module (this provides sinteractive):
module load slurm/utils
Request resources using sinteractive (optionally modify the default resources; the default is 2 CPUs for 1 hour).
sinteractive --cpus-per-task=4 --time=1-0:0:0
Once the resources are allocated, you will get a response similar to the following:
srun: job 164362 queued and waiting for resources
srun: job 164362 has been allocated resources
You will then get a shell prompt on one of the compute nodes, where you can load the COMSOL module and start COMSOL:
module load comsol/v5.6
comsol
exit
Don't forget to release the requested resources when you are finished with COMSOL (use the exit command)!