Using SLURM
By specifying -batchsystem slurm on the command line, Simcenter STAR-CCM+ automatically distributes processes to all CPUs/cores allocated to the job.
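As a minimal sketch of the corresponding launch command, assuming the installation lives under /path/to/star/bin and the simulation file is named my.sim (both placeholders):
$ /path/to/star/bin/starccm+ -batchsystem slurm -batch step my.sim
No machine file or process count has to be passed; the processes are distributed according to the SLURM allocation.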
General information about job submission with batch systems can be found in Working in Unix-Based Batch Systems.
Please also refer to the specific cluster documentation and the batch system documentation for additional and required parameters or settings.
Useful Commands for Job Submission and Monitoring
Display SLURM Release Version
$ sinfo -V
slurm 20.02.6
Submit a Job
Slurm jobs are submitted using either the sbatch or the salloc/srun command. We recommend sbatch, since srun can run into submission issues as the submission options become more complex.
$ sbatch <submission options> <job_script>
$ sbatch <submission options> <command>
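For example, a sketch of a submission that overrides a few options on the command line; the partition name, time limit, and script name are placeholder values:
$ sbatch --partition=normal --time=2:00:00 star_job.sh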
Show Job Status
# show all queued jobs
$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
129089 normal TEST02 tester PD 0:00 2 (Resources)
128807 normal TEST01 tester R 3-03:13:11 4 node[069-072]
# show a specific job by jobid
$ squeue -j 128807
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
128807 normal TEST01 tester R 3-03:13:11 4 node[069-072]
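To restrict the listing to your own jobs, squeue accepts a user filter:
# show only jobs of the current user
$ squeue -u $USER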
Terminate a Job
$ scancel <jobid>
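scancel also accepts a user filter, which cancels all jobs of that user at once, so use it with care:
# cancel all jobs of the current user
$ scancel -u $USER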
Job Submission
A minimal example job script that launches Simcenter STAR-CCM+ and lets it extract the information about the allocated resources automatically:
#!/bin/bash
#SBATCH --job-name=starsim
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --time=1:00:00
echo "Running job $SLURM_JOB_NAME in the work directory: $SLURM_SUBMIT_DIR"
STAR_PATH=/path/to/star/bin
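# -bs slurm is the short form of -batchsystem slurm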
$STAR_PATH/starccm+ -bs slurm -batch step my.sim
This allocates two nodes with 40 processes per node for one hour to a job named starsim. Standard output and error output are redirected to a single file named slurm-<jobid>.out.
Job File Submission Prefix
The batch script may contain options preceded with #SBATCH before any executable commands in the script. sbatch stops processing further #SBATCH directives once the first non-comment, non-whitespace line has been reached in the script.
Submission Options
These arguments can be used either on the submission command line or in the job script, where each option is preceded by the #SBATCH submission prefix; a sketch of both variants follows the table.
Parameter | Description |
---|---|
--job-name=<job_name> | name given to job |
--nodes=<#nodes> | number of nodes (computers) assigned to the job |
--ntasks-per-node=<#tasks> | number of cores/processes to assign per node |
--time=<hh:mm:ss> | maximum wall time for the job |
--partition=<partition_name> | partition (queue name) the job will run in |
--output=<output_filename> | standard output log file |
--error=<error_filename> | error output log file |
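As a sketch of both variants, with placeholder partition and file names (%j expands to the job ID):
$ sbatch --partition=normal --output=star_%j.out --error=star_%j.err star_job.sh
or, equivalently, inside the job script:
#SBATCH --partition=normal
#SBATCH --output=star_%j.out
#SBATCH --error=star_%j.err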
Prefer --ntasks-per-node over --ntasks
Use --ntasks-per-node instead of --ntasks to avoid having to change the value when changing the number of nodes.
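As a sketch, both of the following request 80 processes on two nodes, but only the first variant keeps filling every node without further edits when the node count is changed later:
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
versus
#SBATCH --nodes=2
#SBATCH --ntasks=80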
Submission Environment Variables
There are several environment variables that are set while a job is running. These variables are prefixed with SLURM_. The following is a list of some of the more useful ones; a usage sketch follows the table.
Environment Variable | Description |
---|---|
SLURM_JOBID | the job id |
SLURM_JOB_ID | the job id |
SLURM_JOB_NAME | the name assigned to the job |
SLURM_JOB_NODELIST | the list of nodes participating in the job. See Node Lists |
SLURM_NNODES | the number of nodes participating in the job |
SLURM_NPROCS | the number of parallel processes |
SLURM_NTASKS | the number of parallel tasks |
SLURM_NTASKS_PER_NODE | the number of tasks (cores) per node, as set by the --ntasks-per-node submission option |
SLURM_SUBMIT_DIR | the CWD for the job |
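As a sketch of how these variables can be used inside a job script (the last line simply computes the total process count implied by the allocation; SLURM_NTASKS_PER_NODE is only set when --ntasks-per-node was given):
echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) was submitted from $SLURM_SUBMIT_DIR"
echo "Running on $SLURM_NNODES node(s): $SLURM_JOB_NODELIST"
echo "Total parallel processes: $((SLURM_NNODES * SLURM_NTASKS_PER_NODE))"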
Requirements on SLURM Environment Variable Propagation
By default, SLURM propagates all environment variables to the launched application (similar to sbatch --export=ALL). If the default behavior is changed, issues with running Simcenter STAR-CCM+ can occur. In that case, please revert to the default SLURM setting, or make sure that all required environment variables are propagated to the launched application by other means.
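As a sketch, the default propagation can also be requested explicitly, either on the command line or in the job script (star_job.sh is a placeholder name):
$ sbatch --export=ALL star_job.sh
or, inside the job script:
#SBATCH --export=ALL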