Running on a Linux Cluster with Local Scratch Space

Clusters with local scratch space on each compute node can handle high-performance input/output from many nodes simultaneously. To run Design Manager on such a cluster, the submission script must meet certain requirements. For design studies with mesh reuse, Design Manager must split the designs that require a mesh to be generated into two runs, in order to make the cached mesh accessible to other designs.

Pre-allocation Submission Script Requirements

To submit a Design Manager project on a cluster with local scratch space, you must launch Resource Manager in the submission script using the following command:

starlaunch --command <batch_command> --scratch_root <path> --slots 0 --batchsystem <system>

where:

  • starlaunch launches Resource Manager.
  • --command <batch_command> specifies the starccm+ command that runs the Design Manager project in batch.
  • --scratch_root <path> enables the use of local scratch space and specifies the path to a scratch root directory on the local disk. You must have write access to this directory, and the directory path must be the same on all cluster nodes. Example: --scratch_root "/local_storage/user1".
  • --slots 0 prevents the Resource Manager process from allocating cores.
  • --batchsystem <system> specifies the batch management system on your cluster.

For more information, see Resource Manager Options.
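For example, on an LSF cluster, a fully expanded command could look as follows (the installation path, scratch path, and project name are placeholders for illustration):

starlaunch --command "/shared_storage/apps/starccm+/latestRelease/star/bin/starccm+ -batch run myStudy.dmprj" --scratch_root "/local_storage/user1" --slots 0 --batchsystem lsf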

Additionally, the submission script must perform the following steps:

  1. Create a unique task directory for the Design Manager run under the scratch root directory.
  2. Copy the Design Manager project file (*.dmprj) and its associated input files, such as the reference simulation file (*.sim) and custom Java macro files (*.java), from the shared filesystem on the head node into the created task directory.
  3. Run starlaunch from the created task directory using one of the following methods:
    • Change directory (cd) to the created task directory
    • Use the starlaunch option --cwd <path to created task directory>
  4. When starlaunch completes, copy the files from the created task directory back to the shared filesystem on the head node where you can access the results.
  5. Delete the created task directory to ensure a clean exit.

Example: The following pre-allocation submission script runs a Design Manager project on a cluster with local scratch space, using the LSF batch management system. The script is written for bash (Bourne-again Shell); for other Unix shells, such as tcsh, you must modify it accordingly.

#! /bin/bash 
 
export DM_PROJECT=[PROJECT].dmprj           # [PROJECT]: set name of your Design Manager project file.
export CASELOG=dm_output.log                # Name of the output log file.
           
export STARCCMHOME=[STAR-CCM+_INSTALL_DIR]  # [STAR-CCM+_INSTALL_DIR]: set absolute path to the STAR-CCM+ installation directory on the cluster.
                                            # This path must be the same for all compute nodes.
                                            # Example: /shared_storage/apps/starccm+/latestRelease
 
export SCRATCH_ROOT=[SCRATCH_DIR]           # [SCRATCH_DIR]: set absolute path to local scratch storage. 
                                            # This path must be the same for all compute nodes.
                                            # Example: "/local_storage/user1"
 
# For convenience, use the LSF batch scheduler job ID in the task directory path
export SCRATCH_TASK_DIR="$SCRATCH_ROOT/$LSB_JOBID"
 
# Copy the shared job submission directory for this task to a scratch location and change directory there
export INITIAL_SUBMISSION_DIRECTORY="$PWD"
mkdir -p "$SCRATCH_TASK_DIR"
cp -r . "$SCRATCH_TASK_DIR"
cd "$SCRATCH_TASK_DIR"

# Launch Resource Manager
# ----------------------------------
$STARCCMHOME/star/bin/starlaunch --rsh /usr/bin/ssh \
    --command "$STARCCMHOME/star/bin/starccm+ -rsh /usr/bin/ssh -batch run $DM_PROJECT [-passtodesign <options>]" \
    --scratch_root "$SCRATCH_ROOT" --slots 0 --batchsystem lsf \
    --outpath "$CASELOG"
# ----------------------------------
 
# Copy files from the task directory $SCRATCH_TASK_DIR on the compute node's local scratch back
# to the shared job submission directory, then remove the task directory
cp -r . "$INITIAL_SUBMISSION_DIRECTORY"
cd "$INITIAL_SUBMISSION_DIRECTORY"
rm -r "$SCRATCH_TASK_DIR"
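If you save this script as, for example, submit_dm.sh (an illustrative file name), you can submit it to the LSF scheduler with a command such as:

# Request 64 slots and write the scheduler output to lsf_scheduler.log (values are examples)
bsub -n 64 -o lsf_scheduler.log < submit_dm.sh

bsub reads the job script from standard input, which is the usual way to submit bash scripts under LSF.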

-passtodesign <options> allows you to pass additional Simcenter STAR-CCM+ simulation command line options, such as license options, to the design simulations. Passing options on the command line is useful for machine-specific or user-specific settings that you do not want to save within your Design Manager project file (see Running on a Cluster Using Pre-Allocation, step 2b). However, command line options that require double quotes are not supported; you must set these options in the STAR-CCM+ Execution Command property instead. For more information, see Design Manager Options.
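For example, to run all design simulations with the -power license option (shown here purely as an illustration; substitute whichever options your site requires), extend the batch command that you pass to starlaunch as follows:

# Illustration only: forwards the -power license option to every design simulation
$STARCCMHOME/star/bin/starccm+ -rsh /usr/bin/ssh -batch run $DM_PROJECT -passtodesign -power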

Design Studies with Mesh Reuse

To speed up the design exploration process when two or more designs share the same mesh parameters, Design Manager computes the mesh once, caches it, and then reloads it for all of the similar designs. See Setting Up Simulation Effect for Simulation Parameters.

When running design studies with mesh reuse on a cluster with local scratch space, Design Manager splits each design that requires a mesh to be generated into two runs. The first run executes the design until the mesh operation completes, caches the mesh, and then shuts down. The second run reloads the cached mesh and completes the remaining design operations. Designs that do not generate a mesh but reuse the cached mesh are not split into two runs. You can track the status of the design runs using the output table in the Graphics window; see Monitoring a Design Study.

The following diagrams illustrate design runs with mesh reuse for clusters with and without local scratch space:

Cluster without Local Scratch Space

[Diagram: design runs with mesh reuse on a cluster without local scratch space]

Cluster with Local Scratch Space

[Diagram: design runs with mesh reuse on a cluster with local scratch space]

Limitations

Currently, the following limitations apply to the use of local scratch space:

  • Local scratch space is only supported for Design Manager in pre-allocation mode.
  • For Substituting Geometry Parts in a Study, you must set the Input File Directory to an absolute path on a shared filesystem so that the geometry parts are accessible from all compute nodes. Geometry input file directories on local scratch space are not supported.
  • For the Design Manager project file, the input files, and the file paths, the following limitations apply:
    • The Design Manager project file (*.dmprj) and its associated input files, such as the reference simulation file (*.sim) and custom Java macro files (*.java), must reside in the same directory.
    • The *.dmprj file must be set up to refer to its associated input files using relative paths.
    • If you want to access input files from a shared filesystem, the cluster must be configured so that all compute nodes can access those shared files.
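For example, a submission directory that satisfies these requirements could be laid out as follows (all file names are illustrative):

submit_dir/
    myStudy.dmprj       # set up to refer to reference.sim and buildReport.java by relative paths
    reference.sim       # reference simulation file
    buildReport.java    # custom Java macro file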