Using Open MPI
Open MPI is an actively developed and widely used open source MPI distribution. Simcenter STAR-CCM+ bundles two different versions within its installer and also supports a locally installed Open MPI 4.
To find an external Open MPI installation, Simcenter STAR-CCM+ checks the content of the OPENMPI_DIR environment variable.
To explicitly select the preferred version of Open MPI libraries to be used, include either of the following options with the starccm+ command:
-mpidriver openmpi
-mpi openmpi
Note: This is equivalent to the default behavior, that is, when no -mpi or -mpidriver flag is given on the command line.
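For illustration, a parallel run that explicitly selects Open MPI might be launched as follows; the process count and simulation file name are placeholders:
starccm+ -np 16 -mpi openmpi mySimulation.sim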
Open MPI command line options can be passed into Simcenter STAR-CCM+ using any of the following options:
-mpiflags <mpirun options>
-mpidriver openmpi:<mpirun options>
-mpi openmpi:<mpirun options>
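As a sketch of this syntax, the following command passes an mpirun option through to Open MPI; the option --report-bindings as well as the process count and file name are illustrative only:
starccm+ -np 16 -mpiflags "--report-bindings" mySimulation.sim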
Consult the official Open MPI documentation for further information.
Open MPI Version Selection
The default version of Open MPI to be used depends on the underlying hardware system:
- On Mellanox InfiniBand systems that do not feature the DC transport (that is, systems predating ConnectX-4), Open MPI 4.0.3 is used by default. This is because UCX 1.8.0, which should be used on these systems to avoid performance regressions (see Using UCX), is not compatible with Open MPI 4.1.5.
- On all other systems, Open MPI 4.1.5 is used by default.
You can use the non-default version of Open MPI by specifying the corresponding command line option. To select Open MPI 4.1.5 explicitly, specify either of:
-mpidriver openmpi41
-mpi openmpi41
Using Open MPI 4.1.5 requires at least UCX 1.10.0. Therefore, if you select Open MPI 4.1.5 on systems which default to Open MPI 4.0.3 and UCX 1.8.0, the default UCX version must also be overridden (see Using UCX).
To select Open MPI 4.0.3 explicitly, specify either of:
-mpidriver openmpi40
-mpi openmpi40
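For example, a run that explicitly requests the Open MPI 4.0.3 distribution could be started as follows; the process count and file name are placeholders:
starccm+ -np 32 -mpidriver openmpi40 mySimulation.sim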
Open MPI Components
Open MPI features a Modular Component Architecture (MCA), where the availability of specific components might require the presence of certain dependencies on the system. For the Open MPI distribution bundled with Simcenter STAR-CCM+, general information and a list of available components can be obtained with the following command (Linux only):
`[INSTALLDIR]/star/bin/map_mpi -type openmpi -binpath`/ompi_info
For a specific Open MPI version [VERSION], use the following command:
`[INSTALLDIR]/star/bin/map_mpi -type openmpi -version [VERSION] -binpath`/ompi_info
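As an illustration, ompi_info can also be queried for the parameters of an individual MCA framework; the framework name and verbosity level below are examples only:
`[INSTALLDIR]/star/bin/map_mpi -type openmpi -binpath`/ompi_info --param btl all --level 9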
For more information consult the Open MPI documentation.
Using Hierarchical Collectives (HCOLL)
The Open MPI 4 distribution of Simcenter STAR-CCM+ supports the Hierarchical
Collectives (HCOLL) library on Mellanox InfiniBand systems. It is deactivated by
default and can be activated with the -hcoll
command line flag.
This command line flag does not unconditionally activate HCOLL, but only allows Open MPI to use HCOLL if all prerequisites are fulfilled. This primarily includes the presence of at least HCOLL version 3.7 (which is part of MOFED 4.0) on the system.
In case of compatibility warnings between HCOLL and UCX, it might be beneficial to use the system installation of UCX instead of the Simcenter STAR-CCM+ distribution by passing the -xsystemucx command line flag (for further instructions, see Third-Party Library Versions and Usage).
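For example, HCOLL could be activated together with the system UCX installation as follows; the process count and file name are placeholders:
starccm+ -np 64 -hcoll -xsystemucx mySimulation.sim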
HCOLL should not be used with MOFED 4.9 and lower when GPGPU computation is enabled.
Third-Party Library Versions and Usage
- On Intel Omni-Path clusters, please install the Intel OPA Software Stack 10.2 or newer.
- On Mellanox clusters, please install Mellanox OFED (MOFED) 2.1 or newer. The OFI fabric has known issues under MOFED 2.4. Please refrain from using this fabric under MOFED 2.4, or else upgrade MOFED if you need to use this fabric.
- Simcenter STAR-CCM+ comes with a distribution of UCX. For more information on UCX see Using UCX.
- Simcenter STAR-CCM+ comes with a distribution of Libfabric/OFI 1.15.2. If you want to use a local installation of Libfabric 1.15.2 or newer instead, its library location must be in the system-wide library path or supplied to Simcenter STAR-CCM+ using the -ldlibpath flag. Additionally, the expert option -xsystemlibfabric must be passed to Simcenter STAR-CCM+ to avoid using the bundled Libfabric 1.15.2 distribution (see the example command after this list).
- On AWS EFA systems, using the bundled Libfabric 1.15.2 distribution requires rdma-core version 27.1 or newer to be installed.
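For example, a run that uses a local Libfabric installation instead of the bundled one might look as follows; the library path /opt/libfabric/lib, the process count, and the file name are illustrative assumptions:
starccm+ -np 128 -xsystemlibfabric -ldlibpath /opt/libfabric/lib mySimulation.sim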
In general, highest performance can be expected with the most recent versions of third-party dependencies.
Docker and Virtual Interfaces with Open MPI
By default, Docker and virtual network interfaces are deactivated with Open MPI. If you want to use Docker or virtual network interfaces, you can override the default setting in either of the following ways (a complete example command follows this list):
- You can use those interfaces exclusively for Open MPI communication by specifying an appropriate white list:
-mpiflags "-mca btl_tcp_if_include docker0"
or
-mpiflags "-mca btl_tcp_if_include virbr0"
- You can allow Open MPI to use those interfaces by removing them from the black list (which still has to contain the local host, for example 127.0.0.0/8):
-mpiflags "-mca btl_tcp_if_exclude 127.0.0.0/8"
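For illustration, a complete command line that restricts Open MPI TCP communication to the Docker interface might look as follows; the process count and file name are placeholders:
starccm+ -np 8 -mpiflags "-mca btl_tcp_if_include docker0" mySimulation.sim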
Memory Consumption
To reduce the memory consumption of Open MPI with Simcenter STAR-CCM+, a number of adjustments are applied to the default behavior of this MPI. All these adjustments can be overridden by specifying appropriate MCA parameters or environment variables (see the example after the following list):
- In shared memory (single host) runs, the Vader component of Open MPI is selected explicitly.
- Using Open MPI 3, the OOB/ud component is deactivated unconditionally.
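For example, MCA parameters can also be overridden through environment variables using the OMPI_MCA_ prefix; the parameter value below is purely illustrative and simply lets Open MPI choose among the listed transport components:
export OMPI_MCA_btl=self,vader,tcp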
Specifying Environment Variables
To export environment variables to the spawned processes, use the following option with the starccm+ command:
-mpiflags "-mca mca_base_env_list VARIABLE1=value1;VARIABLE2=value2"
Consult the Open MPI documentation for further information.
Passing mpiflags with Hyphenated Arguments
When passing a flag as an mpiflag that itself requires an argument starting with one or more hyphens, the flag must be passed via an environment variable.
For example, passing -mpiflags "-mca plm_slurm_args --export=PATH" does not work. To recognize the last argument correctly, the flag must be passed via the environment variable OMPI_MCA_plm_slurm_args="--export=PATH".
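As an illustration, on a Slurm-based cluster the launcher argument could be supplied via the environment before starting the simulation; the process count and file name are placeholders:
export OMPI_MCA_plm_slurm_args="--export=PATH"
starccm+ -np 16 mySimulation.sim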
Please consult the Open MPI documentation about which environment variable to use.
Temporary Directory and Shared Memory Backing File Location
Having the temporary directory located on a network file system, such as NFS or Lustre, may cause excessive network traffic to your file servers and/or cause shared memory traffic in Open MPI to be much slower than expected. Open MPI prints a warning message in this case. If reduced parallel performance is observed, it is advised to move the temporary directory of Simcenter STAR-CCM+ to a node-local folder by setting the TMPDIR environment variable.
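For example, the temporary directory could be redirected to local storage before launching the simulation; the directory /tmp/$USER as well as the process count and file name are illustrative:
export TMPDIR=/tmp/$USER
starccm+ -np 32 mySimulation.sim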