Dane (LLNL)
The Dane Intel CPU cluster is located at LLNL.
Introduction
If you are new to this system, please see the following resources:
- LLNL user account (login required)
- Jupyter service (documentation, login required)
- /p/lustre1/$(whoami) and /p/lustre2/$(whoami): personal directories on the parallel filesystem

Note that the $HOME directory and the /usr/workspace/$(whoami) space are NFS mounted and not suitable for production-quality data generation.
Preparation
Use the following commands to download the WarpX source code. Note that these commands and the shell scripts all assume the bash shell. This downloads WarpX into the workspace directory, which is recommended. WarpX can be downloaded elsewhere if that doesn’t work with your directory structure, but note that the commands shown below refer to WarpX in the workspace directory.
git clone https://github.com/BLAST-WarpX/warpx.git /usr/workspace/${USER}/dane/src/warpx
The system software modules, environment hints, and further dependencies are set up via the file $HOME/dane_warpx.profile, which is copied from the WarpX source.
Set it up now:
cp /usr/workspace/${USER}/dane/src/warpx/Tools/machines/dane-llnl/dane_warpx.profile.example $HOME/dane_warpx.profile
Edit the 2nd line of this script, which sets the export proj="" variable.
For example, if you are a member of the project tps, then run vi $HOME/dane_warpx.profile.
Enter the edit mode by typing i and edit line 2 to read:
export proj="tps"
Exit the vi editor with Esc and then type :wq (write & quit).
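If you prefer a non-interactive edit over vi, the same change can be made with sed. The sketch below edits a temporary stand-in file so it can be run anywhere; on Dane, point it at $HOME/dane_warpx.profile instead.

```shell
# Non-interactive alternative to vi: rewrite the proj line via sed.
# A temporary stand-in file is used here; substitute $HOME/dane_warpx.profile on Dane.
profile=$(mktemp)
printf '# Dane profile\nexport proj=""\n' > "$profile"

# Replace the whole proj assignment with the desired project name
sed -i 's/^export proj=.*/export proj="tps"/' "$profile"

grep '^export proj=' "$profile"   # prints: export proj="tps"
rm -f "$profile"
```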
Important
Now, and as the first step on future logins to Dane, activate these environment settings by executing the file:
source $HOME/dane_warpx.profile
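To avoid typing this on every login, you can optionally append a guarded source line to your ~/.bashrc. The sketch below demonstrates the idempotent append on a temporary file; on Dane, use your real ~/.bashrc.

```shell
# Optionally auto-activate the profile at login by appending a guarded
# source line to ~/.bashrc. Shown on a temporary file; use ~/.bashrc on Dane.
rcfile=$(mktemp)
line='[ -f "$HOME/dane_warpx.profile" ] && source "$HOME/dane_warpx.profile"'

# Append only if the exact line is not present yet (safe to re-run)
grep -qxF "$line" "$rcfile" || echo "$line" >> "$rcfile"
grep -qxF "$line" "$rcfile" || echo "$line" >> "$rcfile"   # no-op on the second run

grep -cxF "$line" "$rcfile"   # prints: 1
rm -f "$rcfile"
```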
Finally, since Dane does not yet provide software modules for some of our dependencies, WarpX provides a script to install them.
Execute it now.
The dependencies are installed by default in the workspace directory (which is recommended), but can be installed elsewhere by setting the environment variable WARPX_SW_DIR.
The second command below activates the Python virtual environment.
This would normally be done by the dane_warpx.profile script, but since the environment is created by the install script, it did not yet exist when the profile was sourced above.
The explicit activation is therefore only needed this one time.
bash /usr/workspace/${USER}/dane/src/warpx/Tools/machines/dane-llnl/install_dependencies.sh
source /usr/workspace/${USER}/dane/venvs/warpx-dane/bin/activate
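To confirm that the virtual environment is active, check that python resolves inside it. The sketch below uses a throwaway venv so it can run anywhere; on Dane, the equivalent check applies to /usr/workspace/${USER}/dane/venvs/warpx-dane.

```shell
# Sanity-check venv activation with a throwaway environment; on Dane the
# same check applies after sourcing .../venvs/warpx-dane/bin/activate.
venv_dir="$(mktemp -d)/venv"
python3 -m venv "$venv_dir"
source "$venv_dir/bin/activate"

command -v python                           # should point inside $venv_dir/bin
python -c 'import sys; print(sys.prefix)'   # should print the venv directory

deactivate
rm -rf "$venv_dir"
```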
Compilation
Use the following cmake commands to compile the application executable. Adjust the options to your needs, for example by building only for the dimensions you actually use.
cd /usr/workspace/${USER}/dane/src/warpx
rm -rf build_dane
cmake -S . -B build_dane -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_dane -j 6
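For example, if you only need the 3D executable, restricting WarpX_DIMS shortens the build considerably (a sketch; adjust the dimension list to your needs):

```shell
# Build only the 3D executable instead of all four geometries
cmake -S . -B build_dane -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS=3
cmake --build build_dane -j 6
```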
The WarpX application executables are now in /usr/workspace/${USER}/dane/src/warpx/build_dane/bin/.
Additionally, the following commands will install WarpX as a Python module:
rm -rf build_dane_py
cmake -S . -B build_dane_py -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
cmake --build build_dane_py -j 6 --target pip_install
Now, you can submit Dane compute jobs for WarpX Python (PICMI) scripts (example scripts).
Or, you can use the WarpX executables to submit Dane jobs (example inputs).
For executables, you can reference their location in your job script or copy them to a location in $PROJWORK/$proj/.
Update WarpX & Dependencies
If you already installed WarpX in the past and want to update it, start by getting the latest source code:
cd /usr/workspace/${USER}/dane/src/warpx
# read the output of this command - does it look ok?
git status
# get the latest WarpX source code
git pull
# read the output of these commands - do they look ok?
git status
git log # press q to exit
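If git status reports local changes that would block the pull, one common approach (not WarpX-specific) is to stash them around the update. Demonstrated below in a throwaway repository:

```shell
# Stash local edits before pulling, then restore them (throwaway repo demo)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > inputs
git add inputs
git commit -qm "initial"

echo v2 > inputs      # a local, uncommitted edit
git stash -q          # set it aside (do this before `git pull`)
cat inputs            # prints: v1
git stash pop -q      # bring the edit back after pulling
cat inputs            # prints: v2
```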
And, if needed, log out and back into the system and activate the now updated environment profile as usual.
As a last step, clean the build directory with rm -rf /usr/workspace/${USER}/dane/src/warpx/build_dane and rebuild WarpX.
Running
Intel Sapphire Rapids CPUs
The batch script below can be used to run a WarpX simulation on 2 nodes on the supercomputer Dane at LLNL.
Replace descriptions between chevrons <> by relevant values, for instance <input file> could be plasma_mirror_inputs.
#!/bin/bash -l
# Just increase this number if you need more nodes.
#SBATCH -N 2
#SBATCH -t 24:00:00
#SBATCH -A <allocation ID>
#SBATCH -J WarpX
#SBATCH -q pbatch
#SBATCH --qos=normal
#SBATCH --license=lustre1,lustre2
#SBATCH --export=ALL
#SBATCH -e error.txt
#SBATCH -o output.txt
# one MPI rank per half-socket (see below)
#SBATCH --tasks-per-node=2
# request all logical (virtual) cores per half-socket
#SBATCH --cpus-per-task=112
# each Dane node has 2 sockets of Intel Sapphire Rapids with 56 cores each
export WARPX_NMPI_PER_NODE=2
# each MPI rank per half-socket has 56 physical cores
# or 112 logical (virtual) cores
# over-subscribing each physical core with 2x
# hyperthreading led to a slight (3.5%) speedup on Cori's Intel Xeon E5-2698 v3,
# so we do the same here
# the settings below make sure threads are close to the
# controlling MPI rank (process) per half socket and
# distribute equally over close-by physical cores and,
# for N>9, also equally over close-by logical cores
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=112
EXE="<path/to/executable>" # e.g. ./warpx
srun --cpu_bind=cores -n $(( ${SLURM_JOB_NUM_NODES} * ${WARPX_NMPI_PER_NODE} )) ${EXE} <input file>
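The task count passed to srun follows from the node and rank settings above; the arithmetic can be checked in the shell (values below mirror the 2-node example):

```shell
# Total MPI ranks = nodes × ranks per node; each rank runs OMP_NUM_THREADS threads.
SLURM_JOB_NUM_NODES=2     # stand-in for the value Slurm exports inside the job
WARPX_NMPI_PER_NODE=2
OMP_NUM_THREADS=112

ranks=$(( SLURM_JOB_NUM_NODES * WARPX_NMPI_PER_NODE ))
echo "MPI ranks: $ranks"                              # prints: MPI ranks: 4
echo "OpenMP threads: $(( ranks * OMP_NUM_THREADS ))" # prints: OpenMP threads: 448
```

The 448 threads per job match the hardware: 2 nodes × 2 sockets × 56 physical cores × 2-way hyperthreading.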
To run a simulation, copy the lines above to a file dane.sbatch and run
sbatch dane.sbatch
to submit the job.