Small fix in Perlmutter GPU sbatch script #5683

Merged 2 commits into ECP-WarpX:development on Feb 19, 2025

Conversation

aeriforme (Member)

Change in the Perlmutter GPU job script: from `#SBATCH --cpus-per-task=16` to `#SBATCH --cpus-per-task=32`.

This requests (v)cores in consecutive blocks for each MPI rank.

GPU 3 is closest to CPU cores 0-15 and 64-79,
GPU 2 to CPU cores 16-31 and 80-95,
and so on.

With `--cpus-per-task=16`, MPI ranks 0 and 1 are mapped to cores 0 and 8.
With `--cpus-per-task=32`, MPI ranks 0 and 1 are mapped to cores 0 and 16.

Visual representation: [figure `pm_gpu_vcores_mpi`: mapping of (v)cores to MPI ranks on a Perlmutter GPU node]
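For context, a minimal sketch of the relevant job-script lines as merged (not the full WarpX template: queue, account, and time options are omitted, and the executable and input-file names are placeholders):

```bash
#!/bin/bash
# Perlmutter GPU nodes: 4 A100 GPUs and one AMD EPYC 7763 with 64 physical
# cores exposed as 128 vcores (2-way SMT).
#SBATCH -C gpu
#SBATCH --ntasks-per-node=4    # one MPI rank per GPU
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-task=32     # was 16; with 32, ranks 0 and 1 start on
                               # cores 0 and 16, i.e. each rank gets a
                               # consecutive block of (v)cores

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}  # see follow-up comment below

srun --cpu-bind=cores ./warpx inputs  # placeholder executable and input file
```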

@ax3l @WeiqunZhang is this correct?

@aeriforme added labels: Performance optimization, changes input scripts / defaults, machine / system on Feb 19, 2025
WeiqunZhang (Member)

I think `export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}` should be updated too, because `SLURM_CPUS_PER_TASK` is now 32. We might want to simply set it to 16.
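A hedged sketch of that suggested follow-up (the hardcoded 16 assumes one OpenMP thread per physical core owned by each rank; the EPYC 7763 has 2-way SMT, so 32 vcores correspond to 16 physical cores):

```bash
# Before: inherits 32 after this PR, which would start two OpenMP threads
# per physical core (SMT oversubscription) on each rank's 16 cores.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Suggested: one thread per physical core.
export OMP_NUM_THREADS=16
```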

ax3l merged commit 686ef38 into ECP-WarpX:development on Feb 19, 2025 (30 of 36 checks passed).