Description
Hello!
I wanted to point this out as a potential bug: row is not printing --mem-per-cpu=8G in the generated Slurm script when I believe it should be. However, it does print --mem-per-gpu=8G when the GPU option is used.
If more data is needed, I can try to package some of it up, so let me know.
In the clusters.toml file, setting memory_per_cpu = "8G" in [[cluster.partition]] does not populate --mem-per-cpu in the row submit --dry-run script, which shows only:
#SBATCH --job-name=part_1_initial_parameters_command-be31aae200171ac52a9e48260b7ba5b1
#SBATCH --output=part_1_initial_parameters_command-%j.out
#SBATCH --partition=cpu_8GB_per_cpu_add_1+_partitions_in_custom_only
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-task=0
#SBATCH --time=10
#SBATCH --account=<ACCOUNT_NAME>
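Based on the GPU case below, I would have expected one additional directive in that header (a sketch of the missing line, assuming memory_per_cpu maps to Slurm's --mem-per-cpu the same way memory_per_gpu maps to --mem-per-gpu):
#SBATCH --mem-per-cpu=8G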
In the clusters.toml file, when the same [[cluster.partition]] entry is used but the key is changed to memory_per_gpu = "8G", it does populate the row submit --dry-run script correctly, showing:
#SBATCH --job-name=part_1_initial_parameters_command-be31aae200171ac52a9e48260b7ba5b1
#SBATCH --output=part_1_initial_parameters_command-%j.out
#SBATCH --partition=cpu_8GB_per_cpu_add_1+_partitions_in_custom_only
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-task=0
#SBATCH --mem-per-gpu=8G
#SBATCH --time=10
#SBATCH --account=<ACCOUNT_NAME>
**_clusters.toml_** file:
[[cluster.partition]]
name = "cpu_8GB_per_cpu_add_1+_partitions_in_custom_only"
maximum_cpus_per_job = 128
maximum_gpus_per_job = 0
memory_per_cpu = "8G"  # or memory_per_gpu = "8G"
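Written out separately for clarity, the two variants I tested differ only in that one key (a sketch of just the partition entry; the rest of clusters.toml is unchanged):
# Variant that does NOT emit a memory request in the dry-run script
[[cluster.partition]]
name = "cpu_8GB_per_cpu_add_1+_partitions_in_custom_only"
maximum_cpus_per_job = 128
maximum_gpus_per_job = 0
memory_per_cpu = "8G"

# Variant that correctly emits #SBATCH --mem-per-gpu=8G
[[cluster.partition]]
name = "cpu_8GB_per_cpu_add_1+_partitions_in_custom_only"
maximum_cpus_per_job = 128
maximum_gpus_per_job = 0
memory_per_gpu = "8G"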
Steps to reproduce
row submit --dry-run
Input files
No response
Output
Expected output
No response
Row version
row 0.4.0 h8fae777_0 conda-forge
setuptools 75.8.2 pyhff2d567_0 conda-forge
signac 2.2.0 pyhd8ed1ab_1 conda-forge
signac-dashboard 0.6.1 pyhd8ed1ab_1 conda-forge