Ticket 18242 - Request multiple GPUs
Summary: Request multiple GPUs
Status: RESOLVED TIMEDOUT
Alias: None
Product: Slurm
Classification: Unclassified
Component: GPU
Version: 23.02.6
Hardware: Linux Linux
Importance: --- 4 - Minor Issue
Assignee: Tyler Connel
QA Contact:
URL:
Depends on:
Blocks:
 
Reported: 2023-11-20 13:46 MST by John Wang
Modified: 2024-01-16 09:23 MST

See Also:
Site: Emory-Cloud
Alineos Sites: ---
Atos/Eviden Sites: ---
Confidential Site: ---
Coreweave sites: ---
Cray Sites: ---
DS9 clusters: ---
HPCnow Sites: ---
HPE Sites: ---
IBM Sites: ---
NOAA Site: ---
OCF Sites: ---
Recursion Pharma Sites: ---
SFW Sites: ---
SNIC sites: ---
Linux Distro: ---
Machine Name:
CLE Version:
Version Fixed:
Target Release: ---
DevPrio: ---
Emory-Cloud Sites: ---


Attachments

Description John Wang 2023-11-20 13:46:49 MST
We have the following partitions, each with nodes that have 8 GPUs.

PARTITION              AVAIL  TIMELIMIT  NODES  STATE NODELIST
v100-8-gm128-c64-m488     up 7-00:00:00      1   idle v100-8-gm128-c64-m488-st-p3-16xlarge-1
a100-8-gm320-c96-m1152    up 7-00:00:00      1    mix a100-8-gm320-c96-m1152-st-p4d-24xlarge-1

A user requested multiple GPUs for his job. 

#SBATCH --nodes=1
#SBATCH --gpus=6              
#SBATCH --mem=360G

However, I noticed his job used only one GPU.  

When he changed the Slurm script to

#SBATCH --nodes=1
#SBATCH --gpus=6
#SBATCH --ntasks-per-node=6               
#SBATCH --mem=360G

he got 

sbatch: error: Batch job submission failed: Requested node configuration is not available.

What's wrong with --ntasks-per-node=6?

Thanks,

John Wang
Comment 1 John Wang 2023-11-20 14:17:32 MST
I tested with the following Slurm settings:

#SBATCH --nodes=1
#SBATCH --gpus=6              
#SBATCH --mem=360G
#SBATCH --ntasks=6

or 

#SBATCH --nodes=1
#SBATCH --gpus=6              
#SBATCH --mem=360G
#SBATCH --ntasks-per-node=6


I got "sbatch: error: Batch job submission failed: Requested node configuration is not available".

But, 

#SBATCH --nodes=1
#SBATCH --gpus=6              
#SBATCH --mem=360G
#SBATCH --ntasks=1

or 

#SBATCH --nodes=1
#SBATCH --gpus=6              
#SBATCH --mem=360G
#SBATCH --ntasks-per-node=1

works.

Thanks,

John Wang
Comment 2 Tyler Connel 2023-11-20 15:15:33 MST
Hello @John,

Does it help to specify --gpus-per-task=1? I'm presuming the intent is 1 GPU per task, so let me know if that's wrong. For example:

#SBATCH --nodes=1
#SBATCH --gpus=6
#SBATCH --ntasks-per-node=6               
#SBATCH --gpus-per-task=1
#SBATCH --mem=360G
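
If the goal is one process per GPU, the batch script would also launch the job step with srun so that each task gets its own GPU binding. A rough sketch (the "python train.py" line is just a placeholder for the user's actual program):

# ...keep the #SBATCH lines above, then launch the step with srun.
# srun starts 6 tasks here, each bound to 1 GPU via --gpus-per-task.
srun python train.py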

Best,
Tyler Connel
Comment 3 Tyler Connel 2023-11-20 15:19:31 MST
Also, if the prior suggestion doesn't help, please share your gres.conf and slurm.conf so we can try to rule out any abnormality in the node configuration as well.
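
If it's easier, the relevant values can also be pulled from the running system; for example (the node name below is just your A100 node from the sinfo output above):

# Dump the GRES/select-related parameters from the effective configuration
scontrol show config | grep -i -E 'gres|select'
# Show what slurmctld thinks the node has (CPUs, memory, Gres)
scontrol show node a100-8-gm320-c96-m1152-st-p4d-24xlarge-1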
Comment 4 John Wang 2023-11-21 06:30:31 MST
Hi Tyler,
I tested #SBATCH --gpus-per-task=1 in my own Slurm script and found that my Python program used the number of GPUs I requested. I have not heard back from the user yet.
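
For what it's worth, a quick way to confirm what each task actually sees is a throwaway step like the one below (just a sketch; it assumes Slurm is exporting CUDA_VISIBLE_DEVICES for GPU allocations, which it normally does):

# One task per GPU; each task prints the GPU(s) bound to it
srun --nodes=1 --ntasks-per-node=6 --gpus-per-task=1 \
     bash -c 'echo "task $SLURM_PROCID: CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"'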

Below is the Slurm cgroup.conf from our AWS ParallelCluster deployment. Please review it and let me know if there is anything I could improve.


$ cat cgroup.conf 
###
# Slurm cgroup support configuration file
###
CgroupAutomount=yes
ConstrainCores=yes
#
# WARNING!!! The slurm_parallelcluster_cgroup.conf file included below can be updated by the pcluster process.
# Please do not edit it.
include slurm_parallelcluster_cgroup.conf

$ cat slurm_parallelcluster_cgroup.conf
# slurm_parallelcluster.conf is managed by the pcluster processes.
# Do not modify.
# Please add user-specific slurm configuration options in cgroup.conf

ConstrainRAMSpace=yes

Thanks,

John Wang
Comment 5 Tyler Connel 2023-11-29 18:12:36 MST
Hello @John,

There was a very similar issue reported recently concerning behavior of the `--ntasks` and `--tasks-per-node` options that deviated from the documentation. I suspect this issue is related, though somewhat different. That ticket is nearing resolution, and I want to see what they settle on and whether a similar change might apply here. In the meantime, have you heard back from the user yet about whether specifying `--gpus-per-task` was helpful?

Best,
Comment 6 Tyler Connel 2024-01-05 14:50:52 MST
Hello John,

Apologies for not replying earlier. I was looking back over this issue and I don't see anything awry with your cgroup.conf file, although there is an include directive that suggests other configuration settings could be present.
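
Mostly as a sanity check, the effective cgroup settings on a node can be confirmed with scontrol; the grep pattern below is just illustrative:

# Print the full effective configuration (including the cgroup support section)
# and keep only the Constrain* lines
scontrol show config | grep -i constrain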

Did you happen to hear back from the user about whether setting --gpus-per-task in the batch script helped?

Best,
Tyler Connel
Comment 7 Tyler Connel 2024-01-16 09:23:28 MST
Hello John,

This ticket has been idle for some time, so I'll mark it as timed-out and assume that the --gpus-per-task option was able to help you.

Best,
Tyler Connel