Submitting a job to a slurm queue


submit commands

sbatch - submits a batch script

salloc - submits an interactive job: allocates the requested resources, but does not start running anything on the node(s)

srun - submits an interactive job or launches a parallel command (a "job step"), e.g. with MPI

sattach - connects stdin/stdout/stderr to an existing job (or job step)

So, for example, a job may be submitted with:

sbatch script.sh
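
A batch script is just a shell script whose #SBATCH lines request resources. A minimal sketch (the job name, resource values and output file name are illustrative, not site defaults):

#!/bin/bash
# job name shown by squeue
#SBATCH --job-name=example
# one task, 10-minute time limit, 2G of memory
#SBATCH --ntasks=1
#SBATCH --time=10
#SBATCH --mem=2G
# output file; %j is replaced by the job id
#SBATCH --output=example-%j.out

hostname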

Examples

sbatch

sbatch --ntasks=1 --time=10 pre_process.bash     (--time is given in minutes)
(Submitted batch job 45001)
sbatch --ntasks=128 --time=60 --depend=45001 do_work.bash
(Submitted batch job 45002)
sbatch --ntasks=1 --time=30 --depend=45002 post_process.bash
(Submitted batch job 45003)
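
The dependency can also be written with an explicit type, which makes the intent clearer (a sketch reusing the job IDs printed above; afterok starts the job only if the listed job completed successfully):

sbatch --ntasks=128 --time=60 --dependency=afterok:45001 do_work.bash
sbatch --ntasks=1 --time=30 --dependency=afterok:45002 post_process.bash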

srun

srun --ntasks=2 --label hostname   (--label prefixes each output line with its task id)
   0:compute-0-1
   1:compute-0-1

Using 2 nodes:

srun --nodes=2 --exclusive --label hostname
   0:compute-0-1
   1:compute-0-2
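
Inside a batch script, srun launches the job steps mentioned above. A minimal sketch, assuming a program called my_mpi_app sits in the submission directory (the name is just an example):

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=30

# each srun call is one job step
srun --label hostname
srun ./my_mpi_app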

opening bash

srun --ntasks=56 --pty bash
[dolevg@compute-0-12 beta16.dvory]$....

On some Slurm clusters (e.g. powerslurm) you must also specify resources and a partition; without them the submission fails, as in:

[sagish@powerslurm ~]$ srun  --pty bash -p power-yoren
srun: error: Unable to allocate resources: No partition specified or system default partition
[sagish@powerslurm ~]$ 
[dvory@powerslurm-login ~]$ srun  --pty -c 1 --mem=2G -p power-yoren /bin/bash
[dvory@compute-0-62 ~]$
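
The same requirement applies to batch jobs; the partition and resources can be set in the script itself (a sketch: power-yoren is the partition from the example above and the values are illustrative):

#!/bin/bash
# partition (queue), CPUs per task and memory, as on the srun line above
#SBATCH -p power-yoren
#SBATCH -c 1
#SBATCH --mem=2G

hostname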

Specifying a compute node (one that is available)

srun --ntasks=56 -p gcohen_2018 --nodelist="compute-0-12" --pty bash
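
The equivalent in a batch script (a sketch, using the partition and node name from the command above):

#!/bin/bash
#SBATCH -p gcohen_2018
#SBATCH --ntasks=56
#SBATCH --nodelist=compute-0-12

hostname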

salloc (the SLURM environment variables show which nodes were allocated)

salloc --ntasks=4 --time=10 bash
salloc: Granted job allocation 45000
(gives us a bash prompt inside the allocation; the environment shows what was granted:)
env | grep SLURM
  SLURM_JOBID=45000
  SLURM_NPROCS=4
  SLURM_JOB_NODELIST=compute-0-1,compute-0-2
srun --label hostname
  0:compute-0-1
  1:compute-0-1
  2:compute-0-2
  3:compute-0-2
exit (terminates the shell and releases the allocation)
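
As with srun above, on clusters that require it the partition and resources can be given on the salloc line as well (a sketch using the power-yoren partition from the earlier example):

salloc -p power-yoren --ntasks=2 --mem=4G --time=30 bash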

info commands

sinfo -- to see all queues (partitions)

squeue -- to see all jobs

scontrol show partition -- to see all partitions

scontrol show job <number> -- to see a job's attributes
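
A few common variants (the user, job ID and partition are placeholders taken from the examples above):

squeue -u $USER                # only your own jobs
squeue -j 45001                # a specific job
sinfo -p power-yoren           # a single partition
scontrol show job 45001        # full attributes of job 45001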