Submitting a job to a slurm queue

Accessing the System

To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:

  • powerslurm-login.tau.ac.il
  • powerslurm-login2.tau.ac.il

Requirements for Access

  • Group Membership: You must be part of the "power" group to access the resources.
  • University Credentials: Use your Tel Aviv University username and password to log in.

These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.

SSH Example

To access the system using SSH, use the following example:

# Replace 'your_username' with your actual Tel Aviv University username
ssh your_username@powerslurm-login.tau.ac.il

If you want to connect to the second login node, use:

# Replace 'your_username' with your actual Tel Aviv University username
ssh your_username@powerslurm-login2.tau.ac.il

If you have an SSH key set up for password-less login, you can specify it like this:

# Replace 'your_username' and '/path/to/your/private_key' accordingly
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il
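
If you have not set up a key yet, one common way to create one and install it on the login node is sketched below (assuming ssh-keygen and ssh-copy-id are available on your machine; the key type and paths are only defaults):

# Generate a key pair, accepting the default path or choosing your own
ssh-keygen -t ed25519

# Copy the public key to the login node; replace 'your_username' as above
ssh-copy-id your_username@powerslurm-login.tau.ac.il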

Environment Modules

Environment Modules on the cluster allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.

Here are some common commands to work with environment modules:

#List Available Modules: To see all the modules available on the system, use:
module avail

#To search for a specific module by name (e.g., `gcc`), use:
module avail gcc

#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:
module spider gcc/gcc-12.1.0

#View Module Settings: To see what environment variables and settings will be modified by a module, use:
module show gcc/gcc-12.1.0

#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:
module load gcc/gcc-12.1.0

#List Loaded Modules: To view all currently loaded modules in your session, use:
module list

#Unload a Module: To unload a specific module from your environment, use:
module unload gcc/gcc-12.1.0

#Unload All Modules: If you need to clear your environment of all loaded modules, use:
module purge

By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.
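
As a small end-to-end sketch, reusing the GCC module from the examples above (the source file hello.c is only a placeholder for your own code), a typical workflow is to load the compiler, build, and then reset the environment:

# Load the compiler, compile a program, then clear the loaded modules
module load gcc/gcc-12.1.0
gcc -O2 -o hello hello.c
module purge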

Basic Job Submission Commands

Finding Your Account and Partition

Before submitting a job, you need to know which partitions you have permission to use.

Account is the research group (PI) you belong to.

Partition is the group of nodes your jobs are allowed to run on.

Run the command `check_my_partitions` to view a list of all the partitions you have permission to send jobs to.
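
Standard SLURM accounting tools can show similar information; the command below is a generic SLURM sketch, and the columns may differ on this cluster:

# List the account/partition associations defined for your user
sacctmgr show associations where user=$USER format=Account,Partition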

Submitting Jobs

sbatch: Submits a job script for batch processing.

Example:

# Submit pre_process.bash to the power-general partition with a 10-minute time limit:
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash

# A submission that requests 1 GPU on the gpu-general partition:
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh
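
sbatch prints the ID of the submitted job. For a quick test you can also wrap a one-line command instead of writing a script (the partition and account below reuse the examples above):

# Submit a one-line test job; sbatch replies with "Submitted batch job <job_id>"
sbatch --ntasks=1 --time=5 -p power-general -A power-general-users --wrap="hostname"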

Writing SLURM Job Scripts

Here is a simple job script example:

Basic Script

#!/bin/bash
#SBATCH --job-name=my_job             # Job name
#SBATCH --account=power-general-users # Account name
#SBATCH --partition=power-general     # Partition name
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)
#SBATCH --ntasks=1                    # Number of tasks
#SBATCH --cpus-per-task=1             # CPUs per task
#SBATCH --mem-per-cpu=4G              # Memory per CPU
#SBATCH --output=my_job_%j.out        # Output file
#SBATCH --error=my_job_%j.err         # Error file

echo "Starting my SLURM job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"

# Your application commands go here
# ./my_program

echo "Job completed"

Script for 1 GPU

#!/bin/bash
#SBATCH --job-name=gpu_job             # Job name
#SBATCH --account=my_account           # Account name
#SBATCH --partition=gpu-general        # Partition name
#SBATCH --time=02:00:00                # Max run time
#SBATCH --ntasks=1                     # Number of tasks
#SBATCH --cpus-per-task=1              # CPUs per task
#SBATCH --gres=gpu:1                   # Number of GPUs
#SBATCH --mem-per-cpu=4G               # Memory per CPU
#SBATCH --output=my_job_%j.out         # Output file
#SBATCH --error=my_job_%j.err          # Error file

module load python/python-3.8

echo "Starting GPU job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"

# Your GPU commands go here

echo "Job completed"

Interactive Jobs

#Start an interactive session:
srun --ntasks=1 -p power-general -A power-general-users --pty bash

#Specify a compute node:
srun --ntasks=1 -p power-general -A power-general-users --nodelist="compute-0-12" --pty bash

#Start an interactive session with X11 forwarding (for GUI applications):
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash
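
Extra resources can be requested the same way as for batch jobs; for example (the values are only illustrative), an interactive session with 4 CPUs and 8 GB of memory:

# Interactive session with 4 CPUs and 8 GB of memory
srun --ntasks=1 --cpus-per-task=4 --mem=8G -p power-general -A power-general-users --pty bash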

Submitting RELION Jobs

To submit a RELION job interactively on the gpu-relion queue with X11 forwarding, use the following steps:

#Start an interactive session with X11:
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash
#Load the RELION module:
module load relion/relion-4.0.1
#Launch RELION:
relion

AlphaFold

AlphaFold is a deep learning tool designed for predicting protein structures.

Guide: AlphaFold Guide

Common SLURM Commands

#View all queues (partitions):
sinfo
#View all jobs:
squeue
#View details of a specific job:
scontrol show job <job_number>
#Get information about partitions:
scontrol show partition
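
Two common refinements of these commands (standard SLURM options) are to restrict the output to your own jobs or to a single partition:

# Show only your own jobs
squeue -u $USER

# Show only the power-general partition
sinfo -p power-general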

Troubleshooting & Tips

Common Error:

srun: error: Unable to allocate resources: No partition specified or system default partition

Solution: Always specify a partition. Example:

srun --pty -c 1 --mem=2G -p power-general /bin/bash

Chain Jobs: Use the --depend flag to set job dependencies.

Example:

sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --depend=45001 do_work.bash
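
The dependency type can also be given explicitly; for example, afterok makes do_work.bash start only once job 45001 has completed successfully (partition and account reuse the example above):

# Start only after job 45001 finishes successfully
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash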

Always Specify Resources: When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.

Attaching to Running Jobs: If you need to monitor or interact with a running job, use sattach. This command lets you attach to a job's input, output, and error streams in real time.

Example:

sattach <job_id>

To view job steps of a specific job, use the following command:

scontrol show job <job_id>

Look for sections labeled "StepId" within the output.

For specific job steps, use:

sattach <job_id.step_id>

Note: sattach is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like tail -f, allowing you to monitor the output stream.

This guide provides the essentials for new users to get started with SLURM. For more complex tasks, refer to the full SLURM documentation or contact your system administrator.