Submitting a job to a slurm queue
== Accessing the System ==

* '''A ChatGPT helper page is available for the new QOS configuration; see the [https://chatgpt.com/g/g-68be7f9acfb88191978615c1693e2cff-hpc-helper-toolkit HPC-helper-toolkit].'''

To submit jobs to SLURM at Tel Aviv University, you need to access the system through the following login node:

* slurmlogin.tau.ac.il

=== Requirements for Access ===
* '''Group Membership''': You must be part of the "power" group to access the resources.
* '''University Credentials''': Use your Tel Aviv University username and password to log in.

These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.

=== SSH Example ===

To access the system using SSH, use the following example:

<syntaxhighlight lang="bash">
# Replace 'your_username' with your actual Tel Aviv University username
ssh your_username@slurmlogin.tau.ac.il
</syntaxhighlight>

Your connection will be automatically routed to one of the login nodes:
powerslurm-login, powerslurm-login2, or powerslurm-login3.

If you have an SSH key set up for password-less login, you can specify it like this:

<syntaxhighlight lang="bash">
# Replace 'your_username' and '/path/to/your/private_key' accordingly
ssh -i /path/to/your/private_key your_username@slurmlogin.tau.ac.il
</syntaxhighlight>

== Environment Modules ==

Environment Modules allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.

Here are some common commands to work with environment modules:<syntaxhighlight lang="bash">
# List available modules: to see all the modules available on the system, use:
module avail

# To search for a specific module by name (e.g., gcc), use:
module avail gcc/gcc-12.1.0

# Get detailed information about a module: 'module spider' shows versions, dependencies, and descriptions:
module spider gcc/gcc-12.1.0

# View module settings: to see what environment variables and settings a module will modify, use:
module show gcc/gcc-12.1.0

# Load a module: to set up the environment for a specific software, use 'module load'.
# For example, to load GCC version 12.1.0:
module load gcc/gcc-12.1.0

# List loaded modules: to view all currently loaded modules in your session, use:
module list

# Unload a module: to remove a specific module from your environment, use:
module unload gcc/gcc-12.1.0

# Unload all modules: to clear your environment of all loaded modules, use:
module purge
</syntaxhighlight>By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.

== Basic Job Submission Commands ==

=== Finding Your Account and Partition ===

Before submitting a job, you need to know which partitions you have permission to use.

Run the command <code>check_my_partitions</code> to view a list of all the partitions you have permission to send jobs to.

== Submitting Jobs ==
sbatch: Submits a job script for batch processing.

'''Example''':<syntaxhighlight lang="bash">
sbatch --ntasks=1 --time=10 -p power-general-shared-pool -A public-users_v2 --qos=public pre_process.bash
# This command submits pre_process.bash to the power-general-shared-pool partition for 10 minutes.

# With 1 GPU:
sbatch --gres=gpu:1 -p gpu-general-pool -A public-users_v2 --qos=public gpu_job.sh
</syntaxhighlight>

=== Submitting Multiple Jobs ===

If you need to submit many similar jobs (hundreds or more), use a '''Slurm job array'''. Submitting each job individually with separate <code>sbatch</code> commands places a heavy load on the scheduler, slowing down job processing across the cluster. Job arrays bundle many related jobs into a single submission, which is more efficient and easier to manage.

Each task in the array runs independently like a separate job, but the array is submitted under a single job ID for scheduling and tracking purposes.
You can customize the behavior of each task using the environment variable <code>SLURM_ARRAY_TASK_ID</code>.

==== Script Example: Job Array ====

This script submits a job array with 100 tasks, each processing a different input file. The array reduces scheduler load and simplifies job tracking.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=array_job                   # Job name
#SBATCH --account=public-users_v2              # Account name
#SBATCH --partition=power-general-shared-pool  # Partition name
#SBATCH --qos=public                           # qos type
#SBATCH --time=02:00:00                        # Max run time (hh:mm:ss)
#SBATCH --ntasks=1                             # Number of tasks per array job
#SBATCH --nodes=1                              # Number of nodes
#SBATCH --cpus-per-task=1                      # CPUs per task
#SBATCH --mem-per-cpu=4G                       # Memory per CPU
#SBATCH --array=1-100                          # Array range: 100 tasks
#SBATCH --output=array_job_%A_%a.out           # Output file: Job ID and array task ID
#SBATCH --error=array_job_%A_%a.err            # Error file: Job ID and array task ID

echo "Starting SLURM array task"
echo "Job ID: $SLURM_JOB_ID"
echo "Array Task ID: $SLURM_ARRAY_TASK_ID"
echo "Running on node(s): $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"

# Your application commands go here
# You can use $SLURM_ARRAY_TASK_ID to customize behavior per task
# ./my_program input_${SLURM_ARRAY_TASK_ID}.txt
echo "Task completed"
</syntaxhighlight>

In this example:
* The job array consists of 100 tasks.
* Each task runs the same script but with a different input file.
* You access the task ID using the environment variable <code>SLURM_ARRAY_TASK_ID</code>.
* The output and error logs are separated per task using <code>%A</code> (job ID) and <code>%a</code> (array task ID).
==== Script Example: Job Array with different parameters per task ====

This script submits a job array with 3 tasks. Each task runs the same program with a different input file: <code>data1.txt</code>, <code>data2.txt</code>, and <code>data3.txt</code>.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=array_job                   # Job name
#SBATCH --account=public-users_v2              # Account name
#SBATCH --partition=power-general-shared-pool  # Partition name
#SBATCH --qos=public                           # qos type
#SBATCH --time=01:00:00                        # Max run time (hh:mm:ss)
#SBATCH --ntasks=1                             # Number of tasks per array job
#SBATCH --nodes=1                              # Number of nodes
#SBATCH --cpus-per-task=1                      # CPUs per task
#SBATCH --mem-per-cpu=2G                       # Memory per CPU
#SBATCH --array=1-3                            # Run 3 tasks with IDs 1, 2, 3
#SBATCH --output=array_%A_%a.out               # Output file: Job ID and task ID
#SBATCH --error=array_%A_%a.err                # Error file: Job ID and task ID

echo "Starting SLURM array task"
echo "Job ID: $SLURM_JOB_ID"
echo "Array Task ID: $SLURM_ARRAY_TASK_ID"

# Each task runs the program with a different input file
./my_program data${SLURM_ARRAY_TASK_ID}.txt

echo "Task completed"
</syntaxhighlight>
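When tasks need arbitrary per-task parameters rather than numbered files, a common pattern is to keep one line of arguments per task in a parameter file and let each task read its own line. A minimal sketch (<code>params.txt</code> and <code>my_program</code> are hypothetical; <code>SLURM_ARRAY_TASK_ID</code> is assigned by hand here only to simulate task 2, since SLURM sets it automatically inside a real array job):

```shell
# Hypothetical parameter file: one line of arguments per array task
printf '%s\n' "--alpha 0.1" "--alpha 0.5" "--alpha 1.0" > params.txt

# Inside a real job script SLURM sets this automatically;
# assigned manually here to simulate task 2
SLURM_ARRAY_TASK_ID=2

# Pick the line matching this task's ID
args=$(sed -n "${SLURM_ARRAY_TASK_ID}p" params.txt)
echo "task ${SLURM_ARRAY_TASK_ID}: ${args}"

# The real job script would then run:
# ./my_program $args
```

With <code>#SBATCH --array=1-3</code>, each task picks a different line of the same file, so one script covers any combination of parameters.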
=== Writing Single SLURM Job Scripts ===
Here is a simple job script example:

==== Basic Script ====
<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=my_job                      # Job name
#SBATCH --account=public-users_v2              # Account name
#SBATCH --partition=power-general-shared-pool  # Partition name
#SBATCH --qos=public                           # qos type
#SBATCH --time=02:00:00                        # Max run time (hh:mm:ss)
#SBATCH --ntasks=1                             # Number of tasks
#SBATCH --nodes=1                              # Number of nodes
#SBATCH --cpus-per-task=1                      # CPUs per task
#SBATCH --mem-per-cpu=4G                       # Memory per CPU
#SBATCH --output=my_job_%j.out                 # Output file
#SBATCH --error=my_job_%j.err                  # Error file
#SBATCH --mail-user=<your email>               # Email address for notifications
#SBATCH --mail-type=END,FAIL                   # Send mail when the job ends, whether it succeeds or fails

echo "Starting my SLURM job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"

# Your application commands go here
# ./my_program

echo "Job completed"
</syntaxhighlight>
To ask for x cores interactively:
<pre>
srun --ntasks=1 --cpus-per-task=x --partition=power-general-public-pool --account=public-users_v2 --qos=public --nodes=1 --pty bash
</pre>

For now, you also need to set the corresponding SLURM variables inside the script, or within the interactive job (the example assumes 48 cores were requested):
<pre>
export SLURM_TASKS_PER_NODE=48
export SLURM_CPUS_ON_NODE=48
</pre>
==== Script for 1 GPU ====
<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=gpu_job             # Job name
#SBATCH --account=my_account           # Account name
#SBATCH --partition=gpu-general-pool   # Partition name
#SBATCH --qos=my_qos                   # qos type
#SBATCH --time=02:00:00                # Max run time
#SBATCH --ntasks=1                     # Number of tasks
#SBATCH --nodes=1                      # Number of nodes
#SBATCH --cpus-per-task=1              # CPUs per task
#SBATCH --gres=gpu:1                   # Number of GPUs
#SBATCH --mem-per-cpu=4G               # Memory per CPU
#SBATCH --output=my_job_%j.out         # Output file
#SBATCH --error=my_job_%j.err          # Error file

module load python/python-3.8

echo "Starting GPU job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"

# Your GPU commands go here

echo "Job completed"
</syntaxhighlight>
To exclude specific nodes, one may add the following:
<syntaxhighlight>
#SBATCH --exclude=compute-0-[100-103],compute-0-67
</syntaxhighlight>
| + | |||
| + | ===Importance of Correct RAM Usage in Jobs=== | ||
| + | |||
| + | When writing SLURM job scripts, it's crucial to understand and correctly specify the memory requirements for your job. | ||
| + | |||
| + | Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors. | ||
| + | ==== Why Correct RAM Usage Matters ==== | ||
| + | |||
| + | * '''Resource Efficiency''': Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster. | ||
| + | * '''Job Stability''': Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources. | ||
| + | * '''Performance''': Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue. | ||
| + | |||
| + | ==== How to Specify Memory in SLURM ==== | ||
| + | |||
| + | * '''--mem''': Specifies the total memory required for the job. | ||
| + | * '''--mem-per-cpu''': Specifies the memory required per CPU. | ||
| + | |||
| + | '''Example''':<syntaxhighlight lang="bash"> | ||
| + | #SBATCH --mem=4G # Total memory for the job | ||
| + | #SBATCH --mem-per-cpu=2G # Memory per CPU | ||
</syntaxhighlight> | </syntaxhighlight> | ||
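With <code>--mem-per-cpu</code>, the total allocation scales with the number of CPUs you request, so it helps to sanity-check the implied total before submitting. A quick illustration (the numbers are arbitrary):

```shell
# Total memory implied by --mem-per-cpu scales with the CPU count
mem_per_cpu_gb=2    # e.g. #SBATCH --mem-per-cpu=2G
cpus_per_task=4     # e.g. #SBATCH --cpus-per-task=4

total_gb=$((mem_per_cpu_gb * cpus_per_task))
echo "requesting ${cpus_per_task} CPUs x ${mem_per_cpu_gb}G = ${total_gb}G total"
```

Here the job would be allocated 8G in total, even though no single flag says so explicitly.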
=== Interactive Jobs ===
<syntaxhighlight lang="bash">
# Start an interactive session:
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --pty bash

# Specify a compute node:
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --nodelist="compute-0-12" --pty bash

# Using a GUI (X11 forwarding):
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --x11 /bin/bash
</syntaxhighlight>

=== Submitting RELION Jobs ===

To submit a RELION job interactively on the <code>gpu-relion</code> queue with X11 forwarding, use the following steps:<syntaxhighlight lang="bash">
# Start an interactive session with X11:
srun --ntasks=1 -p gpu-relion-pool -A gpu-relion-users_v2 --qos=owner --x11 --pty bash
# Load the RELION module:
module load relion/relion-4.0.1
# Launch RELION:
relion
</syntaxhighlight>

== Running a MATLAB Example ==
In this example there are 3 files:

myTable.m ⇒ This MATLAB script prints a table of computed values in a loop
<pre>
fprintf('=======================================\n');
fprintf('    a        b        c        d      \n');
fprintf('=======================================\n');
while 1
    for j = 1:10
        a = sin(10*j);
        b = a*cos(10*j);
        c = a + b;
        d = a - b;
        fprintf('%+6.5f %+6.5f %+6.5f %+6.5f \n',a,b,c,d);
    end
end
fprintf('=======================================\n');
</pre>

my_table_script.sh ⇒ This script executes the MATLAB program. Submit it with sbatch:
<pre>
#!/bin/bash

#SBATCH --mem=50M
#SBATCH --partition power-general-shared-pool
#SBATCH -A public-users_v2
hostname

cd /a/home/cc/tree/taucc/staff/dvory/matlab

matlab -nodisplay -nosplash -nodesktop -r "myTable; exit;"
</pre>

run_in_loop.sh ⇒ One may also generate many jobs with this file (although for many similar jobs, a job array as described above is easier on the scheduler):
<pre>
#!/bin/bash

for i in {1..100}
do
    sbatch my_table_script.sh
done
</pre>

Run the jobs with the following command (after making the script executable with chmod +x run_in_loop.sh):
<pre>
./run_in_loop.sh
</pre>
== AlphaFold ==
AlphaFold is a deep learning tool designed for predicting protein structures.

'''Guides:'''

[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]

[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]

== Common SLURM Commands ==
<syntaxhighlight lang="bash">
# View all queues (partitions):
sinfo
# View all jobs:
squeue
# View details of a specific job:
scontrol show job <job_number>
# Get information about partitions:
scontrol show partition
</syntaxhighlight>

== Troubleshooting & Tips ==

=== Common Errors ===

# <code>srun: error: Unable to allocate resources: No partition specified or system default partition</code> <br />'''Solution:''' Always specify a partition. Example: <code>srun --pty -c 1 --mem=2G -p power-general /bin/bash</code>
# The job failed, and <code>scontrol show job job_id</code> or <code>sacct -j job_id -o JobID,JobName,State%20</code> <br />shows <code>JobState=OUT_OF_MEMORY Reason=OutOfMemory</code> or:<syntaxhighlight lang="bash">
       JobID    JobName                State
------------ ---------- --------------------
          71   oom_test        OUT_OF_MEMORY
    71.batch      batch        OUT_OF_MEMORY
   71.extern     extern            COMPLETED
</syntaxhighlight>This means the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help understanding how much RAM your job may need.

=== Chain Jobs ===
Use the <code>--depend</code> flag to set job dependencies.

'''Example:'''
<syntaxhighlight lang="bash">
sbatch --ntasks=1 --time=60 -p power-general-shared-pool -A public-users_v2 --qos=public --depend=afterok:45001 do_work.bash
</syntaxhighlight>
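Job IDs are rarely known in advance, so chains are usually built by capturing the ID at submission time: <code>sbatch</code> prints <code>Submitted batch job <id></code> (or only the ID when called with <code>--parsable</code>). A sketch with the <code>sbatch</code> output stubbed in, since the command itself only runs on the cluster:

```shell
# Stand-in for: out=$(sbatch pre_process.bash)
out="Submitted batch job 45001"

# Keep only the trailing job ID
jid=${out##* }
echo "captured job id: $jid"

# The dependent job would then be submitted as:
# sbatch --depend=afterok:"$jid" do_work.bash
# or, more directly: jid=$(sbatch --parsable pre_process.bash)
```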

=== Always Specify Resources ===
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.

=== Attaching to Running Jobs ===
If you need to monitor or interact with a running job, use <code>sattach</code>. This command allows you to attach to a job step's input, output, and error streams in real time.

'''Example:'''
<syntaxhighlight lang="bash">
sattach <job_id.step_id>
</syntaxhighlight>

To view the job steps of a specific job, use the following command:

<syntaxhighlight lang="bash">
scontrol show job <job_id>
</syntaxhighlight>

Look for sections labeled "StepId" within the output.

'''Note:''' <code>sattach</code> is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like <code>tail -f</code>, allowing you to monitor the output stream.

=== Estimating RAM Usage ===

When writing SLURM job scripts, it's crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.

==== Tips for Estimating RAM Usage ====

* Check application documentation: Refer to the official documentation or user guides for memory-related information.
* Run a small test job: Submit a smaller version of your job and monitor its memory usage using commands like <code>free -m</code>, <code>top</code>, or <code>htop</code>.
* Use profiling tools: Tools like <code>valgrind</code>, <code>gprof</code>, or built-in profilers can help you understand memory usage.
* Analyze previous jobs: Review SLURM logs and job statistics for insights into the memory consumption of past jobs.
* Consult with peers or experts: Ask colleagues or experts who have experience with similar workloads.

==== Example: Monitoring Memory Usage ====
<syntaxhighlight lang="bash">
#!/bin/bash

#SBATCH --job-name=memory_test
#SBATCH --account=your_account
#SBATCH --partition=your_partition
#SBATCH --qos=your_qos
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --output=memory_test.out
#SBATCH --error=memory_test.err

# Monitor memory usage before running the application
echo "Memory usage before running the job:"
free -m

# Your application commands go here
# ./your_application

# Monitor memory usage after running the application
echo "Memory usage after running the job:"
free -m
</syntaxhighlight>

==== General Tips ====

* Start small: Begin with a conservative memory request and increase it based on observed usage.
* Consider peak usage: Plan for peak memory usage to avoid OOM errors.
* Use SLURM's memory reporting: Use <code>sacct</code> to view memory usage statistics.

'''Example:'''
<syntaxhighlight lang="bash">
sacct -j <job_id> --format=JobID,JobName,MaxRSS,Elapsed
</syntaxhighlight>
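<code>MaxRSS</code> values come back with a unit suffix (commonly <code>K</code> for kilobytes, sometimes <code>M</code> or <code>G</code>), which makes comparison against your <code>--mem</code> request awkward. A small helper to normalize such values to gigabytes; the suffix handling is an assumption about how your <code>sacct</code> formats the column, so check a sample value first:

```shell
# Convert a MaxRSS value as printed by sacct (e.g. "3145728K") to gigabytes.
# Assumes binary units and a K/M/G suffix; plain numbers are treated as bytes.
to_gb() {
  local v=$1 n unit
  n=${v%[KMG]}              # numeric part
  unit=${v##*[0-9]}         # trailing unit letter, if any
  case $unit in
    K) awk -v n="$n" 'BEGIN { printf "%.2f\n", n/1024/1024 }' ;;
    M) awk -v n="$n" 'BEGIN { printf "%.2f\n", n/1024 }' ;;
    G) awk -v n="$n" 'BEGIN { printf "%.2f\n", n }' ;;
    *) awk -v n="$n" 'BEGIN { printf "%.2f\n", n/1024/1024/1024 }' ;;
  esac
}

to_gb 3145728K    # a 3 GiB resident set reported in kilobytes
to_gb 512M
```

If the peak is close to (or above) what you requested, resubmit with a larger <code>--mem</code> value.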
Latest revision as of 06:44, 23 October 2025
Accessing the System
- We have chatgpt page for the new qos configuration, please look in HPC-helper-toolkit
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:
- slurmlogin.tau.ac.il
Requirements for Access
- Group Membership: You must be part of the "power" group to access the resources.
- University Credentials: Use your Tel Aviv University username and password to log in.
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.
SSH Example
To access the system using SSH, use the following example:
# Replace 'your_username' with your actual Tel Aviv University username
ssh your_username@slurmlogin.tau.ac.il
Your connection will be automatically routed to one of the login nodes: powerslurm-login, powerslurm-login2, or powerslurm-login3.
If you have an SSH key set up for password-less login, you can specify it like this:
# Replace 'your_username' and '/path/to/your/private_key' accordingly
ssh -i /path/to/your/private_key your_username@slurmlogin.tau.ac.il
Environment Modules
Environment Modules in SLURM allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.
Here are some common commands to work with environment modules:
#List Available Modules: To see all the modules available on the system, use:
module avail
#To search for a specific module by name (e.g., `gcc`), use:
module avail gcc/gcc-12.1.0
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:
module spider gcc/gcc-12.1.0
#View Module Settings: To see what environment variables and settings will be modified by a module, use:
module show gcc/gcc-12.1.0
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:
module load gcc/gcc-12.1.0
#List Loaded Modules: To view all currently loaded modules in your session, use:
module list
#Unload a Module: To unload a specific module from your environment, use:
module unload gcc/gcc-12.1.0
#Unload All Modules:** If you need to clear your environment of all loaded modules, use:
module purge
By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.
Basic Job Submission Commands
Finding Your Account and Partition
Before submitting a job, you need to know which partitions you have permission to use.
Run the command `check_my_partitions` to view a list of all the partitions you have permission to send jobs to.
Submitting Jobs
sbatch: Submits a job script for batch processing.
Example:
sbatch --ntasks=1 --time=10 -p power-general-shared-pool -A public-users_v2 --qos=public pre_process.bash
# This command submits pre_process.bash to the power-general partition for 10 minutes.
# With 1 GPU:
sbatch --gres=gpu:1 -p gpu-general-pool -A public-users_v2 --qos=public gpu_job.sh
Submitting Multiple Jobs
If you need to submit many similar jobs (hundreds or more), you should use a **Slurm job array**. Submitting each job individually using separate `sbatch` commands places a heavy load on the scheduler, slowing down job processing across the cluster. Job arrays allow you to bundle many related jobs together as a single submission. This is more efficient and easier to manage.
Each task in the array runs independently like a separate job, but the array is submitted as a single job ID for scheduling and tracking purposes.
You can customize the behavior of each task using the environment variable SLURM_ARRAY_TASK_ID.
Script Example: Job Array
This script submits a job array with 100 tasks, each processing a different input file. The array reduces scheduler load and simplifies job tracking.
#!/bin/bash
#SBATCH --job-name=array_job # Job name
#SBATCH --account=public-users_v2 # Account name
#SBATCH --partition=power-general-shared-pool # Partition name
#SBATCH --qos=public # qos type
#SBATCH --time=02:00:00 # Max run time (hh:mm:ss)
#SBATCH --ntasks=1 # Number of tasks per array job
#SBATCH --nodes=1 # Number of nodes
#SBATCH --cpus-per-task=1 # CPUs per task
#SBATCH --mem-per-cpu=4G # Memory per CPU
#SBATCH --array=1-100 # Array range: 100 tasks
#SBATCH --output=array_job_%A_%a.out # Output file: Job ID and array task ID
#SBATCH --error=array_job_%A_%a.err # Error file: Job ID and array task ID
echo "Starting SLURM array task"
echo "Job ID: $SLURM_JOB_ID"
echo "Array Task ID: $SLURM_ARRAY_TASK_ID"
echo "Running on node(s): $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"
# Your application commands go here
# You can use $SLURM_ARRAY_TASK_ID to customize behavior per task
# ./my_program input_${SLURM_ARRAY_TASK_ID}.txt
echo "Task completed"
In this example:
- The job array consists of 100 tasks.
- Each task runs the same script but with a different input file.
- You access the task ID using the environment variable
SLURM_ARRAY_TASK_ID. - The output and error logs are separated per task using
%A(job ID) and%a(array task ID).
Script Example: Job Array with different parameters per task
This script submits a job array with 3 tasks. Each task runs the same program with a different input file: `data1.txt`, `data2.txt`, and `data3.txt`.
#!/bin/bash
#SBATCH --job-name=array_job # Job name
#SBATCH --account=public-users_v2 # Account name
#SBATCH --partition=power-general-shared-pool # Partition name
#SBATCH --qos=public # qos type
#SBATCH --time=01:00:00 # Max run time (hh:mm:ss)
#SBATCH --ntasks=1 # Number of tasks per array job
#SBATCH --nodes=1 # Number of nodes
#SBATCH --cpus-per-task=1 # CPUs per task
#SBATCH --mem-per-cpu=2G # Memory per CPU
#SBATCH --array=1-3 # Run 3 tasks with IDs 1, 2, 3
#SBATCH --output=array_%A_%a.out # Output file: Job ID and task ID
#SBATCH --error=array_%A_%a.err # Error file: Job ID and task ID
echo "Starting SLURM array task"
echo "Job ID: $SLURM_JOB_ID"
echo "Array Task ID: $SLURM_ARRAY_TASK_ID"
# Each task runs the program with a different input file
./my_program data${SLURM_ARRAY_TASK_ID}.txt
echo "Task completed"
Writing Single SLURM Job Scripts
Here is a simple job script example:
Basic Script
#!/bin/bash
#SBATCH --job-name=my_job # Job name
#SBATCH --account=public-users_v2 # Account name
#SBATCH --partition=power-general-shared-pool # Partition name
#SBATCH --qos=public # qos type
#SBATCH --time=02:00:00 # Max run time (hh:mm:ss)
#SBATCH --ntasks=1 # Number of tasks
#SBATCH --nodes=1 # Number of nodes
#SBATCH --cpus-per-task=1 # CPUs per task
#SBATCH --mem-per-cpu=4G # Memory per CPU
#SBATCH --output=my_job_%j.out # Output file
#SBATCH --error=my_job_%j.err # Error file
#SBATCH --mail-user=<your email> # Your mail address to receive an email
#SBATCH --mail-type=END,FAIL # The mail will be sent upon ending the script successfully or not
echo "Starting my SLURM job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"
# Your application commands go here
# ./my_program
echo "Job completed"
To ask for x cores interactively:
srun --ntasks=1 --cpus-per-task=x --partition=power-general-public-pool --account=public-users_v2 --qos=public --nodes=1 --pty bash
However, need for now also to set slurm parameters inside the script, or within the interactive job:
export SLURM_TASKS_PER_NODE=48 export SLURM_CPUS_ON_NODE=48
Script for 1 GPU
#!/bin/bash
#SBATCH --job-name=gpu_job # Job name
#SBATCH --account=my_account # Account name
#SBATCH --partition=gpu-general-pool # Partition name
#SBATCH --qos=my_qos # qos type
#SBATCH --time=02:00:00 # Max run time
#SBATCH --ntasks=1 # Number of tasks
#SBATCH --nodes=1 # Number of nodes
#SBATCH --cpus-per-task=1 # CPUs per task
#SBATCH --gres=gpu:1 # Number of GPUs
#SBATCH --mem-per-cpu=4G # Memory per CPU
#SBATCH --output=my_job_%j.out # Output file
#SBATCH --error=my_job_%j.err # Error file
module load python/python-3.8
echo "Starting GPU job"
echo "Job ID: $SLURM_JOB_ID"
echo "Running on nodes: $SLURM_JOB_NODELIST"
echo "Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE"
# Your GPU commands go here
echo "Job completed"
For excluding a node, one may add the following
#SBATCH --exclude=compute-0-[100-103],compute-0-67Importance of Correct RAM Usage in Jobs
When writing SLURM job scripts, it's crucial to understand and correctly specify the memory requirements for your job.
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.
Why Correct RAM Usage Matters
- Resource Efficiency: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.
- Job Stability: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.
- Performance: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.
=== How to Specify Memory in SLURM ===

* '''--mem''': Specifies the total memory required per node for the job.
* '''--mem-per-cpu''': Specifies the memory required per allocated CPU.

Note that these two options are mutually exclusive; use one or the other, not both.

Example:

<syntaxhighlight lang="bash">
#SBATCH --mem=4G          # Total memory for the job
</syntaxhighlight>

or:

<syntaxhighlight lang="bash">
#SBATCH --mem-per-cpu=2G  # Memory per CPU
</syntaxhighlight>
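With <code>--mem-per-cpu</code>, the memory actually reserved scales with the number of CPUs allocated. A small sketch of the arithmetic, assuming a hypothetical job that requests 4 CPUs at 2G each:

```shell
# Sketch: effective memory reserved when using --mem-per-cpu.
CPUS_PER_TASK=4      # from --cpus-per-task=4
MEM_PER_CPU_G=2      # from --mem-per-cpu=2G
echo "Effective memory: $((CPUS_PER_TASK * MEM_PER_CPU_G))G"
```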
== Interactive Jobs ==

Start an interactive session:

<syntaxhighlight lang="bash">
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --pty bash
</syntaxhighlight>

To request a specific compute node:

<syntaxhighlight lang="bash">
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --nodelist="compute-0-12" --pty bash
</syntaxhighlight>

To run with a GUI (X11 forwarding):

<syntaxhighlight lang="bash">
srun --ntasks=1 -p power-general-shared-pool -A public-users_v2 --qos=public --x11 /bin/bash
</syntaxhighlight>
== Submitting RELION Jobs ==

To submit a RELION job interactively on the gpu-relion queue with X11 forwarding, use the following steps:

Start an interactive session with X11:

<syntaxhighlight lang="bash">
srun --ntasks=1 -p gpu-relion-pool -A gpu-relion-users_v2 --qos=owner --x11 --pty bash
</syntaxhighlight>

Load the RELION module:

<syntaxhighlight lang="bash">
module load relion/relion-4.0.1
</syntaxhighlight>

Launch RELION:

<syntaxhighlight lang="bash">
relion
</syntaxhighlight>
== Running a MATLAB Example ==

In this example there are three files:

'''myTable.m''' ⇒ This MATLAB script computes and prints a small table of values:

<syntaxhighlight lang="matlab">
fprintf('=======================================\n');
fprintf('    a        b        c        d\n');
fprintf('=======================================\n');
for j = 1:10
    a = sin(10*j);
    b = a*cos(10*j);
    c = a + b;
    d = a - b;
    fprintf('%+6.5f %+6.5f %+6.5f %+6.5f \n',a,b,c,d);
end
fprintf('=======================================\n');
</syntaxhighlight>
'''my_table_script.sh''' ⇒ This script runs the MATLAB program; submit it with sbatch:

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --mem=50M
#SBATCH --partition=power-general-shared-pool
#SBATCH -A public-users_v2
hostname
cd /a/home/cc/tree/taucc/staff/dvory/matlab
matlab -nodisplay -nosplash -nodesktop -r "run('myTable.m'); exit;"
</syntaxhighlight>
'''run_in_loop.sh''' ⇒ This script submits many copies of the job:

<syntaxhighlight lang="bash">
#!/bin/bash
for i in {1..100}
do
    sbatch my_table_script.sh
done
</syntaxhighlight>

Run it with the following command (after making it executable with chmod +x run_in_loop.sh):

<syntaxhighlight lang="bash">
./run_in_loop.sh
</syntaxhighlight>
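Submitting many identical jobs can also be done with a single SLURM job array instead of a shell loop; each array task can read its own index from <code>$SLURM_ARRAY_TASK_ID</code> inside the batch script. A sketch (the command is echoed here for illustration, since sbatch is only available on the cluster):

```shell
# Sketch: one job array submission replaces the 100-iteration loop above.
# On the cluster, run the sbatch command directly instead of echoing it.
cmd="sbatch --array=1-100 my_table_script.sh"
echo "$cmd"
```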
== AlphaFold ==

AlphaFold is a deep learning tool designed for predicting protein structures.

Guides:
== Common SLURM Commands ==

<syntaxhighlight lang="bash">
# View all queues (partitions):
sinfo

# View all jobs:
squeue

# View details of a specific job:
scontrol show job <job_number>

# Get information about partitions:
scontrol show partition
</syntaxhighlight>
== Troubleshooting & Tips ==

=== Common Errors ===

<pre>
srun: error: Unable to allocate resources: No partition specified or system default partition
</pre>

Solution: Always specify a partition. Example:

<syntaxhighlight lang="bash">
srun --pty -c 1 --mem=2G -p power-general /bin/bash
</syntaxhighlight>

If a job failed, and scontrol show job <job_id> shows JobState=OUT_OF_MEMORY Reason=OutOfMemory, or sacct -j <job_id> -o JobID,JobName,State%20 shows output like:

<pre>
JobID        JobName         State
------------ ---------- --------------------
71           oom_test   OUT_OF_MEMORY
71.batch     batch      OUT_OF_MEMORY
71.extern    extern     COMPLETED
</pre>

it means that the RAM requested for the job was not enough. Resubmit the job with more RAM; see the "Estimating RAM Usage" section below for help with understanding how much RAM your job may need.
=== Chain Jobs ===

Use the --depend flag to set job dependencies.

Example:

<syntaxhighlight lang="bash">
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --depend=afterok:45001 do_work.bash
</syntaxhighlight>
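In practice the dependency job id is usually captured from the first submission; <code>sbatch --parsable</code> prints just the job id, which makes this easy to script. A sketch, with the first sbatch call stubbed out by echo so the flow can be shown off-cluster (the script names are hypothetical):

```shell
# Sketch: chain two jobs. On the cluster, replace the echo stub with:
#   first_id=$(sbatch --parsable step1.sh)
first_id=$(echo 45001)
echo "sbatch --depend=afterok:${first_id} do_work.bash"
```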
=== Always Specify Resources ===

When submitting jobs, ensure you include all required resources, such as partition, memory, and CPUs, to avoid job failures.

=== Attaching to Running Jobs ===

If you need to monitor or interact with a running job, use sattach. This command allows you to attach to a job's input, output, and error streams in real time.

Example:

<syntaxhighlight lang="bash">
sattach <job_id>
</syntaxhighlight>

To view the job steps of a specific job, use the following command:

<syntaxhighlight lang="bash">
scontrol show job <job_id>
</syntaxhighlight>

Look for sections labeled "StepId" within the output.

For specific job steps, use:

<syntaxhighlight lang="bash">
sattach <job_id.step_id>
</syntaxhighlight>

Note: sattach is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like tail -f, allowing you to monitor the output stream.
== Estimating RAM Usage ==
=== Tips for Estimating RAM Usage ===

* '''Check Application Documentation''': Refer to the official documentation or user guides for memory-related information.
* '''Run a Small Test Job''': Submit a smaller version of your job and monitor its memory usage using commands like <code>free -m</code>, <code>top</code>, or <code>htop</code>.
* '''Use Profiling Tools''': Tools like <code>valgrind</code>, <code>gprof</code>, or built-in profilers can help you understand memory usage.
* '''Analyze Previous Jobs''': Review SLURM logs and job statistics for insights into the memory consumption of past jobs.
* '''Consult with Peers or Experts''': Ask colleagues or experts who have experience with similar workloads.
=== Example: Monitoring Memory Usage ===

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=memory_test
#SBATCH --account=your_account
#SBATCH --partition=your_partition
#SBATCH --qos=your_qos
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --output=memory_test.out
#SBATCH --error=memory_test.err

# Monitor memory usage
echo "Memory usage before running the job:"
free -m

# Your application commands go here
# ./your_application

# Monitor memory usage after running the job
echo "Memory usage after running the job:"
free -m
</syntaxhighlight>
=== General Tips ===

* '''Start Small''': Begin with a conservative memory request and increase it based on observed usage.
* '''Consider Peak Usage''': Plan for peak memory usage to avoid OOM errors.
* '''Use SLURM's Memory Reporting''': Use <code>sacct</code> to view memory usage statistics.
Example:

<syntaxhighlight lang="bash">
sacct -j <job_id> --format=JobID,JobName,MaxRSS,Elapsed
</syntaxhighlight>
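sacct typically reports MaxRSS in kilobyte units such as <code>1048576K</code>; converting it makes it easier to compare against the <code>--mem</code> you requested. A sketch using plain shell arithmetic, with a sample value standing in for real sacct output:

```shell
# Sketch: convert a sacct MaxRSS value such as "1048576K" to gigabytes.
maxrss="1048576K"            # sample value; on the cluster, take this from sacct
kb=${maxrss%K}               # strip the trailing K
echo "Peak memory: $((kb / 1024 / 1024))G"
```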