<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://hpcguide.tau.ac.il/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eyal</id>
	<title>HPC Guide - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://hpcguide.tau.ac.il/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eyal"/>
	<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Special:Contributions/Eyal"/>
	<updated>2026-04-27T07:00:21Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.5</generator>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Using_jupyter_on_Slurm&amp;diff=1545</id>
		<title>Using jupyter on Slurm</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Using_jupyter_on_Slurm&amp;diff=1545"/>
		<updated>2025-12-03T14:48:30Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Using Jupyter Lab on Slurm Cluster&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 1&amp;#039;&amp;#039;&amp;#039;: Start an Interactive Job from the Head Node&lt;br /&gt;
&lt;br /&gt;
First, you need to request resources on the Slurm cluster. Run the following command to start an interactive job on the appropriate partition (&amp;lt;partition-name&amp;gt;) and with your account (&amp;lt;account-name&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
     srun -p &amp;lt;partition-name&amp;gt; -A &amp;lt;account-name&amp;gt; --pty bash &lt;br /&gt;
&lt;br /&gt;
Replace:&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;partition-name&amp;gt; with your Slurm partition name (e.g., power-general).&lt;br /&gt;
    &amp;lt;account-name&amp;gt; with your Slurm account name (e.g., power-leahfa-users).&lt;br /&gt;
&lt;br /&gt;
This command will allocate resources for your job and provide you with an interactive shell on a compute node (e.g., compute-0-62).&lt;br /&gt;
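&lt;br /&gt;
If you need more than the default resources, you can request CPUs, memory, and a time limit in the same command; a minimal example (the values are only an illustration, adjust them to your work):&lt;br /&gt;
&lt;br /&gt;
     srun -p &amp;lt;partition-name&amp;gt; -A &amp;lt;account-name&amp;gt; --cpus-per-task=4 --mem=8G --time=02:00:00 --pty bash &lt;br /&gt;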
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 2&amp;#039;&amp;#039;&amp;#039;: Load the Jupyter Environment&lt;br /&gt;
&lt;br /&gt;
Once inside the interactive session, load the mamba-env158/jupyter module:&lt;br /&gt;
&lt;br /&gt;
      module load mamba-env158/jupyter&lt;br /&gt;
&lt;br /&gt;
This will prepare the environment to run Jupyter.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 3&amp;#039;&amp;#039;&amp;#039;: Start Jupyter Lab&lt;br /&gt;
&lt;br /&gt;
Now, start the Jupyter Lab server. Run the following command inside the interactive session:&lt;br /&gt;
&lt;br /&gt;
    jupyter lab --ip=* --port=8892 --no-browser&lt;br /&gt;
&lt;br /&gt;
    The --ip=* option binds Jupyter Lab to all available network interfaces.&lt;br /&gt;
    The --port=8892 specifies that Jupyter Lab will use port 8892.&lt;br /&gt;
    The --no-browser option prevents Jupyter Lab from opening a browser automatically.&lt;br /&gt;
&lt;br /&gt;
Once Jupyter Lab starts, you should see output like this:&lt;br /&gt;
&lt;br /&gt;
     [I 2025-03-16 09:25:06.190 ServerApp] Jupyter Server 2.14.2 is running at:&lt;br /&gt;
     [I 2025-03-16 09:25:06.190 ServerApp] http://localhost:8892/lab?token=&amp;lt;token&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
[[File:2025-12-03 161826.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 4&amp;#039;&amp;#039;&amp;#039;: Set Up SSH Port Forwarding&lt;br /&gt;
&lt;br /&gt;
To access the Jupyter Lab server from your local machine, you need to set up SSH port forwarding.&lt;br /&gt;
&lt;br /&gt;
Open a terminal on your local machine (for example, Command Prompt or PowerShell on Windows) and run the following command:&lt;br /&gt;
&lt;br /&gt;
        ssh -N -L 8892:&amp;lt;compute-node-name&amp;gt;:8892 &amp;lt;username&amp;gt;@&amp;lt;headnode-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace:&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;compute-node-name&amp;gt; with the name of the compute node where Jupyter Lab is running (e.g., compute-0-62).&lt;br /&gt;
    &amp;lt;username&amp;gt; with your Slurm username.&lt;br /&gt;
    &amp;lt;headnode-name&amp;gt; with the hostname or IP address of the Slurm head node (e.g., powerslurm-login.tau.ac.il).&lt;br /&gt;
&lt;br /&gt;
This command forwards port 8892 on your local machine, through the head node, to port 8892 on the compute node where Jupyter Lab is running.&lt;br /&gt;
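&lt;br /&gt;
For instance, with the placeholder values used above (compute node compute-0-62 and head node powerslurm-login.tau.ac.il), the command would look like this:&lt;br /&gt;
&lt;br /&gt;
        ssh -N -L 8892:compute-0-62:8892 &amp;lt;username&amp;gt;@powerslurm-login.tau.ac.il&lt;br /&gt;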
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
[[File:2025-12-03 162115.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 5&amp;#039;&amp;#039;&amp;#039;: Open Jupyter Lab in Your Browser&lt;br /&gt;
&lt;br /&gt;
After establishing the SSH tunnel, open your web browser and navigate to:&lt;br /&gt;
&lt;br /&gt;
    http://localhost:8892/lab?token=&amp;lt;token&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This URL will give you access to your Jupyter Lab instance.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
[[File:2025-12-03 162302.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 6&amp;#039;&amp;#039;&amp;#039;: Start Using Jupyter Lab&lt;br /&gt;
&lt;br /&gt;
Once you’ve entered the token, you can start working with Jupyter Lab in your browser, using the resources of the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Step 7&amp;#039;&amp;#039;&amp;#039;: Closing the Job&lt;br /&gt;
&lt;br /&gt;
When you&amp;#039;re finished using Jupyter Lab, return to the interactive session and press Ctrl+C to stop the Jupyter Lab server. Then, exit the interactive session by typing:&lt;br /&gt;
&lt;br /&gt;
    exit&lt;br /&gt;
&lt;br /&gt;
Additional Notes:&lt;br /&gt;
&lt;br /&gt;
Ensure that the SSH tunnel (ssh -N -L ...) remains open while you are working with Jupyter Lab.&amp;lt;br&amp;gt;&lt;br /&gt;
You can use different ports if 8892 is already in use, just adjust the port number in both the jupyter lab command and the SSH command.&amp;lt;br&amp;gt;&lt;br /&gt;
If you encounter any issues with port forwarding, double-check the node name (&amp;lt;compute-node-name&amp;gt;) and the port number.&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_162302.jpg&amp;diff=1543</id>
		<title>File:2025-12-03 162302.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_162302.jpg&amp;diff=1543"/>
		<updated>2025-12-03T14:32:10Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;browser jupyter example&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_162115.jpg&amp;diff=1542</id>
		<title>File:2025-12-03 162115.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_162115.jpg&amp;diff=1542"/>
		<updated>2025-12-03T14:30:06Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ssh tunnel example&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_161826.jpg&amp;diff=1541</id>
		<title>File:2025-12-03 161826.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:2025-12-03_161826.jpg&amp;diff=1541"/>
		<updated>2025-12-03T14:25:58Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;example of jupyter interactive job&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1520</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1520"/>
		<updated>2025-04-08T07:12:49Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* slurmlogin.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@slurmlogin.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Your connection will be automatically routed to one of the login nodes:&lt;br /&gt;
powerslurm-login, powerslurm-login2, or powerslurm-login3.&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@slurmlogin.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
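&lt;br /&gt;
If you have not created a key yet, a minimal sketch for setting one up from your local machine (assuming OpenSSH is available there):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Generate a key pair (accept the defaults or choose a passphrase)&lt;br /&gt;
ssh-keygen -t ed25519&lt;br /&gt;
&lt;br /&gt;
# Copy the public key to the login node; afterwards you can log in without a password&lt;br /&gt;
ssh-copy-id your_username@slurmlogin.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;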
&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules in SLURM allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
&lt;br /&gt;
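In addition to that site-provided script, the standard Slurm accounting command can list the accounts associated with your user; a minimal sketch (on some clusters the Partition column may be empty if associations are not partition-specific):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show the account/partition associations for your user&lt;br /&gt;
sacctmgr show associations user=$USER format=Account,Partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;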
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
   # This command submits pre_process.bash to the power-general partition for 10 minutes. &lt;br /&gt;
   # With 1 GPU:&lt;br /&gt;
    sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                     # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To ask for x cores interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=x  --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, for now you also need to set the corresponding Slurm environment variables inside the script, or within the interactive job (set the values to the number of cores you requested):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export SLURM_TASKS_PER_NODE=48&lt;br /&gt;
export SLURM_CPUS_ON_NODE=48&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To define a job array, you may add:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-300&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
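Each task in the array can then use the SLURM_ARRAY_TASK_ID environment variable to pick its own input; a minimal sketch (my_program and the input file names are hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-300&lt;br /&gt;
&lt;br /&gt;
# Each array task processes a different input file&lt;br /&gt;
./my_program input_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;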
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                      # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To exclude specific nodes, you may add the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --exclude=compute-0-[100-103],compute-0-67&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running matlab example==&lt;br /&gt;
In this example there are 3 files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ This MATLAB script prints a table of computed values in an endless loop (a stand-in for a long-running computation)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
while 1&lt;br /&gt;
                for j = 1:10&lt;br /&gt;
                                a = sin(10*j);&lt;br /&gt;
                                b = a*cos(10*j);&lt;br /&gt;
                                c = a + b;&lt;br /&gt;
                                d = a - b;&lt;br /&gt;
                                fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
                end&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ This script executes the MATLAB program. You just need to submit it with sbatch&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem=50M&lt;br /&gt;
#SBATCH --partition power-general&lt;br /&gt;
#SBATCH -A power-general-users&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -nosplash -nodesktop -r &amp;quot;run(&amp;#039;myTable.m&amp;#039;); exit;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ Optionally, this script submits many copies of the job in a loop&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
&lt;br /&gt;
do&lt;br /&gt;
&lt;br /&gt;
        sbatch my_table_script.sh&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the following command (after making the script executable with chmod +x run_in_loop.sh):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
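&lt;br /&gt;
Because myTable.m loops forever, the submitted jobs will keep running until their time limit or until you cancel them. To cancel a single job or all of your jobs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scancel &amp;lt;job_id&amp;gt;&lt;br /&gt;
scancel -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;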
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guides:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
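To list only your own jobs rather than the whole queue, a common variant is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View only your jobs:&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;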
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# The job failed, and when running scontrol show job job_id or sacct -j job_id -o JobID,JobName,State%20  &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means that the RAM requested for the job was not enough; please resubmit the job with more RAM (see the example after this list). See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help estimating how much RAM your job may need.&lt;br /&gt;
&lt;br /&gt;
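For example, a job that ran out of memory could be resubmitted with an explicit, larger memory request (my_job.sh is a hypothetical script name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Resubmit with 8 GB of RAM&lt;br /&gt;
sbatch --mem=8G -p power-general -A power-general-users my_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;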
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--dependency&amp;lt;/code&amp;gt; flag to set job dependencies, e.g. &amp;lt;code&amp;gt;--dependency=afterok:&amp;lt;job_id&amp;gt;&amp;lt;/code&amp;gt; to start a job only after the given job completes successfully.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like `free -m`, `top`, or `htop`.&lt;br /&gt;
* Use Profiling Tools: Tools like `valgrind`, `gprof`, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use `sacct` to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
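&lt;br /&gt;
# If the seff utility is installed on the cluster (it is a contributed Slurm tool,&lt;br /&gt;
# so availability varies), it prints a compact CPU/memory efficiency summary:&lt;br /&gt;
seff &amp;lt;job_id&amp;gt;&lt;br /&gt;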
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1429</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1429"/>
		<updated>2023-09-10T08:54:37Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Welcome to HPC Guide.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
[[Linux basic commands]]&lt;br /&gt;
&lt;br /&gt;
[[Public queues]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a queue]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a slurm queue]]&lt;br /&gt;
&lt;br /&gt;
[[PBS-To-SLURM]]&lt;br /&gt;
&lt;br /&gt;
[[Creaing and using conda environment]]&lt;br /&gt;
&lt;br /&gt;
[[Palo Alto VPN for linux]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold]]&lt;br /&gt;
&lt;br /&gt;
[[Using GPU]]&lt;br /&gt;
&lt;br /&gt;
This HPC tutorial is designed for researchers at TAU who need computational resources and wish to explore and use our High Performance Computing (HPC) core facilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The audience may be completely new to HPC concepts, but must have a basic understanding of computers and computer programming.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is HPC?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
“High Performance Computing” (HPC) is computing on a “Supercomputer”, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a computer at the front line of contemporary processing capacity – particularly speed of calculation and available memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The components of a cluster are usually connected to each other through fast local area networks (“LAN”), with each node (a computer used as a server) running its own instance of an operating system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
high-speed networks, and software for high performance distributed computing.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=PBS-To-SLURM&amp;diff=1428</id>
		<title>PBS-To-SLURM</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=PBS-To-SLURM&amp;diff=1428"/>
		<updated>2023-09-10T08:52:29Z</updated>

		<summary type="html">&lt;p&gt;Eyal: Created page with &amp;quot; &amp;#039;&amp;#039;&amp;#039;Translating PBS Scripts to Slurm Scripts&amp;#039;&amp;#039;&amp;#039;  The following table contains a list of common commands and terms used with the TORQUE/PBS scheduler, and the corresponding com...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Translating PBS Scripts to Slurm Scripts&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The following table contains a list of common commands and terms used with the TORQUE/PBS scheduler, and the corresponding commands and terms used under the Slurm scheduler.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This sheet can be used to assist in translating your existing PBS scripts into Slurm scripts to be read by the new scheduler, or as a reference when creating new Slurm job scripts. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
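A few common correspondences (a minimal sketch for orientation; queue/partition names and resource values are placeholders, not site-specific settings):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub script.sh  ⇒  sbatch script.sh&lt;br /&gt;
qstat -u &amp;lt;username&amp;gt;  ⇒  squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
qdel &amp;lt;job id&amp;gt;  ⇒  scancel &amp;lt;job id&amp;gt;&lt;br /&gt;
#PBS -q &amp;lt;queue&amp;gt;  ⇒  #SBATCH -p &amp;lt;partition&amp;gt;&lt;br /&gt;
#PBS -l walltime=1:00:00  ⇒  #SBATCH --time=1:00:00&lt;br /&gt;
#PBS -l select=1:ncpus=4  ⇒  #SBATCH --ntasks=1 --cpus-per-task=4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;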
[[File:User-commands.jpg|frame|left]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Environment.jpg|frame|left]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Job-specification.jpg|frame|left]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:Job-specification.jpg&amp;diff=1427</id>
		<title>File:Job-specification.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:Job-specification.jpg&amp;diff=1427"/>
		<updated>2023-09-10T08:49:41Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Job-specification&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:Environment.jpg&amp;diff=1426</id>
		<title>File:Environment.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:Environment.jpg&amp;diff=1426"/>
		<updated>2023-09-10T08:41:25Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Environment&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:User-commands.jpg&amp;diff=1425</id>
		<title>File:User-commands.jpg</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:User-commands.jpg&amp;diff=1425"/>
		<updated>2023-09-10T07:51:23Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;user-commands&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_queue&amp;diff=1402</id>
		<title>Submitting a job to a queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_queue&amp;diff=1402"/>
		<updated>2022-12-14T14:34:03Z</updated>

		<summary type="html">&lt;p&gt;Eyal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Power is a Linux cluster system running CentOS (versions 7.3-8). The cluster consists of a single head node (power9) and more than 400 compute nodes, each with 16 to 96 cores and anywhere from 16GB to 600GB of memory or more. Users belonging to netgroup &amp;#039;power&amp;#039; can log in and run their batch jobs on it.&lt;br /&gt;
&lt;br /&gt;
The Faculty Computer Coordinators can change a user&amp;#039;s netgroup from &amp;#039;general&amp;#039; to &amp;#039;power&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
Users’ jobs are executed on the compute nodes (compute-0-0 – compute-0-500) under the control of a queuing system (PBSPRO). Users can log on to the head node, power, via ssh (where their home directory is mounted from the CC filer, the same as on the other CC servers) and submit their jobs to the batch system.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Power cluster and PBSPRO queueing system&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 &lt;br /&gt;
===PBSPRO main commands===&lt;br /&gt;
&lt;br /&gt;
A good reference can be found at http://www.pbsworks.com/documentation/support/PBSProUserGuide10.4.pdf&lt;br /&gt;
&lt;br /&gt;
Start with one of the below commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh &amp;lt;username&amp;gt;@power9login.tau.ac.il&lt;br /&gt;
ssh powerlogin9.tau.ac.il -l &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a batch job script, for example, file named script that contains the following lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
cd executables&lt;br /&gt;
./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Send the script to be executed in one of the existing queues, for example, to queue ‘public’:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q public script&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The number returned by this command is the job id assigned to the new job:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;6770818.power.tau.ac.il&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
You can see the status of your executing jobs by executing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -u &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This lists all the jobs running or queued for the specified user.&lt;br /&gt;
&lt;br /&gt;
The job status is usually one of the following:&lt;br /&gt;
&lt;br /&gt;
Q - queued (waiting to run)&lt;br /&gt;
R - running&lt;br /&gt;
&lt;br /&gt;
You can see the status of all executing jobs by running:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;qstat&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
To see the current available queues and their cputime and memory limits, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see the status of a specific job, you may run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qstat -f &amp;lt;job number&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Some of the queues are private, accessible to a predefined group of users; others are public, open to all the users of power.&lt;br /&gt;
More detailed information on any queue&amp;#039;s limits may be viewed with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qmgr -c &amp;quot;list queue &amp;lt;queuename&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qmgr -c &amp;quot;list queue power-general&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Default queue limits are enforced unless specified otherwise (up to the maximum values) on the &amp;#039;qsub&amp;#039; command, using the flag ‘-l’ (a lowercase ‘L’), according to the following format:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q &amp;lt;queue&amp;gt; -l &amp;lt;attribute&amp;gt;=&amp;lt;limit&amp;gt;,&amp;lt;attribute&amp;gt;=&amp;lt;limit&amp;gt;,... &amp;lt;script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q power-dvory -lpmem=2000mb,pvmem=3000mb &amp;lt;script&amp;gt;&lt;br /&gt;
qsub -q power-dvory -lmem=14gb,pmem=5gb,vmem=20gb,pvmem=20gb &amp;lt;script&amp;gt;&lt;br /&gt;
qsub -q gpu -lngpus=1 &amp;lt;script&amp;gt;&lt;br /&gt;
qsub -q public -lselect=1:ncpus=4 &amp;lt;script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
While:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;mem&amp;#039;&amp;#039;&amp;#039; - refers to maximum amount of memory to be allocated&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;pmem&amp;#039;&amp;#039;&amp;#039; - refers to maximum amount of memory to be allocated per process&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;vmem&amp;#039;&amp;#039;&amp;#039; - refers to maximum amount of virtual  memory to be allocated&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;pvmem&amp;#039;&amp;#039;&amp;#039; - refers to maximum amount of virtual memory to be allocated per process&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;nodes&amp;#039;&amp;#039;&amp;#039; - number of required nodes (servers)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;ppn&amp;#039;&amp;#039;&amp;#039; - number of required cores (within a node)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;ngpus&amp;#039;&amp;#039;&amp;#039; - number of required gpus (exists only for queue gpu)&lt;br /&gt;
The newer syntax uses the word &amp;#039;select&amp;#039;, as shown in the example below.&lt;br /&gt;
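&lt;br /&gt;
For example (a sketch; the queue name and resource sizes are placeholders), the &amp;#039;select&amp;#039; form requests resources per chunk:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q public -l select=1:ncpus=4:mem=4gb &amp;lt;script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;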
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Queues list&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;gpu&amp;#039;&amp;#039;&amp;#039; - this queue&amp;#039;s purpose is to enable running jobs that require GPU processing&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;public&amp;#039;&amp;#039;&amp;#039; - the queue used by public (non-paying) users; it has the lowest priority&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;power-PI-username&amp;#039;&amp;#039;&amp;#039; - this queue is used by a PI and her/his group. Jobs are directed to one global queue, named power-general&lt;br /&gt;
&lt;br /&gt;
The standard output and standard error files will be written by default at the end of the execution to files in your home directory: script.o#n and script.e#n (where #n is the job number given to your job by the batch queueing system).&lt;br /&gt;
&lt;br /&gt;
To delete a job, use the qdel command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qdel &amp;lt;job number&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
===PBSPRO file parameters===&lt;br /&gt;
The script to be run may contain additional directives for the scheduler, instead of adding parameters to the qsub command line.&lt;br /&gt;
&lt;br /&gt;
Explanations regarding PBS script directives can be found at: https://www.osc.edu/supercomputing/batch-processing-at-osc/pbs-directives-summary&lt;br /&gt;
&lt;br /&gt;
For example, instead of specifying ‘qsub -q public …’, one may add ‘#PBS -q public’ to the script to be executed, as in the script below, named ‘script.sh’, which can be run using the command ‘qsub script.sh’:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l walltime=1:00:00&lt;br /&gt;
#PBS -l select=1:ncpus=4,mem=400mb&lt;br /&gt;
./my_application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Running matlab example===&lt;br /&gt;
In this example there are 3 files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ This matlab function prints a table of computed values in an endless loop (a long-running demo job)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [] = myTable()&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
while 1&lt;br /&gt;
                for j = 1:10&lt;br /&gt;
                                a = sin(10*j);&lt;br /&gt;
                                b = a*cos(10*j);&lt;br /&gt;
                                c = a + b;&lt;br /&gt;
                                d = a - b;&lt;br /&gt;
                                fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
                end&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ This script executes the matlab program. Just submit it with qsub&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#PBS -e /tmp/dvory/matlab/output&lt;br /&gt;
&lt;br /&gt;
#PBS -o /tmp/dvory/matlab/output&lt;br /&gt;
&lt;br /&gt;
#PBS -l mem=5000mb&lt;br /&gt;
&lt;br /&gt;
#PBS -q power-dvory&lt;br /&gt;
&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -r &amp;quot;myTable()&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ Alternatively, one may generate many jobs (here, 100) with this script&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
&lt;br /&gt;
do&lt;br /&gt;
&lt;br /&gt;
        qsub my_table_script.sh&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive session===&lt;br /&gt;
Interactive sessions (command-line mode) are enabled using the flag ‘-I’ (a capital ‘i&amp;#039;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q &amp;lt;queue name&amp;gt; -I&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(without adding a script name)&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Interactive sessions with X window&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
To open an X window (such as a matlab or math window), use the commands below.&lt;br /&gt;
&lt;br /&gt;
Log in to powerlogin9.tau.ac.il with ‘X’ forwarding:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -X -l &amp;lt;username&amp;gt; powerlogin9.tau.ac.il&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then use the qsub command with ‘-X’:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -I -X -q &amp;lt;queue&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(without adding a script name)&lt;br /&gt;
Keep in mind that running matlab via an X window slows down the matlab execution.&lt;br /&gt;
&lt;br /&gt;
Matlab typically needs more memory than the default public queues provide; request at least the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -q power-dvory -lmem=60gb,pmem=60gb,vmem=60gb,pvmem=60gb -I -X&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Parallelism===&lt;br /&gt;
Parallel jobs can be executed on the cluster, using up to 96 cores (ppn) per job. For example, jobs compiled with mpich can be submitted with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -l select=1:ncpus=8 -q public &amp;lt;script-filename&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
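A minimal sketch of the script itself (assuming an MPI implementation such as mpich is available in your environment, e.g. via a module):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Run the MPI program on the 8 requested cores&lt;br /&gt;
mpirun -np 8 ./a.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;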
Multithreaded matlab jobs can be submitted with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
qsub -l select=1:ncpus=8 -q parallel &amp;lt;matlab-script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
‘-l’ refers to a small ‘L’&lt;br /&gt;
&lt;br /&gt;
===Environment modules===&lt;br /&gt;
The Environment Modules package provides for the dynamic modification of a user’s environment via modulefiles.&lt;br /&gt;
&lt;br /&gt;
Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH, MANPATH, etc. Modules are useful in managing different versions of applications. &lt;br /&gt;
Useful commands: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
⇒ lists the available modules on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load &amp;lt;module&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load intel/ifort10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
⇒ loads the appropriate module and enables you to use ifort version 10 without specifying the path to its binaries and libraries&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
⇒ lists the loaded modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module unload intel/ifort10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
⇒ unloads the loaded module&lt;/div&gt;</summary>
		<author><name>Eyal</name></author>
	</entry>
</feed>