<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://hpcguide.tau.ac.il/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Levk</id>
	<title>HPC Guide - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://hpcguide.tau.ac.il/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Levk"/>
	<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Special:Contributions/Levk"/>
	<updated>2026-04-24T13:53:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.5</generator>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1555</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1555"/>
		<updated>2026-02-12T16:46:14Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Welcome to HPC Guide.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
[[Linux basic commands]]&lt;br /&gt;
&lt;br /&gt;
[[Public queues]]&lt;br /&gt;
&lt;br /&gt;
[[New slurm qos usage]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a queue]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a slurm queue]]&lt;br /&gt;
&lt;br /&gt;
[[PBS-To-SLURM]]&lt;br /&gt;
&lt;br /&gt;
[[Creating and using conda environment]]&lt;br /&gt;
&lt;br /&gt;
[[Palo Alto VPN for linux]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold3]]&lt;br /&gt;
&lt;br /&gt;
[[Using GPU]]&lt;br /&gt;
&lt;br /&gt;
[[security installations]]&lt;br /&gt;
&lt;br /&gt;
[[Install matlab on work station per matlab user]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting vscode job on slurm]]&lt;br /&gt;
&lt;br /&gt;
[[Storage and scratch]]&lt;br /&gt;
&lt;br /&gt;
[[Using jupyter on Slurm]]&lt;br /&gt;
&lt;br /&gt;
[[PowerIDE User Guide]]&lt;br /&gt;
&lt;br /&gt;
This HPC tutorial is designed for researchers at TAU who need computational power (computer resources) and wish to explore and use our High Performance Computing (HPC) core facilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The audience may be completely unaware of HPC concepts but must have some basic understanding of computers and computer programming.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is HPC?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
“High Performance Computing” (HPC) is computing on a “Supercomputer”, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a computer at the front line of contemporary processing capacity – particularly speed of calculation and available memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The components of a cluster are usually connected to each other through fast local area networks (“LAN”), with each node (computer used as a server) running its own instance of an operating system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors,&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
high-speed networks, and software for high performance distributed computing.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=PowerIDE_User_Guide&amp;diff=1554</id>
		<title>PowerIDE User Guide</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=PowerIDE_User_Guide&amp;diff=1554"/>
		<updated>2026-02-12T16:45:52Z</updated>

		<summary type="html">&lt;p&gt;Levk: Created page with &amp;quot;= PowerIDE User Guide =  PowerIDE provides interactive access to the HPC cluster through a web browser. You can run Jupyter notebooks and VS Code directly on compute nodes wit...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= PowerIDE User Guide =&lt;br /&gt;
&lt;br /&gt;
PowerIDE provides interactive access to the HPC cluster through a web browser. You can run Jupyter notebooks and VS Code directly on compute nodes without needing SSH access.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Access PowerIDE ===&lt;br /&gt;
&lt;br /&gt;
Open your web browser and navigate to:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;[https://poweride.tau.ac.il/jupyter https://poweride.tau.ac.il/jupyter]&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== 2. Login ===&lt;br /&gt;
&lt;br /&gt;
Log in with your &amp;#039;&amp;#039;&amp;#039;TAU university credentials&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
* Username: Your TAU username&lt;br /&gt;
* Password: Your TAU password&lt;br /&gt;
&lt;br /&gt;
This is the same login you use for email and other university services.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Starting Your Server ==&lt;br /&gt;
&lt;br /&gt;
After logging in, you&amp;#039;ll see a page with a large orange button:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Click &amp;quot;Start My Server&amp;quot;&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
You will then be presented with a &amp;#039;&amp;#039;&amp;#039;Server Options&amp;#039;&amp;#039;&amp;#039; form where you configure your compute resources.&lt;br /&gt;
&lt;br /&gt;
=== How It Works ===&lt;br /&gt;
&lt;br /&gt;
When you start your server, PowerIDE submits a &amp;#039;&amp;#039;&amp;#039;Slurm job&amp;#039;&amp;#039;&amp;#039; to the &amp;#039;&amp;#039;&amp;#039;PowerSlurm cluster&amp;#039;&amp;#039;&amp;#039;. Your Jupyter session runs on a &amp;#039;&amp;#039;&amp;#039;compute node&amp;#039;&amp;#039;&amp;#039;, not on the PowerIDE server itself.&lt;br /&gt;
&lt;br /&gt;
This means:&lt;br /&gt;
* You get dedicated resources (CPUs, memory, GPUs) on a compute node&lt;br /&gt;
* Your job runs through the same Slurm scheduler as other HPC jobs&lt;br /&gt;
* The PowerIDE server is only the web interface - all computation happens on cluster nodes&lt;br /&gt;
* Your session will queue if the cluster is busy (just like regular batch jobs)&lt;br /&gt;
&lt;br /&gt;
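You can see this for yourself: once your session is running, open a terminal (see &amp;quot;Using JupyterLab&amp;quot; below) and list your jobs with the standard Slurm tools. A minimal check - the job name shown for your session may vary:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Your PowerIDE session shows up as a regular Slurm job&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&lt;br /&gt;
# Show job ID, name, state and the node it runs on&lt;br /&gt;
squeue -u $USER -o &amp;quot;%i %j %T %N&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;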
---&lt;br /&gt;
&lt;br /&gt;
== Configuring Resources ==&lt;br /&gt;
&lt;br /&gt;
[[File:Server_Options_Form.png|thumb|500px|Server Options form showing resource selection]]&lt;br /&gt;
&lt;br /&gt;
The form includes the following fields:&lt;br /&gt;
&lt;br /&gt;
=== Partition ===&lt;br /&gt;
&lt;br /&gt;
Select which partition (queue) to run on. The dropdown will &amp;#039;&amp;#039;&amp;#039;only show partitions you have access to&amp;#039;&amp;#039;&amp;#039; based on your Slurm account permissions.&lt;br /&gt;
&lt;br /&gt;
Common partitions:&lt;br /&gt;
* `power-general-shared-pool` - General purpose computing&lt;br /&gt;
* `gpu-general-pool` - GPU-enabled partition (if available)&lt;br /&gt;
* Check with your PI or HPC admin for which partitions you should use&lt;br /&gt;
&lt;br /&gt;
=== QOS (Quality of Service) ===&lt;br /&gt;
&lt;br /&gt;
Select the QOS for your job. This controls priority and resource limits.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Default (owner)&amp;#039;&amp;#039;&amp;#039; - Usually the best choice (uses your group&amp;#039;s default QOS)&lt;br /&gt;
* Other options may be available based on your partition selection&lt;br /&gt;
&lt;br /&gt;
The form will automatically show only valid QOS options for your selected partition.&lt;br /&gt;
&lt;br /&gt;
=== GPUs ===&lt;br /&gt;
&lt;br /&gt;
If you select a GPU-enabled partition, a &amp;#039;&amp;#039;&amp;#039;GPUs&amp;#039;&amp;#039;&amp;#039; field will appear. Specify how many GPUs you need (0 if none).&lt;br /&gt;
&lt;br /&gt;
The maximum number of GPUs is automatically limited based on the partition&amp;#039;s capabilities.&lt;br /&gt;
&lt;br /&gt;
=== Time (D-HH:MM:SS) ===&lt;br /&gt;
&lt;br /&gt;
Specify how long your session should run. Default is `04:00:00` (4 hours).&lt;br /&gt;
&lt;br /&gt;
Formats accepted:&lt;br /&gt;
* `HH:MM:SS` - e.g., `02:30:00` for 2.5 hours&lt;br /&gt;
* `D-HH:MM:SS` - e.g., `1-12:00:00` for 1 day and 12 hours&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Important:&amp;#039;&amp;#039;&amp;#039; Your session will be terminated when time runs out. Save your work regularly!&lt;br /&gt;
&lt;br /&gt;
=== CPUs per task ===&lt;br /&gt;
&lt;br /&gt;
Number of CPU cores for your session. Default is `1`.&lt;br /&gt;
&lt;br /&gt;
Increase this if you&amp;#039;re running multi-threaded code.&lt;br /&gt;
&lt;br /&gt;
=== Memory ===&lt;br /&gt;
&lt;br /&gt;
Amount of RAM to allocate. Default is `1G`.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
* `2G` - 2 gigabytes&lt;br /&gt;
* `8G` - 8 gigabytes&lt;br /&gt;
* `500M` - 500 megabytes&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Tip:&amp;#039;&amp;#039;&amp;#039; Start with less and increase if needed. Over-requesting resources may delay job start.&lt;br /&gt;
&lt;br /&gt;
=== Working directory ===&lt;br /&gt;
&lt;br /&gt;
Default: Your LDAP home directory (e.g., `/a/home/cc/staff/yourusername`)&lt;br /&gt;
&lt;br /&gt;
This is where your Jupyter session starts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Recommendation:&amp;#039;&amp;#039;&amp;#039; If you&amp;#039;re working on a specific project located elsewhere, &amp;#039;&amp;#039;&amp;#039;change this to your project directory&amp;#039;&amp;#039;&amp;#039;. For example:&lt;br /&gt;
* `/a/home/cc/students/yourgroup/project1`&lt;br /&gt;
* `/scratch/yourusername/analysis`&lt;br /&gt;
&lt;br /&gt;
This saves time navigating to your files after launch.&lt;br /&gt;
&lt;br /&gt;
=== Stdout directory ===&lt;br /&gt;
&lt;br /&gt;
Where to write job output logs. Default: your home directory.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Recommendation:&amp;#039;&amp;#039;&amp;#039; Usually fine to leave as default, but you can change it to organize logs better (e.g., `~/logs/` or your project directory).&lt;br /&gt;
&lt;br /&gt;
=== Stderr directory ===&lt;br /&gt;
&lt;br /&gt;
Where to write job error logs. Default: your home directory.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Recommendation:&amp;#039;&amp;#039;&amp;#039; Same as stdout - usually fine to keep default.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Starting Your Session ==&lt;br /&gt;
&lt;br /&gt;
After filling out the form, click the orange &amp;#039;&amp;#039;&amp;#039;Start&amp;#039;&amp;#039;&amp;#039; button at the bottom.&lt;br /&gt;
&lt;br /&gt;
What happens next:&lt;br /&gt;
&lt;br /&gt;
# PowerIDE submits a Slurm job with your requested resources&lt;br /&gt;
# You&amp;#039;ll see a progress page saying &amp;quot;Your server is starting up...&amp;quot;&lt;br /&gt;
# Wait for a compute node to become available (usually 10-60 seconds)&lt;br /&gt;
# Once started, you&amp;#039;ll automatically be redirected to JupyterLab&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; If the cluster is busy, it may take longer. You can close the browser and come back - your session will start when resources are available.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Using JupyterLab ==&lt;br /&gt;
&lt;br /&gt;
Once your server starts, you&amp;#039;ll land in &amp;#039;&amp;#039;&amp;#039;JupyterLab&amp;#039;&amp;#039;&amp;#039; - a web-based development environment.&lt;br /&gt;
&lt;br /&gt;
=== JupyterLab Interface ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Left sidebar:&amp;#039;&amp;#039;&amp;#039; File browser, running kernels, extensions&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Main area:&amp;#039;&amp;#039;&amp;#039; Notebooks, text files, terminals&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Launcher:&amp;#039;&amp;#039;&amp;#039; Click the &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; button to see available tools&lt;br /&gt;
&lt;br /&gt;
=== Common Tasks ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Create a new notebook:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Click the &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; button (or File → New Launcher)&lt;br /&gt;
# Click on a kernel (e.g., &amp;quot;Python 3&amp;quot;)&lt;br /&gt;
# Start coding!&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Open a terminal:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Click the &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; button&lt;br /&gt;
# Click &amp;quot;Terminal&amp;quot; in the launcher&lt;br /&gt;
# You now have a bash shell on the compute node&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Upload files:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Drag and drop files into the file browser, OR&lt;br /&gt;
* Click the upload button (↑ icon) in the file browser&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Download files:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Right-click file → Download&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Using VS Code ==&lt;br /&gt;
&lt;br /&gt;
PowerIDE includes &amp;#039;&amp;#039;&amp;#039;VS Code&amp;#039;&amp;#039;&amp;#039; (Visual Studio Code) running in your browser!&lt;br /&gt;
&lt;br /&gt;
=== Starting VS Code ===&lt;br /&gt;
&lt;br /&gt;
# From JupyterLab, click the &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; button to open the launcher&lt;br /&gt;
# Look for the &amp;#039;&amp;#039;&amp;#039;VS Code&amp;#039;&amp;#039;&amp;#039; icon in the launcher&lt;br /&gt;
# Click it - VS Code will open in a new tab/window&lt;br /&gt;
&lt;br /&gt;
You now have a full VS Code environment running on the compute node with all your files accessible.&lt;br /&gt;
&lt;br /&gt;
=== VS Code Features ===&lt;br /&gt;
&lt;br /&gt;
* Full code editor with syntax highlighting&lt;br /&gt;
* Integrated terminal&lt;br /&gt;
* Extensions support&lt;br /&gt;
* Git integration&lt;br /&gt;
* File explorer&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Tip:&amp;#039;&amp;#039;&amp;#039; VS Code runs in the same job as JupyterLab, so it has access to all the same resources (CPUs, memory, GPUs) you requested.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Python Environments ==&lt;br /&gt;
&lt;br /&gt;
=== What is a Python Kernel? ===&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;kernel&amp;#039;&amp;#039;&amp;#039; is simply a Python interpreter that JupyterLab uses to run your code. When you create a notebook and select &amp;quot;Python 3.12 (Base)&amp;quot;, you&amp;#039;re choosing which Python environment to use.&lt;br /&gt;
&lt;br /&gt;
Think of it like choosing which Python installation to run: `/usr/bin/python3` vs `/path/to/my-env/bin/python`&lt;br /&gt;
&lt;br /&gt;
=== Default Kernel ===&lt;br /&gt;
&lt;br /&gt;
PowerIDE provides one default kernel:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Python 3.12 (Base)&amp;#039;&amp;#039;&amp;#039; - Standard Python with JupyterLab and common packages&lt;br /&gt;
&lt;br /&gt;
=== Creating Your Own Kernels ===&lt;br /&gt;
&lt;br /&gt;
You can register your own conda/mamba environments as kernels! This lets you:&lt;br /&gt;
* Use different Python versions (3.9, 3.10, 3.11, etc.)&lt;br /&gt;
* Install custom packages without affecting others&lt;br /&gt;
* Have multiple project-specific environments&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Steps to register your own environment:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
# Create your conda/mamba environment (wherever you normally keep them)&lt;br /&gt;
# Activate it and make sure `ipykernel` is installed&lt;br /&gt;
# Register it as a kernel&lt;br /&gt;
# Refresh your browser - it will appear in the JupyterLab launcher!&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# From a JupyterLab terminal (or any cluster node):&lt;br /&gt;
module load mamba/mamba-2.1.1&lt;br /&gt;
mamba create -n my-project python=3.11 pandas matplotlib&lt;br /&gt;
mamba activate my-project&lt;br /&gt;
mamba install ipykernel&lt;br /&gt;
&lt;br /&gt;
# Register as kernel (--user means only you will see it)&lt;br /&gt;
python -m ipykernel install --user --name my-project --display-name &amp;quot;My Project (Python 3.11)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Done! Refresh your browser and look for &amp;quot;My Project (Python 3.11)&amp;quot; in the launcher&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Need help?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
If you need assistance:&lt;br /&gt;
* Installing `ipykernel` in your environment&lt;br /&gt;
* Registering your kernel&lt;br /&gt;
* Troubleshooting kernel issues&lt;br /&gt;
&lt;br /&gt;
Contact us at &amp;#039;&amp;#039;&amp;#039;hpc@tauex.tau.ac.il&amp;#039;&amp;#039;&amp;#039; - we&amp;#039;re happy to help!&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Important notes:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Kernels are just small config files (~1 KB) - they don&amp;#039;t use your disk quota&lt;br /&gt;
* Each user only sees their own kernels (plus system defaults)&lt;br /&gt;
* You can have as many kernels as you want&lt;br /&gt;
* Remove a kernel: `jupyter kernelspec uninstall kernel-name`&lt;br /&gt;
&lt;br /&gt;
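To check which kernels are currently registered for you (for example, before removing one), list them from a terminal using the standard Jupyter CLI:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List all kernels visible to you (system defaults plus your own)&lt;br /&gt;
jupyter kernelspec list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;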
---&lt;br /&gt;
&lt;br /&gt;
== Stopping Your Server ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Important:&amp;#039;&amp;#039;&amp;#039; Always stop your server when you&amp;#039;re done to free up resources for others!&lt;br /&gt;
&lt;br /&gt;
There are two ways to stop:&lt;br /&gt;
&lt;br /&gt;
=== Method 1: From JupyterLab ===&lt;br /&gt;
&lt;br /&gt;
# Go to &amp;#039;&amp;#039;&amp;#039;File → Hub Control Panel&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Click the red &amp;#039;&amp;#039;&amp;#039;Stop My Server&amp;#039;&amp;#039;&amp;#039; button&lt;br /&gt;
&lt;br /&gt;
=== Method 2: From PowerIDE home ===&lt;br /&gt;
&lt;br /&gt;
# Navigate to [https://poweride.tau.ac.il/jupyter/hub/home https://poweride.tau.ac.il/jupyter/hub/home]&lt;br /&gt;
# Click the red &amp;#039;&amp;#039;&amp;#039;Stop My Server&amp;#039;&amp;#039;&amp;#039; button&lt;br /&gt;
&lt;br /&gt;
Your job will be terminated and the compute node will be freed.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
&lt;br /&gt;
=== Resource Allocation ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Start small:&amp;#039;&amp;#039;&amp;#039; Request fewer resources initially. You can always restart with more.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Be realistic:&amp;#039;&amp;#039;&amp;#039; Only request what you actually need&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Time limits:&amp;#039;&amp;#039;&amp;#039; Set a reasonable time limit. You can always restart if you need more time.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;GPU usage:&amp;#039;&amp;#039;&amp;#039; Only request GPUs if your code actually uses them&lt;br /&gt;
&lt;br /&gt;
=== File Management ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Working directory:&amp;#039;&amp;#039;&amp;#039; Set it to your project folder to save navigation time&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Save frequently:&amp;#039;&amp;#039;&amp;#039; Your session will end when time runs out&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Large files:&amp;#039;&amp;#039;&amp;#039; Store large datasets in scratch space, not your home directory&lt;br /&gt;
&lt;br /&gt;
=== Data and Code ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Home directory:&amp;#039;&amp;#039;&amp;#039; Your LDAP home directory - personal files, small projects&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Scratch space:&amp;#039;&amp;#039;&amp;#039; Large temporary datasets&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Project directories:&amp;#039;&amp;#039;&amp;#039; Shared group work (varies by group)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Tip:&amp;#039;&amp;#039;&amp;#039; Use Git to version control your code, not for large data files.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== My server won&amp;#039;t start ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Possible reasons:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Cluster is full:&amp;#039;&amp;#039;&amp;#039; Wait a few minutes and try again&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Invalid partition:&amp;#039;&amp;#039;&amp;#039; Make sure you selected a partition you have access to&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Too many resources:&amp;#039;&amp;#039;&amp;#039; Try requesting fewer CPUs/memory&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;No QOS access:&amp;#039;&amp;#039;&amp;#039; You may not have any QOS configured for your account&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What to do:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Wait 2-3 minutes&lt;br /&gt;
# If still pending, go to Hub Control Panel and click &amp;quot;Stop My Server&amp;quot;&lt;br /&gt;
# Try again with fewer resources or different partition&lt;br /&gt;
# If QOS dropdown is empty, contact HPC support - you may need Slurm associations configured&lt;br /&gt;
&lt;br /&gt;
=== I see &amp;quot;404: Not Found&amp;quot; ===&lt;br /&gt;
&lt;br /&gt;
This usually means your job didn&amp;#039;t start successfully.&lt;br /&gt;
&lt;br /&gt;
Check:&lt;br /&gt;
# Go to your home directory on a login node (or check via terminal)&lt;br /&gt;
# Look for files named `jupyterhub-JOBID.err` (where JOBID is a number)&lt;br /&gt;
# Check the file for error messages&lt;br /&gt;
# Contact HPC support if you can&amp;#039;t resolve it&lt;br /&gt;
&lt;br /&gt;
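For example, from a terminal you can inspect the most recent error log like this (the JOBID in the filename will vary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Find the newest jupyterhub error log in your home directory&lt;br /&gt;
ls -t ~/jupyterhub-*.err | head -n 1&lt;br /&gt;
&lt;br /&gt;
# Show its last lines - error messages usually appear at the end&lt;br /&gt;
tail -n 50 &amp;quot;$(ls -t ~/jupyterhub-*.err | head -n 1)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;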
=== VS Code icon doesn&amp;#039;t appear ===&lt;br /&gt;
&lt;br /&gt;
This is rare - if it happens:&lt;br /&gt;
# Try refreshing your browser&lt;br /&gt;
# If still missing, contact HPC support&lt;br /&gt;
&lt;br /&gt;
=== My session was killed ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Common reasons:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Time limit reached:&amp;#039;&amp;#039;&amp;#039; Your session ran for the full time you requested&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Out of memory:&amp;#039;&amp;#039;&amp;#039; Your code used more RAM than allocated&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Node failure:&amp;#039;&amp;#039;&amp;#039; Rare, but compute nodes can crash&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Save your work frequently&lt;br /&gt;
* Request more time/memory next time&lt;br /&gt;
* Check `jupyterhub-JOBID.err` file for clues&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Getting Help ==&lt;br /&gt;
&lt;br /&gt;
=== Support ===&lt;br /&gt;
&lt;br /&gt;
For technical issues:&lt;br /&gt;
&lt;br /&gt;
* Email: `hpc@tauex.tau.ac.il`&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;When asking for help, include:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Your username&lt;br /&gt;
* What you were trying to do&lt;br /&gt;
* Error messages (copy/paste or screenshot)&lt;br /&gt;
* Job ID if available (from error file name)&lt;br /&gt;
&lt;br /&gt;
Email: `hpc@tauex.tau.ac.il`&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Can I run multiple servers at once?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: No, you can only have one server running at a time per user.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: How long can my session run?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: Varies by partition - most partitions allow up to 7 days maximum.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Can I install Python packages?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: Yes! Create your own conda/mamba environment, install whatever packages you need, and register it as a kernel (see &amp;quot;Python Environments&amp;quot; section). You have full control over your own environments.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Why is my QOS dropdown empty?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: This means you don&amp;#039;t have any QOS associations configured in Slurm. Contact HPC support - they need to add you to a Slurm account with appropriate QOS access.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Do I need to use the terminal for everything?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: No! JupyterLab notebooks are great for interactive work. Use the terminal only when needed.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: What happens to my files when I stop my server?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: Your files are safe! Only the running session is terminated. All files in your home directory and project directories remain intact.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Can I share my session with a colleague?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: No, sessions are personal. However, you can share notebooks and code files through the filesystem or Git.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: Is PowerIDE the same as the login nodes?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: No! PowerIDE runs on compute nodes through the Slurm scheduler, giving you dedicated resources. Login nodes are shared by everyone.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Q: How do I get access to different partitions?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
A: Partition access is controlled by Slurm account associations. Contact your PI or HPC admin to request access to specific partitions.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
== Quick Reference Card ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Action !! How-To&lt;br /&gt;
|-&lt;br /&gt;
| Access PowerIDE || [https://poweride.tau.ac.il/jupyter https://poweride.tau.ac.il/jupyter]&lt;br /&gt;
|-&lt;br /&gt;
| Start server || Click &amp;quot;Start My Server&amp;quot; → Fill form → Click &amp;quot;Start&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| Open notebook || Click &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; → Choose Python kernel&lt;br /&gt;
|-&lt;br /&gt;
| Open terminal || Click &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; → Click &amp;quot;Terminal&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| Start VS Code || Click &amp;#039;&amp;#039;&amp;#039;+&amp;#039;&amp;#039;&amp;#039; → Click &amp;quot;VS Code&amp;quot; icon&lt;br /&gt;
|-&lt;br /&gt;
| Stop server || File → Hub Control Panel → &amp;quot;Stop My Server&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| Upload files || Drag &amp;amp; drop into file browser&lt;br /&gt;
|-&lt;br /&gt;
| Download files || Right-click file → Download&lt;br /&gt;
|-&lt;br /&gt;
| Request custom environment || Email hpc@tauex.tau.ac.il with requirements&lt;br /&gt;
|-&lt;br /&gt;
| Get help || Email hpc@tauex.tau.ac.il&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Happy computing! 🚀&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1546</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1546"/>
		<updated>2025-12-03T15:02:18Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Welcome to HPC Guide.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
[[Linux basic commands]]&lt;br /&gt;
&lt;br /&gt;
[[Public queues]]&lt;br /&gt;
&lt;br /&gt;
[[New slurm qos usage]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a queue]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a slurm queue]]&lt;br /&gt;
&lt;br /&gt;
[[PBS-To-SLURM]]&lt;br /&gt;
&lt;br /&gt;
[[Creating and using conda environment]]&lt;br /&gt;
&lt;br /&gt;
[[Palo Alto VPN for linux]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold3]]&lt;br /&gt;
&lt;br /&gt;
[[Using GPU]]&lt;br /&gt;
&lt;br /&gt;
[[security installations]]&lt;br /&gt;
&lt;br /&gt;
[[Install matlab on work station per matlab user]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting vscode job on slurm]]&lt;br /&gt;
&lt;br /&gt;
[[Storage and scratch]]&lt;br /&gt;
&lt;br /&gt;
[[Using jupyter on Slurm]]&lt;br /&gt;
&lt;br /&gt;
This HPC tutorial is designed for researchers at TAU who need computational power (computer resources) and wish to explore and use our High Performance Computing (HPC) core facilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The audience may be completely unaware of HPC concepts but must have some basic understanding of computers and computer programming.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is HPC?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
“High Performance Computing” (HPC) is computing on a “Supercomputer”, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a computer at the front line of contemporary processing capacity – particularly speed of calculation and available memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The components of a cluster are usually connected to each other through fast local area networks (“LAN”), with each node (computer used as a server) running its own instance of an operating system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors,&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
high-speed networks, and software for high performance distributed computing.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1539</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1539"/>
		<updated>2025-10-29T08:56:43Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University has introduced a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users first have to check/fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS, and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
scan it with the Google Authenticator app on your mobile device (using the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code from Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
To download and install the VPN client, go to one of the following links in your browser:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.1.1-c4.tgz GlobalProtect-6.1.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.2.1-c15.tgz GlobalProtect-6.2.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.2.9-c4.tgz GlobalProtect-6.2.9]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.3.3-c22.tgz GlobalProtect-6.3.3]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
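For example, to extract the 6.2.9 package (adjust the filename to the version you downloaded):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;tar -xzvf PanGPLinux-6.2.9-c4.tgz&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;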
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dpkg -i GlobalProtect_UI_deb-6.0.1.1-6.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;yum localinstall GlobalProtect_UI_rpm-6.0.1.1-6.rpm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other OSes are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
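The Linux download also contains a command-line variant of the client. If the CLI is available in the package you installed, you can connect from a terminal instead of the UI (a sketch - the exact command may differ between versions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;globalprotect connect --portal vpn.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;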
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On recent Ubuntu versions (e.g., Ubuntu 22.04), after installing and configuring the GlobalProtect VPN you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (e.g., &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; first)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1523</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1523"/>
		<updated>2025-08-10T05:43:29Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University has introduced a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users first have to check/fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS, and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
scan it with the Google Authenticator app on your mobile device (using the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code from Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
To download and install the VPN client, go to one of the following links in your browser:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.1.1-c4.tgz GlobalProtect-6.1.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.2.1-c15.tgz GlobalProtect-6.2.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.2.9-c4.tgz GlobalProtect-6.2.9]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dpkg -i GlobalProtect_UI_deb-6.0.1.1-6.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;yum localinstall GlobalProtect_UI_rpm-6.0.1.1-6.rpm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other OSes are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On recent Ubuntu versions (e.g., Ubuntu 22.04), after installing and configuring the GlobalProtect VPN you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (e.g., &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; first)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1521</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1521"/>
		<updated>2025-04-21T08:56:53Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Submitting Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through the following login address:&lt;br /&gt;
&lt;br /&gt;
* slurmlogin.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
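Once logged in, the day-to-day commands for these tasks look like this (a quick sketch using standard Slurm tools; &amp;lt;code&amp;gt;my_job.sh&amp;lt;/code&amp;gt; and the job ID 12345 are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit a batch job script&lt;br /&gt;
sbatch my_job.sh&lt;br /&gt;
&lt;br /&gt;
# Check the status of your own jobs&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&lt;br /&gt;
# Cancel a job by its job ID&lt;br /&gt;
scancel 12345&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;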
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@slurmlogin.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Your connection will be automatically routed to one of the login nodes:&lt;br /&gt;
powerslurm-login, powerslurm-login2, or powerslurm-login3.&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@slurmlogin.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
The Environment Modules system allows users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
&lt;br /&gt;
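If &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; is not enough, you can also query your Slurm associations directly with the standard &amp;lt;code&amp;gt;sacctmgr&amp;lt;/code&amp;gt; tool (a sketch - the columns shown depend on how accounts are configured):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show the accounts, partitions and QOS associated with your user&lt;br /&gt;
sacctmgr show associations user=$USER format=Account,Partition,QOS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;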
== Submitting Jobs ==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit pre_process.bash to the power-general partition for 10 minutes:&lt;br /&gt;
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
&lt;br /&gt;
# The same, but with 1 GPU on the gpu-general partition:&lt;br /&gt;
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting Multiple Jobs ===&lt;br /&gt;
&lt;br /&gt;
If you need to submit many similar jobs (hundreds or more), you should use a &amp;#039;&amp;#039;&amp;#039;Slurm job array&amp;#039;&amp;#039;&amp;#039;. Submitting each job individually using separate &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands places a heavy load on the scheduler, slowing down job processing across the cluster. Job arrays allow you to bundle many related jobs together as a single submission. This is more efficient and easier to manage.&lt;br /&gt;
&lt;br /&gt;
Each task in the array runs independently like a separate job, but the array is submitted as a single job ID for scheduling and tracking purposes.&lt;br /&gt;
You can customize the behavior of each task using the environment variable &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Script Example: Job Array ====&lt;br /&gt;
&lt;br /&gt;
This script submits a job array with 100 tasks, each processing a different input file. The array reduces scheduler load and simplifies job tracking.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=array_job            # Job name&lt;br /&gt;
#SBATCH --account=power-general-users   # Account name&lt;br /&gt;
#SBATCH --partition=power-general       # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                 # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                      # Number of tasks per array job&lt;br /&gt;
#SBATCH --nodes=1                       # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1               # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G                # Memory per CPU&lt;br /&gt;
#SBATCH --array=1-100                   # Array range: 100 tasks&lt;br /&gt;
#SBATCH --output=array_job_%A_%a.out    # Output file: Job ID and array task ID&lt;br /&gt;
#SBATCH --error=array_job_%A_%a.err     # Error file: Job ID and array task ID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting SLURM array task&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Array Task ID: $SLURM_ARRAY_TASK_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on node(s): $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# You can use $SLURM_ARRAY_TASK_ID to customize behavior per task&lt;br /&gt;
# ./my_program input_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
echo &amp;quot;Task completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example:&lt;br /&gt;
* The job array consists of 100 tasks.&lt;br /&gt;
* Each task runs the same script but with a different input file.&lt;br /&gt;
* You access the task ID using the environment variable &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The output and error logs are separated per task using &amp;lt;code&amp;gt;%A&amp;lt;/code&amp;gt; (job ID) and &amp;lt;code&amp;gt;%a&amp;lt;/code&amp;gt; (array task ID).&lt;br /&gt;
&lt;br /&gt;
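To submit the array and follow its tasks (the filename &amp;lt;code&amp;gt;array_job.sh&amp;lt;/code&amp;gt; and the job ID 12345 are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit the whole array as a single job&lt;br /&gt;
sbatch array_job.sh&lt;br /&gt;
&lt;br /&gt;
# Array tasks appear as JOBID_TASKID (e.g., 12345_1, 12345_2, ...)&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&lt;br /&gt;
# Cancel a single task without touching the rest of the array&lt;br /&gt;
scancel 12345_7&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;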
==== Script Example: Job Array with different parameters per task ====&lt;br /&gt;
&lt;br /&gt;
This script submits a job array with 3 tasks. Each task runs the same program with a different input file: &amp;lt;code&amp;gt;data1.txt&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;data2.txt&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;data3.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=array_job            # Job name&lt;br /&gt;
#SBATCH --account=power-general-users   # Account name&lt;br /&gt;
#SBATCH --partition=power-general       # Partition name&lt;br /&gt;
#SBATCH --time=01:00:00                 # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                      # Number of tasks per array job&lt;br /&gt;
#SBATCH --nodes=1                       # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1               # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=2G                # Memory per CPU&lt;br /&gt;
#SBATCH --array=1-3                     # Run 3 tasks with IDs 1, 2, 3&lt;br /&gt;
#SBATCH --output=array_%A_%a.out        # Output file: Job ID and task ID&lt;br /&gt;
#SBATCH --error=array_%A_%a.err         # Error file: Job ID and task ID&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting SLURM array task&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Array Task ID: $SLURM_ARRAY_TASK_ID&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Each task runs the program with a different input file&lt;br /&gt;
./my_program data${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Task completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
===Writing Single SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                     # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To ask for &amp;#039;&amp;#039;x&amp;#039;&amp;#039; cores interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=x  --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, for now you also need to set the following Slurm parameters inside the script, or within the interactive job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export SLURM_TASKS_PER_NODE=48&lt;br /&gt;
export SLURM_CPUS_ON_NODE=48&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
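&lt;br /&gt;
For example, a concrete request for 8 cores (the number is illustrative) and the matching variables would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=8 --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
export SLURM_TASKS_PER_NODE=8&lt;br /&gt;
export SLURM_CPUS_ON_NODE=8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;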
&lt;br /&gt;
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                      # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
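A quick way to confirm the allocated GPU inside such a job is to print it (this assumes &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt; is available on the GPU nodes):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nvidia-smi&lt;br /&gt;
echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;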
To exclude specific nodes, one may add the following:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
#SBATCH --exclude=compute-0-[100-103],compute-0-67&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Use one of the following (the two options are mutually exclusive):&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
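&lt;br /&gt;
As a worked example of how the two forms relate: with &amp;lt;code&amp;gt;--cpus-per-task=4&amp;lt;/code&amp;gt;, requesting &amp;lt;code&amp;gt;--mem-per-cpu=2G&amp;lt;/code&amp;gt; allocates 4 × 2G = 8G for the job, which is equivalent to &amp;lt;code&amp;gt;--mem=8G&amp;lt;/code&amp;gt;.&lt;br /&gt;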
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
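&lt;br /&gt;
Resource flags combine with interactive sessions as well; for example (values illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem=8G --time=02:00:00 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;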
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running a MATLAB Example==&lt;br /&gt;
This example uses three files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ This MATLAB script computes and prints a table of trigonometric values&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
for j = 1:10&lt;br /&gt;
    a = sin(10*j);&lt;br /&gt;
    b = a*cos(10*j);&lt;br /&gt;
    c = a + b;&lt;br /&gt;
    d = a - b;&lt;br /&gt;
    fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ This script runs the MATLAB program; submit it with sbatch&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem=50M&lt;br /&gt;
#SBATCH --partition=power-general&lt;br /&gt;
#SBATCH -A power-general-users&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -nosplash -nodesktop -r &amp;quot;run(&amp;#039;myTable.m&amp;#039;); exit;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ This script submits the job 100 times in a loop&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
do&lt;br /&gt;
        sbatch my_table_script.sh&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the following command (after making the script executable with &amp;lt;code&amp;gt;chmod +x run_in_loop.sh&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
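&lt;br /&gt;
A loop like this submits 100 independent jobs; the same effect is often achieved more cleanly with a job array (see the array example above), which SLURM can schedule, monitor, and cancel as one unit:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-100&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;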
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guides:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
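&lt;br /&gt;
A few more variants that are often useful (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View only your own jobs:&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
#Cancel a job:&lt;br /&gt;
scancel &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;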
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Job failed, and when running &amp;lt;code&amp;gt;scontrol show job job_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sacct -j job_id -o JobID,JobName,State%20&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help estimating how much RAM your job may need.&lt;br /&gt;
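To compare what a job actually used against what it requested, &amp;lt;code&amp;gt;sacct&amp;lt;/code&amp;gt; can report both once the job has ended:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,State,ReqMem,MaxRSS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;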
&lt;br /&gt;
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--dependency&amp;lt;/code&amp;gt; flag (short form &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;) to set job dependencies.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
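&lt;br /&gt;
To chain jobs without hard-coding the job ID, capture it with &amp;lt;code&amp;gt;--parsable&amp;lt;/code&amp;gt; (script names are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
jobid=$(sbatch --parsable pre_process.bash)&lt;br /&gt;
sbatch --dependency=afterok:$jobid do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;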
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like `free -m`, `top`, or `htop`.&lt;br /&gt;
* Use Profiling Tools: Tools like `valgrind`, `gprof`, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
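# Note: free -m reports memory for the entire node, not just this job,&lt;br /&gt;
# so on shared nodes treat these numbers as a rough indication only.&lt;br /&gt;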
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use `sacct` to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=MediaWiki:Common.js&amp;diff=1515</id>
		<title>MediaWiki:Common.js</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=MediaWiki:Common.js&amp;diff=1515"/>
		<updated>2025-03-26T10:23:05Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* Any JavaScript here will be loaded for all users on every page load. */&lt;br /&gt;
// Set global variables before loading the script&lt;br /&gt;
window.nl_lang = &amp;quot;en&amp;quot;;&lt;br /&gt;
window.nl_pos = &amp;quot;bl&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
// Load the Nagishli widget script&lt;br /&gt;
var nagishScript = document.createElement(&amp;#039;script&amp;#039;);&lt;br /&gt;
nagishScript.src = &amp;quot;https://hpcguide.tau.ac.il/nagish/nagishli.js?v=2.3&amp;quot;;&lt;br /&gt;
nagishScript.charset = &amp;quot;utf-8&amp;quot;;&lt;br /&gt;
nagishScript.defer = true;&lt;br /&gt;
document.head.appendChild(nagishScript);&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=MediaWiki:Common.js&amp;diff=1514</id>
		<title>MediaWiki:Common.js</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=MediaWiki:Common.js&amp;diff=1514"/>
		<updated>2025-03-26T10:21:51Z</updated>

		<summary type="html">&lt;p&gt;Levk: Created page with &amp;quot;/* Any JavaScript here will be loaded for all users on every page load. */ &amp;lt;script&amp;gt; nl_lang = &amp;quot;en&amp;quot;; nl_pos = &amp;quot;bl&amp;quot;; &amp;lt;/script&amp;gt; &amp;lt;script src=&amp;quot;https://hpcguide.tau.ac.il/nagish/nag...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;/* Any JavaScript here will be loaded for all users on every page load. */&lt;br /&gt;
&amp;lt;script&amp;gt;&lt;br /&gt;
nl_lang = &amp;quot;en&amp;quot;;&lt;br /&gt;
nl_pos = &amp;quot;bl&amp;quot;;&lt;br /&gt;
&amp;lt;/script&amp;gt;&lt;br /&gt;
&amp;lt;script src=&amp;quot;https://hpcguide.tau.ac.il/nagish/nagishli.js?v=2.3&amp;quot; charset=&amp;quot;utf-8&amp;quot; defer&amp;gt;&amp;lt;/script&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1513</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1513"/>
		<updated>2025-03-20T12:15:02Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* AlphaFold */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* powerslurm-login.tau.ac.il&lt;br /&gt;
* powerslurm-login2.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to connect to the second login node, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login2.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules on the cluster allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
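&lt;br /&gt;
For example, a job script might start from a clean environment and load exactly what it needs (reusing the GCC module shown above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
gcc --version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;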
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
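&lt;br /&gt;
If that helper command is unavailable, the account/partition associations can usually be queried directly from SLURM accounting (assuming the accounting database is configured):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacctmgr show associations user=$USER format=Account,Partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;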
&lt;br /&gt;
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit pre_process.bash to the power-general partition for 10 minutes:&lt;br /&gt;
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
&lt;br /&gt;
# With 1 GPU:&lt;br /&gt;
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                     # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request x cores interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=x  --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now, you also need to set the following SLURM environment variables inside the script, or within the interactive job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export SLURM_TASKS_PER_NODE=48&lt;br /&gt;
export SLURM_CPUS_ON_NODE=48&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To define a job array, add:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-300&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
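&lt;br /&gt;
Inside the script, each array task can then select its own input via &amp;lt;code&amp;gt;$SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt; (program and file names are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./my_program data${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;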
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                      # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To exclude specific nodes, one may add the following:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
#SBATCH --exclude=compute-0-[100-103],compute-0-67&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Use one of the following (the two options are mutually exclusive):&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running a MATLAB Example==&lt;br /&gt;
This example uses three files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ This MATLAB script computes and prints a table of trigonometric values&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
for j = 1:10&lt;br /&gt;
    a = sin(10*j);&lt;br /&gt;
    b = a*cos(10*j);&lt;br /&gt;
    c = a + b;&lt;br /&gt;
    d = a - b;&lt;br /&gt;
    fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ This script runs the MATLAB program; submit it with sbatch&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem=50M&lt;br /&gt;
#SBATCH --partition=power-general&lt;br /&gt;
#SBATCH -A power-general-users&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -nosplash -nodesktop -r &amp;quot;run(&amp;#039;myTable.m&amp;#039;); exit;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ This script submits the job 100 times in a loop&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
do&lt;br /&gt;
        sbatch my_table_script.sh&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the following command (after making the script executable with &amp;lt;code&amp;gt;chmod +x run_in_loop.sh&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guides:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Job failed, and when running &amp;lt;code&amp;gt;scontrol show job job_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sacct -j job_id -o JobID,JobName,State%20&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help estimating how much RAM your job may need.&lt;br /&gt;
&lt;br /&gt;
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--dependency&amp;lt;/code&amp;gt; flag (short form &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;) to set job dependencies.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like `free -m`, `top`, or `htop`.&lt;br /&gt;
* Use Profiling Tools: Tools like `valgrind`, `gprof`, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use `sacct` to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1512</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1512"/>
		<updated>2025-03-20T12:14:48Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* AlphaFold */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* powerslurm-login.tau.ac.il&lt;br /&gt;
* powerslurm-login2.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to connect to the second login node, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login2.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules on the cluster allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
&lt;br /&gt;
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit pre_process.bash to the power-general partition for 10 minutes:&lt;br /&gt;
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
&lt;br /&gt;
# With 1 GPU:&lt;br /&gt;
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                     # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To request x cores interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=x  --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now, you also need to set the following SLURM environment variables inside the script, or within the interactive job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export SLURM_TASKS_PER_NODE=48&lt;br /&gt;
export SLURM_CPUS_ON_NODE=48&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To define a job array, add:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-300&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                      # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To exclude specific nodes, one may add the following:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
#SBATCH --exclude=compute-0-[100-103],compute-0-67&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Use one of the following (the two options are mutually exclusive):&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running a MATLAB Example==&lt;br /&gt;
This example uses three files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ This MATLAB script computes and prints a table of trigonometric values&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
for j = 1:10&lt;br /&gt;
    a = sin(10*j);&lt;br /&gt;
    b = a*cos(10*j);&lt;br /&gt;
    c = a + b;&lt;br /&gt;
    d = a - b;&lt;br /&gt;
    fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ This script runs the MATLAB program; submit it with sbatch&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem=50M&lt;br /&gt;
#SBATCH --partition=power-general&lt;br /&gt;
#SBATCH -A power-general-users&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -nosplash -nodesktop -r &amp;quot;run(&amp;#039;myTable.m&amp;#039;); exit;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ This script submits the job 100 times in a loop&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
do&lt;br /&gt;
        sbatch my_table_script.sh&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the following command (after making the script executable with &amp;lt;code&amp;gt;chmod +x run_in_loop.sh&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guides:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Job failed, and when running &amp;lt;code&amp;gt;scontrol show job job_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sacct -j job_id -o JobID,JobName,State%20&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help estimating how much RAM your job may need.&lt;br /&gt;
&lt;br /&gt;
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--dependency&amp;lt;/code&amp;gt; flag (short form &amp;lt;code&amp;gt;-d&amp;lt;/code&amp;gt;) to set job dependencies.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like &amp;lt;code&amp;gt;free -m&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use Profiling Tools: Tools like &amp;lt;code&amp;gt;valgrind&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;gprof&amp;lt;/code&amp;gt;, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
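&lt;br /&gt;
Note that &amp;lt;code&amp;gt;free -m&amp;lt;/code&amp;gt; reports memory for the whole node, not just your job. To capture a single process&amp;#039;s peak memory, one option (a sketch, assuming GNU time is installed at /usr/bin/time, as on most Linux systems) is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# GNU time -v prints detailed statistics to stderr, including peak memory&lt;br /&gt;
/usr/bin/time -v ./your_application 2&amp;gt; time_report.txt&lt;br /&gt;
# &amp;quot;Maximum resident set size&amp;quot; (in kilobytes) is the peak RAM the process used&lt;br /&gt;
grep &amp;quot;Maximum resident set size&amp;quot; time_report.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;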
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use &amp;lt;code&amp;gt;sacct&amp;lt;/code&amp;gt; to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1511</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1511"/>
		<updated>2025-03-20T12:14:35Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* AlphaFold */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* powerslurm-login.tau.ac.il&lt;br /&gt;
* powerslurm-login2.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to connect to the second login node, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login2.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
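&lt;br /&gt;
To avoid retyping these options, you can add a host alias to your SSH client configuration (a convenience sketch; the alias name &amp;quot;powerslurm&amp;quot; is arbitrary):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Add to ~/.ssh/config; afterwards a plain &amp;quot;ssh powerslurm&amp;quot; is enough&lt;br /&gt;
Host powerslurm&lt;br /&gt;
    HostName powerslurm-login.tau.ac.il&lt;br /&gt;
    User your_username&lt;br /&gt;
    IdentityFile /path/to/your/private_key&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;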
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
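&lt;br /&gt;
For reproducible batch jobs, a common pattern (a sketch reusing the module name from the examples above) is to start every job script from a clean environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# At the top of a job script, after the #SBATCH lines:&lt;br /&gt;
module purge                  # drop whatever was loaded at submit time&lt;br /&gt;
module load gcc/gcc-12.1.0    # load exactly the versions the job needs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;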
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
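&lt;br /&gt;
If you also need the account name to pass with &amp;lt;code&amp;gt;-A&amp;lt;/code&amp;gt;, the standard Slurm accounting query (a sketch, assuming accounting is enabled on this cluster) is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List the accounts and partitions your user is associated with&lt;br /&gt;
sacctmgr show associations user=$USER format=Account%25,Partition%20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;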
&lt;br /&gt;
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit pre_process.bash to the power-general partition for 10 minutes:&lt;br /&gt;
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
&lt;br /&gt;
# With 1 GPU on the gpu-general partition:&lt;br /&gt;
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                     # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
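&lt;br /&gt;
To submit the script above and watch it in the queue (assuming it is saved as my_job.sh):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch my_job.sh     # prints &amp;quot;Submitted batch job &amp;lt;job_id&amp;gt;&amp;quot;&lt;br /&gt;
squeue -u $USER      # show only your own jobs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;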
&lt;br /&gt;
To request x cores interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=x  --partition=power-general --nodes=1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now, you may also need to set the following SLURM parameters inside the script, or within the interactive job (replace 48 with the number of cores you requested):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export SLURM_TASKS_PER_NODE=48&lt;br /&gt;
export SLURM_CPUS_ON_NODE=48&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To define a job array, you may add:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --array=1-300&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
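&lt;br /&gt;
Each array task receives its own index in the &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt; environment variable. A minimal sketch of using it (the program and input file names here are hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Task N of the array processes input_N.dat&lt;br /&gt;
./my_program input_${SLURM_ARRAY_TASK_ID}.dat&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;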
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --nodes=1                      # Number of nodes&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To exclude specific nodes, one may add the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --exclude=compute-0-[100-103],compute-0-67&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
Note that these two flags are mutually exclusive; use one or the other, not both.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job, OR:&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU (do not combine with --mem)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
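&lt;br /&gt;
With &amp;lt;code&amp;gt;--mem-per-cpu&amp;lt;/code&amp;gt;, the total allocation grows with the number of CPUs. For example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # 4 CPUs x 2G each = 8G total for the job&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;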
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Running MATLAB example==&lt;br /&gt;
In this example there are 3 files:&lt;br /&gt;
&lt;br /&gt;
myTable.m ⇒ a MATLAB script that repeatedly computes and prints a table of values (note that the while loop below never exits on its own, so the job runs until it is cancelled or hits its time limit)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039; a             b             c              d             \n&amp;#039;);&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
while 1&lt;br /&gt;
    for j = 1:10&lt;br /&gt;
        a = sin(10*j);&lt;br /&gt;
        b = a*cos(10*j);&lt;br /&gt;
        c = a + b;&lt;br /&gt;
        d = a - b;&lt;br /&gt;
        fprintf(&amp;#039;%+6.5f   %+6.5f   %+6.5f   %+6.5f   \n&amp;#039;,a,b,c,d);&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
fprintf(&amp;#039;=======================================\n&amp;#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
my_table_script.sh ⇒ this script runs the MATLAB program; submit it with sbatch&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem=50M&lt;br /&gt;
#SBATCH --partition=power-general&lt;br /&gt;
#SBATCH -A power-general-users&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
cd /a/home/cc/tree/taucc/staff/dvory/matlab&lt;br /&gt;
&lt;br /&gt;
matlab -nodisplay -nosplash -nodesktop -r &amp;quot;myTable; exit;&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
run_in_loop.sh ⇒ an optional helper that submits many copies of the job&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
for i in {1..100}&lt;br /&gt;
do&lt;br /&gt;
    sbatch my_table_script.sh&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the jobs with the following command (after making the script executable with &amp;lt;code&amp;gt;chmod +x run_in_loop.sh&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./run_in_loop.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
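&lt;br /&gt;
The same effect can be achieved with a single job array instead of a shell loop (a sketch; see the &amp;lt;code&amp;gt;--array&amp;lt;/code&amp;gt; option above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# One sbatch call that creates 100 array tasks&lt;br /&gt;
sbatch --array=1-100 my_table_script.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;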
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guides:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold3 AlphaFold3 Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
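&lt;br /&gt;
One more standard Slurm command that is needed almost daily:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Cancel a job:&lt;br /&gt;
scancel &amp;lt;job_number&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;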
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Your job failed, and when running &amp;lt;code&amp;gt;scontrol show job &amp;lt;job_id&amp;gt;&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sacct -j &amp;lt;job_id&amp;gt; -o JobID,JobName,State%20&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means that the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help with estimating how much RAM your job may need.&lt;br /&gt;
&lt;br /&gt;
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--dependency&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt;) flag to make one job wait for another, e.g. &amp;lt;code&amp;gt;--dependency=afterok:&amp;lt;job_id&amp;gt;&amp;lt;/code&amp;gt; to start only after the named job completes successfully.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --dependency=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like &amp;lt;code&amp;gt;free -m&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use Profiling Tools: Tools like &amp;lt;code&amp;gt;valgrind&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;gprof&amp;lt;/code&amp;gt;, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use &amp;lt;code&amp;gt;sacct&amp;lt;/code&amp;gt; to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1509</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Main_Page&amp;diff=1509"/>
		<updated>2025-03-20T12:13:30Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Welcome to HPC Guide.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
[[Linux basic commands]]&lt;br /&gt;
&lt;br /&gt;
[[Public queues]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a queue]]&lt;br /&gt;
&lt;br /&gt;
[[Submitting a job to a slurm queue]]&lt;br /&gt;
&lt;br /&gt;
[[PBS-To-SLURM]]&lt;br /&gt;
&lt;br /&gt;
[[Creating and using conda environment]]&lt;br /&gt;
&lt;br /&gt;
[[Palo Alto VPN for linux]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold]]&lt;br /&gt;
&lt;br /&gt;
[[Alphafold3]]&lt;br /&gt;
&lt;br /&gt;
[[Using GPU]]&lt;br /&gt;
&lt;br /&gt;
[[security installations]]&lt;br /&gt;
&lt;br /&gt;
[[Install matlab on work station per matlab user]]&lt;br /&gt;
&lt;br /&gt;
[[Storage and scratch]]&lt;br /&gt;
&lt;br /&gt;
This HPC Tutorial is designed for researchers at TAU who are in need of computational power (computer resources) and wish to explore and use our High Performance Computing (HPC) core facilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The audience may be completely unaware of the HPC concepts but must have some basic understanding of computers and computer programming.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is HPC?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
“High Performance Computing” (HPC) is computing on a “Supercomputer”, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a computer at the front line of contemporary processing capacity – particularly speed of calculation and available memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The components of a cluster are usually connected to each other through fast local area networks (“LAN”) with each node (computer used as a server) running its own instance of an operating system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low cost microprocessors, &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
high-speed networks, and software for high performance distributed computing.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Compute clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold3&amp;diff=1508</id>
		<title>Alphafold3</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold3&amp;diff=1508"/>
		<updated>2025-03-20T12:13:12Z</updated>

		<summary type="html">&lt;p&gt;Levk: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;AlphaFold3 Module Guide&amp;#039;&amp;#039;&amp;#039;  == Overview == This guide provides instructions on using the AlphaFold3 module in an HPC environment with Slurm and Apptainer (Singularity).  ==...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;AlphaFold3 Module Guide&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
This guide provides instructions on using the AlphaFold3 module in an HPC environment with Slurm and Apptainer (Singularity).&lt;br /&gt;
&lt;br /&gt;
== Loading the Module ==&lt;br /&gt;
To use AlphaFold3, first load the module:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load alphafold3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will automatically load the required Apptainer module and set up necessary environment variables.&lt;br /&gt;
&lt;br /&gt;
== Environment Variables ==&lt;br /&gt;
After loading the module, the following environment variables are available:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;$AF3_CONTAINER&amp;#039;&amp;#039;&amp;#039; - Path to the AlphaFold3 Singularity container&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;$AF3_MODELS&amp;#039;&amp;#039;&amp;#039; - Path to AlphaFold3 model parameters&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;$AF3_DB&amp;#039;&amp;#039;&amp;#039; - Path to the AlphaFold3 database&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;$AF3_SRC&amp;#039;&amp;#039;&amp;#039; - Path to AlphaFold3 source directory&lt;br /&gt;
&lt;br /&gt;
== Understanding Bind Mounts in Singularity ==&lt;br /&gt;
Singularity requires &amp;#039;&amp;#039;&amp;#039;bind mounts&amp;#039;&amp;#039;&amp;#039; to provide external directories inside the container. This ensures that the container can access necessary files without modifying its internal structure.&lt;br /&gt;
&lt;br /&gt;
A bind mount follows this format:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--bind /path/on/host:/path/inside/container&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/path/on/host&amp;lt;/code&amp;gt; → the folder on your actual system (outside the container)&lt;br /&gt;
* &amp;lt;code&amp;gt;/path/inside/container&amp;lt;/code&amp;gt; → where this folder will be accessible &amp;#039;&amp;#039;&amp;#039;inside&amp;#039;&amp;#039;&amp;#039; the container&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--bind /home/user/data:/root/input_data&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This allows the &amp;lt;code&amp;gt;/home/user/data&amp;lt;/code&amp;gt; directory on the host system to appear inside the container at &amp;lt;code&amp;gt;/root/input_data&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Binding a Folder for Your Input Data ==&lt;br /&gt;
The default AlphaFold3 example from Google DeepMind uses &amp;lt;code&amp;gt;/root/af_input&amp;lt;/code&amp;gt;, but this is &amp;#039;&amp;#039;&amp;#039;not a required folder inside the container&amp;#039;&amp;#039;&amp;#039;. You can bind your input folder to &amp;#039;&amp;#039;&amp;#039;any directory&amp;#039;&amp;#039;&amp;#039; inside the container.&lt;br /&gt;
&lt;br /&gt;
For example, if your input JSON is located at &amp;lt;code&amp;gt;/home/user/alphafold_inputs/fold_input.json&amp;lt;/code&amp;gt;, you can bind it inside the container at &amp;#039;&amp;#039;&amp;#039;any location&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--bind /home/user/alphafold_inputs:/root/custom_folder&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then, the JSON file should be referenced as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--json_path=/root/custom_folder/fold_input.json&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Refer to the official AlphaFold3 documentation for details on how to structure the input file: [https://github.com/google-deepmind/alphafold3/blob/main/docs/input.md AlphaFold3 Input File Guide]&lt;br /&gt;
&lt;br /&gt;
== Listing All Available Command Flags ==&lt;br /&gt;
To see a full list of available command-line options for AlphaFold3, run the following inside the container:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Each --bind must map to a distinct path inside the container; the AlphaFold3&lt;br /&gt;
# sources are bound to /root/af_src here (the container path itself is arbitrary).&lt;br /&gt;
singularity exec --nv \&lt;br /&gt;
    --bind $AF3_SRC:/root/af_src \&lt;br /&gt;
    --bind /tmp:/root/af_output \&lt;br /&gt;
    --bind $AF3_MODELS:/root/models \&lt;br /&gt;
    --bind $AF3_DB:/root/public_databases \&lt;br /&gt;
    --bind /home/user/alphafold_inputs:/root/custom_folder \&lt;br /&gt;
    $AF3_CONTAINER \&lt;br /&gt;
    python /root/af_src/run_alphafold.py --helpfull&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will provide detailed documentation on all possible flags and parameters that can be used when running AlphaFold3.&lt;br /&gt;
&lt;br /&gt;
== Running AlphaFold3 ==&lt;br /&gt;
To run AlphaFold3 inside the Singularity container, use:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Note: the sources and your inputs are bound to distinct container paths.&lt;br /&gt;
singularity exec --nv \&lt;br /&gt;
    --bind $AF3_SRC:/root/af_src \&lt;br /&gt;
    --bind /tmp:/root/af_output \&lt;br /&gt;
    --bind $AF3_MODELS:/root/models \&lt;br /&gt;
    --bind $AF3_DB:/root/public_databases \&lt;br /&gt;
    --bind /home/user/alphafold_inputs:/root/custom_folder \&lt;br /&gt;
    $AF3_CONTAINER \&lt;br /&gt;
    python /root/af_src/run_alphafold.py \&lt;br /&gt;
        --json_path=/root/custom_folder/fold_input.json \&lt;br /&gt;
        --model_dir=/root/models \&lt;br /&gt;
        --db_dir=/root/public_databases \&lt;br /&gt;
        --output_dir=/root/af_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;code&amp;gt;/home/user/alphafold_inputs&amp;lt;/code&amp;gt; with the actual path to your input folder.&lt;br /&gt;
&lt;br /&gt;
== Binding a Folder That is Not Related to Built-In Environment Variables ==&lt;br /&gt;
If you need to bind a folder that is &amp;#039;&amp;#039;&amp;#039;not already covered by&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;$AF3_SRC&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;$AF3_DB&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;$AF3_MODELS&amp;lt;/code&amp;gt;, you can specify it manually using the &amp;lt;code&amp;gt;--bind&amp;lt;/code&amp;gt; option. Example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--bind /external/data:/root/custom_data&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This makes `/external/data` accessible inside the container at `/root/custom_data`.&lt;br /&gt;
&lt;br /&gt;
Then, reference files from inside the container:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
--json_path=/root/custom_data/my_input.json&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Submitting an AlphaFold3 Job to Slurm ==&lt;br /&gt;
To submit an AlphaFold3 job to Slurm, create a script (e.g., &amp;lt;code&amp;gt;alphafold3_job.sh&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=alphafold3&lt;br /&gt;
#SBATCH --partition=gpu-general&lt;br /&gt;
#SBATCH --output=alphafold3.out&lt;br /&gt;
#SBATCH --error=alphafold3.err&lt;br /&gt;
#SBATCH --gres=gpu:1,af3&lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --mem=64G&lt;br /&gt;
#SBATCH --time=1-00:00:00&lt;br /&gt;
&lt;br /&gt;
module load alphafold3&lt;br /&gt;
&lt;br /&gt;
singularity exec --nv \&lt;br /&gt;
    --bind $AF3_SRC:/root/custom_folder \&lt;br /&gt;
    --bind /tmp:/root/af_output \&lt;br /&gt;
    --bind $AF3_MODELS:/root/models \&lt;br /&gt;
    --bind $AF3_DB:/root/public_databases \&lt;br /&gt;
    --bind /home/user/alphafold_inputs:/root/custom_folder \&lt;br /&gt;
    $AF3_CONTAINER \&lt;br /&gt;
    python /root/custom_folder/run_alphafold.py \&lt;br /&gt;
        --json_path=/root/af_input/fold_input.json \&lt;br /&gt;
        --model_dir=/root/models \&lt;br /&gt;
        --db_dir=/root/public_databases \&lt;br /&gt;
        --output_dir=/root/af_output&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the job using:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch alphafold3_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
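&lt;br /&gt;
Once submitted, you can monitor the job with standard Slurm and shell commands, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check the job&amp;#039;s state in the queue&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
# Follow the job&amp;#039;s standard output while it runs&lt;br /&gt;
tail -f alphafold3.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;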
&lt;br /&gt;
== Unloading the Module ==&lt;br /&gt;
To unload the module when done:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module unload alphafold3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will also unload the Apptainer module.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
* If the module does not load, ensure the correct name is used: `module avail alphafold3`&lt;br /&gt;
* If Slurm fails to allocate a node, check available resources: `sinfo -o &amp;quot;%N %G&amp;quot;`&lt;br /&gt;
* If the container fails to run, verify that the paths in `$AF3_CONTAINER`, `$AF3_MODELS`, and `$AF3_DB` are correct.&lt;br /&gt;
* If input files are not found, ensure the correct directory is bound and referenced properly inside the container.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1498</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1498"/>
		<updated>2024-12-07T18:12:57Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University has introduced a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, verify or fill in your mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup.&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (the “+” button in the bottom-right corner),&lt;br /&gt;
then enter the generated code from the app into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. In your browser, download one of the following packages:&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.1.1-c4.tgz GlobalProtect-6.1.1]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.2.1-c15.tgz GlobalProtect-6.2.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package, then install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
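For example, for the 6.0.1 package (exact filenames inside the archive may differ slightly between versions):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Extract the archive; it contains .deb and .rpm packages, e.g.&lt;br /&gt;
# GlobalProtect_UI_deb-6.0.1.1-6.deb and GlobalProtect_UI_rpm-6.0.1.1-6.rpm&lt;br /&gt;
tar -zxvf PanGPLinux-6.0.1-c6.tgz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;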
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dpkg -i GlobalProtect_UI_deb-6.0.1.1-6.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;yum localinstall GlobalProtect_UI_rpm-6.0.1.1-6.rpm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Launch and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open client by pressing on the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
And enter address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right)&lt;br /&gt;
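&lt;br /&gt;
If you installed the CLI flavor of the client instead of the UI, you can connect from a terminal (a sketch, assuming the &amp;lt;code&amp;gt;globalprotect&amp;lt;/code&amp;gt; command is on your PATH):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
globalprotect connect --portal vpn.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;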
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On recent Ubuntu versions (e.g. Ubuntu 22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC with the following content:&lt;br /&gt;
vim ~/ssl.conf&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On Kubuntu 22.04, for example, the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to (note: &amp;lt;code&amp;gt;Exec=&amp;lt;/code&amp;gt; lines are not run through a shell, so use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; with an absolute path instead of &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt;; replace &amp;lt;code&amp;gt;your_username&amp;lt;/code&amp;gt; with your own):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=env OPENSSL_CONF=/home/your_username/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Then open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1497</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1497"/>
		<updated>2024-10-28T14:01:03Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Example Slurm Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Alphafold ==&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by DeepMind (part of Alphabet/Google) that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
=== Databases ===&lt;br /&gt;
The necessary databases are mounted on nodes with GPUs and are located at `/alphafold_storage/alphafold_db`.&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
To run AlphaFold, use the `run_alphafold.sh` script located at `/powerapps/share/centos7/alphafold/alphafold-2.3.1/run_alphafold.sh`.&lt;br /&gt;
&lt;br /&gt;
===== &amp;#039;&amp;#039;&amp;#039;Required Parameters&amp;#039;&amp;#039;&amp;#039;: =====&lt;br /&gt;
* `-d &amp;lt;data_dir&amp;gt;`: Path to the directory of supporting data.&lt;br /&gt;
* `-o &amp;lt;output_dir&amp;gt;`: Path to a directory that will store the results.&lt;br /&gt;
* `-f &amp;lt;fasta_paths&amp;gt;`: Path to FASTA files containing sequences. If a single file contains multiple sequences, it is folded as a multimer. To fold several inputs one after another, separate the file paths with commas.&lt;br /&gt;
&lt;br /&gt;
* `-t &amp;lt;max_template_date&amp;gt;`: Maximum template release date to consider (ISO-8601 format, i.e., YYYY-MM-DD). This parameter helps in folding historical test sets.&lt;br /&gt;
&lt;br /&gt;
===== &amp;#039;&amp;#039;&amp;#039;Optional Parameters&amp;#039;&amp;#039;&amp;#039;: =====&lt;br /&gt;
* `-g &amp;lt;use_gpu&amp;gt;`: Enable NVIDIA runtime to run with GPUs (default: true).&lt;br /&gt;
* `-r &amp;lt;run_relax&amp;gt;`: Whether to run the final relaxation step on the predicted models (default: true).&lt;br /&gt;
* `-e &amp;lt;enable_gpu_relax&amp;gt;`: Run relax on GPU if GPU is enabled (default: true).&lt;br /&gt;
* `-n &amp;lt;openmm_threads&amp;gt;`: OpenMM threads (default: all available cores).&lt;br /&gt;
* `-a &amp;lt;gpu_devices&amp;gt;`: Comma-separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0).&lt;br /&gt;
* `-m &amp;lt;model_preset&amp;gt;`: Choose preset model configuration: &amp;#039;monomer&amp;#039;, &amp;#039;monomer_casp14&amp;#039;, &amp;#039;monomer_ptm&amp;#039;, or &amp;#039;multimer&amp;#039; (default: &amp;#039;monomer&amp;#039;).&lt;br /&gt;
* `-c &amp;lt;db_preset&amp;gt;`: Choose preset MSA database configuration (&amp;#039;reduced_dbs&amp;#039; or &amp;#039;full_dbs&amp;#039;, default: &amp;#039;full_dbs&amp;#039;).&lt;br /&gt;
* `-p &amp;lt;use_precomputed_msas&amp;gt;`: Whether to read MSAs written to disk (default: &amp;#039;false&amp;#039;).&lt;br /&gt;
* `-l &amp;lt;num_multimer_predictions_per_model&amp;gt;`: Number of predictions per model when using `model_preset=multimer` (default: 5).&lt;br /&gt;
* `-b &amp;lt;benchmark&amp;gt;`: Run multiple JAX model evaluations to obtain a timing that excludes compilation time (default: &amp;#039;false&amp;#039;).&lt;br /&gt;
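&lt;br /&gt;
For example, a multimer run that combines several of the optional flags might look like this (a sketch; paths use the environment variables set by the module, and the FASTA filename and flag values are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Fold a multimer, 3 predictions per model, templates up to end of 2021&lt;br /&gt;
bash $ALPHAFOLD_SCRIPT_PATH/run_alphafold.sh \&lt;br /&gt;
    -d $ALPHAFOLD_DB_PATH \&lt;br /&gt;
    -o ~/output_dir \&lt;br /&gt;
    -f multimer_query.fasta \&lt;br /&gt;
    -t 2021-12-31 \&lt;br /&gt;
    -m multimer \&lt;br /&gt;
    -l 3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;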
&lt;br /&gt;
==== Example Slurm Script ====&lt;br /&gt;
This script demonstrates how to submit an AlphaFold job using SLURM:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=AlphaFold-Multimer     # Job name&lt;br /&gt;
#SBATCH --partition=gpu2                  # Specify GPU partition&lt;br /&gt;
#SBATCH --nodes=1                         # Number of nodes&lt;br /&gt;
#SBATCH --ntasks=1                        # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=4                 # Number of CPU cores per task&lt;br /&gt;
#SBATCH --mem=32G                         # request RAM&lt;br /&gt;
#SBATCH --gres=gpu:1                      # Request 1 GPU&lt;br /&gt;
#SBATCH --output=alphafold_%j.out         # Standard output (with job ID)&lt;br /&gt;
#SBATCH --error=alphafold_%j.err          # Standard error (with job ID)&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-GPU selection&lt;br /&gt;
&lt;br /&gt;
# Load the required module/environment&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.3.1&lt;br /&gt;
&lt;br /&gt;
# Run the AlphaFold script&lt;br /&gt;
bash $ALPHAFOLD_SCRIPT_PATH/run_alphafold.sh -d $ALPHAFOLD_DB_PATH -o ~/output_dir -f $ALPHAFOLD_SCRIPT_PATH/examples/query.fasta -t $(date +%Y-%m-%d)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Important Notes ====&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Output Directory&amp;#039;&amp;#039;&amp;#039;: You can specify the output directory using the `-o` parameter to store the results. This directory can be anywhere you choose.&lt;br /&gt;
* The `-t` (max_template_date) parameter defines the maximum release date of templates to consider in the format `YYYY-MM-DD`. This is crucial when working with historical test sets, as it restricts the search for templates to those released on or before the specified date. You can use different dates depending on your requirements, such as the current date with `$(date +%Y-%m-%d)` or a specific historical date, like `-t 2021-12-31`.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Memory Requirements&amp;#039;&amp;#039;&amp;#039;: For monomer jobs, at least &amp;#039;&amp;#039;&amp;#039;32GB of RAM&amp;#039;&amp;#039;&amp;#039; is recommended. For multimer jobs, allocate at least &amp;#039;&amp;#039;&amp;#039;64GB of RAM&amp;#039;&amp;#039;&amp;#039;; however, for more complex or large structures, consider using &amp;#039;&amp;#039;&amp;#039;128GB or more&amp;#039;&amp;#039;&amp;#039; to ensure stability.&lt;br /&gt;
&lt;br /&gt;
==== Additional Resources ====&lt;br /&gt;
* You can download the `dummy_test` folder for sample output from the [https://github.com/kalininalab/alphafold_non_docker alphafold_non_docker GitHub repository].&lt;br /&gt;
* For sample data, you can use `/home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta` or provide your own data for queries.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Security_installations&amp;diff=1495</id>
		<title>Security installations</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Security_installations&amp;diff=1495"/>
		<updated>2024-10-20T14:31:27Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Installation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== NAC - Forescout security software for Linux ==&lt;br /&gt;
&lt;br /&gt;
Download and install the Forescout client:&lt;br /&gt;
&lt;br /&gt;
[http://hpcguide.tau.ac.il/nac/ForeScoutSecureConnector_64_visible_daemon.tar.gz ForeScoutSecureConnector_64_visible_daemon.tar.gz]&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar -zxvf ForeScoutSecureConnector_64_visible_daemon.tar.gz&lt;br /&gt;
cd secure_connector&lt;br /&gt;
./install.sh&lt;br /&gt;
systemctl start SecureConnector.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
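&lt;br /&gt;
To verify that the client is running, you can check the service status:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
systemctl status SecureConnector.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;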
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== EDR - Falcon (for ubuntu 22 and ubuntu 24) ==&lt;br /&gt;
&lt;br /&gt;
Download and install the Falcon client:&lt;br /&gt;
&lt;br /&gt;
[http://hpcguide.tau.ac.il/edr/falcon-sensor_7.18.0-17106_amd64.deb falcon-sensor_7.18.0-17106_amd64.deb]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
dpkg -i falcon-sensor_7.18.0-17106_amd64.deb&lt;br /&gt;
systemctl restart falcon-sensor.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run the command below to register the program (only needed for a new installation):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/opt/CrowdStrike/falconctl -s --cid=cid-code&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
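&lt;br /&gt;
To confirm that the sensor is registered and running (&amp;lt;code&amp;gt;falconctl -g --cid&amp;lt;/code&amp;gt; prints the configured CID):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
systemctl status falcon-sensor.service&lt;br /&gt;
/opt/CrowdStrike/falconctl -g --cid&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;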
&lt;br /&gt;
To obtain the cid-code, please send a request to infosec@tauex.tau.ac.il.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1494</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1494"/>
		<updated>2024-09-29T15:03:52Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Troubleshooting &amp;amp; Tips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* powerslurm-login.tau.ac.il&lt;br /&gt;
* powerslurm-login2.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to connect to the second login node, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login2.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules in SLURM allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules: If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command &amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt; to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
&lt;br /&gt;
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit pre_process.bash to the power-general partition for 10 minutes:&lt;br /&gt;
sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
&lt;br /&gt;
# With 1 GPU:&lt;br /&gt;
sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guide:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== Common Errors ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Job failed, and when running &amp;lt;code&amp;gt;scontrol show job job_id&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sacct -j job_id -o JobID,JobName,State%20&amp;lt;/code&amp;gt; &amp;lt;br /&amp;gt;you see &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt; or:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means the RAM requested for the job was not enough; resubmit the job with more RAM. See [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help estimating how much RAM your job may need.&lt;br /&gt;
&lt;br /&gt;
=== Chain Jobs ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt; flag to set job dependencies.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --depend=afterok:45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
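&lt;br /&gt;
A common pattern is to capture the first job&amp;#039;s ID and start the second job only after the first completes successfully (a sketch using sbatch&amp;#039;s &amp;lt;code&amp;gt;--parsable&amp;lt;/code&amp;gt; output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# --parsable makes sbatch print just the job ID&lt;br /&gt;
jobid=$(sbatch --parsable -p power-general -A power-general-users pre_process.bash)&lt;br /&gt;
# Run do_work.bash only if the first job finished successfully&lt;br /&gt;
sbatch --depend=afterok:$jobid -p power-general -A power-general-users do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;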
&lt;br /&gt;
=== Always Specify Resources ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
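&lt;br /&gt;
For example, a submission that explicitly specifies the partition, account, CPUs, memory, and a time limit (values are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch -p power-general -A power-general-users --ntasks=1 --cpus-per-task=4 --mem=8G --time=02:00:00 my_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;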
&lt;br /&gt;
=== Attaching to Running Jobs ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like `free -m`, `top`, or `htop`.&lt;br /&gt;
* Use Profiling Tools: Tools like `valgrind`, `gprof`, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use `sacct` to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1493</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1493"/>
		<updated>2024-09-29T15:03:01Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Accessing the System ==&lt;br /&gt;
&lt;br /&gt;
To submit jobs to SLURM at Tel Aviv University, you need to access the system through one of the following login nodes:&lt;br /&gt;
&lt;br /&gt;
* powerslurm-login.tau.ac.il&lt;br /&gt;
* powerslurm-login2.tau.ac.il&lt;br /&gt;
&lt;br /&gt;
=== Requirements for Access ===&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Group Membership&amp;#039;&amp;#039;&amp;#039;: You must be part of the &amp;quot;power&amp;quot; group to access the resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;University Credentials&amp;#039;&amp;#039;&amp;#039;: Use your Tel Aviv University username and password to log in.&lt;br /&gt;
&lt;br /&gt;
These login nodes are your starting point for submitting jobs, checking job status, and managing your SLURM tasks.&lt;br /&gt;
&lt;br /&gt;
=== SSH Example ===&lt;br /&gt;
&lt;br /&gt;
To access the system using SSH, use the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to connect to the second login node, use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; with your actual Tel Aviv University username&lt;br /&gt;
ssh your_username@powerslurm-login2.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an SSH key set up for password-less login, you can specify it like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;/path/to/your/private_key&amp;#039; accordingly&lt;br /&gt;
ssh -i /path/to/your/private_key your_username@powerslurm-login.tau.ac.il&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment Modules in SLURM allow users to dynamically modify their shell environment, providing an easy way to load and unload different software applications, libraries, and their dependencies. This system helps avoid conflicts between software versions and ensures the correct environment for running specific applications.&lt;br /&gt;
&lt;br /&gt;
Here are some common commands to work with environment modules:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#List Available Modules: To see all the modules available on the system, use:&lt;br /&gt;
module avail&lt;br /&gt;
&lt;br /&gt;
#To search for a specific module by name (e.g., `gcc`), use:&lt;br /&gt;
module avail gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Get Detailed Information About a Module: The `module spider` command provides detailed information about a module, including versions, dependencies, and descriptions:&lt;br /&gt;
module spider gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#View Module Settings: To see what environment variables and settings will be modified by a module, use:&lt;br /&gt;
module show gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Load a Module: To set up the environment for a specific software, use the `module load` command. For example, to load GCC version 12.1.0:&lt;br /&gt;
module load gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#List Loaded Modules: To view all currently loaded modules in your session, use:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
#Unload a Module: To unload a specific module from your environment, use:&lt;br /&gt;
module unload gcc/gcc-12.1.0&lt;br /&gt;
&lt;br /&gt;
#Unload All Modules:** If you need to clear your environment of all loaded modules, use:&lt;br /&gt;
module purge&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;By using these commands, you can easily manage the software environments needed for different tasks, ensuring compatibility and reducing potential conflicts between software versions.&lt;br /&gt;
&lt;br /&gt;
== Basic Job Submission Commands ==&lt;br /&gt;
&lt;br /&gt;
=== Finding Your Account and Partition ===&lt;br /&gt;
&lt;br /&gt;
Before submitting a job, you need to know which partitions you have permission to use.&lt;br /&gt;
&lt;br /&gt;
Run the command `&amp;lt;code&amp;gt;check_my_partitions&amp;lt;/code&amp;gt;` to view a list of all the partitions you have permission to send jobs to.&lt;br /&gt;
&lt;br /&gt;
== Submitting Jobs==&lt;br /&gt;
sbatch: Submits a job script for batch processing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&lt;br /&gt;
   # This command submits pre_process.bash to the power-general partition for 10 minutes. &lt;br /&gt;
   # With 1 GPU:&lt;br /&gt;
    sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Writing SLURM Job Scripts===&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script example:&lt;br /&gt;
&lt;br /&gt;
==== Basic Script====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Max run time (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=1                    # Number of tasks&lt;br /&gt;
#SBATCH --cpus-per-task=1             # CPUs per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Error file&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./my_program&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Script for 1 GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account           # Account name&lt;br /&gt;
#SBATCH --partition=gpu-general        # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00                # Max run time&lt;br /&gt;
#SBATCH --ntasks=1                     # Number of tasks&lt;br /&gt;
#SBATCH --cpus-per-task=1              # CPUs per task&lt;br /&gt;
#SBATCH --gres=gpu:1                   # Number of GPUs&lt;br /&gt;
#SBATCH --mem-per-cpu=4G               # Memory per CPU&lt;br /&gt;
#SBATCH --output=my_job_%j.out         # Output file&lt;br /&gt;
#SBATCH --error=my_job_%j.err          # Error file&lt;br /&gt;
&lt;br /&gt;
module load python/python-3.8&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Starting GPU job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Your GPU commands go here&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Importance of Correct RAM Usage in Jobs===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. &lt;br /&gt;
&lt;br /&gt;
Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Why Correct RAM Usage Matters ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource Efficiency&amp;#039;&amp;#039;&amp;#039;: Allocating the right amount of memory helps in optimal resource utilization, allowing more jobs to run simultaneously on the cluster.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job Stability&amp;#039;&amp;#039;&amp;#039;: Underestimating memory requirements can lead to OOM errors, causing your job to fail and waste computational resources.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Performance&amp;#039;&amp;#039;&amp;#039;: Overestimating memory needs can lead to underutilization of resources, potentially delaying other jobs in the queue.&lt;br /&gt;
&lt;br /&gt;
==== How to Specify Memory in SLURM ====&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem&amp;#039;&amp;#039;&amp;#039;: Specifies the total memory required for the job.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;--mem-per-cpu&amp;#039;&amp;#039;&amp;#039;: Specifies the memory required per CPU.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;&amp;#039;:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=4G              # Total memory for the job&lt;br /&gt;
#SBATCH --mem-per-cpu=2G      # Memory per CPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interactive Jobs===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --pty bash&lt;br /&gt;
&lt;br /&gt;
#Specify a compute node:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&lt;br /&gt;
&lt;br /&gt;
#Using GUI:&lt;br /&gt;
srun --ntasks=1 -p power-general -A power-general-users --x11 /bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting RELION Jobs===&lt;br /&gt;
&lt;br /&gt;
To submit a RELION job interactively on the &amp;lt;code&amp;gt;gpu-relion&amp;lt;/code&amp;gt; queue with X11 forwarding, use the following steps:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Start an interactive session with X11:&lt;br /&gt;
srun --ntasks=1 -p gpu-relion -A your_account --x11 --pty bash&lt;br /&gt;
#Load the RELION module:&lt;br /&gt;
module load relion/relion-4.0.1&lt;br /&gt;
#Launch RELION:&lt;br /&gt;
relion&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==AlphaFold==&lt;br /&gt;
&lt;br /&gt;
AlphaFold is a deep learning tool designed for predicting protein structures.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Guide:&amp;#039;&amp;#039;&amp;#039;  &lt;br /&gt;
[https://hpcguide.tau.ac.il/index.php?title=Alphafold AlphaFold Guide]&lt;br /&gt;
&lt;br /&gt;
==Common SLURM Commands==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#View all queues (partitions):&lt;br /&gt;
sinfo&lt;br /&gt;
#View all jobs:&lt;br /&gt;
squeue&lt;br /&gt;
#View details of a specific job:&lt;br /&gt;
scontrol show job &amp;lt;job_number&amp;gt;&lt;br /&gt;
#Get information about partitions:&lt;br /&gt;
scontrol show partition&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting &amp;amp; Tips ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Errors&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;  &amp;lt;br /&amp;gt;&amp;#039;&amp;#039;&amp;#039;Solution:&amp;#039;&amp;#039;&amp;#039; Always specify a partition. Example:  &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-general /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# Job failed, and upon doing scontrol show job job_id or when running sacct -j job_id -o JobID,JobName,State%20  &amp;lt;br /&amp;gt;you see:   &amp;lt;code&amp;gt;JobState=OUT_OF_MEMORY Reason=OutOfMemory&amp;lt;/code&amp;gt;  or :&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
JobID           JobName                State &lt;br /&gt;
------------ ---------- -------------------- &lt;br /&gt;
71             oom_test        OUT_OF_MEMORY &lt;br /&gt;
71.batch          batch        OUT_OF_MEMORY &lt;br /&gt;
71.extern        extern            COMPLETED &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;it means that the ram requested for the job was not enough, please resubmit the job again with more ram. see [https://wikihpc.tau.ac.il/index.php?title=Slurm_user_guide#Estimating_RAM_Usage below] for help with understanding how much ram your job may need.&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Chain Jobs&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt; flag to set job dependencies.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --ntasks=1 --time=60 -p power-general -A power-general-users --depend=45001 do_work.bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Always Specify Resources&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
When submitting jobs, ensure you include all required resources like partition, memory, and CPUs to avoid job failures.&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Attaching to Running Jobs&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
If you need to monitor or interact with a running job, use &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt;. This command allows you to attach to a job&amp;#039;s input, output, and error streams in real-time.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To view job steps of a specific job, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scontrol show job &amp;lt;job_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for sections labeled &amp;quot;StepId&amp;quot; within the output. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;For specific job steps, use:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sattach &amp;lt;job_id.step_id&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039; &amp;lt;code&amp;gt;sattach&amp;lt;/code&amp;gt; is particularly useful for interactive jobs, where you can provide input directly. For non-interactive jobs, it acts like &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;, allowing you to monitor the output stream.&lt;br /&gt;
&lt;br /&gt;
=== Estimating RAM Usage ===&lt;br /&gt;
&lt;br /&gt;
When writing SLURM job scripts, it&amp;#039;s crucial to understand and correctly specify the memory requirements for your job. Proper memory allocation ensures efficient resource usage and prevents job failures due to out-of-memory (OOM) errors.&lt;br /&gt;
&lt;br /&gt;
==== Tips for Estimating RAM Usage ====&lt;br /&gt;
&lt;br /&gt;
* Check Application Documentation: Refer to the official documentation or user guides for memory-related information.&lt;br /&gt;
* Run a Small Test Job: Submit a smaller version of your job and monitor its memory usage using commands like &amp;lt;code&amp;gt;free -m&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use Profiling Tools: Tools like &amp;lt;code&amp;gt;valgrind&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;gprof&amp;lt;/code&amp;gt;, or built-in profilers can help you understand memory usage.&lt;br /&gt;
* Analyze Previous Jobs: Review SLURM logs and job statistics for insights into memory consumption of past jobs.&lt;br /&gt;
* Consult with Peers or Experts: Ask colleagues or experts who have experience with similar workloads.&lt;br /&gt;
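&lt;br /&gt;
While a test job is running, SLURM itself can report its current memory footprint. A minimal sketch (replace &amp;lt;job_id&amp;gt; with your job&amp;#039;s ID; depending on your setup you may need to query the batch step, e.g. &amp;lt;code&amp;gt;&amp;lt;job_id&amp;gt;.batch&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show the peak resident memory (MaxRSS) of a running job&lt;br /&gt;
sstat -j &amp;lt;job_id&amp;gt; --format=JobID,MaxRSS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;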
&lt;br /&gt;
==== Example: Monitoring Memory Usage ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=memory_test&lt;br /&gt;
#SBATCH --account=your_account&lt;br /&gt;
#SBATCH --partition=your_partition&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --output=memory_test.out&lt;br /&gt;
#SBATCH --error=memory_test.err&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage&lt;br /&gt;
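# Note: free reports memory for the whole node, not only this job&lt;br /&gt;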
echo &amp;quot;Memory usage before running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&lt;br /&gt;
# Your application commands go here&lt;br /&gt;
# ./your_application&lt;br /&gt;
&lt;br /&gt;
# Monitor memory usage after running the job&lt;br /&gt;
echo &amp;quot;Memory usage after running the job:&amp;quot;&lt;br /&gt;
free -m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== General Tips ====&lt;br /&gt;
&lt;br /&gt;
* Start Small: Begin with a conservative memory request and increase it based on observed usage.&lt;br /&gt;
* Consider Peak Usage: Plan for peak memory usage to avoid OOM errors.&lt;br /&gt;
* Use SLURM&amp;#039;s Memory Reporting: Use &amp;lt;code&amp;gt;sacct&amp;lt;/code&amp;gt; to view memory usage statistics.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Example:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sacct -j &amp;lt;job_id&amp;gt; --format=JobID,JobName,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=NAC_and_EDR_for_Linux&amp;diff=1481</id>
		<title>NAC and EDR for Linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=NAC_and_EDR_for_Linux&amp;diff=1481"/>
		<updated>2024-09-23T12:27:45Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* NAC - Forescout security software for Linux */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== NAC - Forescout security software for Linux ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download and install the Forescout client:&lt;br /&gt;
[http://hpcguide.tau.ac.il/nac/ForeScoutSecureConnector_64_visible_daemon.tar.gz hpcguide.tau.ac.il/nac/ForeScoutSecureConnector_64_visible_daemon.tar.gz]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1460</id>
		<title>Slurm API</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1460"/>
		<updated>2024-04-07T14:01:24Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Python Example: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This documentation provides comprehensive guidance on interfacing with the SLURM API for job submission within the PowerSlurm cluster. These instructions can be used for submitting jobs that originate from a web interface. &lt;br /&gt;
&lt;br /&gt;
For a detailed understanding of the API&amp;#039;s capabilities and functionalities, refer to the official SLURM REST API documentation: https://slurm.schedmd.com/rest_api.html&lt;br /&gt;
= Authentication =&lt;br /&gt;
&lt;br /&gt;
==== Introduction ====&lt;br /&gt;
Secure access to the SLURM REST API is managed through JWT (JSON Web Tokens). This section provides a step-by-step guide on how to obtain a JWT token, which is essential for authenticating and authorizing API requests.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* An API key provided by the High-Performance Computing (HPC) team.&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
* Base URL for the SLURM REST API: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Endpoint for token generation: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Python Example for creating a JWT token ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;your_api_key&amp;#039; with actual values&lt;br /&gt;
# token = get_api_token(&amp;#039;your_username&amp;#039;, &amp;#039;your_api_key&amp;#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manual Token Generation (General Method) ====&lt;br /&gt;
&lt;br /&gt;
# Send a POST request to &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; with your username and API key.&lt;br /&gt;
# The request should be in the format of a JSON payload containing your credentials.&lt;br /&gt;
# On success, the server will return a JSON response in the format: &amp;lt;code&amp;gt;{ &amp;quot;SlurmJWT&amp;quot;: &amp;quot;token&amp;quot; }&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Extract the &amp;lt;code&amp;gt;SlurmJWT&amp;lt;/code&amp;gt; value from the response. This is your required JWT token.&lt;br /&gt;
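&lt;br /&gt;
A command-line sketch of the same request (it mirrors the Python example above, which sends form-encoded fields; replace the credentials with your own):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# POST the username and API key; the response is { &amp;quot;SlurmJWT&amp;quot;: &amp;quot;token&amp;quot; }&lt;br /&gt;
curl -X POST https://slurmtron.tau.ac.il/slurmapi/generate-token/ \&lt;br /&gt;
     -d &amp;#039;username=your_username&amp;#039; \&lt;br /&gt;
     -d &amp;#039;api_key=your_api_key&amp;#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;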
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Keep your API key and JWT token confidential.&lt;br /&gt;
* Use the JWT token in the header of your API requests for authorized access to the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
= Job Submission to SLURM REST API: =&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
This section covers the process of submitting a job to the SLURM REST API at Tel Aviv University.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* Access to the SLURM REST API.&lt;br /&gt;
* An API key and username, provided by the Tel Aviv University HPC team.&lt;br /&gt;
* A tool or library for making HTTP requests (e.g., &amp;lt;code&amp;gt;requests&amp;lt;/code&amp;gt; in Python).&lt;br /&gt;
&lt;br /&gt;
== Python Example: ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/python3&lt;br /&gt;
&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
# Base URL for authentication and token generation&lt;br /&gt;
base_url_auth = &amp;#039;https://slurmtron.tau.ac.il&amp;#039;&lt;br /&gt;
generate_token_url = f&amp;quot;{base_url_auth}/slurmapi/generate-token/&amp;quot;&lt;br /&gt;
# Base URL for job submission&lt;br /&gt;
base_url = f&amp;quot;{base_url_auth}/slurmrestd&amp;quot;&lt;br /&gt;
# Job submission URL&lt;br /&gt;
job_url = f&amp;#039;{base_url}/slurm/v0.0.40/job/submit&amp;#039;&lt;br /&gt;
&lt;br /&gt;
# User credentials&lt;br /&gt;
current_user = &amp;quot;user&amp;quot;&lt;br /&gt;
api_key = &amp;quot;token&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Authorization headers with the obtained token&lt;br /&gt;
headers = {&lt;br /&gt;
    &amp;#039;X-SLURM-USER-NAME&amp;#039;: current_user,&lt;br /&gt;
    &amp;#039;X-SLURM-USER-TOKEN&amp;#039;: get_api_token(current_user, api_key)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Job submission request&lt;br /&gt;
jobs_request = requests.post(&lt;br /&gt;
    job_url,&lt;br /&gt;
    headers=headers,&lt;br /&gt;
    json={&lt;br /&gt;
        # Example job script&lt;br /&gt;
        &amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\n\n&amp;quot;&lt;br /&gt;
                  &amp;quot;srun hostname\n&amp;quot;&lt;br /&gt;
                  &amp;quot;echo \&amp;quot;hello world444\&amp;quot;\n&amp;quot;&lt;br /&gt;
                  &amp;quot;sleep 30&amp;quot;,&lt;br /&gt;
        &amp;quot;job&amp;quot;: {&lt;br /&gt;
            &amp;quot;partition&amp;quot;: &amp;quot;&amp;lt; queue/partition_name &amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;tasks&amp;quot;: 1,&lt;br /&gt;
            &amp;quot;name&amp;quot;: &amp;quot;&amp;lt; job_name&amp;gt; &amp;quot;,&lt;br /&gt;
            &amp;quot;account&amp;quot;: &amp;quot;&amp;lt; account_name &amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;,&lt;br /&gt;
            &amp;quot;cpus_per_task&amp;quot;: &amp;lt; cpu_number &amp;gt;,&lt;br /&gt;
            &amp;quot;memory_per_node&amp;quot;: {&lt;br /&gt;
                &amp;quot;number&amp;quot;: &amp;lt;ram in MB &amp;gt;,&lt;br /&gt;
                &amp;quot;set&amp;quot;: True,&lt;br /&gt;
                &amp;quot;infinite&amp;quot;: False&lt;br /&gt;
            },&lt;br /&gt;
            # Full path to your error/output file.&lt;br /&gt;
            &amp;quot;standard_output&amp;quot;: &amp;quot;/path/to/your/output.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_error&amp;quot;: &amp;quot;/path/to/your/error.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;,&lt;br /&gt;
            # Environment modules (module load) should not be used directly under the script parameter. Instead, set all necessary environment variables under the environment parameter.&lt;br /&gt;
            &amp;quot;environment&amp;quot;: [&lt;br /&gt;
                &amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;,&lt;br /&gt;
                &amp;quot;LD_LIBRARY_PATH=/lib/:/lib64/:/usr/local/lib&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
        },&lt;br /&gt;
    }&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Processing the job submission result&lt;br /&gt;
jobs_result = jobs_request.json()[&amp;#039;result&amp;#039;]&lt;br /&gt;
for key, value in jobs_result.items():&lt;br /&gt;
    print(key, value)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Important Notes ====&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter in the job submission request is an example. Customize this script to fit your specific job requirements.&lt;br /&gt;
* Use full and appropriate paths for &amp;lt;code&amp;gt;&amp;quot;standard_output&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;quot;standard_error&amp;quot;&amp;lt;/code&amp;gt;. Replace the placeholders with actual paths where you want the output and error files to be stored.&lt;br /&gt;
* Environment modules (&amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;) should not be used directly under the &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter. Instead, set all necessary environment variables under the &amp;lt;code&amp;gt;environment&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
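&lt;br /&gt;
For reference, a minimal command-line sketch of the same submission (the token, partition, account, and job values are placeholders; the endpoint and headers are the ones used in the Python example above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Submit a one-task job via the REST API using a previously generated JWT&lt;br /&gt;
curl -X POST https://slurmtron.tau.ac.il/slurmrestd/slurm/v0.0.40/job/submit \&lt;br /&gt;
     -H &amp;quot;X-SLURM-USER-NAME: your_username&amp;quot; \&lt;br /&gt;
     -H &amp;quot;X-SLURM-USER-TOKEN: your_jwt_token&amp;quot; \&lt;br /&gt;
     -H &amp;quot;Content-Type: application/json&amp;quot; \&lt;br /&gt;
     -d &amp;#039;{&amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\nsrun hostname&amp;quot;, &amp;quot;job&amp;quot;: {&amp;quot;partition&amp;quot;: &amp;quot;your_partition&amp;quot;, &amp;quot;account&amp;quot;: &amp;quot;your_account&amp;quot;, &amp;quot;name&amp;quot;: &amp;quot;rest_test&amp;quot;, &amp;quot;tasks&amp;quot;: 1, &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;, &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;, &amp;quot;environment&amp;quot;: [&amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;]}}&amp;#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;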
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Securely handle your API key and other sensitive information.&lt;br /&gt;
* Regularly review and update your scripts to align with updates in the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;More Examples&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
https://docs.lxp.lu/cloud/slurmrestd/&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1459</id>
		<title>Slurm API</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1459"/>
		<updated>2024-04-07T13:57:37Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Python Example: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This documentation provides comprehensive guidance on interfacing with the SLURM API for job submission within the PowerSlurm cluster. These instructions can be used for submitting jobs that originate from a web interface. &lt;br /&gt;
&lt;br /&gt;
For a detailed understanding of the API&amp;#039;s capabilities and functionalities, refer to the official SLURM REST API documentation: https://slurm.schedmd.com/rest_api.html&lt;br /&gt;
= Authentication =&lt;br /&gt;
&lt;br /&gt;
==== Introduction ====&lt;br /&gt;
Secure access to the SLURM REST API is managed through JWT (JSON Web Tokens). This section provides a step-by-step guide on how to obtain a JWT token, which is essential for authenticating and authorizing API requests.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* An API key provided by the High-Performance Computing (HPC) team.&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
* Base URL for the SLURM REST API: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Endpoint for token generation: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Python Example for creating a JWT token ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;your_api_key&amp;#039; with actual values&lt;br /&gt;
# token = get_api_token(&amp;#039;your_username&amp;#039;, &amp;#039;your_api_key&amp;#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manual Token Generation (General Method) ====&lt;br /&gt;
&lt;br /&gt;
# Send a POST request to &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; with your username and API key.&lt;br /&gt;
# The request should be in the format of a JSON payload containing your credentials.&lt;br /&gt;
# On success, the server will return a JSON response in the format: &amp;lt;code&amp;gt;{ &amp;quot;SlurmJWT&amp;quot;: &amp;quot;token&amp;quot; }&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Extract the &amp;lt;code&amp;gt;SlurmJWT&amp;lt;/code&amp;gt; value from the response. This is your required JWT token.&lt;br /&gt;
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Keep your API key and JWT token confidential.&lt;br /&gt;
* Use the JWT token in the header of your API requests for authorized access to the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
= Job Submission to SLURM REST API: =&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
This section covers the process of submitting a job to the SLURM REST API at Tel Aviv University.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* Access to the SLURM REST API.&lt;br /&gt;
* An API key and username, provided by the Tel Aviv University HPC team.&lt;br /&gt;
* A tool or library for making HTTP requests (e.g., &amp;lt;code&amp;gt;requests&amp;lt;/code&amp;gt; in Python).&lt;br /&gt;
&lt;br /&gt;
== Python Example: ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/python3&lt;br /&gt;
&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
# Base URL for authentication and token generation&lt;br /&gt;
base_url_auth = &amp;#039;https://slurmtron.tau.ac.il&amp;#039;&lt;br /&gt;
generate_token_url = f&amp;quot;{base_url_auth}/slurmapi/generate-token/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# User credentials&lt;br /&gt;
current_user = &amp;quot;user&amp;quot;&lt;br /&gt;
api_key = &amp;quot;token&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;your_api_key&amp;#039; with actual values&lt;br /&gt;
# token = get_api_token(&amp;#039;your_username&amp;#039;, &amp;#039;your_api_key&amp;#039;)&lt;br /&gt;
&lt;br /&gt;
# Base URL for job submission&lt;br /&gt;
base_url = f&amp;quot;{base_url_auth}/slurmrestd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Job submission URL&lt;br /&gt;
job_url = f&amp;#039;{base_url}/slurm/v0.0.40/job/submit&amp;#039;&lt;br /&gt;
&lt;br /&gt;
# Authorization headers with the obtained token&lt;br /&gt;
headers = {&lt;br /&gt;
    &amp;#039;X-SLURM-USER-NAME&amp;#039;: current_user,&lt;br /&gt;
    &amp;#039;X-SLURM-USER-TOKEN&amp;#039;: get_api_token(current_user, api_key)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Job submission request&lt;br /&gt;
jobs_request = requests.post(&lt;br /&gt;
    job_url,&lt;br /&gt;
    headers=headers,&lt;br /&gt;
    json={&lt;br /&gt;
        # Example job script&lt;br /&gt;
        &amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\n\n&amp;quot;&lt;br /&gt;
                  &amp;quot;srun hostname\n&amp;quot;&lt;br /&gt;
                  &amp;quot;echo \&amp;quot;hello world444\&amp;quot;\n&amp;quot;&lt;br /&gt;
                  &amp;quot;sleep 30&amp;quot;,&lt;br /&gt;
        &amp;quot;job&amp;quot;: {&lt;br /&gt;
            &amp;quot;partition&amp;quot;: &amp;quot;&amp;lt; queue/partition_name &amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;tasks&amp;quot;: 1,&lt;br /&gt;
            &amp;quot;name&amp;quot;: &amp;quot;&amp;lt; job_name&amp;gt; &amp;quot;,&lt;br /&gt;
            &amp;quot;account&amp;quot;: &amp;quot;&amp;lt; account_name &amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;,&lt;br /&gt;
            &amp;quot;cpus_per_task&amp;quot;: &amp;lt; cpu_number &amp;gt;,&lt;br /&gt;
            &amp;quot;memory_per_node&amp;quot;: {&lt;br /&gt;
                &amp;quot;number&amp;quot;: &amp;lt;ram in MB &amp;gt;,&lt;br /&gt;
                &amp;quot;set&amp;quot;: True,&lt;br /&gt;
                &amp;quot;infinite&amp;quot;: False&lt;br /&gt;
            },&lt;br /&gt;
            # Full path to your error/output file.&lt;br /&gt;
            &amp;quot;standard_output&amp;quot;: &amp;quot;/path/to/your/output.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_error&amp;quot;: &amp;quot;/path/to/your/error.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;,&lt;br /&gt;
            # Environment modules (module load) should not be used directly under the script parameter. Instead, set all necessary environment variables under the environment parameter.&lt;br /&gt;
            &amp;quot;environment&amp;quot;: [&lt;br /&gt;
                &amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;,&lt;br /&gt;
                &amp;quot;LD_LIBRARY_PATH=/lib/:/lib64/:/usr/local/lib&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
        },&lt;br /&gt;
    }&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Processing the job submission result&lt;br /&gt;
jobs_result = jobs_request.json()[&amp;#039;result&amp;#039;]&lt;br /&gt;
for key, value in jobs_result.items():&lt;br /&gt;
    print(key, value)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Important Notes ====&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter in the job submission request is an example. Customize this script to fit your specific job requirements.&lt;br /&gt;
* Use full and appropriate paths for &amp;lt;code&amp;gt;&amp;quot;standard_output&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;quot;standard_error&amp;quot;&amp;lt;/code&amp;gt;. Replace the placeholders with actual paths where you want the output and error files to be stored.&lt;br /&gt;
* Environment modules (&amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;) should not be used directly under the &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter. Instead, set all necessary environment variables under the &amp;lt;code&amp;gt;environment&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Securely handle your API key and other sensitive information.&lt;br /&gt;
* Regularly review and update your scripts to align with updates in the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;More Examples&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
https://docs.lxp.lu/cloud/slurmrestd/&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1453</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1453"/>
		<updated>2024-03-07T12:36:29Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Finding your account and partition */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM (Simple Linux Utility for Resource Management) is a job scheduler used on many high-performance computing systems. It manages and allocates resources such as compute nodes and controls job execution.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the System ===&lt;br /&gt;
To submit jobs to the SLURM scheduler at Tel Aviv University, you must access the system through one of the designated login nodes. These nodes act as the gateway for submitting and managing your SLURM jobs. The available login nodes are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login2.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Login Requirements: ====&lt;br /&gt;
&lt;br /&gt;
# Membership in the &amp;quot;power&amp;quot; group: Ensure you are a part of the &amp;quot;power&amp;quot; group which grants the necessary permissions for accessing the HPC resources.&lt;br /&gt;
# University Credentials: Log in using your Tel Aviv University credentials. This ensures secure access and that your job submissions are appropriately accounted for under your user profile.&lt;br /&gt;
&lt;br /&gt;
Remember, these login nodes are the initial point of contact for all your job management tasks, including job submission, monitoring, and other SLURM-related operations.&lt;br /&gt;
&lt;br /&gt;
=== Basic Job Submission Commands ===&lt;br /&gt;
====Finding your account and partition====&lt;br /&gt;
In order to submit jobs to slurm, you need to know which accounts and partitions you belong to. Each account may belong to one or more partitions.&lt;br /&gt;
&lt;br /&gt;
To see the accounts you belong to, type (replacing &amp;lt;code&amp;gt;dvory&amp;lt;/code&amp;gt; with your username):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sacctmgr show associations where user=dvory format=Account%20&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you know your partition and would like to know which account to specify when using it, run (on powerslurm-login):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
check_allowed_account -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
check_allowed_account -p power-general&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Example for submitting jobs====&lt;br /&gt;
# sbatch: Submit a batch job script.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;sbatch --ntasks=1 --time=10 -p power-general -A power-general-users pre_process.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This submits &amp;lt;code&amp;gt;pre_process.bash&amp;lt;/code&amp;gt; with 1 task for 10 minutes.&lt;br /&gt;
#* Example of chaining jobs: &amp;lt;code&amp;gt;sbatch --ntasks=128 --time=60 -p power-general -A power-general-users --depend=45001 do_work.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Example with GPU: &amp;lt;code&amp;gt;sbatch --ntasks=1 --time=10 --gres=gpu:2 -p gpu-general -A gpu-general-users pre_process.bash&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sbatch --gres=gpu:1 -p gpu-general -A gpu-general-users gpu_job.sh&amp;lt;/code&amp;gt; &lt;br /&gt;
# salloc: Allocate resources for an interactive job but doesn&amp;#039;t start it immediately.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;salloc --ntasks=8 --time=10 -p power-general -A power-general-users bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# srun: Submit an interactive job with MPI (Message Passing Interface), often called a &amp;quot;job step.&amp;quot;&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;srun --ntasks=2 -p power-general -A power-general-users --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
#* With MPI: &amp;lt;code&amp;gt;srun --ntasks=2 -p power-general -A power-general-users --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
# sattach: Attach stdin/out/err to an existing job or job step.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Job Examples ===&lt;br /&gt;
* Opening a bash shell: &amp;lt;code&amp;gt;srun --ntasks=56 -p power-general -A power-general-users  --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Specifying compute nodes: &amp;lt;code&amp;gt;srun --ntasks=56 -p power-general -A power-general-users --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Using a GPU: &amp;lt;code&amp;gt;salloc --ntasks=8 --time=10 --gres=gpu:4 -p gpu-general -A gpu-general-users bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script Examples: ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=power-general-users # Account name for billing&lt;br /&gt;
#SBATCH --partition=power-general     # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Time allotted for the job (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=4                    # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=1             # Number of CPU cores per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU core&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Standard output and error log (%j expands to jobId)&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Separate file for standard error&lt;br /&gt;
&lt;br /&gt;
# Load modules or software if required&lt;br /&gt;
# module load python/3.8&lt;br /&gt;
&lt;br /&gt;
# Print some information about the job&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run your application, this could be anything from a custom script to standard applications&lt;br /&gt;
# ./my_program&lt;br /&gt;
# python my_script.py&lt;br /&gt;
&lt;br /&gt;
# End of script&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Script example with GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account          # Account name for billing&lt;br /&gt;
#SBATCH --partition=long              # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Time allotted for the job (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=4                    # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=1             # Number of CPU cores per task&lt;br /&gt;
#SBATCH --gres=gpu:NUMBER_OF_GPUS     # number of GPUs to use in the job&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU core&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Standard output and error log (%j expands to jobId)&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Separate file for standard error&lt;br /&gt;
&lt;br /&gt;
# Load modules or software if required&lt;br /&gt;
module load python/3.8&lt;br /&gt;
&lt;br /&gt;
# Print some information about the job&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run your application, this could be anything from a custom script to standard applications&lt;br /&gt;
# ./my_program&lt;br /&gt;
# python my_script.py&lt;br /&gt;
&lt;br /&gt;
# End of script&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Error Handling ===&lt;br /&gt;
* On some clusters, specifying resources is necessary. Without them, the job may fail.&lt;br /&gt;
** Example error: &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;&lt;br /&gt;
** Correct usage: &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-yoren /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Be aware that specifying GPU resources (e.g. &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt;) is crucial for GPU jobs.&lt;br /&gt;
&lt;br /&gt;
=== SLURM Information Commands ===&lt;br /&gt;
&lt;br /&gt;
* sinfo: View all queues (partitions).&lt;br /&gt;
* squeue: View all jobs.&lt;br /&gt;
* scontrol show partition: View all partitions.&lt;br /&gt;
* scontrol show job &amp;lt;job_number&amp;gt;: View a job&amp;#039;s attributes.&lt;br /&gt;
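&lt;br /&gt;
For instance, to list only your own jobs:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show only jobs belonging to the current user&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;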
&lt;br /&gt;
=== Tips for Managing SLURM Jobs ===&lt;br /&gt;
&lt;br /&gt;
* Chain jobs by using the &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt; flag in &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; for interactive jobs that require specific resources for a limited time.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is versatile for both interactive and batch jobs, especially with MPI.&lt;br /&gt;
* Always specify necessary resources in clusters where defaults are not set.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1444</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1444"/>
		<updated>2024-03-04T12:04:00Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==== &amp;#039;&amp;#039;&amp;#039;Alphafold&amp;#039;&amp;#039;&amp;#039; ====&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structure.&lt;br /&gt;
&lt;br /&gt;
==== &amp;#039;&amp;#039;&amp;#039;Databases:&amp;#039;&amp;#039;&amp;#039; ====&lt;br /&gt;
The databases are mounted on the GPU nodes at /alphafold_storage/alphafold_db.&lt;br /&gt;
&lt;br /&gt;
===== Usage: =====&lt;br /&gt;
Use the run_alphafold.sh script located at &amp;lt;code&amp;gt;/powerapps/share/centos7/alphafold/alphafold-2.3.1/run_alphafold.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
 &amp;lt;code&amp;gt;Required Parameters:&lt;br /&gt;
 -d &amp;lt;data_dir&amp;gt;         Path to directory of supporting data&lt;br /&gt;
 -o &amp;lt;output_dir&amp;gt;       Path to a directory that will store the results.&lt;br /&gt;
 -f &amp;lt;fasta_paths&amp;gt;      Path to FASTA files containing sequences. If a FASTA file contains multiple sequences, then it will be folded as a multimer. To fold more sequences one after another, write the files separated by a comma&lt;br /&gt;
 -t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format - i.e. YYYY-MM-DD). Important if folding historical test sets&lt;br /&gt;
 Optional Parameters:&lt;br /&gt;
 -g &amp;lt;use_gpu&amp;gt;          Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
 -r &amp;lt;run_relax&amp;gt;        Whether to run the final relaxation step on the predicted models. Turning relax off might result in predictions with distracting stereochemical violations but might help in case you are having issues with the relaxation stage (default: true)&lt;br /&gt;
 -e &amp;lt;enable_gpu_relax&amp;gt; Run relax on GPU if GPU is enabled (default: true)&lt;br /&gt;
 -n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
 -a &amp;lt;gpu_devices&amp;gt;      Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
 -m &amp;lt;model_preset&amp;gt;     Choose preset model configuration - the monomer model, the monomer model with extra ensembling, monomer model with pTM head, or multimer model (default: &amp;#039;monomer&amp;#039;)&lt;br /&gt;
 -c &amp;lt;db_preset&amp;gt;        Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: &amp;#039;full_dbs&amp;#039;)&lt;br /&gt;
 -p &amp;lt;use_precomputed_msas&amp;gt; Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed (default: &amp;#039;false&amp;#039;)&lt;br /&gt;
 -l &amp;lt;num_multimer_predictions_per_model&amp;gt; How many predictions (each with a different random seed) will be generated per model. E.g. if this is 2 and there are 5 models then there will be 10 predictions per input. Note: this FLAG only applies if model_preset=multimer (default: 5)&lt;br /&gt;
 -b &amp;lt;benchmark&amp;gt;        Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: &amp;#039;false&amp;#039;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Sample Qsub Script: =====&lt;br /&gt;
Create a folder for the output in your home directory (&amp;lt;code&amp;gt;mkdir ~/alphafold_output&amp;lt;/code&amp;gt;), then run the script.&lt;br /&gt;
&lt;br /&gt;
* You may also download the dummy_test folder from the GitHub repository below for the output.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;https://github.com/kalininalab/alphafold_non_docker&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; please point to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;#039;&amp;#039;&amp;#039;-a $CUDA_VISIBLE_DEVICES&amp;#039;&amp;#039;&amp;#039; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
* $ALPHAFOLD_SCRIPT_PATH = /powerapps/share/centos7/alphafold/alphafold-2.3.1/&lt;br /&gt;
* $ALPHAFOLD_DB_PATH = /alphafold_storage/alphafold_db&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
##choose any gpu queue: gpu/gpu2&lt;br /&gt;
#PBS -q gpu2&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.3.1&lt;br /&gt;
&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash $ALPHAFOLD_SCRIPT_PATH/run_alphafold.sh -d $ALPHAFOLD_DB_PATH -o ~/output_dir -f $ALPHAFOLD_SCRIPT_PATH/examples/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Sample Slurm Script =====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=AlphaFold-Multimer     # Job name&lt;br /&gt;
#SBATCH --partition=gpu2                  # Specify GPU partition&lt;br /&gt;
#SBATCH --nodes=1                         # Number of nodes&lt;br /&gt;
#SBATCH --ntasks=1                        # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=4                 # Number of CPU cores per task&lt;br /&gt;
#SBATCH --gres=gpu:1                      # Request 1 GPU&lt;br /&gt;
#SBATCH --output=alphafold_%j.out         # Standard output (with job ID)&lt;br /&gt;
#SBATCH --error=alphafold_%j.err          # Standard error (with job ID)&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
&lt;br /&gt;
# Load the required module/environment&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.3.1&lt;br /&gt;
&lt;br /&gt;
# Run the AlphaFold script&lt;br /&gt;
bash $ALPHAFOLD_SCRIPT_PATH/run_alphafold.sh -d $ALPHAFOLD_DB_PATH -o ~/output_dir -f $ALPHAFOLD_SCRIPT_PATH/examples/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1443</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1443"/>
		<updated>2024-03-04T12:00:14Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM (Simple Linux Utility for Resource Management) is a job scheduler used on many high-performance computing systems. It manages and allocates resources such as compute nodes and controls job execution.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the System ===&lt;br /&gt;
To submit jobs to the SLURM scheduler at Tel Aviv University, you must access the system through one of the designated login nodes. These nodes act as the gateway for submitting and managing your SLURM jobs. The available login nodes are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login2.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Login Requirements: ====&lt;br /&gt;
&lt;br /&gt;
# Membership in the &amp;quot;power&amp;quot; group: Ensure you are a part of the &amp;quot;power&amp;quot; group which grants the necessary permissions for accessing the HPC resources.&lt;br /&gt;
# University Credentials: Log in using your Tel Aviv University credentials. This ensures secure access and that your job submissions are appropriately accounted for under your user profile.&lt;br /&gt;
&lt;br /&gt;
Remember, these login nodes are the initial point of contact for all your job management tasks, including job submission, monitoring, and other SLURM-related operations.&lt;br /&gt;
&lt;br /&gt;
=== Basic Job Submission Commands ===&lt;br /&gt;
&lt;br /&gt;
# sbatch: Submit a batch job script.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;sbatch --ntasks=1 --time=10 pre_process.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This submits &amp;lt;code&amp;gt;pre_process.bash&amp;lt;/code&amp;gt; with 1 task for 10 minutes.&lt;br /&gt;
#* Example of chaining jobs: &amp;lt;code&amp;gt;sbatch --ntasks=128 --time=60 --depend=45001 do_work.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#* Example with GPU: &amp;lt;code&amp;gt;sbatch --ntasks=1 --time=10 --gres=gpu:2 pre_process.bash&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;sbatch --gres=gpu:1 gpu_job.sh&amp;lt;/code&amp;gt; &lt;br /&gt;
# salloc: Allocate resources for an interactive job but doesn&amp;#039;t start it immediately.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;salloc --ntasks=8 --time=10 bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# srun: Submit an interactive job with MPI (Message Passing Interface), often called a &amp;quot;job step.&amp;quot;&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;srun --ntasks=2 --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
#* With MPI: &amp;lt;code&amp;gt;srun --ntasks=2 --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
# sattach: Attach stdin/out/err to an existing job or job step.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Job Examples ===&lt;br /&gt;
* Opening a bash shell: &amp;lt;code&amp;gt;srun --ntasks=56 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Specifying compute nodes: &amp;lt;code&amp;gt;srun --ntasks=56 -p gcohen_2018 --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Using a GPU: &amp;lt;code&amp;gt;salloc --ntasks=8 --time=10 --gres=gpu:4 bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script Examples: ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account          # Account name for billing&lt;br /&gt;
#SBATCH --partition=long              # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Time allotted for the job (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=4                    # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=1             # Number of CPU cores per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU core&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Standard output and error log (%j expands to jobId)&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Separate file for standard error&lt;br /&gt;
&lt;br /&gt;
# Load modules or software if required&lt;br /&gt;
# module load python/3.8&lt;br /&gt;
&lt;br /&gt;
# Print some information about the job&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run your application, this could be anything from a custom script to standard applications&lt;br /&gt;
# ./my_program&lt;br /&gt;
# python my_script.py&lt;br /&gt;
&lt;br /&gt;
# End of script&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Script example with GPU ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account          # Account name for billing&lt;br /&gt;
#SBATCH --partition=long              # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Time allotted for the job (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=4                    # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=1             # Number of CPU cores per task&lt;br /&gt;
#SBATCH --gres=gpu:NUMBER_OF_GPUS     # number of GPUs to use in the job&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU core&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Standard output and error log (%j expands to jobId)&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Separate file for standard error&lt;br /&gt;
&lt;br /&gt;
# Load modules or software if required&lt;br /&gt;
module load python/3.8&lt;br /&gt;
&lt;br /&gt;
# Print some information about the job&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run your application, this could be anything from a custom script to standard applications&lt;br /&gt;
# ./my_program&lt;br /&gt;
# python my_script.py&lt;br /&gt;
&lt;br /&gt;
# End of script&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Error Handling ===&lt;br /&gt;
* On some clusters, specifying resources is necessary. Without them, the job may fail.&lt;br /&gt;
** Example error: &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;&lt;br /&gt;
** Correct usage: &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-yoren /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Be aware that specifying GPU resources (e.g. &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt;) is crucial for GPU jobs.&lt;br /&gt;
&lt;br /&gt;
=== SLURM Information Commands ===&lt;br /&gt;
&lt;br /&gt;
* sinfo: View all queues (partitions).&lt;br /&gt;
* squeue: View all jobs.&lt;br /&gt;
* scontrol show partition: View all partitions.&lt;br /&gt;
* scontrol show job &amp;lt;job_number&amp;gt;: View a job&amp;#039;s attributes.&lt;br /&gt;
&lt;br /&gt;
=== Tips for Managing SLURM Jobs ===&lt;br /&gt;
&lt;br /&gt;
* Chain jobs by using the &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt; flag in &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; for interactive jobs that require specific resources for a limited time.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is versatile for both interactive and batch jobs, especially with MPI.&lt;br /&gt;
* Always specify necessary resources in clusters where defaults are not set.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1441</id>
		<title>Submitting a job to a slurm queue</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Submitting_a_job_to_a_slurm_queue&amp;diff=1441"/>
		<updated>2024-01-17T14:04:54Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM (Simple Linux Utility for Resource Management) is a job scheduler used on many high-performance computing systems. It manages and allocates resources such as compute nodes and controls job execution.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the System ===&lt;br /&gt;
To submit jobs to the SLURM scheduler at Tel Aviv University, you must access the system through one of the designated login nodes. These nodes act as the gateway for submitting and managing your SLURM jobs. The available login nodes are:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;powerslurm-login2.tau.ac.il&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Login Requirements: ====&lt;br /&gt;
&lt;br /&gt;
# Membership in the &amp;quot;power&amp;quot; group: Ensure you are a part of the &amp;quot;power&amp;quot; group which grants the necessary permissions for accessing the HPC resources.&lt;br /&gt;
# University Credentials: Log in using your Tel Aviv University credentials. This ensures secure access and that your job submissions are appropriately accounted for under your user profile.&lt;br /&gt;
&lt;br /&gt;
Remember, these login nodes are the initial point of contact for all your job management tasks, including job submission, monitoring, and other SLURM-related operations.&lt;br /&gt;
&lt;br /&gt;
=== Basic Job Submission Commands ===&lt;br /&gt;
&lt;br /&gt;
# sbatch: Submit a batch job script.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;sbatch --ntasks=1 --time=10 pre_process.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This submits &amp;lt;code&amp;gt;pre_process.bash&amp;lt;/code&amp;gt; with 1 task for 10 minutes.&lt;br /&gt;
#* Example of chaining jobs: &amp;lt;code&amp;gt;sbatch --ntasks=128 --time=60 --depend=45001 do_work.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# salloc: Allocate resources for an interactive job but doesn&amp;#039;t start it immediately.&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;salloc --ntasks=8 --time=10 bash&amp;lt;/code&amp;gt;&lt;br /&gt;
# srun: Submit an interactive job with MPI (Message Passing Interface), often called a &amp;quot;job step.&amp;quot;&lt;br /&gt;
#* Example: &amp;lt;code&amp;gt;srun --ntasks=2 --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
#* With MPI: &amp;lt;code&amp;gt;srun --ntasks=2 --label hostname&amp;lt;/code&amp;gt;&lt;br /&gt;
# sattach: Attach stdin/out/err to an existing job or job step.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Job Examples ===&lt;br /&gt;
&lt;br /&gt;
* Opening a bash shell: &amp;lt;code&amp;gt;srun --ntasks=56 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* Specifying compute nodes: &amp;lt;code&amp;gt;srun --ntasks=56 -p gcohen_2018 --nodelist=&amp;quot;compute-0-12&amp;quot; --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script Example: ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --job-name=my_job             # Job name&lt;br /&gt;
#SBATCH --account=my_account          # Account name for billing&lt;br /&gt;
#SBATCH --partition=long              # Partition name&lt;br /&gt;
#SBATCH --time=02:00:00               # Time allotted for the job (hh:mm:ss)&lt;br /&gt;
#SBATCH --ntasks=4                    # Number of tasks (processes)&lt;br /&gt;
#SBATCH --cpus-per-task=1             # Number of CPU cores per task&lt;br /&gt;
#SBATCH --mem-per-cpu=4G              # Memory per CPU core&lt;br /&gt;
#SBATCH --output=my_job_%j.out        # Standard output and error log (%j expands to jobId)&lt;br /&gt;
#SBATCH --error=my_job_%j.err         # Separate file for standard error&lt;br /&gt;
&lt;br /&gt;
# Load modules or software if required&lt;br /&gt;
# module load python/3.8&lt;br /&gt;
&lt;br /&gt;
# Print some information about the job&lt;br /&gt;
echo &amp;quot;Starting my SLURM job&amp;quot;&lt;br /&gt;
echo &amp;quot;Job ID: $SLURM_JOB_ID&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Allocated CPUs: $SLURM_JOB_CPUS_PER_NODE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run your application, this could be anything from a custom script to standard applications&lt;br /&gt;
# ./my_program&lt;br /&gt;
# python my_script.py&lt;br /&gt;
&lt;br /&gt;
# End of script&lt;br /&gt;
echo &amp;quot;Job completed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Error Handling ===&lt;br /&gt;
&lt;br /&gt;
* On some clusters, specifying resources is necessary. Without them, the job may fail.&lt;br /&gt;
** Example error: &amp;lt;code&amp;gt;srun: error: Unable to allocate resources: No partition specified or system default partition&amp;lt;/code&amp;gt;&lt;br /&gt;
** Correct usage: &amp;lt;code&amp;gt;srun --pty -c 1 --mem=2G -p power-yoren /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SLURM Information Commands ===&lt;br /&gt;
&lt;br /&gt;
* sinfo: View all queues (partitions).&lt;br /&gt;
* squeue: View all jobs.&lt;br /&gt;
* scontrol show partition: View all partitions.&lt;br /&gt;
* scontrol show job &amp;lt;job_number&amp;gt;: View a job&amp;#039;s attributes.&lt;br /&gt;
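&lt;br /&gt;
For example, a quick inspection sequence (the job ID is a placeholder; &amp;quot;long&amp;quot; is the partition used in the script above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sinfo                          # list partitions and node states&lt;br /&gt;
squeue -u $USER                # show only your own jobs&lt;br /&gt;
scontrol show partition long   # details of the &amp;quot;long&amp;quot; partition&lt;br /&gt;
scontrol show job 45001        # full attributes of job 45001&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;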
&lt;br /&gt;
=== Tips for Managing SLURM Jobs ===&lt;br /&gt;
&lt;br /&gt;
* Chain jobs by using the &amp;lt;code&amp;gt;--depend&amp;lt;/code&amp;gt; flag in &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Use &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; for interactive jobs that require specific resources for a limited time.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is versatile for both interactive and batch jobs, especially with MPI.&lt;br /&gt;
* Always specify necessary resources in clusters where defaults are not set.&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1440</id>
		<title>Slurm API</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1440"/>
		<updated>2024-01-11T13:31:46Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This documentation provides comprehensive guidance on interfacing with the SLURM API for job submission within the PowerSlurm cluster. These instructions can be used for submitting jobs that originate from a web interface. &lt;br /&gt;
&lt;br /&gt;
For a detailed understanding of the API&amp;#039;s capabilities and functionality, refer to the official SLURM REST API documentation: https://slurm.schedmd.com/rest_api.html&lt;br /&gt;
= Authentication =&lt;br /&gt;
&lt;br /&gt;
==== Introduction ====&lt;br /&gt;
Secure access to the SLURM REST API is managed through JWT (JSON Web Tokens). This section provides a step-by-step guide on how to obtain a JWT token, which is essential for authenticating and authorizing API requests.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* An API key provided by the High-Performance Computing (HPC) team.&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
* Base URL for the SLURM REST API: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Endpoint for token generation: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Python Example for creating a JWT token ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;your_api_key&amp;#039; with actual values&lt;br /&gt;
# token = get_api_token(&amp;#039;your_username&amp;#039;, &amp;#039;your_api_key&amp;#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manual Token Generation (General Method) ====&lt;br /&gt;
&lt;br /&gt;
# Send a POST request to &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; with your username and API key.&lt;br /&gt;
# The request carries your credentials; the Python example above sends them as form fields in the request body.&lt;br /&gt;
# On success, the server will return a JSON response in the format: &amp;lt;code&amp;gt;{ &amp;quot;SlurmJWT&amp;quot;: &amp;quot;token&amp;quot; }&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Extract the &amp;lt;code&amp;gt;SlurmJWT&amp;lt;/code&amp;gt; value from the response. This is your required JWT token.&lt;br /&gt;
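&lt;br /&gt;
A minimal &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt; sketch of the same request, mirroring the form-encoded fields used in the Python example above (the credentials are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# POST the credentials; a successful response is JSON of the form {&amp;quot;SlurmJWT&amp;quot;: &amp;quot;&amp;lt;token&amp;gt;&amp;quot;}.&lt;br /&gt;
curl -s -X POST \&lt;br /&gt;
  -d &amp;#039;username=your_username&amp;#039; \&lt;br /&gt;
  -d &amp;#039;api_key=your_api_key&amp;#039; \&lt;br /&gt;
  https://slurmtron.tau.ac.il/slurmapi/generate-token/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;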
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Keep your API key and JWT token confidential.&lt;br /&gt;
* Use the JWT token in the header of your API requests for authorized access to the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
= Job Submission to SLURM REST API: =&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
This section covers the process of submitting a job to the SLURM REST API at Tel Aviv University.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* Access to the SLURM REST API.&lt;br /&gt;
* An API key and username, provided by the Tel Aviv University HPC team.&lt;br /&gt;
* A tool or library for making HTTP requests (e.g., &amp;lt;code&amp;gt;requests&amp;lt;/code&amp;gt; in Python).&lt;br /&gt;
&lt;br /&gt;
== Python Example: ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/python3&lt;br /&gt;
&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
# Base URL for authentication and token generation&lt;br /&gt;
base_url_auth = &amp;#039;https://slurmtron.tau.ac.il&amp;#039;&lt;br /&gt;
generate_token_url = f&amp;quot;{base_url_auth}/slurmapi/generate-token/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# User credentials&lt;br /&gt;
current_user = &amp;quot;user&amp;quot;&lt;br /&gt;
api_key = &amp;quot;token&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def get_api_token(url, username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves an API token for SLURM REST API access.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    url (str): The URL endpoint for obtaining the API token.&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    dict: The parsed JSON response containing the &amp;#039;SlurmJWT&amp;#039; token.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    response = requests.post(url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        return response.json()  # Full JSON response; the token is under the &amp;#039;SlurmJWT&amp;#039; key&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Base URL for job submission&lt;br /&gt;
base_url = &amp;quot;https://slurmtron.tau.ac.il/slurmrestd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Job submission URL&lt;br /&gt;
job_url = f&amp;#039;{base_url}/slurm/v0.0.40/job/submit&amp;#039;&lt;br /&gt;
&lt;br /&gt;
# Authorization headers with the obtained token&lt;br /&gt;
headers = {&lt;br /&gt;
    &amp;#039;X-SLURM-USER-NAME&amp;#039;: current_user,&lt;br /&gt;
    &amp;#039;X-SLURM-USER-TOKEN&amp;#039;: get_api_token(generate_token_url, current_user, api_key)[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Job submission request&lt;br /&gt;
jobs_request = requests.post(&lt;br /&gt;
    job_url,&lt;br /&gt;
    headers=headers,&lt;br /&gt;
    json={&lt;br /&gt;
        # Example job script&lt;br /&gt;
        &amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\n\n&amp;quot;&lt;br /&gt;
                  &amp;quot;srun hostname\n&amp;quot;&lt;br /&gt;
                  &amp;quot;echo \&amp;quot;hello world444\&amp;quot;\n&amp;quot;&lt;br /&gt;
                  &amp;quot;sleep 30&amp;quot;,&lt;br /&gt;
        &amp;quot;job&amp;quot;: {&lt;br /&gt;
            &amp;quot;partition&amp;quot;: &amp;quot;&amp;lt;partition_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;tasks&amp;quot;: 1,&lt;br /&gt;
            &amp;quot;name&amp;quot;: &amp;quot;&amp;lt;job_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;account&amp;quot;: &amp;quot;&amp;lt;account_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;,&lt;br /&gt;
            &amp;quot;cpus_per_task&amp;quot;: &amp;lt;cpu_number&amp;gt;,&lt;br /&gt;
            &amp;quot;memory_per_node&amp;quot;: {&lt;br /&gt;
                &amp;quot;number&amp;quot;: &amp;lt;ram_in_MB&amp;gt;,&lt;br /&gt;
                &amp;quot;set&amp;quot;: True,&lt;br /&gt;
                &amp;quot;infinite&amp;quot;: False&lt;br /&gt;
            },&lt;br /&gt;
            # Full path to your error/output file.&lt;br /&gt;
            &amp;quot;standard_output&amp;quot;: &amp;quot;/path/to/your/output.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_error&amp;quot;: &amp;quot;/path/to/your/error.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;,&lt;br /&gt;
            # Environment modules (module load) should not be used directly under the script parameter. Instead, set all necessary environment variables under the environment parameter.&lt;br /&gt;
            &amp;quot;environment&amp;quot;: [&lt;br /&gt;
                &amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;,&lt;br /&gt;
                &amp;quot;LD_LIBRARY_PATH=/lib/:/lib64/:/usr/local/lib&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
        },&lt;br /&gt;
    }&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Processing the job submission result&lt;br /&gt;
jobs_result = jobs_request.json()[&amp;#039;result&amp;#039;]&lt;br /&gt;
for key, value in jobs_result.items():&lt;br /&gt;
    print(key, value)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
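&lt;br /&gt;
For quick testing from a shell, the same submission can be sketched with &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt;, using the endpoint and headers shown in the Python example. The username, token, partition and account values are placeholders:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Send the job description as JSON together with the SLURM auth headers.&lt;br /&gt;
curl -s -X POST \&lt;br /&gt;
  -H &amp;#039;X-SLURM-USER-NAME: your_username&amp;#039; \&lt;br /&gt;
  -H &amp;#039;X-SLURM-USER-TOKEN: your_jwt_token&amp;#039; \&lt;br /&gt;
  -H &amp;#039;Content-Type: application/json&amp;#039; \&lt;br /&gt;
  -d &amp;#039;{&amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\nsrun hostname\n&amp;quot;, &amp;quot;job&amp;quot;: {&amp;quot;partition&amp;quot;: &amp;quot;&amp;lt;partition_name&amp;gt;&amp;quot;, &amp;quot;tasks&amp;quot;: 1, &amp;quot;name&amp;quot;: &amp;quot;curl_test&amp;quot;, &amp;quot;account&amp;quot;: &amp;quot;&amp;lt;account_name&amp;gt;&amp;quot;, &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;, &amp;quot;environment&amp;quot;: [&amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;]}}&amp;#039; \&lt;br /&gt;
  https://slurmtron.tau.ac.il/slurmrestd/slurm/v0.0.40/job/submit&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;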
&lt;br /&gt;
==== Important Notes ====&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter in the job submission request is an example. Customize this script to fit your specific job requirements.&lt;br /&gt;
* Use full and appropriate paths for &amp;lt;code&amp;gt;&amp;quot;standard_output&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;quot;standard_error&amp;quot;&amp;lt;/code&amp;gt;. Replace the placeholders with actual paths where you want the output and error files to be stored.&lt;br /&gt;
* Environment modules (&amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;) should not be used directly under the &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter. Instead, set all necessary environment variables under the &amp;lt;code&amp;gt;environment&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Securely handle your API key and other sensitive information.&lt;br /&gt;
* Regularly review and update your scripts to align with updates in the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;More Examples&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
https://docs.lxp.lu/cloud/slurmrestd/&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1439</id>
		<title>Slurm API</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1439"/>
		<updated>2024-01-09T11:47:28Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This documentation provides comprehensive guidance on interfacing with the SLURM API for job submission within the PowerSlurm cluster. These instructions can be used for submitting jobs that originate from a web interface. &lt;br /&gt;
&lt;br /&gt;
For a detailed understanding of the API&amp;#039;s capabilities and functionality, refer to the official SLURM REST API documentation: https://slurm.schedmd.com/rest_api.html&lt;br /&gt;
= Authentication =&lt;br /&gt;
&lt;br /&gt;
==== Introduction ====&lt;br /&gt;
Secure access to the SLURM REST API is managed through JWT (JSON Web Tokens). This section provides a step-by-step guide on how to obtain a JWT token, which is essential for authenticating and authorizing API requests.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* An API key provided by the High-Performance Computing (HPC) team.&lt;br /&gt;
&lt;br /&gt;
==== Constants ====&lt;br /&gt;
&lt;br /&gt;
* Base URL for the SLURM REST API: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Endpoint for token generation: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Python Example for creating a JWT token ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
def get_api_token(username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves a JWT token for SLURM REST API access for powerslurm cluster.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    str: The API token if the request is successful.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    generate_token_url = &amp;#039;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;#039;&lt;br /&gt;
&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    response = requests.post(generate_token_url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        # Extracting the token from the JSON response&lt;br /&gt;
        return response.json()[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Example usage&lt;br /&gt;
# Replace &amp;#039;your_username&amp;#039; and &amp;#039;your_api_key&amp;#039; with actual values&lt;br /&gt;
# token = get_api_token(&amp;#039;your_username&amp;#039;, &amp;#039;your_api_key&amp;#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Manual Token Generation (General Method) ====&lt;br /&gt;
&lt;br /&gt;
# Send a POST request to &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurmapi/generate-token/&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; with your username and API key.&lt;br /&gt;
# The request carries your credentials; the Python example above sends them as form fields in the request body.&lt;br /&gt;
# On success, the server will return a JSON response in the format: &amp;lt;code&amp;gt;{ &amp;quot;SlurmJWT&amp;quot;: &amp;quot;token&amp;quot; }&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Extract the &amp;lt;code&amp;gt;SlurmJWT&amp;lt;/code&amp;gt; value from the response. This is your required JWT token.&lt;br /&gt;
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Keep your API key and JWT token confidential.&lt;br /&gt;
* Use the JWT token in the header of your API requests for authorized access to the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
= Job Submission to SLURM REST API: =&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
This section covers the process of submitting a job to the SLURM REST API at Tel Aviv University.&lt;br /&gt;
&lt;br /&gt;
==== Prerequisites ====&lt;br /&gt;
&lt;br /&gt;
* Access to the SLURM REST API.&lt;br /&gt;
* An API key and username, provided by the Tel Aviv University HPC team.&lt;br /&gt;
* A tool or library for making HTTP requests (e.g., &amp;lt;code&amp;gt;requests&amp;lt;/code&amp;gt; in Python).&lt;br /&gt;
&lt;br /&gt;
== Python Example: ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/python3&lt;br /&gt;
&lt;br /&gt;
import requests&lt;br /&gt;
&lt;br /&gt;
# Base URL for authentication and token generation&lt;br /&gt;
base_url_auth = &amp;#039;https://slurmtron.tau.ac.il&amp;#039;&lt;br /&gt;
generate_token_url = f&amp;quot;{base_url_auth}/slurmapi/generate-token/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# User credentials&lt;br /&gt;
current_user = &amp;quot;user&amp;quot;&lt;br /&gt;
api_key = &amp;quot;token&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def get_api_token(url, username, api_key):&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    Retrieves an API token for SLURM REST API access.&lt;br /&gt;
&lt;br /&gt;
    Parameters:&lt;br /&gt;
    url (str): The URL endpoint for obtaining the API token.&lt;br /&gt;
    username (str): The username of the user requesting the token.&lt;br /&gt;
    api_key (str): The API key provided by the HPC team.&lt;br /&gt;
&lt;br /&gt;
    Returns:&lt;br /&gt;
    dict: The parsed JSON response containing the &amp;#039;SlurmJWT&amp;#039; token.&lt;br /&gt;
&lt;br /&gt;
    Raises:&lt;br /&gt;
    Exception: If the request fails with a non-200 status code.&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
    payload = {&lt;br /&gt;
        &amp;#039;username&amp;#039;: username,&lt;br /&gt;
        &amp;#039;api_key&amp;#039;: api_key&lt;br /&gt;
    }&lt;br /&gt;
    response = requests.post(url, data=payload)&lt;br /&gt;
&lt;br /&gt;
    if response.status_code == 200:&lt;br /&gt;
        return response.json()  # Full JSON response; the token is under the &amp;#039;SlurmJWT&amp;#039; key&lt;br /&gt;
    else:&lt;br /&gt;
        raise Exception(f&amp;quot;Error: {response.status_code}, {response.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Base URL for job submission&lt;br /&gt;
base_url = &amp;quot;https://slurmtron.tau.ac.il/slurmrestd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Job submission URL&lt;br /&gt;
job_url = f&amp;#039;{base_url}/slurm/v0.0.40/job/submit&amp;#039;&lt;br /&gt;
&lt;br /&gt;
# Authorization headers with the obtained token&lt;br /&gt;
headers = {&lt;br /&gt;
    &amp;#039;X-SLURM-USER-NAME&amp;#039;: current_user,&lt;br /&gt;
    &amp;#039;X-SLURM-USER-TOKEN&amp;#039;: get_api_token(generate_token_url, current_user, api_key)[&amp;#039;SlurmJWT&amp;#039;]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Job submission request&lt;br /&gt;
jobs_request = requests.post(&lt;br /&gt;
    job_url,&lt;br /&gt;
    headers=headers,&lt;br /&gt;
    json={&lt;br /&gt;
        # Example job script&lt;br /&gt;
        &amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\n\n&amp;quot;&lt;br /&gt;
                  &amp;quot;srun hostname\n&amp;quot;&lt;br /&gt;
                  &amp;quot;echo \&amp;quot;hello world444\&amp;quot;\n&amp;quot;&lt;br /&gt;
                  &amp;quot;sleep 30&amp;quot;,&lt;br /&gt;
        &amp;quot;job&amp;quot;: {&lt;br /&gt;
            &amp;quot;partition&amp;quot;: &amp;quot;&amp;lt;partition_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;tasks&amp;quot;: 1,&lt;br /&gt;
            &amp;quot;name&amp;quot;: &amp;quot;&amp;lt;job_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;account&amp;quot;: &amp;quot;&amp;lt;account_name&amp;gt;&amp;quot;,&lt;br /&gt;
            &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;,&lt;br /&gt;
            &amp;quot;cpus_per_task&amp;quot;: &amp;lt;cpu_number&amp;gt;,&lt;br /&gt;
            &amp;quot;memory_per_node&amp;quot;: {&lt;br /&gt;
                &amp;quot;number&amp;quot;: &amp;lt;ram_in_MB&amp;gt;,&lt;br /&gt;
                &amp;quot;set&amp;quot;: True,&lt;br /&gt;
                &amp;quot;infinite&amp;quot;: False&lt;br /&gt;
            },&lt;br /&gt;
            # Full path to your error/output file.&lt;br /&gt;
            &amp;quot;standard_output&amp;quot;: &amp;quot;/path/to/your/output.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_error&amp;quot;: &amp;quot;/path/to/your/error.txt&amp;quot;,&lt;br /&gt;
            &amp;quot;current_working_directory&amp;quot;: &amp;quot;/tmp/&amp;quot;,&lt;br /&gt;
            # Environment modules (module load) should not be used directly under the script parameter. Instead, set all necessary environment variables under the environment parameter.&lt;br /&gt;
            &amp;quot;environment&amp;quot;: [&lt;br /&gt;
                &amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;,&lt;br /&gt;
                &amp;quot;LD_LIBRARY_PATH=/lib/:/lib64/:/usr/local/lib&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
        },&lt;br /&gt;
    }&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# Processing the job submission result&lt;br /&gt;
jobs_result = jobs_request.json()[&amp;#039;result&amp;#039;]&lt;br /&gt;
for key, value in jobs_result.items():&lt;br /&gt;
    print(key, value)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Important Notes ====&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter in the job submission request is an example. Customize this script to fit your specific job requirements.&lt;br /&gt;
* Use full and appropriate paths for &amp;lt;code&amp;gt;&amp;quot;standard_output&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;&amp;quot;standard_error&amp;quot;&amp;lt;/code&amp;gt;. Replace the placeholders with actual paths where you want the output and error files to be stored.&lt;br /&gt;
* Environment modules (&amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;) should not be used directly under the &amp;lt;code&amp;gt;script&amp;lt;/code&amp;gt; parameter. Instead, set all necessary environment variables under the &amp;lt;code&amp;gt;environment&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
==== Security and Best Practices ====&lt;br /&gt;
&lt;br /&gt;
* Securely handle your API key and other sensitive information.&lt;br /&gt;
* Regularly review and update your scripts to align with updates in the SLURM REST API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;More Examples&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
https://docs.lxp.lu/cloud/slurmrestd/&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1431</id>
		<title>Slurm API</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Slurm_API&amp;diff=1431"/>
		<updated>2023-11-30T09:24:06Z</updated>

		<summary type="html">&lt;p&gt;Levk: Created page with &amp;quot;This page describes how to connect and use slurm API in order to submit a job in powerslurm cluster, Including a job originating from a web site  Official Documentation: https...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes how to connect to and use the Slurm API to submit a job on the powerslurm cluster, including a job originating from a web site.&lt;br /&gt;
&lt;br /&gt;
Official Documentation: https://slurm.schedmd.com/rest_api.html&lt;br /&gt;
= Authentication =&lt;br /&gt;
In order to authenticate against the API, you need to use your TAU username and a JWT token.&lt;br /&gt;
&lt;br /&gt;
Tokens can only be created on Login nodes, such as powerslurm-login.&lt;br /&gt;
&lt;br /&gt;
Python example for token creation:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3&lt;br /&gt;
&lt;br /&gt;
from jwt import JWT&lt;br /&gt;
from jwt.jwk import jwk_from_dict&lt;br /&gt;
from jwt.utils import b64encode&lt;br /&gt;
import time&lt;br /&gt;
import getpass&lt;br /&gt;
&lt;br /&gt;
def generate_jwt_token(expiration_time=60):&lt;br /&gt;
    # Get the currently logged in user&lt;br /&gt;
    current_user = getpass.getuser()&lt;br /&gt;
    with open(&amp;quot;/var/spool/slurm/statesave/jwt_hs256.key&amp;quot;, &amp;quot;rb&amp;quot;) as f:&lt;br /&gt;
        priv_key = f.read()&lt;br /&gt;
    signing_key = jwk_from_dict({&lt;br /&gt;
        &amp;#039;kty&amp;#039;: &amp;#039;oct&amp;#039;,&lt;br /&gt;
        &amp;#039;k&amp;#039;: b64encode(priv_key)&lt;br /&gt;
    })&lt;br /&gt;
    message = {&lt;br /&gt;
        &amp;quot;exp&amp;quot;: int(time.time() + expiration_time),&lt;br /&gt;
        &amp;quot;iat&amp;quot;: int(time.time()),&lt;br /&gt;
        &amp;quot;sun&amp;quot;: current_user&lt;br /&gt;
    }&lt;br /&gt;
    jwt_instance = JWT()&lt;br /&gt;
    compact_jws = jwt_instance.encode(message, signing_key, alg=&amp;#039;HS256&amp;#039;)&lt;br /&gt;
    return compact_jws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Example output:&lt;br /&gt;
print(generate_jwt_token())&lt;br /&gt;
# eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOiAxNjk5NTMyNTkwLCAiaWF0IjogMTY5OTUzMjUzMCwgInN1biI6ICJsZXZrIn0.YCxJohapkovR16TQ75DsO3G9ODcisoSeOVbAYwA4Q7E&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
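&lt;br /&gt;
If the cluster exposes Slurm&amp;#039;s built-in auth/jwt support to users (an assumption; check with the HPC team), a token can also be minted directly on a login node:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Prints a line of the form SLURM_JWT=&amp;lt;token&amp;gt;; lifespan is in seconds.&lt;br /&gt;
scontrol token lifespan=3600&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;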
&lt;br /&gt;
= Submit Job: =&lt;br /&gt;
The base url for the API is: &amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Job submissions url is: &amp;lt;nowiki&amp;gt;https://slurmtron.tau.ac.il/slurm/v0.0.39/job/submit&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Python Example: ==&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python3&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/python3&lt;br /&gt;
&lt;br /&gt;
import requests&lt;br /&gt;
from jwt import JWT&lt;br /&gt;
from jwt.jwk import jwk_from_dict&lt;br /&gt;
from jwt.utils import b64encode&lt;br /&gt;
import time&lt;br /&gt;
import getpass&lt;br /&gt;
&lt;br /&gt;
current_user = getpass.getuser()&lt;br /&gt;
def generate_jwt_token(expiration_time=60):&lt;br /&gt;
    with open(&amp;quot;/var/spool/slurm/statesave/jwt_hs256.key&amp;quot;, &amp;quot;rb&amp;quot;) as f:&lt;br /&gt;
        priv_key = f.read()&lt;br /&gt;
    signing_key = jwk_from_dict({&lt;br /&gt;
        &amp;#039;kty&amp;#039;: &amp;#039;oct&amp;#039;,&lt;br /&gt;
        &amp;#039;k&amp;#039;: b64encode(priv_key)&lt;br /&gt;
    })&lt;br /&gt;
    message = {&lt;br /&gt;
        &amp;quot;exp&amp;quot;: int(time.time() + expiration_time),&lt;br /&gt;
        &amp;quot;iat&amp;quot;: int(time.time()),&lt;br /&gt;
        &amp;quot;sun&amp;quot;: current_user&lt;br /&gt;
    }&lt;br /&gt;
    jwt_instance = JWT()&lt;br /&gt;
    compact_jws = jwt_instance.encode(message, signing_key, alg=&amp;#039;HS256&amp;#039;)&lt;br /&gt;
    return compact_jws&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# api url&lt;br /&gt;
base_url = &amp;quot;https://slurmtron.tau.ac.il&amp;quot;&lt;br /&gt;
# auth token&lt;br /&gt;
jwt_token = generate_jwt_token()&lt;br /&gt;
&lt;br /&gt;
# job submission url&lt;br /&gt;
job_url = f&amp;#039;{base_url}/slurm/v0.0.39/job/submit&amp;#039;&lt;br /&gt;
# Auth Headers&lt;br /&gt;
headers = {&lt;br /&gt;
    &amp;#039;X-SLURM-USER-NAME&amp;#039;: current_user,&lt;br /&gt;
    &amp;#039;X-SLURM-USER-TOKEN&amp;#039;: jwt_token&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# the job request&lt;br /&gt;
jobs_request = requests.post(&lt;br /&gt;
    job_url,&lt;br /&gt;
    headers=headers,&lt;br /&gt;
    json={&lt;br /&gt;
        &amp;quot;script&amp;quot;: &amp;quot;#!/bin/bash\n\n&amp;quot;&lt;br /&gt;
                  &amp;quot;srun hostname\n&amp;quot;&lt;br /&gt;
                  &amp;quot;echo \&amp;quot;hello world444\&amp;quot;\n&amp;quot;&lt;br /&gt;
                  &amp;quot;sleep 30&amp;quot;,&lt;br /&gt;
        &amp;quot;job&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
            &amp;quot;partition&amp;quot;: &amp;quot;Partition_Name&amp;quot;,&lt;br /&gt;
            &amp;quot;tasks&amp;quot;: 1,&lt;br /&gt;
            &amp;quot;name&amp;quot;: &amp;quot;test&amp;quot;,&lt;br /&gt;
            &amp;quot;account&amp;quot;: &amp;quot;Slurm_Account_Name&amp;quot;,&lt;br /&gt;
            &amp;quot;nodes&amp;quot;: &amp;quot;1&amp;quot;,&lt;br /&gt;
			# How many CPU cores you need per task&lt;br /&gt;
            &amp;quot;cpus_per_task&amp;quot;: 2,&lt;br /&gt;
			# Memory per node, in MB. With &amp;quot;set&amp;quot;: False and &amp;quot;infinite&amp;quot;: True below, the number is ignored and memory is unlimited; use &amp;quot;set&amp;quot;: True, &amp;quot;infinite&amp;quot;: False to enforce the 2048 MB limit.&lt;br /&gt;
            &amp;quot;memory_per_node&amp;quot;: {&lt;br /&gt;
                &amp;quot;number&amp;quot;: 2048,&lt;br /&gt;
                &amp;quot;set&amp;quot;: False,&lt;br /&gt;
                &amp;quot;infinite&amp;quot;: True&lt;br /&gt;
              },&lt;br /&gt;
			# List of nodes where the job must be allocated (uncomment the below 3 lines, and specify node name)&lt;br /&gt;
            # &amp;quot;required_nodes&amp;quot;: [  &lt;br /&gt;
            #     &amp;quot;Node_Name&amp;quot;&lt;br /&gt;
            # ],&lt;br /&gt;
            &amp;quot;standard_input&amp;quot;: &amp;quot;/dev/null&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_output&amp;quot;: &amp;quot;FULL_PATH_TO_OUTPUT_FILE&amp;quot;,&lt;br /&gt;
            &amp;quot;standard_error&amp;quot;: &amp;quot;FULL_PATH_TO_ERROR_FILE&amp;quot;,&lt;br /&gt;
            &amp;quot;environment&amp;quot;: [&lt;br /&gt;
                &amp;quot;PATH=/bin:/usr/bin/:/usr/local/bin/&amp;quot;,&lt;br /&gt;
                &amp;quot;LD_LIBRARY_PATH=/lib/:/lib64/:/usr/local/lib&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
        },&lt;br /&gt;
&lt;br /&gt;
    })&lt;br /&gt;
jobs_result = jobs_request.json()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;More Examples&amp;#039;&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
https://docs.lxp.lu/cloud/slurmrestd/&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1424</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1424"/>
		<updated>2023-09-03T14:20:31Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==== &amp;#039;&amp;#039;&amp;#039;Alphafold&amp;#039;&amp;#039;&amp;#039; ====&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
==== &amp;#039;&amp;#039;&amp;#039;Databases:&amp;#039;&amp;#039;&amp;#039; ====&lt;br /&gt;
The databases are mounted on GPU nodes at /alphafold_storage/alphafold_db.&lt;br /&gt;
&lt;br /&gt;
===== Usage: =====&lt;br /&gt;
Use the run_alphafold.sh script located at /powerapps/share/centos7/alphafold/alphafold-2.3.1/run_alphafold.sh.&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
 &amp;lt;code&amp;gt;Required Parameters:&lt;br /&gt;
 -d &amp;lt;data_dir&amp;gt;         Path to directory of supporting data&lt;br /&gt;
 -o &amp;lt;output_dir&amp;gt;       Path to a directory that will store the results.&lt;br /&gt;
 -f &amp;lt;fasta_paths&amp;gt;      Path to FASTA files containing sequences. If a FASTA file contains multiple sequences, then it will be folded as a multimer. To fold more sequences one after another, write the files separated by a comma&lt;br /&gt;
 -t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format - i.e. YYYY-MM-DD). Important if folding historical test sets&lt;br /&gt;
 Optional Parameters:&lt;br /&gt;
 -g &amp;lt;use_gpu&amp;gt;          Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
 -r &amp;lt;run_relax&amp;gt;        Whether to run the final relaxation step on the predicted models. Turning relax off might result in predictions with distracting stereochemical violations but might help in case you are having issues with the relaxation stage (default: true)&lt;br /&gt;
 -e &amp;lt;enable_gpu_relax&amp;gt; Run relax on GPU if GPU is enabled (default: true)&lt;br /&gt;
 -n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
 -a &amp;lt;gpu_devices&amp;gt;      Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
 -m &amp;lt;model_preset&amp;gt;     Choose preset model configuration - the monomer model, the monomer model with extra ensembling, monomer model with pTM head, or multimer model (default: &amp;#039;monomer&amp;#039;)&lt;br /&gt;
 -c &amp;lt;db_preset&amp;gt;        Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: &amp;#039;full_dbs&amp;#039;)&lt;br /&gt;
 -p &amp;lt;use_precomputed_msas&amp;gt; Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed (default: &amp;#039;false&amp;#039;)&lt;br /&gt;
 -l &amp;lt;num_multimer_predictions_per_model&amp;gt; How many predictions (each with a different random seed) will be generated per model. E.g. if this is 2 and there are 5 models then there will be 10 predictions per input. Note: this FLAG only applies if model_preset=multimer (default: 5)&lt;br /&gt;
 -b &amp;lt;benchmark&amp;gt;        Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: &amp;#039;false&amp;#039;)&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Sample Qsub Script: =====&lt;br /&gt;
Create a folder for the output in your home directory (&amp;lt;code&amp;gt;mkdir ~/alphafold_output&amp;lt;/code&amp;gt;), then run the script.&lt;br /&gt;
&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository for example output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;https://github.com/kalininalab/alphafold_non_docker&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; point the -f flag to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;#039;&amp;#039;&amp;#039;-a $CUDA_VISIBLE_DEVICES&amp;#039;&amp;#039;&amp;#039; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
* $ALPHAFOLD_SCRIPT_PATH = /powerapps/share/centos7/alphafold/alphafold-2.3.1/&lt;br /&gt;
* $ALPHAFOLD_DB_PATH = /alphafold_storage/alphafold_db&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
##choose any gpu queue: gpu/gpu2&lt;br /&gt;
#PBS -q gpu2&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.3.1&lt;br /&gt;
&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash $ALPHAFOLD_SCRIPT_PATH/run_alphafold.sh -d $ALPHAFOLD_DB_PATH -o ~/output_dir -f $ALPHAFOLD_SCRIPT_PATH/examples/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Containers_cheat_sheet&amp;diff=1417</id>
		<title>Containers cheat sheet</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Containers_cheat_sheet&amp;diff=1417"/>
		<updated>2023-05-14T11:38:22Z</updated>

		<summary type="html">&lt;p&gt;Levk: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|  class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! align=&amp;quot;center&amp;quot; style=&amp;quot;background:#f0f0f0;&amp;quot;| &amp;#039;&amp;#039;&amp;#039;action&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
! align=&amp;quot;center&amp;quot; style=&amp;quot;background:#f0f0f0;&amp;quot;| &amp;#039;&amp;#039;&amp;#039;docker&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
! align=&amp;quot;center&amp;quot; style=&amp;quot;background:#f0f0f0;&amp;quot;| &amp;#039;&amp;#039;&amp;#039;singularity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|- align=&amp;quot;center &amp;quot;&lt;br /&gt;
 | pull existing image from docker hub || &amp;lt;pre&amp;gt;docker pull ubuntu:18.04&amp;lt;/pre&amp;gt; || &amp;lt;pre&amp;gt;sudo singularity build ubuntu_18.04.simg docker://ubuntu:18.04&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | login to container shell || &amp;lt;pre&amp;gt;docker run -it ubuntu:18.04 bash&amp;lt;/pre&amp;gt; || &amp;lt;pre&amp;gt;singularity shell ubuntu_18.04.simg&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | update image NOT using a file || Login to container shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;docker run -it ubuntu:18.04 bash&lt;br /&gt;
apt update&lt;br /&gt;
apt install vim&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
Commit the changes while the container is down:&lt;br /&gt;
Locate Container ID: &amp;lt;pre&amp;gt;docker ps -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Commit the changes to a new image tag: &amp;lt;pre&amp;gt;docker commit 19b0acbfef1b ubuntu:18.04_tau&amp;lt;/pre&amp;gt; &lt;br /&gt;
|| &lt;br /&gt;
Create writable image:&amp;lt;pre&amp;gt;sudo singularity build --sandbox ubuntu_18.04_tau.simg docker://ubuntu:18.04&amp;lt;/pre&amp;gt;&lt;br /&gt;
Login to image shell in a writable mode: &amp;lt;pre&amp;gt;sudo singularity shell --writable ubuntu_18.04_tau.simg&lt;br /&gt;
apt update&lt;br /&gt;
apt install vim&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | create new image || 1. Edit Dockerfile (file name must be Dockerfile):&lt;br /&gt;
&amp;lt;pre&amp;gt;#Download base image&lt;br /&gt;
FROM ubuntu:18.04&lt;br /&gt;
# Update vim&lt;br /&gt;
RUN apt update &amp;amp;&amp;amp; apt install -y vim&lt;br /&gt;
# Define environment variable&lt;br /&gt;
ENV TEAM HPC&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Build image: &lt;br /&gt;
&amp;lt;pre&amp;gt; docker build -t ubuntu:18.04_tau2 .&amp;lt;/pre&amp;gt; &lt;br /&gt;
|| &lt;br /&gt;
1. Edit Singularity file (file name can be anything):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#Download base image&lt;br /&gt;
Bootstrap: docker&lt;br /&gt;
FROM: ubuntu:18.04&lt;br /&gt;
# Update vim&lt;br /&gt;
%post&lt;br /&gt;
  apt-get update&lt;br /&gt;
  apt-get install -y vim&lt;br /&gt;
%environment&lt;br /&gt;
  TEAM=HPC&lt;br /&gt;
  export TEAM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Build image:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo singularity build ubuntu_18.04_tau2.simg Singfile&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | run as a daemon || Run image as daemon using infinite sleep:&lt;br /&gt;
&amp;lt;pre&amp;gt;docker run -d ubuntu:18.04_tau2 sleep 999999999&amp;lt;/pre&amp;gt;&lt;br /&gt;
 ||  &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | list running containers || &amp;lt;pre&amp;gt;docker ps&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | list running and stopped containers || &amp;lt;pre&amp;gt;docker ps -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | list local images || &amp;lt;pre&amp;gt;docker images&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &amp;lt;pre&amp;gt;ls&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Start a downed container (note it must have already been executed at least once) || &amp;lt;pre&amp;gt;docker start c4f7efb53d1a&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Stop a container || &amp;lt;pre&amp;gt;docker stop c4f7efb53d1a&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Delete a container || &amp;lt;pre&amp;gt;docker rm 3271c613e00a&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Delete an image || &amp;lt;pre&amp;gt;docker rmi 9f38484d220f&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &amp;lt;pre&amp;gt;rm ubuntu_18.04_tau2.simg&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | view container logs || &amp;lt;pre&amp;gt;docker logs f28eb1af539d&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | view disk usage || &amp;lt;pre&amp;gt;docker system df&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | view full details about the docker service || &amp;lt;pre&amp;gt;docker info&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | view full details about a container || &amp;lt;pre&amp;gt;docker inspect containerid&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | start shell inside daemonized container || &lt;br /&gt;
&amp;lt;pre&amp;gt;docker exec -it f28eb1af539d bash&lt;br /&gt;
To exit: &amp;lt;CTL&amp;gt;+p, &amp;lt;CTL&amp;gt;+q (typing &amp;#039;exit&amp;#039; will kill the container)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || &lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Run program from within the image without parameters || &amp;lt;pre&amp;gt;docker run -it ubuntu:18.04_tau4 python&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || Add to the Singularity file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%apprun python&lt;br /&gt;
  exec python &amp;quot;${@}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build the singularity image:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo singularity build ubuntu_18.04_tau4.simg Singfile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Run the program:&lt;br /&gt;
&amp;lt;pre&amp;gt;singularity run --app python ubuntu_18.04_tau4.simg&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Run program from within the image with parameters || &amp;lt;pre&amp;gt;docker run -it ubuntu:18.04_tau4 python -c &amp;quot;print(&amp;#039;hello world&amp;#039;)&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
 || Add to the Singularity file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%apprun python&lt;br /&gt;
  exec python &amp;quot;${@}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build the singularity image:&amp;lt;pre&amp;gt;sudo singularity build ubuntu_18.04_tau4.simg Singfile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Run the program: &amp;lt;pre&amp;gt;singularity run --app python ubuntu_18.04_tau4.simg -c &amp;quot;print(&amp;#039;hello world&amp;#039;)&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
|- align=&amp;quot;center&amp;quot;&lt;br /&gt;
 | Add help to image ||  || Add to the Singularity file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%help&lt;br /&gt;
  Python version 2.7.15rc1&lt;br /&gt;
 &lt;br /&gt;
  Usage&lt;br /&gt;
  =====&lt;br /&gt;
  View help: &lt;br /&gt;
    $ singularity help ubuntu_18.04_tau4.simg &lt;br /&gt;
  Run Python from the container: &lt;br /&gt;
    $ singularity run --app python ubuntu_18.04_tau4.simg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Build the singularity image: &amp;lt;pre&amp;gt;sudo singularity build ubuntu_18.04_tau4.simg Singfile&amp;lt;/pre&amp;gt;&lt;br /&gt;
View the help: &amp;lt;pre&amp;gt;singularity help ubuntu_18.04_tau4.simg&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
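&lt;br /&gt;
A common task the table does not cover is mounting host directories into a container. A minimal sketch, assuming Docker&amp;#039;s -v flag and Singularity&amp;#039;s --bind flag with the base image pulled above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Docker: mount host /data at /mnt inside the container&lt;br /&gt;
docker run -it -v /data:/mnt ubuntu:18.04 bash&lt;br /&gt;
&lt;br /&gt;
# Singularity: the equivalent bind mount&lt;br /&gt;
singularity shell --bind /data:/mnt ubuntu_18.04.simg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;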
&lt;br /&gt;
[[Category:Servers:Linux operational tasks]]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1401</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1401"/>
		<updated>2022-12-06T12:55:34Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Alphafold ===&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to use===&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;b&amp;gt;run_alphafold.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker (on compute-0-300).&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: run_alphafold.sh &amp;lt;OPTIONS&amp;gt;&lt;br /&gt;
Required Parameters:&lt;br /&gt;
-d &amp;lt;data_dir&amp;gt;     Path to directory with supporting data: AlphaFold parameters and genetic and template databases. Set to the target of download_all_databases.sh.&lt;br /&gt;
-o &amp;lt;output_dir&amp;gt;   Path to a directory that will store the results.&lt;br /&gt;
-f &amp;lt;fasta_path&amp;gt;   Path to a FASTA file containing a single sequence.&lt;br /&gt;
-t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important if folding historical test sets.&lt;br /&gt;
Optional Parameters:&lt;br /&gt;
-n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
-b &amp;lt;benchmark&amp;gt;    Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: false)&lt;br /&gt;
-g &amp;lt;use_gpu&amp;gt;      Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
-a &amp;lt;gpu_devices&amp;gt;  Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
-m &amp;lt;model_preset&amp;gt;  Choose preset model configuration - the monomer model (monomer), the monomer model with extra ensembling (monomer_casp14), monomer model with pTM head (monomer_ptm), or multimer model (multimer) (default: monomer)&lt;br /&gt;
-p &amp;lt;db_preset&amp;gt;       Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: full_dbs)&lt;br /&gt;
-u &amp;lt;use_precomputed_msas&amp;gt;       Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed. (default: false)&lt;br /&gt;
-r &amp;lt;remove_msas_after_use&amp;gt;       Whether, after structure prediction(s), to delete MSAs that have been written to disk to significantly free up storage space. (default: false)&lt;br /&gt;
-i &amp;lt;is_prokaryote&amp;gt;   Optional for multimer system, not used by the single chain system. This should contain a boolean specifying true where the target complex is from a prokaryote, and false where it is not, or where the origin is unknown. These values determine the pairing method for the MSA (default: false)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Databases ====&lt;br /&gt;
We downloaded the databases to /home/alphafold_folder/alphafold_data on compute-0-300.&lt;br /&gt;
You may use them there, or copy them to your own storage and point to them with the -d flag of the run script.&lt;br /&gt;
Alternatively, you may download the databases to your own storage via the &amp;lt;b&amp;gt;download_all_data.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker/scripts/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory (mkdir ~/alphafold_output), then run the script.&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository for example output:&lt;br /&gt;
https://github.com/kalininalab/alphafold_non_docker&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; point the -f flag to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;lt;b&amp;gt;-a $CUDA_VISIBLE_DEVICES&amp;lt;/b&amp;gt; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load miniconda&lt;br /&gt;
module load miniconda/miniconda3-4.7.12-environmentally&lt;br /&gt;
# activate relevant venv&lt;br /&gt;
conda activate /powerapps/share/centos7/miniconda/miniconda3-4.7.12-environmentally/envs/alphafold_non_docker&lt;br /&gt;
# run alphafold&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash run_alphafold.sh -d /home/alphafold_folder/alphafold_data -o ~/output_dir -f /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script 2 ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
create folder for output in your home dir&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.2.0&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
run_alphafold.sh -d /home/alphafold_folder/alphafold_data -o ~/output_dir -f /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/amorehead/alphafold_non_docker Alphafold - non_docker source]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1400</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1400"/>
		<updated>2022-12-06T12:54:16Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Sample qsub script 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Alphafold ===&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to use===&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;b&amp;gt;run_alphafold.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker (on compute-0-300).&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: run_alphafold.sh &amp;lt;OPTIONS&amp;gt;&lt;br /&gt;
Required Parameters:&lt;br /&gt;
-d &amp;lt;data_dir&amp;gt;     Path to directory with supporting data: AlphaFold parameters and genetic and template databases. Set to the target of download_all_databases.sh.&lt;br /&gt;
-o &amp;lt;output_dir&amp;gt;   Path to a directory that will store the results.&lt;br /&gt;
-f &amp;lt;fasta_path&amp;gt;   Path to a FASTA file containing a single sequence.&lt;br /&gt;
-t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important if folding historical test sets.&lt;br /&gt;
Optional Parameters:&lt;br /&gt;
-n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
-b &amp;lt;benchmark&amp;gt;    Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: false)&lt;br /&gt;
-g &amp;lt;use_gpu&amp;gt;      Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
-a &amp;lt;gpu_devices&amp;gt;  Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
-m &amp;lt;model_preset&amp;gt;  Choose preset model configuration - the monomer model (monomer), the monomer model with extra ensembling (monomer_casp14), monomer model with pTM head (monomer_ptm), or multimer model (multimer) (default: monomer)&lt;br /&gt;
-p &amp;lt;db_preset&amp;gt;       Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: full_dbs)&lt;br /&gt;
-u &amp;lt;use_precomputed_msas&amp;gt;       Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed. (default: false)&lt;br /&gt;
-r &amp;lt;remove_msas_after_use&amp;gt;       Whether, after structure prediction(s), to delete MSAs that have been written to disk to significantly free up storage space. (default: false)&lt;br /&gt;
-i &amp;lt;is_prokaryote&amp;gt;   Optional for multimer system, not used by the single chain system. This should contain a boolean specifying true where the target complex is from a prokaryote, and false where it is not, or where the origin is unknown. These values determine the pairing method for the MSA (default: false)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Databases ====&lt;br /&gt;
We downloaded the databases to /home/alphafold_folder/alphafold_data on compute-0-300.&lt;br /&gt;
You may use them there, or copy them to your own storage and point to them with the -d flag of the run script.&lt;br /&gt;
Alternatively, you may download the databases to your own storage via the &amp;lt;b&amp;gt;download_all_data.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker/scripts/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory (mkdir ~/alphafold_output), then run the script.&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository for example output:&lt;br /&gt;
https://github.com/kalininalab/alphafold_non_docker&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; point the -f flag to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;lt;b&amp;gt;-a $CUDA_VISIBLE_DEVICES&amp;lt;/b&amp;gt; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load miniconda&lt;br /&gt;
module load miniconda/miniconda3-4.7.12-environmentally&lt;br /&gt;
# activate relevant venv&lt;br /&gt;
conda activate /powerapps/share/centos7/miniconda/miniconda3-4.7.12-environmentally/envs/alphafold_non_docker&lt;br /&gt;
# run alphafold&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash run_alphafold.sh -d /home/alphafold_folder/alphafold_data -o ~/output_dir -f /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
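Assuming you saved the script above as, for example, alphafold_job.sh (a hypothetical filename), submit and monitor it with the standard PBS commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# submit the job to the gpu queue declared in the #PBS directives&lt;br /&gt;
qsub alphafold_job.sh&lt;br /&gt;
# check the status of your jobs&lt;br /&gt;
qstat -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;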
&lt;br /&gt;
==== Sample qsub script 2 ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory:&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.2.0&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
run_alphafold.sh -d PATH_TO_DATABASE_FOLDER -o ~/output_dir -f PATH_TO_FASTA_FILE -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/amorehead/alphafold_non_docker Alphafold - non_docker source]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1399</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1399"/>
		<updated>2022-12-06T12:53:02Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Alphafold ===&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to use===&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;b&amp;gt;run_alphafold.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker (on compute-0-300).&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: run_alphafold.sh &amp;lt;OPTIONS&amp;gt;&lt;br /&gt;
Required Parameters:&lt;br /&gt;
-d &amp;lt;data_dir&amp;gt;     Path to directory with supporting data: AlphaFold parameters and genetic and template databases. Set to the target of download_all_databases.sh.&lt;br /&gt;
-o &amp;lt;output_dir&amp;gt;   Path to a directory that will store the results.&lt;br /&gt;
-f &amp;lt;fasta_path&amp;gt;   Path to a FASTA file containing a single sequence.&lt;br /&gt;
-t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important if folding historical test sets.&lt;br /&gt;
Optional Parameters:&lt;br /&gt;
-n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
-b &amp;lt;benchmark&amp;gt;    Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: false)&lt;br /&gt;
-g &amp;lt;use_gpu&amp;gt;      Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
-a &amp;lt;gpu_devices&amp;gt;  Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
-m &amp;lt;model_preset&amp;gt;  Choose preset model configuration - the monomer model (monomer), the monomer model with extra ensembling (monomer_casp14), monomer model with pTM head (monomer_ptm), or multimer model (multimer) (default: monomer)&lt;br /&gt;
-p &amp;lt;db_preset&amp;gt;       Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: full_dbs)&lt;br /&gt;
-u &amp;lt;use_precomputed_msas&amp;gt;       Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed. (default: false)&lt;br /&gt;
-r &amp;lt;remove_msas_after_use&amp;gt;       Whether, after structure prediction(s), to delete MSAs that have been written to disk to significantly free up storage space. (default: false)&lt;br /&gt;
-i &amp;lt;is_prokaryote&amp;gt;   Optional for multimer system, not used by the single chain system. This should contain a boolean specifying true where the target complex is from a prokaryote, and false where it is not, or where the origin is unknown. These values determine the pairing method for the MSA (default: false)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Databases ====&lt;br /&gt;
We downloaded the databases to /home/alphafold_folder/alphafold_data on compute-0-300.&lt;br /&gt;
You may use them in place, or copy them to your own storage and point to them with the -d flag of the run script.&lt;br /&gt;
Alternatively, you may download the databases to your own storage via the script &amp;lt;b&amp;gt;download_all_data.sh&amp;lt;/b&amp;gt; located at /home/alphafold_folder/alphafold_multimer_non_docker/scripts/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory:&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script.&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository to use for the output:&lt;br /&gt;
https://github.com/kalininalab/alphafold_non_docker&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; please point to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;lt;b&amp;gt;-a $CUDA_VISIBLE_DEVICES&amp;lt;/b&amp;gt; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load miniconda&lt;br /&gt;
module load miniconda/miniconda3-4.7.12-environmentally&lt;br /&gt;
# activate relevant venv&lt;br /&gt;
conda activate /powerapps/share/centos7/miniconda/miniconda3-4.7.12-environmentally/envs/alphafold_non_docker&lt;br /&gt;
# run alphafold&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash run_alphafold.sh -d /home/alphafold_folder/alphafold_data -o ~/output_dir -f /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script 2 ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory:&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.2.0&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
run_alphafold.sh -d PATH_TO_DATABASE_FOLDER -o ~/output_dir -f PATH_TO_FASTA_FILE -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/amorehead/alphafold_non_docker Alphafold - non_docker source]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1398</id>
		<title>Alphafold</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Alphafold&amp;diff=1398"/>
		<updated>2022-12-06T12:52:38Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Alphafold ===&lt;br /&gt;
AlphaFold is an artificial intelligence (AI) program developed by Alphabet&amp;#039;s/Google&amp;#039;s DeepMind that predicts protein structures.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to use===&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;b&amp;gt;run_alphafold.sh&amp;lt;/b&amp;gt; script located at /home/alphafold_folder/alphafold_multimer_non_docker (on compute-0-300).&lt;br /&gt;
&lt;br /&gt;
Script reference:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: run_alphafold.sh &amp;lt;OPTIONS&amp;gt;&lt;br /&gt;
Required Parameters:&lt;br /&gt;
-d &amp;lt;data_dir&amp;gt;     Path to directory with supporting data: AlphaFold parameters and genetic and template databases. Set to the target of download_all_databases.sh.&lt;br /&gt;
-o &amp;lt;output_dir&amp;gt;   Path to a directory that will store the results.&lt;br /&gt;
-f &amp;lt;fasta_path&amp;gt;   Path to a FASTA file containing a single sequence.&lt;br /&gt;
-t &amp;lt;max_template_date&amp;gt; Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important if folding historical test sets.&lt;br /&gt;
Optional Parameters:&lt;br /&gt;
-n &amp;lt;openmm_threads&amp;gt;   OpenMM threads (default: all available cores)&lt;br /&gt;
-b &amp;lt;benchmark&amp;gt;    Run multiple JAX model evaluations to obtain a timing that excludes the compilation time, which should be more indicative of the time required for inferencing many proteins (default: false)&lt;br /&gt;
-g &amp;lt;use_gpu&amp;gt;      Enable NVIDIA runtime to run with GPUs (default: true)&lt;br /&gt;
-a &amp;lt;gpu_devices&amp;gt;  Comma separated list of devices to pass to &amp;#039;CUDA_VISIBLE_DEVICES&amp;#039; (default: 0)&lt;br /&gt;
-m &amp;lt;model_preset&amp;gt;  Choose preset model configuration - the monomer model (monomer), the monomer model with extra ensembling (monomer_casp14), monomer model with pTM head (monomer_ptm), or multimer model (multimer) (default: monomer)&lt;br /&gt;
-p &amp;lt;db_preset&amp;gt;       Choose preset MSA database configuration - smaller genetic database config (reduced_dbs) or full genetic database config (full_dbs) (default: full_dbs)&lt;br /&gt;
-u &amp;lt;use_precomputed_msas&amp;gt;       Whether to read MSAs that have been written to disk. WARNING: This will not check if the sequence, database or configuration have changed. (default: false)&lt;br /&gt;
-r &amp;lt;remove_msas_after_use&amp;gt;       Whether, after structure prediction(s), to delete MSAs that have been written to disk to significantly free up storage space. (default: false)&lt;br /&gt;
-i &amp;lt;is_prokaryote&amp;gt;   Optional for multimer system, not used by the single chain system. This should contain a boolean specifying true where the target complex is from a prokaryote, and false where it is not, or where the origin is unknown. These values determine the pairing method for the MSA (default: false)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Databases ====&lt;br /&gt;
We downloaded the databases to /home/alphafold_folder/alphafold_data on compute-0-300.&lt;br /&gt;
You may use them in place, or copy them to your own storage and point to them with the -d flag of the run script.&lt;br /&gt;
Alternatively, you may download the databases to your own storage via the script &amp;lt;b&amp;gt;download_all_data.sh&amp;lt;/b&amp;gt; located at /home/alphafold_folder/alphafold_multimer_non_docker/scripts/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory:&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script.&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository to use for the output:&lt;br /&gt;
https://github.com/kalininalab/alphafold_non_docker&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; please point to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;lt;b&amp;gt;-a $CUDA_VISIBLE_DEVICES&amp;lt;/b&amp;gt; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load miniconda&lt;br /&gt;
module load miniconda/miniconda3-4.7.12-environmentally&lt;br /&gt;
# activate relevant venv&lt;br /&gt;
conda activate /powerapps/share/centos7/miniconda/miniconda3-4.7.12-environmentally/envs/alphafold_non_docker&lt;br /&gt;
# run alphafold&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
bash run_alphafold.sh -d /home/alphafold_folder/alphafold_data -o ~/output_dir -f /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sample qsub script 2 ====&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Note:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Create a folder for the output in your home directory:&lt;br /&gt;
mkdir ~/alphafold_output&lt;br /&gt;
then run the script.&lt;br /&gt;
* You may also download the dummy_test folder from this GitHub repository to use for the output:&lt;br /&gt;
https://github.com/kalininalab/alphafold_non_docker&lt;br /&gt;
* /home/alphafold_folder/alphafold_multimer_non_docker/example/query.fasta is sample data; please point to the data you need to query.&lt;br /&gt;
* The line &amp;#039;&amp;#039;&amp;#039;export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&amp;#039;&amp;#039;&amp;#039; and the flag &amp;lt;b&amp;gt;-a $CUDA_VISIBLE_DEVICES&amp;lt;/b&amp;gt; select the next free GPU on the server; please leave them as is.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=4:ngpus=1&lt;br /&gt;
#PBS -q gpu&lt;br /&gt;
&lt;br /&gt;
# Description: AlphaFold-Multimer (Non-Docker) with auto-gpu selection&lt;br /&gt;
# Original Author: Lev Arie Krapivner&lt;br /&gt;
&lt;br /&gt;
# load conda env&lt;br /&gt;
module load alphafold/alphafold_non_docker_2.2.0&lt;br /&gt;
cd /home/alphafold_folder/alphafold_multimer_non_docker/&lt;br /&gt;
# call to check_available_gpu python script&lt;br /&gt;
# returns the param for CUDA_VISIBLE_DEVICE which the run alphafold script uses&lt;br /&gt;
&lt;br /&gt;
export CUDA_VISIBLE_DEVICES=$(python3 /powerapps/scripts/check_avail_gpu.py)&lt;br /&gt;
# echo &amp;quot;CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES&amp;quot;&lt;br /&gt;
run_alphafold.sh -d PATH_TO_DATABASE_FOLDER -o ~/output_dir -f PATH_TO_FASTA_FILE -t 2020-05-14 -a $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/amorehead/alphafold_non_docker Alphafold - non_docker source]&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1396</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1396"/>
		<updated>2022-09-11T11:19:14Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
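For example, to extract the 6.0.1 package downloaded above (filename as on this page):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# unpack the downloaded archive into the current directory&lt;br /&gt;
tar -xzf PanGPLinux-6.0.1-c6.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;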
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dpkg -i GlobalProtect_UI_deb-6.0.1.1-6.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;yum localinstall GlobalProtect_UI_rpm-6.0.1.1-6.rpm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
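If you prefer the terminal, the Linux package also ships a command-line client; connecting with it might look like this (a sketch, assuming the globalprotect CLI flavor was installed and is on your PATH):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# connect to the TAU portal; you will be prompted for your credentials&lt;br /&gt;
globalprotect connect --portal vpn.tau.ac.il&lt;br /&gt;
# check the connection status&lt;br /&gt;
globalprotect show --status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;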
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
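To test the fix without rebooting, you can also launch the client manually with the same override (a sketch based on the Exec line above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# run the UI once with the custom OpenSSL config&lt;br /&gt;
OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;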
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open your mobile Google Authenticator and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1395</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1395"/>
		<updated>2022-08-14T08:08:59Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dpkg -i GlobalProtect_UI_deb-6.0.1.1-6.deb&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;yum localinstall GlobalProtect_UI_rpm-6.0.1.1-6.rpm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open your mobile Google Authenticator and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1394</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1394"/>
		<updated>2022-08-14T08:04:00Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open your mobile Google Authenticator and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1393</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1393"/>
		<updated>2022-08-14T08:02:58Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open your mobile Google Authenticator and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1392</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1392"/>
		<updated>2022-08-14T08:02:26Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;source:https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up window with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open your mobile Google Authenticator and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1391</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1391"/>
		<updated>2022-08-14T08:02:06Z</updated>

		<summary type="html">&lt;p&gt;Levk: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University is introducing a VPN with a two-factor authentication standard.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then you need to install Google Authenticator on your mobile device and register it at TAU.&lt;br /&gt;
&lt;br /&gt;
After that you may download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1”, then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for the Google Authenticator account setup:&lt;br /&gt;
scan it with your mobile Google Authenticator app using the “+” in the bottom right corner of the device,&lt;br /&gt;
and enter the generated code from the mobile Google Authenticator into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client; from the browser, go to:&lt;br /&gt;
&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-5.3.4-c5.tgz GlobalProtect-5.3.4]&lt;br /&gt;
[https://hpcguide.tau.ac.il/vpn/PanGPLinux-6.0.1-c6.tgz GlobalProtect-6.0.1]&lt;br /&gt;
&lt;br /&gt;
Extract the Linux package and install the appropriate version:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Execute and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by pressing the relevant icon (&amp;quot;1&amp;quot; as in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; as in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu version (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC (for example with &amp;lt;code&amp;gt;vim ~/ssl.conf&amp;lt;/code&amp;gt;) with the following content:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run sudo updatedb before this one)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one --&amp;gt; &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my Linux (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
open  &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
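&lt;br /&gt;
Since this edits a system-wide OpenSSL file, it is prudent to keep a backup before changing it; for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# save a copy of the original config, then edit it with root rights&lt;br /&gt;
sudo cp /usr/lib/ssl/openssl.cnf /usr/lib/ssl/openssl.cnf.bak&lt;br /&gt;
sudoedit /usr/lib/ssl/openssl.cnf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;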
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1390</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1390"/>
		<updated>2022-08-02T08:42:40Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
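From a terminal, the same download could be done with wget, for example (any of the URLs above works the same way):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# fetch the client archive&lt;br /&gt;
wget https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;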
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Errors==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC with the following content:&lt;br /&gt;
vim ~/ssl.conf&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; first)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one: &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my machine (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1389</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1389"/>
		<updated>2022-07-20T09:41:40Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC with the following content:&lt;br /&gt;
vim ~/ssl.conf&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; first)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one: &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my machine (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1388</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1388"/>
		<updated>2022-07-20T09:40:45Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC with the following content:&lt;br /&gt;
vim ~/ssl.conf&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then find this file:&lt;br /&gt;
&amp;lt;code&amp;gt;sudo find / -name PanGPUI.desktop -type f&amp;lt;/code&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;code&amp;gt;locate PanGPUI.desktop&amp;lt;/code&amp;gt; (you may need to run &amp;lt;code&amp;gt;sudo updatedb&amp;lt;/code&amp;gt; first)&lt;br /&gt;
There should be at least two paths containing this file; ignore this one: &amp;lt;code&amp;gt;/opt/paloaltonetworks/globalprotect/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On my machine (Kubuntu 22.04) the file is here: &amp;lt;code&amp;gt;/etc/xdg/autostart/PanGPUI.desktop&amp;lt;/code&amp;gt;&lt;br /&gt;
Edit this file and change it from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=/opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=PanGPUI&lt;br /&gt;
Type=Application&lt;br /&gt;
Exec=OPENSSL_CONF=~/ssl.conf /opt/paloaltonetworks/globalprotect/PanGPUI&lt;br /&gt;
Terminal=false&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After restarting your PC, GlobalProtect will autostart with the custom SSL settings.&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1387</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1387"/>
		<updated>2022-07-20T09:32:43Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
Create a new &amp;lt;code&amp;gt;ssl.conf&amp;lt;/code&amp;gt; file on your PC with the following content:&lt;br /&gt;
vim ~/ssl.conf&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
openssl_conf = openssl_init&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1386</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1386"/>
		<updated>2022-07-20T09:30:48Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Fix only for globalprotect====&lt;br /&gt;
&lt;br /&gt;
====Global fix====&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1385</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1385"/>
		<updated>2022-07-19T11:32:38Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:784px-Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=File:784px-Vpn_ssl_error.png&amp;diff=1384</id>
		<title>File:784px-Vpn ssl error.png</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=File:784px-Vpn_ssl_error.png&amp;diff=1384"/>
		<updated>2022-07-19T11:32:04Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
	<entry>
		<id>https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1383</id>
		<title>Palo Alto VPN for linux</title>
		<link rel="alternate" type="text/html" href="https://hpcguide.tau.ac.il/index.php?title=Palo_Alto_VPN_for_linux&amp;diff=1383"/>
		<updated>2022-07-19T11:30:28Z</updated>

		<summary type="html">&lt;p&gt;Levk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For security reasons, Tel Aviv University provides a VPN with two-factor authentication.&lt;br /&gt;
&lt;br /&gt;
To use it, users must verify or fill in their mobile phone number on the myTAU page&lt;br /&gt;
(https://mytau.tau.ac.il/GetResource.php) and enroll in the service.&lt;br /&gt;
Then install Google Authenticator on your mobile device and register it with TAU.&lt;br /&gt;
&lt;br /&gt;
After that you can download and install the Palo Alto GlobalProtect VPN client on your device (all&lt;br /&gt;
operating systems are supported: iOS, Android, Linux, macOS and even Windows).&lt;br /&gt;
&lt;br /&gt;
The steps:&lt;br /&gt;
==Enrollment==&lt;br /&gt;
Go to https://mytau.tau.ac.il/GetResource.php&lt;br /&gt;
&lt;br /&gt;
Choose “1” and then “2”:&lt;br /&gt;
&lt;br /&gt;
You will then receive an SMS with a code valid for 2 minutes; enter it immediately into the field.&lt;br /&gt;
You will then be redirected to a QR code for Google Authenticator account setup:&lt;br /&gt;
Scan it with the Google Authenticator app on your mobile device (use the “+” in the bottom right corner),&lt;br /&gt;
then enter the generated code into the field and press the green button.&lt;br /&gt;
&lt;br /&gt;
==Download==&lt;br /&gt;
Download and install the VPN client. From the browser, go to one of the links below:&lt;br /&gt;
&lt;br /&gt;
If you are inside the VPN tunnel, download one of the following versions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
http://hpc-tftp.tau.ac.il/public_files/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are outside the VPN tunnel, download one of the following files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.1-c9.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-5.3.2-c3.tgz&lt;br /&gt;
https://www.tau.ac.il/~danny/vpn/PanGPLinux-6.0.0-c18.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Extract the Linux package and install the version appropriate for your distribution:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Debian/Ubuntu&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dpkg -i GlobalProtect_UI_deb-5.3.1.0-36.deb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Redhat/Centos&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
yum localinstall GlobalProtect_UI_rpm-5.3.1.0-36.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configure==&lt;br /&gt;
&lt;br /&gt;
[[File:Paloalto3.PNG|thumb|right]]&lt;br /&gt;
&lt;br /&gt;
Run and configure the VPN client on Linux (other operating systems are similar):&lt;br /&gt;
&lt;br /&gt;
Open the client by clicking the relevant icon (&amp;quot;1&amp;quot; in the picture on the right)&lt;br /&gt;
&lt;br /&gt;
and enter the address &amp;#039;&amp;#039;&amp;#039;vpn.tau.ac.il&amp;#039;&amp;#039;&amp;#039; (&amp;quot;2&amp;quot; in the picture on the right).&lt;br /&gt;
&lt;br /&gt;
==Error==&lt;br /&gt;
===SSL Error===&lt;br /&gt;
On the latest Ubuntu release (22.04), after installing and configuring the GlobalProtect VPN, you may get this error:&lt;br /&gt;
&lt;br /&gt;
[[File:Vpn ssl error.png|none|thumb]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is how to work around it:&lt;br /&gt;
&lt;br /&gt;
Open &amp;lt;code&amp;gt;/usr/lib/ssl/openssl.cnf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
comment out this section:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# [openssl_init]&lt;br /&gt;
&lt;br /&gt;
# providers = provider_sect&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;add this new section under the commented one from earlier:&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[openssl_init]&lt;br /&gt;
ssl_conf = ssl_sect&lt;br /&gt;
&lt;br /&gt;
[ssl_sect]&lt;br /&gt;
system_default = system_default_sect&lt;br /&gt;
&lt;br /&gt;
[system_default_sect]&lt;br /&gt;
Options = UnsafeLegacyRenegotiation&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Restart the GlobalProtect app and the error should be fixed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Source: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1960268&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TAU credentials==&lt;br /&gt;
[[File:Paloalto4.PNG|thumb|right]]&lt;br /&gt;
Fill in the pop-up windows with your TAU credentials:&lt;br /&gt;
&lt;br /&gt;
Open Google Authenticator on your mobile device and enter the code from there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Congratulations: you are done!&lt;/div&gt;</summary>
		<author><name>Levk</name></author>
	</entry>
</feed>