Welcome to HPC Guide.

Linux basic commands

Public queues

Submitting a job to a queue

Submitting a job to a slurm queue

Creating and using conda environments

Palo Alto VPN for Linux

Alphafold

Using GPU

This HPC tutorial is designed for researchers at TAU who need computational power (computer resources) and wish to explore and use our High Performance Computing (HPC) core facilities.
Readers may be completely new to HPC concepts, but should have a basic understanding of computers and computer programming.

What is HPC?

“High Performance Computing” (HPC) is computing on a “supercomputer”:
a computer at the front line of contemporary processing capacity, particularly in speed of calculation and available memory.
A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system.
The components of a cluster are usually connected to each other through fast local area networks (LANs), with each node (a computer used as a server) running its own instance of an operating system.
Computer clusters emerged from the convergence of several computing trends, including the availability of low-cost microprocessors,
high-speed networks, and software for high-performance distributed computing.
Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being more cost-effective than single computers of comparable speed or availability.
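In practice, users do not run computations directly on cluster nodes; work is submitted to a job scheduler (on our systems, Slurm; see Submitting a job to a slurm queue). The following is a minimal sketch of a Slurm batch script, assuming a hypothetical partition name, module name, and script name; check the Public queues page for the actual queue names and resource limits on our cluster.

 #!/bin/bash
 #SBATCH --job-name=example_job       # name shown in the queue
 #SBATCH --partition=power-general    # placeholder: replace with a real public queue
 #SBATCH --ntasks=1                   # one task (process)
 #SBATCH --cpus-per-task=4            # CPU cores for that task
 #SBATCH --mem=8G                     # memory requested for the job
 #SBATCH --time=01:00:00              # wall-time limit (HH:MM:SS)
 #SBATCH --output=example_job_%j.log  # %j is replaced by the job ID
 
 # Load software and run the computation on the allocated compute node.
 module load python/3.9               # placeholder module name
 python my_analysis.py                # placeholder script

Such a script would be submitted with sbatch example_job.sh, and squeue -u $USER shows its status while the scheduler allocates the requested resources on one of the cluster nodes described above.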