
Resource requirements and limitations

The SGE batch scheduling system dispatches jobs according to the availability of the resources necessary for their successful completion. A resource that is left unspecified, or one that is specified unnecessarily high, may therefore cause an avoidable delay in the scheduling of your jobs. It is advisable to specify the required resources of each of your jobs as accurately as possible in order to make optimal use of our HPC systems.

Status information and resource limitations

The following sections give a brief introduction on how to obtain information about the cluster's overall resources and those currently available.

Host specific resource and status information (qhost)

To get a brief overview of the currently available resources on each of the cluster's execution hosts, use the command qhost. For a complete representation of all available host-specific resource attributes, execute

qhost -F

For more information about the standard resource attributes, consult the complex man page (man complex).
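
If you are interested in only a few attributes, qhost accepts a comma-separated list of resource names after -F. The attributes in the following sketch (h_vmem and num_proc) are merely examples; consult the qhost man page for details:

qhost -F h_vmem,num_proc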

Queue specific resource information

To obtain a list of the available queues on the cluster execute

qconf -sql

Use the following command to get detailed resource information for a specific queue:

qconf -sq queue
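
For example, to display only the default runtime and memory limits of a queue, you may filter the output (std.q below is just a placeholder for an actual queue name obtained with qconf -sql):

qconf -sq std.q | grep -E 'h_rt|h_vmem'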

Slot limitations

There are limitations on the number of available slots per user, as well as on the maximum number of slots per job. As a rule of thumb, a single job may occupy approximately half of the cluster, and each user may fill up to about 75% of the cluster with their jobs.

Depending on the specific HPC system, there are also transient limitations on the number of available slots per user, which come into effect only at times of high cluster load (and consequently increased competition for the available resources). Execute

qquota

to see these limits and your current resource consumption (nothing is displayed if you have no running jobs on the cluster).
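
qquota normally reports only your own consumption. To inspect the quotas of all users, a wildcard user list can be given (a hedged example; consult the qquota man page for the exact option syntax):

qquota -u "*"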

Note: The transient limits are usually not enforced, but they may cause problems for interactive sessions (see the section Submitting interactive jobs) or for short-running jobs. Please contact the ZID cluster administration if you experience problems or need additional resources for an urgent project.

Specifying resource requirements for jobs

The following sections provide examples of how to specify the resource requirements of your jobs.

Runtime limits

For optimal scheduling of your jobs, i.e. to allow a job to start as soon as the necessary resources become available, it is advisable to specify the job's runtime as accurately as possible (this exploits the so-called backfilling capabilities of SGE when resource reservations are in place). Runtime limits are specified with the h_rt resource attribute. For example, if your job will certainly not need more than 4 hours and 30 minutes (wall-clock time) to complete, submit it as follows:

qsub -l h_rt=4:30:00 job_script.sh

Note: Do not set runtime limits too aggressively, and be careful if you are unsure about the actual duration of your jobs: a job is terminated as soon as it exceeds its specified runtime limit. If no runtime limit is provided, the system's default limit applies; it can be looked up in the queue specific resource information (see above).
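
Instead of passing resource requests on the qsub command line, they can also be embedded in the job script itself as SGE directive lines beginning with #$; options given on the command line override the embedded ones. A minimal sketch (the program name is a placeholder):

#!/bin/bash
#$ -l h_rt=4:30:00
# your actual commands follow
./my_program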

Memory usage

To avoid job failures due to memory oversubscription, the maximum amount of memory available per process is limited by default to a cluster-specific value (issue the command qconf -sc | grep "default\|h_vmem" to find out this default).

If your job requires less than 1.5 GByte of memory per process, you can explicitly specify this by setting SGE's resource parameter h_vmem, as in the following example:

qsub -l h_vmem=1500M -pe openmpi-fillup 4 job_script.sh

This will reserve a total of 6 GByte of memory for your job (4 slots × 1500 MByte), potentially distributed over several hosts. Memory values are specified in bytes as positive decimal (1500), octal (02734) or hexadecimal (0x5dc) integers. For convenience, the multipliers k (1000), K (1024), m, M, g and G can be appended.

Note: If you know that your memory requirements lie below the default limit, please do specify the lower value.
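
Runtime and memory requests can be combined in a single submission. The following sketch requests 4 slots, at most 4 hours 30 minutes of runtime, and 1.5 GByte of memory per slot (the parallel environment and script name are taken from the examples above):

qsub -l h_rt=4:30:00,h_vmem=1500M -pe openmpi-fillup 4 job_script.sh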

Altering resource requirements for pending jobs

You can alter most of the resource requirements of pending jobs at any time with SGE's qalter command. For example, to change the parallel environment, including the number of desired slots, of a waiting parallel job, enter:

qalter -pe openmpi-fillup 8-16 YOUR_JOB_ID

Note that when changing resource limits, you must restate all resource limits existing for the job, otherwise they will be reset to their defaults.

Example:

You have submitted a job with 10 hours of runtime and 4 GByte of virtual memory per job slot (qstat displays h_rt in seconds, so 10 hours appear as 36000):

qstat -j YOUR_JOB_ID|grep 'hard resource_list'
hard resource_list:         h_vmem=4G,h_rt=36000

You want to extend the job's run time to 20 hours, leaving the existing memory limit unchanged. Issue

qalter -l h_vmem=4G,h_rt=20:00:00 YOUR_JOB_ID

to set the new job limits. The -l and -pe options may be set independently of each other.
