Get Set of Available Workers
Usage:

availableWorkers(
  methods = getOption2("parallelly.availableWorkers.methods",
                       c("mc.cores", "_R_CHECK_LIMIT_CORES_", "PBS", "SGE",
                         "Slurm", "LSF", "custom", "system", "fallback")),
  na.rm = TRUE,
  logical = getOption2("parallelly.availableCores.logical", TRUE),
  default = "localhost",
  which = c("auto", "min", "max", "all")
)
Arguments:

methods: A character vector specifying how to infer the number of
  available cores.

na.rm: If TRUE, only non-missing settings are considered/returned.

logical: Passed as-is to availableCores().

default: The default set of workers.

which: A character specifying which set / sets to return. If "auto"
  (default), the first non-empty set found is returned.
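As a rough illustration of these arguments (a sketch only; the exact
sets returned depend entirely on your environment):

library(parallelly)

## First non-empty set found (the default):
availableWorkers()

## Restrict the search to specific methods:
availableWorkers(methods = c("mc.cores", "fallback"))

## Inspect the sets from all methods at once:
availableWorkers(which = "all")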
Details:

The default set of workers for each method is

  rep("localhost", times = availableCores(methods = method, logical = logical))

which means that each method will use at least as many parallel workers
on the current machine as availableCores() reports for that method.
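For instance, applying the above formula to the "system" method (the
choice of method and the core count here are purely illustrative):

library(parallelly)

## Default worker set for the "system" method, per the formula above;
## on a four-core machine this yields four copies of "localhost":
workers <- rep("localhost",
               times = availableCores(methods = "system", logical = TRUE))
print(workers)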
In addition, the following settings ("methods") are also acknowledged
(simplified parsing sketches follow this list):

"PBS":
  Query TORQUE/PBS environment variable PBS_NODEFILE. If this is set
  and specifies an existing file, then the set of workers is read from
  that file, where one worker (node) is given per line. An example of
  a job submission that results in this is 'qsub -l nodes=4:ppn=2',
  which requests four nodes each with two cores.

"SGE":
  Query Sun/Oracle Grid Engine (SGE) environment variable PE_HOSTFILE.
  An example of a job submission that results in this is
  'qsub -pe mpi 8' (or 'qsub -pe ompi 8'), which requests eight cores
  on any number of machines.

"LSF":
  Query LSF/OpenLava environment variable LSB_HOSTS.

"Slurm":
  Query Slurm environment variable SLURM_JOB_NODELIST (with fallback to
  the legacy SLURM_NODELIST) and parse the set of nodes. Then query
  Slurm environment variable SLURM_JOB_CPUS_PER_NODE (with fallback to
  SLURM_TASKS_PER_NODE) to infer how many CPU cores Slurm has allotted
  to each of the nodes. If SLURM_CPUS_PER_TASK is set, which is always
  a scalar, then that is respected too, i.e. if it is smaller, then it
  is used for all nodes. For example, if SLURM_NODELIST="n1,n[03-05]"
  (which expands to c("n1", "n03", "n04", "n05")) and
  SLURM_JOB_CPUS_PER_NODE="2(x2),3,2" (which expands to c(2, 2, 3, 2)),
  then c("n1", "n1", "n03", "n03", "n04", "n04", "n04", "n05", "n05")
  is returned. If, in addition, SLURM_CPUS_PER_TASK=1, which can happen
  depending on hyperthreading configurations on the Slurm cluster, then
  c("n1", "n03", "n04", "n05") is returned.

"custom":
  If option parallelly.availableWorkers.custom is set and a function,
  then this function will be called (without arguments) and its value
  will be coerced to a character vector, which will be interpreted as
  hostnames of available workers.
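As mentioned above, here is a simplified sketch, not the package's
actual implementation, of how the PBS and LSF settings can be turned
into worker sets:

## PBS/TORQUE: PBS_NODEFILE points to a file with one worker (node) per line
pbs_workers <- local({
  nodefile <- Sys.getenv("PBS_NODEFILE")
  if (nzchar(nodefile) && file.exists(nodefile)) {
    readLines(nodefile)
  } else {
    character(0L)
  }
})

## LSF/OpenLava: LSB_HOSTS holds a space-separated list of host names,
## one entry per slot
lsf_workers <- local({
  hosts <- Sys.getenv("LSB_HOSTS")
  if (nzchar(hosts)) strsplit(hosts, split = " ", fixed = TRUE)[[1]]
  else character(0L)
})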
Value:

Returns a character vector of workers, which typically consists of names
of machines / compute nodes, but may also be IP numbers.
availableWorkers(methods = "Slurm") will expand SLURM_JOB_NODELIST
using 'scontrol show hostnames "$SLURM_JOB_NODELIST"', if available.
If not available, then it attempts to parse the compressed nodelist
based on a best-guess understanding of what the possible syntax may be.
One known limitation is that "multi-dimensional" ranges are not
supported, e.g. "a[1-2]b[3-4]" is expanded by scontrol to
c("a1b3", "a1b4", "a2b3", "a2b4"). If scontrol is not available, then
any components that failed to be parsed are dropped with an informative
warning message. If no components could be parsed, then the result of
methods = "Slurm" will be empty.
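A minimal sketch of the scontrol-based expansion described above,
assuming a Slurm cluster where the scontrol command is on the PATH
(an illustration only, not the package's verbatim implementation):

expand_slurm_nodelist <- function(nodelist = Sys.getenv("SLURM_JOB_NODELIST")) {
  if (!nzchar(nodelist)) return(character(0L))
  if (nzchar(Sys.which("scontrol"))) {
    ## Let Slurm expand compressed specifications such as "n1,n[03-05]"
    system2("scontrol", args = c("show", "hostnames", shQuote(nodelist)),
            stdout = TRUE)
  } else {
    ## Fallback: handle only plain comma-separated lists of hostnames
    strsplit(nodelist, split = ",", fixed = TRUE)[[1]]
  }
}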
To get the number of available workers on the current machine, see
availableCores().
Examples:

message(paste("Available workers:",
              paste(sQuote(availableWorkers()), collapse = ", ")))

## Not run:
options(mc.cores = 2L)
message(paste("Available workers:",
              paste(sQuote(availableWorkers()), collapse = ", ")))
## End(Not run)

## Not run:
## Always use two workers on host 'n1' and one on host 'n2'
options(parallelly.availableWorkers.custom = function() {
  c("n1", "n1", "n2")
})
message(paste("Available workers:",
              paste(sQuote(availableWorkers()), collapse = ", ")))
## End(Not run)