Configuration Reference¶
ReFrame’s behavior can be configured through its configuration file, environment variables and command-line options. An option can be specified via multiple paths (e.g., a configuration file parameter and an environment variable), in which case command-line options take precedence over environment variables, which in turn take precedence over configuration file options. This section provides a complete reference of the configuration options of ReFrame that can be set in its configuration file or through environment variables.
ReFrame’s configuration is in JSON syntax. The full schema describing it can be found in the reframe/schemas/config.json file. The final configuration for ReFrame is validated against this schema.
The syntax we use to describe the different configuration objects follows the convention OBJECT[.OBJECT]*.PROPERTY.
Even if a configuration object contains a list of other objects, this is not reflected in the above syntax, as all objects in a certain list are homogeneous. For example, by systems.partitions.name we designate the name property of any partition object inside the partitions property of any system object inside the top-level systems object. If we were to use indices, that would be rewritten as systems[i].partitions[j].name, where i indexes the systems and j indexes the partitions of the i-th system.
For cases where the objects in a list are not homogeneous, e.g., the logging handlers, we surround the object type with double dots (..). For example, the logging.handlers_perflog..filelog..name syntax designates the name attribute of the filelog logging handler.
Top-level Configuration¶
The top-level configuration object is essentially the full configuration of ReFrame. It consists of the following properties, which we also call conventionally configuration sections:
- systems¶
- Required:
Yes
A list of system configuration objects.
- environments¶
- Required:
Yes
A list of environment configuration objects.
- logging¶
- Required:
Yes
A list of logging configuration objects.
- modes¶
- Required:
No
A list of execution mode configuration objects.
- general¶
- Required:
No
A list of general configuration objects.
- storage¶
- Required:
No
A list of storage configuration objects.
Added in version 4.7.
- autodetect_methods¶
- Required:
No
- Default:
["py::socket.gethostname"]
A list of system auto-detection methods for identifying the current system.
The list can contain two types of methods:
- Python methods: These are prefixed with py:: and should point to a Python callable taking zero arguments and returning a string. If the specified Python callable is not prefixed with a module, it will be looked up in the loaded configuration files, starting from the last file. If the requested symbol cannot be found, a warning will be issued and the method will be ignored.
- Shell commands: Any string not prefixed with py:: will be treated as a shell command and will be executed during auto-detection to retrieve the hostname. The standard output of the command will be used.
If the --system option is not passed, ReFrame will try to auto-detect the current system by trying the methods in this list successively, until one of them succeeds. The resulting name will be matched against the hostnames patterns of each system and the first system that matches will be used as the current one.
The auto-detection methods can also be controlled through the RFM_AUTODETECT_METHODS environment variable.
Added in version 4.3.
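For illustration, the following top-level snippet combines both method types; the shell command shown is a placeholder, not something ReFrame requires:
'autodetect_methods': [
    'py::socket.gethostname',
    'hostname -f'
]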
Warning
Changed in version 4.0.0: The schedulers section is removed. Scheduler options should be set per partition using the sched_options attribute.
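Putting the sections above together, here is a minimal configuration sketch; the system, partition and environment entries are illustrative placeholders and this is not a complete, validated configuration:
{
    'systems': [
        {
            'name': 'mycluster',
            'descr': 'Example cluster',
            'hostnames': ['login\d+'],
            'partitions': [
                {
                    'name': 'default',
                    'scheduler': 'local',
                    'launcher': 'local',
                    'environs': ['builtin']
                }
            ]
        }
    ],
    'environments': [
        {'name': 'builtin', 'cc': 'cc', 'cxx': 'CC', 'ftn': 'ftn'}
    ],
    'logging': [
        {
            'handlers': [
                {'type': 'stream', 'name': 'stdout', 'level': 'info', 'format': '%(message)s'}
            ],
            'handlers_perflog': [
                {
                    'type': 'filelog',
                    'prefix': '%(check_system)s/%(check_partition)s',
                    'format': '%(check_result)s|%(check_perfvalues)s',
                    'format_perfvars': '%(check_perf_value)s %(check_perf_unit)s|'
                }
            ]
        }
    ]
}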
System Configuration¶
- systems.name¶
- Required:
Yes
The name of this system. Only alphanumeric characters, dashes (-) and underscores (_) are allowed.
- systems.descr¶
- Required:
No
- Default:
""
The description of this system.
- systems.hostnames¶
- Required:
Yes
A list of hostname regular expression patterns in Python syntax, which will be used by the framework in order to automatically select a system configuration. For the auto-selection process, check the configuration of the autodetect_methods option.
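As a sketch, the patterns below would match hosts such as login1.mycluster.org or any host whose name starts with mycluster- (the names are hypothetical):
'hostnames': ['login\d+\.mycluster\.org', 'mycluster-.*']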
- systems.max_local_jobs¶
The maximum number of forced local build or run jobs allowed.
Forced local jobs run within the execution context of ReFrame.
- Required:
No
- Default:
8
Added in version 3.10.0.
- systems.modules_system¶
- Required:
No
- Default:
"nomod"
The modules system that should be used for loading environment modules on this system. Available values are the following:
- tmod: The classic Tcl implementation of the environment modules (version 3.2).
- tmod31: The classic Tcl implementation of the environment modules (version 3.1). A separate backend is required for Tmod 3.1, because its Python bindings are different from those of Tmod 3.2.
- tmod32: A synonym of tmod.
- tmod4: The new environment modules implementation (versions older than 4.1 are not supported).
- lmod: The Lua implementation of the environment modules.
- spack: Spack’s built-in mechanism for managing modules.
- nomod: This is to denote that no modules system is used by this system.
Normally, upon loading the configuration of the system, ReFrame checks that a sane installation exists for the modules system requested and will issue an error if it fails to find one. The modules system sanity check is skipped when resolve_module_conflicts is set to False. This is useful in cases where the current system does not have a modules system but the remote partitions have one and you would like ReFrame to generate the module commands.
Added in version 3.4: The spack backend is added.
Changed in version 4.5.0: The modules system sanity check is skipped when the config.general.resolve_module_conflicts is not set.
- systems.modules¶
- Required:
No
- Default:
[]
A list of environment module objects to be loaded always when running on this system. These modules modify the ReFrame environment. This is useful in cases where a particular module is needed, for example, to submit jobs on a specific system.
- systems.env_vars¶
- Required:
No
- Default:
[]
A list of environment variables to be set always when running on this system. These variables modify the ReFrame environment. Each environment variable is specified as a two-element list containing the variable name and its value. You may reference other environment variables when defining an environment variable here. ReFrame will expand its value. Variables are set after the environment modules are loaded.
Added in version 4.0.0.
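For example, the following sketch sets one new variable and prepends to PATH; the paths are placeholders and the reference to $PATH illustrates the variable expansion described above:
'env_vars': [
    ['MY_STACK_HOME', '/opt/mystack'],
    ['PATH', '/opt/mystack/bin:$PATH']
]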
- systems.variables¶
Deprecated since version 4.0.0: Please use env_vars instead.
- systems.prefix¶
- Required:
No
- Default:
"."
Directory prefix for a ReFrame run on this system. Any directories or files produced by ReFrame will use this prefix, if not specified otherwise.
- systems.stagedir¶
- Required:
No
- Default:
"${RFM_PREFIX}/stage"
Stage directory prefix for this system. This is the directory prefix, where ReFrame will create the stage directories for each individual test case.
- systems.outputdir¶
- Required:
No
- Default:
"${RFM_PREFIX}/output"
Output directory prefix for this system. This is the directory prefix, where ReFrame will save information about the successful tests.
- systems.resourcesdir¶
- Required:
No
- Default:
"."
Directory prefix where external test resources (e.g., large input files) are stored. You may reference this prefix from within a regression test by accessing the
resourcesdir
attribute of the current system.
- systems.partitions¶
- Required:
Yes
A list of system partition configuration objects. This list must have at least one element.
- systems.sched_options¶
- Required:
No
- Default:
{}
Scheduler options for the local scheduler that is associated with ReFrame’s execution context. To understand the difference between the different execution contexts, please refer to “Where each pipeline stage is executed?” For the available scheduler options, see the sched_options in the partition configuration below.
Added in version 4.1.
Warning
This option is broken in 4.0.
System Partition Configuration¶
- systems.partitions.name¶
- Required:
Yes
The name of this partition. Only alphanumeric characters, dashes (
-
) and underscores (_
) are allowed.
- systems.partitions.descr¶
- Required:
No
- Default:
""
The description of this partition.
- systems.partitions.scheduler¶
- Required:
Yes
The job scheduler that will be used to launch jobs on this partition. Supported schedulers are the following:
- flux: Jobs will be launched using the Flux Framework scheduler.
- local: Jobs will be launched locally without using any job scheduler.
- lsf: Jobs will be launched using the LSF scheduler.
- oar: Jobs will be launched using the OAR scheduler.
- pbs: Jobs will be launched using the PBS Pro scheduler.
- sge: Jobs will be launched using the Sun Grid Engine scheduler.
- slurm: Jobs will be launched using the Slurm scheduler. This backend requires job accounting to be enabled in the target system. If not, you should consider using the squeue backend below.
- squeue: Jobs will be launched using the Slurm scheduler. This backend does not rely on job accounting to retrieve job statuses, but ReFrame does its best to query the job state as reliably as possible.
- ssh: Jobs will be launched on a remote host using SSH. The remote host will be selected from the list of hosts specified in ssh_hosts. The scheduler keeps track of the hosts that it has submitted jobs to, and it will select the next available one in a round-robin fashion. For connecting to a remote host, the options specified in access will be used. When a job is submitted with this scheduler, its stage directory will be copied over to a unique temporary directory on the remote host, then the job will be executed and, finally, any produced artifacts will be copied back. The contents of the stage directory are copied to the remote host either using rsync, if available, or scp as a second choice. The same access options will be used in those operations as well. Please note that the connection options of ssh and scp differ and ReFrame will not attempt to translate any options between the two utilities in case scp is selected for copying to the remote host. In this case, it is preferable to set up the host connection options in ~/.ssh/config and leave access blank. Job-scheduler command line options can be used to interact with the ssh backend. More specifically, if the --distribute option is used, a test will be generated for each host listed in ssh_hosts. You can also pin a test to a specific host if you pass the #host directive to the -J option, e.g., -J '#host=myhost'.
- torque: Jobs will be launched using the Torque scheduler.
Added in version 3.7.2: Support for the SGE scheduler is added.
Added in version 3.8.2: Support for the OAR scheduler is added.
Added in version 3.11.0: Support for the LSF scheduler is added.
Added in version 4.4: The ssh scheduler is added.
Note
The way that multiple node jobs are submitted using the SGE scheduler can be very site-specific. For this reason, the sge scheduler backend does not try to interpret any related arguments, e.g., num_tasks, num_tasks_per_node etc. Users must specify how these resources are to be requested by setting the resources partition configuration parameter and then request them from inside a test using the extra_resources test attribute. Here is an example configuration for a system partition named foo that defines different ways for submitting MPI-only, OpenMP-only and MPI+OpenMP jobs:
{
    'name': 'foo',
    'scheduler': 'sge',
    'resources': [
        {
            'name': 'smp',
            'options': ['-pe smp {num_slots}']
        },
        {
            'name': 'mpi',
            'options': ['-pe mpi {num_slots}']
        },
        {
            'name': 'mpismp',
            'options': ['-pe mpismp {num_slots}']
        }
    ]
}
Each test can then request the different types of slots as follows:
self.extra_resources = {
    'smp': {'num_slots': self.num_cpus_per_task},
    'mpi': {'num_slots': self.num_tasks},
    'mpismp': {'num_slots': self.num_tasks*self.num_cpus_per_task}
}
Notice that defining extra_resources allows the test to be portable to other systems that have different schedulers; the extra_resources will simply be ignored in this case and the scheduler backend will interpret the different test fields in the appropriate way.
- systems.partitions.sched_options¶
- Required:
No
- Default:
{}
Scheduler-specific options for this partition. See below for the available options.
Added in version 4.1.
Warning
This option is broken in 4.0.
- systems.partitions.sched_options.sched_access_in_submit¶
- Required:
No
- Default:
false
Normally, ReFrame will pass the access options to the job script only. When this attribute is true, the options are passed in the submission command instead.
This option is relevant for the LSF, OAR, PBS and Slurm backends.
Added in version 4.7.
- systems.partitions.sched_options.ssh_hosts¶
- Required:
No
- Default:
[]
List of hosts in a partition that uses the
ssh
scheduler.
- systems.partitions.sched_options.ignore_reqnodenotavail¶
- Required:
No
- Default:
false
Ignore the ReqNodeNotAvail Slurm state.
If a job associated with a test is in pending state with the Slurm reason ReqNodeNotAvail and a list of unavailable nodes is also specified, ReFrame will check the status of the nodes and, if all of them are indeed down, it will cancel the job. Sometimes, however, when Slurm’s backfill algorithm takes too long to compute, Slurm will set the pending reason to ReqNodeNotAvail and mark all system nodes as unavailable, causing ReFrame to kill the job. In such cases, you may set this parameter to true to avoid this.
This option is relevant for the Slurm backends only.
- systems.partitions.sched_options.job_submit_timeout¶
- Required:
No
- Default:
60
Timeout in seconds for the job submission command.
If timeout is reached, the test issuing that command will be marked as a failure.
- systems.partitions.sched_options.resubmit_on_errors¶
- Required:
No
- Default:
[]
If any of the listed errors occur, try to resubmit the job after a few seconds.
As an example, you could have ReFrame try to resubmit a job in case the maximum submission limit per user is reached by setting this field to ["QOSMaxSubmitJobPerUserLimit"]. You can ignore multiple errors at the same time if you add more error strings in the list.
This option is relevant for the Slurm backends only.
Added in version 3.4.1.
Warning
Job submission is a synchronous operation in ReFrame. If this option is set, ReFrame’s execution will block until the error conditions specified in this list are resolved. No other test would be able to proceed.
- systems.partitions.sched_options.unqualified_hostnames¶
- Required:
No
- Default:
false
Use unqualified hostnames in the local scheduler backend.
Added in version 4.7.
- systems.partitions.sched_options.use_nodes_option¶
- Required:
No
- Default:
false
Always emit the --nodes Slurm option in the preamble of the job script.
This option is relevant for the Slurm backends only.
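As a sketch, a partition that uses the ssh scheduler might combine some of the options above as follows; the host names and timeout value are placeholders:
'sched_options': {
    'ssh_hosts': ['host1.example.org', 'host2.example.org'],
    'job_submit_timeout': 120
}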
- systems.partitions.launcher¶
- Required:
Yes
The parallel job launcher that will be used in this partition to launch parallel programs. Available values are the following:
- alps: Parallel programs will be launched using the Cray ALPS aprun command.
- clush: Parallel programs will be launched using the ClusterShell clush command. This launcher uses the partition’s access property in order to determine the options to be passed to clush.
- ibrun: Parallel programs will be launched using the ibrun command. This is a custom parallel program launcher used at TACC.
- local: No parallel program launcher will be used. The program will be launched locally.
- lrun: Parallel programs will be launched using LC Launcher’s lrun command.
- lrun-gpu: Parallel programs will be launched using LC Launcher’s lrun -M "-gpu" command that enables the CUDA-aware Spectrum MPI.
- mpirun: Parallel programs will be launched using the mpirun command.
- mpiexec: Parallel programs will be launched using the mpiexec command.
- pdsh: Parallel programs will be launched using the pdsh command. This launcher uses the partition’s access property in order to determine the options to be passed to pdsh.
- srun: Parallel programs will be launched using Slurm’s srun command.
- srunalloc: Parallel programs will be launched using Slurm’s srun command, but job allocation options will also be emitted. This can be useful when combined with the local job scheduler.
- ssh: Parallel programs will be launched using SSH. This launcher uses the partition’s access property in order to determine the remote host and any additional options to be passed to the SSH client. The ssh command will be launched in “batch mode,” meaning that password-less access to the remote host must be configured. Here is an example configuration for the ssh launcher:
{
    'name': 'foo',
    'scheduler': 'local',
    'launcher': 'ssh',
    'access': ['-l admin', 'remote.host'],
    'environs': ['builtin'],
}
- upcrun: Parallel programs will be launched using the UPC upcrun command.
- upcxx-run: Parallel programs will be launched using the UPC++ upcxx-run command.
Tip
Added in version 4.0.0: ReFrame also allows you to register your own custom launchers simply by defining them in the configuration. You can follow a small tutorial here.
- systems.partitions.access¶
- Required:
No
- Default:
[]
A list of job scheduler options that will be passed to the generated job script for gaining access to that logical partition.
Note
For the pbs and torque backends, options accepted in the access and resources parameters may either refer to actual qsub options or may just be resource specifications to be passed to the -l option. The backend assumes a qsub option if the options passed in these attributes start with a -.
Note
If constraints are specified in access for the Slurm backends, these will be AND’ed with any additional constraints passed either through the test job options or the -J command-line option. In other words, any constraint passed in access will always be present in the generated job script.
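For illustration only, an access list for a Slurm-based partition could look like the following; the partition, account and constraint names are placeholders:
'access': ['--partition=gpu', '--account=myproject', '--constraint=gpu']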
- systems.partitions.environs¶
- Required:
No
- Default:
[]
A list of environment names that ReFrame will use to run regression tests on this partition. Each environment must be defined in the
environments
section of the configuration and the definition of the environment must be valid for this partition.
- systems.partitions.container_platforms¶
- Required:
No
- Default:
[]
A list for container platform configuration objects. This will allow launching regression tests that use containers on this partition.
- systems.partitions.modules¶
- Required:
No
- Default:
[]
A list of environment module objects to be loaded before running a regression test on this partition.
- systems.partitions.time_limit¶
- Required:
No
- Default:
null
The time limit for the jobs submitted on this partition. When the value is
null
, no time limit is applied.
- systems.partitions.env_vars¶
- Required:
No
- Default:
[]
A list of environment variables to be set before running a regression test on this partition. Each environment variable is specified as a two-element list containing the variable name and its value. You may reference other environment variables when defining an environment variable here. ReFrame will expand its value. Variables are set after the environment modules are loaded.
Added in version 4.0.0.
- systems.partitions.variables¶
Deprecated since version 4.0.0: Please use env_vars instead.
- systems.partitions.max_jobs¶
- Required:
No
- Default:
8
The maximum number of concurrent regression tests that may be active (i.e., not completed) on this partition. This option is relevant only when ReFrame executes with the asynchronous execution policy.
- systems.partitions.prepare_cmds¶
- Required:
No
- Default:
[]
List of shell commands to be emitted before any environment loading commands are emitted.
Added in version 3.5.0.
- systems.partitions.resources¶
- Required:
No
- Default:
[]
A list of job scheduler resource specification objects.
- systems.partitions.processor¶
- Required:
No
- Default:
{}
Processor information for this partition stored in a processor info object. If not set, ReFrame will try to determine this information as follows:
- If the processor configuration metadata file in ~/.reframe/topology/{system}-{part}/processor.json exists, the topology information is loaded from there. These files are generated automatically by ReFrame from previous runs.
- If the corresponding metadata files are not found, the processor information will be auto-detected. If the system partition is local (i.e., local scheduler + local launcher), the processor information is auto-detected unconditionally and stored in the corresponding metadata file for this partition. If the partition is remote, ReFrame will not try to auto-detect it unless the RFM_REMOTE_DETECT or the general.remote_detect configuration option is set. The steps to auto-detect the remote processor information are the following:
  - ReFrame creates a temporary directory, under . by default. This temporary directory prefix can be changed by setting the RFM_REMOTE_WORKDIR environment variable or the general.remote_workdir configuration option. This directory must be shared between the remote node and the one ReFrame is running on.
  - ReFrame changes to that directory and creates a clone of itself:
    - A set of custom commands for the ReFrame installation can be specified through the general.remote_install configuration option (as a list). The installation commands must make sure that the fresh ReFrame clone is found in the system path.
    - If no custom commands are passed, ReFrame tries to perform the installation first using ./bootstrap.sh. If this is not possible, it tries to use pip.
  - ReFrame launches a job for the topology auto-detection: reframe --detect-host-topology=topo.json. The --detect-host-topology option causes ReFrame to detect the topology of the current host, which in this case would be one of the remote compute nodes.
In case of errors during auto-detection, ReFrame will simply issue a warning and continue.
Note
The directory prefix for storing topology information is configurable through the
topology_prefix
configuration option.
Added in version 3.5.0.
Changed in version 3.7.0: ReFrame is now able to detect the processor information automatically.
Changed in version 4.7: Directory prefix for topology files is now configurable.
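A sketch of the general settings that enable the remote auto-detection described above; the shared work directory and the installation commands are hypothetical:
'general': [
    {
        'remote_detect': True,
        'remote_workdir': '/scratch/shared/rfm-detect',
        'remote_install': ['module load python', './bootstrap.sh']
    }
]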
- systems.partitions.devices¶
- Required:
No
- Default:
[]
A list with device info objects for this partition.
Added in version 3.5.0.
- systems.partitions.features¶
- Required:
No
- Default:
[]
User defined features of the partition.
These are accessible through the features attribute of the current_partition and can also be selected through the extended syntax of valid_systems. The values of this list must be alphanumeric strings starting with a non-digit character and may also contain a -.
Added in version 3.11.0.
- systems.partitions.extras¶
- Required:
No
- Default:
{}
User defined attributes of the partition.
These are accessible through the extras attribute of the current_partition and can also be selected through the extended syntax of valid_systems. The attributes of this object must be alphanumeric strings starting with a non-digit character and their values can be of any type.
By default, the values of the scheduler and launcher of the partition are added to the partition’s extras, if not already present.
Added in version 3.5.0.
Changed in version 4.6.0: The default scheduler and launcher extras are added.
Container Platform Configuration¶
ReFrame can launch containerized applications, but in order to do that you need to properly configure a system partition by defining a container platform configuration.
- systems.partitions.container_platforms.type¶
- Required:
Yes
The type of the container platform. Available values are the following:
- Apptainer: The Apptainer container runtime.
- Docker: The Docker container runtime.
- Sarus: The Sarus container runtime.
- Shifter: The Shifter container runtime.
- Singularity: The Singularity container runtime.
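As a sketch, a partition could declare a container platform as follows, using the default and modules properties described below; the module name is a placeholder:
'container_platforms': [
    {
        'type': 'Apptainer',
        'default': True,
        'modules': ['apptainer']
    }
]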
- systems.partitions.container_platforms.default¶
- Required:
No
If set to true, this is the default container platform of this partition. If not specified, the default container platform is assumed to be the first in the list of container_platforms.
Added in version 3.12.0.
- systems.partitions.container_platforms.modules¶
- Required:
No
- Default:
[]
A list of environment module objects to be loaded when running containerized tests using this container platform.
- systems.partitions.container_platforms.env_vars¶
- Required:
No
- Default:
[]
List of environment variables to be set when running containerized tests using this container platform. Each environment variable is specified as a two-element list containing the variable name and its value. You may reference other environment variables when defining an environment variable here. ReFrame will expand its value. Variables are set after the environment modules are loaded.
Added in version 4.0.0.
- systems.partitions.container_platforms.variables¶
Deprecated since version 4.0.0: Please use env_vars instead.
Custom Job Scheduler Resources¶
ReFrame allows you to define custom scheduler resources for each partition that can then be transparently accessed through the extra_resources
attribute of a test or from an environment.
- systems.partitions.resources.name¶
- Required:
Yes
The name of this resource. This name will be used to request this resource in a regression test’s extra_resources.
- systems.partitions.resources.options¶
- Required:
No
- Default:
[]
A list of options to be passed to this partition’s job scheduler. The option strings can contain placeholders of the form {placeholder_name}. These placeholders may be replaced with concrete values by a regression test through the extra_resources attribute.
For example, one could define a gpu resource for a multi-GPU system that uses Slurm as follows:
'resources': [
    {
        'name': 'gpu',
        'options': ['--gres=gpu:{num_gpus_per_node}']
    }
]
A regression test then may request this resource as follows:
self.extra_resources = {'gpu': {'num_gpus_per_node': '8'}}
And the generated job script will have the following line in its preamble:
#SBATCH --gres=gpu:8
A resource specification may also start with #PREFIX, in which case #PREFIX will replace the standard job script prefix of the backend scheduler of this partition. This is useful in cases of job schedulers like Slurm, that allow alternative prefixes for certain features. An example is the DataWarp functionality of Slurm, which is supported by the #DW prefix. One could then define DataWarp related resources as follows:
'resources': [
    {
        'name': 'datawarp',
        'options': [
            '#DW jobdw capacity={capacity} access_mode={mode} type=scratch',
            '#DW stage_out source={out_src} destination={out_dst} type={stage_filetype}'
        ]
    }
]
A regression test that needs to make use of that resource can set its extra_resources as follows:
self.extra_resources = {
    'datawarp': {
        'capacity': '100GB',
        'mode': 'striped',
        'out_src': '$DW_JOB_STRIPED/name',
        'out_dst': '/my/file',
        'stage_filetype': 'file'
    }
}
Environment Configuration¶
Environments defined in this section will be used for running regression tests. They are associated with system partitions.
- environments.name¶
- Required:
Yes
The name of this environment.
- environments.modules¶
- Required:
No
- Default:
[]
A list of environment module objects to be loaded when this environment is loaded.
- environments.env_vars¶
- Required:
No
- Default:
[]
A list of environment variables to be set when loading this environment. Each environment variable is specified as a two-element list containing the variable name and its value. You may reference other environment variables when defining an environment variable here. ReFrame will expand its value. Variables are set after the environment modules are loaded.
Added in version 4.0.0.
- environments.variables¶
Deprecated since version 4.0.0: Please use env_vars instead.
- environments.features¶
- Required:
No
- Default:
[]
User defined features of the environment. These are accessible through the features attribute of the current_environ and can also be selected through the extended syntax of valid_prog_environs. The values of this list must be alphanumeric strings starting with a non-digit character and may also contain a -.
Added in version 3.11.0.
- environments.extras¶
- Required:
No
- Default:
{}
User defined attributes of the environment. These are accessible through the extras attribute of the current_environ and can also be selected through the extended syntax of valid_prog_environs. The attributes of this object must be alphanumeric strings starting with a non-digit character and their values can be of any type.
Added in version 3.9.1.
- environments.prepare_cmds¶
- Required:
No
- Default:
[]
List of shell commands to be emitted before any commands that load the environment.
Added in version 4.3.0.
- environments.cc¶
- Required:
No
- Default:
"cc"
The C compiler to be used with this environment.
- environments.cxx¶
- Required:
No
- Default:
"CC"
The C++ compiler to be used with this environment.
- environments.ftn¶
- Required:
No
- Default:
"ftn"
The Fortran compiler to be used with this environment.
- environments.cppflags¶
- Required:
No
- Default:
[]
A list of C preprocessor flags to be used with this environment by default.
- environments.cflags¶
- Required:
No
- Default:
[]
A list of C flags to be used with this environment by default.
- environments.cxxflags¶
- Required:
No
- Default:
[]
A list of C++ flags to be used with this environment by default.
- environments.fflags¶
- Required:
No
- Default:
[]
A list of Fortran flags to be used with this environment by default.
- environments.ldflags¶
- Required:
No
- Default:
[]
A list of linker flags to be used with this environment by default.
- environments.nvcc¶
- Required:
No
- Default:
"nvcc"
The NVIDIA CUDA compiler to be used with this environment.
Added in version 4.6.
- environments.target_systems¶
- Required:
No
- Default:
["*"]
A list of systems or system/partition combinations that this environment definition is valid for. A * entry denotes any system. In case of multiple definitions of an environment, the most specific to the current system partition will be used. For example, if the current system/partition combination is daint:mc, the second definition of the PrgEnv-gnu environment will be used:
'environments': [
    {
        'name': 'PrgEnv-gnu',
        'modules': ['PrgEnv-gnu']
    },
    {
        'name': 'PrgEnv-gnu',
        'modules': ['PrgEnv-gnu', 'openmpi'],
        'cc': 'mpicc',
        'cxx': 'mpicxx',
        'ftn': 'mpif90',
        'target_systems': ['daint:mc']
    }
]
However, if the current system was daint:gpu, the first definition would be selected, despite the fact that the second definition is relevant for another partition of the same system. To better understand this, ReFrame resolves definitions in a hierarchical way. It first looks for definitions for the current partition, then for the containing system and, finally, for global definitions (the * pseudo-system).
- environments.resources¶
- Required:
No
- Default:
{}
Scheduler resources associated with this environment.
This is the equivalent of a test’s extra_resources.
Added in version 4.6.
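As a sketch, an environment could request a partition-defined resource, such as the gpu resource shown in the partition resources example earlier; the environment and module names here are placeholders:
'environments': [
    {
        'name': 'gpu-env',
        'modules': ['cuda'],
        'resources': {
            'gpu': {'num_gpus_per_node': '4'}
        }
    }
]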
Logging Configuration¶
Logging in ReFrame is handled by logger objects, which further delegate messages to logging handlers that are eventually responsible for emitting or sending the log records to their destinations. You may define different logger objects per system but not per partition.
- logging.level¶
- Required:
No
- Default:
"undefined"
The level associated with this logger object. There are the following levels in decreasing severity order:
- critical: Catastrophic errors; the framework cannot proceed with its execution.
- error: Normal errors; the framework may or may not proceed with its execution.
- warning: Warning messages.
- info: Informational messages.
- verbose: More informational messages.
- debug: Debug messages.
- debug2: Further debug messages.
- undefined: This is the lowest level; it does not filter any message.
If a message is logged by the framework, its severity level will be checked by the logger and, if it is higher than the logger’s level, it will be passed down to its handlers.
Added in version 3.3: The debug2 and undefined levels are added.
Changed in version 3.3: The default level is now undefined.
- logging.handlers¶
- Required:
Yes
A list of logging handlers responsible for handling normal framework output.
- logging.handlers_perflog¶
- Required:
Yes
A list of logging handlers responsible for handling performance data from tests.
- logging.perflog_compat¶
- Required:
No
- Default:
false
Emit a separate log record for each performance variable. Set this option to
true
if you want to keep compatibility with the performance logging prior to ReFrame 4.0.
- logging.target_systems¶
- Required:
No
- Default:
["*"]
A list of systems or system/partitions combinations that this logging configuration is valid for. For a detailed description of this property, have a look at the
target_systems
definition for environments.
Common logging handler properties¶
All logging handlers share the following set of common attributes:
- logging.handlers.type¶
- logging.handlers_perflog.type¶
- Required:
Yes
The type of handler. There are the following types available:
- file: This handler sends log records to file. See here for more details.
- filelog: This handler sends performance log records to files. See here for more details.
- graylog: This handler sends performance log records to Graylog. See here for more details.
- stream: This handler sends log records to a file stream. See here for more details.
- syslog: This handler sends log records to a Syslog facility. See here for more details.
- httpjson: This handler sends log records in JSON format using HTTP post requests. See here for more details.
- logging.handlers.level¶
- logging.handlers_perflog.level¶
- Required:
No
- Default:
"info"
The log level associated with this handler.
- logging.handlers.format¶
- logging.handlers_perflog.format¶
- Required:
No
- Default:
"%(message)s"
Log record format string.
ReFrame accepts all log record placeholders from Python’s logging mechanism and adds the following ones:
- %(check_build_locally)s: The value of the build_locally attribute.
- %(check_build_time_limit)s: The value of the build_time_limit attribute.
- %(check_descr)s: The value of the descr attribute.
- %(check_display_name)s: The value of the display_name attribute.
- %(check_environ)s: The name of the test’s current_environ.
- %(check_env_vars)s: The value of the env_vars attribute.
- %(check_exclusive_access)s: The value of the exclusive_access attribute.
- %(check_executable)s: The value of the executable attribute.
- %(check_executable_opts)s: The value of the executable_opts attribute.
- %(check_extra_resources)s: The value of the extra_resources attribute.
- %(check_fail_phase)s: The phase where the test has failed.
- %(check_fail_reason)s: The failure reason if the test has failed.
- %(check_hashcode)s: The unique hash associated with this test.
- %(check_info)s: Various information about this test; essentially the return value of the test’s info() function.
- %(check_job_completion_time)s: Same as the %(check_job_completion_time_unix)s but formatted according to datefmt.
- %(check_job_completion_time_unix)s: The completion time of the associated run job (see completion_time).
- %(check_job_exitcode)s: The exit code of the associated run job.
- %(check_job_nodelist)s: The list of nodes that the associated run job has run on.
- %(check_job_submit_time)s: The submission time of the associated run job (see submit_time).
- %(check_jobid)s: The ID of the associated run job.
- %(check_keep_files)s: The value of the keep_files attribute.
- %(check_local)s: The value of the local attribute.
- %(check_maintainers)s: The value of the maintainers attribute.
- %(check_max_pending_time)s: The value of the max_pending_time attribute.
- %(check_modules)s: The value of the modules attribute.
- %(check_name)s: The value of the name attribute.
- %(check_num_cpus_per_task)s: The value of the num_cpus_per_task attribute.
- %(check_num_gpus_per_node)s: The value of the num_gpus_per_node attribute.
- %(check_num_tasks)s: The value of the num_tasks attribute.
- %(check_num_tasks_per_core)s: The value of the num_tasks_per_core attribute.
- %(check_num_tasks_per_node)s: The value of the num_tasks_per_node attribute.
- %(check_num_tasks_per_socket)s: The value of the num_tasks_per_socket attribute.
- %(check_outputdir)s: The value of the outputdir attribute.
- %(check_partition)s: The name of the test’s current_partition.
- %(check_perfvalues)s: All the performance variables of the test combined. These will be formatted according to format_perfvars.
- %(check_postbuild_cmds)s: The value of the postbuild_cmds attribute.
- %(check_postrun_cmds)s: The value of the postrun_cmds attribute.
- %(check_prebuild_cmds)s: The value of the prebuild_cmds attribute.
- %(check_prefix)s: The value of the prefix attribute.
- %(check_prerun_cmds)s: The value of the prerun_cmds attribute.
- %(check_result)s: The result of the test (pass or fail).
- %(check_readonly_files)s: The value of the readonly_files attribute.
- %(check_short_name)s: The value of the short_name attribute.
- %(check_sourcepath)s: The value of the sourcepath attribute.
- %(check_sourcesdir)s: The value of the sourcesdir attribute.
- %(check_stagedir)s: The value of the stagedir attribute.
- %(check_strict_check)s: The value of the strict_check attribute.
- %(check_system)s: The name of the test’s current_system.
- %(check_tags)s: The value of the tags attribute.
- %(check_time_limit)s: The value of the time_limit attribute.
- %(check_unique_name)s: The value of the unique_name attribute.
- %(check_use_multithreading)s: The value of the use_multithreading attribute.
- %(check_valid_prog_environs)s: The value of the valid_prog_environs attribute.
- %(check_valid_systems)s: The value of the valid_systems attribute.
- %(check_variables)s: DEPRECATED: Please use %(check_env_vars)s instead.
- %(hostname)s: The hostname where ReFrame runs.
- %(osuser)s: The name of the OS user running ReFrame.
- %(osgroup)s: The name of the OS group running ReFrame.
- %(version)s: The ReFrame version.
ReFrame allows you to log any test variable, parameter or property if they are marked as “loggable”. The log record placeholder will have the form %(check_NAME)s, where NAME is the variable name, the parameter name or the property name that is marked as loggable.
There is also the special %(check_#ALL)s format placeholder, which expands to all the loggable test attributes. These include all the above placeholders and any additional loggable variables or parameters defined by the test. On expanding this placeholder, ReFrame will try to guess the delimiter to use for separating the different attributes based on the existing format. If it cannot guess it, it will default to |.
Since this can lead to very long records, you may consider using it with the ignore_keys parameter to filter out some attributes that are not of interest.
Added in version 3.3: Allow arbitrary test attributes to be logged.
Added in version 3.4.2: Allow arbitrary job attributes to be logged.
Changed in version 3.11.0: Limit the number of attributes that can be logged. User attributes or properties must be explicitly marked as “loggable” in order to be selectable for logging.
Added in version 4.0: The %(check_result)s placeholder is added.
Added in version 4.3: The %(check_#ALL)s special placeholder is added.
Added in version 4.7: The %(check_fail_phase)s and %(check_fail_reason)s placeholders are added.
Added in version 4.8: The %(hostname)s placeholder is added.
- logging.handlers.format_perfvars¶
- logging.handlers_perflog.format_perfvars¶
- Required:
No
- Default:
""
Format specifier for logging the performance variables.
This defines how the %(check_perfvalues)s will be formatted. Since a test may define multiple performance variables, the formatting specified in this field will be repeated for each performance variable sequentially in the same line.
Important
The last character of this format will be interpreted as the final delimiter of the formatted performance variables to the rest of the record.
The following log record placeholders are defined additionally by this format specifier:
- %(check_perf_lower_thres)s: The lower threshold of the logged performance variable.
- %(check_perf_ref)s: The reference value of the logged performance variable.
- %(check_perf_unit)s: The measurement unit of the logged performance variable.
- %(check_perf_upper_thres)s: The upper threshold of the logged performance variable.
- %(check_perf_value)s: The actual value of the logged performance variable.
- %(check_perf_var)s: The name of the logged performance variable.
Important
ReFrame versions prior to 4.0 logged a separate line for each performance variable and the %(check_perf_*)s attributes could be used directly in the format. You can re-enable this behavior by setting the config.logging.perflog_compat logging configuration parameter.
Added in version 4.0.0.
- logging.handlers.datefmt¶
- logging.handlers_perflog.datefmt
- Required:
No
- Default:
"%FT%T"
Time format to be used for printing timestamp fields. There are two timestamp fields available: %(asctime)s and %(check_job_completion_time)s. In addition to the format directives supported by the standard library’s time.strftime() function, ReFrame allows you to use the %:z directive – a GNU date extension – that will print the time zone difference in an RFC3339 compliant way, i.e., +/-HH:MM instead of +/-HHMM.
The file log handler¶
This log handler handles output to normal files.
The additional properties for the file handler are the following:
- logging.handlers..file..name¶
- logging.handlers_perflog..file..name¶
- Required:
No
The name of the file where this handler will write log records. If not specified, ReFrame will create a log file prefixed with rfm- in the system’s temporary directory.
Changed in version 3.3: The name parameter is no longer required and the default log file resides in the system’s temporary directory.
- logging.handlers..file..append¶
- logging.handlers_perflog..file..append¶
- Required:
No
- Default:
false
Controls whether this handler should append to its file or not.
- logging.handlers..file..timestamp¶
- logging.handlers_perflog..file..timestamp¶
- Required:
No
- Default:
false
Append a timestamp to this handler’s log file. This property may also accept a date format as described in the datefmt property. If the handler’s name property is set to filename.log and this property is set to true or to a specific timestamp format, the resulting log file will be filename_<timestamp>.log.
The filelog log handler¶
This handler is meant for performance logging only and logs the performance of a test in one or more files.
The additional properties for the filelog handler are the following:
- logging.handlers_perflog..filelog..basedir¶
- Required:
No
- Default:
"./perflogs"
The base directory of performance data log files.
- logging.handlers_perflog..filelog..ignore_keys¶
A list of log record format placeholders that will be ignored by the special %(check_#ALL)s placeholder.
Added in version 4.3.
- logging.handlers_perflog..filelog..prefix¶
- Required:
Yes
This is a directory prefix (usually dynamic), appended to the basedir, where the performance logs of a test will be stored. This attribute accepts any of the check-specific formatting placeholders. This allows you to create dynamic paths based on the current system, partition and/or programming environment a test executes with. For example, a value of %(check_system)s/%(check_partition)s would generate the following structure of performance log files:
{basedir}/
    system1/
        partition1/
            <test_class_name>.log
        partition2/
            <test_class_name>.log
        ...
    system2/
        ...
- logging.handlers_perflog..filelog..append¶
- Required:
No
- Default:
true
Open each log file in append mode.
Changed in version 4.0.0: The filelog handler is very cautious when generating a test log file: if a change is detected in the information that is being logged, the handler will not append to the same file, but will instead create a new one, saving the old file with the .h<N> suffix, where N is an integer that is increased every time a new file is created due to such changes. Examples of changes in the logged information are when the log record format changes or a new performance metric is added, deleted or has its name changed. This behavior guarantees that each log file is consistent and will not break existing parsers.
Changed in version 4.3: In the generated log file, the name of the test class name is used instead of the test’s short name (which included the test’s hash). This allows the results of different variants of a parameterized test to be stored in the same log file facilitating post-processing.
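A hedged example of a filelog performance handler combining the properties above; the format strings are illustrative choices, not defaults. Note that the trailing | in format_perfvars acts as the final delimiter, as explained earlier:
{
    'type': 'filelog',
    'basedir': './perflogs',
    'prefix': '%(check_system)s/%(check_partition)s',
    'level': 'info',
    'format': '%(check_job_completion_time)s|%(check_name)s|%(check_perfvalues)s%(check_result)s',
    'format_perfvars': '%(check_perf_var)s=%(check_perf_value)s %(check_perf_unit)s|',
    'append': True
}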
The graylog log handler¶
This handler is meant for performance logging only and sends log records to a Graylog server.
The additional properties for the graylog handler are the following:
- logging.handlers_perflog..graylog..address¶
- Required:
Yes
The address of the Graylog server defined as
host:port
.
- logging.handlers_perflog..graylog..extras¶
- Required:
No
- Default:
{}
A set of optional key/value pairs to be passed with each log record to the server. These may depend on the server configuration.
This log handler uses pygelf internally. If pygelf is not available, this log handler will be ignored. GELF is a format specification for log messages that are sent over the network. The graylog handler sends log messages in JSON format using an HTTP POST request to the specified address. More details on this log format may be found here.
An example configuration of this handler for performance logging is shown here:
{
'type': 'graylog',
'address': 'graylog-server:12345',
'level': 'info',
'format': '%(message)s',
'extras': {
'facility': 'reframe',
'data-version': '1.0'
}
}
Although the format attribute is defined for this handler, it is not only the log message that will be transmitted to the Graylog server. This handler transmits the whole log record, meaning that all the information will be available and indexable at the remote end.
The stream log handler¶
This handler sends log records to a file stream.
The additional properties for the stream handler are the following:
- logging.handlers..stream..name¶
- logging.handlers_perflog..stream..name¶
- Required:
No
- Default:
"stdout"
The name of the file stream to send records to. There are only two available streams:
- stdout: the standard output.
- stderr: the standard error.
The syslog log handler¶
This handler sends log records to UNIX syslog.
The additional properties for the syslog handler are the following:
- logging.handlers..syslog..socktype¶
- logging.handlers_perflog..syslog..socktype¶
- Required:
No
- Default:
"udp"
The socket type where this handler will send log records to. There are two socket types:
- udp: A UDP datagram socket.
- tcp: A TCP stream socket.
- logging.handlers..syslog..facility¶
- logging.handlers_perflog..syslog..facility¶
- Required:
No
- Default:
"user"
The Syslog facility where this handler will send log records to. The list of supported facilities can be found here.
- logging.handlers..syslog..address¶
- logging.handlers_perflog..syslog..address¶
- Required:
Yes
The socket address where this handler will connect to. This can either be of the form
<host>:<port>
or simply a path that refers to a Unix domain socket.
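For illustration, a syslog handler entry might look like this sketch; the Unix socket path is a placeholder:
{
    'type': 'syslog',
    'address': '/dev/log',
    'facility': 'user',
    'socktype': 'udp',
    'level': 'info',
    'format': '%(message)s'
}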
The httpjson log handler¶
This handler sends log records in JSON format to a server using HTTP POST requests.
The additional properties for the httpjson handler are the following:
- logging.handlers_perflog..httpjson..url¶
- Required:
Yes
The URL to be used in the HTTP(S) request server.
- logging.handlers_perflog..httpjson..extra_headers¶
- Required:
No
- Default:
{}
A set of optional key/value pairs to be sent as HTTP message headers (e.g. API keys). These may depend on the server configuration.
Added in version 4.2.
- logging.handlers_perflog..httpjson..extras¶
- Required:
No
- Default:
{}
A set of optional key/value pairs to be passed with each log record to the server. These may depend on the server configuration.
- logging.handlers_perflog..httpjson..ignore_keys¶
- Required:
No
- Default:
[]
These keys will be excluded from the log record that will be sent to the server.
The httpjson
handler sends log messages in JSON format using an HTTP POST request to the specified URL.
An example configuration of this handler for performance logging is shown here:
{
'type': 'httpjson',
'url': 'http://httpjson-server:12345/rfm',
'level': 'info',
'extra_headers': {'Authorization': 'Token YOUR_API_TOKEN'},
'extras': {
'facility': 'reframe',
'data-version': '1.0'
},
'ignore_keys': ['check_perfvalues']
}
This handler transmits the whole log record, meaning that all the information will be available and indexable at the remote end.
- logging.handlers_perflog..httpjson..debug¶
- Required:
No
- Default:
false
If set, the httpjson handler will not attempt to send the data to the server, but it will instead dump the JSON record in the current directory. The filename has the following form: httpjson_record_<timestamp>.json.
Added in version 4.1.
- logging.handlers_perflog..httpjson..json_formatter¶
A callable for converting the log record into JSON.
The formatter’s signature is the following:
- json_formatter(record: object, extras: Dict[str, str], ignore_keys: Set[str]) str ¶
- Parameters:
- record – The prepared log record. The log record is a simple Python object with all the placeholders listed in format, as well as all the default Python log record placeholders. In addition to those, there is also the special __rfm_check__ attribute that contains a reference to the actual test for which the performance is being logged.
- extras – Any extra attributes specified in extras.
- ignore_keys – The set of keys specified in ignore_keys. ReFrame always adds the default Python log record placeholders in this set.
- Returns:
A string representation of the JSON record to be sent to the server or
None
if the record should not be sent to the server.
Note
This configuration parameter can only be used in a Python configuration file.
Added in version 4.1.
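Since this parameter can only be set in a Python configuration file, the following is a hedged sketch of such a formatter; filtering out private attributes and falling back to str() for non-serializable values are illustrative choices, not requirements:
import json

def my_json_formatter(record, extras, ignore_keys):
    # Collect the record's attributes, skipping ignored and private keys
    # (this assumes the record exposes its attributes via __dict__).
    payload = {
        key: val for key, val in record.__dict__.items()
        if key not in ignore_keys and not key.startswith('_')
    }
    # Merge in the handler's 'extras' key/value pairs.
    payload.update(extras)
    # Returning None instead would prevent the record from being sent.
    return json.dumps(payload, default=str)
The handler would then reference it as 'json_formatter': my_json_formatter in the configuration dictionary.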
Execution Mode Configuration¶
ReFrame allows you to define groups of command line options that are collectively called execution modes. An execution mode can then be selected from the command line with the --mode option. The options of an execution mode will be passed to ReFrame as if they were specified in the command line.
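As an illustrative sketch, a nightly execution mode could bundle a few options using the name, options and target_systems properties described below; the tag, report file and system name are placeholders:
'modes': [
    {
        'name': 'nightly',
        'options': ['--tag=nightly', '--report-file=nightly-report.json'],
        'target_systems': ['mycluster']
    }
]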
- modes.name¶
- Required:
Yes
The name of this execution mode. This can be used with the
--mode
command line option to invoke this mode.
- modes.options¶
- Required:
No
- Default:
[]
The command-line options associated with this execution mode.
- modes.target_systems¶
- Required:
No
- Default:
["*"]
A list of systems only (not system/partition combinations) that this execution mode is valid for. For a detailed description of this property, have a look at the target_systems definition for environments.
Result storage configuration¶
Added in version 4.7.
- storage.backend¶
- Required:
No
- Default:
"sqlite"
The backend to use for storing the test results.
Currently, only SQLite can be used as a storage backend.
- storage.enable¶
- Required:
No
- Default:
true
Enable results storage.
- storage.sqlite_conn_timeout¶
- Required:
No
- Default:
60
Timeout in seconds for SQLite database connections.
- storage.sqlite_db_file¶
- Required:
No
- Default:
"${HOME}/.reframe/reports/results.db"
The SQLite database file to use.
- storage.sqlite_db_file_mode¶
- Required:
No
- Default:
"644"
The permissions of the SQLite database file in octal form.
The mode will only be taken into account upon creation of the DB file. Permissions of an existing DB file have to be changed manually.
- storage.target_systems¶
- Required:
No
- Default:
["*"]
A list of systems only (not system/partition combinations) that this storage configuration is valid for.
For a detailed description of this property, have a look at the target_systems definition for environments.
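A hedged example of a storage section; the values shown simply restate the defaults except for the connection timeout:
'storage': [
    {
        'enable': True,
        'backend': 'sqlite',
        'sqlite_db_file': '${HOME}/.reframe/reports/results.db',
        'sqlite_conn_timeout': 30
    }
]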
General Configuration¶
- general.check_search_path¶
- Required:
No
- Default:
["${RFM_INSTALL_PREFIX}/checks/"]
A list of paths (files or directories) where ReFrame will look for regression test files. If the search path is set through the environment variable, it should be a colon-separated list. If specified from the command line, the search path is constructed by passing the command-line option multiple times.
- general.check_search_recursive¶
- Required:
No
- Default:
false
Search directories in the search path recursively.
- general.clean_stagedir¶
- Required:
No
- Default:
true
Clean stage directory of tests before populating it.
Added in version 3.1.
- general.colorize¶
- Required:
No
- Default:
true
Use colors in output. The command-line option sets the configuration option to
false
.
- general.compress_report¶
- Required:
No
- Default:
false
Compress the generated run report file. See the documentation of the
--compress-report
option for more information.Added in version 3.12.0.
- general.dump_pipeline_progress¶
Dump pipeline progress for the asynchronous execution policy in pipeline-progress.json. This option is meant for debug purposes only.
- Required:
No
- Default:
false
Added in version 3.10.0.
- general.failure_inspect_lines¶
- Required:
No
- Default:
10
Number of the last lines of stdout/stderr to be printed in case of test failures.
Added in version 4.7.
- general.flex_alloc_strict¶
- Required:
No
- Default:
false
Fail flexible tests if their minimum task requirement is not satisfied.
Added in version 4.7.
- general.git_timeout¶
- Required:
No
- Default:
5
Timeout value in seconds used when checking if a git repository exists.
- general.pipeline_timeout¶
Timeout in seconds for advancing the pipeline in the asynchronous execution policy.
ReFrame’s asynchronous execution policy will try to advance as many tests as possible in their pipeline, but some tests may take too long to proceed (e.g., due to copying of large files) blocking the advancement of previously started tests. If this timeout value is exceeded and at least one test has progressed, ReFrame will stop processing new tests and it will try to further advance tests that have already started. See Tweaking the throughput and interactivity of test jobs in the asynchronous execution policy for more guidance on how to set this.
- Required:
No
- Default:
10
Added in version 3.10.0.
- general.perf_info_level¶
- Required:
No
- Default:
"info"
The log level at which the immediate performance info will be printed.
As soon as a performance test is finished, ReFrame will log its performance on the standard output immediately. This option controls at which verbosity level this info will appear.
For a list of available log levels, refer to the
level
logger configuration parameter.Added in version 4.0.0.
- general.remote_detect¶
- Required:
No
- Default:
false
Try to auto-detect processor information of remote partitions as well. This may slow down the initialization of the framework, since it involves submitting auto-detection jobs to the remote partitions.
Added in version 3.7.0.
- general.remote_install¶
- Required:
No
- Default:
[]
List of commands for installing ReFrame on the remote partition in order to auto-detect processor information.
Added in version 4.7.0.
- general.remote_workdir¶
- Required:
No
- Default:
"."
The temporary directory prefix that will be used to create a fresh ReFrame clone, in order to auto-detect the processor information of a remote partition.
Added in version 3.7.0.
- general.ignore_check_conflicts¶
- Required:
No
- Default:
false
Ignore test name conflicts when loading tests.
Deprecated since version 3.8.0: This option will be removed in a future version.
- general.topology_prefix¶
- Required:
No
- Default:
"${HOME}/.reframe/topology"
Directory prefix for storing the auto-detected processor topology.
Added in version 4.7.
- general.trap_job_errors¶
- Required:
No
- Default:
false
Trap command errors in the generated job scripts and let them exit immediately.
Added in version 3.2.
- general.keep_stage_files¶
- Required:
No
- Default:
false
Keep stage files of tests even if they succeed.
- general.module_map_file¶
- Required:
No
- Default:
""
File containing module mappings.
- general.module_mappings¶
- Required:
No
- Default:
[]
A list of module mappings. If specified through the environment variable, the mappings must be separated by commas. If specified from command line, multiple module mappings are defined by passing the command line option multiple times.
- general.non_default_craype¶
- Required:
No
- Default:
false
Test a non-default Cray Programming Environment. This will emit some special instructions in the generated build and job scripts. See also
--non-default-craype
for more details.
- general.purge_environment¶
- Required:
No
- Default:
false
Purge any loaded environment modules before running any tests.
- general.report_file¶
- Required:
No
- Default:
"${HOME}/.reframe/reports/run-report-{sessionid}.json"
The file where ReFrame will store its report.
Added in version 3.1.
Changed in version 3.2: Default value has changed to avoid generating a report file per session.
Changed in version 4.0.0: Default value was reverted back to generate a new file per run.
- general.report_junit¶
- Required:
No
- Default:
null
The file where ReFrame will store its report in JUnit format. The report adheres to the JUnit XSD schema.
Added in version 3.6.0.
- general.resolve_module_conflicts¶
- Required:
No
- Default:
true
ReFrame by default resolves any module conflicts and emits the right sequence of module unload and module load commands in order to load the requested modules. Setting this option to false disables this behavior. You should avoid setting it to false on modules systems that cannot handle module conflicts automatically, such as early Tmod versions.
Disabling the automatic module conflict resolution, however, can be useful when modules of a remote system partition are not present on the host where ReFrame runs. In order to resolve any module conflicts and generate the right load sequence, ReFrame temporarily loads the requested modules and tracks any conflicts along the way. With this option disabled, ReFrame will simply emit the requested module load commands without attempting to load any module.
Added in version 3.6.0.
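As a sketch, automatic conflict resolution could be disabled only for a system whose modules are not available on the host running ReFrame; the system name below is hypothetical and target_systems is described further down:
    'general': [
        {
            'resolve_module_conflicts': False,
            # Hypothetical remote system whose modules are absent locally.
            'target_systems': ['remote-cluster']
        }
    ]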
- general.save_log_files¶
- Required:
No
- Default:
false
Save any log files generated by ReFrame to its output directory.
- general.target_systems¶
- Required:
No
- Default:
["*"]
A list of systems or system/partition combinations for which these general options are valid. For a detailed description of this property, have a look at the target_systems definition for environments.
- general.table_format¶
- Required:
No
- Default:
"pretty"
Set the formatting of tabular output.
The acceptable values are the following:
- csv: Generate CSV output
- plain: Generate a plain table without any lines
- pretty: (default) Generate a pretty table
- general.timestamp_dirs¶
- Required:
No
- Default:
""
Append a timestamp to ReFrame directory prefixes. Valid formats are those accepted by the time.strftime() function. If specified from the command line without any argument, "%FT%T" will be used as the time format.
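For instance, the sketch below appends an ISO-8601-like timestamp to the directory prefixes; with this format, a run started at 14:30:00 on 2024-03-01 would use prefixes suffixed with 2024-03-01T14:30:00:
    'general': [
        {
            # strftime() format: %F expands to YYYY-MM-DD, %T to HH:MM:SS.
            'timestamp_dirs': '%FT%T'
        }
    ]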
- general.unload_modules¶
- Required:
No
- Default:
[]
A list of environment module objects to unload before executing any test. If specified through the environment variable, a space-separated list of modules is expected. If specified from the command line, multiple modules can be passed by repeating the corresponding command-line option.
- general.use_login_shell¶
- Required:
No
- Default:
false
Use a login shell for the generated job scripts. This option will cause ReFrame to emit -l in the shebang of the generated shell scripts. If set to true, this option may cause ReFrame to fail if the shell permanently changes to a different directory during its startup.
- general.user_modules¶
- Required:
No
- Default:
[]
A list of environment module objects to be loaded before executing any test. If specified through the environment variable, a space-separated list of modules is expected. If specified from the command line, multiple modules can be passed by repeating the corresponding command-line option.
- general.verbose¶
- Required:
No
- Default:
0
Set the verbosity level of the output. The higher the number, the more verbose the output will be. If set to a negative number, this will decrease the verbosity level.
Module Objects¶
Added in version 3.3.
A module object in ReFrame’s configuration represents an environment module. It can either be a simple string or a JSON object with the following attributes:
- environments.modules.name¶
- systems.modules.name¶
- systems.partitions.modules.name¶
- systems.partitions.container_platforms.modules.name¶
- Required:
Yes
The name of the module.
- environments.modules.collection¶
- systems.modules.collection¶
- systems.partitions.modules.collection¶
- systems.partitions.container_platforms.modules.collection¶
- Required:
No
- Default:
false
A boolean value indicating whether this module refers to a module collection. Module collections are treated differently from simple modules when loading.
- environments.modules.path¶
- systems.modules.path¶
- systems.partitions.modules.path¶
- systems.partitions.container_platforms.modules.path¶
- Required:
No
- Default:
null
If the module is not present in the default MODULEPATH, the module’s location can be specified here. ReFrame will make sure to set and restore the MODULEPATH accordingly for loading the module.
Added in version 3.5.0.
See also
Module collections with Environment Modules and Lmod.
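For example, the simple string form and the full object form can be mixed within the same modules list; the module names and the extra module path below are illustrative:
    'environments': [
        {
            'name': 'gnu',
            'modules': [
                # Simple string form
                'cmake',
                # Full object form
                {
                    'name': 'gcc/11.2.0',
                    'collection': False,
                    # Location outside the default MODULEPATH (illustrative)
                    'path': '/apps/custom/modulefiles'
                }
            ]
        }
    ]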
Processor Info¶
Added in version 3.5.0.
A processor info object in ReFrame’s configuration is used to hold information about the processor of a system partition and is made available to the tests through the processor attribute of the current_partition.
Note
In the following, the term logical CPUs refers to the smallest processing unit recognized by the OS.
Depending on the microarchitecture, this can be either a core or, on processors that support simultaneous multithreading with that feature enabled, a hardware thread.
Therefore, properties such as num_cpus_per_core may have a value greater than one.
- systems.partitions.processor.arch¶
- Required:
No
- Default:
None
The microarchitecture of the processor.
- systems.partitions.processor.model¶
- Required:
No
- Default:
None
The model of the processor.
Added in version 4.6.
- systems.partitions.processor.platform¶
- Required:
No
- Default:
None
The hardware platform for this processor (e.g., x86_64, arm64 etc.).
Added in version 4.6.
- systems.partitions.processor.num_cpus¶
- Required:
No
- Default:
None
Number of logical CPUs.
- systems.partitions.processor.num_cpus_per_core¶
- Required:
No
- Default:
None
Number of logical CPUs per core.
- systems.partitions.processor.num_cpus_per_socket¶
- Required:
No
- Default:
None
Number of logical CPUs per socket.
- systems.partitions.processor.num_sockets¶
- Required:
No
- Default:
None
Number of sockets.
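To illustrate how these fields relate, here are consistent values for a hypothetical dual-socket node with 32 cores per socket and 2-way SMT enabled (64 cores and 128 logical CPUs in total); the microarchitecture name is illustrative:
    'processor': {
        'arch': 'zen2',              # illustrative microarchitecture
        'num_cpus': 128,             # 64 cores x 2 hardware threads
        'num_cpus_per_core': 2,      # SMT enabled
        'num_cpus_per_socket': 64,   # 128 logical CPUs / 2 sockets
        'num_sockets': 2
    }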
- systems.partitions.processor.topology¶
- Required:
No
- Default:
None
Processor topology. An example follows:
'topology': {
    'numa_nodes': ['0x000000ff'],
    'sockets': ['0x000000ff'],
    'cores': ['0x00000003', '0x0000000c', '0x00000030', '0x000000c0'],
    'caches': [
        {
            'type': 'L3',
            'size': 6291456,
            'linesize': 64,
            'associativity': 0,
            'num_cpus': 8,
            'cpusets': ['0x000000ff']
        },
        {
            'type': 'L2',
            'size': 262144,
            'linesize': 64,
            'associativity': 4,
            'num_cpus': 2,
            'cpusets': ['0x00000003', '0x0000000c', '0x00000030', '0x000000c0']
        },
        {
            'type': 'L1',
            'size': 32768,
            'linesize': 64,
            'associativity': 0,
            'num_cpus': 2,
            'cpusets': ['0x00000003', '0x0000000c', '0x00000030', '0x000000c0']
        }
    ]
}
Device Info¶
Added in version 3.5.0.
A device info object in ReFrame’s configuration is used to hold information about a specific type of devices in a system partition and is made available to the tests through the devices
attribute of the current_partition
.
- systems.partitions.devices.type¶
- Required:
No
- Default:
None
The type of the device, for example "gpu".
- systems.partitions.devices.arch¶
- Required:
No
- Default:
None
The microarchitecture of the device.
- systems.partitions.devices.model¶
- Required:
No
- Default:
None
The model of the device.
Added in version 4.6.
- systems.partitions.devices.num_devices¶
- Required:
No
- Default:
None
Number of devices of this type inside the system partition.
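Putting the above together, a devices list for a partition with four GPUs could be sketched as follows; the architecture and model strings are illustrative:
    'devices': [
        {
            'type': 'gpu',
            'arch': 'sm_80',     # illustrative device microarchitecture
            'model': 'A100',     # illustrative model name
            'num_devices': 4
        }
    ]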