Test API Reference¶
This page provides a reference guide to the ReFrame API for writing regression tests, covering all the relevant details. Internal data structures and APIs are covered only to the extent that they may be helpful to the end user of the framework.
Test Base Classes¶
- class reframe.core.pipeline.CompileOnlyRegressionTest(*args, **kwargs)[source]¶
Bases:
RegressionTest
Base class for compile-only regression tests.
These tests are by default local and will skip the run phase of the regression test pipeline.
The standard output and standard error of the test will be set to those of the compilation stage.
Compile-only tests do not need to define a sanity checking function, since the test will fail anyway if the compilation fails. However, if a sanity function is defined, it will be used to validate the test.
This class is also directly available under the top-level reframe module.
Changed in version 4.6: Compile-only tests do not require an explicit sanity checking function.
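For illustration, a minimal compile-only test might look as follows. This is only a sketch: the source file name and compiler flag are assumptions, not part of the API.
import reframe as rfm


@rfm.simple_test
class HelloCompileOnlyTest(rfm.CompileOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    build_system = 'SingleSource'
    sourcepath = 'hello.c'   # hypothetical file under the test's src/ directory

    @run_before('compile')
    def set_cflags(self):
        # Illustrative compiler flag; no explicit sanity function is needed
        self.build_system.cflags = ['-O2']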
- check_sanity()[source]¶
The sanity checking phase of the regression test pipeline.
- Raises:
reframe.core.exceptions.SanityError – If the sanity check fails.
reframe.core.exceptions.ReframeSyntaxError – If the sanity function cannot be resolved due to ambiguous syntax.
- setup(partition, environ, **job_opts)[source]¶
The setup stage of the regression test pipeline.
Similar to the
RegressionTest.setup()
, except that no run job is created for this test.
- class reframe.core.pipeline.RegressionMixin(*args, **kwargs)[source]¶
Bases:
object
Base mixin class for regression tests.
Multiple inheritance from more than one RegressionTest class is not allowed in ReFrame. Hence, mixin classes provide the flexibility to bundle reusable test add-ons, leveraging the metaclass magic implemented in RegressionTestMeta. Using this metaclass allows mixin classes to use powerful ReFrame features, such as hooks, parameters or variables.
Added in version 3.4.2.
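As an illustration of this intent, the sketch below bundles a reusable post-init hook in a mixin; the module name and variable are assumptions.
import reframe as rfm
import reframe.utility.sanity as sn


class CudaModuleMixin(rfm.RegressionMixin):
    # Hypothetical variable controlling which environment module to load
    cuda_module = variable(str, value='cuda')

    @run_after('init')
    def load_cuda_module(self):
        self.modules.append(self.cuda_module)


@rfm.simple_test
class GpuInfoCheck(rfm.RunOnlyRegressionTest, CudaModuleMixin):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'nvidia-smi'

    @sanity_function
    def validate(self):
        return sn.assert_found(r'NVIDIA-SMI', self.stdout)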
- class reframe.core.pipeline.RegressionTest(*args, **kwargs)[source]¶
Bases:
RegressionMixin
,JSONSerializable
Base class for regression tests.
All regression tests must eventually inherit from this class. This class provides the implementation of the pipeline phases that the regression test goes through during its lifetime.
This class accepts parameters at the class definition, i.e., the test class can be defined as follows:
class MyTest(RegressionTest, param='foo', ...):
where
param
is one of the following:- Parameters:
pin_prefix – lock the test prefix to the directory where the current class lives.
require_version –
a list of ReFrame version specifications with which this test is allowed to run (see also the sketch after the notes below). A version specification string can have one of the following formats:
- VERSION: Specifies a single version.
- {OP}VERSION, where {OP} can be any of >, >=, <, <=, == and !=. For example, the version specification string '>=3.5.0' will allow the test to be loaded only by ReFrame 3.5.0 and higher. The ==VERSION specification is equivalent to VERSION.
- V1..V2: Specifies a range of versions.
The test will be selected if any of the version specifications is satisfied, even if they conflict.
special – allow pipeline stage methods to be overridden in this class.
Note
Changed in version 2.19: Base constructor takes no arguments.
Added in version 3.3: The pin_prefix class definition parameter is added.
Added in version 3.7.0: The require_version class definition parameter is added.
Warning
Changed in version 3.4.2: Multiple inheritance with a shared common ancestor is not allowed.
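As a sketch only, the class-definition parameters described above can be passed as keyword arguments when the test class is defined; the version string below is illustrative.
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class VersionGatedTest(rfm.RunOnlyRegressionTest, require_version=['>=4.0.0']):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'echo'
    executable_opts = ['hello']

    @sanity_function
    def validate(self):
        return sn.assert_found(r'hello', self.stdout)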
- build_locally = True¶
Added in version 3.3.
Always build the source code for this test locally. If set to False, ReFrame will spawn a build job on the partition where the test will run. Setting this to False is useful when cross-compilation is not supported on the system where ReFrame is run. Normally, ReFrame will mark the test as a failure if the spawned job exits with a non-zero exit code. However, certain scheduler backends, such as squeue, do not set it. In such cases, it is the user's responsibility to check whether the build phase failed by adding an appropriate sanity check.
- Type:
boolean
- Default:
True
- build_system = None¶
Added in version 2.14.
The build system to be used for this test. If not specified, the framework will try to figure it out automatically based on the value of
sourcepath
. This field may be set using either a string referring to a concrete build system class name (see build systems) or an instance of reframe.core.buildsystems.BuildSystem. The former is the recommended way.
- Type:
- Default:
None
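A sketch of the string-based form recommended above; the Make options shown are assumptions about a hypothetical project's Makefile.
import reframe as rfm


class MakeProjectBuild(rfm.CompileOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    build_system = 'Make'   # concrete build system selected by name

    @run_before('compile')
    def set_build_options(self):
        # Hypothetical Makefile variable and a parallel build
        self.build_system.options = ['MODE=release']
        self.build_system.max_concurrency = 4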
- build_time_limit = None¶
Added in version 3.5.1.
The time limit for the build job of the regression test.
It is specified similarly to the
time_limit
attribute.
- check_performance()[source]¶
The performance checking phase of the regression test pipeline.
- Raises:
reframe.core.exceptions.SanityError – If the performance check fails.
- check_sanity()[source]¶
The sanity checking phase of the regression test pipeline.
- Raises:
reframe.core.exceptions.SanityError – If the sanity check fails.
reframe.core.exceptions.ReframeSyntaxError – If the sanity function cannot be resolved due to ambiguous syntax.
- ci_extras = {}¶
Added in version 4.2.
Extra options to be passed to the child CI pipeline generated for this test using the
--ci-generate
option.
This variable is a dictionary whose keys refer to the CI generation backend and whose values can be in any CI backend-specific format.
Currently, the only supported key is 'gitlab' and the value is a Gitlab configuration in JSON format. For example, if we want a pipeline to run only when files in backend or src/main.c have changed, this variable should be set as follows:
ci_extras = {
    'only': {'changes': ['backend/*', 'src/main.c']}
}
- Type:
- Default:
{}
- cleanup(remove_files=False)[source]¶
The cleanup phase of the regression test pipeline.
- Parameters:
remove_files – If
True
, the stage directory associated with this test will be removed.
- compile()[source]¶
The compilation phase of the regression test pipeline.
- Raises:
reframe.core.exceptions.ReframeError – In case of errors.
- compile_complete()[source]¶
Check if the build phase has completed.
- Returns:
True if the associated build job has finished, False otherwise.
If no job descriptor is yet associated with this test, True is returned.
- Raises:
reframe.core.exceptions.ReframeError – In case of errors.
- container_platform = _NoRuntime¶
Added in version 2.20.
The container platform to be used for launching this test.
This field is set automatically by the default container runtime associated with the current system partition. Users may also set this, explicitly overriding any partition setting. If the image attribute of container_platform is set, then the test will run inside a container using the specified container runtime.
self.container_platform = 'Singularity'
self.container_platform.image = 'docker://ubuntu:18.04'
self.container_platform.command = 'cat /etc/os-release'
If the test will run inside a container, the executable and executable_opts attributes are ignored. The container platform's command will be used instead.
Note
Only the run phase of the test will run inside the container. If you enable the containerized run in a non run-only test, the compilation phase will still run natively.
- Type:
str
orContainerPlatform
.- Default:
the container runtime specified in the current system partition’s configuration (see also Container Platform Configuration).
Changed in version 3.12.0: This field is now set automatically from the current partition’s configuration.
- property current_environ¶
The programming environment that the regression test is currently executing with.
This is set by the framework during the
setup()
phase.
- property current_partition¶
The system partition the regression test is currently executing on.
This is set by the framework during the
setup()
phase.
- property current_system¶
The system the regression test is currently executing on.
This is set by the framework during the initialization phase.
- Type:
- depends_on(target, how=None, *args, **kwargs)[source]¶
Add a dependency to another test.
- Parameters:
target – The name of the test that this one will depend on.
how –
A callable that defines how the test cases of this test depend on the test cases of the target test. This callable should accept two arguments:
The source test case (i.e., a test case of this test) represented as a two-element tuple containing the names of the partition and the environment of the current test case.
The destination test case (i.e., a test case of the target test) represented as a two-element tuple containing the names of the partition and the environment of the target test case.
It should return
True
if a dependency between the source and destination test cases exists,False
otherwise.This function will be called multiple times by the framework when the test DAG is constructed, in order to determine the connectivity of the two tests.
In the following example, this test depends on T0 when their partitions match, otherwise their test cases are independent.
def by_part(src, dst):
    p0, _ = src
    p1, _ = dst
    return p0 == p1

self.depends_on('T0', how=by_part)
The framework already offers a set of predefined relations between the test cases of inter-dependent tests. See the reframe.utility.udeps module for more details.
The default how function is reframe.utility.udeps.by_case(), where test cases on different partitions and environments are independent.
Added in version 2.21.
Changed in version 3.3: Dependencies between test cases from different partitions are now allowed. The how argument now accepts a callable.
Deprecated since version 3.3: Passing an integer to the how argument as well as using the subdeps argument is deprecated.
Changed in version 4.0.0: Passing an integer to the how argument is no longer supported.
- descr¶
A detailed description of the test.
- Type:
- Default:
''
Changed in version 4.0: The default value is now the empty string.
- property display_name¶
A human-readable version of the name of this test.
This name contains a string representation of the various parameters of this specific test variant.
- Type:
Note
The display name may not be unique.
Added in version 3.10.0.
- env_vars = {}¶
Environment variables to be set before running this test.
The value of the environment variables can be of any type. ReFrame will invoke str() on it whenever it needs to emit it in a script.
- Type:
Dict[str, object]
- Default:
{}
Added in version 4.0.0.
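For illustration (names and values are arbitrary), environment variables may be set in the class body and adjusted from a hook:
import reframe as rfm
import reframe.utility.sanity as sn


class EnvVarsCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'env'
    # Non-string values are converted with str() when the job script is generated
    env_vars = {'OMP_NUM_THREADS': 4, 'MY_DEBUG': 'on'}

    @run_after('setup')
    def scale_threads(self):
        proc = self.current_partition.processor
        if proc.num_cores:
            self.env_vars['OMP_NUM_THREADS'] = proc.num_cores

    @sanity_function
    def validate(self):
        return sn.assert_found(r'MY_DEBUG=on', self.stdout)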
- exclusive_access = False¶
Specify whether this test needs exclusive access to nodes.
- Type:
boolean
- Default:
False
- executable¶
The name of the executable to be launched during the run phase.
If this variable is undefined when entering the compile pipeline stage, it will be set to
os.path.join('.', self.unique_name)
. Classes that override the compile stage may leave this variable undefined.- Type:
- Default:
required
Changed in version 3.7.3: Default value changed from
os.path.join('.', self.unique_name)
torequired
.
- executable_opts = []¶
List of options to be passed to the
executable
.- Type:
List[str]
- Default:
[]
- extra_resources = {}¶
Added in version 2.8.
Extra resources for this test.
This field is for specifying custom resources needed by this test. These resources are defined in the configuration of a system partition. For example, assume that two additional resources, named
gpu
anddatawarp
, are defined in the configuration file as follows:'resources': [ { 'name': 'gpu', 'options': ['--gres=gpu:{num_gpus_per_node}'] }, { 'name': 'datawarp', 'options': [ '#DW jobdw capacity={capacity}', '#DW stage_in source={stagein_src}' ] } ]
A regression test may then instantiate the above resources by setting the extra_resources attribute as follows:
self.extra_resources = {
    'gpu': {'num_gpus_per_node': 2},
    'datawarp': {
        'capacity': '100GB',
        'stagein_src': '/foo'
    }
}
The generated batch script (for Slurm) will then contain the following lines:
#SBATCH --gres=gpu:2
#DW jobdw capacity=100GB
#DW stage_in source=/foo
Notice that if the resource specified in the configuration uses an alternative directive prefix (in this case #DW), this will replace the standard prefix of the backend scheduler (in this case #SBATCH).
If the resource name specified in this variable does not match a resource name in the partition configuration, it will simply be ignored. The num_gpus_per_node attribute translates internally to the _rfm_gpu resource, so that setting self.num_gpus_per_node = 2 is equivalent to the following:
self.extra_resources = {'_rfm_gpu': {'num_gpus_per_node': 2}}
- Type:
Dict[str, Dict[str, object]]
- Default:
{}
Note
Changed in version 2.9: A new more powerful syntax was introduced that allows also custom job script directive prefixes.
- property fixture_variant¶
The point in the fixture space for the test.
This can be seen as an index to the fixture space representing a unique combination of the fixture variants. This number is directly mapped from
variant_num
.- Type:
- getdep(target, environ=None, part=None)[source]¶
Retrieve the test case of a target dependency.
- Parameters:
target – The name of the target dependency to be retrieved.
environ – The name of the programming environment that will be used to retrieve the test case of the target test. If
None
,RegressionTest.current_environ
will be used.
Added in version 2.21.
Changed in version 3.8.0: Setting
environ
orpart
to'*'
will skip the match check on the environment and partition, respectively.
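A common dependency pattern, sketched below with hypothetical test names and binary path, combines depends_on() with getdep() in a post-setup hook:
import os
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class BuildApp(rfm.CompileOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    build_system = 'Make'


@rfm.simple_test
class RunApp(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']

    @run_after('init')
    def set_dependencies(self):
        self.depends_on('BuildApp')

    @run_after('setup')
    def set_executable(self):
        # Retrieve the target test case for the current partition/environment
        build = self.getdep('BuildApp')
        self.executable = os.path.join(build.stagedir, 'app.x')  # hypothetical binary name

    @sanity_function
    def validate(self):
        return sn.assert_not_found(r'error', self.stderr)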
- info()[source]¶
Provide live information for this test.
This method is used by the front-end to print the status message during the test’s execution. This function is also called to provide the message for the check_info logging attribute. By default, it returns a message reporting the test name, the current partition and the current programming environment that the test is currently executing on.
Added in version 2.10.
- Returns:
a string with an informational message about this test
Note
When overriding this method, you should pay extra attention to how you use the RegressionTest's attributes, because this method may be called at any point of the test's lifetime.
- is_local()[source]¶
Check if the test will execute locally.
A test executes locally if the
local
attribute is set or if the current partition’s scheduler does not support job submission.
- property job¶
The job descriptor associated with this test.
This is set by the framework during the
setup()
phase.- Type:
- keep_files = []¶
List of files to be kept after the test finishes.
By default, the framework saves the standard output, the standard error and the generated shell script that was used to run this test.
These files will be copied over to the test’s output directory during the
cleanup()
phase.Directories are also accepted in this field.
Relative path names are resolved against the stage directory.
- Type:
List[str]
- Default:
[]
Changed in version 3.3: This field accepts now also file glob patterns.
- local = False¶
Always execute this test locally.
- Type:
boolean
- Default:
False
- property logger¶
A logger associated with this test.
You can use this logger to log information for your test.
- maintainers = []¶
List of people responsible for this test.
When the test fails, this contact list will be printed out.
- Type:
List[str]
- Default:
[]
- max_pending_time = None¶
Added in version 3.0.
The maximum time a job can be pending before it starts running.
Time duration is specified in the same way as for the
time_limit
attribute.- Type:
- Default:
None
- modules = []¶
List of modules to be loaded before running this test.
These modules will be loaded during the
setup()
phase.- Type:
List[str]
orDict[str, object]
- Default:
[]
- property name¶
The name of the test.
This is an alias of display_name, but omitting any implicit parameters starting with $ that are inserted by the --repeat, --distribute and other similar options.
Changed in version 4.7: The implicit parameters starting with $ are now omitted.
- num_cpus_per_task = None¶
Number of CPUs per task required by this test.
Ignored if
None
.- Type:
integral or
None
- Default:
None
- num_gpus_per_node = None¶
Number of GPUs per node required by this test. This attribute is translated internally to the
_rfm_gpu
resource. For more information on test resources, have a look at theextra_resources
attribute.- Type:
integral or
None
- Default:
None
Changed in version 4.0.0: The default value changed to
None
.
- num_tasks = 1¶
Number of tasks required by this test.
If the number of tasks is set to zero or a negative value, ReFrame will try to flexibly allocate the number of tasks based on the command line option --flex-alloc-nodes. A negative number is used to indicate the minimum number of tasks required by the test; in this case the minimum number of tasks is the absolute value of the number. Setting num_tasks to zero is equivalent to setting it to -num_tasks_per_node.
Setting num_tasks to None has a scheduler-specific interpretation, but in principle it passes the responsibility of producing a correct job script to the user, who must set the appropriate scheduler options. More specifically, the different backends interpret a None num_tasks as follows:
- flux: not applicable.
- local: not applicable.
- lsf: Neither the -nnodes nor the -n option will be emitted.
- oar: Resets it to 1.
- pbs: Resets it to 1.
- sge: not applicable.
- slurm: Neither the --ntasks nor the --nodes option (if use_nodes_option is specified) will be emitted.
- squeue: See the slurm backend.
- torque: See the pbs backend.
- Type:
integral or
None
- Default:
1
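For instance, a test that requires at least 64 tasks and lets --flex-alloc-nodes scale them up could be declared as in this sketch:
import reframe as rfm
import reframe.utility.sanity as sn


class FlexibleHostnames(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'hostname'
    num_tasks = -64           # at least 64 tasks; actual count set by --flex-alloc-nodes
    num_tasks_per_node = 1

    @sanity_function
    def validate(self):
        return sn.assert_found(r'\S+', self.stdout)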
- num_tasks_per_core = None¶
Number of tasks per core required by this test.
Ignored if
None
.- Type:
integral or
None
- Default:
None
- num_tasks_per_node = None¶
Number of tasks per node required by this test.
Ignored if
None
.- Type:
integral or
None
- Default:
None
- num_tasks_per_socket = None¶
Number of tasks per socket required by this test.
Ignored if
None
.- Type:
integral or
None
- Default:
None
- property outputdir¶
The output directory of the test.
This is set during the
setup()
phase.Added in version 2.13.
- Type:
str
.
- property param_variant¶
The point in the parameter space for the test.
This can be seen as an index to the parameter space representing a unique combination of the parameter values. This number is directly mapped from
variant_num
.- Type:
- perf_patterns¶
Patterns for verifying the performance of this test.
If set to None, no performance checking will be performed.
- Type:
A dictionary with keys of type str and deferrable expressions (i.e., the result of a sanity function) as values. None is also allowed.
- Default:
None
Note
You are advised to follow the new syntax for defining performance variables in your tests, using either the @performance_function builtin or the perf_variables variable.
- perf_variables = {}¶
The performance variables associated with the test.
In this context, a performance variable is a key-value pair, where the key is the desired variable name and the value is the deferred performance expression (i.e. the result of a deferrable performance function) that computes or extracts the performance variable’s value.
By default, ReFrame will populate this field during the test’s instantiation with all the member functions decorated with the
@performance_function
decorator. If no performance functions are present in the class, no performance checking or reporting will be carried out.
This mapping may be extended or replaced by other performance variables that may be defined in any pipeline hook executing before the performance stage. To this end, deferred performance functions can be created inline using the utility
make_performance_function()
.- Type:
A dictionary with keys of type
str
and deferred performance expressions as values (see Deferrable performance functions).- Default:
Collection of performance variables associated to each of the member functions decorated with the
@performance_function
decorator.
Added in version 3.8.0.
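A sketch of extending perf_variables from a pre-performance hook; the benchmark name, output pattern, variable name and unit are assumptions:
import reframe as rfm
import reframe.utility.sanity as sn


class CopyBandwidthCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = './stream.x'   # hypothetical benchmark binary

    @sanity_function
    def validate(self):
        return sn.assert_found(r'Copy:', self.stdout)

    @run_before('performance')
    def set_perf_variables(self):
        # Create a deferred performance function inline and register it
        self.perf_variables['copy_bw'] = sn.make_performance_function(
            sn.extractsingle(r'Copy:\s+(\S+)', self.stdout, 1, float), 'MB/s'
        )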
- postbuild_cmds = []¶
Added in version 3.0.
List of shell commands to be executed after a successful compilation.
These commands are emitted in the script after the actual build commands generated by the selected build system.
- Type:
List[str]
- Default:
[]
- postrun_cmds = []¶
Added in version 3.0.
List of shell commands to execute after the parallel launch command.
See
prerun_cmds
for a more detailed description of the semantics.- Type:
List[str]
- Default:
[]
- prebuild_cmds = []¶
Added in version 3.0.
List of shell commands to be executed before compiling.
These commands are emitted in the build script before the actual build commands generated by the selected build system.
- Type:
List[str]
- Default:
[]
- prerun_cmds = []¶
Added in version 3.0.
List of shell commands to execute before the parallel launch command.
These commands do not execute in the context of ReFrame. Instead, they are emitted in the generated job script just before the actual job launch command.
- Type:
List[str]
- Default:
[]
- readonly_files = []¶
List of files or directories (relative to the
sourcesdir
) that will be symlinked in the stage directory and not copied.You can use this variable to avoid copying very large files to the stage directory.
- Type:
List[str]
- Default:
[]
- reference = {}¶
The set of reference values for this test.
The reference values are specified as a scoped dictionary keyed on the performance variables defined in
perf_patterns
and scoped under the system/partition combinations. The reference itself is a four-tuple that contains the reference value, the lower and upper thresholds and the measurement unit.An example follows:
self.reference = { 'sys0:part0': { 'perfvar0': (50, -0.1, 0.1, 'Gflop/s'), 'perfvar1': (20, -0.1, 0.1, 'GB/s') }, 'sys0:part1': { 'perfvar0': (100, -0.1, 0.1, 'Gflop/s'), 'perfvar1': (40, -0.1, 0.1, 'GB/s') } }
To better understand how to set the performance reference tuple, here are some examples with both positive and negative reference values:
Performance Tuple | Expected | Lowest | Highest
(100, -0.01, 0.02, 'MB/s') | 100 MB/s | 99 MB/s | 102 MB/s
(100, -0.01, None, 'MB/s') | 100 MB/s | 99 MB/s | inf MB/s
(100, None, 0.02, 'MB/s') | 100 MB/s | -inf MB/s | 102 MB/s
(-100, -0.01, 0.02, 'C') | -100 C | -101 C | -98 C
(-100, -0.01, None, 'C') | -100 C | -101 C | inf C
(-100, None, 0.02, 'C') | -100 C | -inf C | -98 C
During the performance stage of the pipeline, the reference tuple elements, except the unit, are passed to the
assert_reference()
function along with the obtained performance value in order to actually assess whether the test passes the performance check or not.- Type:
A scoped dictionary with system names as scopes, performance variables as keys and reference tuples as values. The elements of reference tuples cannot be deferrable expressions.
Note
Changed in version 3.0: The measurement unit is required. The user should explicitly specify None if no unit is available.
Changed in version 3.8.0: The unit in the reference tuple is again optional, but it is recommended to use it for clarity.
Changed in version 4.0.0: Deferrable expressions are not allowed in reference tuples.
- require_reference = False¶
Require that a reference is defined for each system that this test is run on.
If this is set and a reference is not found for the current system, the test will fail.
- Type:
boolean
- Default:
False
Added in version 4.0.0.
- run()[source]¶
The run phase of the regression test pipeline.
This call is non-blocking. It simply submits the job associated with this test and returns.
- run_complete()[source]¶
Check if the run phase has completed.
- Returns:
True if the associated job has finished, False otherwise.
If no job descriptor is yet associated with this test, True is returned.
- Raises:
reframe.core.exceptions.ReframeError – In case of errors.
- run_wait()[source]¶
Wait for the run phase of this test to finish.
- Raises:
reframe.core.exceptions.ReframeError – In case of errors.
- sanity_patterns¶
Patterns for checking the sanity of this test.
If not set, a sanity error may be raised during sanity checking if no other sanity checking functions already exist.
- Type:
A deferrable expression (i.e., the result of a sanity function)
- Default:
required
Note
Changed in version 2.9: The default behaviour has changed and it is now considered a sanity failure if this attribute is set to required.
If a test does not care about its output, this must be stated explicitly as follows:
self.sanity_patterns = sn.assert_true(1)
Changed in version 3.6: The default value has changed from None to required.
Note
You are advised to follow the new syntax for defining the sanity check of the test using the builtin @sanity_function decorator.
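For comparison, the equivalent of a simple sanity_patterns assignment using the recommended @sanity_function syntax is sketched below; the output pattern is illustrative.
import reframe as rfm
import reframe.utility.sanity as sn


class EchoCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'echo'
    executable_opts = ['hello world']

    @sanity_function
    def validate(self):
        # Replaces an explicit self.sanity_patterns assignment
        return sn.assert_found(r'hello world', self.stdout)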
- set_var_default(name, value)[source]¶
Set the default value of a variable if variable is undefined.
A variable is undefined if it is declared and required and no value is yet assigned to it.
- Parameters:
name – The name of the variable.
value – The value to set the variable to.
- Raises:
ValueError – If the variable does not exist
Added in version 3.10.1.
- setup(partition, environ, **job_opts)[source]¶
The setup phase of the regression test pipeline.
- Parameters:
partition – The system partition to set up this test for.
environ – The environment to set up this test for.
job_opts – Options to be passed through to the backend scheduler. When overriding this method users should always pass through
job_opts
to the base class method.
- Raises:
reframe.core.exceptions.ReframeError – In case of errors.
- property short_name¶
A short version of the test’s display name.
The shortened version coincides with the
unique_name
for simple tests and combines the test’s class name and a hash code for parameterised tests.Added in version 4.0.0.
- skip(msg=None)[source]¶
Skip test.
- Parameters:
msg – A message explaining why the test was skipped.
Added in version 3.5.1.
- skip_if(cond, msg=None)[source]¶
Skip test if condition is true.
- Parameters:
cond – The condition to check for skipping the test.
msg – A message explaining why the test was skipped.
Added in version 3.5.1.
- skip_if_no_procinfo(msg=None)[source]¶
Skip test if no processor topology information is available.
This method has an effect only if called after the
setup
stage.- Parameters:
msg – A message explaining why the test was skipped. If not specified, a default message will be used.
Added in version 3.9.1.
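A sketch combining the skip methods in a post-setup hook; the 16-core threshold is an arbitrary example.
import reframe as rfm
import reframe.utility.sanity as sn


class BigNodeOnlyCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'hostname'

    @run_after('setup')
    def skip_small_nodes(self):
        # Skip if no processor topology information is available
        self.skip_if_no_procinfo()
        proc = self.current_partition.processor
        self.skip_if(proc.num_cores < 16,
                     'this check needs at least 16 cores per node')

    @sanity_function
    def validate(self):
        return sn.assert_found(r'\S+', self.stdout)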
- sourcepath¶
The path to the source file or source directory of the test.
It must be a path relative to the
sourcesdir
, pointing to a subfolder or a file contained insourcesdir
. This applies also in the case wheresourcesdir
is a Git repository.If it refers to a regular file, this file will be compiled using the
SingleSource
build system. If it refers to a directory, ReFrame will try to infer the build system to use for the project and will fall back to using the Make
build system, if it cannot find a more specific one.- Type:
- Default:
''
- sourcesdir = src¶
The directory containing the test’s resources.
This directory may be specified with an absolute path or with a path relative to the location of the test. Its contents will always be copied to the stage directory of the test.
This attribute may also accept a URL, in which case ReFrame will treat it as a Git repository and will try to clone its contents in the stage directory of the test.
If set to
None
, the test has no resources and no action is taken.
str
orNone
- Default:
'src'
if such a directory exists at the test level, otherwiseNone
Note
Changed in version 2.9: Allow
None
values to be set also in regression tests with a compilation phase.
Changed in version 2.10: Support for Git repositories was added.
Changed in version 3.0: Default value is now conditionally set to either
'src'
orNone
.
- property stagedir¶
The stage directory of the test.
This is set during the
setup()
phase.- Type:
str
.
- property stderr¶
The name of the file containing the standard error of the test.
This is set during the
setup()
phase.
This attribute is evaluated lazily, so it can be used inside sanity expressions.
- Type:
str
orNone
if a run job has not yet been created.
- property stdout¶
The name of the file containing the standard output of the test.
This is set during the
setup()
phase.
This attribute is evaluated lazily, so it can be used inside sanity expressions.
- Type:
str
orNone
if a run job has not yet been created.
- strict_check = True¶
Mark this test as a strict performance test.
If a test is marked as non-strict, the performance checking phase will always succeed, unless the
--strict
command-line option is passed when invoking ReFrame.- Type:
boolean
- Default:
True
- tags = set()¶
Set of tags associated with this test.
This test can be selected from the frontend using any of these tags.
- Type:
Set[str]
- Default:
an empty set
- time_limit = None¶
Time limit for this test.
Time limit is specified as a string in the form
<days>d<hours>h<minutes>m<seconds>s
or as number of seconds. If set toNone
, thetime_limit
of the current system partition will be used.Note
Changed in version 2.15: This attribute may be set to
None
.Warning
Changed in version 3.0: The old syntax using a
(h, m, s)
tuple is deprecated.Changed in version 3.2: - The old syntax using a
(h, m, s)
tuple is dropped. - Support of timedelta objects is dropped. - Number values are now accepted.Changed in version 3.5.1: The default value is now
None
and it can be set globally per partition via the configuration.
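For example, both accepted forms look as follows in a test; the values are illustrative.
import reframe as rfm
import reframe.utility.sanity as sn


class SleepCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'sleep'
    executable_opts = ['10']
    time_limit = '1m'        # string form; a plain number of seconds also works

    @sanity_function
    def validate(self):
        return sn.assert_true(1)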
- use_multithreading = None¶
Specify whether this test needs simultaneous multithreading enabled.
Ignored if
None
.- Type:
boolean or
None
- Default:
None
- valid_prog_environs¶
List of programming environments supported by this test.
The syntax of this attribute is exactly the same as that of valid_systems, except that the a:b
entries are invalid.- Type:
List[str]
- Default:
required
See also
Changed in version 2.12: Programming environments can now be specified using wildcards.
Changed in version 2.17: Support for wildcards is dropped.
Changed in version 3.3: Default value changed from
[]
toNone
.Changed in version 3.6: Default value changed from
None
torequired
.Changed in version 3.11.0: Extend syntax to support features and key/value pairs.
- valid_systems¶
List of systems or system features or system properties required by this test.
Each entry in this list is a requirement and can have one of the following forms:
sysname
: The test is valid for system namedsysname
.sysname:partname
: The test is valid for the partitionpartname
of systemsysname
.*
: The test is valid for any system.*:partname
: The test is valid for any partition namedpartname
in any system.+feat
: The test is valid for all partitions that define featurefeat
as a feature.-feat
: The test is valid for all partitions that do not define featurefeat
as a feature.%key=val
: The test is valid for all partitions that define the extra propertykey
with the valueval
.
Multiple features and key/value pairs can be included in a single entry of the valid_systems list, in which case an AND operation on these constraints is implied. For example, the test defining the following will be valid for all systems that define both feat1 and feat2 and set foo=1:
valid_systems = [r'+feat1 +feat2 %foo=1']
Any partition/environment extra or partition resource can be specified as a feature constraint without having to explicitly state this in the partition’s/environment’s feature list. For example, if
key1
is part of the partition/environment extras list, then+key1
will select that partition or environment.
For key/value pair comparisons, ReFrame will automatically convert the value in the key/value spec to the type of the value of the corresponding entry in the partition's extras property. In the above example, if the type of the foo property is integer, 1 will be converted to an integer value. If a conversion to the target type is not possible, then the requested key/value pair is not matched.
Multiple entries in the
valid_systems
list are implicitly ORed, such that the following example implies that the test is valid for either sys1 or for any other system that does not define feat:
valid_systems = ['sys1', '-feat']
- Type:
List[str]
- Default:
required
Changed in version 3.3: Default value changed from
[]
toNone
.Changed in version 3.6: Default value changed from
None
torequired
.Changed in version 3.11.0: Extend syntax to support features and key/value pairs.
- class reframe.core.pipeline.RunOnlyRegressionTest(*args, **kwargs)[source]¶
Bases:
RegressionTest
Base class for run-only regression tests.
This class is also directly available under the top-level
reframe
module.- compile()[source]¶
The compilation phase of the regression test pipeline.
This is a no-op for this type of test.
- compile_wait()[source]¶
Wait for compilation phase to finish.
This is a no-op for this type of test.
- run()[source]¶
The run phase of the regression test pipeline.
The resources of the test are copied to the stage directory and the rest of execution is delegated to the
RegressionTest.run()
.
- setup(partition, environ, **job_opts)[source]¶
The setup stage of the regression test pipeline.
Similar to the
RegressionTest.setup()
, except that no build job is created for this test.
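A minimal run-only test might look like the following sketch; the executable and sanity pattern are illustrative.
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HostnameCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'hostname'
    num_tasks = 2

    @sanity_function
    def validate(self):
        return sn.assert_found(r'\S+', self.stdout)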
Test Decorators¶
- @reframe.core.decorators.simple_test[source]¶
Class decorator for registering tests with ReFrame.
The decorated class must derive from
reframe.core.pipeline.RegressionTest
. This decorator is also available directly under thereframe
module.Added in version 2.13.
Builtins¶
Added in version 3.4.2.
ReFrame test base classes and, in particular, the reframe.core.pipeline.RegressionMixin
class, define a set of functions and decorators that can be used to define essential test elements, such as variables, parameters, fixtures, pipeline hooks etc.
These are called builtins because they are directly available for use inside the test class body that is being defined without the need to import any module.
However, almost all of these builtins are also available from the reframe.core.builtins
module.
The use of this module is required only when creating new tests programmatically using the make_test()
function.
Changed in version 3.7.0: Expose @deferrable
as a builtin.
Changed in version 3.11.0: Builtins are now available also through the reframe.core.builtins
module.
- reframe.core.pipeline.RegressionMixin.bind(func, name=None)¶
Bind a free function to a regression test.
By default, the function is bound with the same name as the free function. However, the function can be bound using a different name with the
name
argument.- Parameters:
func – external function to be bound to a class.
name – bind the function under a different name.
Note
This is the only builtin that is not available through the
reframe.core.builtins
module. The reason is that thebind()
method needs to access the class namespace directly in order to bind the free function to the class.Added in version 3.6.2.
- @reframe.core.builtins.deferrable[source]¶
Convert the decorated function to a deferred expression.
See Deferrable Functions Reference for further information on deferrable functions.
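A sketch of a @deferrable helper used from the sanity expression; the output file name is an assumption.
import os
import reframe as rfm
import reframe.utility.sanity as sn


class DeferrableCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'touch'
    executable_opts = ['out.dat']

    @deferrable
    def output_file_exists(self):
        # Evaluated lazily, during the sanity stage
        return os.path.exists(os.path.join(self.stagedir, 'out.dat'))

    @sanity_function
    def validate(self):
        return sn.assert_true(self.output_file_exists())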
- reframe.core.builtins.fixture(cls, *, scope='test', action='fork', variants='all', variables=None)¶
Insert a new fixture in the current test.
A fixture is a regression test that creates, prepares and/or manages a resource for another regression test. Fixtures may contain other fixtures and so on, forming a directed acyclic graph. A parent fixture (or a regular regression test) requires the resources managed by its child fixtures in order to run, and it may only access these fixture resources after its
setup pipeline stage. The execution of parent fixtures is postponed until all their respective children have completed execution. However, the destruction of the resources managed by a fixture occurs in reverse order, only after all the parent fixtures have been destroyed. This destruction of resources takes place during the cleanup pipeline stage of the regression test. Fixtures must not define the members valid_systems and valid_prog_environs. These variables are defined based on the values specified in the parent test, ensuring that the fixture runs with a suitable system partition and programming environment combination. A fixture's name attribute may be internally mangled depending on the arguments passed during the fixture declaration. Hence, manually setting or modifying the name attribute in the fixture class is disallowed, and breaking this restriction will result in undefined behavior.
Warning
The fixture name mangling is considered an internal framework mechanism and it may change in future versions without any notice. Users must not express any logic in their tests that relies on a given fixture name mangling scheme.
By default, the resources managed by a fixture are private to the parent test. However, it is possible to share these resources across different tests by passing the appropriate fixture
scope
argument. The different scope levels are independent from each other and a fixture only executes once per scope, where all the tests that belong to that same scope may use the same resources managed by a given fixture instance. The available scopes are:session: This scope encloses all the tests and fixtures that run in the full ReFrame session. This may include tests that use different system partition and programming environment combinations. The fixture class must derive from
RunOnlyRegressionTest
to avoid any implicit dependencies on the partition or the programming environment used.partition: This scope spans across a single system partition. This may include different tests that run on the same partition but use different programming environments. Fixtures with this scope must be independent of the programming environment, which restricts the fixture class to derive from
RunOnlyRegressionTest
.environment: The extent of this scope covers a single combination of system partition and programming environment. Since the fixture is guaranteed to have the same partition and programming environment as the parent test, the fixture class can be any derived class from
RegressionTest
.test: This scope covers a single instance of the parent test, where the resources provided by the fixture are exclusive to each parent test instance. The fixture class can be any derived class from
RegressionTest
.
Rather than specifying the scope at the fixture class definition, ReFrame fixtures set the scope level from the consumer side (i.e. when used by another test or fixture). A test may declare multiple fixtures using the same class, where fixtures with different scopes are guaranteed to point to different instances of the fixture class. On the other hand, when two or more fixtures use the same fixture class and have the same scope, these different fixtures will point to the same underlying resource if the fixtures refer to the same variant of the fixture class. The example below illustrates the different fixture scope usages:
class MyFixture(rfm.RunOnlyRegressionTest): my_var = variable(int, value=1) ... @rfm.simple_test class TestA(rfm.RegressionTest): valid_systems = ['p1', 'p2'] valid_prog_environs = ['e1', 'e2'] # Fixture shared throughout the full session f1 = fixture(MyFixture, scope='session') # Fixture shared for each supported partition f2 = fixture(MyFixture, scope='partition') # Fixture shared for each supported part+environ f3 = fixture(MyFixture, scope='environment') # Fixture private evaluation of MyFixture f4 = fixture(MyFixture, scope='test') ... @rfm.simple_test class TestB(rfm.RegressionTest): valid_systems = ['p1'] valid_prog_environs = ['e1'] # Another private instance of MyFixture f1 = fixture(MyFixture, scope='test') # Same as f3 in TestA for p1 + e1 f2 = fixture(MyFixture, scope='environment') # Same as f1 in TestA f3 = fixture(MyFixture, scope='session') ... @run_after('setup') def access_fixture_resources(self): # Dummy pipeline hook to illustrate fixture resource access assert self.f1.my_var is not self.f2.my_var assert self.f1.my_var is not self.f3.my_var
TestA
supports two different valid systems and another two valid programming environments. Assuming that both environments are supported by each of the system partitions'p1'
and'p2'
, this test will execute a total of four times. This test uses the very simpleMyFixture
fixture multiple times using different scopes, where fixturef1
(session scope) will be shared across the four test instances, and fixturef4
(test scope) will be executed once per test instance. On the other hand,f2
(partition scope) will run once per partition supported by testTestA
, and the multiple per-partition executions (i.e. for each programming environment) will share the same underlying resource forf2
. Lastly,f3
will run a total of four times, which is once per partition and environment combination. This simpleTestA
shows how multiple instances from the same test can share resources, but the real power behind fixtures is illustrated withTestB
, where this resource sharing is extended across different tests. For simplicity,TestB
only supports a single partition'p1'
and programming environment'e1'
, and similarly toTestA
,f1
(test scope) causes a private evaluation of the fixtureMyFixture
. However, the resources managed by fixturesf2
(environment scope) andf3
(session scope) are shared with TestA.
Fixtures are treated by ReFrame as first-class ReFrame tests, which means that these classes can use the same built-in functionalities as in regular tests decorated with
@rfm.simple_test
. This includes theparameter()
built-in, where fixtures may have more than one variant. When this occurs, a parent test may select to either treat a parameterized fixture as a test parameter, or instead, to gather all the fixture variants into a single instance of the parent test. In essence, fixtures implement a fork-join model whose behavior may be controlled through the
argument. This argument may be set to one of the following options:fork: This option parameterizes the parent test as a function of the fixture variants. The fixture handle will resolve to a single instance of the fixture.
join: This option gathers all the variants from a fixture into a single instance of the parent test. The fixture handle will point to a list containing all the fixture variants.
A test may declare multiple fixtures with different
action
options, where the defaultaction
option is'fork'
. The example below illustrates the behavior of these two different options.class ParamFix(rfm.RegressionTest): p = parameter(range(5)) # A simple test parameter ... @rfm.simple_test class TestC(rfm.RegressionTest): # Parameterize TestC for each ParamFix variant f = fixture(ParamFix, action='fork') ... @run_after('setup') def access_fixture_resources(self): print(self.f.p) # Prints the fixture's variant parameter value @rfm.simple_test class TestD(rfm.RegressionTest): # Gather all fixture variants into a single test f = fixture(ParamFix, action='join') ... @run_after('setup') def reduce_range(self): # Sum all the values of p for each fixture variant res = functools.reduce(lambda x, y: x+y, (fix.p for fix in self.f)) n = len(self.f)-1 assert res == (n*n + n)/2
Here
ParamFix
is a simple fixture class with a single parameter. When the testTestC
uses this fixture with a'fork'
action, the test is implicitly parameterized over each variant ofParamFix
. Hence, when theaccess_fixture_resources()
post-setup hook accesses the fixture f, it only accesses a single instance of the ParamFix
fixture. On the other hand, when this same fixture is used with a'join'
action byTestD
, the test is not parameterized and all theParamFix
instances are gathered intof
as a list. Thus, the post-setup pipeline hookreduce_range()
can access all the fixture variants and compute a reduction of the differentp
values.When declaring a fixture, a parent test may select a subset of the fixture variants through the
variants
argument. This variant selection can be done by either passing an iterable containing valid variant indices (see Test variants for further information on how the test variants are indexed), or instead, passing a mapping with the parameter name (of the fixture class) as keys and filtering functions as values. These filtering functions are unary functions that return the value of a boolean expression on the values of the specified parameter, and they all must evaluate toTrue
for at least one of the fixture class variants. See the example below for an illustration on how to filter-out fixture variants.class ComplexFixture(rfm.RegressionTest): # A fixture with 400 different variants. p0 = parameter(range(100)) p1 = parameter(['a', 'b', 'c', 'd']) ... @rfm.simple_test class TestE(rfm.RegressionTest): # Select the fixture variants with boolean conditions foo = fixture(ComplexFixture, variants={'p0': lambda x: x<10, 'p1': lambda x: x=='d'}) # Select the fixture variants by index bar = fixture(ComplexFixture, variants=range(300,310)) ...
A parent test may also specify the value of different variables in the fixture class to be set before its instantiation. Each variable must have been declared in the fixture class with the
variable()
built-in, otherwise it is silently ignored. This variable specification is equivalent to deriving a new class from the fixture class, and setting these variable values in the class body of a newly derived class. Therefore, when fixture declarations use the same fixture class and pass different values to thevariables
argument, the fixture class is interpreted as a different class for each of these fixture declarations. See the example below.class Fixture(rfm.RegressionTest): v = variable(int, value=1) ... @rfm.simple_test class TestF(rfm.RegressionTest): foo = fixture(Fixture) bar = fixture(Fixture, variables={'v':5}) baz = fixture(Fixture, variables={'v':10}) ... @run_after('setup') def print_fixture_variables(self): print(self.foo.v) # Prints 1 print(self.bar.v) # Prints 5 print(self.baz.v) # Prints 10
The test
TestF
declares the fixturesfoo
,bar
andbaz
using the sameFixture
class. If no variables were set inbar
andbaz
, this would result into the same fixture being declared multiple times in the same scope (implicitly set to'test'
), which would lead to a single instance ofFixture
being referred to byfoo
,bar
andbaz
. However, in this case ReFrame identifies that the declared fixtures pass different values to thevariables
argument in the fixture declaration, and executes these three fixtures separately.Note
Mappings passed to the
variables
argument that define the same class variables in different order are interpreted as the same value. The two fixture declarations below are equivalent, and bothfoo
andbar
will point to the same instance of the fixture classMyResource
.foo = fixture(MyResource, variables={'a':1, 'b':2}) bar = fixture(MyResource, variables={'b':2, 'a':1})
Early access to fixture objects
The test instance represented by a fixture can be accessed fully from within a test only after the setup stage. The reason for that is that fixtures eventually translate into test dependencies and access to the parent dependencies cannot happen before this stage.
However, it is often useful, especially in the case of parameterized fixtures, to be able to access the fixture parameters earlier, e.g., in a post-init hook in order to properly set the
valid_systems
andvalid_prog_environs
of the test. These attributes cannot be set later than the test’s initialization in order to have an effect.For this reason, early access to fixture objects is allowed only for retrieving their parameters.
class Fixture(rfm.RegressionTest):
    x = parameter([1, 2, 3])


class Test(rfm.RunOnlyRegressionTest):
    foo = fixture(Fixture)
    executable = './myexec'
    valid_prog_environs = ['*']

    @run_after('init')
    def early_access(self):
        # Only fixture parameters can be accessed here!
        if self.foo.x == 1:
            self.valid_systems = ['sys1']
        else:
            self.valid_systems = ['sys2']

    @run_after('setup')
    def normal_access(self):
        # Any test attribute of the associated fixture test can be
        # accessed here
        self.executable_opts = [
            '-i', os.path.join(self.foo.stagedir, 'input.txt')
        ]
During test initialization, ReFrame binds the
foo
name to a proxy object that holds the parameterization of the target fixture. This proxy object is recursive, so that if fixturefoo
contained another fixture namedbar
, it would allow you to access any parameters of that fixture withself.foo.bar.param
.During the test setup stage, the
foo
’s binding changes and it is now bound to the exact test instance that was executed for the target test instance.- Parameters:
cls – A class derived from
RegressionTest
that manages a given resource. The base from this class may be further restricted to other derived classes ofRegressionTest
depending on thescope
parameter.scope – Sets the extent to which other regression tests may share the resources managed by a fixture. The available scopes are, from more to less restrictive,
'test'
,'environment'
,'partition'
and'session'
. By default a fixture’s scope is set to'test'
, which makes the resource private to the test that uses the fixture. This means that when multiple regression tests use the same fixture class with a'test'
scope, the fixture will run once per regression test. When the scope is set to'environment'
, the resources managed by the fixture are shared across all the tests that use the fixture and run on the same system partition and use the same programming environment. When the scope is set to'partition'
, the resources managed by the fixture are shared instead across all the tests that use the fixture and run on the same system partition. Lastly, when the scope is set to'session'
, the resources managed by the fixture are shared across the full ReFrame session. Fixtures with either'partition'
or'session'
scopes may be shared across different regression tests under different programming environments, and for this reason, when using these two scopes, the fixture classcls
is required to derive fromRunOnlyRegressionTest
.action – Set the behavior of a parameterized fixture to either
'fork'
or'join'
. With a'fork'
action, a parameterized fixture effectively parameterizes the regression test. On the other hand, a'join'
action gathers all the fixture variants into the same instance of the regression test. By default, theaction
parameter is set to'fork'
.variants – Filter or sub-select a subset of the variants from a parameterized fixture. This argument can be either an iterable with the indices from the desired variants, or a mapping containing unary functions that return the value of a boolean expression on the values of a given parameter.
variables – Mapping to set the values of fixture’s variables. The variables are set after the fixture class has been created (i.e. after the class body has executed) and before the fixture class is instantiated.
Added in version 3.9.0.
Changed in version 3.11.0: Allow early access of fixture objects.
- @reframe.core.builtins.loggable_as(name)[source]¶
Mark a property as loggable.
- Parameters:
name – An alternative name that will be used for logging this property. If
None
, the name of the decorated property will be used.- Raises:
ValueError – if the decorated function is not a property.
Added in version 3.10.2.
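A sketch of marking a derived property as loggable; the property name and value are illustrative, and the decorator is assumed to be applied on top of @property, as the builtin requires a property object.
import reframe as rfm
import reframe.utility.sanity as sn


class LoggablePropertyCheck(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'true'

    @loggable_as('solver_kind')
    @property
    def solver(self):
        # Hypothetical derived quantity exposed to the logging layer
        return 'cg'

    @sanity_function
    def validate(self):
        return sn.assert_true(1)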
- @reframe.core.builtins.loggable¶
Equivalent to
loggable_as(None)
.
- reframe.core.builtins.parameter(values=None, inherit_params=False, filter_params=None, fmt=None, loggable=True)¶
Inserts a new test parameter.
At the class level, these parameters are stored in a separate namespace referred to as the parameter space. If a parameter with a matching name is already present in the parameter space of a parent class, the existing parameter values will be combined with those provided by this method following the inheritance behavior set by the arguments
inherit_params
andfilter_params
. Instead, if no parameter with a matching name exists in any of the parent parameter spaces, a new regression test parameter is created. A regression test can be parameterized as follows:class Foo(rfm.RegressionTest): variant = parameter(['A', 'B']) # print(variant) # Error: a parameter may only be accessed from the class instance @run_after('init') def do_something(self): if self.variant == 'A': do_this() else: do_other()
One of the most powerful features of these built-in functions is that they store their input information at the class level. However, a parameter may only be accessed from the class instance and accessing it directly from the class body is disallowed. With this approach, extending or specializing an existing parameterized regression test becomes straightforward, since the test attribute additions and modifications made through built-in functions in the parent class are automatically inherited by the child test. For instance, continuing with the example above, one could override the
do_something()
hook in theFoo
regression test as follows:class Bar(Foo): @run_after('init') def do_something(self): if self.variant == 'A': override_this() else: override_other()
Moreover, a derived class may extend, partially extend and/or modify the parameter values provided in the base class as shown below.
class ExtendVariant(Bar): # Extend the full set of inherited variant parameter values # to ['A', 'B', 'C'] variant = parameter(['C'], inherit_params=True) class PartiallyExtendVariant(Bar): # Extend a subset of the inherited variant parameter values # to ['A', 'D'] variant = parameter(['D'], inherit_params=True, filter_params=lambda x: x[:1]) class ModifyVariant(Bar): # Modify the variant parameter values to ['AA', 'BA'] variant = parameter(inherit_params=True, filter_params=lambda x: map(lambda y: y+'A', x))
A parameter with no values is referred to as an abstract parameter (i.e. a parameter that is declared but not defined). Therefore, classes with at least one abstract parameter are considered abstract classes.
class AbstractA(Bar): variant = parameter() class AbstractB(Bar): variant = parameter(inherit_params=True, filter_params=lambda x: [])
- Parameters:
values – An iterable containing the parameter values.
inherit_params – If
True
, the parameter values defined in any base class will be inherited. In this case, the parameter values provided in the current class will extend the set of inherited parameter values. If the parameter does not exist in any of the parent parameter spaces, this option has no effect.filter_params – Function to filter/modify the inherited parameter values that may have been provided in any of the parent parameter spaces. This function must accept a single iterable argument and return an iterable. It will be called with the inherited parameter values and it must return the filtered set of parameter values. This function will only have an effect if used with
inherit_params=True
.fmt – A formatting function that will be used to format the values of this parameter in the test’s
display_name
. This function should take as argument the parameter value and return a string representation of the value. If the returned value is not a string, it will be converted using thestr()
function.loggable – Mark this parameter as loggable. If
True
, this parameter will become a log record attribute under the namecheck_NAME
, whereNAME
is the name of the parameter (default:True
)
- Returns:
A new test parameter.
Added in version 3.10.0: The
fmt
argument is added.Added in version 3.11.0: The
loggable
argument is added.Changed in version 4.5: Parameters are now loggable by default.
- @reframe.core.builtins.performance_function(unit, *, perf_key=None)[source]¶
Decorate a test member function to mark it as a performance metric function.
This decorator converts the decorated method into a performance deferrable function (see “Deferrable performance functions” for more details) whose evaluation is deferred to the performance stage of the regression test. The decorated function must take a single argument without a default value (i.e.
self
) and any number of arguments with default values. A test may decorate multiple member functions as performance functions, where each of the decorated functions must be provided with the unit of the performance quantity to be extracted from the test. Any performance function may be overridden in a derived class and multiple bases may define their own performance functions. In the event of a name conflict, the derived class will follow Python’s MRO to choose the appropriate performance function. However, defining more than one performance function with the same name in the same class is disallowed.The full set of performance functions of a regression test is stored under
perf_variables
as key-value pairs, where, by default, the key is the name of the decorated member function, and the value is the deferred performance function itself. Optionally, the key under which a performance function is stored inperf_variables
can be customised by passing the desired key as theperf_key
argument to this decorator.- Parameters:
unit – A string representing the measurement unit of this metric.
Added in version 3.8.0.
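As an illustration only, the following sketch decorates a member function with this builtin and stores it under the custom key 'copy_bw'; the test body and the regular expression are hypothetical:
import reframe as rfm
import reframe.utility.sanity as sn


class StreamLikeTest(rfm.RunOnlyRegressionTest):
    @performance_function('MB/s', perf_key='copy_bw')
    def extract_copy_bw(self):
        # Extract the first float following 'Copy:' from the test's stdout
        return sn.extractsingle(r'Copy:\s+(\S+)', self.stdout, 1, float)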
- @reframe.core.builtins.require_deps[source]¶
Decorator to denote that a function will use the test dependencies.
The arguments of the decorated function must be named after the dependencies that the function intends to use. The decorator will bind the arguments to a partial realization of the
getdep()
function, such that conceptually the new function arguments will be the following:new_arg = functools.partial(getdep, orig_arg_name)
The converted arguments are essentially functions accepting a single argument, which is the target test’s programming environment. Additionally, this decorator will attach the function to run after the test’s setup phase, but before any other “post-setup” pipeline hook.
Warning
Changed in version 3.7.0: Using this functionality from the
reframe
orreframe.core.decorators
modules is now deprecated. You should use the built-in function described here.Changed in version 4.0.0: You may only use this function as framework built-in.
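A hedged sketch of how this builtin is typically used; the test names and the binary path are hypothetical:
import os
import reframe as rfm


class HelloTestB(rfm.RegressionTest):
    @run_after('init')
    def set_deps(self):
        self.depends_on('HelloTestA')

    @require_deps
    def set_executable(self, HelloTestA):
        # 'HelloTestA' is bound to functools.partial(getdep, 'HelloTestA');
        # calling it returns the target test case for the current environment
        self.executable = os.path.join(HelloTestA().stagedir, 'hello')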
- @reframe.core.builtins.run_after(stage, *, always_last=False)[source]¶
Attach the decorated function after a certain pipeline stage.
This is analogous to
run_before()
, except that the hook will execute right after the stage it was attached to. This decorator also supports'init'
as a validstage
argument, where in this case, the hook will execute right after the test is initialized (i.e. after the__init__()
method is called) and before entering the test’s pipeline. In essence, a post-init hook is equivalent to defining additional__init__()
functions in the test. The following code
class MyTest(rfm.RegressionTest):
    @run_after('init')
    def foo(self):
        self.x = 1
is equivalent to
class MyTest(rfm.RegressionTest):
    def __init__(self):
        self.x = 1
Changed in version 3.5.2: Add support for post-init hooks.
- @reframe.core.builtins.run_before(stage, *, always_last=False)[source]¶
Attach the decorated function before a certain pipeline stage.
The function will run just before the specified pipeline stage and it cannot accept any arguments except
self
. This decorator can be stacked, in which case the function will be attached to multiple pipeline stages. See above for the validstage
argument values.- Parameters:
stage – The pipeline stage where this function will be attached to. See Pipeline Hooks for the list of valid stage values.
always_last – Run this hook at the end of the stage’s hook chain instead of the beginning. If multiple tests set this flag for a hook in the same stage, then all
always_last
hooks will be executed in MRO order at the end of the stage's hook chain. See Pipeline Hooks for an example execution.
Changed in version 4.4: The
always_last
argument was added.Changed in version 4.5: Multiple tests can set
always_last
in the same stage.
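For example, a hook may be attached to more than one stage by stacking the decorator, as described above; this is only a sketch and the hook body is illustrative:
import reframe as rfm


class MultiStageHookTest(rfm.RegressionTest):
    @run_before('compile')
    @run_before('run')
    def mark_stage_entry(self):
        # Runs once just before the compile stage and once just before
        # the run stage
        self.stage_marks = getattr(self, 'stage_marks', 0) + 1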
- @reframe.core.builtins.sanity_function[source]¶
Decorate a test member function to mark it as a sanity check.
This decorator will convert the given function into a
deferrable()
and mark it to be executed during the test’s sanity stage. When this decorator is used, manually assigning a value tosanity_patterns
in the test is not allowed.Decorated functions may be overridden by derived classes, and derived classes may also decorate a different method as the test’s sanity function. Decorating multiple member functions in the same class is not allowed. However, a
RegressionTest
may inherit from multipleRegressionMixin
classes with their own sanity functions. In this case, the derived class will follow Python’s MRO to find a suitable sanity function.Added in version 3.7.0.
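A minimal sketch of a sanity function; the expected output pattern is illustrative:
import reframe as rfm
import reframe.utility.sanity as sn


class HelloSanityTest(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'echo'
    executable_opts = ['Hello, World!']

    @sanity_function
    def assert_hello(self):
        return sn.assert_found(r'Hello, World!', self.stdout)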
- reframe.core.builtins.variable(*args, **kwargs)¶
Insert a new test variable.
Declaring a test variable through the
variable()
built-in allows for a more robust test implementation than if the variables were just defined as regular test attributes (e.g.self.a = 10
). Using variables declared through thevariable()
built-in has a number of advantages:Variables are type checked, so attempts to assign a value of a wrong type will cause the test to fail.
You can set the values of variables from the command line using the
-S
option.You can avoid variable redefinitions.
You can control whether a variable can be inherited multiple times.
The following is an example of type checking performed by variables:
class Foo(rfm.RegressionTest):
    my_var = variable(int, value=8)
    not_a_var = my_var - 4

    @run_after('init')
    def access_vars(self):
        print(self.my_var)           # prints 8.
        # self.my_var = 'override'   # Error: my_var must be an int!
        self.not_a_var = 'override'  # This will work, but is dangerous!
        self.my_var = 10             # tests may also assign values the standard way
Here, the argument
value
in thevariable()
built-in sets the default value for the variable. This value may be accessed directly from the class body, as long as it was assigned before either in the same class body or in the class body of a parent class. This behavior extends the standard Python data model, where a regular class attribute from a parent class is never available in the class body of a child class. Hence, using thevariable()
built-in enables us to directly use or modify any variables that may have been declared upstream the class hierarchy, without altering their original value at the parent class level.class Bar(Foo): print(my_var) # prints 8 # print(not_a_var) # This is standard Python and raises a NameError # Since my_var is available, we can also update its value: my_var = 4 # Bar inherits the full declaration of my_var with the original # type-checking. # my_var = 'override' # Wrong type error again! @run_after('init') def access_vars(self): print(self.my_var) # prints 4 print(self.not_a_var) # prints 4 print(Foo.my_var) # prints 8 print(Bar.my_var) # prints 4
Here,
Bar
inherits the variables fromFoo
and can see thatmy_var
has already been declared in the parent class. Therefore, the value ofmy_var
is updated, ensuring that the new value complies with the original variable declaration. However, the value ofmy_var
atFoo
remains unchanged.The examples above assumed that a default value can be provided to the variables in the base tests, but that might not always be the case. For example, when writing a test library, one might want to leave some variables undefined and force the user to set them when using the test. As shown in the example below, imposing such a requirement is as simple as not passing any
value
to thevariable()
built-in, which marks the given variable as required.
# Test as written in the library
class EchoBaseTest(rfm.RunOnlyRegressionTest):
    what = variable(str)

    valid_systems = ['*']
    valid_prog_environs = ['*']

    @run_before('run')
    def set_executable(self):
        self.executable = f'echo {self.what}'

    @sanity_function
    def assert_what(self):
        return sn.assert_found(fr'{self.what}', self.stdout)


# Test as written by the user
@rfm.simple_test
class HelloTest(EchoBaseTest):
    what = 'Hello'


# A parameterized test with type-checking
@rfm.simple_test
class FoodTest(EchoBaseTest):
    param = parameter(['Bacon', 'Eggs'])

    @run_after('init')
    def set_vars_with_params(self):
        self.what = self.param
Similarly to a variable with a value already assigned to it, the value of a required variable may be set either directly in the class body, in the
__init__()
method, or in any other hook before it is referenced. Otherwise an error will be raised indicating that a required variable has not been set. Conversely, a variable with a default value already assigned to it can be made required by assigning it therequired
keyword. However, thisrequired
keyword is only available in the class body.class MyRequiredTest(HelloTest): what = required
Running the above test will cause the
set_executable()
hook fromEchoBaseTest
to throw an error indicating that the variablewhat
has not been set.Finally, variables may alias each other. If a variable is an alias of another one, it behaves in exactly the same way as its target. If a change is made to the target variable, it is reflected in the alias and vice versa. However, alias variables are independently loggable: an alias may be logged but not its target and vice versa. Aliased variables are useful when you want to rename a variable and keep the old one for compatibility reasons.
- Parameters:
types – the supported types for the variable.
value – the default value assigned to the variable. If no value is provided, the variable is set as
required
.field – the field validator to be used for this variable. If no field argument is provided, it defaults to
reframe.core.fields.TypedField
. The provided field validator by this argument must derive fromreframe.core.fields.Field
.alias – the target variable if this variable is an alias. This must refer to an already declared variable and neither default value nor a field can be specified for an alias variable.
loggable – Mark this variable as loggable. If
True
, this variable will become a log record attribute under the namecheck_NAME
, whereNAME
is the name of the variable (defaultTrue
).merge_func –
Enable multiple inheritance for this variable by defining a merge strategy for its default values (default:
None
).This is a function that accepts two arguments of the type of the variable and returns a new value of the same type. The new default value of the variable will be determined as follows:
current_value = merge_func(parent_value, current_value)
If
current_value
is undefined andparent_value
is not, thencurrent_value = parent_value
. Ifparent_value
is undefined or both values are undefined, the variable remains intact.kwargs – keyword arguments to be forwarded to the constructor of the field validator.
- Returns:
A new test variable.
Added in version 3.10.2: The
loggable
argument is added.Added in version 4.0.0: Alias variables are introduced.
Changed in version 4.5: Variables are now loggable by default.
Changed in version 4.6: The
merge_func
parameter is added.
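As an illustration, a hedged sketch of declaring an alias for an existing variable, based on the alias argument described above; the variable names are hypothetical:
import reframe as rfm


class TimedTest(rfm.RegressionTest):
    num_iters = variable(int, value=10)
    # 'iterations' behaves exactly like 'num_iters'; updating either one
    # updates the other
    iterations = variable(alias=num_iters)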
Pipeline Hooks¶
ReFrame provides a mechanism to allow attaching arbitrary functions to run before or after a given stage of the execution pipeline.
This is achieved through the @run_before
and @run_after
test builtins.
Once attached to a given stage, these functions are referred to as pipeline hooks.
A hook may be attached to multiple pipeline stages and multiple hooks may also be attached to the same pipeline stage.
Pipeline hooks attached to multiple stages will be executed on each pipeline stage the hook was attached to.
Pipeline stages with multiple hooks attached will execute these hooks in the order in which they were attached to the given pipeline stage.
A derived class will inherit all the pipeline hooks defined in its bases, except for those whose hook function is overridden by the derived class.
A function that overrides a pipeline hook from any of the base classes will not be a pipeline hook unless the overriding function is explicitly reattached to any pipeline stage.
In the event of a name clash arising from multiple inheritance, the inherited pipeline hook will be chosen following Python’s MRO.
A function may be attached to any of the following stages (listed in order of execution): init
, setup
, compile
, run
, sanity
, performance
and cleanup
.
The init
stage refers to the test’s instantiation and it runs before entering the execution pipeline.
Therefore, a test function cannot be attached to run before the init
stage.
Hooks attached to any other stage will run exactly before or after this stage executes.
So although a “post-init” and a “pre-setup” hook will both run after a test has been initialized and before the test goes through the first pipeline stage, they will execute at different times:
the post-init hook will execute right after the test is initialized.
The framework will then continue with other activities and it will execute the pre-setup hook just before it schedules the test for executing its setup stage.
Pipeline hooks are normally executed in reverse MRO order, i.e., the hooks of the least specialized class will be executed first.
In the following example, BaseTest.x()
will execute before DerivedTest.y()
:
class BaseTest(rfm.RegressionTest):
@run_after('setup')
def x(self):
'''Hook x'''
class DerivedTest(BaseTest):
@run_after('setup')
def y(self):
'''Hook y'''
This order can be altered using the always_last
argument of the @run_before
and @run_after
decorators.
In this case, all hooks of the same stage defined with always_last=True
will be executed in MRO order at the end of the stage’s hook chain.
For example, given the following hierarchy:
class X(rfm.RunOnlyRegressionTest):
    @run_before('run', always_last=True)
    def hook_a(self):
        pass

    @run_before('run')
    def hook_b(self):
        pass


class Y(X):
    @run_before('run', always_last=True)
    def hook_c(self):
        pass

    @run_before('run')
    def hook_d(self):
        pass
the run hooks of Y
will be executed as follows:
X.hook_b, Y.hook_d, Y.hook_c, X.hook_a
See also
@run_before
,@run_after
decorators
Note
Pipeline hooks do not execute in the test’s stage directory, but in the directory that ReFrame executes in.
However, the test’s stagedir
can be accessed by explicitly changing the working directory from within the hook function itself (see the change_dir
utility for further details):
import reframe.utility.osext as osext
class MyTest(rfm.RegressionTest):
...
@run_after('run')
def my_post_run_hook(self):
# Access the stage directory
with osext.change_dir(self.stagedir):
...
Note
In versions prior to 4.3.4, overriding a pipeline hook in a subclass would re-attach it from scratch in the stage, thus changing its execution order relative to the other hooks of the superclass. This is fixed in versions >= 4.3.4, where the execution order of the hook is not changed. However, the fix may break existing workaround code that explicitly called the base class's hook from the derived one. Check issue #3012 for details on how to properly address this.
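To illustrate the override rule above, a sketch of a derived test that overrides a hook and explicitly re-attaches it so that it remains part of the pipeline; the attribute values are illustrative:
import reframe as rfm


class BaseResources(rfm.RegressionTest):
    @run_after('setup')
    def set_resources(self):
        self.num_tasks = 4


class MoreResources(BaseResources):
    # Without the decorator below, the overriding function would no longer
    # be a pipeline hook
    @run_after('setup')
    def set_resources(self):
        self.num_tasks = 8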
Warning
Changed in version 3.7.0: Declaring pipeline hooks using the functions of the same name from the reframe
or reframe.core.decorators
modules is now deprecated.
You should use the built-in functions described in the Builtins section.
Changed in version 4.0.0: Pipeline hooks can only be defined through the built-in functions described in this section.
Warning
Changed in version 3.9.2: Execution of pipeline hooks until this version was implementation-defined. In practice, hooks of a derived class were executed before those of its parents.
This version defines the execution order of hooks, which now follows a strict reverse MRO order, so that parent hooks will execute before those of derived classes. Tests that relied on the execution order of hooks might break with this change.
Test variants¶
Through the parameter()
and fixture()
builtins, a regression test may store multiple versions or variants of itself at the class level.
During class creation, the test’s parameter and fixture spaces are constructed and combined, assigning a unique index to each of the available test variants.
In most cases, the user does not need to be aware of all the internals related to this variant indexing, since ReFrame will run by default all the available variants for each of the registered tests.
On the other hand, in more complex use cases such as setting dependencies across different test variants, or when performing some complex variant sub-selection on a fixture declaration, the user may need to access some of this low-level information related to the variant indexing.
Therefore, classes that derive from the base RegressionMixin
provide classmethods and properties to query these data.
Warning
When selecting test variants through their variant index, no index ordering should ever be assumed; it is the user’s responsibility to ensure, on each ReFrame run, that the selected index corresponds to the desired parameter and/or fixture variants.
- RegressionMixin.num_variants¶
Total number of variants of the test.
- classmethod RegressionMixin.get_variant_nums(**conditions)¶
Get the variant numbers that meet the specified conditions.
The given conditions enable filtering the parameter space of the test. Filtering the fixture space is not allowed.
# Filter out the test variants where my_param is greater than 3
cls.get_variant_nums(my_param=lambda x: x < 4)
The returned list of variant numbers can be passed to
variant_name()
in order to retrieve the actual test name.- Parameters:
conditions –
keyword arguments where the key is the test parameter name and the value is either a single value or a unary function that evaluates to
True
if the parameter point must be kept,False
otherwise. If a single value is passed this is implicitly converted to the equality function, such thatget_variant_nums(p=10)
is equivalent to
get_variant_nums(p=lambda x: x == 10)
- classmethod RegressionMixin.variant_name(variant_num=None)¶
Return the name of the test variant with a specific variant number.
- Parameters:
variant_num – An integer in the range of
[0, cls.num_variants)
.
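A minimal sketch that combines these classmethods to list the names of the variants matching a parameter condition; the test and its parameter are hypothetical:
import reframe as rfm


class ParamTest(rfm.RegressionTest):
    p = parameter([10, 20, 30])


# Variant numbers where p == 10, translated to the generated test names
for v in ParamTest.get_variant_nums(p=10):
    print(ParamTest.variant_name(v))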
Dynamic Creation of Tests¶
Added in version 3.10.0.
- reframe.core.meta.make_test(name, bases, body, methods=None, module=None, **kwargs)[source]¶
Define a new test class programmatically.
Using this method is completely equivalent to using the
class
keyword to define the test class. More specifically, the following:
from reframe.core.meta import make_test

hello_cls = make_test(
    'HelloTest', (rfm.RunOnlyRegressionTest,),
    {
        'valid_systems': ['*'],
        'valid_prog_environs': ['*'],
        'executable': 'echo',
        'sanity_patterns': sn.assert_true(1)
    }
)
is completely equivalent to
class HelloTest(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'echo'
    sanity_patterns = sn.assert_true(1)

hello_cls = HelloTest
Test builtins can also be used when defining the body of the test by accessing them through the
reframe.core.builtins
module. Methods can also be bound to the newly created tests using the methods
argument. The following is an example:
import reframe.core.builtins as builtins
from reframe.core.meta import make_test


def set_message(obj):
    obj.executable_opts = [obj.message]


def validate(obj):
    return sn.assert_found(obj.message, obj.stdout)


hello_cls = make_test(
    'HelloTest', (rfm.RunOnlyRegressionTest,),
    {
        'valid_systems': ['*'],
        'valid_prog_environs': ['*'],
        'executable': 'echo',
        'message': builtins.variable(str)
    },
    methods=[
        builtins.run_before('run')(set_message),
        builtins.sanity_function(validate)
    ]
)
- Parameters:
name – The name of the new test class.
bases – A tuple of the base classes of the class that is being created.
body – A mapping of key/value pairs that will be inserted as class attributes in the newly created class.
methods – A list of functions to be bound as methods to the class that is being created. The functions will be bound with their original name.
module – The module name of the new test class. If
None
, the module of the caller will be used.kwargs – Any keyword arguments to be passed to the
RegressionTestMeta
metaclass.
Added in version 3.10.0.
Changed in version 3.11.0: Added the
methods
argument.Added in version 4.2: Added the
module
argument.
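Note that tests created with make_test() are not picked up automatically; assuming the usual @rfm.simple_test decorator can also be applied as a plain function, registering the dynamically created class might look as follows (a sketch, using the hello_cls from the example above):
import reframe as rfm

# Register the dynamically created test with the framework
rfm.simple_test(hello_cls)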
Environments and Systems¶
- class reframe.core.environments.Environment(name, modules=None, env_vars=None, extras=None, features=None, prepare_cmds=None)[source]¶
Bases:
JSONSerializable
This class abstracts away an environment to run regression tests.
It is simply a collection of modules to be loaded and environment variables to be set when this environment is loaded by the framework.
Warning
Users may not create
Environment
objects directly.- property env_vars¶
The environment variables associated with this environment.
- Type:
OrderedDict[str, str]
Added in version 4.0.0.
- property extras¶
User defined properties specified in the configuration.
Added in version 3.9.1.
- Type:
Dict[str, object]
- property features¶
User defined features specified in the configuration.
Added in version 3.11.0.
- Type:
List[str]
- property modules¶
The modules associated with this environment.
- Type:
List[str]
- property modules_detailed¶
A view of the modules associated with this environment in a detailed format.
Each module is represented as a dictionary with the following attributes:
name
: the name of the module.collection
:True
if the module name refers to a module collection.
- Type:
List[Dict[str, object]]
Added in version 3.3.
- property prepare_cmds¶
The prepare commands associated with this environment.
Added in version 4.3.0.
- Type:
List[str]
- class reframe.core.environments.ProgEnvironment(name, modules=None, env_vars=None, extras=None, features=None, prepare_cmds=None, cc='cc', cxx='CC', ftn='ftn', nvcc='nvcc', cppflags=None, cflags=None, cxxflags=None, fflags=None, ldflags=None, resources=None, **kwargs)[source]¶
Bases:
Environment
A class representing a programming environment.
This type of environment adds also properties for retrieving the compiler and compilation flags.
Warning
Users may not create
ProgEnvironment
objects directly.- property cflags¶
The C compiler flags of this programming environment.
- Type:
List[str]
- property cppflags¶
The preprocessor flags of this programming environment.
- Type:
List[str]
- property cxxflags¶
The C++ compiler flags of this programming environment.
- Type:
List[str]
- property fflags¶
The Fortran compiler flags of this programming environment.
- Type:
List[str]
- property ldflags¶
The linker flags of this programming environment.
- Type:
List[str]
- property resources¶
The scheduler resources associated with this environment.
Added in version 4.6.0.
- Type:
Dict[str, object]
- class reframe.core.environments._EnvironmentSnapshot(name='env_snapshot')[source]¶
Bases:
Environment
An environment snapshot.
- reframe.core.environments.snapshot()[source]¶
Create an environment snapshot
- Returns:
An instance of
_EnvironmentSnapshot
.
- class reframe.core.systems.DeviceInfo(info)[source]¶
Bases:
_ReadOnlyInfo
,JSONSerializable
A representation of a device inside ReFrame.
You can access all the keys of the device configuration object.
Added in version 3.5.0.
Warning
Users may not create
DeviceInfo
objects directly.- property num_devices¶
Number of devices of this type.
It will return 1 if it wasn’t set in the configuration.
- Type:
integral
- class reframe.core.systems.ProcessorInfo(info)[source]¶
Bases:
_ReadOnlyInfo
,JSONSerializable
A representation of a processor inside ReFrame.
You can access all the keys of the processor configuration object.
Added in version 3.5.0.
Warning
Users may not create
ProcessorInfo
objects directly.- property num_cores¶
Total number of cores.
- Type:
integral or
None
- property num_cores_per_numa_node¶
Number of cores per NUMA node.
- Type:
integral or
None
- property num_cores_per_socket¶
Number of cores per socket.
- Type:
integral or
None
- property num_numa_nodes¶
Number of NUMA nodes.
- Type:
integral or
None
- class reframe.core.systems.System(name, descr, hostnames, modules_system, modules_system_validate, preload_env, prefix, outputdir, resourcesdir, stagedir, partitions)[source]¶
Bases:
JSONSerializable
A representation of a system inside ReFrame.
Warning
Users may not create
System
objects directly.- property hostnames¶
The hostname patterns associated with this system.
- Type:
List[str]
- property modules_system¶
The modules system name associated with this system.
- property partitions¶
The system partitions associated with this system.
- Type:
List[SystemPartition]
- property preload_environ¶
The environment to load whenever ReFrame runs on this system.
Added in version 2.19.
- property resourcesdir¶
Global resources directory for this system.
This directory may be used for storing large files related to regression tests. The value of this directory is controlled by the resourcesdir configuration parameter.
- Type:
- class reframe.core.systems.SystemPartition(*, parent, name, sched_type, launcher_type, descr, access, container_runtime, container_environs, resources, local_env, environs, max_jobs, prepare_cmds, processor, devices, extras, features, time_limit)[source]¶
Bases:
JSONSerializable
A representation of a system partition inside ReFrame.
Warning
Users may not create
SystemPartition
objects directly.- property access¶
The scheduler options for accessing this system partition.
- Type:
List[str]
- property container_environs¶
Environments associated with the different container platforms.
- Type:
Dict[str, Environment]
- property devices¶
A list of devices in the current partition.
Added in version 3.5.0.
- Type:
List[reframe.core.systems.DeviceInfo]
- property environs¶
The programming environments associated with this system partition.
- Type:
List[ProgEnvironment]
- property extras¶
User defined properties associated with this partition.
These extras are defined in the configuration.
Added in version 3.5.0.
- Type:
Dict[str, object]
- property features¶
User defined features associated with this partition.
These features are defined in the configuration.
Added in version 3.11.0.
- Type:
List[str]
- property fullname¶
Return the fully-qualified name of this partition.
The fully-qualified name is of the form
<parent-system-name>:<partition-name>
.- Type:
- property launcher_type¶
The type of the backend launcher of this partition.
Added in version 3.2.
- Type:
a subclass of
reframe.core.launchers.JobLauncher
.
- property local_env¶
The local environment associated with this partition.
- Type:
Environment
- property max_jobs¶
The maximum number of concurrent jobs allowed on this partition.
- Type:
integral
- property prepare_cmds¶
Commands to be emitted before loading the modules.
- Type:
List[str]
- property processor¶
Processor information for the current partition.
Added in version 3.5.0.
- property resources¶
The resources template strings associated with this partition.
This is a dictionary, where the key is the name of a resource and the value is the scheduler options or directives associated with this resource.
- Type:
Dict[str, List[str]]
- property scheduler¶
The backend scheduler of this partition.
- Type:
reframe.core.schedulers.JobScheduler
.
Note
Changed in version 2.8: Prior versions returned a string representing the scheduler and job launcher combination.
Changed in version 3.2: The property now stores a
JobScheduler
instance.
- select_devices(devtype)[source]¶
Return all devices of the requested type:
- Parameters:
devtype – The type of the device info objects to return.
- Returns:
A list of
DeviceInfo
objects of the specified type.
Jobs and Parallel Launchers¶
- class reframe.core.schedulers.Job(*args, **kwargs)[source]¶
Bases:
JSONSerializable
A job descriptor.
A job descriptor is created by the framework after the “setup” phase and is associated with the test.
Warning
Users may not create a job descriptor directly.
- property completion_time¶
The completion time of this job as a floating point number expressed in seconds since the epoch, in UTC.
This attribute is
None
if the job hasn’t finished yet, or if the ReFrame runtime hasn’t detected its completion yet.The accuracy of this timestamp depends on the backend scheduler. The
slurm
scheduler backend relies on job accounting and returns the actual termination time of the job. The rest of the backends report as completion time the moment when the framework realizes that the spawned job has finished. In this case, the accuracy depends on the execution policy used. If tests are executed with the serial execution policy, this is close to the real completion time, but if the asynchronous execution policy is used, it can differ significantly.- Type:
float
orNone
- property exception¶
The last exception that this job encountered.
The scheduler will raise this exception the next time the status of this job is queried.
- exclusive_access = False¶
Request exclusive access on the nodes for this job.
- Type:
- Default:
false
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- property exitcode¶
The exit code of this job.
This may or may not be set depending on the scheduler backend.
Added in version 2.21.
- Type:
int
orNone
- property jobid¶
The ID of this job.
Added in version 2.21.
Changed in version 3.2: Job ID type is now a string.
- Type:
str
orNone
- launcher¶
The (parallel) program launcher that will be used to launch the (parallel) executable of this job.
Users are allowed to explicitly set the current job launcher, but this is only relevant in rare situations, such as when you want to wrap the current launcher command. For this specific scenario, you may have a look at the
reframe.core.launchers.LauncherWrapper
class.The following example shows how you can replace the current partition’s launcher for this test with the “local” launcher:
from reframe.core.backends import getlauncher

@run_after('setup')
def set_launcher(self):
    self.job.launcher = getlauncher('local')()
- max_pending_time = None¶
Maximum pending time for this job.
See
reframe.core.pipeline.RegressionTest.max_pending_time
for more details.Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- property name¶
The name of this job.
- property nodelist¶
The list of node names assigned to this job.
This attribute is supported by the
local
,pbs
,slurm
,squeue
,ssh
, andtorque
scheduler backends.This attribute is an empty list if no nodes are assigned to the job yet.
The
squeue
scheduler backend, i.e., Slurm without accounting, might not set this attribute for jobs that finish very quickly.For the
local
scheduler backend, this returns a one-element list containing the hostname of the current host.This attribute might be useful in a flexible regression test for determining the actual nodes that were assigned to the test. For more information on flexible node allocation, see the
--flex-alloc-nodes
command-line option.Added in version 2.17.
Changed in version 4.7: Default value is the empty list.
- Type:
List[str]
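For illustration, a sketch of a post-run hook that records the nodes that were finally assigned to the test; the attribute used to store them is hypothetical:
@run_after('run')
def record_nodes(self):
    # For supported schedulers the list is populated once the job has run
    self.assigned_nodes = list(self.job.nodelist)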
- num_cpus_per_task = None¶
Number of processing elements associated with each task for this job.
- Type:
integral or
NoneType
- Default:
None
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- num_tasks = 1¶
Number of tasks for this job.
- Type:
integral
- Default:
1
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
Changed in version 4.1: Allow
None
values
- num_tasks_per_core = None¶
Number of tasks per core for this job.
- Type:
integral or
NoneType
- Default:
None
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- num_tasks_per_node = None¶
Number of tasks per node for this job.
- Type:
integral or
NoneType
- Default:
None
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- num_tasks_per_socket = None¶
Number of tasks per socket for this job.
- Type:
integral or
NoneType
- Default:
None
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- options = []¶
Arbitrary options to be passed to the backend job scheduler.
- Type:
List[str]
- Default:
[]
- pin_nodes = []¶
Pin the jobs on the given nodes.
The list of nodes will be transformed to a suitable string and be passed to the scheduler’s options. Currently it will have an effect only for the Slurm scheduler.
- Type:
List[str]
- Default:
[]
Added in version 3.11.0.
- property sched_flex_alloc_nodes¶
The argument of the
--flex-alloc-nodes
command line option.
- property scheduler¶
The scheduler where this job is assigned to.
- property script_filename¶
The filename of the generated job script.
- property state¶
The state of this job.
The value of this field is scheduler-specific.
Added in version 2.21.
- Type:
str or
None
- property stderr¶
The file where the standard error of the job is saved.
- property stdout¶
The file where the standard output of the job is saved.
- property submit_time¶
The submission time of this job as a floating point number expressed in seconds since the epoch, in UTC.
This attribute is
None
if the job hasn’t been submitted yet.This attribute is set right after the job is submitted and can vary significantly from the time the jobs starts running, depending on the scheduler.
- Type:
float
orNone
- time_limit = None¶
Time limit for this job.
See
reframe.core.pipeline.RegressionTest.time_limit
for more details.Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- use_smt = None¶
Enable SMT for this job.
- Type:
bool
orNoneType
- Default:
None
Note
This attribute is set by the framework just before submitting the job based on the test information.
Added in version 3.11.0.
- property workdir¶
The working directory for this job.
- reframe.core.schedulers.filter_nodes_by_state(nodelist, state)[source]¶
Filter nodes by their state
- Parameters:
nodelist – List of
Node
instances to filter.state – The state of the nodes. If
all
, the initial list is returned untouched. Ifavail
, only the available nodes will be returned. All other values are interpreted as a state string. State match is exclusive unless the*
is added at the end of the state string.
- Returns:
the filtered node list
- class reframe.core.launchers.JobLauncher(*args, **kwargs)[source]¶
Bases:
object
Abstract base class for job launchers.
A job launcher is the executable that actually launches a distributed program to multiple nodes, e.g.,
mpirun
,srun
etc.Note
Changed in version 4.0.0: Users may create job launchers directly.
Changed in version 2.8: Job launchers do not get a reference to a job during their initialization.
- abstract command(job)[source]¶
The launcher command to be emitted for a specific job.
Launcher backends provide concrete implementations of this method.
- Parameters:
job – A job descriptor.
- Returns:
the basic launcher command as a list of tokens.
- modifier¶
Optional modifier of the launcher command.
This will be combined with the
modifier_options
and prepended to the parallel launch command.- Type:
- Default:
''
Added in version 4.6.0.
- modifier_options = []¶
Options to be passed to the launcher
modifier
.If the modifier is empty, these options will be ignored.
- Type:
List[str]
- Default:
[]
Added in version 4.6.0.
- options = []¶
List of options to be passed to the job launcher invocation.
- Type:
List[str]
- Default:
[]
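Since users may create launchers directly, the following is a hedged sketch of a minimal custom launcher that overrides command() and is swapped in after the setup stage; the class names are hypothetical:
import reframe as rfm
from reframe.core.launchers import JobLauncher


class EchoLauncher(JobLauncher):
    def command(self, job):
        # Prefix the executable with 'echo' instead of actually launching it
        return ['echo']


class DryRunTest(rfm.RunOnlyRegressionTest):
    @run_after('setup')
    def replace_launcher(self):
        self.job.launcher = EchoLauncher()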
- class reframe.core.launchers.LauncherWrapper(*args, **kwargs)[source]¶
Bases:
JobLauncher
Wrap a launcher object so as to modify its invocation.
This is useful for parallel debuggers. For example, to launch a regression test using the ARM DDT debugger, you can do the following:
@run_after('setup')
def set_launcher(self):
    self.job.launcher = LauncherWrapper(self.job.launcher, 'ddt',
                                        ['--offline'])
If the current system partition uses native Slurm for job submission, this setup will generate the following command in the submission script:
ddt --offline srun <test_executable>
If the current partition uses
mpirun
instead, it will generateddt --offline mpirun -np <num_tasks> ... <test_executable>
- Parameters:
target_launcher – The launcher to wrap.
wrapper_command – The wrapper command.
wrapper_options – List of options to pass to the wrapper command.
- reframe.core.backends.getlauncher(name)¶
Retrieve the
reframe.core.launchers.JobLauncher
concrete implementation for a parallel launcher backend.- Parameters:
name – The registered name of the launcher backend.
- reframe.core.backends.getscheduler(name)¶
Retrieve the
reframe.core.schedulers.JobScheduler
concrete implementation for a scheduler backend.- Parameters:
name – The registered name of the scheduler backend.
Runtime Services¶
- class reframe.core.runtime.RuntimeContext(site_config, *, use_timestamps=False)[source]¶
Bases:
object
The runtime context of the framework.
There is a single instance of this class globally in the framework.
Added in version 2.13.
- get_default(option)[source]¶
Get the default value for the option as defined in the configuration schema.
- Parameters:
option – The option whose default value is requested
- Returns:
The default value of the requested option
- Raises:
KeyError – if option does not have a default value
Added in version 4.2.
- get_option(option, default=None)[source]¶
Get a configuration option.
- Parameters:
option – The option to be retrieved.
default – The value to return if
option
cannot be retrieved.
- Returns:
The value of the option.
Changed in version 3.11.0: Add
default
named argument.
- property modules_system¶
The environment modules system used in the current host.
- property system¶
The current host system.
- reframe.core.runtime.is_env_loaded(environ)[source]¶
Check if environment is loaded.
- Parameters:
environ (Environment) – Environment to check for.
- Returns:
True
if this environment is loaded,False
otherwise.
- reframe.core.runtime.loadenv(*environs)[source]¶
Load environments in the current Python context.
- Parameters:
environs (List[Environment]) – A list of environments to load.
- Returns:
A tuple containing a snapshot of the current environment upon entry to this function and a list of shell commands required to load the environments.
- Return type:
Tuple[_EnvironmentSnapshot, List[str]]
- class reframe.core.runtime.module_use(*paths)[source]¶
Bases:
object
Context manager for temporarily modifying the module path.
- reframe.core.runtime.runtime()[source]¶
Get the runtime context of the framework.
Added in version 2.13.
- Returns:
A
reframe.core.runtime.RuntimeContext
object.
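A brief sketch of querying the runtime context from within a test hook; the configuration option shown is just an example:
import reframe as rfm
import reframe.core.runtime as rt


class RuntimeQueryTest(rfm.RunOnlyRegressionTest):
    @run_after('setup')
    def inspect_runtime(self):
        ctx = rt.runtime()
        # Name of the current system and a configuration option
        self.sysname = ctx.system.name
        self.search_path = ctx.get_option('general/0/check_search_path')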
Modules Systems¶
- class reframe.core.modules.ModulesSystem(backend)[source]¶
Bases:
object
A modules system.
- available_modules(substr=None)[source]¶
Return a list of available modules that contain
substr
in their name.- Return type:
List[str]
- conflicted_modules(name, collection=False, path=None)[source]¶
Return the list of the modules conflicting with module
name
.If module
name
resolves to multiple real modules, then the returned list will be the concatenation of the conflict lists of all the real modules.- Parameters:
name – The name of the module.
collection – The module is a “module collection” (TMod4/LMod only).
path – The path where the module resides if not in the default
MODULEPATH
.
- Returns:
A list of conflicting module names.
Changed in version 3.3: The
collection
argument is added.Changed in version 3.5.0: The
path
argument is added.
- emit_load_commands(name, collection=False, path=None)[source]¶
Return the appropriate shell commands for loading a module.
Module mappings are not taken into account by this function.
- Parameters:
name – The name of the module to load.
collection – The module is a “module collection” (TMod4/LMod only)
path – The path where the module resides if not in the default
MODULEPATH
.
- Returns:
A list of shell commands.
Changed in version 3.3: The
collection
argument was added and module mappings are no longer taken into account by this function.Changed in version 3.5.0: The
path
argument is added.
- emit_unload_commands(name, collection=False, path=None)[source]¶
Return the appropriate shell commands for unloading a module.
Module mappings are not taken into account by this function.
- Parameters:
name – The name of the module to unload.
collection – The module is a “module collection” (TMod4/LMod only)
path – The path where the module resides if not in the default
MODULEPATH
.
- Returns:
A list of shell commands.
Changed in version 3.3: The
collection
argument was added and module mappings are no longer taken into account by this function.Changed in version 3.5.0: The
path
argument is added.
- execute(cmd, *args)[source]¶
Execute an arbitrary module command.
- Parameters:
cmd – The command to execute, e.g.,
load
,restore
etc.args – The arguments to pass to the command.
- Returns:
The command output.
- is_module_loaded(name)[source]¶
Check if module
name
is loaded.If module
name
refers to multiple real modules, this method will returnTrue
only if all the referees are loaded.
- load_module(name, collection=False, path=None, force=False)[source]¶
Load the module
name
.- Parameters:
collection – The module is a “module collection” (TMod4/Lmod only)
path – The path where the module resides if not in the default
MODULEPATH
.force – If set, forces the loading, unloading first any conflicting modules currently loaded. If module
name
refers to multiple real modules, all of the target modules will be loaded.
- Returns:
A list of two-element tuples, where each tuple contains the module that was loaded and the list of modules that had to be unloaded first due to conflicts. This list will normally be of size one, but it can be longer if there is a mapping that maps module
name
to multiple other modules.
Changed in version 3.3: - The
collection
argument is added. - This function now returns a list of tuples.Changed in version 3.5.0: - The
path
argument is added. - Theforce
argument is now the last argument.
- property name¶
The name of this module system.
- property searchpath¶
The module system search path as a list of directories.
- unload_module(name, collection=False, path=None)[source]¶
Unload module
name
.- Parameters:
name – The name of the module to unload. If module
name
is resolved to multiple real modules, all the referred to modules will be unloaded in reverse order.collection – The module is a “module collection” (TMod4 only)
path – The path where the module resides if not in the default
MODULEPATH
.
Changed in version 3.3: The
collection
argument was added.Changed in version 3.5.0: The
path
argument is added.
- property version¶
The version of this module system.
Build Systems¶
Added in version 2.14.
ReFrame delegates the compilation of the regression test to a build system. Build systems in ReFrame are entities that are responsible for generating the necessary shell commands for compiling a code. Each build system defines a set of attributes that users may set in order to customize their compilation. An example usage is the following:
self.build_system = 'SingleSource'
self.build_system.cflags = ['-fopenmp']
Users simply set the build system to use in their regression tests and then they configure it. If no special configuration is needed for the compilation, users may completely ignore the build systems. ReFrame will automatically pick one based on the regression test attributes and will try to compile the code.
All build systems in ReFrame derive from the abstract base class reframe.core.buildsystems.BuildSystem
.
This class defines a set of common attributes, such as compilers and compilation flags, that all subclasses inherit.
It is up to the concrete build system implementations whether and how to use these attributes.
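For example, a test using the Make build system described below might configure it in a pre-compile hook as follows; this is only a sketch and the makefile name and options are illustrative:
@run_before('compile')
def setup_build(self):
    self.build_system = 'Make'
    self.build_system.makefile = 'Makefile.gnu'   # hypothetical makefile name
    self.build_system.options = ['NDEBUG=1']      # illustrative make variable
    self.build_system.max_concurrency = 4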
- class reframe.core.buildsystems.Autotools(*args, **kwargs)[source]¶
Bases:
ConfigureBasedBuildSystem
A build system for compiling Autotools-based projects.
This build system will emit the following commands:
Create a build directory if
builddir
is notNone
and change to it.Invoke
configure
to configure the project by setting the corresponding flags for compilers and compiler flags.Issue
make
to compile the code.
- builddir = None¶
The build directory, where all the generated files will be placed.
- Type:
- Default:
None
- cc¶
The C compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cflags = []¶
The C compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- config_opts = []¶
Additional configuration options to be passed to the configure step.
- Type:
List[str]
- Default:
[]
- configuredir = .¶
The directory of the configure script.
This can be changed to do an out of source build without copying the entire source tree.
- Type:
- Default:
'.'
- cppflags = []¶
The preprocessor flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- cxx¶
The C++ compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cxxflags = []¶
The C++ compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- fflags = []¶
The Fortran compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- flags_from_environ = True¶
Set compiler and compiler flags from the current programming environment if not specified otherwise.
- Type:
- Default:
True
- ftn¶
The Fortran compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- ldflags = []¶
The linker flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- make_opts = []¶
Options to be passed to the subsequent
make
invocation.- Type:
List[str]
- Default:
[]
- nvcc¶
The CUDA compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- srcdir = None¶
The top-level directory of the code.
This is set automatically by the framework based on the
reframe.core.pipeline.RegressionTest.sourcepath
attribute.- Type:
- Default:
None
- class reframe.core.buildsystems.BuildSystem(*args, **kwargs)[source]¶
Bases:
object
The abstract base class of any build system.
Concrete build systems inherit from this class and must override the
emit_build_commands()
abstract function.- cc¶
The C compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cflags = []¶
The C compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- cppflags = []¶
The preprocessor flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- cxx¶
The C++ compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cxxflags = []¶
The C++ compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- fflags = []¶
The Fortran compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- flags_from_environ = True¶
Set compiler and compiler flags from the current programming environment if not specified otherwise.
- Type:
- Default:
True
- ftn¶
The Fortran compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- ldflags = []¶
The linker flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- nvcc¶
The CUDA compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- class reframe.core.buildsystems.BuildSystemMeta(name, bases, namespace, **kwargs)[source]¶
Bases:
RegressionTestMeta
,ABCMeta
Build systems metaclass.
- class reframe.core.buildsystems.CMake(*args, **kwargs)[source]¶
Bases:
ConfigureBasedBuildSystem
A build system for compiling CMake-based projects.
This build system will emit the following commands:
Create a build directory if
builddir
is notNone
and change to it.Invoke
cmake
to configure the project by setting the corresponding CMake flags for compilers and compiler flags.Issue
make
to compile the code.
- class reframe.core.buildsystems.ConfigureBasedBuildSystem(*args, **kwargs)[source]¶
Bases:
BuildSystem
Abstract base class for configured-based build systems.
- builddir = None¶
The build directory, where all the generated files will be placed.
- Type:
- Default:
None
- config_opts = []¶
Additional configuration options to be passed to the configure step.
- Type:
List[str]
- Default:
[]
- make_opts = []¶
Options to be passed to the subsequent
make
invocation.- Type:
List[str]
- Default:
[]
- srcdir = None¶
The top-level directory of the code.
This is set automatically by the framework based on the
reframe.core.pipeline.RegressionTest.sourcepath
attribute.- Type:
- Default:
None
- class reframe.core.buildsystems.CustomBuild(*args, **kwargs)[source]¶
Bases:
BuildSystem
Custom build system.
This build system backend allows users to use custom build scripts to build the test code. It does not do any interpretation of the current test environment and it simply runs the supplied
commands
.Users should use this build system with caution, because environment management, reproducibility and any potential side effects are all controlled by the user’s custom build system.
Added in version 3.11.0.
- commands¶
The commands to run for building the test code.
- Type:
List[str]
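A hedged sketch of selecting this build system and supplying custom build commands; the commands themselves are illustrative:
@run_before('compile')
def use_custom_build(self):
    self.build_system = 'CustomBuild'
    self.build_system.commands = ['./autogen.sh', './configure', 'make']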
- class reframe.core.buildsystems.EasyBuild(*args, **kwargs)[source]¶
Bases:
BuildSystem
A build system for building test code using EasyBuild.
ReFrame will use EasyBuild to build and install the code in the test’s stage directory by default. ReFrame uses environment variables to configure EasyBuild for running, so users can pass additional options to the
eb
command and modify the default behaviour.Added in version 3.5.0.
- easyconfigs = []¶
The list of easyconfig files to build and install. This field is required.
- Type:
List[str]
- Default:
[]
- emit_package = False¶
Instruct EasyBuild to emit a package for the built software. This will essentially pass the
--package
option toeb
.- Type:
- Default:
False
- property generated_modules¶
List of the EasyBuild generated modules.
This list will be populated after the build succeeds.
- options = []¶
Options to pass to the
eb
command.- Type:
List[str]
- Default:
[]
- package_opts = {}¶
Options controlling the package creation from EasyBuild. For each key/value pair of this dictionary, ReFrame will pass
--package-{key}={val}
to the EasyBuild invocation.- Type:
Dict[str, str]
- Default:
{}
- prefix = easybuild¶
Default prefix for the EasyBuild installation.
Relative paths will be appended to the stage directory of the test. ReFrame will set the following environment variables before running EasyBuild.
export EASYBUILD_BUILDPATH={prefix}/build
export EASYBUILD_INSTALLPATH={prefix}
export EASYBUILD_PREFIX={prefix}
export EASYBUILD_SOURCEPATH={prefix}
Users can change these defaults by passing specific options to the
eb
command.- Type:
- Default:
easybuild
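As an illustration, a sketch of configuring the EasyBuild build system from a test hook; the easyconfig name and the extra option are hypothetical:
@run_before('compile')
def setup_easybuild(self):
    self.build_system = 'EasyBuild'
    self.build_system.easyconfigs = ['bzip2-1.0.8.eb']  # illustrative easyconfig
    self.build_system.options = ['-f']                  # extra options for 'eb'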
- class reframe.core.buildsystems.Make(*args, **kwargs)[source]¶
Bases:
BuildSystem
A build system for compiling codes using
make
.The generated build command has the following form:
make -j [N] [-f MAKEFILE] [-C SRCDIR] CC="X" CXX="X" FC="X" NVCC="X" CPPFLAGS="X" CFLAGS="X" CXXFLAGS="X" FCFLAGS="X" LDFLAGS="X" OPTIONS
The compiler and compiler flags variables will only be passed if they are not
None
. Their value is determined by the corresponding attributes ofBuildSystem
. If you want to completely disable passing these variables to themake
invocation, you should make sure not to set any of the corresponding attributes and also set theBuildSystem.flags_from_environ
flag toFalse
.- cc¶
The C compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cflags = []¶
The C compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- cppflags = []¶
The preprocessor flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- cxx¶
The C++ compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- cxxflags = []¶
The C++ compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- fflags = []¶
The Fortran compiler flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- flags_from_environ = True¶
Set compiler and compiler flags from the current programming environment if not specified otherwise.
- Type:
- Default:
True
- ftn¶
The Fortran compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- ldflags = []¶
The linker flags to be used. If empty and
flags_from_environ
isTrue
, the corresponding flags defined in the current programming environment will be used.- Type:
List[str]
- Default:
[]
- makefile = None¶
Instruct the build system to use this Makefile. This option is useful when the Makefile has a non-standard name.
- Type:
- Default:
None
- max_concurrency = 1¶
Limit concurrency for
make
jobs. This attribute controls the-j
option passed tomake
. If notNone
,make
will be invoked asmake -j max_concurrency
. Otherwise, it will be invoked asmake -j
.- Type:
integer
- Default:
1
Note
Changed in version 2.19: The default value is now
1
- nvcc¶
The CUDA compiler to be used. If empty and
flags_from_environ
isTrue
, the compiler defined in the current programming environment will be used.- Type:
- Default:
''
- options = []¶
Append these options to the
make
invocation. This variable is also useful for passing variables or targets tomake
.- Type:
List[str]
- Default:
[]
- srcdir = None¶
The top-level directory of the code.
This is set automatically by the framework based on the
reframe.core.pipeline.RegressionTest.sourcepath
attribute.- Type:
- Default:
None
- class reframe.core.buildsystems.SingleSource(*args, **kwargs)[source]¶
Bases:
BuildSystem
A build system for compiling a single source file.
The generated build command will have the following form:
COMP CPPFLAGS XFLAGS SRCFILE -o EXEC LDFLAGS
COMP
is the required compiler for compilingSRCFILE
. This build system will automatically detect the programming language of the source file and pick the correct compiler. See also theSingleSource.lang
attribute.CPPFLAGS
are the preprocessor flags and are passed to any compiler.XFLAGS
is any ofCFLAGS
,CXXFLAGS
orFCFLAGS
depending on the programming language of the source file.SRCFILE
is the source file to be compiled. This is set up automatically by the framework. See also theSingleSource.srcfile
attribute.EXEC
is the executable to be generated. This is also set automatically by the framework. See also theSingleSource.executable
attribute.LDFLAGS
are the linker flags.
For CUDA codes, the language assumed is C++ (for the compilation flags) and the compiler used is
BuildSystem.nvcc
.- executable = None¶
The executable file to be generated.
This is set automatically by the framework based on the
reframe.core.pipeline.RegressionTest.executable
attribute.- Type:
str
orNone
- include_path = []¶
The include path to be used for this compilation.
All the elements of this list will be appended to the
BuildSystem.cppflags
, by prepending to each of them the-I
option.- Type:
List[str]
- Default:
[]
- lang = None¶
The programming language of the file that needs to be compiled. If not specified, the build system will try to figure it out automatically based on the extension of the source file. The automatically detected extensions are the following:
C: .c and .upc.
C++: .cc, .cp, .cxx, .cpp, .CPP, .c++ and .C.
Fortran: .f, .for, .ftn, .F, .FOR, .fpp, .FPP, .FTN, .f90, .f95, .f03, .f08, .F90, .F95, .F03 and .F08.
CUDA: .cu.
- Type:
str
orNone
- srcfile = None¶
The source file to compile. This is automatically set by the framework based on the
reframe.core.pipeline.RegressionTest.sourcepath
attribute.- Type:
str or None
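As a hedged illustration of the SingleSource attributes above, the following sketch compiles and runs a single C++ file; the source file name, flags, include path and sanity pattern are assumptions made for the example.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HelloSingleSourceCheck(rfm.RegressionTest):
    # Hypothetical test: 'hello.cpp' and the flags below are illustrative only.
    valid_systems = ['*']
    valid_prog_environs = ['*']
    sourcepath = 'hello.cpp'      # language deduced from the .cpp extension
    build_system = 'SingleSource'

    @run_before('compile')
    def set_flags(self):
        self.build_system.cxxflags = ['-O2']           # XFLAGS for a C++ source
        self.build_system.include_path = ['include']   # emitted as -Iinclude

    @sanity_function
    def validate(self):
        return sn.assert_found(r'Hello', self.stdout)
```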
- class reframe.core.buildsystems.Spack(*args, **kwargs)[source]¶
Bases:
BuildSystem
A build system for building test code using Spack.
ReFrame will use a user-provided Spack environment in order to build and test a set of specs.
Added in version 3.6.1.
- config_opts = []¶
A list of Spack configurations in flattened YAML.
- Type:
List[str]
- Default:
[]
Added in version 4.2.
- emit_load_cmds = True¶
Emit the necessary spack load commands before running the test.
- env_create_opts = []¶
Options to pass to spack env create.
- Type:
List[str]
- Default:
[]
- environment = None¶
The Spack environment to use for building this test.
ReFrame will activate and install this environment. This environment will also be used to run the test.
spack env activate -V -d <environment directory>
ReFrame looks for environments in the test’s sourcesdir.
If this field is None (the default), the environment name will be automatically set to rfm_spack_env.
- Type:
str
orNone
- Default:
None
Note
Changed in version 3.7.3: The field is no longer required and the Spack environment will be automatically created if not provided.
- install_opts = []¶
Options to pass to spack install.
- Type:
List[str]
- Default:
[]
- install_tree = None¶
The directory where Spack will install the packages requested by this test.
After activating the Spack environment, ReFrame will set the install_tree Spack configuration in the given environment with the following command:
spack config add "config:install_tree:root:<install tree>"
Relative paths are resolved against the test’s stage directory. If this field and the Spack environment are both None (the default), the install directory will be automatically set to opt/spack. If this field is None but the Spack environment is not, then install_tree will not be set automatically and the install tree of the given environment will not be overridden.
- Type:
str
orNone
- Default:
None
Added in version 3.7.3.
- preinstall_cmds = []¶
A list of commands to run after a Spack environment is created, but before it is installed.
- Type:
List[str]
- Default:
[]
- specs = []¶
A list of additional specs to build and install within the given environment.
ReFrame will add the specs to the active environment by emitting the following command:
spack add spec1 spec2 ... specN
If no spec is passed, ReFrame will simply install what is prescribed by the environment.
- Type:
List[str]
- Default:
[]
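A minimal sketch of a Spack-based test follows, assuming ReFrame creates the environment automatically as described above; the spec and the trivial run command are illustrative assumptions, not part of the reference.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class SpackBuildCheck(rfm.RegressionTest):
    # Hypothetical test: the spec and the run command are illustrative only.
    valid_systems = ['*']
    valid_prog_environs = ['*']
    build_system = 'Spack'
    executable = 'echo'
    executable_opts = ['spack build done']

    @run_before('compile')
    def set_specs(self):
        # No environment is given, so ReFrame creates one automatically and
        # adds these specs to it before running `spack install`.
        self.build_system.specs = ['zlib@1.2.13']

    @sanity_function
    def validate(self):
        return sn.assert_found(r'spack build done', self.stdout)
```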
Container Platforms¶
Added in version 2.20.
- class reframe.core.containers.Apptainer[source]¶
Bases:
Singularity
Container platform backend for running containers with Apptainer.
Added in version 4.0.0.
- class reframe.core.containers.ContainerPlatform[source]¶
Bases:
ABC
The abstract base class of any container platform.
- command¶
The command to be executed within the container.
If no command is given, then the default command of the corresponding container image is going to be executed.
Added in version 3.5.0: Changed the attribute name from commands to command and its type to a string.
- Type:
str or None
- Default:
None
- mount_points¶
List of mount point pairs for directories to mount inside the container.
Each mount point is specified as a tuple of (/path/in/host, /path/in/container). The stage directory of the ReFrame test is always mounted under /rfm_workdir inside the container, independently of this field.
- Type:
list[tuple[str, str]]
- Default:
[]
- options¶
Additional options to be passed to the container runtime when executed.
- Type:
list[str]
- Default:
[]
- pull_image¶
Pull the container image before running.
This does not have any effect for the Singularity container platform.
Added in version 3.5.
- Type:
boolean
- Default:
True
- workdir¶
The working directory of ReFrame inside the container.
This is the directory where the test’s stage directory is mounted inside the container. This directory is always mounted regardless of whether mount_points is set or not.
- Type:
str or None
- Default:
/rfm_workdir
Changed in version 3.12.0: This attribute is no longer deprecated.
- class reframe.core.containers.Docker[source]¶
Bases:
ContainerPlatform
Container platform backend for running containers with Docker.
- class reframe.core.containers.Sarus[source]¶
Bases:
ContainerPlatform
Container platform backend for running containers with Sarus.
- with_mpi¶
Enable MPI support when launching the container.
- Type:
boolean
- Default:
False
- class reframe.core.containers.Shifter[source]¶
Bases:
Sarus
Container platform backend for running containers with Shifter.
- class reframe.core.containers.Singularity[source]¶
Bases:
ContainerPlatform
Container platform backend for running containers with Singularity.
- with_cuda¶
Enable CUDA support when launching the container.
- Type:
boolean
- Default:
False
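The sketch below illustrates how these container platform fields are typically driven from within a test; the chosen platform, image, command, mount point and sanity pattern are assumptions made for the example only.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class ContainerCheck(rfm.RunOnlyRegressionTest):
    # Hypothetical test: platform, image and mount point are illustrative only.
    valid_systems = ['*']
    valid_prog_environs = ['*']

    @run_before('run')
    def setup_container(self):
        self.container_platform = 'Docker'
        self.container_platform.image = 'ubuntu:22.04'
        self.container_platform.command = 'cat /etc/os-release'
        # The stage directory is mounted under /rfm_workdir automatically;
        # extra host paths can be mounted explicitly:
        self.container_platform.mount_points = [('/scratch/data', '/data')]
        self.container_platform.pull_image = True

    @sanity_function
    def validate(self):
        return sn.assert_found(r'Ubuntu', self.stdout)
```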
The reframe module¶
The reframe module offers direct access to the basic test classes, constants and decorators.
- class reframe.CompileOnlyRegressionTest¶
- class reframe.RegressionTest¶
- class reframe.RunOnlyRegressionTest¶
- @reframe.simple_test¶
Mapping of Test Attributes to Job Scheduler Backends¶
| Test attribute | Slurm option | Torque option | PBS option |
|---|---|---|---|
| num_tasks | --ntasks ¹ | | |
| num_tasks_per_node | --ntasks-per-node | see num_tasks | see num_tasks |
| num_tasks_per_core | --ntasks-per-core | n/a | n/a |
| num_tasks_per_socket | --ntasks-per-socket | n/a | n/a |
| num_cpus_per_task | --cpus-per-task | see num_tasks | see num_tasks |
| time_limit | --time | -l walltime | -l walltime |
| exclusive_access | --exclusive | n/a | n/a |
| use_smt | --hint=[no]multithread | n/a | n/a |
If any of the attributes is set to None, it will not be emitted at all in the job script. In cases where the attribute is required, it will be set to 1.
¹ The --nodes option may also be emitted if the use_nodes_option scheduler configuration parameter is set.
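As a hedged illustration of this mapping, the sketch below sets a few of these attributes in a test; with a Slurm-backed partition, the generated job script would be expected to contain the corresponding options, although the exact values depend on the configured system and partition. The system/environment names and the command are illustrative assumptions.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class SchedulerOptionsCheck(rfm.RunOnlyRegressionTest):
    # Hypothetical test: attribute values are illustrative only.
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'hostname'
    num_tasks = 4                 # Slurm: emitted as --ntasks
    num_tasks_per_node = 2        # Slurm: emitted as --ntasks-per-node
    time_limit = '10m'            # Slurm: emitted as --time
    exclusive_access = True       # Slurm: emitted as --exclusive

    @sanity_function
    def validate(self):
        return sn.assert_found(r'\S+', self.stdout)
```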