Command Line Reference¶
Synopsis¶
- reframe [OPTION]... ACTION¶
Description¶
ReFrame provides both a programming interface for writing regression tests and a command-line interface for managing and running the tests, which is detailed here.
The reframe
command is part of ReFrame’s frontend.
This frontend is responsible for loading and running regression tests written in ReFrame.
ReFrame executes tests by sending them down to a well defined pipeline.
The implementation of the different stages of this pipeline is part of ReFrame’s core architecture, but the frontend is responsible for driving this pipeline and executing tests through it.
Usually, ReFrame processes tests in three phases:
It discovers and loads tests from the filesystem.
It filters the loaded tests based on the current system and any other criteria specified by the user.
It acts upon the selected tests.
There are also ReFrame commands that do not operate on a set of tests.
Commands¶
ReFrame commands are mutually exclusive and one of them must always be specified. Some commands act upon the selected tests, while others serve a helper function, such as querying the configuration or the results database.
Changed in version 4.7: ReFrame commands are now mutually exclusive and only one can be specified every time.
Test commands¶
- --ci-generate=FILE¶
Generate a Gitlab child pipeline specification in FILE that will run the selected tests.
You can set up your Gitlab CI to use the generated file to run every test as a separate Gitlab job respecting test dependencies. For more information, have a look at Integrating into a CI pipeline.
Note
This option will not work with the test generation options.
Added in version 3.4.1.
- --describe¶
Print a detailed description of the selected tests in JSON format and exit.
Note
The generated test description corresponds to its state after it has been initialized. If any of its attributes are changed or set during its execution, their updated values will not be shown by this listing.
Added in version 3.10.0.
- --dry-run¶
Dry run the selected tests.
The dry-run mode will try to execute as much of the test pipeline as possible. More specifically, the tests will not be submitted and will not be run for real, but their stage directory will be prepared and the corresponding job script will be emitted. Similarly, the sanity and performance functions will not be evaluated, but all the preparation will happen. Tests run in dry-run mode will not fail unless there is a programming error in the test or the test tries to use a resource that is not produced in dry-run mode (e.g., accessing the standard output or a resource produced by a dependency outside any sanity or performance function). In this case, users can call the is_dry_run() method in their test and take a specific action if the test is run in dry-run mode.
Added in version 4.1.
- -L, --list-detailed[=T|C]¶
List selected tests providing more details for each test.
The unique id of each test (see also unique_name) as well as the file where each test is defined are printed.
This option optionally accepts a single argument denoting what type of listing is requested. Please refer to -l for an explanation of this argument.
Added in version 3.10.0: Support for different types of listing is added.
Changed in version 4.0.5: The variable names to which fixtures are bound are also listed. See Test Naming Scheme for more information.
- -l, --list[=T|C]¶
List selected tests and their dependencies.
This option optionally accepts a single argument denoting what type of listing is requested. There are two types of possible listings:
Regular test listing (T, the default): This type of listing lists the tests and their dependencies or fixtures using their display_name. A test that is listed as a dependency of another test will not be listed separately.
Concretized test case listing (C): This type of listing lists the exact test cases and their dependencies as they have been concretized for the current system and environment combinations. This listing shows practically the exact test DAG that will be executed.
Added in version 3.10.0: Support for different types of listing is added.
Changed in version 4.0.5: The variable names to which fixtures are bound are also listed. See Test Naming Scheme for more information.
- --list-tags¶
List the unique tags of the selected tests.
The tags are printed in alphabetical order.
Added in version 3.6.0.
- -r, --run¶
Run the selected tests.
Result storage commands¶
- --delete-stored-sessions=SELECT_SPEC¶
Delete the stored sessions matching the given selection criteria.
Check Selecting sessions and test cases for information on the exact syntax of SELECT_SPEC.
Added in version 4.7.
- --describe-stored-sessions=SELECT_SPEC¶
Get detailed information of the sessions matching the given selection criteria.
The output is in JSON format. Check Selecting sessions and test cases for information on the exact syntax of SELECT_SPEC.
Added in version 4.7.
- --describe-stored-testcases=SELECT_SPEC¶
Get detailed information of the test cases matching the given selection criteria.
This option can be combined with --name and --filter-expr to further restrict the test cases.
Check Selecting sessions and test cases for information on the exact syntax of SELECT_SPEC.
Added in version 4.7.
- --list-stored-sessions[=SELECT_SPEC|all]¶
List sessions stored in the results database matching the given selection criteria.
If all is given instead of SELECT_SPEC, all stored sessions will be listed. This is equivalent to 19700101T0000+0000:now. If SELECT_SPEC is not specified, only the sessions of the last week will be listed (equivalent to now-1w:now).
Check Selecting sessions and test cases for information on the exact syntax of SELECT_SPEC.
Added in version 4.7.
- --list-stored-testcases=CMPSPEC¶
Select and list information of stored testcases.
The CMPSPEC argument specifies how test cases will be selected, aggregated and presented. This option can be combined with --name and --filter-expr to restrict the listed tests.
Check the Querying past results section for the exact syntax of CMPSPEC.
Added in version 4.7.
- --performance-compare=CMPSPEC¶
Compare the performance of test cases that have run in the past.
The CMPSPEC argument specifies how test cases will be selected, aggregated and presented. This option can be combined with --name and --filter-expr to restrict the listed tests.
Check the Querying past results section for the exact syntax of CMPSPEC.
Added in version 4.7.
Other commands¶
- --detect-host-topology[=FILE]¶
Detect the local host processor topology, store it to FILE and exit.
If no FILE is specified, the standard output will be used.
Added in version 3.7.0.
- --show-config [PARAM]¶
Show the value of configuration parameter PARAM as this is defined for the currently selected system and exit.
The parameter value is printed in JSON format. If PARAM is not specified or if it is set to all, the whole configuration for the currently selected system will be shown. Configuration parameters are formatted as a path navigating from the top-level configuration object to the actual parameter. The / character acts as a selector of configuration object properties or an index in array objects. The @ character acts as a selector by name for configuration objects that have a name property. Here are some example queries:
Retrieve all the partitions of the current system:
reframe --show-config=systems/0/partitions
Retrieve the job scheduler of the partition named default:
reframe --show-config=systems/0/partitions/@default/scheduler
Retrieve the check search path for system foo:
reframe --system=foo --show-config=general/0/check_search_path
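The selector semantics above can be sketched in a few lines of Python. This is an illustrative lookup function over a made-up configuration dictionary, not ReFrame’s actual implementation:

```python
# Illustrative sketch of --show-config path resolution (not ReFrame's code).
# '/' descends into properties or array indices; '@name' selects the element
# of a list whose 'name' property matches.
def show_config(config, path):
    node = config
    for part in path.split('/'):
        if part.startswith('@'):      # select list element by its 'name' property
            node = next(x for x in node if x.get('name') == part[1:])
        elif isinstance(node, list):  # numeric index into an array
            node = node[int(part)]
        else:                         # plain property access
            node = node[part]
    return node

config = {'systems': [{'name': 'foo',
                       'partitions': [{'name': 'default', 'scheduler': 'slurm'}]}]}
print(show_config(config, 'systems/0/partitions/@default/scheduler'))  # slurm
```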
- -V, --version¶
Print version and exit.
Test discovery and test loading¶
This is the very first phase of the frontend.
ReFrame will search for tests in its check search path and will load them.
When ReFrame loads a test, it actually instantiates it, meaning that it will call its __init__() method unconditionally, regardless of whether the test is meant to run on the selected system or not.
This is something that test developers should bear in mind.
- -c, --checkpath=PATH¶
A filesystem path where ReFrame should search for tests.
PATH can be a directory or a single test file. If it is a directory, ReFrame will search for test files inside it and load all tests found in them. This option can be specified multiple times, in which case each PATH will be searched in order.
The check search path can also be set using the RFM_CHECK_SEARCH_PATH environment variable or the check_search_path general configuration parameter.
- -R, --recursive¶
Search for test files recursively in directories found in the check search path.
This option can also be set using the RFM_CHECK_SEARCH_RECURSIVE environment variable or the check_search_recursive general configuration parameter.
Note
ReFrame will fail to load a test with a relative import unless any of the following holds true:
1. The test is located under ReFrame’s installation prefix.
2. The parent directory of the test contains an __init__.py file.
For versions prior to 4.6, relative imports are supported only for case (1).
Test filtering¶
After all tests in the search path have been loaded, they are first filtered by the selected system.
Any test that is not valid for the current system will be filtered out.
The current system is either auto-selected or explicitly specified with the --system option.
Tests can be filtered by different attributes and there are specific command line options for achieving this.
A common characteristic of all test filtering options is that if a test is selected, then all of its dependencies will be selected, too, regardless of whether they match the filtering criteria or not.
This happens recursively, so that if test T1 depends on T2 and T2 depends on T3, then selecting T1 would also select T2 and T3.
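The recursive dependency selection described above can be sketched as follows (illustrative code, not ReFrame’s implementation):

```python
# Illustrative sketch: selecting a test pulls in its full dependency closure.
def select_with_deps(selected, deps):
    """Return 'selected' plus every test reachable through the 'deps' map."""
    result = set()
    stack = list(selected)
    while stack:
        test = stack.pop()
        if test not in result:
            result.add(test)
            stack.extend(deps.get(test, []))  # follow dependencies recursively
    return result

deps = {'T1': ['T2'], 'T2': ['T3'], 'T3': []}
print(sorted(select_with_deps({'T1'}, deps)))  # ['T1', 'T2', 'T3']
```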
- --cpu-only¶
Select tests that do not target GPUs.
These are all tests with num_gpus_per_node equal to zero. This option and --gpu-only are mutually exclusive.
The --gpu-only and --cpu-only options check only the value of the num_gpus_per_node attribute of tests. The value of this attribute is not required to be non-zero for GPU tests. Tests may or may not make use of it.
Deprecated since version 4.4: Please use -E 'not num_gpus_per_node' instead.
- -E, --filter-expr=EXPR¶
Select only tests that satisfy the given expression.
The expression EXPR can be any valid Python expression on the test variables or parameters. For example, -E num_tasks > 10 will select all tests whose num_tasks exceeds 10. You may use any test variable in the expression, even user-defined ones. Multiple variables can also be included, such as -E num_tasks >= my_param, where my_param is a user-defined parameter.
Added in version 4.4.
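Conceptually, this filtering amounts to evaluating the expression with the test’s variables in scope. A minimal sketch, assuming the variables are available as a plain dictionary (this is not ReFrame’s internal mechanism):

```python
# Illustrative sketch of -E expression filtering: evaluate EXPR with the
# test's variables as the local namespace.
def matches(test_vars, expr):
    return bool(eval(expr, {}, dict(test_vars)))

print(matches({'num_tasks': 16}, 'num_tasks > 10'))  # True
print(matches({'num_tasks': 4}, 'num_tasks > 10'))   # False
```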
- --failed¶
Select only the failed test cases from a previous run.
This option can only be used in combination with --restore-session. To rerun the failed cases from the last run, you can use reframe --restore-session --failed -r.
Added in version 3.4.
- --gpu-only¶
Select tests that can run on GPUs.
These are all tests with num_gpus_per_node greater than zero. This option and --cpu-only are mutually exclusive.
Deprecated since version 4.4: Please use -E num_gpus_per_node instead.
- --maintainer=MAINTAINER¶
Filter tests by maintainer.
MAINTAINER is interpreted as a Python regular expression; all tests that have at least one matching maintainer will be selected. MAINTAINER being a regular expression has the implication that --maintainer 'foo' will also select tests that define 'foobar' as a maintainer. To restrict the selection to tests defining only 'foo', you should use --maintainer 'foo$'.
This option may be specified multiple times, in which case only tests defining or matching all maintainers will be selected.
Added in version 3.9.1.
Changed in version 4.1.0: The MAINTAINER pattern is matched anywhere in the maintainer’s name and not only at its beginning. If you want to match at the beginning of the name, you should prepend ^.
- -n, --name=NAME¶
Filter tests by name.
NAME is interpreted as a Python regular expression; any test whose display name matches NAME will be selected. The display name of a test also encodes any parameterization information. See Test Naming Scheme for more details on how tests are automatically named by the framework.
Before matching, any whitespace will be removed from the display name of the test.
This option may be specified multiple times, in which case tests with any of the specified names will be selected: -n NAME1 -n NAME2 is therefore equivalent to -n 'NAME1|NAME2'.
If the special notation <test_name>@<variant_num> is passed as the NAME argument, then an exact match will be performed, selecting the variant variant_num of the test test_name.
You may also select a test by its hash code using the notation /<test-hash> for the NAME argument.
Note
Fixtures cannot be selected.
Changed in version 3.10.0: The option’s behaviour was adapted and extended in order to work with the updated test naming scheme.
Changed in version 4.0.0: Support selecting tests by their hash code.
Changed in version 4.1.0: The NAME pattern is matched anywhere in the test name and not only at its beginning. If you want to match at the beginning of a test name, you should prepend ^.
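The matching rules above (whitespace removal, unanchored search) can be sketched as follows; the display name shown is a hypothetical example:

```python
import re

# Illustrative sketch of -n matching: strip whitespace from the display name,
# then search the pattern anywhere in it (not anchored at the beginning).
def name_matches(display_name, pattern):
    return re.search(pattern, re.sub(r'\s+', '', display_name)) is not None

print(name_matches('my_test %param=1', 'my_test'))  # True: matched anywhere
print(name_matches('my_test %param=1', '^param'))   # False: anchored at start
```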
- -p, --prgenv=NAME¶
Filter tests by programming environment.
NAME is interpreted as a Python regular expression; any test for which at least one valid programming environment matches NAME will be selected.
This option may be specified multiple times, in which case only tests matching all of the specified programming environments will be selected.
- --skip-prgenv-check¶
Do not filter tests against programming environments.
Even if the -p option is not specified, ReFrame will filter tests based on the programming environments defined for the currently selected system. This option disables that filter completely.
- --skip-system-check¶
Do not filter tests against the selected system.
- -T, --exclude-tag=TAG¶
Exclude tests by tags.
TAG is interpreted as a Python regular expression; any test with tags matching TAG will be excluded.
This option may be specified multiple times, in which case tests with any of the specified tags will be excluded: -T TAG1 -T TAG2 is therefore equivalent to -T 'TAG1|TAG2'.
Changed in version 4.1.0: The TAG pattern is matched anywhere in the tag name and not only at its beginning. If you want to match at the beginning of a tag, you should prepend ^.
- -t, --tag=TAG¶
Filter tests by tag.
TAG is interpreted as a Python regular expression; all tests that have at least one matching tag will be selected. TAG being a regular expression has the implication that -t 'foo' will also select tests that define 'foobar' as a tag. To restrict the selection to tests defining only 'foo', you should use -t 'foo$'.
This option may be specified multiple times, in which case only tests defining or matching all tags will be selected.
Changed in version 4.1.0: The TAG pattern is matched anywhere in the tag name and not only at its beginning. If you want to match at the beginning of a tag, you should prepend ^.
- -x, --exclude=NAME¶
Exclude tests by name.
NAME is interpreted as a Python regular expression; any test whose name matches NAME will be excluded.
This option may be specified multiple times, in which case tests with any of the specified names will be excluded: -x NAME1 -x NAME2 is therefore equivalent to -x 'NAME1|NAME2'.
Changed in version 4.1.0: The NAME pattern is matched anywhere in the test name and not only at its beginning. If you want to match at the beginning of a test name, you should prepend ^.
Options controlling ReFrame output¶
- --compress-report¶
Compress the generated run report (see --report-file). The generated report is a JSON file formatted in a human-readable form. If this option is enabled, the generated JSON file will be a single stream of text without additional spaces or new lines.
This option can also be set using the RFM_COMPRESS_REPORT environment variable or the compress_report general configuration parameter.
Added in version 3.12.0.
- --dont-restage¶
Do not restage a test if its stage directory exists. Normally, if the stage directory of a test exists, ReFrame will remove it and recreate it. This option disables this behavior.
This option can also be set using the RFM_CLEAN_STAGEDIR environment variable or the clean_stagedir general configuration parameter.
Added in version 3.1.
- --keep-stage-files¶
Keep test stage directories even for tests that finish successfully.
This option can also be set using the RFM_KEEP_STAGE_FILES environment variable or the keep_stage_files general configuration parameter.
- -o, --output=DIR¶
Directory prefix for test output files.
When a test finishes successfully, ReFrame copies important output files to a test-specific directory for future reference. This test-specific directory is of the form {output_prefix}/{system}/{partition}/{environment}/{test_name}, where output_prefix is set by this option. The test files saved in this directory are the following:
The ReFrame-generated build script, if not a run-only test.
The standard output and standard error of the build phase, if not a run-only test.
The ReFrame-generated job script, if not a compile-only test.
The standard output and standard error of the run phase, if not a compile-only test.
Any additional files specified by the keep_files regression test attribute.
This option can also be set using the RFM_OUTPUT_DIR environment variable or the outputdir system configuration parameter.
- --perflogdir=DIR¶
Directory prefix for logging performance data.
This option is relevant only to the filelog logging handler.
This option can also be set using the RFM_PERFLOG_DIR environment variable or the basedir logging handler configuration parameter.
- --prefix=DIR¶
General directory prefix for ReFrame-generated directories.
The base stage and output directories (see below) will be specified relative to this prefix if not specified explicitly.
This option can also be set using the RFM_PREFIX environment variable or the prefix system configuration parameter.
- --report-file=FILE¶
The file where ReFrame will store its report.
The FILE argument may contain the special placeholder {sessionid}, in which case ReFrame will generate a new report each time it is run by appending a counter to the report file. If the report is generated in the default location (see the report_file configuration option), a symlink to the latest report named latest.json will also be created.
This option can also be set using the RFM_REPORT_FILE environment variable or the report_file general configuration parameter.
Added in version 3.1.
Added in version 4.2: Symlink to the latest report is now created.
- --report-junit=FILE¶
Instruct ReFrame to generate a JUnit XML report in FILE.
The generated report adheres to the XSD schema here, where each retry is treated as an individual testsuite.
This option can also be set using the RFM_REPORT_JUNIT environment variable or the report_junit general configuration parameter.
Added in version 3.6.0.
Changed in version 3.6.1: Added support for retries in the JUnit XML report.
- -s, --stage=DIR¶
Directory prefix for staging test resources.
ReFrame does not execute tests from their original source directory. Instead, it creates a test-specific stage directory, copies all test resources there, changes to that directory and executes the test. This test-specific directory is of the form {stage_prefix}/{system}/{partition}/{environment}/{test_name}, where stage_prefix is set by this option. If a test finishes successfully, its stage directory will be removed.
This option can also be set using the RFM_STAGE_DIR environment variable or the stagedir system configuration parameter.
- --save-log-files¶
Save ReFrame log files in the output directory before exiting.
Only log files generated by file log handlers will be copied.
This option can also be set using the RFM_SAVE_LOG_FILES environment variable or the save_log_files general configuration parameter.
- --timestamp [TIMEFMT]¶
Append a timestamp to the output and stage directory prefixes.
TIMEFMT can be any valid strftime(3) time format. If not specified, TIMEFMT is set to %FT%T.
This option can also be set using the RFM_TIMESTAMP_DIRS environment variable or the timestamp_dirs general configuration parameter.
Options controlling ReFrame execution¶
- --disable-hook=HOOK¶
Disable the pipeline hook named HOOK from all the tests that will run.
This feature is useful when you have implemented test workarounds as pipeline hooks, in which case you can quickly disable them from the command line. This option may be specified multiple times in order to disable multiple hooks at the same time.
Added in version 3.2.
- --duration=TIMEOUT¶
Run the test session repeatedly until the specified timeout expires.
TIMEOUT can be specified in one of the following forms:
<int> or <float>: number of seconds
<days>d<hours>h<minutes>m<seconds>s: a string denoting days, hours, minutes and/or seconds.
At the end, failures from every run will be reported and, similarly, the failure statistics printed by the --failure-stats option will include all runs.
option will include all runs.Added in version 4.2.
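The two TIMEOUT forms above could be parsed roughly as follows (an illustrative sketch, not ReFrame’s parser):

```python
import re

# Illustrative sketch of parsing the TIMEOUT forms: plain seconds, or a
# <days>d<hours>h<minutes>m<seconds>s string; returns seconds as a float.
def parse_timeout(spec):
    try:
        return float(spec)  # plain <int> or <float>: number of seconds
    except ValueError:
        pass

    m = re.fullmatch(r'(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?', spec)
    if not m or not any(m.groups()):
        raise ValueError(f'invalid timeout: {spec}')

    d, h, mins, s = (int(g or 0) for g in m.groups())
    return float(d*86400 + h*3600 + mins*60 + s)

print(parse_timeout('90'))     # 90.0
print(parse_timeout('1h30m'))  # 5400.0
```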
- --exec-order=ORDER¶
Impose an execution order for the independent tests. The ORDER argument can take one of the following values:
name: Order tests by their display name.
rname: Order tests by their display name in reverse order.
uid: Order tests by their unique name.
ruid: Order tests by their unique name in reverse order.
random: Randomize the order of execution.
If this option is not specified, the order of execution of independent tests is implementation-defined. This option can be combined with any of the listing options (-l or -L) to list the tests in the order in which they would be executed.
Added in version 4.0.0.
- --exec-policy=POLICY¶
The execution policy to be used for running tests.
There are two policies defined:
serial: Tests will be executed sequentially.
async: Tests will be executed asynchronously. This is the default policy.
The async execution policy executes the build and run phases of tests asynchronously by submitting their associated jobs in a non-blocking way. ReFrame’s runtime monitors the progress of each test and will resume the pipeline execution of an asynchronously spawned test as soon as its build or run phase has finished. Note that the rest of the pipeline stages are still executed sequentially in this policy.
Concurrency can be controlled by setting the max_jobs system partition configuration parameter. As soon as the concurrency limit is reached, ReFrame will first poll the status of all its pending tests to check if any execution slots have been freed up. If there are tests that have finished their build or run phase, ReFrame will keep pushing tests for execution until the concurrency limit is reached again. If no execution slots are available, ReFrame will throttle job submission.
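The throttling behaviour can be modelled with a toy submit-and-poll loop (purely illustrative; ReFrame’s runtime is far more involved):

```python
import random
from collections import deque

# Toy model of the async policy: submit tests until 'max_jobs' slots are
# occupied, then poll to free slots before submitting more.
def run_session(tests, max_jobs):
    pending, running, done = deque(tests), set(), []
    while pending or running:
        while pending and len(running) < max_jobs:  # fill free execution slots
            running.add(pending.popleft())
        # Poll phase: pretend each running test finishes with 50% probability.
        finished = {t for t in running if random.random() < 0.5}
        running -= finished
        done.extend(finished)
    return done

order = run_session([f'T{i}' for i in range(6)], max_jobs=2)
print(len(order))  # 6
```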
- --max-retries=NUM¶
The maximum number of times a failing test can be retried.
The test stage and output directories will receive a _retry<N> suffix every time the test is retried.
- --maxfail=NUM¶
The maximum number of failing test cases before the execution is aborted.
After NUM failed test cases, the rest of the test cases will be aborted. The counter of failed test cases is reset to 0 in every retry.
- --mode=MODE¶
ReFrame execution mode to use.
An execution mode is simply a predefined set of options that is set in the modes configuration parameter. Additional options can be passed on the command line, in which case they will be combined with the options defined in the selected execution mode. More specifically, any additional ReFrame options will be appended to the command-line options of the selected mode. As a result, if a normal option is specified both inside the execution mode and on the command line, the command-line option will take precedence. On the other hand, if an option that is allowed to be specified multiple times, e.g., the -S option, is passed both inside the execution mode and on the command line, their values will be combined. For example, if the execution mode foo defines -S modules=foo, the invocation --mode=foo -S num_tasks=10 is the equivalent of -S modules=foo -S num_tasks=10.
Changed in version 4.1: Options that can be specified multiple times are now combined between execution modes and the command line.
- --reruns=N¶
Rerun the whole test session N times.
In total, the selected tests will run N+1 times, as the first time does not count as a rerun.
At the end, failures from every run will be reported and, similarly, the failure statistics printed by the --failure-stats option will include all runs.
Although similar to --repeat, this option behaves differently. This option repeats the whole test session multiple times; all the tests of the session will finish before a new run is started. The --repeat option, on the other hand, generates clones of the selected tests and schedules them for running in a single session. As a result, all the test clones will run (by default) concurrently.
Added in version 4.2.
- --restore-session [REPORT1[,REPORT2,...]]¶
Restore a testing session that has run previously.
REPORT1 etc. are run report files generated by ReFrame. If a report is not given, ReFrame will pick the last report file found in the default location of report files (see the --report-file option). If passed alone, this option will simply rerun all the test cases that have run previously based on the report file data. It is more useful to combine this option with any of the test filtering options, in which case only the selected test cases will be executed. The difference in the test selection process when using this option is that the dependencies of the selected tests will not be selected for execution, as they would normally, but they will be restored. For example, if test T1 depends on T2 and T2 depends on T3, then running reframe -n T1 -r would cause both T2 and T3 to run. However, by doing reframe -n T1 --restore-session -r, only T1 would run and its immediate dependency T2 will be restored. This is useful when you have deep test dependencies or some of the tests in the dependency chain are very time consuming.
Multiple reports may be passed as a comma-separated list. ReFrame will try to restore any required test case by looking it up in each report sequentially. If it cannot find it, it will issue an error and exit.
Note
In order for a test case to be restored, its stage directory must be present. This is not a problem when rerunning a failed case, since the stage directories of its dependencies are automatically kept, but if you want to rerun a successful test case, you should make sure to have run with the --keep-stage-files option.
Note
This option will not work with the test generation options.
Added in version 3.4.
Changed in version 3.6.1: Multiple report files are now accepted.
- --retries-threshold=VALUE[%]¶
Skip retries (see --max-retries) if failures exceed the given threshold.
The threshold can be specified either as an absolute value or as a percentage using the % character, e.g., --retries-threshold=30%. Note that in certain shells the % character may need to be escaped.
Added in version 4.7.
- -S, --setvar=[TEST.]VAR=VAL¶
Set variable VAR in all tests or optionally only in test TEST to VAL.
TEST can have the form [TEST.][FIXT.]*, in which case VAR will be set in fixture FIXT of TEST. Note that this syntax is recursive on fixtures, so that a variable can be set in a fixture arbitrarily deep. The TEST prefix refers to the test class name, not the test name, and FIXT refers to the fixture variable name inside the referenced test, i.e., the test variable to which the fixture is bound. The fixture variable name is referred to as '<varname> when listing tests with the -l and -L options.
Multiple variables can be set at the same time by passing this option multiple times. This option cannot change arbitrary test attributes, but only test variables declared with the variable built-in. If an attempt is made to change a nonexistent variable or a test parameter, a warning will be issued.
ReFrame will try to convert VAL to the type of the variable. If it does not succeed, a warning will be issued and the variable will not be set. VAL can take the special value @none to denote that the variable must be set to None. Boolean variables can be set in one of the following ways:
By passing true, yes or 1 to set them to True.
By passing false, no or 0 to set them to False.
Passing any other value will issue an error.
Note
Boolean variables in a test must be declared of type Bool and not of the built-in bool type, in order to adhere to the aforementioned behaviour. If a variable is defined as bool, there is no way you can set it to False, since any non-empty string in Python evaluates to True.
Sequence and mapping types can also be set from the command line by using the following syntax:
Sequence types: -S seqvar=1,2,3,4
Mapping types: -S mapvar=a:1,b:2,c:3
Nested mapping types can also be converted using JSON syntax. For example, the extra_resources complex dictionary could be set with -S extra_resources='{"gpu": {"num_gpus_per_node":8}}'.
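The conversions above can be sketched as follows. This is an illustrative converter only; ReFrame’s actual machinery also casts element types to the variable’s declared type (see ConvertibleType):

```python
import json

# Illustrative sketch of -S value conversion for sequence and mapping types.
def convert(val, typ):
    if typ is list:
        return val.split(',')                       # -S seqvar=1,2,3,4
    if typ is dict:
        if val.lstrip().startswith('{'):
            return json.loads(val)                  # nested mapping: JSON syntax
        return dict(item.split(':', 1) for item in val.split(','))
    return typ(val)

print(convert('1,2,3,4', list))  # ['1', '2', '3', '4']
print(convert('a:1,b:2', dict))  # {'a': '1', 'b': '2'}
print(convert('{"gpu": {"num_gpus_per_node": 8}}', dict))
```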
Conversions to arbitrary objects are also supported. See ConvertibleType for more details.
Variable assignments passed from the command line happen before the test is instantiated and are the exact equivalent of assigning a new value to the variable at the end of the test class body. This has a number of implications that users of this feature should be aware of:
In the following test, num_tasks will always have the value 1 regardless of any command-line assignment of the variable foo:
@rfm.simple_test
class my_test(rfm.RegressionTest):
    foo = variable(int, value=1)
    num_tasks = foo
Tip
In cases where the class body expresses logic as a function of a variable and this variable, as well as its dependent logic, need to be controlled externally, the variable’s default value (i.e. the value set through the value argument) may be modified as follows through an environment variable and not through the -S option:
import os

@rfm.simple_test
class my_test(rfm.RegressionTest):
    max_nodes = variable(int, value=int(os.getenv('MAX_NODES', 1)))

    # Parameterise number of nodes
    num_nodes = parameter((1 << i for i in range(0, int(max_nodes))))
If the variable is set in any pipeline hook, the command-line assignment will only have an effect until the variable assignment in the pipeline hook is reached; the variable will then be overwritten.
The test filtering happens after a test is instantiated, so the only way to scope a variable assignment is to prefix it with the test class name. However, this has some positive side effects:
Passing -S valid_systems='*' and -S valid_prog_environs='*' is the equivalent of passing the --skip-system-check and --skip-prgenv-check options.
Users could alter the behavior of tests based on tag values that they pass from the command line, by changing the behavior of a test in a post-init hook based on the value of the tags attribute.
Users could force a test with required variables to run if they set these variables from the command line. For example, the following test could only be run if invoked with -S num_tasks=<NUM>:
@rfm.simple_test
class my_test(rfm.RegressionTest):
    num_tasks = required
Added in version 3.8.0.
Changed in version 3.9.3: Proper handling of boolean variables.
Changed in version 3.11.1: Allow setting variables in fixtures.
Changed in version 4.4: Allow setting nested mapping types using JSON syntax.
- --skip-performance-check¶
Skip performance checking phase.
The phase is completely skipped, meaning that performance data will not be logged.
- --skip-sanity-check¶
Skip sanity checking phase.
Options controlling job submission¶
- -J, --job-option=OPTION¶
Pass OPTION directly to the job scheduler backend.
The syntax of OPTION is -J key=value. If OPTION starts with -, it will be passed verbatim to the backend job scheduler. If OPTION starts with #, it will be emitted verbatim in the job script. Otherwise, ReFrame will pass --key value or -k value (if key is a single character) to the backend scheduler. Any job options specified with this command-line option will be emitted after any job options specified in the access system partition configuration parameter.
Especially for the Slurm backends, constraint options, such as -J constraint=value, -J C=value, -J --constraint=value or -J -C=value, are going to be combined with any constraint options specified in the access system partition configuration parameter. For example, if -C x is specified in access and -J C=y is passed on the command line, ReFrame will pass -C x&y as a constraint to the scheduler. Notice, however, that if constraint options are specified through multiple -J options, only the last one will be considered. If you wish to completely overwrite any constraint options passed in access, you should consider passing the Slurm directive explicitly with -J '#SBATCH --constraint=new'.
Changed in version 3.0: This option has become more flexible.
Changed in version 3.1: Use & to combine constraints.
Options controlling flexible node allocation¶
ReFrame can automatically set the number of tasks of a test if its num_tasks attribute is set to a value less than or equal to zero.
This scheme is conveniently called flexible node allocation and is valid only for the Slurm backend.
When allocating nodes automatically, ReFrame will take into account all node limiting factors, such as partition access
options, and any job submission control options described above.
Particularly for Slurm constraints, ReFrame will only recognize simple AND or OR constraints and any parenthesized expression of them.
The full syntax of Slurm constraints is not currently supported.
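To illustrate the "simple AND or OR constraints" limitation, a matcher for such constraints could look like the following sketch. This is a hypothetical helper, not ReFrame's implementation, and it omits parenthesized expressions for brevity:

```python
# Hypothetical sketch of matching a node's feature set against a *simple*
# Slurm constraint: either all-AND ('a&b') or all-OR ('a|b'). Mixed and
# parenthesized expressions are not handled here.
def satisfies(features, constraint):
    if '&' in constraint and '|' in constraint:
        raise ValueError('only simple AND or OR constraints are handled here')

    if '|' in constraint:
        return any(c in features for c in constraint.split('|'))

    return all(c in features for c in constraint.split('&'))

print(satisfies({'gpu', 'nvme'}, 'gpu&nvme'))  # True
print(satisfies({'gpu'}, 'gpu|mc'))            # True
print(satisfies({'mc'}, 'gpu&nvme'))           # False
```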
Nodes from this pool are allocated according to different policies. If no node can be selected, the test will be marked as a failure with an appropriate message.
- --flex-alloc-nodes=POLICY¶
Set the flexible node allocation policy.
Available values are the following:

Any of the values supported by the --distribute option.

Any positive integer: flexible tests will be assigned as many tasks as needed in order to span over the specified number of nodes from the node pool.

Changed in version 3.1: It is now possible to pass an arbitrary node state as a flexible node allocation parameter.

Changed in version 4.6: Align the state selection with the --distribute option; see --distribute for more details. Slurm OR constraints and parenthesized expressions are supported in flexible node allocation.
Changed in version 4.7: The test is not marked as a failure if not enough nodes are available, but it is skipped instead. To enforce a failure, use --flex-alloc-strict.
- --flex-alloc-strict¶
Fail flexible tests if their minimum task requirement is not satisfied. Otherwise the tests will be skipped.
Added in version 4.7.
Options controlling ReFrame environment¶
ReFrame offers the ability to dynamically change its environment as well as the environment of tests. It does so by leveraging the selected system’s environment modules system.
- -M, --map-module=MAPPING¶
Apply a module mapping.
ReFrame allows manipulating test modules on-the-fly using module mappings. A module mapping has the form old_module: module1 [module2]... and will cause ReFrame to replace a module with another list of modules upon load time. For example, the mapping foo: foo/1.2 will load module foo/1.2 whenever module foo needs to be loaded. A mapping may also be self-referring, e.g., gnu: gnu gcc/10.1; however, cyclic dependencies in module mappings are not allowed and ReFrame will issue an error if it detects one. This option is especially useful for running tests using a newer version of a software or library.

This option may be specified multiple times, in which case multiple mappings will be applied.

This option can also be set using the RFM_MODULE_MAPPINGS environment variable or the module_mappings general configuration parameter.

Changed in version 3.3: If the mapping replaces a module collection, all new names must refer to module collections, too.
See also
Module collections with Environment Modules and Lmod.
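To make the mapping semantics concrete, the following sketch expands a module through a mapping table, allowing self-references but rejecting cycles. This is a hypothetical helper for illustration, not ReFrame's actual implementation:

```python
# Hypothetical sketch of resolving a module through a mapping table of the
# form {old: [new, ...]}: mappings are expanded recursively, self-references
# (e.g. 'gnu: gnu gcc/10.1') are allowed, but cycles raise an error,
# mirroring the behaviour described for the -M option.
def resolve(module, mappings, seen=frozenset()):
    if module in seen:
        raise ValueError(f'cyclic mapping detected for {module!r}')

    if module not in mappings:
        return [module]

    ret = []
    for m in mappings[module]:
        if m == module:
            ret.append(m)          # self-reference is allowed
        else:
            ret += resolve(m, mappings, seen | {module})

    return ret

print(resolve('foo', {'foo': ['foo/1.2']}))          # ['foo/1.2']
print(resolve('gnu', {'gnu': ['gnu', 'gcc/10.1']}))  # ['gnu', 'gcc/10.1']
```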
- -m, --module=NAME¶
Load environment module NAME before acting on any tests.

This option may be specified multiple times, in which case all specified modules will be loaded in order. ReFrame will not perform any automatic conflict resolution.

This option can also be set using the RFM_USER_MODULES environment variable or the user_modules general configuration parameter.
- --module-mappings=FILE¶
A file containing module mappings.
Each line of the file contains a module mapping in the form described in the -M option. This option may be combined with the -M option, in which case the module mappings specified here will be applied additionally.

This option can also be set using the RFM_MODULE_MAP_FILE environment variable or the module_map_file general configuration parameter.
- --module-path=PATH¶
Manipulate the MODULEPATH environment variable before acting on any tests.

If PATH starts with the - character, it will be removed from the MODULEPATH, whereas if it starts with the + character, it will be added to it. In all other cases, PATH will completely override MODULEPATH. This option may be specified multiple times, in which case all the paths specified will be added or removed in order.

Added in version 3.3.
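The add/remove/override semantics can be sketched as follows. This is a simplified illustration; in particular, whether + prepends or appends to the path list is an assumption here (we prepend):

```python
# Simplified sketch of the --module-path semantics described above:
# '-PATH' removes PATH from MODULEPATH, '+PATH' adds it (prepended here,
# as an assumption) and any other value replaces MODULEPATH entirely.
def apply_module_path(modulepath, arg):
    paths = modulepath.split(':') if modulepath else []
    if arg.startswith('-'):
        paths = [p for p in paths if p != arg[1:]]
    elif arg.startswith('+'):
        paths = [arg[1:]] + paths
    else:
        paths = [arg]

    return ':'.join(paths)

mp = '/opt/modules:/usr/share/modules'
mp = apply_module_path(mp, '+/home/user/modules')
mp = apply_module_path(mp, '-/usr/share/modules')
print(mp)  # /home/user/modules:/opt/modules
```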
- --non-default-craype¶
Test a non-default Cray Programming Environment.
Since CDT 19.11, this option can be used in conjunction with -m, which will load the target CDT. For example:

reframe -m cdt/20.03 --non-default-craype -r

This option causes ReFrame to properly set the LD_LIBRARY_PATH for such cases. It will emit the following code after all the environment modules of a test have been loaded:

export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH

This option can also be set using the RFM_NON_DEFAULT_CRAYPE environment variable or the non_default_craype general configuration parameter.
- --purge-env¶
Unload all environment modules before acting on any tests.
This will unload also sticky Lmod modules.
This option can also be set using the
RFM_PURGE_ENVIRONMENT
environment variable or thepurge_environment
general configuration parameter.
- -u, --unload-module=NAME¶
Unload environment module NAME before acting on any tests.

This option may be specified multiple times, in which case all specified modules will be unloaded in order.

This option can also be set using the RFM_UNLOAD_MODULES environment variable or the unload_modules general configuration parameter.
Options for generating tests dynamically¶
These options generate new tests dynamically from a set of previously selected tests. The way the tests are generated and how they interact with the test filtering options poses some limitations:
These tests do not have an associated test file and are different from their original tests, although they share the same base name. As a result, the --restore-session option cannot be used to restore dynamically generated tests.

Since these tests are generated after the test selection phase, the --ci-generate option cannot be used to generate a child pipeline, as the child pipeline uses the -n option to select the tests for running.
- --distribute[=NODESTATE]¶
Distribute the selected tests on all the nodes in state NODESTATE in their respective valid partitions.

ReFrame will parameterize and run the tests on the selected nodes. Effectively, it will dynamically create new tests that inherit from the original tests and add a new parameter named $nid which contains the list of nodes that the test must run on. The new tests are named with the following pattern: {orig_test_basename}_{partition_fullname}.

When determining the list of nodes on which to distribute the selected tests, ReFrame will take into account any job options passed through the -J option.

You can optionally specify the state of the nodes to consider when distributing the test through the NODESTATE argument:

all: Tests will run on all the nodes of their respective valid partitions regardless of the node state.

avail: Tests will run on all the nodes of their respective valid partitions that are available for running jobs. Note that a node currently allocated to another job is still considered "available."

NODESTATE: Tests will run on all the nodes of their respective valid partitions that are exclusively in state NODESTATE. If NODESTATE is not specified, idle is assumed.

NODESTATE*: Tests will run on all the nodes of their respective valid partitions that are at least in state NODESTATE.
The state of the nodes will be determined once, before beginning the execution of the tests, so it might be different at the time the tests are actually submitted.
Note
Currently, only single-node jobs can be distributed and only local or the Slurm-based backends support this feature.
Note
Distributing tests with dependencies is not supported, but you can distribute tests that use fixtures.
Note
This option is supported only for the local, squeue, slurm and ssh scheduler backends.

Added in version 3.11.0.
Added in version 4.6: The avail argument is introduced, as well as the ability to differentiate between exclusive and non-exclusive node states.

Changed in version 4.6: --distribute=NODESTATE now matches nodes that are exclusively in state NODESTATE, so that the default --distribute=idle will match only the Slurm nodes that are exclusively in the IDLE state. To achieve the previous behaviour, use --distribute=idle*.
- -P, --parameterize=[TEST.]VAR=VAL0,VAL1,...¶
Parameterize a test on an existing variable.
The test will behave as if the variable VAR were a parameter taking the values VAL0,VAL1,.... The values will be converted based on the type of the target variable VAR. The TEST. prefix will parameterize the variable VAR only for test TEST.

The -P option can be specified multiple times in order to parameterize multiple variables.

Note

Conversely to the -S option, which can set a variable in an arbitrarily nested fixture, the -P option can only parameterize the leaf test: it cannot be used to parameterize a fixture of the test.

Note

The -P option supports only tests that use fixtures. Tests that use raw dependencies are not supported.

Added in version 4.3.
- --repeat=N¶
Repeat the selected tests N times. This option can be used in conjunction with the --distribute option, in which case the selected tests will be repeated multiple times and distributed on individual nodes of the system's partitions.

Note
Repeating tests with dependencies is not supported, but you can repeat tests that use fixtures.
Added in version 3.12.0.
Miscellaneous options¶
- -C, --config-file=FILE¶
Use FILE as the configuration file for ReFrame.

This option can be passed multiple times, in which case multiple configuration files will be read and loaded successively. The base of the configuration chain is always the builtin configuration file, namely ${RFM_INSTALL_PREFIX}/reframe/core/settings.py. At any point, the user can "break" the chain of configuration files by prefixing the configuration file name with a colon, as in the following example: -C :/path/to/new_config.py. This will ignore any previously loaded configuration files and will only load the one specified. Note, however, that the builtin configuration file cannot be overridden; it will always be loaded first in the chain.

This option can also be set using the RFM_CONFIG_FILES environment variable.

In order to determine its final configuration, ReFrame first loads the builtin configuration file unconditionally and then starts looking for possible configuration file locations defined in the RFM_CONFIG_PATH environment variable. For each directory defined in RFM_CONFIG_PATH, ReFrame looks for a file named settings.py or settings.json inside it and loads it. If both a settings.py and a settings.json file are found, the Python configuration will be preferred. ReFrame finally processes any configuration files specified on the command line or in the RFM_CONFIG_FILES environment variable.

Changed in version 4.0.0.
- --failure-stats¶
Print failure statistics at the end of the run.
- -h, --help¶
Print a short help message and exit.
- --nocolor¶
Disable output coloring.
This option can also be set using the RFM_COLORIZE environment variable or the colorize general configuration parameter.
- --performance-report[=CMPSPEC]¶
Print a report summarizing the performance of all performance tests that have run in the current session.
For each test, all of its performance variables are reported and optionally compared to past results based on the CMPSPEC specified. If not specified, CMPSPEC defaults to now:now/last:/+job_nodelist+result, meaning that the current performance will not be compared to any past run and, additionally, the job_nodelist and the test result (pass or fail) will be listed.

For the exact syntax of CMPSPEC, refer to Querying past results.

Changed in version 4.7: The format of the performance report has changed and the optional CMPSPEC argument has been added.
- -q, --quiet¶
Decrease the verbosity level.
This option can be specified multiple times. Every time this option is specified, the verbosity level will be decreased by one. This option can be combined arbitrarily with the -v option, in which case the final verbosity level will be determined by the final combination. For example, specifying -qv will not change the verbosity level, since the two options cancel each other out, but -qqv is equivalent to -q. For a list of ReFrame's verbosity levels, see the description of the -v option.

Added in version 3.9.3.
- --session-extras KV_DATA¶
Annotate the current session with custom key/value metadata.
The key/value data is specified as a comma-separated list of key=value pairs. When listing stored sessions with the --list-stored-sessions option, any associated custom metadata will be presented.

This option can be specified multiple times, in which case the data from all options will be combined in a single list of key/value data.
Added in version 4.7.
- --system=NAME¶
Load the configuration for system NAME.

NAME must be a valid system name in the configuration file. It may also have the form SYSNAME:PARTNAME, in which case the configuration of system SYSNAME will be loaded, but as if it had PARTNAME as its sole partition. Of course, PARTNAME must be a valid partition of system SYSNAME. If this option is not specified, ReFrame will try to pick the correct configuration entry automatically. It does so by trying to match the hostname of the current machine against the hostname patterns defined in the hostnames system configuration parameter. The system with the first match becomes the current system.

This option can also be set using the RFM_SYSTEM environment variable.
- --table-format=csv|plain|pretty¶
Set the formatting of tabular output printed by the --performance-compare and --performance-report options and by the options controlling the stored sessions.

The acceptable values are the following:

csv: Generate CSV output.

plain: Generate a plain table without any vertical lines, allowing for easy grep-ing.

pretty: (default) Generate a pretty table.

Added in version 4.7.
- --upgrade-config-file=OLD[:NEW]¶
Convert the old-style configuration file OLD, place it into the new file NEW and exit.

If a new file is not given, a file in the system temporary directory will be created.
- -v, --verbose¶
Increase verbosity level of output.
This option can be specified multiple times. Every time this option is specified, the verbosity level will be increased by one. ReFrame has the following message levels, listed in increasing verbosity order: critical, error, warning, info, verbose and debug. The base verbosity level of the output is defined by the level stream logging handler configuration parameter.

This option can also be set using the RFM_VERBOSE environment variable or the verbose general configuration parameter.
Test Naming Scheme¶
Added in version 3.10.0.
This section describes the test naming scheme. This scheme has superseded the old one in ReFrame 4.0.
Each ReFrame test is assigned a unique name, which will be used internally by the framework to reference the test. Any test-specific path component will use that name, too. It is formed as follows for the various types of tests:
Regular tests: The unique name is simply the test class name. This implies that you cannot load two tests with the same class name within the same run session even if these tests reside in separate directories.
Parameterized tests: The unique name is formed by the test class name followed by an _ and the variant number of the test. Each point in the parameter space of the test is assigned a unique variant number.

Fixtures: The unique name is formed by the test class name followed by an _ and a hash. The hash is constructed by combining the information of the fixture variant (if the fixture is parameterized), the fixture's scope and any fixture variables that were explicitly set.

Since unique names can be cryptic, they are not listed by the -l option, but they are listed when a detailed listing is requested with the -L option.
A human readable version of the test name, which is called the display name, is also constructed for each test. This name encodes all the parameterization information as well as the fixture-specific information (scopes, variables). The format of the display name is the following in BNF notation:
<display_name> ::= <test_class_name> (<params>)* (<scope> ("'"<fixtvar>)+)?
<params> ::= "%" <parametrization> "=" <pvalue>
<parametrization> ::= (<fname> ".")* <pname>
<scope> ::= "~" <scope_descr>
<scope_descr> ::= <first> ("+" <second>)*
<test_class_name> ::= (* as in Python *)
<fname> ::= (* string *)
<pname> ::= (* string *)
<pvalue> ::= (* string *)
<first> ::= (* string *)
<second> ::= (* string *)
<fixtvar> ::= (* string *)
The following is an example of a fictitious complex test that is itself parameterized and depends on parameterized fixtures as well.
import reframe as rfm
class MyFixture(rfm.RunOnlyRegressionTest):
p = parameter([1, 2])
class X(rfm.RunOnlyRegressionTest):
foo = variable(int, value=1)
@rfm.simple_test
class TestA(rfm.RunOnlyRegressionTest):
f = fixture(MyFixture, scope='test', action='join')
x = parameter([3, 4])
t = fixture(MyFixture, scope='test')
l = fixture(X, scope='environment', variables={'foo': 10})
valid_systems = ['*']
valid_prog_environs = ['*']
Here is how this test is listed where the various components of the display name can be seen:
- TestA %x=4 %l.foo=10 %t.p=2 /8804be5d
^MyFixture %p=1 ~TestA_3 't 'f /f027ee75
^MyFixture %p=2 ~TestA_3 't 'f /830323a4
^X %foo=10 ~generic:default+builtin 'l /7dae3cc5
- TestA %x=3 %l.foo=10 %t.p=2 /89f6f5d1
^MyFixture %p=1 ~TestA_2 't 'f /02368516
^MyFixture %p=2 ~TestA_2 't 'f /854b99b5
^X %foo=10 ~generic:default+builtin 'l /7dae3cc5
- TestA %x=4 %l.foo=10 %t.p=1 /af9b2941
^MyFixture %p=2 ~TestA_1 't 'f /f0383f7f
^MyFixture %p=1 ~TestA_1 't 'f /d07f4281
^X %foo=10 ~generic:default+builtin 'l /7dae3cc5
- TestA %x=3 %l.foo=10 %t.p=1 /a9e50aa3
^MyFixture %p=2 ~TestA_0 't 'f /b894ab05
^MyFixture %p=1 ~TestA_0 't 'f /ca376ca8
^X %foo=10 ~generic:default+builtin 'l /7dae3cc5
Found 4 check(s)
Notice that the variable name to which every fixture is bound in its parent test is also listed as '<varname>. This is useful for setting variables down the fixture hierarchy using the -S option.
Display names may not always be unique. Assume the following test:
class MyTest(RegressionTest):
p = parameter([1, 1, 1])
This generates three different tests with different unique names, but their display name is the same for all: MyTest %p=1. Notice that this example would lead to a name conflict under the old naming scheme, since all tests would be named MyTest_1.
Each test is also associated with a hash code that is derived from the test name, its parameters and their values.
As in the example listing above, the hash code of each test is printed with the -l option, and individual tests can be selected by their hash using the -n option, e.g., -n /1c51609b.
The stage and output directories, as well as the performance log file of the filelog performance log handler, will use the hash code for the test-specific directories and files.
This might lead to conflicts for tests such as the one above when executing them with the asynchronous execution policy, but it ensures consistency of performance record files when parameter values are added to or deleted from a test parameter.
More specifically, the test's hash will not change if a new parameter value is added or deleted, or even if the parameter values are shuffled.
Test variants, on the other hand, are more volatile and can change with such modifications.
Also, users should not rely on how variant numbers are assigned to a test, as this is an implementation detail.
Changed in version 4.0.0: A hash code is associated with each test.
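One way such an order-independent hash could be derived is sketched below. This is purely illustrative, since the actual hashing scheme is an implementation detail:

```python
import hashlib

# Hypothetical sketch of a variant hash that depends only on the test name
# and the variant's own parameter values. Adding, deleting or shuffling
# *other* parameter values then leaves an existing variant's hash unchanged.
# Illustration only -- not ReFrame's actual hashing scheme.
def variant_hash(test_name, params):
    key = test_name + '%'.join(f'{k}={v}' for k, v in sorted(params.items()))
    return hashlib.sha1(key.encode()).hexdigest()[:8]

# The hash of the p=1 variant is the same whether p takes values [1]
# or the shuffled, extended [3, 2, 1]
hashes_before = {variant_hash('MyTest', {'p': v}) for v in [1]}
hashes_after = {variant_hash('MyTest', {'p': v}) for v in [3, 2, 1]}
print(hashes_before <= hashes_after)  # True
```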
Differences from the old naming scheme¶
Prior to version 3.10, ReFrame used to encode the parameter values of an instance of a parameterized test in its name. It did so by taking the string representation of the value and replacing any non-alphanumeric character with an underscore. This could lead to very long and hard-to-read names when a test defined multiple parameters or when a parameter type was more complex. Very long test names also meant very long path names, which could lead to problems and random failures. Fixtures followed a similar naming pattern, making them hard to debug.
Result storage¶
Added in version 4.7.
ReFrame stores the results of every session that has executed at least one test into a database.
There is only one storage backend supported at the moment and this is SQLite.
The full session information as recorded in a run report file (see --report-file) is stored in the database.
The test cases of the session are indexed by their run job completion time for quick retrieval of all the test cases that have run in a certain period of time.
The database file is controlled by the sqlite_db_file configuration parameter, and multiple ReFrame processes can access it safely simultaneously.
There are several command-line options that allow users to query the results database, such as --list-stored-sessions, --list-stored-testcases, --describe-stored-sessions etc.
Other options that access the results database are --performance-compare and --performance-report, which compare the performance results of the same test cases over different periods of time or from different sessions.
Check the Commands section for the complete list and details of each option related to the results database.
Since the report file information is now kept in the results database, there is no need to keep the report files separately, although this remains the default behavior for backward compatibility.
You can disable the report generation by turning off the generate_file_reports configuration parameter.
The file report of any session can be retrieved from the database with the --describe-stored-sessions option.
Querying past results¶
Added in version 4.7.
ReFrame provides several options for querying and inspecting past sessions and test case results. All those options follow a common syntax that builds on top of the following elements:
Selection of sessions and test cases
Grouping of test cases and performance aggregations
Selection of test case attributes to present
Throughout the documentation, we use the <select> notation for (1), <aggr> for (2) and <cols> for (3).
For the options performing aggregations on test case performance, we use the notation <cmpspec>, which can take one of the following forms:

<cmpspec> := <select>/<select>/<aggr>/<cols> for explicit performance comparisons (see --performance-compare).

<cmpspec> := <select>/<aggr>/<cols> for implicit performance comparisons (see --performance-report) or for simple performance aggregations (see --list-stored-testcases).
In the following, we present in detail the exact syntax of each of the above syntactic elements.
Selecting sessions and test cases¶
The syntax for selecting sessions or test cases can take one of the following forms:
<select> := <session_uuid>: An explicit session UUID.

<select> := ?<session_filter>: A valid Python expression on the available session information, including any user-specific session extras (see also --session-extras), e.g., ?'xyz=="123"'. In this case, the test cases from all sessions matching the filter will be retrieved.

<select> := <time_period>: A time period specification (see below for details).
Time periods¶
The general syntax of a time period specification is the following:

<time_period> := <ts_start>:<ts_end>

<ts_start> and <ts_end> are timestamps denoting the start and end of the requested period. More specifically, the syntax of each timestamp is the following:

<abs_timestamp>[+|-<amount>w|d|h|m]

<abs_timestamp> is an absolute timestamp in one of the following strptime-compatible formats, or the special value now: %Y%m%d, %Y%m%dT%H%M, %Y%m%dT%H%M%S, %Y%m%dT%H%M%S%z.

Optionally, a shift argument can be appended with a + or - sign, followed by the number of weeks (w), days (d), hours (h) or minutes (m). For example, the period of the last 10 days can be specified as now-10d:now. Similarly, the period of the week starting on August 5, 2024 can be specified as 20240805:20240805+1w.
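A single timestamp of this syntax can be parsed with Python's own strptime, as in the following simplified sketch (the %z format and input validation are omitted for brevity):

```python
import re
from datetime import datetime, timedelta

# Sketch of parsing one <timestamp>: an absolute timestamp (or 'now'),
# optionally shifted by +/- N weeks, days, hours or minutes.
# Simplification for illustration; the %z format is not handled.
FORMATS = ['%Y%m%dT%H%M%S', '%Y%m%dT%H%M', '%Y%m%d']
UNITS = {'w': 'weeks', 'd': 'days', 'h': 'hours', 'm': 'minutes'}

def parse_timestamp(spec, now=None):
    base, sign, amount, unit = re.fullmatch(
        r'(.+?)(?:([+-])(\d+)([wdhm]))?', spec
    ).groups()
    if base == 'now':
        ts = now or datetime.now()
    else:
        for fmt in FORMATS:
            try:
                ts = datetime.strptime(base, fmt)
                break
            except ValueError:
                continue
        else:
            raise ValueError(f'invalid timestamp: {base}')

    if sign:
        delta = timedelta(**{UNITS[unit]: int(amount)})
        ts = ts + delta if sign == '+' else ts - delta

    return ts

print(parse_timestamp('20240805+1w'))  # 2024-08-12 00:00:00
```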
Grouping test cases and aggregating performance¶
The aggregation specification follows the general syntax:
<aggr> := <aggr_fn>:[<cols>]

<aggr_fn> is a symbolic name for the function used to aggregate the performance of the grouped test cases. It can take one of the following values:

first: retrieve the performance data of the first test case only

last: retrieve the performance data of the last test case only

max: retrieve the maximum of all test cases

mean: calculate the mean over all test cases

median: retrieve the median of all test cases

min: retrieve the minimum of all test cases
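The aggregation functions map naturally onto Python's statistics module; the following is an illustration only, with hypothetical performance values:

```python
import statistics

# Sketch of the aggregation functions listed above, applied to the
# performance values of a group of test cases (illustration only).
AGGR_FNS = {
    'first': lambda vals: vals[0],
    'last': lambda vals: vals[-1],
    'min': min,
    'max': max,
    'mean': statistics.mean,
    'median': statistics.median,
}

perf = [102.0, 98.0, 110.0, 95.0]  # hypothetical performance values
print(AGGR_FNS['mean'](perf))    # 101.25
print(AGGR_FNS['median'](perf))  # 100.0
```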
The test cases are by default grouped by the following attributes:
The test name

The system name

The partition name

The environment name

The performance variable name (see @performance_function and perf_variables)

The performance variable unit
The <cols> subspec specifies how the test cases will be grouped and can take one of the following two forms:

+attr1+attr2...: In this form, the test cases will be grouped based on the default group-by attributes plus the user-specified ones (attr1, attr2 etc.).

attr1,attr2,...: In this form, the test cases will be grouped based on the user-specified attributes only (attr1, attr2 etc.).
As an attribute for grouping test cases, any loggable test variable or parameter can be selected, as well as the following pseudo-attributes, which are extracted or calculated on the fly:

basename: the test's name stripped of any parameters. This is equivalent to the test's class name.

pvar: the name of the performance variable

pval: the value of the performance variable (i.e., the obtained performance)

pref: the reference value of the performance variable

plower: the lower threshold of the performance variable as an absolute value

pupper: the upper threshold of the performance variable as an absolute value

punit: the unit of the performance variable

pdiff: the difference as a percentage between the base and target performance values when a performance comparison is attempted. More specifically, pdiff = (pval_base - pval_target) / pval_target.

psamples: the number of test cases aggregated

sysenv: the system/partition/environment combination as a single string of the form {system}:{partition}+{environ}
Note
For performance comparisons, either implicit or explicit, the aggregation applies to both the base and target test cases.
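As a worked example of the pdiff formula above:

```python
# Worked example of the pdiff formula:
# pdiff = (pval_base - pval_target) / pval_target, expressed here as a
# fraction and formatted as a percentage.
def pdiff(pval_base, pval_target):
    return (pval_base - pval_target) / pval_target

# Base run achieved 110.0, target (e.g. last week's) run achieved 100.0
print(f'{pdiff(110.0, 100.0):+.1%}')  # +10.0%
```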
Presenting the results¶
The selection of the final columns of the results table is specified with the same syntax as the <cols> subspec described above.
However, for performance comparisons, ReFrame will generate two columns for every attribute in the subspec that is not also a group-by attribute, suffixed with _A and _B.
These columns contain the aggregated values of the corresponding attributes.
Note that only the aggregation of pval (i.e., the test case performance) can be controlled (see Grouping test cases and aggregating performance); all other attributes are aggregated by joining their unique values.
Examples¶
Here are some examples of performance comparison specs:
Compare the test cases of the session 7a70b2da-1544-4ac4-baf4-0fcddd30b672 with the mean performance of the last 10 days: 7a70b2da-1544-4ac4-baf4-0fcddd30b672/now-10d:now/mean:/

Compare the best performance of the test cases run on two specific days, grouping by the node list and also reporting the test result: 20240701:20240701+1d/20240705:20240705+1d/max:+job_nodelist/+result
Grammar¶
The formal grammar of the comparison syntax in BNF form is the following. Note that parts that have a grammar defined elsewhere (e.g., Python attributes and expressions, UUIDs etc.) are omitted.
<cmpspec> ::= (<select> "/")? <select> "/" <aggr> "/" <cols>
<aggr> ::= <aggr_fn> ":" <cols>
<aggr_fn> ::= "first" | "last" | "max" | "min" | "mean" | "median"
<cols> ::= <extra_cols> | <explicit_cols>
<extra_cols> ::= ("+" <attr>)+
<explicit_cols> ::= <attr> ("," <attr>)*
<attr> ::= /* any Python attribute */
<select> ::= <session_uuid> | <session_filter> | <time_period>
<session_uuid> ::= /* any valid UUID */
<session_filter> ::= "?" <python_expr>
<python_expr> ::= /* any valid Python expression */
<time_period> ::= <timestamp> ":" <timestamp>
<timestamp> ::= ("now" | <abs_timestamp>) (("+" | "-") <number> ("w" | "d" | "h" | "m"))?
<abs_timestamp> ::= /* any timestamp of the format `%Y%m%d`, `%Y%m%dT%H%M`, `%Y%m%dT%H%M%S` */
<number> ::= [0-9]+
Environment¶
Several aspects of ReFrame can be controlled through environment variables.
Usually environment variables have counterparts in command line options or configuration parameters.
In such cases, command-line options take precedence over environment variables, which in turn precede configuration parameters.
Boolean environment variables can take any of the values true, yes, y (case insensitive) or 1 to denote true, and any of the values false, no, n (case insensitive) or 0 to denote false.

Changed in version 3.9.2: The values 1 and 0 are now valid for boolean environment variables.
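The convention above can be sketched as follows (illustration only, not ReFrame's actual parser):

```python
# Sketch of the boolean environment variable convention described above:
# true/yes/y/1 denote true, false/no/n/0 denote false, case-insensitively.
def parse_bool(value):
    v = value.lower()
    if v in ('true', 'yes', 'y', '1'):
        return True

    if v in ('false', 'no', 'n', '0'):
        return False

    raise ValueError(f'not a boolean value: {value!r}')

print(parse_bool('YES'))  # True
print(parse_bool('0'))    # False
```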
Here is an alphabetical list of the environment variables recognized by ReFrame. Whenever an environment variable is associated with a configuration option, its default value is omitted as it is the same.
- RFM_AUTODETECT_FQDN¶
Use the fully qualified domain name as the hostname. This is a boolean variable and defaults to 0.

Associated command line option: N/A

Associated configuration parameter: N/A

Added in version 3.11.0.

Changed in version 4.0.0: This variable now defaults to 0.

Deprecated since version 4.3: Please use RFM_AUTODETECT_METHODS=py::fqdn in the future.
- RFM_AUTODETECT_METHOD¶
Method to use for detecting the current system and picking the right configuration. The following values can be used:

hostname: The hostname command will be used to detect the current system. This is the default value, if not specified.

Associated command line option: N/A

Associated configuration parameter: N/A

Added in version 3.11.0.

Deprecated since version 4.3: This has no effect. For setting multiple auto-detection methods, please use RFM_AUTODETECT_METHODS.
- RFM_AUTODETECT_METHODS¶
A comma-separated list of system auto-detection methods. Please refer to the autodetect_methods configuration parameter for more information on how to set this variable.

Added in version 4.3.
- RFM_AUTODETECT_XTHOSTNAME¶
Use the /etc/xthostname file, if present, to retrieve the current system's name. If the file cannot be found, the hostname will be retrieved using the hostname command. This is a boolean variable and defaults to 0.

This option is only meaningful for Cray systems.

Associated command line option: N/A

Associated configuration parameter: N/A

Added in version 3.11.0.

Changed in version 4.0.0: This variable now defaults to 0.

Deprecated since version 4.3: Please use RFM_AUTODETECT_METHODS='cat /etc/xthostname,hostname' in the future.
- RFM_CHECK_SEARCH_PATH¶
A colon-separated list of filesystem paths where ReFrame should search for tests.
Associated command line option
Associated configuration parameter
- RFM_CHECK_SEARCH_RECURSIVE¶
Search for test files recursively in directories found in the check search path.
Associated command line option
Associated configuration parameter
- RFM_CLEAN_STAGEDIR¶
Clean stage directory of tests before populating it.
Added in version 3.1.
Associated command line option
Associated configuration parameter
- RFM_COLORIZE¶
Enable output coloring.
Associated command line option
Associated configuration parameter
- RFM_COMPRESS_REPORT¶
Compress the generated run report file.
Associated command line option
Associated configuration parameter
Added in version 3.12.0.
- RFM_CONFIG_FILE¶
Set the configuration file for ReFrame.
Associated command line option
Associated configuration parameter
N/A
Deprecated since version 4.0.0: Please use RFM_CONFIG_FILES instead.
- RFM_CONFIG_FILES¶
A colon-separated list of configuration files to load. Refer to the documentation of the --config-file option for a detailed description of how ReFrame loads its configuration.
Associated command line option
Associated configuration parameter
N/A
Added in version 4.0.0.
- RFM_CONFIG_PATH¶
A colon-separated list of directories that contain ReFrame configuration files. Refer to the documentation of the --config-file option for a detailed description of how ReFrame loads its configuration.
Associated command line option
N/A
Associated configuration parameter
N/A
Added in version 4.0.0.
- RFM_FLEX_ALLOC_STRICT¶
Fail flexible tests if their minimum task requirement is not satisfied.
Associated command line option
Associated configuration parameter
Added in version 4.7.
- RFM_GENERATE_FILE_REPORTS¶
Store session reports also in files.
Associated command line option
N/A
Associated configuration parameter
generate_file_reports
Added in version 4.7.
- RFM_GIT_TIMEOUT¶
Timeout value in seconds used when checking if a git repository exists.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.9.0.
- RFM_GRAYLOG_ADDRESS¶
The address of the Graylog server to which performance logs are sent. The address is specified in host:port format.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.1.
Added in version 3.1.
- RFM_HTTPJSON_URL¶
The URL of the server to which performance logs are sent in JSON format. The URL is specified in scheme://host:port/path format.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.6.1.
- RFM_IGNORE_REQNODENOTAVAIL¶
Do not treat jobs in a pending state with the reason ReqNodeNotAvail specially (Slurm only).
Associated command line option
N/A
Associated configuration parameter
- RFM_INSTALL_PREFIX¶
The framework’s installation prefix. Users cannot set this variable; ReFrame always sets it upon startup.
- RFM_KEEP_STAGE_FILES¶
Keep test stage directories even for tests that finish successfully.
Associated command line option
Associated configuration parameter
- RFM_MODULE_MAP_FILE¶
A file containing module mappings.
Associated command line option
Associated configuration parameter
- RFM_MODULE_MAPPINGS¶
A comma-separated list of module mappings.
Associated command line option
Associated configuration parameter
- RFM_NON_DEFAULT_CRAYPE¶
Test a non-default Cray Programming Environment.
Associated command line option
Associated configuration parameter
- RFM_OUTPUT_DIR¶
Directory prefix for test output files.
Associated command line option
Associated configuration parameter
- RFM_PERF_INFO_LEVEL¶
Logging level at which the immediate performance information is logged.
Associated command line option
N/A
Associated configuration parameter
- RFM_PERF_REPORT_SPEC¶
The default CMPSPEC of the --performance-report option.
Associated command line option
Associated configuration parameter
perf_report_spec
Added in version 4.7.
- RFM_PERFLOG_DIR¶
Directory prefix for logging performance data.
Associated command line option
Associated configuration parameter
- RFM_PIPELINE_TIMEOUT¶
Timeout in seconds for advancing the pipeline in the asynchronous execution policy. See Tweaking the throughput and interactivity of test jobs in the asynchronous execution policy for more guidance on how to set this.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.10.0.
- RFM_PREFIX¶
General directory prefix for ReFrame-generated directories.
Associated command line option
Associated configuration parameter
- RFM_PURGE_ENVIRONMENT¶
Unload all environment modules before acting on any tests.
Associated command line option
Associated configuration parameter
- RFM_REMOTE_DETECT¶
Auto-detect processor information of remote partitions as well.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.7.0.
- RFM_REMOTE_WORKDIR¶
The temporary directory prefix that will be used to create a fresh ReFrame clone, in order to auto-detect the processor information of a remote partition.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.7.0.
- RFM_REPORT_FILE¶
The file where ReFrame will store its report.
Added in version 3.1.
Associated command line option
Associated configuration parameter
- RFM_REPORT_JUNIT¶
The file where ReFrame will generate a JUnit XML report.
Added in version 3.6.0.
Associated command line option
Associated configuration parameter
- RFM_RESOLVE_MODULE_CONFLICTS¶
Resolve module conflicts automatically.
Added in version 3.6.0.
Associated command line option
N/A
Associated configuration parameter
- RFM_SAVE_LOG_FILES¶
Save ReFrame log files in the output directory before exiting.
Associated command line option
Associated configuration parameter
- RFM_SCHED_ACCESS_IN_SUBMIT¶
Pass access options in the submission command (relevant for LSF, OAR, PBS and Slurm).
Associated command line option
N/A
Associated configuration parameter
sched_access_in_submit
Added in version 4.7.
- RFM_STAGE_DIR¶
Directory prefix for staging test resources.
Associated command line option
Associated configuration parameter
- RFM_SQLITE_CONN_TIMEOUT¶
Timeout for SQLite database connections.
Associated command line option
N/A
Associated configuration parameter
Added in version 4.7.
- RFM_SQLITE_DB_FILE¶
The SQLite database file for storing test results.
Associated command line option
N/A
Associated configuration parameter
Added in version 4.7.
- RFM_SQLITE_DB_FILE_MODE¶
The permissions of the SQLite database file in octal form.
Associated command line option
N/A
Associated configuration parameter
Added in version 4.7.
- RFM_SYSLOG_ADDRESS¶
The address of the Syslog server to which performance logs are sent. The address is specified in host:port format. If no port is specified, the address refers to a UNIX socket.
Associated command line option
N/A
Associated configuration parameter
Added in version 3.1.
- RFM_SYSTEM¶
Set the current system.
Associated command line option
Associated configuration parameter
N/A
- RFM_TABLE_FORMAT¶
Set the format of the tables printed by various options accessing the results storage.
Associated command line option
Associated configuration parameter
Added in version 4.7.
- RFM_TIMESTAMP_DIRS¶
Append a timestamp to the output and stage directory prefixes.
Associated command line option
Associated configuration parameter
- RFM_TRAP_JOB_ERRORS¶
Trap job errors in submitted scripts and fail tests automatically.
Associated configuration parameter
Added in version 3.9.0.
- RFM_UNLOAD_MODULES¶
A comma-separated list of environment modules to be unloaded before acting on any tests.
Associated command line option
Associated configuration parameter
- RFM_USE_LOGIN_SHELL¶
Use a login shell for the generated job scripts.
Associated command line option
N/A
Associated configuration parameter
- RFM_USER_MODULES¶
A comma-separated list of environment modules to be loaded before acting on any tests.
Associated command line option
Associated configuration parameter
Configuration File¶
The configuration file of ReFrame defines the systems and environments to test as well as parameters controlling the framework’s behavior.
ReFrame loads multiple configuration files to determine its final configuration.
First, it unconditionally loads its builtin configuration, which is located in ${RFM_INSTALL_PREFIX}/reframe/core/settings.py.
If the RFM_CONFIG_PATH environment variable is defined, ReFrame will look in every directory in that path for configuration files named either settings.py or settings.json (in that order) and will load them.
Finally, the --config-file option is processed and any configuration files specified there are also loaded.
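As an illustration of how such a colon-separated variable is interpreted (the directory names below are made up for this sketch), each entry in RFM_CONFIG_PATH is a directory that is searched for settings.py first and settings.json second:

```shell
# Hypothetical configuration directories, for illustration only.
export RFM_CONFIG_PATH="$HOME/.reframe/config:/opt/site/reframe"

# Split the colon-separated list into its directory entries.
IFS=: read -ra config_dirs <<< "$RFM_CONFIG_PATH"
for d in "${config_dirs[@]}"; do
    echo "search $d/settings.py, then $d/settings.json"
done
```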
For a complete reference of the available configuration options, please refer to the reframe.settings(8) man page.
Reporting Bugs¶
For bugs, feature requests, or help, please open an issue on GitHub: <https://github.com/reframe-hpc/reframe>
See Also¶
See full documentation online: <https://reframe-hpc.readthedocs.io/>