Tutorial 6: Tips and Tricks

New in version 3.4.

This tutorial focuses on some lesser-known aspects of ReFrame’s command-line interface that can be helpful.

Debugging

ReFrame tests are Python classes inside Python source files, so the usual debugging techniques for Python apply, but the ReFrame frontend will filter some errors and stack traces by default in order to keep the output clean. Generally, ReFrame will not print the full stack trace for user programming errors and will not block the test loading process. If a test has errors and cannot be loaded, an error message will be printed and the loading of the remaining tests will continue. In the following, we have inserted a small typo in the hello2.py tutorial example:

./bin/reframe -c tutorials/basics/hello -R -l
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -c tutorials/basics/hello -R -l'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     '<builtin>'
  check search path: (R) '/home/user/Repositories/reframe/tutorials/basics/hello'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

./bin/reframe: skipping test file '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py': name error: tutorials/basics/hello/hello2.py:13: name 'paramter' is not defined
    lang = paramter(['c', 'cpp'])
 (rerun with '-v' for more information)
[List of matched checks]
- HelloTest
Found 1 check(s)

Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-bzqy3nc7.log'

Notice how ReFrame also prints the source code line that caused the error. This is not always possible, however: ReFrame cannot always track a user error back to its source, and this is particularly true for ReFrame-specific syntactic elements, such as the class builtins. In such cases, ReFrame will print only the error message, without the source code context. In the following example, we introduce a typo in the argument of the @run_before decorator:
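
A sketch of what such a typo might look like (the hook body is adapted from the hello2.py example shown later in this tutorial):

    @run_before('compil')  # typo: the intended pipeline stage is 'compile'
    def set_sourcepath(self):
        self.sourcepath = f'hello.{self.lang}'

Listing the tests then produces only the error message, without the offending source line: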

./bin/reframe: skipping test file '/Users/user/Repositories/reframe/tutorials/basics/hello/hello2.py': reframe syntax error: invalid pipeline stage specified: 'compil' (rerun with '-v' for more information)
[List of matched checks]
- HelloTest (found in '/Users/user/Repositories/reframe/tutorials/basics/hello/hello1.py')
Found 1 check(s)

As suggested by the warning message, passing -v will give you the stack trace for each failing test, as well as more information about what is going on during loading.

./bin/reframe -c tutorials/basics/hello -R -l -v
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -c tutorials/basics/hello -R -l -v'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     '<builtin>'
  check search path: (R) '/home/user/Repositories/reframe/tutorials/basics/hello'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

./bin/reframe: skipping test file '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py': name error: tutorials/basics/hello/hello2.py:13: name 'paramter' is not defined
    lang = paramter(['c', 'cpp'])
 (rerun with '-v' for more information)
Traceback (most recent call last):
  File "/home/user/Repositories/reframe/reframe/frontend/loader.py", line 237, in load_from_file
    util.import_module_from_file(filename, force)
  File "/home/user/Repositories/reframe/reframe/utility/__init__.py", line 109, in import_module_from_file
    return importlib.import_module(module_name)
  File "/usr/local/Cellar/python@3.9/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py", line 12, in <module>
    class HelloMultiLangTest(rfm.RegressionTest):
  File "/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py", line 13, in HelloMultiLangTest
    lang = paramter(['c', 'cpp'])
NameError: name 'paramter' is not defined

Loaded 1 test(s)
Generated 1 test case(s)
Filtering test cases(s) by name: 1 remaining
Filtering test cases(s) by tags: 1 remaining
Filtering test cases(s) by other attributes: 1 remaining
Final number of test cases: 1
[List of matched checks]
- HelloTest
Found 1 check(s)

Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-l21cjjas.log'

Tip

The -v option can be given multiple times to increase the verbosity level further.

Debugging deferred expressions

Although deferred expressions that are used in sanity and performance functions behave similarly to normal Python expressions, you need to understand their implicit evaluation rules. One of these rules is that str() triggers implicit evaluation, so using the standard print() function with a deferred expression may give unexpected results if that expression is not yet meant to be evaluated. For this reason, ReFrame offers a sanity-function counterpart of print(), which allows you to safely print deferred expressions.

Let’s see that in practice, by printing the filename of the standard output for the HelloMultiLangTest test. The stdout attribute is a deferred expression and it will get its value later on, while the test executes. Trying to use the standard print() function here would be of little help, since it would simply give us None, which is the value of stdout when the test is created.

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HelloMultiLangTest(rfm.RegressionTest):
    lang = parameter(['c', 'cpp'])
    valid_systems = ['*']
    valid_prog_environs = ['*']

    @run_after('compile')
    def set_sourcepath(self):
        self.sourcepath = f'hello.{self.lang}'

    @sanity_function
    def validate_output(self):
        return sn.assert_found(r'Hello, World\!', sn.print(self.stdout))

If we run the test, we can see that the correct standard output filename will be printed after sanity:

./bin/reframe -C tutorials/config/settings.py -c tutorials/basics/hello/hello2.py -r
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -C tutorials/config/settings.py -c tutorials/basics/hello/hello2.py -r'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     'tutorials/config/settings.py'
  check search path: '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

[==========] Running 2 check(s)
[==========] Started on Sun Jan 23 00:11:07 2022

[----------] start processing checks
[ RUN      ] HelloMultiLangTest %lang=cpp @catalina:default+gnu
[ RUN      ] HelloMultiLangTest %lang=cpp @catalina:default+clang
[ RUN      ] HelloMultiLangTest %lang=c @catalina:default+gnu
[ RUN      ] HelloMultiLangTest %lang=c @catalina:default+clang
rfm_HelloMultiLangTest_cpp_job.out
[       OK ] (1/4) HelloMultiLangTest %lang=cpp @catalina:default+gnu
rfm_HelloMultiLangTest_cpp_job.out
[       OK ] (2/4) HelloMultiLangTest %lang=cpp @catalina:default+clang
rfm_HelloMultiLangTest_c_job.out
[       OK ] (3/4) HelloMultiLangTest %lang=c @catalina:default+gnu
rfm_HelloMultiLangTest_c_job.out
[       OK ] (4/4) HelloMultiLangTest %lang=c @catalina:default+clang
[----------] all spawned checks have finished

[  PASSED  ] Ran 4/4 test case(s) from 2 check(s) (0 failure(s), 0 skipped)
[==========] Finished on Sun Jan 23 00:11:10 2022
Run report saved in '/home/user/.reframe/reports/run-report.json'
Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-jumlrg66.log'

Debugging sanity and performance patterns

When creating a new test that requires complex output parsing for either the sanity or the performance pipeline stage, tuning the functions decorated by @sanity_function or @performance_function may involve some trial and error to debug the complex regular expressions required. For lightweight tests which execute in a few seconds, this trial and error may not be an issue at all. However, when dealing with tests which take longer to run, this approach can quickly become tedious and inefficient.

Tip

When dealing with make-based projects which take a long time to compile, you can use the command line option --dont-restage in order to speed up the compile stage in subsequent runs.

When a test fails, ReFrame will keep the test output in the stage directory after its execution, which means that one can load this output into a Python shell or another helper script without having to rerun the expensive test. If the test is not failing but the user still wants to experiment with or modify the existing sanity or performance functions, the command line option --keep-stage-files can be used when running ReFrame to avoid deleting the stage directory. With the executable’s output available in the stage directory, one can simply use the re module to debug regular expressions as shown below.

>>> import re

>>> # Read the test's output
>>> with open(the_output_file, 'r') as f:
...     test_output = ''.join(f.readlines())
...
>>> # Evaluate the regular expression
>>> re.findall(the_regex_pattern, test_output)

As an alternative to the re module, one could use the sanity utilities provided by ReFrame directly from the Python shell. In order to do so, if ReFrame was installed manually using the bootstrap.sh script, one will have to make the Python modules from the external directory accessible to the Python shell, as shown below.

>>> import sys
>>> import os

>>> # Make the external modules available
>>> sys.path = [os.path.abspath('external')] + sys.path

>>> # Import ReFrame-provided sanity functions
>>> import reframe.utility.sanity as sn

>>> # Evaluate the regular expression
>>> assert sn.evaluate(sn.assert_found(the_regex_pattern, the_output_file))

Debugging test loading

If you are new to ReFrame, you might sometimes wonder why your tests are not loading or why they are not running on the partition they were supposed to run on. This can be due to ReFrame picking the wrong configuration entry or to your test not being written properly (not decorated, no valid_systems set, etc.). If you try to load a test file and list its tests while increasing the verbosity level twice, you will get enough output to help you debug such issues. Let’s try loading the tutorials/basics/hello/hello2.py file:

./bin/reframe -C tutorials/config/settings.py -c tutorials/basics/hello/hello2.py -l -vv
Loading user configuration
Loading configuration file: 'tutorials/config/settings.py'
Detecting system
Looking for a matching configuration entry for system 'host'
Configuration found: picking system 'generic'
Selecting subconfig for 'generic'
Initializing runtime
Selecting subconfig for 'generic:default'
Initializing system partition 'default'
Selecting subconfig for 'generic'
Initializing system 'generic'
Initializing modules system 'nomod'
detecting topology info for generic:default
> found topology file '/home/user/.reframe/topology/generic-default/processor.json'; loading...
> device auto-detection is not supported
[ReFrame Environment]
  RFM_CHECK_SEARCH_PATH=<not set>
  RFM_CHECK_SEARCH_RECURSIVE=<not set>
  RFM_CLEAN_STAGEDIR=<not set>
  RFM_COLORIZE=n
  RFM_COMPACT_TEST_NAMES=n
  RFM_CONFIG_FILE=<not set>
  RFM_DUMP_PIPELINE_PROGRESS=<not set>
  RFM_GIT_TIMEOUT=<not set>
  RFM_GRAYLOG_ADDRESS=<not set>
  RFM_HTTPJSON_URL=<not set>
  RFM_IGNORE_CHECK_CONFLICTS=<not set>
  RFM_IGNORE_REQNODENOTAVAIL=<not set>
  RFM_INSTALL_PREFIX=/home/user/Repositories/reframe
  RFM_KEEP_STAGE_FILES=<not set>
  RFM_MODULE_MAPPINGS=<not set>
  RFM_MODULE_MAP_FILE=<not set>
  RFM_NON_DEFAULT_CRAYPE=<not set>
  RFM_OUTPUT_DIR=<not set>
  RFM_PERFLOG_DIR=<not set>
  RFM_PIPELINE_TIMEOUT=<not set>
  RFM_PREFIX=<not set>
  RFM_PURGE_ENVIRONMENT=<not set>
  RFM_REMOTE_DETECT=<not set>
  RFM_REMOTE_WORKDIR=<not set>
  RFM_REPORT_FILE=<not set>
  RFM_REPORT_JUNIT=<not set>
  RFM_RESOLVE_MODULE_CONFLICTS=<not set>
  RFM_SAVE_LOG_FILES=<not set>
  RFM_STAGE_DIR=<not set>
  RFM_SYSLOG_ADDRESS=<not set>
  RFM_SYSTEM=<not set>
  RFM_TIMESTAMP_DIRS=<not set>
  RFM_TRAP_JOB_ERRORS=<not set>
  RFM_UNLOAD_MODULES=<not set>
  RFM_USER_MODULES=<not set>
  RFM_USE_LOGIN_SHELL=<not set>
  RFM_VERBOSE=<not set>
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -C tutorials/config/settings.py -c tutorials/basics/hello/hello2.py -l -vv'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     'tutorials/config/settings.py'
  check search path: '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

Looking for tests in '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py'
Validating '/home/user/Repositories/reframe/tutorials/basics/hello/hello2.py': OK
  > Loaded 2 test(s)
Loaded 2 test(s)
Generated 2 test case(s)
Filtering test cases(s) by name: 2 remaining
Filtering test cases(s) by tags: 2 remaining
Filtering test cases(s) by other attributes: 2 remaining
Building and validating the full test DAG
Full test DAG:
  ('HelloMultiLangTest_cpp', 'generic:default', 'builtin') -> []
  ('HelloMultiLangTest_c', 'generic:default', 'builtin') -> []
Final number of test cases: 2
[List of matched checks]
- HelloMultiLangTest %lang=cpp
- HelloMultiLangTest %lang=c
Found 2 check(s)

Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-fs1arce0.log'

You can see all the different phases ReFrame’s frontend goes through when loading a test. The first “strange” thing to notice in this log is that ReFrame picked the generic system configuration. This happened because it couldn’t find a system entry with a matching hostname pattern. This did not impact the test loading, because these tests are valid for any system, but it will affect the tests when running (see Tutorial 1: Getting Started with ReFrame), since the generic system does not define any C++ compiler.

After loading the configuration, ReFrame prints its relevant environment variables and starts examining the given files in order to find and load ReFrame tests. Before attempting to load a file, it validates it and checks whether it looks like a ReFrame test. If it does, it loads that file by importing it. This is where any ReFrame tests are instantiated and initialized (see Loaded 2 test(s)) and where the actual test cases (combinations of tests, system partitions and environments) are generated. The test cases are then filtered based on the various filtering command line options, as well as on the programming environments that are defined for the currently selected system. Finally, the test case dependency graph is built and everything is ready for running (or listing).

Try passing a specific system or partition with the --system option, or modify the test (e.g., by removing the decorator that registers it), and see how the logs change.
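
For instance, a minimal sketch of such a modification of hello2.py is shown below; with the registration decorator commented out, the class is still imported, but no check is registered:

# @rfm.simple_test  <-- commented out: the test is no longer registered
class HelloMultiLangTest(rfm.RegressionTest):
    lang = parameter(['c', 'cpp'])
    ...

Listing the file again with -vv should then report zero loaded tests.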

Execution modes

ReFrame allows you to create pre-defined ways of running it, which you can invoke from the command line. These are called execution modes and are essentially named groups of command line options that will be passed to ReFrame whenever you request them. They are defined in the configuration file and can be requested with the --mode command-line option. The following configuration defines an execution mode named maintenance and sets up ReFrame in a certain way (it selects the tests to run, sets up the stage and output paths, etc.):

 'modes': [
     {
         'name': 'maintenance',
         'options': [
             '--unload-module=reframe',
             '--exec-policy=async',
             '--strict',
             '--output=/path/to/$USER/regression/maintenance',
             '--perflogdir=/path/to/$USER/regression/maintenance/logs',
             '--stage=$SCRATCH/regression/maintenance/stage',
             '--report-file=/path/to/$USER/regression/maintenance/reports/maint_report_{sessionid}.json',
             '-Jreservation=maintenance',
             '--save-log-files',
             '--tag=maintenance',
             '--timestamp=%F_%H-%M-%S'
         ]
     },
]

Execution modes come in handy when you have a standardized way of running ReFrame and you don’t want to create and maintain shell scripts around it. In this example, you can simply run ReFrame with

./bin/reframe --mode=maintenance -r

and it will be equivalent to passing all the above options explicitly. You can still pass any additional command line option, which will either supersede or be combined with those defined in the execution mode, depending on the behaviour of the option. In this particular example, we could change just the reservation name by running

./bin/reframe --mode=maintenance -J reservation=maint -r

There are two options that you cannot use inside execution modes: -C and --system. The reason is that these options select the configuration file and the configuration entry to load.

Manipulating ReFrame’s environment

ReFrame runs the selected tests in the same environment as the one in which it executes itself. It does not unload any environment modules, nor does it set or unset any environment variables. Nonetheless, it gives you the opportunity to modify the environment in which the tests execute. You can either purge all environment modules completely by passing the --purge-env option, or ask ReFrame to load or unload some environment modules before running any tests by using the -m and -u options, respectively. Of course you could manage the environment manually, but it’s more convenient if you do that directly through ReFrame’s command line. If you used an environment module to load ReFrame, e.g., reframe, you can use -u to have ReFrame unload it before running any tests, so that the tests start in a clean environment:

./bin/reframe -u reframe [...]

Environment Modules Mappings

ReFrame allows you to replace environment modules used in tests with other modules on the fly. This is quite useful if you want to test a new version of a module or another combination of modules. Assume you have a test that loads a gromacs module:

class GromacsTest(rfm.RunOnlyRegressionTest):
    ...
    modules = ['gromacs']

This test would use the default version of the gromacs module on the system, but you might want to test another version before making that new one the default. You can ask ReFrame to temporarily replace the gromacs module with another one as follows:

./bin/reframe -n GromacsTest -M 'gromacs:gromacs/2020.5' -r

Every time ReFrame tries to load the gromacs module, it will replace it with gromacs/2020.5. You can specify multiple mappings at once or provide a file with mappings using the --module-mappings option. You can also replace a single module with multiple modules.
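
For example, a single module can be mapped to multiple replacement modules by listing them space-separated after the colon; in the sketch below, gromacs-plumed/2020.5 is just a placeholder module name:

./bin/reframe -n GromacsTest -M 'gromacs:gromacs/2020.5 gromacs-plumed/2020.5' -r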

A very convenient feature of ReFrame when dealing with modules is that you do not have to care about module conflicts at all, regardless of the modules system backend. ReFrame will take care of unloading any conflicting modules, if the underlying modules system cannot do that automatically. In the case of module mappings, it will also respect the module order of the replacement modules and will produce the correct series of “load” and “unload” commands needed by the modules system backend used.

Retrying and Rerunning Tests

If you are running ReFrame regularly as part of a continuous testing procedure, you might not want it to generate alerts for transient failures. If a ReFrame test fails, you might want to retry it a couple of times before marking it as a failure. You can achieve this with the --max-retries option. ReFrame will then retry the failing test cases a maximum number of times before reporting them as actual failures. The failed test cases will not be retried immediately after they have failed, but rather at the end of the run session. This is done to give a better chance of success in case the failures were transient.
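
For example, the following invocation (sketched here with the hello tutorial tests) would retry any failed test cases up to two times at the end of the session:

./bin/reframe -c tutorials/basics/hello -R --max-retries=2 -r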

Another interesting feature introduced in ReFrame 3.4 is the ability to restore a previous test session. Whenever it runs, ReFrame stores a detailed JSON report of the last run under $HOME/.reframe (see --report-file). Using that file, ReFrame can restore a previous run session with the --restore-session option. This option is useful when you combine it with the various test filtering options. For example, you might want to rerun only the failed tests or just a specific test in a dependency chain. Let’s see an artificial example that uses the following test dependency graph.

[Figure: Complex test dependency graph. Nodes in red are set to fail.]

Tests T2 and T8 are set to fail. Let’s run the whole test DAG:

./bin/reframe -c unittests/resources/checks_unlisted/deps_complex.py -r
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -c unittests/resources/checks_unlisted/deps_complex.py -r'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     '<builtin>'
  check search path: '/home/user/Repositories/reframe/unittests/resources/checks_unlisted/deps_complex.py'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

[==========] Running 10 check(s)
[==========] Started on Sat Jan 22 23:44:18 2022

[----------] start processing checks
[ RUN      ] T0 @generic:default+builtin
[       OK ] ( 1/10) T0 @generic:default+builtin
[ RUN      ] T4 @generic:default+builtin
[       OK ] ( 2/10) T4 @generic:default+builtin
[ RUN      ] T5 @generic:default+builtin
[       OK ] ( 3/10) T5 @generic:default+builtin
[ RUN      ] T1 @generic:default+builtin
[       OK ] ( 4/10) T1 @generic:default+builtin
[ RUN      ] T8 @generic:default+builtin
[     FAIL ] ( 5/10) T8 @generic:default+builtin
==> test failed during 'setup': test staged in '/home/user/Repositories/reframe/stage/generic/default/builtin/T8'
[     FAIL ] ( 6/10) T9 @generic:default+builtin
==> test failed during 'startup': test staged in None
[ RUN      ] T6 @generic:default+builtin
[       OK ] ( 7/10) T6 @generic:default+builtin
[ RUN      ] T2 @generic:default+builtin
[ RUN      ] T3 @generic:default+builtin
[     FAIL ] ( 8/10) T2 @generic:default+builtin
==> test failed during 'sanity': test staged in '/home/user/Repositories/reframe/stage/generic/default/builtin/T2'
[     FAIL ] ( 9/10) T7 @generic:default+builtin
==> test failed during 'startup': test staged in None
[       OK ] (10/10) T3 @generic:default+builtin
[----------] all spawned checks have finished

[  FAILED  ] Ran 10/10 test case(s) from 10 check(s) (4 failure(s), 0 skipped)
[==========] Finished on Sat Jan 22 23:44:21 2022

==============================================================================
SUMMARY OF FAILURES
------------------------------------------------------------------------------
FAILURE INFO for T8
  * Expanded name: T8
  * Description: T8
  * System partition: generic:default
  * Environment: builtin
  * Stage directory: /home/user/Repositories/reframe/stage/generic/default/builtin/T8
  * Node list:
  * Job type: local (id=None)
  * Dependencies (conceptual): ['T1']
  * Dependencies (actual): [('T1', 'generic:default', 'builtin')]
  * Maintainers: []
  * Failing phase: setup
  * Rerun with '-n T8 -p builtin --system generic:default -r'
  * Reason: exception
Traceback (most recent call last):
  File "/home/user/Repositories/reframe/reframe/frontend/executors/__init__.py", line 291, in _safe_call
    return fn(*args, **kwargs)
  File "/home/user/Repositories/reframe/reframe/core/hooks.py", line 82, in _fn
    getattr(obj, h.__name__)()
  File "/home/user/Repositories/reframe/reframe/core/hooks.py", line 32, in _fn
    func(*args, **kwargs)
  File "/home/user/Repositories/reframe/unittests/resources/checks_unlisted/deps_complex.py", line 180, in fail
    raise Exception
Exception

------------------------------------------------------------------------------
FAILURE INFO for T9
  * Expanded name: T9
  * Description: T9
  * System partition: generic:default
  * Environment: builtin
  * Stage directory: None
  * Node list:
  * Job type: local (id=None)
  * Dependencies (conceptual): ['T8']
  * Dependencies (actual): [('T8', 'generic:default', 'builtin')]
  * Maintainers: []
  * Failing phase: startup
  * Rerun with '-n T9 -p builtin --system generic:default -r'
  * Reason: task dependency error: dependencies failed
------------------------------------------------------------------------------
FAILURE INFO for T2
  * Expanded name: T2
  * Description: T2
  * System partition: generic:default
  * Environment: builtin
  * Stage directory: /home/user/Repositories/reframe/stage/generic/default/builtin/T2
  * Node list: tresa.localNone
  * Job type: local (id=49427)
  * Dependencies (conceptual): ['T6']
  * Dependencies (actual): [('T6', 'generic:default', 'builtin')]
  * Maintainers: []
  * Failing phase: sanity
  * Rerun with '-n T2 -p builtin --system generic:default -r'
  * Reason: sanity error: 31 != 30
------------------------------------------------------------------------------
FAILURE INFO for T7
  * Expanded name: T7
  * Description: T7
  * System partition: generic:default
  * Environment: builtin
  * Stage directory: None
  * Node list:
  * Job type: local (id=None)
  * Dependencies (conceptual): ['T2']
  * Dependencies (actual): [('T2', 'generic:default', 'builtin')]
  * Maintainers: []
  * Failing phase: startup
  * Rerun with '-n T7 -p builtin --system generic:default -r'
  * Reason: task dependency error: dependencies failed
------------------------------------------------------------------------------
Run report saved in '/home/user/.reframe/reports/run-report.json'
Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-92y3fr5s.log'

You can restore the run session and run only the failed test cases as follows:

./bin/reframe --restore-session --failed -r

As expected, the run will fail again, since these tests were designed to fail.

Instead of running the failed test cases of a previous run, you might simply want to rerun a specific test. This has little meaning if you don’t use dependencies, because it would be equivalent to running it separately using the -n option. However, if a test was part of a dependency chain, using --restore-session will not rerun its dependencies, but will rather restore them. This is useful in cases where the test we want to rerun depends on time-consuming tests. There is a little tweak, though, for this to work: you need to have run with --keep-stage-files in order to keep the stage directories even for tests that have passed. This is needed for two reasons: (a) if a test needs resources from its parents, it will look into their stage directories, and (b) ReFrame stores the state of a finished test case inside its stage directory and needs that state information in order to restore it.

Let’s try to rerun the T6 test from the previous test dependency chain:

./bin/reframe -c unittests/resources/checks_unlisted/deps_complex.py --keep-stage-files -r
./bin/reframe --restore-session --keep-stage-files -n T6 -r

Notice how only the T6 test was rerun and none of its dependencies, since they were simply restored:

[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe --restore-session --keep-stage-files -n T6 -r'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     '<builtin>'
  check search path: '/home/user/Repositories/reframe/unittests/resources/checks_unlisted/deps_complex.py'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

[==========] Running 1 check(s)
[==========] Started on Sat Jan 22 23:44:25 2022

[----------] start processing checks
[ RUN      ] T6 @generic:default+builtin
[       OK ] (1/1) T6 @generic:default+builtin
[----------] all spawned checks have finished

[  PASSED  ] Ran 1/1 test case(s) from 1 check(s) (0 failure(s), 0 skipped)
[==========] Finished on Sat Jan 22 23:44:25 2022
Run report saved in '/home/user/.reframe/reports/run-report.json'
Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-mug0a4cb.log'

If we tried to run T6 without restoring the session, we would also have to rerun the whole dependency chain, i.e., T5, T1, T4 and T0.

./bin/reframe -c unittests/resources/checks_unlisted/deps_complex.py -n T6 -r
[ReFrame Setup]
  version:           3.10.0-dev.3+149af549
  command:           './bin/reframe -c unittests/resources/checks_unlisted/deps_complex.py -n T6 -r'
  launched by:       user@host
  working directory: '/home/user/Repositories/reframe'
  settings file:     '<builtin>'
  check search path: '/home/user/Repositories/reframe/unittests/resources/checks_unlisted/deps_complex.py'
  stage directory:   '/home/user/Repositories/reframe/stage'
  output directory:  '/home/user/Repositories/reframe/output'

[==========] Running 5 check(s)
[==========] Started on Sat Jan 22 23:44:25 2022

[----------] start processing checks
[ RUN      ] T0 @generic:default+builtin
[       OK ] (1/5) T0 @generic:default+builtin
[ RUN      ] T4 @generic:default+builtin
[       OK ] (2/5) T4 @generic:default+builtin
[ RUN      ] T5 @generic:default+builtin
[       OK ] (3/5) T5 @generic:default+builtin
[ RUN      ] T1 @generic:default+builtin
[       OK ] (4/5) T1 @generic:default+builtin
[ RUN      ] T6 @generic:default+builtin
[       OK ] (5/5) T6 @generic:default+builtin
[----------] all spawned checks have finished

[  PASSED  ] Ran 5/5 test case(s) from 5 check(s) (0 failure(s), 0 skipped)
[==========] Finished on Sat Jan 22 23:44:28 2022
Run report saved in '/home/user/.reframe/reports/run-report.json'
Log file(s) saved in '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-ktylyaqk.log'

Integrating into a CI pipeline

New in version 3.4.1.

Instead of running your tests, you can ask ReFrame to generate a child pipeline specification for Gitlab CI. This will spawn a CI job for each ReFrame test, respecting test dependencies. You could run your tests in a single job of your Gitlab pipeline, but you would not take advantage of the parallelism across different CI jobs. Having a separate CI job per test also makes it easier to spot the failing tests.

As soon as you have set up a runner for your repository, it is fairly straightforward to use ReFrame to automatically generate the necessary CI steps. The following is an example of a .gitlab-ci.yml file that does exactly that:

stages:
  - generate
  - test

generate-pipeline:
  stage: generate
  script:
    - reframe --ci-generate=${CI_PROJECT_DIR}/pipeline.yml -c ${CI_PROJECT_DIR}/path/to/tests
  artifacts:
    paths:
      - ${CI_PROJECT_DIR}/pipeline.yml

test-jobs:
  stage: test
  trigger:
    include:
      - artifact: pipeline.yml
        job: generate-pipeline
    strategy: depend

It defines two stages. The first one, called generate, will call ReFrame to generate the pipeline specification for the desired tests. All the usual test selection options can be used to select specific tests. ReFrame will process them as usual, but instead of running the selected tests, it will generate the correct steps for running each test individually as a Gitlab job in a child pipeline. The generated ReFrame command that will run each individual test reuses the -C, -R, -v and --mode options passed to the initial invocation of ReFrame that was used to generate the pipeline. Users can define CI-specific execution modes in their configuration in order to pass arbitrary options to the ReFrame invocation in the child pipeline.
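
For example, a hypothetical ci execution mode could be added to the configuration, using the same format as the maintenance mode shown earlier; the options below are merely placeholders:

 'modes': [
     {
         'name': 'ci',
         'options': [
             '--exec-policy=async',
             '--tag=ci',
             '--save-log-files'
         ]
     },
 ]

The generate stage could then invoke reframe --mode=ci --ci-generate=${CI_PROJECT_DIR}/pipeline.yml -c ${CI_PROJECT_DIR}/path/to/tests, and the --mode option would be propagated to the per-test ReFrame commands in the child pipeline.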

Finally, we pass the generated CI pipeline file to the second stage as an artifact and we are done! If the image keyword is defined in .gitlab-ci.yml, the emitted pipeline will use the same image as the one defined in the parent pipeline. In addition, each job in the generated pipeline will output a separate JUnit report, which can be used to create GitLab badges.

The following figure shows one part of the automatically generated pipeline for the test graph depicted above.

[Figure: Snapshot of a Gitlab pipeline generated automatically by ReFrame.]

Note

The ReFrame executable must be available in the Gitlab runner that will run the CI jobs.