Tutorial 2: Customizing Further a Regression Test

In this section, we are going to show some more elaborate use cases of ReFrame. The corresponding scripts as well as the source code of the examples discussed here can be found in the directory tutorial/advanced/.

Working with Makefiles

We have already shown how you can compile a single source file associated with your regression test. In this example, we show how ReFrame can leverage Makefiles to build executables.

Compiling a regression test through a Makefile is straightforward with ReFrame. If the sourcepath attribute refers to a directory, ReFrame will try to figure out the type of project and select the correct build system. If it is not a CMake- or Autotools-based project, it will try to use make for building it. More specifically, ReFrame first copies the sourcesdir to the stage directory at the beginning of the compilation phase, then switches to {stagedir}/{sourcepath}/ and, finally, invokes make.

Note

The sourcepath attribute must be a relative path referring to a subdirectory of sourcesdir, i.e., relative paths starting with .. will be rejected.

By default, sourcepath is the empty string and sourcesdir is set to 'src/', if such a directory exists. As a result, if you do not specify a sourcepath at all, ReFrame will compile the files found in the src/ directory. This is exactly what our first example here does.
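
For instance, if the sources of a test lived in a subdirectory of src/ containing its own Makefile, you could point ReFrame at it explicitly. This is a minimal sketch; the directory name is hypothetical:

self.sourcesdir = 'src'
self.sourcepath = 'my_project'  # hypothetical subdirectory of src/ with its own Makefile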

For completeness, here are the contents of the Makefile provided:

EXECUTABLE := advanced_example1 

.SUFFIXES: .o .c

OBJS := advanced_example1.o

$(EXECUTABLE): $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^
 
$(OBJS): advanced_example1.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $(LDFLAGS) -o $@ $^ 

The corresponding tutorial/advanced/src/advanced_example1.c source file simply prints a message whose content depends on the preprocessor variable MESSAGE:

#include <stdio.h>

int main(){
#ifdef MESSAGE
    char *message = "SUCCESS";
#else
    char *message = "FAILURE";
#endif
    printf("Setting of preprocessor variable: %s\n", message);
    return 0;
}

The purpose of the regression test in this case is to set the preprocessor variable MESSAGE via CPPFLAGS and then check the standard output for the message SUCCESS, which indicates that the preprocessor flag has been passed and processed correctly by the Makefile.

The contents of this regression test are the following:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class MakefileTest(rfm.RegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the use of Makefiles '
                      'and compile options')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './advanced_example1'
        self.build_system = 'Make'
        self.build_system.cppflags = ['-DMESSAGE']
        self.sanity_patterns = sn.assert_found('SUCCESS', self.stdout)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important bit here is how we set up the build system for this test:

    self.build_system = 'Make'
    self.build_system.cppflags = ['-DMESSAGE']

First, we set the build system to Make and then set the preprocessor flags for the compilation. ReFrame will invoke make as follows:

make -j 1 CC='cc' CXX='CC' FC='ftn' NVCC='nvcc' CPPFLAGS='-DMESSAGE'

The compiler variables (CC, CXX etc.) are set based on the corresponding values specified in the configuration of the current environment. You may instruct the build system to ignore the default values from the environment by setting the following:

self.build_system.flags_from_environ = False

In this case, make will be invoked as follows:

make -j 1 CPPFLAGS='-DMESSAGE'

Notice that the -j 1 option is always generated. You may change the maximum build concurrency as follows:

self.build_system.max_concurrency = 4

Setting max_concurrency to None places no limit on the number of concurrent jobs, meaning that make -j will be used for building.

Finally, you may also customize the name of the Makefile. You can achieve that by setting the corresponding variable of the Make build system:

self.build_system.makefile = 'Makefile_custom'

You may find more details on ReFrame’s build systems here.
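
Putting these options together, a Make-based test might configure its build system as in the sketch below. The makefile, cppflags, max_concurrency and flags_from_environ attributes are the ones shown above; the options attribute for passing extra arguments (e.g., a specific make target) is an assumption to be verified against the build systems reference:

self.build_system = 'Make'
self.build_system.makefile = 'Makefile_custom'
self.build_system.cppflags = ['-DMESSAGE']
self.build_system.max_concurrency = 4
self.build_system.flags_from_environ = False
self.build_system.options = ['all']  # assumed attribute; extra arguments appended to the make invocation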

Retrieving the source code from a Git repository

It might be the case that a regression test needs to clone its source code from a remote repository. This can be achieved in two ways with ReFrame. One way is to set the sourcesdir attribute to None and explicitly clone or checkout a repository using the prebuild_cmds:

self.sourcesdir = None
self.prebuild_cmds = ['git clone https://github.com/me/myrepo .']

By setting sourcesdir to None, you are telling ReFrame that you are going to provide the source files in the stage directory. The working directory of the prebuild_cmds and postbuild_cmds commands will be the stage directory of the test.
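
Since these commands run from the stage directory, you can combine the clone with further Git commands, for instance checking out a specific tag. The tag name below is purely illustrative:

self.sourcesdir = None
self.prebuild_cmds = [
    'git clone https://github.com/me/myrepo .',
    'git checkout v1.0'   # hypothetical tag
]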

An alternative way to retrieve specifically a Git repository is to assign its URL directly to the sourcesdir attribute:

self.sourcesdir = 'https://github.com/me/myrepo'

ReFrame will attempt to clone this repository inside the stage directory by executing git clone <repo> . and will then proceed with the compilation as described above.

Note

ReFrame recognizes only URLs in the sourcesdir attribute and requires passwordless access to the repository. This means that an SCP-style repository specification will not be accepted. You will have to specify it as a URL using the ssh:// protocol (see the Git documentation page).
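
For instance, a repository you would normally address with the SCP-style form git@github.com:me/myrepo would have to be specified roughly as follows (illustrative URL):

self.sourcesdir = 'ssh://git@github.com/me/myrepo.git'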

Add a configuration step before compiling the code

It is often the case that a configuration step is needed before compiling a code with make. To address these kinds of projects, ReFrame offers specific abstractions for “configure-make” style build systems. It supports CMake-based projects through the CMake build system, as well as Autotools-based projects through the Autotools build system.

For other build systems, you can achieve the same effect using the Make build system and the prebuild_cmds for performing the configuration step. The following code snippet will configure a code with ./custom_configure before invoking make:

self.prebuild_cmds = ['./custom_configure -with-mylib']
self.build_system = 'Make'
self.build_system.cppflags = ['-DHAVE_FOO']
self.build_system.flags_from_environ = False

The generated build script will then contain the following lines:

./custom_configure -with-mylib
make -j 1 CPPFLAGS='-DHAVE_FOO'
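
For CMake- or Autotools-based projects, the dedicated build systems mentioned above can be used instead of a manual configuration step. The following is a rough sketch using the CMake build system; config_opts and max_concurrency follow the build systems reference, while the actual option values are placeholders:

self.build_system = 'CMake'
self.build_system.config_opts = ['-DCMAKE_BUILD_TYPE=Release']  # placeholder cmake options
self.build_system.max_concurrency = 4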

Implementing a Run-Only Regression Test

There are cases when it is desirable to perform regression testing for an already built executable. The following test uses the echo Bash shell command to print a random integer between specific lower and upper bounds. Here is the full regression test:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class ExampleRunOnlyTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the class '
                      'RunOnlyRegressionTest')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcesdir = None

        lower = 90
        upper = 100
        self.executable = 'echo "Random: $((RANDOM%({1}+1-{0})+{0}))"'.format(
            lower, upper)
        self.sanity_patterns = sn.assert_bounded(sn.extractsingle(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float),
            lower, upper)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

There is nothing special about this test compared to those presented earlier, except that it derives from RunOnlyRegressionTest and that it does not contain any resources (self.sourcesdir = None). Note that run-only regression tests may also have resources, such as a precompiled executable or some input data. The copying of these resources to the stage directory is performed at the beginning of the run phase, whereas for standard regression tests it happens at the beginning of the compilation phase. Furthermore, in this particular test the executable consists only of standard Bash shell commands, so we can set sourcesdir to None, informing ReFrame that the test does not have any resources.

Note

Changed in version 3.0: It is no longer necessary to explicitly set sourcesdir to None for run-only tests without resources.
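
For comparison, a run-only test that does carry resources might set up its executable as in the sketch below, where the binary and input file names are hypothetical and assumed to reside in the test's src/ directory:

# src/ is assumed to contain a precompiled binary and its input data
self.executable = './my_precompiled_tool'   # hypothetical precompiled binary
self.executable_opts = ['input.dat']        # hypothetical input file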

Implementing a Compile-Only Regression Test

ReFrame provides the option to write compile-only tests, which consist only of a compilation phase without a specified executable. Such tests must derive from the CompileOnlyRegressionTest class provided by the framework. The following example reuses the code of our first example in this section and checks that no warnings are issued by the compiler:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class ExampleCompileOnlyTest(rfm.CompileOnlyRegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the class '
                      'CompileOnlyRegressionTest')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sanity_patterns = sn.assert_not_found('warning', self.stderr)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important thing to note here is that the standard output and standard error of the test, accessible through the stdout and stderr attributes, now correspond to those of the compilation command. Sanity checking can therefore be done in exactly the same way as with a normal test.

Leveraging Environment Variables

We have already demonstrated in “Tutorial 1: The Basics” that ReFrame allows you to load the required modules for regression tests and also set any needed environment variables. When setting environment variables for your test through the variables attribute, you can assign them values of other, already defined, environment variables using the standard notation $OTHER_VARIABLE or ${OTHER_VARIABLE}. The following regression test sets the CUDA_HOME environment variable to the value of the CUDATOOLKIT_HOME and then compiles and runs a simple program:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class EnvironmentVariableTest(rfm.RegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the use '
                      'of environment variables provided by loaded modules')
        self.valid_systems = ['daint:gpu']
        self.valid_prog_environs = ['*']
        self.modules = ['cudatoolkit']
        self.variables = {'CUDA_HOME': '$CUDATOOLKIT_HOME'}
        self.executable = './advanced_example4'
        self.build_system = 'Make'
        self.build_system.makefile = 'Makefile_example4'
        self.sanity_patterns = sn.assert_found(r'SUCCESS', self.stdout)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

Before discussing this test in more detail, let’s first have a look at the source code and the Makefile of this example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef CUDA_HOME
#   define CUDA_HOME ""
#endif

int main() {
    char *cuda_home_compile = CUDA_HOME;
    char *cuda_home_runtime = getenv("CUDA_HOME");
    if (cuda_home_runtime &&
        strnlen(cuda_home_runtime, 256) &&
        strnlen(cuda_home_compile, 256) &&
        !strncmp(cuda_home_compile, cuda_home_runtime, 256)) {
        printf("SUCCESS\n");
    } else {
        printf("FAILURE\n");
        printf("Compiled with CUDA_HOME=%s, ran with CUDA_HOME=%s\n",
               cuda_home_compile,
               cuda_home_runtime ? cuda_home_runtime : "<null>");
    }

    return 0;
}

This program is pretty basic, but enough to demonstrate the use of environment variables from ReFrame. It simply compares the value of the CUDA_HOME macro with the value of the CUDA_HOME environment variable at runtime, printing SUCCESS if they are not empty and match. The Makefile for this example simply defines the CUDA_HOME macro from the value of the CUDA_HOME environment variable:

EXECUTABLE := advanced_example4

CPPFLAGS = -DCUDA_HOME=\"$(CUDA_HOME)\"

.SUFFIXES: .o .c

OBJS := advanced_example4.o

$(EXECUTABLE): $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^

$(OBJS): advanced_example4.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $(LDFLAGS) -o $@ $^

clean:
	/bin/rm -f $(OBJS) $(EXECUTABLE)

Coming back now to the ReFrame regression test, the CUDATOOLKIT_HOME environment variable is defined by the cudatoolkit module. If you try to run the test, you will see that it succeeds, meaning that the CUDA_HOME variable was set correctly both at compile time and at runtime. ReFrame will generate the following instructions in the shell scripts (build and run) associated with this test:

module load cudatoolkit
export CUDA_HOME=$CUDATOOLKIT_HOME

Finally, as already mentioned previously, since the name of the makefile is not one of the standard ones, it must be set explicitly in the build system:

self.build_system.makefile = 'Makefile_example4'

Setting a Time Limit for Regression Tests

ReFrame gives you the option to limit the execution time of regression tests. The following example demonstrates how you can achieve this by limiting the execution time of a test that tries to sleep 100 seconds:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class TimeLimitTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the use '
                      'of a user-defined time limit')
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['*']
        self.time_limit = '1m'
        self.executable = 'sleep'
        self.executable_opts = ['100']
        self.sanity_patterns = sn.assert_found(
            r'CANCELLED.*DUE TO TIME LIMIT', self.stderr)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important bit here is the following line that sets the time limit for the test to one minute:

self.time_limit = '1m'

The time_limit attribute is a string of the form <DAYS>d<HOURS>h<MINUTES>m<SECONDS>s and will impose a run-time limit on the associated job. It does not limit the time the job may spend pending in the queue. Time limits are implemented for all the scheduler backends.
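
For example, any of the following would be valid time limits under this format:

self.time_limit = '30s'     # thirty seconds
self.time_limit = '2h30m'   # two and a half hours
self.time_limit = '1d'      # one day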

The sanity condition for this test verifies that the associated job has been canceled due to the time limit (note that this message is SLURM-specific).

self.sanity_patterns = sn.assert_found(
    r'CANCELLED.*DUE TO TIME LIMIT', self.stderr)

Note

New in version 3.0: The max_pending_time attribute was added to force termination of a test if it is pending for too long.
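
A sketch of how this attribute might be used; it is assumed here that it accepts the same duration format as time_limit:

self.max_pending_time = '10m'  # assumption: duration string as for time_limit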

Applying a sanity function iteratively

It is often the case that a common sanity pattern has to be applied many times. In this example we will demonstrate how the above situation can be easily tackled using the sanity functions offered by ReFrame. Specifically, we would like to execute the following shell script and check that its output is correct:

#!/usr/bin/env bash

if [ -z $LOWER ]; then
    export LOWER=90
fi

if [ -z $UPPER ]; then
    export UPPER=100
fi

for i in {1..100}; do
    echo Random: $((RANDOM%($UPPER+1-$LOWER)+$LOWER))
done

The above script simply prints 100 random integers between the limits given by the variables LOWER and UPPER. In the corresponding regression test we want to check that all the printed random numbers lie between 90 and 100, ensuring that the script executed correctly. Hence, a common sanity check has to be applied to all the printed random numbers. In ReFrame this can be achieved by using the map() sanity function, which accepts a function and an iterable as arguments. Through map() the given function will be applied to all the members of the iterable object. Note that since map() is a sanity function, its execution will be deferred. The ReFrame test is the following:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class DeferredIterationTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the use of deferred '
                      'iteration via the `map` sanity function.')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './random_numbers.sh'
        numbers = sn.extractall(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float)
        self.sanity_patterns = sn.and_(
            sn.assert_eq(sn.count(numbers), 100),
            sn.all(sn.map(lambda x: sn.assert_bounded(x, 90, 100), numbers)))
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

First the random numbers are extracted through the extractall() function as follows:

numbers = sn.extractall(
    r'Random: (?P<number>\S+)', self.stdout, 'number', float)

The numbers variable is a deferred iterable, which upon evaluation will return all the extracted numbers. In order to check that the extracted numbers lie within the specified limits, we make use of the map() sanity function, which will apply the assert_bounded() to all the elements of numbers.

There is still a small complication that needs to be addressed. The all() function returns True for empty iterables, which is not what we want. So we must also ensure that the expected numbers were actually extracted. To achieve this, we use count() to get the number of elements contained in numbers, combined with assert_eq() to check that this number is indeed 100. Finally, both of the above conditions have to be satisfied for the program execution to be considered successful, hence the use of the and_() function. Note that the plain and operator is not deferrable and would trigger the evaluation of any deferrable argument passed to it. The full syntax for the sanity_patterns is the following:

self.sanity_patterns = sn.and_(
    sn.assert_eq(sn.count(numbers), 100),
    sn.all(sn.map(lambda x: sn.assert_bounded(x, 90, 100), numbers)))

Note

New in version 2.13: ReFrame also offers the allx() sanity function which, unlike the built-in all() function, will return False if its iterable argument is empty.
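
With allx() the guard against an empty iterable is no longer needed, although the explicit count() check would still be required to assert that exactly 100 numbers were printed. A sketch of the bounds-only check, reusing the numbers variable from the test above:

self.sanity_patterns = sn.allx(
    sn.map(lambda x: sn.assert_bounded(x, 90, 100), numbers))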

Customizing the Generated Job Script

It is often the case that you must run some commands before or after the parallel launch of your executable. This can be easily achieved by using the prerun_cmds and postrun_cmds attributes of a ReFrame test.

The following example is a slightly modified version of the previous one. The lower and upper limits for the random numbers are now set inside a helper shell script in scripts/limits.sh and we also want to print the word FINISHED after our executable has finished. In order to achieve this, we need to source the helper script just before launching the executable and echo the desired message just after it finishes. Here is the test file:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class PrerunDemoTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = ('ReFrame tutorial demonstrating the use of '
                      'pre- and post-run commands')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.prerun_cmds  = ['source scripts/limits.sh']
        self.postrun_cmds = ['echo FINISHED']
        self.executable = './random_numbers.sh'
        numbers = sn.extractall(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float)
        self.sanity_patterns = sn.all([
            sn.assert_eq(sn.count(numbers), 100),
            sn.all(sn.map(lambda x: sn.assert_bounded(x, 50, 80), numbers)),
            sn.assert_found('FINISHED', self.stdout)
        ])
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

Notice the use of the prerun_cmds and postrun_cmds attributes. These are lists of shell commands that are emitted verbatim in the job script. The generated job script for this example is the following:

#!/bin/bash -l
#SBATCH --job-name="prerun_demo_check_daint_gpu_PrgEnv-gnu"
#SBATCH --time=0:10:0
#SBATCH --ntasks=1
#SBATCH --output=prerun_demo_check.out
#SBATCH --error=prerun_demo_check.err
#SBATCH --constraint=gpu
module load daint-gpu
module unload PrgEnv-cray
module load PrgEnv-gnu
source scripts/limits.sh
srun ./random_numbers.sh
echo FINISHED

ReFrame generates the job shell script using the following pattern:

#!/bin/bash -l
{job_scheduler_preamble}
{test_environment}
{prerun_cmds}
{parallel_launcher} {executable} {executable_opts}
{postrun_cmds}

The job_scheduler_preamble contains the directives that control the job allocation. The test_environment comprises the necessary commands for setting up the environment of the test; this is where the modules and environment variables specified in the modules and variables attributes are emitted. Then the commands specified in prerun_cmds follow, while those specified in postrun_cmds come after the launch of the parallel job. The parallel launch itself consists of three parts:

  1. The parallel launcher program (e.g., srun, mpirun etc.) with its options,

  2. the regression test executable as specified in the executable attribute and

  3. the options to be passed to the executable as specified in the executable_opts attribute.

A key thing to note about the generated job script is that ReFrame submits it from the stage directory of the test, so that all relative paths are resolved against it.

Working with parameterized tests

New in version 2.13.

We have already seen in the basic tutorial how we can better organize tests and avoid code duplication by using test class hierarchies. An alternative technique, which can also be used in combination with class hierarchies, is to use parameterized tests. The following is a test that takes a variant parameter, which controls which variant of the code will be used. Depending on that value, the test is set up differently:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.parameterized_test(['MPI'], ['OpenMP'])
class MatrixVectorTest(rfm.RegressionTest):
    def __init__(self, variant):
        self.descr = 'Matrix-vector multiplication test (%s)' % variant
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['PrgEnv-cray', 'PrgEnv-gnu',
                                    'PrgEnv-intel', 'PrgEnv-pgi']
        self.build_system = 'SingleSource'
        self.prgenv_flags = {
            'PrgEnv-cray':  ['-homp'],
            'PrgEnv-gnu':   ['-fopenmp'],
            'PrgEnv-intel': ['-openmp'],
            'PrgEnv-pgi':   ['-mp']
        }

        if variant == 'MPI':
            self.num_tasks = 8
            self.num_tasks_per_node = 2
            self.num_cpus_per_task = 4
            self.sourcepath = 'example_matrix_vector_multiplication_mpi_openmp.c'
        elif variant == 'OpenMP':
            self.sourcepath = 'example_matrix_vector_multiplication_openmp.c'
            self.num_cpus_per_task = 4

        self.variables = {
            'OMP_NUM_THREADS': str(self.num_cpus_per_task)
        }
        matrix_dim = 1024
        iterations = 100
        self.executable_opts = [str(matrix_dim), str(iterations)]

        expected_norm = matrix_dim
        found_norm = sn.extractsingle(
            r'The L2 norm of the resulting vector is:\s+(?P<norm>\S+)',
            self.stdout, 'norm', float)
        self.sanity_patterns = sn.all([
            sn.assert_found(
                r'time for single matrix vector multiplication', self.stdout),
            sn.assert_lt(sn.abs(expected_norm - found_norm), 1.0e-6)
        ])
        self.maintainers = ['you-can-type-your-email-here']
        self.tags = {'tutorial'}

    @rfm.run_before('compile')
    def setflags(self):
        if self.prgenv_flags is not None:
            env = self.current_environ.name
            self.build_system.cflags = self.prgenv_flags[env]

If you have already gone through Tutorial 1: The Basics, this test can be easily understood. The new bit here is the @parameterized_test decorator of the MatrixVectorTest class. This decorator takes an arbitrary number of arguments, which are either of a sequence type (i.e., list, tuple etc.) or of a mapping type (i.e., dictionary). Each of the decorator’s arguments corresponds to the constructor arguments of the decorated test that will be used to instantiate it. In the example shown, the test will be instantiated twice, once with variant passed as MPI and a second time with variant passed as OpenMP. The framework will try to generate unique names for the generated tests by stringifying the arguments passed to the test’s constructor:

[ReFrame Setup]
  version:           3.0-dev6 (rev: 89d50861)
  command:           './bin/reframe -C tutorial/config/settings.py -c tutorial/advanced/advanced_example8.py -l'
  launched by:       karakasv@daint101
  working directory: '/path/to/reframe'
  check search path: (R) '/path/to/reframe/tutorial/advanced/advanced_example8.py'
  stage directory:   '/path/to/reframe/stage'
  output directory:  '/path/to/reframe/output'

[List of matched checks]
  - MatrixVectorTest_MPI (found in /path/to/reframe/tutorial/advanced/advanced_example8.py)
  - MatrixVectorTest_OpenMP (found in /path/to/reframe/tutorial/advanced/advanced_example8.py)

Found 2 check(s).

There are a couple of different ways that we could have used the @parameterized_test decorator. One is to use dictionaries for specifying the instantiations of our test class. The dictionaries will be converted to keyword arguments and passed to the constructor of the test class:

@rfm.parameterized_test({'variant': 'MPI'}, {'variant': 'OpenMP'})

Another way, which is quite useful if you want to generate lots of different tests at the same time, is to use either list comprehensions or generator expressions for specifying the different test instantiations:

@rfm.parameterized_test(*([variant] for variant in ['MPI', 'OpenMP']))
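
For instance, sweeping over two parameters at once could be sketched as follows; the class name and the matrix_dim parameter are hypothetical:

import itertools

@rfm.parameterized_test(*(list(t) for t in itertools.product(['MPI', 'OpenMP'],
                                                             [1024, 2048])))
class MatrixVectorSizeTest(rfm.RegressionTest):
    def __init__(self, variant, matrix_dim):
        # set up the test based on both parameters
        ...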

Tip

Combining parameterized tests and test class hierarchies offers a very flexible way of generating multiple related tests at once while keeping the maintenance cost low. We use this technique extensively in our tests.

Flexible Regression Tests

New in version 2.15.

ReFrame can automatically set the number of tasks of a particular test, if its num_tasks attribute is set to a negative value or zero. In ReFrame’s terminology, such tests are called flexible. Negative values indicate the minimum number of tasks that are acceptable for this test (a value of -4 indicates that at least 4 tasks are required). A zero value indicates the default minimum number of tasks which is equal to num_tasks_per_node.
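
For example, a test that requires at least four tasks, as described above, would simply set a negative value:

self.num_tasks = -4          # at least 4 tasks are required
self.num_tasks_per_node = 1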

By default, ReFrame will spawn such a test on all the idle nodes of the current system partition, but this behavior can be adjusted with the --flex-alloc-nodes command-line option. Flexible tests are very useful for diagnostic tests, e.g., tests for checking the health of a whole set of nodes. In this example, we demonstrate this feature through a simple test that runs hostname. The test will verify that all the nodes print the expected host name:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HostnameCheck(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['PrgEnv-cray']
        self.executable = 'hostname'
        self.sourcesdir = None
        self.num_tasks = 0
        self.num_tasks_per_node = 1
        self.sanity_patterns = sn.assert_eq(
            sn.getattr(self, 'num_tasks'),
            sn.count(sn.findall(r'nid\d+', self.stdout))
        )
        self.maintainers = ['you-can-type-your-email-here']
        self.tags = {'tutorial'}

The first thing to notice in this test is that num_tasks is set to zero. This is a requirement for flexible tests:

self.num_tasks = 0

The sanity function of this test simply counts the host names and verifies that they are as many as expected:

self.sanity_patterns = sn.assert_eq(
    sn.getattr(self, 'num_tasks'),
    sn.count(sn.findall(r'nid\d+', self.stdout))
)

Notice, however, that the sanity check does not use num_tasks directly, but rather accesses the attribute through the sn.getattr() sanity function, which is a replacement for the getattr() builtin. The reason is that at the time the sanity check expression is created, num_tasks is 0; it will only be set to its actual value during the run phase. Consequently, we need to defer the attribute retrieval, so we use the sn.getattr() sanity function instead of accessing the attribute directly.

Testing containerized applications

New in version 2.20.

ReFrame can also be used to test applications that run inside a container. A container-based test can be written as a RunOnlyRegressionTest that sets the container_platform attribute. The following example shows a simple test that runs some basic commands inside an Ubuntu 18.04 container and checks that the test has indeed run inside the container and that the stage directory was correctly mounted:

# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class Example10Test(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.descr = 'Run commands inside a container'
        self.valid_systems = ['daint:gpu']
        self.valid_prog_environs = ['PrgEnv-cray']
        self.container_platform = 'Singularity'
        self.container_platform.image = 'docker://ubuntu:18.04'
        self.container_platform.commands = [
            'pwd', 'ls', 'cat /etc/os-release'
        ]
        self.container_platform.workdir = '/workdir'
        self.sanity_patterns = sn.all([
            sn.assert_found(r'^' + self.container_platform.workdir,
                            self.stdout),
            sn.assert_found(r'^advanced_example1.c', self.stdout),
            sn.assert_found(r'18.04.\d+ LTS \(Bionic Beaver\)', self.stdout),
        ])
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

A container-based test in ReFrame requires that the container_platform is set:

        self.container_platform = 'Singularity'

This attribute accepts a string that corresponds to the name of the platform and it instantiates the appropriate ContainerPlatform object behind the scenes. In this case, the test will be using Singularity as a container platform. If such a platform is not configured for the current system, the test will fail. For a complete list of supported container platforms, the user is referred to the configuration reference.

As soon as the container platform to be used is defined, you need to specify the container image to use and the commands to run inside the container:

        self.container_platform.image = 'docker://ubuntu:18.04'
        self.container_platform.commands = [
            'pwd', 'ls', 'cat /etc/os-release'
        ]

These two attributes are mandatory for a container-based check. The image attribute specifies the name of an image from a registry, whereas the commands attribute provides the list of commands to be run inside the container. It is important to note that the executable and executable_opts attributes of the actual test are ignored in the case of container-based tests.

In the above example, ReFrame will run the container as follows:

singularity exec -B"/path/to/test/stagedir:/workdir" docker://ubuntu:18.04 bash -c 'cd /workdir; pwd; ls; cat /etc/os-release'

By default, ReFrame will mount the stage directory of the test under /rfm_workdir inside the container and will always prepend a cd command to the work directory. The user commands are then run from that directory one after the other. Once the commands are executed, the container is stopped and ReFrame goes on with the sanity and performance checks.

Users may also change the default mount point of the stage directory by using the workdir attribute:

        self.container_platform.workdir = '/workdir'

Besides the stage directory, additional mount points can be specified through the mount_points attribute:

self.container_platform.mount_points = [('/path/to/host/dir1', '/path/to/container/mount_point1'),
                                        ('/path/to/host/dir2', '/path/to/container/mount_point2')]

For a complete list of the available attributes of a specific container platform, the reader is referred to the ReFrame Programming APIs guide.