Customizing Further a Regression Test

In this section, we present some more elaborate use cases of ReFrame. Through more advanced examples, we demonstrate further customization options that modify the default behavior of the ReFrame pipeline. The corresponding scripts as well as the source code of the examples discussed here can be found in the directory tutorial/advanced.

Working with Makefiles

We have already shown how you can compile a single source file associated with your regression test. In this example, we show how ReFrame can leverage Makefiles to build executables.

Compiling a regression test through a Makefile is straightforward with ReFrame. If the sourcepath attribute refers to a directory, then ReFrame will automatically invoke make in that directory. More specifically, ReFrame first copies the sourcesdir to the stage directory at the beginning of the compilation phase and then constructs the path os.path.join('{STAGEDIR}', self.sourcepath) to determine the actual compilation path. If this is a directory, it will invoke make in it.

Note

The sourcepath attribute must be a relative path referring to a subdirectory of sourcesdir, i.e., relative paths starting with .. will be rejected.
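For instance, assuming a hypothetical layout where the Makefile and the sources live in a parallel/ subdirectory of the test's resources, the test could point ReFrame to it as follows:

# Hypothetical layout: the Makefile is located in src/parallel/
self.sourcesdir = 'src'        # the default; copied to the stage directory
self.sourcepath = 'parallel'   # {STAGEDIR}/parallel is a directory, so make is invoked there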

By default, sourcepath is the empty string and sourcesdir is set to 'src/'. As a result, if you do not specify a sourcepath at all, ReFrame will compile the files found in the src/ directory. This is exactly what our first example here does.

For completeness, here are the contents of the Makefile provided:

EXECUTABLE := advanced_example1 

.SUFFIXES: .o .c

OBJS := advanced_example1.o

$(EXECUTABLE): $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^
 
$(OBJS): advanced_example1.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $(LDFLAGS) -o $@ $^ 

The corresponding advanced_example1.c source file simply prints a message whose content depends on the preprocessor variable MESSAGE:

#include <stdio.h>

int main(){
#ifdef MESSAGE
    char *message = "SUCCESS";
#else
    char *message = "FAILURE";
#endif
    printf("Setting of preprocessor variable: %s\n", message);
    return 0;
}

The purpose of the regression test in this case is to set the preprocessor variable MESSAGE via CPPFLAGS and then check the standard output for the message SUCCESS, which indicates that the preprocessor flag has been passed and processed correctly by the Makefile.

The contents of this regression test are the following (tutorial/advanced/advanced_example1.py):

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class MakefileTest(rfm.RegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the use of Makefiles '
                      'and compile options')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './advanced_example1'
        self.build_system = 'Make'
        self.build_system.cppflags = ['-DMESSAGE']
        self.sanity_patterns = sn.assert_found('SUCCESS', self.stdout)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important bit here is how we set up the build system for this test:

    self.build_system = 'Make'
    self.build_system.cppflags = ['-DMESSAGE']

First, we set the build system to Make and then set the preprocessor flags for the compilation. ReFrame will invoke make as follows:

make -j CC='cc' CXX='CC' FC='ftn' NVCC='nvcc' CPPFLAGS='-DMESSAGE'

The compiler variables (CC, CXX etc.) are set based on the corresponding values specified in the configuration of the current environment. You may instruct the build system to ignore the default values from the environment by setting the following:

self.build_system.flags_from_environ = False

In this case, make will be invoked as follows:

make -j CPPFLAGS='-DMESSAGE'

Notice that the -j option is always generated. If you want to limit build concurrency, you can do it as follows:

self.build_system.max_concurrency = 4

Finally, you may also customize the name of the Makefile. You can achieve that by setting the corresponding variable of the Make build system:

self.build_system.makefile = 'Makefile_custom'

You may find more details on ReFrame’s build systems here.

Retrieving the source code from a Git repository

It might be the case that a regression test needs to clone its source code from a remote repository. This can be achieved in two ways with ReFrame. One way is to set the sourcesdir attribute to None and explicitly clone or check out a repository using the prebuild_cmd:

self.sourcesdir = None
self.prebuild_cmd = ['git clone https://github.com/me/myrepo .']

By setting sourcesdir to None, you are telling ReFrame that you are going to provide the source files in the stage directory. The working directory of the prebuild_cmd and postbuild_cmd commands will be the stage directory of the test.
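Similarly, a postbuild_cmd could run a verification step right after the build finishes; the script name below is purely hypothetical:

# Hypothetical post-build step, executed from the stage directory
self.postbuild_cmd = ['./myrepo/scripts/check_build.sh']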

An alternative way to retrieve specifically a Git repository is to assign its URL directly to the sourcesdir attribute:

self.sourcesdir = 'https://github.com/me/myrepo'

ReFrame will attempt to clone this repository inside the stage directory by executing git clone <repo> . and will then proceed with the compilation as described above.

Note

ReFrame recognizes only URLs in the sourcesdir attribute and requires passwordless access to the repository. This means that an SCP-style repository specification will not be accepted. You will have to specify it as a URL using the ssh:// protocol (see the Git documentation page).
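For example, an SCP-style specification would have to be rewritten as an ssh:// URL (the repository below is hypothetical):

# Not accepted (SCP-style): git@github.com:me/myrepo.git
# Accepted (URL form using the ssh:// protocol):
self.sourcesdir = 'ssh://git@github.com/me/myrepo.git'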

Add a configuration step before compiling the code

It is often the case that a configuration step is needed before compiling a code with make. To address such projects, ReFrame offers specific abstractions for “configure-make”-style build systems. It supports CMake-based projects through the CMake build system, as well as Autotools-based projects through the Autotools build system.
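As a rough sketch of what a CMake-based project could look like (the config_opts value below is an assumption made for illustration, not part of this example):

# Hypothetical CMake-based build
self.build_system = 'CMake'
self.build_system.config_opts = ['-DCMAKE_BUILD_TYPE=Release']
self.build_system.max_concurrency = 4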

For other build systems, you can achieve the same effect using the Make build system and the prebuild_cmd for performing the configuration step. The following code snippet will configure a code with ./custom_configure before invoking make:

self.prebuild_cmd = ['./custom_configure -with-mylib']
self.build_system = 'Make'
self.build_system.cppflags = ['-DHAVE_FOO']
self.build_system.flags_from_environ = False

The generated build script will then contain the following lines:

./custom_configure -with-mylib
make -j CPPFLAGS='-DHAVE_FOO'

Implementing a Run-Only Regression Test

There are cases when it is desirable to perform regression testing for an already built executable. The following test uses the echo Bash shell command to print a random integer between specific lower and upper bounds. Here is the full regression test (tutorial/advanced/advanced_example2.py):

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class ExampleRunOnlyTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the class '
                      'RunOnlyRegressionTest')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcesdir = None

        lower = 90
        upper = 100
        self.executable = 'echo "Random: $((RANDOM%({1}+1-{0})+{0}))"'.format(
            lower, upper)
        self.sanity_patterns = sn.assert_bounded(sn.extractsingle(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float),
            lower, upper)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

There is nothing special about this test compared to those presented earlier, except that it derives from RunOnlyRegressionTest and that it does not contain any resources (self.sourcesdir = None). Note that run-only regression tests may still have resources, such as a precompiled executable or some input data. The copying of these resources to the stage directory is performed at the beginning of the run phase, whereas for standard regression tests it happens at the beginning of the compilation phase. Furthermore, in this particular test the executable consists only of standard Bash shell commands, so we can set sourcesdir to None, informing ReFrame that the test does not have any resources.
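For contrast, here is a minimal sketch of a run-only test that does carry resources; the file names are hypothetical, and the files are assumed to reside in the test's src/ directory, from where they would be copied to the stage directory at the beginning of the run phase:

# Hypothetical run-only test with resources
self.sourcesdir = 'src'
self.executable = './precompiled_solver'
self.executable_opts = ['input.dat']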

Implementing a Compile-Only Regression Test

ReFrame provides the option to write compile-only tests, which consist only of a compilation phase without a specified executable. Such tests must derive from the CompileOnlyRegressionTest class provided by the framework. The following example (tutorial/advanced/advanced_example3.py) reuses the code of our first example in this section and checks that no warnings are issued by the compiler:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class ExampleCompileOnlyTest(rfm.CompileOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the class '
                      'CompileOnlyRegressionTest')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sanity_patterns = sn.assert_not_found('warning', self.stderr)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important thing to note here is that the standard output and standard error of the test, accessible through the stdout and stderr attributes, now refer to those of the compilation command. Sanity checking can therefore be done in exactly the same way as with a normal test.
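For instance, a hypothetical variation of this test could additionally require that the compiler prints nothing at all on its standard output:

# Hypothetical variation: no warnings on stderr and an empty stdout
self.sanity_patterns = sn.all([
    sn.assert_not_found(r'warning', self.stderr),
    sn.assert_not_found(r'\S', self.stdout)
])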

Leveraging Environment Variables

We have already demonstrated in the tutorial that ReFrame allows you to load the required modules for regression tests and also set any needed environment variables. When setting environment variables for your test through the variables attribute, you can assign them values of other, already defined, environment variables using the standard notation $OTHER_VARIABLE or ${OTHER_VARIABLE}. The following regression test (tutorial/advanced/advanced_example4.py) sets the CUDA_HOME environment variable to the value of the CUDATOOLKIT_HOME environment variable and then compiles and runs a simple program:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class EnvironmentVariableTest(rfm.RegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the use '
                      'of environment variables provided by loaded modules')
        self.valid_systems = ['daint:gpu']
        self.valid_prog_environs = ['*']
        self.modules = ['cudatoolkit']
        self.variables = {'CUDA_HOME': '$CUDATOOLKIT_HOME'}
        self.executable = './advanced_example4'
        self.build_system = 'Make'
        self.build_system.makefile = 'Makefile_example4'
        self.sanity_patterns = sn.assert_found(r'SUCCESS', self.stdout)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

Before discussing this test in more detail, let’s first have a look at the source code and the Makefile of this example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef CUDA_HOME
#   define CUDA_HOME ""
#endif

int main() {
    char *cuda_home_compile = CUDA_HOME;
    char *cuda_home_runtime = getenv("CUDA_HOME");
    if (cuda_home_runtime &&
        strnlen(cuda_home_runtime, 256) &&
        strnlen(cuda_home_compile, 256) &&
        !strncmp(cuda_home_compile, cuda_home_runtime, 256)) {
        printf("SUCCESS\n");
    } else {
        printf("FAILURE\n");
        printf("Compiled with CUDA_HOME=%s, ran with CUDA_HOME=%s\n",
               cuda_home_compile,
               cuda_home_runtime ? cuda_home_runtime : "<null>");
    }

    return 0;
}

This program is pretty basic, but enough to demonstrate the use of environment variables from ReFrame. It simply compares the value of the CUDA_HOME macro with the value of the environment variable CUDA_HOME at runtime, printing SUCCESS if they are not empty and match. The Makefile for this example compiles this source file, defining the CUDA_HOME macro from the value of the CUDA_HOME environment variable:

EXECUTABLE := advanced_example4

CPPFLAGS = -DCUDA_HOME=\"$(CUDA_HOME)\"

.SUFFIXES: .o .c

OBJS := advanced_example4.o

$(EXECUTABLE): $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^

$(OBJS): advanced_example4.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $(LDFLAGS) -o $@ $^

clean:
	/bin/rm -f $(OBJS) $(EXECUTABLE)

Coming back now to the ReFrame regression test, the CUDATOOLKIT_HOME environment variable is defined by the cudatoolkit module. If you try to run the test, you will see that it succeeds, meaning that the CUDA_HOME variable was set correctly both at compile time and at runtime.

When ReFrame sets up a test, it first loads its required modules and then sets the required environment variables, expanding their values. As a result, CUDA_HOME takes the correct value in our example at compilation time.

At runtime, ReFrame will generate the following instructions in the shell script associated with this test:

module load cudatoolkit
export CUDA_HOME=$CUDATOOLKIT_HOME

This ensures that the environment of the test is also set correctly at runtime.

Finally, as already mentioned previously, since the name of the makefile is not one of the standard ones, it must be set explicitly in the build system:

self.build_system.makefile = 'Makefile_example4'

Setting a Time Limit for Regression Tests

ReFrame gives you the option to limit the execution time of regression tests. The following example (tutorial/advanced/advanced_example5.py) demonstrates how you can achieve this by limiting the execution time of a test that tries to sleep 100 seconds:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class TimeLimitTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the use '
                      'of a user-defined time limit')
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['*']
        self.time_limit = (0, 1, 0)
        self.executable = 'sleep'
        self.executable_opts = ['100']
        self.sanity_patterns = sn.assert_found(
            r'CANCELLED.*DUE TO TIME LIMIT', self.stderr)
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

The important bit here is the following line that sets the time limit for the test to one minute:

self.time_limit = (0, 1, 0)

The time_limit attribute is a three-tuple in the form (HOURS, MINUTES, SECONDS). Time limits are implemented for all the scheduler backends.
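For example, a hypothetical 90-minute limit would be expressed as follows:

# 1 hour and 30 minutes
self.time_limit = (1, 30, 0)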

The sanity condition for this test verifies that the associated job has been canceled due to the time limit (note that this message is SLURM-specific).

self.sanity_patterns = sn.assert_found(
    r'CANCELLED.*DUE TO TIME LIMIT', self.stderr)

Applying a sanity function iteratively

It is often the case that a common sanity pattern has to be applied many times. In this example we will demonstrate how the above situation can be easily tackled using the sanity functions offered by ReFrame. Specifically, we would like to execute the following shell script and check that its output is correct:

#!/usr/bin/env bash

if [ -z $LOWER ]; then
    export LOWER=90
fi

if [ -z $UPPER ]; then
    export UPPER=100
fi

for i in {1..100}; do
    echo Random: $((RANDOM%($UPPER+1-$LOWER)+$LOWER))
done

The above script simply prints 100 random integers between the limits given by the variables LOWER and UPPER. In the corresponding regression test we want to check that all the printed random numbers lie between 90 and 100, ensuring that the script executed correctly. Hence, a common sanity check has to be applied to all the printed random numbers. In ReFrame this can be achieved by using the map sanity function, which accepts a function and an iterable as arguments. Through map, the given function will be applied to all the members of the iterable object. Note that since map is a sanity function, its execution will be deferred. The contents of the ReFrame regression test contained in advanced_example6.py are the following:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class DeferredIterationTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the use of deferred '
                      'iteration via the `map` sanity function.')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './random_numbers.sh'
        numbers = sn.extractall(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float)
        self.sanity_patterns = sn.and_(
            sn.assert_eq(sn.count(numbers), 100),
            sn.all(sn.map(lambda x: sn.assert_bounded(x, 90, 100), numbers)))
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

First the random numbers are extracted through the extractall function as follows:

numbers = sn.extractall(
    r'Random: (?P<number>\S+)', self.stdout, 'number', float)

The numbers variable is a deferred iterable, which upon evaluation will return all the extracted numbers. In order to check that the extracted numbers lie within the specified limits, we make use of the map sanity function, which will apply assert_bounded to all the elements of numbers. Additionally, our requirement is that all the numbers satisfy the above constraint, so we use all.

There is still a small complication that needs to be addressed: the all function returns True for empty iterables, which is not what we want. So we must also ensure that the expected number of values has actually been extracted. To achieve this, we make use of count to get the number of elements contained in numbers, combined with assert_eq to check that this number is indeed 100. Finally, both of the above conditions have to be satisfied for the program execution to be considered successful, hence the use of the and_ function. Note that the and operator is not deferrable and will trigger the evaluation of any deferrable argument passed to it.

The full syntax for the sanity_patterns is the following:

self.sanity_patterns = sn.and_(
    sn.assert_eq(sn.count(numbers), 100),
    sn.all(sn.map(lambda x: sn.assert_bounded(x, 90, 100), numbers)))

Customizing the Generated Job Script

It is often the case that you must run some commands before and/or after the parallel launch of your executable. This can be easily achieved by using the pre_run and post_run attributes of RegressionTest.

The following example is a slightly modified version of the previous one. The lower and upper limits for the random numbers are now set inside a helper shell script in scripts/limits.sh, and we also want to print the word FINISHED after our executable has finished. In order to achieve this, we need to source the helper script just before launching the executable and echo the desired message just after it finishes. Here is the test file:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class PrerunDemoTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.descr = ('ReFrame tutorial demonstrating the use of '
                      'pre- and post-run commands')
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.pre_run  = ['source scripts/limits.sh']
        self.post_run = ['echo FINISHED']
        self.executable = './random_numbers.sh'
        numbers = sn.extractall(
            r'Random: (?P<number>\S+)', self.stdout, 'number', float)
        self.sanity_patterns = sn.all([
            sn.assert_eq(sn.count(numbers), 100),
            sn.all(sn.map(lambda x: sn.assert_bounded(x, 50, 80), numbers)),
            sn.assert_found('FINISHED', self.stdout)
        ])
        self.maintainers = ['put-your-name-here']
        self.tags = {'tutorial'}

Notice the use of the pre_run and post_run attributes. These are lists of shell commands that are emitted verbatim in the job script. The generated job script for this example is the following:

#!/bin/bash -l
#SBATCH --job-name="prerun_demo_check_daint_gpu_PrgEnv-gnu"
#SBATCH --time=0:10:0
#SBATCH --ntasks=1
#SBATCH --output=/path/to/stage/gpu/prerun_demo_check/PrgEnv-gnu/prerun_demo_check.out
#SBATCH --error=/path/to/stage/gpu/prerun_demo_check/PrgEnv-gnu/prerun_demo_check.err
#SBATCH --constraint=gpu
module load daint-gpu
module unload PrgEnv-cray
module load PrgEnv-gnu
source scripts/limits.sh
srun ./random_numbers.sh
echo FINISHED

ReFrame generates the job shell script using the following pattern:

#!/bin/bash -l
{job_scheduler_preamble}
{test_environment}
{pre_run}
{parallel_launcher} {executable} {executable_opts}
{post_run}

The job_scheduler_preamble contains the directives that control the job allocation. The test_environment comprises the commands necessary for setting up the environment of the test; this is the place where the modules and environment variables specified in the modules and variables attributes are emitted. The commands specified in pre_run follow, while those specified in post_run come after the launch of the parallel job. The parallel launch itself consists of three parts (a concrete sketch follows the list):

  1. The parallel launcher program (e.g., srun, mpirun etc.) with its options,
  2. the regression test executable as specified in the executable attribute and
  3. the options to be passed to the executable as specified in the executable_opts attribute.
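To make this mapping concrete, a hypothetical test setting the attributes below would produce a launch line like the one shown in the comment, assuming srun is the partition's launcher:

# Hypothetical attribute values
self.executable = './my_app'
self.executable_opts = ['-n', '1024']

# Resulting launch line in the generated job script:
#   srun ./my_app -n 1024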

A key thing to note about the generated job script is that ReFrame submits it from the stage directory of the test, so that all relative paths are resolved against it.

Working with parameterized tests

New in version 2.13.

We have seen already in the basic tutorial how we could better organize the tests so as to avoid code duplication by using test class hierarchies. An alternative technique, which could also be used in parallel with the class hierarchies, is to use parameterized tests. The following is a test that takes a variant parameter, which controls which variant of the code will be used. Depending on that value, the test is set up differently:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.parameterized_test(['MPI'], ['OpenMP'])
class MatrixVectorTest(rfm.RegressionTest):
    def __init__(self, variant):
        super().__init__()
        self.descr = 'Matrix-vector multiplication test (%s)' % variant
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['PrgEnv-cray', 'PrgEnv-gnu',
                                    'PrgEnv-intel', 'PrgEnv-pgi']
        self.build_system = 'SingleSource'
        self.prgenv_flags = {
            'PrgEnv-cray':  ['-homp'],
            'PrgEnv-gnu':   ['-fopenmp'],
            'PrgEnv-intel': ['-openmp'],
            'PrgEnv-pgi':   ['-mp']
        }

        if variant == 'MPI':
            self.num_tasks = 8
            self.num_tasks_per_node = 2
            self.num_cpus_per_task = 4
            self.sourcepath = 'example_matrix_vector_multiplication_mpi_openmp.c'
        elif variant == 'OpenMP':
            self.sourcepath = 'example_matrix_vector_multiplication_openmp.c'
            self.num_cpus_per_task = 4

        self.variables = {
            'OMP_NUM_THREADS': str(self.num_cpus_per_task)
        }
        matrix_dim = 1024
        iterations = 100
        self.executable_opts = [str(matrix_dim), str(iterations)]

        expected_norm = matrix_dim
        found_norm = sn.extractsingle(
            r'The L2 norm of the resulting vector is:\s+(?P<norm>\S+)',
            self.stdout, 'norm', float)
        self.sanity_patterns = sn.all([
            sn.assert_found(
                r'time for single matrix vector multiplication', self.stdout),
            sn.assert_lt(sn.abs(expected_norm - found_norm), 1.0e-6)
        ])
        self.maintainers = ['you-can-type-your-email-here']
        self.tags = {'tutorial'}

    def setup(self, partition, environ, **job_opts):
        if self.prgenv_flags is not None:
            self.build_system.cflags = self.prgenv_flags[environ.name]

        super().setup(partition, environ, **job_opts)

If you have already gone through the tutorial, this test can be easily understood. The new bit here is the @parameterized_test decorator of the MatrixVectorTest class. This decorator takes an arbitrary number of arguments, which are either of a sequence type (i.e., list, tuple etc.) or of a mapping type (i.e., dictionary). Each of the decorator’s arguments corresponds to the constructor arguments of the decorated test that will be used to instantiate it. In the example shown, the test will be instantiated twice, once passing variant as MPI and a second time with variant passed as OpenMP. The framework will try to generate unique names for the generated tests by stringifying the arguments passed to the test’s constructor:

Command line: ./bin/reframe -C tutorial/config/settings.py -c tutorial/advanced/advanced_example8.py -l
Reframe version: 2.15-dev1
Launched by user: XXX
Launched on host: daint101
Reframe paths
=============
    Check prefix      :
    Check search path : 'tutorial/advanced/advanced_example8.py'
    Stage dir prefix  : current/working/dir/reframe/stage/
    Output dir prefix : current/working/dir/reframe/output/
    Logging dir       : current/working/dir/reframe/logs
List of matched checks
======================
  * MatrixVectorTest_MPI (Matrix-vector multiplication test (MPI))
  * MatrixVectorTest_OpenMP (Matrix-vector multiplication test (OpenMP))
Found 2 check(s).

There are a couple of different ways that we could have used the @parameterized_test decorator. One is to use dictionaries for specifying the instantiations of our test class. The dictionaries will be converted to keyword arguments and passed to the constructor of the test class:

@rfm.parameterized_test({'variant': 'MPI'}, {'variant': 'OpenMP'})

Another way, which is quite useful if you want to generate lots of different tests at the same time, is to use either list comprehensions or generator expressions for specifying the different test instantiations:

@rfm.parameterized_test(*([variant] for variant in ['MPI', 'OpenMP']))

Note

In versions of the framework prior to 2.13, this could be achieved by explicitly instantiating your tests inside the _get_checks() method.

Tip

Combining parameterized tests and test class hierarchies can offer you a very flexible way of generating multiple related tests at once, while keeping the maintenance cost low. We use this technique extensively in our tests.
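A minimal sketch of this technique, with hypothetical class names and parameters, could look as follows: a plain base class carries the common setup, while a parameterized subclass supplies only the variant-specific bits.

# Hypothetical base class carrying the common configuration; it is not
# decorated, so it is not registered as a test by itself.
class BaseExampleTest(rfm.RegressionTest):
    def __init__(self, variant):
        super().__init__()
        self.descr = 'Example test (%s)' % variant
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.tags = {'tutorial'}


# Each argument list of the decorator instantiates the test once.
@rfm.parameterized_test(['MPI'], ['OpenMP'])
class ExampleTest(BaseExampleTest):
    def __init__(self, variant):
        super().__init__(variant)
        if variant == 'MPI':
            self.num_tasks = 8
        else:
            self.num_cpus_per_task = 4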

Flexible Regression Tests

New in version 2.15.

ReFrame can automatically set the number of tasks of a particular test, if its num_tasks attribute is set to 0. In ReFrame’s terminology, such tests are called flexible. By default, ReFrame will spawn such a test on all the idle nodes of the current system partition, but this behavior can be adjusted from the command line. Flexible tests are very useful for diagnostics, e.g., for checking the health of a whole set of nodes. In this example, we demonstrate this feature through a simple test that runs hostname. The test will verify that all the nodes print the expected host name:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HostnameCheck(rfm.RunOnlyRegressionTest):
    def __init__(self):
        super().__init__()
        self.valid_systems = ['daint:gpu', 'daint:mc']
        self.valid_prog_environs = ['PrgEnv-cray']
        self.executable = 'hostname'
        self.sourcesdir = None
        self.num_tasks = 0
        self.num_tasks_per_node = 1
        self.sanity_patterns = sn.assert_eq(
            self.num_tasks_assigned,
            sn.count(sn.findall(r'nid\d+', self.stdout))
        )
        self.maintainers = ['you-can-type-your-email-here']
        self.tags = {'tutorial'}

    @property
    @sn.sanity_function
    def num_tasks_assigned(self):
        return self.job.num_tasks

The first thing to notice in this test is that num_tasks is set to 0. This is a requirement for flexible tests:

self.num_tasks = 0

The sanity function of this test simply counts the printed host names and verifies that there are as many as expected:

self.sanity_patterns = sn.assert_eq(
    self.num_tasks_assigned,
    sn.count(sn.findall(r'nid\d+', self.stdout))
)

Notice, however, that the sanity check does not use num_tasks for verification, but rather a different, custom attribute, num_tasks_assigned. This happens for two reasons:

  1. At the time the sanity check expression is created, num_tasks is 0. So the actual number of tasks assigned must be a deferred expression as well.
  2. When ReFrame determines and sets the number of tasks of the test, it will not set the num_tasks attribute of the RegressionTest; it will only set the corresponding attribute of the associated job instance.

Here is how the new deferred attribute is defined:

@property
@sn.sanity_function
def num_tasks_assigned(self):
    return self.job.num_tasks

The behavior of the flexible task allocation is controlled by the --flex-alloc-tasks command line option. See the corresponding section for more information.
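As a hypothetical invocation (the option value is an assumption; check the referenced section for the exact accepted values), the flexible allocation could be restricted when launching the test:

./bin/reframe -C tutorial/config/settings.py -c tutorial/advanced/ --flex-alloc-tasks=4 -r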