ReFrame Test Library (experimental)¶
This is a collection of generic tests that you can either run out of the box, by specializing them for your system with the -S
option, or use as a basis for your own site-specific tests.
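For example, a library test could be run directly by pointing ReFrame at it and overriding test variables with -S; the configuration path, system and environment names below are placeholders, not values shipped with the library:

```bash
# hypothetical invocation; substitute your own site configuration and names
reframe -C /path/to/site-config.py -c hpctestlib/sciapps/amber/nve.py \
    -S valid_systems=mycluster:gpu -S valid_prog_environs=gnu -r
```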
Scientific Applications¶
- class hpctestlib.sciapps.amber.nve.amber_nve_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
Amber NVE test.
Amber is a suite of biomolecular simulation programs. It began in the late 1970s and is maintained by an active development community.
This test is parametrized over the benchmark type (see benchmark_info) and the variant of the code (see variant). Each test instance executes the benchmark, validates its output numerically, and extracts and reports a performance metric.
- benchmark¶
The name of the benchmark that this test encodes.
This is set from the corresponding value in the benchmark_info parameter pack during initialization.
- Type
str
- Required
Yes
- benchmark_info = (('Cellulose_production_NVE', -443246.0, 5e-05), ('FactorIX_production_NVE', -234188.0, 0.0001), ('JAC_production_NVE_4fs', -44810.0, 0.001), ('JAC_production_NVE', -58138.0, 0.0005))¶
Parameter pack encoding the benchmark information.
The first element of the tuple refers to the benchmark name, the second is the energy reference and the third is the tolerance threshold.
- Type
Tuple[str, float, float]
- Values
[ ('Cellulose_production_NVE', -443246.0, 5.0E-05), ('FactorIX_production_NVE', -234188.0, 1.0E-04), ('JAC_production_NVE_4fs', -44810.0, 1.0E-03), ('JAC_production_NVE', -58138.0, 5.0E-04) ]
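The parameter pack can be iterated directly as (name, reference, tolerance) triples. A minimal sketch of how such a triple might drive an energy check follows; treating the tolerance as a relative threshold is an assumption here, not necessarily the library's exact validation rule:

```python
# The benchmark_info parameter pack, copied from the values above.
benchmark_info = (
    ('Cellulose_production_NVE', -443246.0, 5e-05),
    ('FactorIX_production_NVE', -234188.0, 0.0001),
    ('JAC_production_NVE_4fs', -44810.0, 0.001),
    ('JAC_production_NVE', -58138.0, 0.0005),
)

def energy_ok(energy, energy_ref, energy_tol):
    """Accept the extracted energy if it lies within the (relative) tolerance
    of the reference value. The relative interpretation is an assumption."""
    return abs(energy - energy_ref) <= energy_tol * abs(energy_ref)

# Unpack each triple the way the test's initialization does.
for name, energy_ref, energy_tol in benchmark_info:
    print(name, energy_ok(energy_ref * 1.00001, energy_ref, energy_tol))
```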
- energy_ref¶
Energy value reference.
This is set from the corresponding value in the benchmark_info parameter pack during initialization.
- Type
float
- Required
Yes
- energy_tol¶
Energy value tolerance.
This is set from the corresponding value in the benchmark_info parameter pack during initialization.
- Type
float
- Required
Yes
- input_file¶
The input file to use.
This is set to mdin.CPU or mdin.GPU depending on the test variant during initialization.
- Type
str
- Required
Yes
- output_file = 'amber.out'¶
The output file to pass to the Amber executable.
- Type
str
- Required
No
- Default
'amber.out'
- class hpctestlib.sciapps.gromacs.benchmarks.gromacs_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
GROMACS benchmark test.
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
The benchmarks consist of a set of input files that vary in the number of atoms; they can be found in the following repository, which is also versioned: https://github.com/victorusu/GROMACS_Benchmark_Suite/.
Each test instance validates its output numerically and extracts and reports a performance metric.
- benchmark_info = (('HECBioSim/Crambin', -204107.0, 0.001), ('HECBioSim/Glutamine-Binding-Protein', -724598.0, 0.001), ('HECBioSim/hEGFRDimer', -3328920.0, 0.001), ('HECBioSim/hEGFRDimerSmallerPL', -3270800.0, 0.001), ('HECBioSim/hEGFRDimerPair', -12073300.0, 0.001), ('HECBioSim/hEGFRtetramerPair', -20983100.0, 0.001))¶
Parameter pack encoding the benchmark information.
The first element of the tuple refers to the benchmark name, the second is the energy reference and the third is the tolerance threshold.
- Type
Tuple[str, float, float]
- Values
[ ('HECBioSim/Crambin', -204107.0, 0.001), ('HECBioSim/Glutamine-Binding-Protein', -724598.0, 0.001), ('HECBioSim/hEGFRDimer', -3328920.0, 0.001), ('HECBioSim/hEGFRDimerSmallerPL', -3270800.0, 0.001), ('HECBioSim/hEGFRDimerPair', -12073300.0, 0.001), ('HECBioSim/hEGFRtetramerPair', -20983100.0, 0.001) ]
Data Analytics¶
- class hpctestlib.data_analytics.spark.spark_checks.compute_pi_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
Test Apache Spark by computing PI.
Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for incremental computation and stream processing (see spark.apache.org).
This test checks that Spark is functioning correctly. It computes an approximation of pi and compares it, within a configurable tolerance of acceptable deviation, to the value of pi obtained from the math library. The default assumption is that Spark is already installed on the system under test.
- executor_memory¶
Amount of memory to use per executor process, following the JVM memory strings convention, i.e. a number with a size-unit suffix (“k”, “m”, “g” or “t”), e.g. 512m or 2g.
- Type
str
- Required
Yes
- tolerance = 0.01¶
The absolute tolerance of the computed value of PI
- Type
float
- Required
No
- Default
0.01
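The comparison described above can be sketched in plain Python; the function name below is illustrative and not part of the library, but the default tolerance of 0.01 matches the attribute documented here:

```python
import math

def pi_within_tolerance(computed_pi, tolerance=0.01):
    """Check a computed approximation of pi against math.pi,
    using an absolute tolerance (default 0.01)."""
    return abs(computed_pi - math.pi) <= tolerance

print(pi_within_tolerance(3.1416))  # deviation far below the tolerance
print(pi_within_tolerance(3.2))     # deviation of roughly 0.06
```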
Python¶
- class hpctestlib.python.numpy.numpy_ops.numpy_ops_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
NumPy basic operations test.
NumPy is the fundamental package for scientific computing in Python. It provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.
This test performs some fundamental NumPy linear algebra operations (matrix product, SVD, Cholesky decomposition, eigendecomposition, and inverse matrix calculation) and uses the execution time as a performance metric. The default assumption is that NumPy is already installed on the current system.
Interactive Computing¶
- class hpctestlib.interactive.jupyter.ipcmagic.ipcmagic_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
Test ipcmagic via a distributed TensorFlow training with ipyparallel.
ipcmagic is a Python package and collection of CLI scripts for controlling clusters for Jupyter. For more information, please refer to the ipcmagic documentation.
This test checks the ipcmagic performance. To do this, a single-layer neural network is trained against a noisy linear function. The parameters of the fitted linear function are returned at the end, along with the resulting loss function. The default assumption is that ipcmagic is already installed on the system under test.
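A framework-free toy version of what the check trains is sketched below: a single-layer model y = w*x + b fitted to a noisy linear function by gradient descent, returning the fitted parameters and the final loss. The real test does this with TensorFlow distributed through ipyparallel and ipcmagic; everything here is illustrative:

```python
import random

random.seed(0)
TRUE_W, TRUE_B = 2.0, -1.0
# Noisy samples of a linear function on [0, 1).
data = [(i / 100, TRUE_W * (i / 100) + TRUE_B + random.gauss(0, 0.1))
        for i in range(100)]

w = b = 0.0
lr = 0.5
for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

loss = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
print(f'w={w:.2f}, b={b:.2f}, loss={loss:.4f}')
```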
Machine Learning¶
- class hpctestlib.ml.tensorflow.horovod.tensorflow_cnn_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
Run a synthetic CNN benchmark with TensorFlow2 and Horovod.
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications. For more information, refer to https://www.tensorflow.org/.
Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information refer to https://github.com/horovod/horovod.
This test runs the Horovod tensorflow2_synthetic_benchmark.py example, checks its sanity and extracts the GPU performance.
- model = 'InceptionV3'¶
The name of the model to use for this benchmark.
- Type
str
- Default
'InceptionV3'
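Outside of ReFrame, a launch of the underlying example might look as follows; the flags shown are assumptions based on Horovod's example scripts, so consult the script's --help for the authoritative list:

```bash
# hypothetical 4-process launch of the Horovod TensorFlow2 example
horovodrun -np 4 python tensorflow2_synthetic_benchmark.py --model InceptionV3
```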
- class hpctestlib.ml.pytorch.horovod.pytorch_cnn_check(*args, **kwargs)[source]¶
Bases:
reframe.core.pipeline.RunOnlyRegressionTest
Run a synthetic CNN benchmark with PyTorch and Horovod.
PyTorch is a Python package that provides tensor computation like NumPy with strong GPU acceleration and deep neural networks built on a tape-based autograd system. For more information, refer to https://pytorch.org/.
Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information refer to https://github.com/horovod/horovod.
This test runs the Horovod pytorch_synthetic_benchmark.py example, checks its sanity and extracts the GPU performance.
- model = 'inception_v3'¶
The name of the model to use for this benchmark.
- Type
str
- Default
'inception_v3'