Tutorial 10: Unit Testing#
Unit testing is a software development practice that allows developers to verify the functionality of individual units or components of their codebase. In modsim repositories, unit tests play a vital role in verifying custom scripting libraries tailored to the project. This tutorial introduces a project-wide alias, streamlining the execution of unit tests using the pytest [49] framework.
References#
Environment#
SCons and WAVES can be installed in a Conda environment with the Conda package manager. See the Conda installation and Conda environment management documentation for more details about using Conda.
Note
The SALib and numpy versions may not need to be this strict for most tutorials. However, Tutorial: Sensitivity Study uncovered some undocumented SALib version sensitivity to numpy surrounding the numpy v2 rollout.
Create the tutorials environment if it doesn’t exist
$ conda create --name waves-tutorial-env --channel conda-forge waves 'scons>=4.6' matplotlib pandas pyyaml xarray seaborn 'numpy>=2' 'salib>=1.5.1' pytest
Activate the environment
$ conda activate waves-tutorial-env
Some tutorials require additional third-party software that is not available for the Conda package manager. This software must be installed separately and made available to SConstruct either by modifying your system's PATH or by modifying the SConstruct search paths provided to the waves.scons_extensions.add_program() method.
Warning
STOP! Before continuing, check that the documentation version matches your installed package version. You can find the documentation version in the upper-left corner of the webpage and the installed WAVES version with waves --version. If they don't match, you can launch identically matched documentation with the WAVES Command-Line Utility docs subcommand as waves docs.
Directory Structure#
Create and change to a new project root directory to house the tutorial files if you have not already done so. For example
$ mkdir -p ~/waves-tutorials
$ cd ~/waves-tutorials
$ pwd
/home/roppenheimer/waves-tutorials
Note
If you skipped any of the previous tutorials, run the following commands to create a copy of the necessary tutorial files.
$ pwd
/home/roppenheimer/waves-tutorials
$ waves fetch --overwrite --tutorial 9 && mv tutorial_09_post_processing_SConstruct SConstruct
WAVES fetch
Destination directory: '/home/roppenheimer/waves-tutorials'
Regression Script#
In the waves-tutorials/modsim_package/python directory, create a new file named regression.py from the contents below
waves-tutorials/modsim_package/python/regression.py
#!/usr/bin/env python
import sys
import pathlib
import argparse

import pandas
import yaml


def sort_dataframe(dataframe, index_column="time", sort_columns=["time", "set_name"]):
    """Return a sorted dataframe and set an index

    1. sort columns by column name
    2. sort rows by column values ``sort_columns``
    3. set an index

    :returns: sorted and indexed dataframe
    :rtype: pandas.DataFrame
    """
    return dataframe.reindex(sorted(dataframe.columns), axis=1).sort_values(sort_columns).set_index(index_column)


def csv_files_match(current_csv, expected_csv, index_column="time", sort_columns=["time", "set_name"]):
    """Compare two pandas DataFrame objects and determine if they match.

    :param pandas.DataFrame current_csv: Current CSV data of generated plot.
    :param pandas.DataFrame expected_csv: Expected CSV data.

    :returns: True if the CSV files match, False otherwise.
    :rtype: bool
    """
    current = sort_dataframe(current_csv, index_column=index_column, sort_columns=sort_columns)
    expected = sort_dataframe(expected_csv, index_column=index_column, sort_columns=sort_columns)
    try:
        pandas.testing.assert_frame_equal(current, expected)
    except AssertionError as err:
        print(
            f"The CSV regression test failed. Data in expected CSV file and current CSV file do not match.\n{err}",
            file=sys.stderr,
        )
        equal = False
    else:
        equal = True
    return equal


def main(
    first_file: pathlib.Path,
    second_file: pathlib.Path,
    output_file: pathlib.Path,
) -> None:
    """Compare CSV files and return an error code if they differ

    :param first_file: path-like or file-like object containing the first CSV dataset
    :param second_file: path-like or file-like object containing the second CSV dataset
    :param output_file: path-like or file-like object for the regression test pass/fail report
    """
    regression_results = {}

    # CSV regression file comparison
    first_data = pandas.read_csv(first_file)
    second_data = pandas.read_csv(second_file)
    regression_results.update({"CSV comparison": csv_files_match(first_data, second_data)})

    with open(output_file, "w") as output:
        output.write(yaml.safe_dump(regression_results))

    if len(regression_results.values()) < 1 or not all(regression_results.values()):
        sys.exit("One or more regression tests failed")


def get_parser() -> argparse.ArgumentParser:
    """Return parser for CLI options

    All options should use the double-hyphen ``--option VALUE`` syntax to avoid clashes with the Abaqus option syntax,
    including flag style arguments ``--flag``. Single hyphen ``-f`` flag syntax often clashes with the Abaqus command
    line options and should be avoided.

    :returns: parser
    :rtype: argparse.ArgumentParser
    """
    script_name = pathlib.Path(__file__)
    default_output_file = f"{script_name.stem}.yaml"

    prog = f"python {script_name.name} "
    cli_description = "Compare CSV files and return an error code if they differ"
    parser = argparse.ArgumentParser(description=cli_description, prog=prog)
    parser.add_argument(
        "FIRST_FILE",
        type=pathlib.Path,
        help="First CSV file for comparison",
    )
    parser.add_argument(
        "SECOND_FILE",
        type=pathlib.Path,
        help="Second CSV file for comparison",
    )
    parser.add_argument(
        "--output-file",
        type=pathlib.Path,
        default=default_output_file,
        help="Regression test pass/fail list",
    )
    return parser


if __name__ == "__main__":
    parser = get_parser()
    args = parser.parse_args()
    main(
        args.FIRST_FILE,
        args.SECOND_FILE,
        args.output_file,
    )
This script is introduced early, in preparation for Tutorial 11: Regression Testing, because unit testing the post_processing.py functions requires advanced testing techniques. The functions of regression.py can be tested more directly and serve as an introduction to Python unit testing.
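To see the sort-then-compare idea in isolation, the sketch below reproduces the core logic of sort_dataframe and csv_files_match with two small in-memory DataFrames. It is a standalone illustration, not an import of the modsim_package module, and the normalize helper name is an assumption of this sketch.

```python
import pandas

# Two small datasets: identical content, but different column order and row order
first = pandas.DataFrame({"time": [0.0, 0.5, 1.0], "stress": [1.0, 2.0, 3.0]})
second = pandas.DataFrame({"stress": [3.0, 2.0, 1.0], "time": [1.0, 0.5, 0.0]})


def normalize(dataframe, index_column="time", sort_columns=["time"]):
    """Sort columns by name, sort rows by ``sort_columns``, and set an index (mirrors sort_dataframe)"""
    return dataframe.reindex(sorted(dataframe.columns), axis=1).sort_values(sort_columns).set_index(index_column)


# Mirror csv_files_match: compare the normalized frames and report a boolean
try:
    pandas.testing.assert_frame_equal(normalize(first), normalize(second))
    match = True
except AssertionError:
    match = False

print(match)  # True: the frames hold the same data after normalization
```

Without the normalization step, assert_frame_equal would report a mismatch on column order alone, which is why the tutorial sorts both frames before comparing.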
Unit test file#
In the waves-tutorials/modsim_package/python/tests directory, create a new file named test_regression.py from the contents below
waves-tutorials/modsim_package/python/tests/test_regression.py
import pandas

from modsim_package.python import regression


def test_sort_dataframe():
    data = {
        "time": [0.0, 0.5, 1.0],
        "Column1": [1, 2, 3],
        "Column2": [4, 5, 6],
    }
    control = pandas.DataFrame.from_dict(data)
    unsorted_copy = control[["Column2", "Column1", "time"]]

    sorted_control = regression.sort_dataframe(control, sort_columns=["time"])
    sorted_copy = regression.sort_dataframe(unsorted_copy, sort_columns=["time"])

    pandas.testing.assert_frame_equal(sorted_control, sorted_copy)


def test_csv_files_match():
    data = {
        "time": [0.0, 0.5, 1.0],
        "Column1": [1, 2, 3],
        "Column2": [4, 5, 6],
    }
    # Control DataFrame
    control = pandas.DataFrame.from_dict(data)

    # Identical DataFrame
    identical_copy = control.copy()
    unsorted_copy = control[["time", "Column2", "Column1"]]

    # Different DataFrame
    different_copy = control.copy()
    different_copy.loc[0, "Column1"] = 999

    # Assert that the function returns False when the DataFrames differ
    assert regression.csv_files_match(control, different_copy, sort_columns=["time"]) is False

    # Assert that the function returns True when the DataFrames are identical
    assert regression.csv_files_match(control, identical_copy, sort_columns=["time"]) is True

    # Assert that the function returns True when the sorted DataFrames are identical
    assert regression.csv_files_match(control, unsorted_copy, sort_columns=["time"]) is True
In the test_regression.py file, you'll find test implementations for two simple functions of the regression.py module. The remaining functions delve into more complex territory that requires advanced techniques such as mocking. These aspects are intentionally left as exercises for you, the reader, to explore and master. For a deeper understanding of how mocking operates in Python, refer to Unittest Mock [50]. A more complete suite of unit tests may be found in the ModSim Templates.
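As a starting point for the mocking exercise, the sketch below shows the general pattern with unittest.mock.patch from the Python standard library. The load_and_count helper is hypothetical, not a function from regression.py; the same pattern applies when testing main(), whose pandas.read_csv calls you would replace so the test never touches disk.

```python
from unittest.mock import patch

import pandas


def load_and_count(path):
    """Hypothetical helper standing in for code under test: read a CSV file and return its row count"""
    return len(pandas.read_csv(path))


def test_load_and_count():
    # Replace pandas.read_csv with a mock so no file I/O occurs during the test
    fake_data = pandas.DataFrame({"time": [0.0, 0.5, 1.0]})
    with patch("pandas.read_csv", return_value=fake_data) as mock_read_csv:
        # The function under test receives the fake DataFrame instead of reading a file
        assert load_and_count("does_not_exist.csv") == 3
        # The mock also records how it was called, which the test can verify
        mock_read_csv.assert_called_once_with("does_not_exist.csv")


test_load_and_count()
print("mock test passed")
```

The same patch-and-verify pattern extends to mocking open() for the YAML output file and asserting that sys.exit is raised when the comparison fails.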
SConscript#
In the waves-tutorials directory, create a new file named unit_testing from the contents below
waves-tutorials/unit_testing
#! /usr/bin/env python
"""Unit testing workflow

Requires the following ``SConscript(..., exports={})``

* ``env`` - The SCons construction environment with the following required keys

  * ``regression_alias`` - String for the alias collecting the regression test suite targets
"""

import pathlib

# Inherit the parent construction environment
Import("env")

# Set unit test workflow variables
build_directory = pathlib.Path(Dir(".").abspath)
workflow_name = build_directory.name

# Collect the target nodes to build a concise alias for all targets
workflow = []

# Unit test target
workflow.extend(
    env.Command(
        target=[f"{workflow_name}_results.xml"],
        source=["#/modsim_package/python/tests/test_regression.py"],
        action="pytest --junitxml=${TARGETS[0]}",
    )
)

env.Alias(workflow_name, workflow)
env.Alias(env["regression_alias"], workflow)
For this SCons task, the primary purpose of the pytest JUnit XML output report is to provide SCons with a build target to track. If the project uses a continuous integration server, the output may be used for automated test reporting [51].
SConstruct#
Update the SConstruct file. A diff against the SConstruct file from Tutorial 09: Post-Processing is included below to help identify the changes made in this tutorial.
waves-tutorials/SConstruct
--- /home/runner/work/waves/waves/build/docs/tutorials_tutorial_09_post_processing_SConstruct
+++ /home/runner/work/waves/waves/build/docs/tutorials_tutorial_10_unit_testing_SConstruct
@@ -81,6 +81,7 @@
"project_name": project_name,
"project_dir": project_dir,
"version": version,
+ "regression_alias": "regression",
}
for key, value in project_variables.items():
env[key] = value
@@ -116,6 +117,11 @@
build_dir = env["variant_dir_base"] / workflow
SConscript(workflow, variant_dir=build_dir, exports={"env": env}, duplicate=False)
+# Add unit test target
+test_workflow = "unit_testing"
+test_build_dir = env["variant_dir_base"] / test_workflow
+SConscript(test_workflow, variant_dir=test_build_dir, exports={"env": env}, duplicate=False)
+
# Comments used in tutorial code snippets: marker-7
# Add default target list to help message
Our test alias is initialized similarly to the workflow aliases. To clarify that the tests are not part of a modsim workflow, the unit_testing call is made separately from the workflow loop. Additionally, a regression test alias is added as a collector alias for future expansion beyond the unit tests in Tutorial 11: Regression Testing.
Build Targets#
Build the test results
$ pwd
/home/roppenheimer/waves-tutorials
$ scons unit_testing
<output truncated>
Output Files#
Explore the contents of the build
directory using the tree
command against the build
directory, as shown
below. Note that the output files from the previous tutorials may also exist in the build
directory, but the
directory is specified by name to reduce clutter in the output shown.
$ pwd
/home/roppenheimer/waves-tutorials
$ tree build/unit_testing/
build/unit_testing/
└── unit_testing_results.xml
0 directories, 1 file
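If you want to inspect the report programmatically, JUnit XML is straightforward to parse with the Python standard library. The sketch below uses an inline sample string standing in for build/unit_testing/unit_testing_results.xml; the element layout follows the common pytest JUnit output (a testsuites root wrapping a testsuite element), which your installed pytest version may vary from slightly.

```python
import xml.etree.ElementTree as ET

# Sample report standing in for build/unit_testing/unit_testing_results.xml
sample_report = """<testsuites>
  <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="2">
    <testcase classname="modsim_package.python.tests.test_regression" name="test_sort_dataframe"/>
    <testcase classname="modsim_package.python.tests.test_regression" name="test_csv_files_match"/>
  </testsuite>
</testsuites>"""

# Read the suite-level counters that a CI server would report
root = ET.fromstring(sample_report)
suite = root.find("testsuite")
tests = int(suite.get("tests"))
failures = int(suite.get("failures"))
print(f"{tests} tests, {failures} failures")
```

For a real build, replace ET.fromstring with ET.parse("build/unit_testing/unit_testing_results.xml") and call getroot().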