Internal API#
SCons Extensions#
- class waves.scons_extensions.AbaqusPseudoBuilder(builder: Builder, override_cpus: int | None = None)[source]
Bases:
object
Abaqus Pseudo-Builder class which allows users to customize the Abaqus Pseudo-Builder.
- Parameters:
builder – A Builder generated by
waves.scons_extensions.abaqus_solver_builder_factory()
override_cpus – Override the task-specific default number of CPUs. This kwarg value is most useful if propagated from a user-specified option at execution time. If None, Abaqus Pseudo-Builder tasks will use the task-specific default.
Warning
You must use an AbaqusSolver Builder generated from waves.scons_extensions.abaqus_solver_builder_factory(). Using the non-builder-factory waves.scons_extensions.abaqus_solver() (i.e. a Builder that does not use the program_options kwarg) is not supported.
- __call__(
- env: SConsEnvironment,
- job: str,
- inp: str | None = None,
- user: str | None = None,
- cpus: int = 1,
- oldjob: str | None = None,
- write_restart: bool = False,
- double: str = 'both',
- extra_sources: List[str] | None = None,
- extra_targets: List[str] | None = None,
- extra_options: str = '',
- **kwargs,
SCons Pseudo-Builder for running Abaqus jobs.
This SCons Pseudo-Builder wraps the WAVES Abaqus builders to automatically adjust the Abaqus command, sources list, and target list when specifying restart jobs and user subroutines.
Note
Restart files that are only used by Abaqus/Explicit (i.e. .abq, .pac, and .sel) are not currently added to the source and target lists when specifying oldjob or write_restart. Use extra_sources and extra_targets to manually add them when needed.
- Parameters:
job – Abaqus job name without file extension.
inp – Abaqus input file name with file extension. Defaults to job.inp.
user – User subroutine.
cpus – CPUs to use for simulation. Is superseded by override_cpus if provided during object instantiation. The CPUs option is escaped in the action string, i.e. changing the number of CPUs will not trigger a rebuild.
oldjob – Name of job to restart/import.
write_restart – If True, add restart files to target list. This is required if you want to use these restart files for a restart job.
double – Passthrough option for Abaqus’ -double ${double}.
extra_sources – Additional sources to supply to builder.
extra_targets – Additional targets to supply to builder.
extra_options – Additional Abaqus options to supply to builder. Should not include any Abaqus options available as kwargs, e.g. cpus, oldjob, user, input, job.
kwargs – Any additional kwargs are passed through to the builder.
- Returns:
All targets associated with Abaqus simulation.
SConstruct#

import waves

# Allow user to override simulation-specific default number of CPUs
AddOption('--solve-cpus', type='int')
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusSolver": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"]
    )
})
env.AddMethod(
    waves.scons_extensions.AbaqusPseudoBuilder(
        builder=env.AbaqusSolver,
        override_cpus=env.GetOption("solve_cpus")),
    "Abaqus",
)
To define a simple Abaqus simulation:
env.Abaqus(job='simulation_1')
The job name can differ from the input file name:
env.Abaqus(job='assembly_simulation_1', inp='simulation_1.inp')
Specifying a user subroutine automatically adds the user subroutine to the source list:
env.Abaqus(job='simulation_1', user='user.f')
If you write restart files, you can add the restart files to the target list with:
env.Abaqus(job='simulation_1', write_restart=True)
This is important when you expect to use the restart files, as SCons will know to check that the required restart files exist and are up-to-date:
env.Abaqus(job='simulation_2', oldjob='simulation_1')
If your Abaqus job depends on files which aren’t detected by an implicit dependency scanner, you can add them to the source list directly:
env.Abaqus(job='simulation_1', user='user.f', extra_sources=['user_subroutine_input.csv'])
You can specify the default number of CPUs for the simulation:
env.Abaqus(job='simulation_1', cpus=4)
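Because restart files used only by Abaqus/Explicit (.abq, .pac, and .sel) are not added automatically, a hypothetical task may append them by hand with extra_targets:
env.Abaqus(job='simulation_1', write_restart=True, extra_targets=['simulation_1.abq', 'simulation_1.pac', 'simulation_1.sel'])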
- class waves.scons_extensions.WAVESEnvironment(
- *args,
- ABAQUS_PROGRAM: str = 'abaqus',
- ANSYS_PROGRAM: str = 'ansys',
- CCX_PROGRAM: str = 'ccx',
- CHARMRUN_PROGRAM: str = 'charmrun',
- FIERRO_EXPLICIT_PROGRAM: str = 'fierro-parallel-explicit',
- FIERRO_IMPLICIT_PROGRAM: str = 'fierro-parallel-implicit',
- INCITER_PROGRAM: str = 'inciter',
- MPIRUN_PROGRAM: str = 'mpirun',
- PYTHON_PROGRAM: str = 'python',
- SIERRA_PROGRAM: str = 'sierra',
- SPHINX_BUILD_PROGRAM: str = 'sphinx-build',
- **kwargs,
Bases:
SConsEnvironment
Thin overload of SConsEnvironment with WAVES construction environment methods and builders
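A minimal usage sketch, assuming the class is constructed in place of a plain SConsEnvironment; the program path and job name are illustrative:

SConstruct#

import waves

# Construction environment with WAVES methods and builders pre-registered
env = waves.scons_extensions.WAVESEnvironment(ABAQUS_PROGRAM="abaqus")
env.AbaqusDatacheck(target=["job.odb"], source=["input.inp"], job="job")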
- AbaqusDatacheck(target, source, *args, **kwargs)[source]
Builder from factory waves.scons_extensions.abaqus_solver_builder_factory() using the waves.scons_extensions.abaqus_datacheck_emitter().
- Variables:
program –
${ABAQUS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- AbaqusExplicit(target, source, *args, **kwargs)[source]
Builder from factory waves.scons_extensions.abaqus_solver_builder_factory() using the waves.scons_extensions.abaqus_explicit_emitter().
- Variables:
program –
${ABAQUS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- AbaqusJournal(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.abaqus_journal_builder_factory()
- Variables:
program –
${ABAQUS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- AbaqusPseudoBuilder(job, *args, override_cpus: int | None = None, **kwargs)[source]
Construction environment pseudo-builder from waves.scons_extensions.AbaqusPseudoBuilder
When using this environment pseudo-builder, do not provide the first env argument
- Parameters:
job – Abaqus job name.
override_cpus – Override the task-specific default number of CPUs. This kwarg value is most useful if propagated from a user-specified option at execution time. If None, Abaqus Pseudo-Builder tasks will use the task-specific default.
args – All other positional arguments are passed through to waves.scons_extensions.AbaqusPseudoBuilder.__call__()
kwargs – All other keyword arguments are passed through to waves.scons_extensions.AbaqusPseudoBuilder.__call__()
- AbaqusSolver(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.abaqus_solver_builder_factory()
- Variables:
program –
${ABAQUS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- AbaqusStandard(target, source, *args, **kwargs)[source]
Builder from factory waves.scons_extensions.abaqus_solver_builder_factory() using the waves.scons_extensions.abaqus_standard_emitter().
- Variables:
program –
${ABAQUS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- AddCubit(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.add_cubit()
When using this environment method, do not provide the first
env
argument
- AddCubitPython(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.add_cubit_python()
When using this environment method, do not provide the first
env
argument
- AddProgram(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.add_program()
When using this environment method, do not provide the first
env
argument
- AnsysAPDL(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.ansys_apdl_builder_factory()
- Variables:
program –
${ANSYS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- CalculiX(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.calculix_builder_factory()
- Variables:
program –
${CCX_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- CheckProgram(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.check_program()
When using this environment method, do not provide the first
env
argument
- CopySubstfile(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.copy_substfile()
When using this environment method, do not provide the first
env
argument
- FierroExplicit(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.fierro_explicit_builder_factory()
- Variables:
program –
${MPIRUN_PROGRAM}
subcommand –
${FIERRO_EXPLICIT_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- FierroImplicit(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.fierro_implicit_builder_factory()
- Variables:
program –
${MPIRUN_PROGRAM}
subcommand –
${FIERRO_IMPLICIT_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- FindProgram(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.find_program()
When using this environment method, do not provide the first
env
argument
- FirstTargetBuilder(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.first_target_builder_factory()
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- ParameterStudySConscript(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.parameter_study_sconscript()
When using this environment method, do not provide the first
env
argument
- ParameterStudyTask(*args, **kwargs)[source]
Construction environment pseudo-builder from
waves.scons_extensions.parameter_study_task()
When using this environment pseudo-builder, do not provide the first
env
argument
- ParameterStudyWrite(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.parameter_study_write()
When using this environment method, do not provide the first
env
argument
- PrintBuildFailures(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.print_build_failures()
When using this environment method, do not provide the first
env
argument
- ProjectAlias(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.project_alias()
When using this environment method, do not provide the first
env
argument
- ProjectHelp(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.project_help()
When using this environment method, do not provide the first
env
argument
- PythonScript(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.python_builder_factory()
- Variables:
program –
${PYTHON_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- QuinoaSolver(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.quinoa_builder_factory()
- Variables:
program –
${CHARMRUN_PROGRAM}
subcommand –
${INCITER_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- Sierra(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.sierra_builder_factory()
- Variables:
program –
${SIERRA_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- SphinxBuild(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.sphinx_build()
- Variables:
program –
${SPHINX_BUILD_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- SphinxPDF(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.sphinx_latexpdf()
- Variables:
program –
${SPHINX_BUILD_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- SubstitutionSyntax(*args, **kwargs)[source]
Construction environment method from
waves.scons_extensions.substitution_syntax()
When using this environment method, do not provide the first
env
argument
- Truchas(target, source, *args, **kwargs)[source]
Builder from factory
waves.scons_extensions.truchas_builder_factory()
- Variables:
program –
${MPIRUN_PROGRAM}
subcommand –
${TRUCHAS_PROGRAM}
- Parameters:
target – The task target list
source – The task source list
args – All positional arguments are passed through to the builder (not to the builder factory)
kwargs – All keyword arguments are passed through to the builder (not to the builder factory)
- waves.scons_extensions.abaqus_datacheck_emitter(
- target: list,
- source: list,
- env: SConsEnvironment,
- suffixes: Iterable[str] = ('.odb', '.dat', '.msg', '.com', '.prt', '.023', '.mdl', '.sim', '.stt'),
- appending_suffixes: Iterable[str] | None = None,
- stdout_extension: str = '.stdout',
Abaqus solver emitter for datacheck targets
SCons emitter for waves.scons_extensions.abaqus_solver_builder_factory() based builders. Built on waves.scons_extensions.abaqus_solver_emitter_factory().
Appends the target list with job task keyword argument named targets before passing through the waves.scons_extensions.first_target_emitter() emitter.
Searches for the job task keyword argument and appends the target list with f"{job}{suffix}" targets using the suffixes list.
Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the target list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.
This is an SCons emitter function and not an emitter factory. The suffix arguments suffixes and appending_suffixes are only relevant for developers writing new emitters which call this function as a base. The suffixes list emits targets where the suffix replaces the first target’s suffix, e.g. for target.ext emit a new target target.suffix. The appending suffixes list emits targets where the suffix appends the first target’s suffix, e.g. for target.ext emit a new target target.ext.appending_suffix.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file with the .stdout extension as a target, e.g. target.stdout or parameter_set1/target.stdout.

SConstruct#

import waves

env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusDatacheck": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"],
        emitter=waves.scons_extensions.abaqus_datacheck_emitter,
    )
})
env.AbaqusDatacheck(target=["job.odb"], source=["input.inp"], job="job")

Note
The job keyword argument must be provided in the task definition.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env – The builder’s SCons construction environment object
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
stdout_extension – The extension used by the STDOUT/STDERR redirect file
- Returns:
target, source
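The replace-versus-append suffix behavior described above can be sketched with stand-alone helpers (names are illustrative, not the WAVES implementation, which operates on SCons nodes):

```python
def replace_suffix(first_target: str, suffix: str) -> str:
    """Emit a target whose suffix replaces the first target's extension."""
    stem, _, _ = first_target.rpartition(".")
    return f"{stem}{suffix}"

def append_suffix(first_target: str, suffix: str) -> str:
    """Emit a target whose suffix appends to the first target's extension."""
    return f"{first_target}{suffix}"

# 'suffixes' style: target.ext -> target.suffix
assert replace_suffix("target.ext", ".odb") == "target.odb"
# 'appending_suffixes' style: target.ext -> target.ext.appending_suffix
assert append_suffix("target.ext", ".stdout") == "target.ext.stdout"
```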
- waves.scons_extensions.abaqus_explicit_emitter(
- target: list,
- source: list,
- env: SConsEnvironment,
- suffixes: Iterable[str] = ('.odb', '.dat', '.msg', '.com', '.prt', '.sta'),
- appending_suffixes: Iterable[str] | None = None,
- stdout_extension: str = '.stdout',
Abaqus solver emitter for Explicit targets
SCons emitter for waves.scons_extensions.abaqus_solver_builder_factory() based builders. Built on waves.scons_extensions.abaqus_solver_emitter_factory().
Appends the target list with job task keyword argument named targets before passing through the waves.scons_extensions.first_target_emitter() emitter.
Searches for the job task keyword argument and appends the target list with f"{job}{suffix}" targets using the suffixes list.
Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the target list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.
This is an SCons emitter function and not an emitter factory. The suffix arguments suffixes and appending_suffixes are only relevant for developers writing new emitters which call this function as a base. The suffixes list emits targets where the suffix replaces the first target’s suffix, e.g. for target.ext emit a new target target.suffix. The appending suffixes list emits targets where the suffix appends the first target’s suffix, e.g. for target.ext emit a new target target.ext.appending_suffix.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/target.ext. When in doubt, provide a STDOUT redirect file with the .stdout extension as a target, e.g. target.stdout or parameter_set1/target.stdout.

SConstruct#

import waves

env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusExplicit": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"],
        emitter=waves.scons_extensions.abaqus_explicit_emitter,
    )
})
env.AbaqusExplicit(target=["job.odb"], source=["input.inp"], job="job")

Note
The job keyword argument must be provided in the task definition.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env – The builder’s SCons construction environment object
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
stdout_extension – The extension used by the STDOUT/STDERR redirect file
- Returns:
target, source
- waves.scons_extensions.abaqus_extract(program: str = 'abaqus') Builder [source]
Abaqus ODB file extraction Builder
This builder executes the odb_extract command line utility against an ODB file in the source list. The ODB file must be the first file in the source list. If there is more than one ODB file in the source list, all but the first file are ignored by odb_extract.
This builder is unique in that no targets are required. The Builder emitter will append the builder managed targets and odb_extract target name constructions automatically. The first target determines the working directory for the emitter targets. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then at least one target must be provided with the build subdirectory, e.g. parameter_set1/target.h5. When in doubt, provide the expected H5 file as a target, e.g. source[0].h5.
The target list may specify an output H5 file name that differs from the ODB file base name as new_name.h5. If the first file in the target list does not contain the *.h5 extension, or if there is no file in the target list, the target list will be prepended with a name matching the ODB file base name and the *.h5 extension.
The builder emitter appends the CSV file created by the abaqus odbreport command as executed by odb_extract unless delete_report_file is set to True.
This builder supports the keyword arguments: output_type, odb_report_args, delete_report_file with behavior as described in the ODB Extract command line interface.

Format of HDF5 file#

/                               # Top level group required in all hdf5 files
/<instance name>/               # Groups containing data of each instance found in an odb
    FieldOutputs/               # Group with multiple xarray datasets for each field output
        <field name>/           # Group with datasets containing field output data for a specified set or surface
                                # If no set or surface is specified, the <field name> will be
                                # 'ALL_NODES' or 'ALL_ELEMENTS'
    HistoryOutputs/             # Group with multiple xarray datasets for each history output
        <region name>/          # Group with datasets containing history output data for specified history region name
                                # If no history region name is specified, the <region name> will be 'ALL NODES'
    Mesh/                       # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/      # Group containing data of assembly instance found in an odb
    Mesh/                       # Group written from an xarray dataset with all mesh information for this instance
/odb/                           # Catch all group for data found in the odbreport file not already organized by instance
    info/                       # Group with datasets that mostly give odb meta-data like name, path, etc.
    jobData/                    # Group with datasets that contain additional odb meta-data
    rootAssembly/               # Group with datasets that match odb file organization per Abaqus documentation
    sectionCategories/          # Group with datasets that match odb file organization per Abaqus documentation
/xarray/                        # Group with a dataset that lists the location of all data written from xarray datasets

SConstruct#

import waves

env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={"AbaqusExtract": waves.scons_extensions.abaqus_extract()})
env.AbaqusExtract(target=["my_job.h5", "my_job.csv"], source=["my_job.odb"])
- Parameters:
program – An absolute path or basename string for the abaqus program
- Returns:
Abaqus extract builder
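The default H5 target rule described above can be mirrored with a stand-alone sketch (default_h5_targets is a hypothetical helper working on plain strings, not the emitter itself):

```python
from pathlib import Path

def default_h5_targets(targets: list, odb_source: str) -> list:
    """Prepend '<odb stem>.h5' when the target list is empty or its first
    entry is not an H5 file, per the documented emitter behavior."""
    if not targets or not targets[0].endswith(".h5"):
        targets = [str(Path(odb_source).with_suffix(".h5"))] + list(targets)
    return targets

# No targets: the ODB base name supplies the H5 target
assert default_h5_targets([], "my_job.odb") == ["my_job.h5"]
# First target is not *.h5: the H5 target is prepended
assert default_h5_targets(["my_job.csv"], "my_job.odb") == ["my_job.h5", "my_job.csv"]
# A differing H5 name in the first slot is respected
assert default_h5_targets(["new_name.h5"], "my_job.odb") == ["new_name.h5"]
```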
- waves.scons_extensions.abaqus_input_scanner() Scanner [source]
Abaqus input file dependency scanner
Custom SCons scanner that searches for the INPUT= parameter and associated file dependencies inside Abaqus *.inp files.
- Returns:
Abaqus input file dependency Scanner
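A toy re-implementation of the scan, assuming dependencies appear as the INPUT= parameter of Abaqus keywords such as *INCLUDE (the regex and function name are illustrative, not the scanner's actual internals):

```python
import re

# Match file names supplied via the 'INPUT=' keyword parameter
INPUT_PARAMETER = re.compile(r"INPUT\s*=\s*([^,\s]+)", re.IGNORECASE)

def scan_inp(text: str) -> list:
    """Return candidate file dependencies from Abaqus *.inp file contents."""
    return INPUT_PARAMETER.findall(text)

assert scan_inp("*INCLUDE, INPUT=mesh.inp\n*STEP") == ["mesh.inp"]
# Abaqus keywords are case-insensitive
assert scan_inp("*Include, input=loads.inp") == ["loads.inp"]
```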
- waves.scons_extensions.abaqus_journal(
- program: str = 'abaqus',
- required: str = 'cae -noGUI ${SOURCE.abspath}',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- environment_suffix: str = '> ${TARGETS[-2].abspath} 2>&1',
Construct and return an Abaqus journal file SCons builder
This builder requires that the journal file to execute is the first source in the list. The builder returned by this function accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
Builder/Task keyword arguments
- program: The Abaqus command line executable absolute or relative path
- required: A space delimited string of Abaqus required arguments
- abaqus_options: The Abaqus command line options provided as a string
- journal_options: The journal file command line options provided as a string
- action_prefix: Advanced behavior. Most users should accept the defaults.
- action_suffix: Advanced behavior. Most users should accept the defaults.
- environment_suffix: Advanced behavior. Most users should accept the defaults.
At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the journal file.
The Builder emitter will append the builder managed targets automatically. Appends target[0].abaqus_v6.env and target[0].stdout to the target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/my_target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.

Abaqus journal builder action keywords#

${action_prefix} ${program} -information environment ${environment_suffix}
${action_prefix} ${program} ${required} ${abaqus_options} -- ${journal_options} ${action_suffix}
With the default argument values, this expands to
Abaqus journal builder action default expansion#

cd ${TARGET.dir.abspath} && abaqus -information environment > ${TARGETS[-2].abspath} 2>&1
cd ${TARGET.dir.abspath} && abaqus cae -noGUI ${SOURCE.abspath} ${abaqus_options} -- ${journal_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#

import waves

env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={"AbaqusJournal": waves.scons_extensions.abaqus_journal()})
env.AbaqusJournal(target=["my_journal.cae"], source=["my_journal.py"], journal_options="")
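The appended targets described above can be modeled as a list transformation (a toy sketch on plain strings; the real emitter works on SCons nodes, so exact file naming should be checked against the builder's output):

```python
def journal_emitter(targets: list) -> list:
    """Append the Abaqus environment and STDOUT redirect files derived from
    the first target, as the documentation describes."""
    first = targets[0]
    return list(targets) + [f"{first}.abaqus_v6.env", f"{first}.stdout"]

assert journal_emitter(["my_journal.cae"]) == [
    "my_journal.cae",
    "my_journal.cae.abaqus_v6.env",
    "my_journal.cae.stdout",
]
```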
- Parameters:
program – The Abaqus command line executable absolute or relative path
required – A space delimited string of Abaqus required arguments
action_prefix – Advanced behavior. Most users should accept the defaults.
action_suffix – Advanced behavior. Most users should accept the defaults.
environment_suffix – Advanced behavior. Most users should accept the defaults.
- Returns:
Abaqus journal builder
- waves.scons_extensions.abaqus_journal_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'abaqus',
- program_required: str = 'cae -noGUI ${SOURCES[0].abspath}',
- program_options: str = '',
- subcommand: str = '--',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Abaqus journal builder factory
This builder factory extends waves.scons_extensions.first_target_builder_factory(). This builder factory uses the waves.scons_extensions.first_target_emitter(). At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.
.Warning
Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter’s behavior.
With the default options this builder requires the following source files provided in the order:
- Abaqus journal file: *.py
action string construction#

${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#

${environment} cd ${TARGET.dir.abspath} && abaqus cae -noGUI ${SOURCES[0].abspath} ${program_options} -- ${subcommand_required} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#

import waves

env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusJournal": waves.scons_extensions.abaqus_journal_builder_factory(
        program=env["ABAQUS_PROGRAM"]
    )
})
env.AbaqusJournal(target=["my_journal.cae"], source=["my_journal.py"])
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using waves.scons_extensions.ssh_builder_actions() and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The Abaqus absolute or relative path
program_required – Space delimited string of required Abaqus options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional Abaqus options and arguments that can be freely modified by the user
subcommand – The shell separator for positional arguments used to separate Abaqus program from Abaqus journal file arguments and options
subcommand_required – Space delimited string of required Abaqus journal file options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Abaqus journal file options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Abaqus journal builder
- waves.scons_extensions.abaqus_solver(
- program: str = 'abaqus',
- required: str = '-interactive -ask_delete no -job ${job_name} -input ${SOURCE.filebase}',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- environment_suffix: str = '> ${TARGETS[-2].abspath} 2>&1',
- emitter: Literal['standard', 'explicit', 'datacheck', None] = None,
Construct and return an Abaqus solver SCons builder
This builder requires that the root input file is the first source in the list. The builder returned by this function accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
Builder/Task keyword arguments
program
: The Abaqus command line executable absolute or relative path
required
: A space delimited string of Abaqus required arguments
job_name
: The job name string. If not specified,
job_name
defaults to the root input file stem. The Builder emitter will append common Abaqus output files as targets automatically from the
job_name
, e.g.job_name.odb
.
abaqus_options
: The Abaqus command line options provided as a string.
suffixes
: override the emitter targets with a new list of extensions, e.g.AbaqusSolver(target=[], source=["input.inp"], suffixes=[".odb"])
will emit only one file namedjob_name.odb
.action_prefix
: Advanced behavior. Most users should accept the defaults.
action_suffix
: Advanced behavior. Most users should accept the defaults.
environment_suffix
: Advanced behavior. Most users should accept the defaults.
The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the solver.
This builder is unique in that no targets are required. The Builder emitter will append the builder managed targets automatically. The target list only appends those extensions which are common to Abaqus analysis operations. Some extensions may need to be added explicitly according to the Abaqus simulation solver, type, or options. If you find that SCons isn’t automatically cleaning some Abaqus output files, they are not in the automatically appended target list.
The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g.
parameter_set1/job_name.odb
. When in doubt, provide a STDOUT redirect file as a target, e.g.target.stdout
.The
-interactive
option is always appended to the builder action to avoid exiting the Abaqus task before the simulation is complete. The-ask_delete no
option is always appended to the builder action to overwrite existing files in programmatic execution, where it is assumed that the Abaqus solver target(s) should be re-built when their source files change.
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusSolver": waves.scons_extensions.abaqus_solver(),
    "AbaqusStandard": waves.scons_extensions.abaqus_solver(emitter='standard'),
    "AbaqusOld": waves.scons_extensions.abaqus_solver(program="abq2019")
})
env.AbaqusSolver(target=[], source=["input.inp"], job_name="my_job", abaqus_options="-cpus 4")
env.AbaqusSolver(target=[], source=["input.inp"], job_name="my_job", suffixes=[".odb"])
Abaqus solver builder action keywords#
${action_prefix} ${program} -information environment ${environment_suffix}
${action_prefix} ${program} ${required} ${abaqus_options} ${action_suffix}
Abaqus solver builder action default expansion#
cd ${TARGET.dir.abspath} && abaqus -information environment > ${TARGETS[-2].abspath} 2>&1
cd ${TARGET.dir.abspath} && abaqus -interactive -ask_delete no -job ${job_name} -input ${SOURCE.filebase} ${abaqus_options} > ${TARGETS[-1].abspath} 2>&1
- Parameters:
program – An absolute path or basename string for the abaqus program
required – A space delimited string of Abaqus required arguments
action_prefix – Advanced behavior. Most users should accept the defaults.
action_suffix – Advanced behavior. Most users should accept the defaults.
environment_suffix – Advanced behavior. Most users should accept the defaults.
emitter –
emit file extensions based on the value of this variable. Overridden by the
suffixes
keyword argument that may be provided in the task definition.
“standard”: [“.odb”, “.dat”, “.msg”, “.com”, “.prt”, “.sta”]
“explicit”: [“.odb”, “.dat”, “.msg”, “.com”, “.prt”, “.sta”]
“datacheck”: [“.odb”, “.dat”, “.msg”, “.com”, “.prt”, “.023”, “.mdl”, “.sim”, “.stt”]
default value: [“.odb”, “.dat”, “.msg”, “.com”, “.prt”]
- Returns:
Abaqus solver builder
- waves.scons_extensions.abaqus_solver_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'abaqus',
- program_required: str = '-interactive -ask_delete no -job ${job} -input ${SOURCE.filebase}',
- program_options: str = '',
- subcommand: str = '',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Abaqus solver builder factory
This builder factory extends
waves.scons_extensions.first_target_builder_factory()
. This builder factory uses thewaves.scons_extensions.first_target_emitter()
. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file,TARGETS[-1]
ending in*.stdout
.Warning
Users overriding the
emitter
keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior aswaves.scons_extensions.first_target_emitter()
or updating theaction_suffix
option to match their emitter’s behavior.
With the default options this builder requires the following source files provided in the order:
Abaqus solver file:
*.inp
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && abaqus -interactive -ask_delete no -job ${job} -input ${SOURCE.filebase} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusSolver": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"]
    )
})
env.AbaqusSolver(target=["job.odb"], source=["input.inp"], job="job")
Note
The
job
keyword argument must be provided in the task definition.
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The Abaqus absolute or relative path
program_required – Space delimited string of required Abaqus options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional Abaqus options and arguments that can be freely modified by the user
subcommand – The subcommand absolute or relative path
subcommand_required – Space delimited string of required subcommand options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional subcommand options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Abaqus solver builder
- waves.scons_extensions.abaqus_solver_emitter_factory(
- suffixes: Iterable[str] = ('.odb', '.dat', '.msg', '.com', '.prt'),
- appending_suffixes: Iterable[str] | None = None,
- stdout_extension: str = '.stdout',
Abaqus solver emitter factory
SCons emitter factory that returns emitters for
waves.scons_extensions.abaqus_solver_builder_factory()
based builders.Emitters returned by this factory append the target list with
job
task keyword argument named targets before passing through thewaves.scons_extensions.first_target_emitter()
emitter.Searches for the
job
task keyword argument and appends the target list withf"{job}{suffix}"
targets using thesuffixes
list.Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the
target
list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.
The
suffixes
list emits targets where the suffix replaces the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.suffix
. The appending suffixes list emits targets where the suffix appends the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.ext.appending_suffix
.The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g.
parameter_set1/target.ext
. When in doubt, provide a STDOUT redirect file with the.stdout
extension as a target, e.g.target.stdout
orparameter_set1/target.stdout
.
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusStandard": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"],
        emitter=waves.scons_extensions.abaqus_solver_emitter_factory(
            suffixes=[".odb", ".dat", ".msg", ".com", ".prt", ".sta"],
        )
    )
})
env.AbaqusStandard(target=["job.odb"], source=["input.inp"], job="job")
Note
The
job
keyword argument must be provided in the task definition.
- Parameters:
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
stdout_extension – The extension used by the STDOUT/STDERR redirect file
- Returns:
emitter function
- waves.scons_extensions.abaqus_standard_emitter(
- target: list,
- source: list,
- env: SConsEnvironment,
- suffixes: Iterable[str] = ('.odb', '.dat', '.msg', '.com', '.prt', '.sta'),
- appending_suffixes: Iterable[str] | None = None,
- stdout_extension: str = '.stdout',
Abaqus solver emitter for Standard targets
SCons emitter for
waves.scons_extensions.abaqus_solver_builder_factory()
based builders. Built onwaves.scons_extensions.abaqus_solver_emitter_factory()
.Appends the target list with
job
task keyword argument named targets before passing through thewaves.scons_extensions.first_target_emitter()
emitter.Searches for the
job
task keyword argument and appends the target list withf"{job}{suffix}"
targets using thesuffixes
list.Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the
target
list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.
This is an SCons emitter function and not an emitter factory. The suffix arguments:
suffixes
andappending_suffixes
are only relevant for developers writing new emitters which call this function as a base. The suffixes list emits targets where the suffix replaces the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.suffix
. The appending suffixes list emits targets where the suffix appends the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.ext.appending_suffix
.The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g.
parameter_set1/target.ext
. When in doubt, provide a STDOUT redirect file with the.stdout
extension as a target, e.g.target.stdout
orparameter_set1/target.stdout
.
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ABAQUS_PROGRAM"] = env.AddProgram(["abaqus"])
env.Append(BUILDERS={
    "AbaqusStandard": waves.scons_extensions.abaqus_solver_builder_factory(
        program=env["ABAQUS_PROGRAM"],
        emitter=waves.scons_extensions.abaqus_standard_emitter,
    )
})
env.AbaqusStandard(target=["job.odb"], source=["input.inp"], job="job")
Note
The
job
keyword argument must be provided in the task definition.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env – The builder’s SCons construction environment object
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
stdout_extension – The extension used by the STDOUT/STDERR redirect file
- Returns:
target, source
- waves.scons_extensions.action_list_scons(actions: Iterable[str]) ListAction [source]
Convert a list of action strings to an SCons.Action.ListAction object
- Parameters:
actions – List of action strings
- Returns:
SCons.Action.ListAction object of SCons.Action.CommandAction
- waves.scons_extensions.action_list_strings(builder: Builder) List[str] [source]
Return a builder’s action list as a list of str
- Parameters:
builder – The builder to extract the action list from
- Returns:
list of builder actions
- waves.scons_extensions.add_cubit(env: SConsEnvironment, names: Iterable[str]) str [source]
Modifies environment variables with the paths required to
import cubit
in a Python3 environment.Returns the absolute path of the first program name found. Appends
PATH
with first program’s parent directory if a program is found and the directory is not already onPATH
. PrependsPYTHONPATH
withparent/bin
. PrependsLD_LIBRARY_PATH
withparent/bin/python3
.Returns None if no program name is found.
Example Cubit environment modification#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_cubit, "AddCubit")
env["CUBIT_PROGRAM"] = env.AddCubit(["cubit"])
- Parameters:
env – The SCons construction environment object to modify
names – list of string program names for the main Cubit executable. May include an absolute path.
- Returns:
Absolute path of the Cubit executable. None if none of the names are found.
- waves.scons_extensions.add_cubit_python(env: SConsEnvironment, names: Iterable[str]) str [source]
Modifies environment variables with the paths required to
import cubit
with the Cubit Python interpreter.
Returns the absolute path of the first Cubit Python interpreter found. Appends
PATH
with Cubit Python parent directory if a program is found and the directory is not already onPATH
. PrependsPYTHONPATH
withparent/bin
.Returns None if no Cubit Python interpreter is found.
Example Cubit environment modification#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_cubit_python, "AddCubitPython")
env["CUBIT_PROGRAM"] = env.AddCubitPython(["cubit"])
- Parameters:
env – The SCons construction environment object to modify
names – list of string program names for the main Cubit executable. May include an absolute path.
- Returns:
Absolute path of the Cubit Python interpreter. None if none of the names are found.
- waves.scons_extensions.add_program(env: SConsEnvironment, names: Iterable[str]) str [source]
Search for a program from a list of possible program names. Add first found to system
PATH
.Returns the absolute path of the first program name found. Appends
PATH
with first program’s parent directory if a program is found and the directory is not already onPATH
. Returns None if no program name is found.
Example search for an executable named “program”#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["PROGRAM"] = env.AddProgram(["program"])
- Parameters:
env – The SCons construction environment object to modify
names – list of string program names. May include an absolute path.
- Returns:
Absolute path of the found program. None if none of the names are found.
- waves.scons_extensions.ansys_apdl_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'ansys',
- program_required: str = '-i ${SOURCES[0].abspath} -o ${TARGETS[-1].abspath}',
- program_options: str = '',
- subcommand: str = '',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '',
- emitter=<function first_target_emitter>,
- **kwargs,
Ansys APDL builder factory.
Warning
This is an experimental builder. It is subject to change without warning.
Warning
This builder does not have a tutorial and is not included in the regression test suite yet. Contact the development team if you encounter problems or have recommendations for improved design behavior.
This builder factory extends
waves.scons_extensions.first_target_builder_factory()
. This builder factory uses thewaves.scons_extensions.first_target_emitter()
. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file,TARGETS[-1]
ending in*.stdout
.Warning
Users overriding the
emitter
keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior aswaves.scons_extensions.first_target_emitter()
or updating theprogram_required
option to match their emitter’s behavior.
With the default options this builder requires the following source files provided in the order:
Ansys APDL file:
*.dat
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && ansys -i ${SOURCES[0].abspath} -o ${TARGETS[-1].abspath} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["ANSYS_PROGRAM"] = env.AddProgram(["ansys232"])
env.Append(BUILDERS={
    "AnsysAPDL": waves.scons_extensions.ansys_apdl_builder_factory(
        program=env["ANSYS_PROGRAM"]
    )
})
env.AnsysAPDL(
    target=["job.rst"],
    source=["source.dat"],
    program_options="-j job"
)
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The Ansys absolute or relative path
program_required – Space delimited string of required Ansys options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional Ansys options and arguments that can be freely modified by the user
subcommand – A subcommand absolute or relative path
subcommand_required – Space delimited string of required subcommand options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional subcommand options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Ansys builder
- waves.scons_extensions.append_env_path(env: SConsEnvironment, program: str) None [source]
Append SCons construction environment
PATH
with the program’s parent directoryUses the SCons AppendENVPath method. If the program parent directory is already on
PATH
, thePATH
directory order is preserved.
Example environment modification#
import waves
env = Environment()
env["PROGRAM"] = waves.scons_extensions.find_program(env, ["program"])
if env["PROGRAM"]:
    waves.scons_extensions.append_env_path(env, env["PROGRAM"])
- Parameters:
env – The SCons construction environment object to modify
program – An absolute path for the program to add to SCons construction environment
PATH
- Raises:
FileNotFoundError – if the
program
absolute path does not exist.
- waves.scons_extensions.builder_factory(
- environment: str = '',
- action_prefix: str = '',
- program: str = '',
- program_required: str = '',
- program_options: str = '',
- subcommand: str = '',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '',
- emitter=None,
- **kwargs,
Template builder factory returning a builder with no emitter
This builder provides a template action string with placeholder keyword arguments in the action string. The default behavior will not do anything unless the
program
orsubcommand
argument is updated to include an executable program. Because this builder has no emitter, all task targets must be fully specified in the task definition. Seewaves.scons_extensions.first_target_builder_factory()
for an example of the default options used by most WAVES builders.
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution. By default, SCons performs actions in the parent directory of the SConstruct file. However, many computational science and engineering programs leave output files in the current working directory, so it is convenient and sometimes necessary to change to the target’s parent directory prior to execution.
program – This variable is intended to contain the primary command line executable absolute or relative path
program_required – This variable is intended to contain a space delimited string of required program options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
program_options – This variable is intended to contain a space delimited string of optional program options and arguments that can be freely modified by the user.
subcommand – This variable is intended to contain the program’s subcommand. If the program variable is set to a launch controlling program, e.g.
mpirun
orcharmrun
, then the subcommand may need to contain the full target executable program and any subcommands.subcommand_required – This variable is intended to contain a space delimited string of required subcommand options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – This variable is intended to contain a space delimited string of optional subcommand options and arguments that can be freely modified by the user.
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations. By default, SCons streams all STDOUT and STDERR to the terminal. However, in long or parallel workflows this may clutter the terminal and make it difficult to isolate critical debugging information, so it is convenient to redirect each program’s output to a task specific log file for later inspection and troubleshooting.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
SCons template builder
- waves.scons_extensions.calculix_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'ccx',
- program_required: str = '-i ${SOURCE.filebase}',
- program_options: str = '',
- subcommand: str = '',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
CalculiX builder factory.
Warning
This is an experimental builder. It is subject to change without warning.
This builder factory extends
waves.scons_extensions.first_target_builder_factory()
. This builder factory uses thewaves.scons_extensions.first_target_emitter()
. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file,TARGETS[-1]
ending in*.stdout
.Warning
Users overriding the
emitter
keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior aswaves.scons_extensions.first_target_emitter()
or updating theaction_suffix
option to match their emitter’s behavior.
Warning
CalculiX always appends the
.inp
extension to the input file argument. Stripping the extension in the builder requires a file basename without preceding relative or absolute path. This builder is fragile to current working directory. Most users should not modify theaction_prefix
.
With the default options this builder requires the following source files provided in the order:
CalculiX input file:
*.inp
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && ccx -i ${SOURCE.filebase} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
env["CCX_PROGRAM"] = env.AddProgram(["ccx"])
env.Append(BUILDERS={
    "CalculiX": waves.scons_extensions.calculix_builder_factory(
        subcommand=env["CCX_PROGRAM"]
    )
})
env.CalculiX(
    target=["target.stdout"],
    source=["source.inp"],
)
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The CalculiX
ccx
absolute or relative pathprogram_required – Space delimited string of required CalculiX options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional CalculiX options and arguments that can be freely modified by the user
subcommand – A subcommand absolute or relative path
subcommand_required – Space delimited string of required subcommand options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional subcommand options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
CalculiX builder
- waves.scons_extensions.catenate_actions(**outer_kwargs)[source]
Decorator factory to apply the
catenate_builder_actions
to a function that returns an SCons Builder.Accepts the same keyword arguments as the
waves.scons_extensions.catenate_builder_actions()
import SCons.Builder
import waves

@waves.scons_extensions.catenate_actions
def my_builder():
    return SCons.Builder.Builder(action=["echo $SOURCE > $TARGET", "echo $SOURCE >> $TARGET"])
- waves.scons_extensions.catenate_builder_actions(
- builder: Builder,
- program: str = '',
- options: str = '',
Catenate a builder’s arguments and prepend the program and options
${program} ${options} "action one && action two"
- Parameters:
builder – The SCons builder to modify
program – wrapping executable
options – options for the wrapping executable
- Returns:
modified builder
- waves.scons_extensions.check_program(env: SConsEnvironment, prog_name: str) str [source]
Replacement for SCons CheckProg like behavior without an SCons configure object
Example search for an executable named “program”#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.check_program, "CheckProgram")
env["PROGRAM"] = env.CheckProgram("program")
- Parameters:
env – The SCons construction environment object to modify
prog_name – string program name to search in the construction environment path
- waves.scons_extensions.conda_environment(
- program: str = 'conda',
- subcommand: str = 'env export',
- required: str = '--file ${TARGET.abspath}',
- options: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
Create a Conda environment file with
conda env export
This builder is intended to help WAVES workflows document the Conda environment used in the current build. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
Builder/Task keyword arguments
program
: The Conda command line executable absolute or relative path
subcommand
: The Conda environment export subcommand
required
: A space delimited string of subcommand required arguments
options
: A space delimited string of subcommand optional arguments
action_prefix
: Advanced behavior. Most users should accept the defaults
At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to creating the Conda environment file.
Conda environment builder action construction#${action_prefix} ${program} ${subcommand} ${required} ${options}
Conda environment builder action default expansion#cd ${TARGET.dir.abspath} && conda env export --file ${TARGET.abspath} ${options}
The modsim owner may choose to re-use this builder throughout their project configuration to provide various levels of granularity in the recorded Conda environment state. It’s recommended to include this builder at least once for any workflows that also use the
waves.scons_extensions.python_builder_factory()
. The builder may be re-used once per build sub-directory to provide more granular build environment reproducibility in the event that sub-builds are run at different times with variations in the active Conda environment. For per-Python script task environment reproducibility, the builder source list can be linked to the output of awaves.scons_extensions.python_builder_factory()
task with a target environment file name to match.

The first recommendation, always building the project wide Conda environment file, is demonstrated in the example usage below.
SConstruct#
    import waves
    env = Environment()
    env.Append(BUILDERS={"CondaEnvironment": waves.scons_extensions.conda_environment()})
    environment_target = env.CondaEnvironment(target=["environment.yaml"])
    env.AlwaysBuild(environment_target)
- Parameters:
program – The Conda command line executable absolute or relative path
subcommand – The Conda environment export subcommand
required – A space delimited string of subcommand required arguments
options – A space delimited string of subcommand optional arguments
action_prefix – Advanced behavior. Most users should accept the defaults
- Returns:
Conda environment builder
- waves.scons_extensions.construct_action_list(
- actions: Iterable[str],
- prefix: str = '${action_prefix}',
- suffix: str = '',
Return an action list with a common pre/post-fix
Returns the constructed action list with pre/post fix strings as
f"{prefix} {new_action} {suffix}"
where SCons action objects are converted to their string representation. If a string is passed instead of a list, it is first converted to a list. If an empty list is passed, an empty list is returned.
- Parameters:
actions – List of action strings
prefix – Common prefix to prepend to each action
suffix – Common suffix to append to each action
- Returns:
action list
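The catenation formula above can be sketched as a stand-alone function. This is a hypothetical re-implementation for illustration only; the WAVES function additionally converts SCons action objects to their string representations.

```python
def construct_action_list(actions, prefix="${action_prefix}", suffix=""):
    """Sketch of the pre/post-fix rule: f"{prefix} {action} {suffix}" per action."""
    if isinstance(actions, str):
        actions = [actions]  # a bare string is promoted to a one-element list
    return [f"{prefix} {action} {suffix}" for action in actions]

# Example: prepend a directory change to every action string
actions = construct_action_list(["echo one", "echo two"], prefix="cd ${TARGET.dir.abspath} &&")
```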
- waves.scons_extensions.copy_substfile(
- env: SConsEnvironment,
- source_list: list,
- substitution_dictionary: dict | None = None,
- build_subdirectory: str = '.',
- symlink: bool = False,
Pseudo-builder to copy source list to build directory and perform template substitutions on
*.in
filenames.

SCons Pseudo-Builder to chain two builders: a builder with the SCons Copy action and the SCons Substfile builders. Files are first copied to the build (variant) directory and then template substitution is performed on template files (any file ending with
*.in
suffix) to create a file without the template suffix.

When pseudo-builders are added to the environment with the SCons AddMethod function they can be accessed with the same syntax as a normal builder. When called from the construction environment, the
env
argument is omitted. See the example below.

To avoid dependency cycles, the source file(s) should be passed by absolute path.
SConstruct#
    import pathlib
    import waves
    current_directory = pathlib.Path(Dir(".").abspath)
    env = Environment()
    env.AddMethod(waves.scons_extensions.copy_substfile, "CopySubstfile")
    source_list = [
        "#/subdir3/file_three.ext",              # File found with respect to project root directory using SCons notation
        current_directory / "file_one.ext",      # File found in current SConscript directory
        current_directory / "subdir2/file_two",  # File found below current SConscript directory
        current_directory / "file_four.ext.in",  # File with substitutions matching substitution dictionary keys
    ]
    substitution_dictionary = {"@variable_one@": "value_one"}
    env.CopySubstfile(source_list, substitution_dictionary=substitution_dictionary)
- Parameters:
env – An SCons construction environment to use when defining the targets.
source_list – List of pathlike objects or strings. Will be converted to list of pathlib.Path objects.
substitution_dictionary – key: value pairs for template substitution. The keys must contain the optional template characters if present, e.g.
@variable@
. The template character, e.g.@
, can be anything that works in the SCons Substfile builder.build_subdirectory – build subdirectory relative path prepended to target files
symlink – Whether to copy symbolic links as symbolic links. If True, a source symbolic link is shallow copied as a new symbolic link. If False, symbolic links are dereferenced and copied as new files.
- Returns:
SCons NodeList of Copy and Substfile target nodes
- waves.scons_extensions.fierro_explicit_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'mpirun',
- program_required: str = '',
- program_options: str = '-np 1',
- subcommand: str = 'fierro-parallel-explicit',
- subcommand_required: str = '${SOURCE.abspath}',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Fierro explicit builder factory.
Warning
This is an experimental builder. It is subject to change without warning.
This builder factory extends waves.scons_extensions.first_target_builder_factory(). This builder factory uses the waves.scons_extensions.first_target_emitter(). At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.

Warning

Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter's behavior.

With the default options this builder requires the following source files provided in the following order:
Fierro input file:
*.yaml
action string construction#${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#${environment} cd ${TARGET.dir.abspath} && mpirun ${program_required} -np 1 fierro-parallel-explicit ${SOURCE.abspath} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
    import waves
    env = Environment()
    env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
    env["FIERRO_EXPLICIT_PROGRAM"] = env.AddProgram(["fierro-parallel-explicit"])
    env.Append(BUILDERS={
        "FierroExplicit": waves.scons_extensions.fierro_explicit_builder_factory(
            subcommand=env["FIERRO_EXPLICIT_PROGRAM"]
        )
    })
    env.FierroExplicit(
        target=["target.stdout"],
        source=["source.yaml"],
    )
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The mpirun absolute or relative path
program_required – Space delimited string of required mpirun options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional mpirun options and arguments that can be freely modified by the user
subcommand – The Fierro absolute or relative path
subcommand_required – Space delimited string of required Fierro options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Fierro options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Fierro explicit builder
- waves.scons_extensions.fierro_implicit_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'mpirun',
- program_required: str = '',
- program_options: str = '-np 1',
- subcommand: str = 'fierro-parallel-implicit',
- subcommand_required: str = '${SOURCE.abspath}',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Fierro implicit builder factory.
Warning
This is an experimental builder. It is subject to change without warning.
This builder factory extends waves.scons_extensions.first_target_builder_factory(). This builder factory uses the waves.scons_extensions.first_target_emitter(). At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.

With the default options this builder requires the following source files provided in the following order:
Fierro input file:
*.yaml
action string construction#${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#${environment} cd ${TARGET.dir.abspath} && mpirun ${program_required} -np 1 fierro-parallel-implicit ${SOURCE.abspath} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
    import waves
    env = Environment()
    env.AddMethod(waves.scons_extensions.add_program, "AddProgram")
    env["FIERRO_IMPLICIT_PROGRAM"] = env.AddProgram(["fierro-parallel-implicit"])
    env.Append(BUILDERS={
        "FierroImplicit": waves.scons_extensions.fierro_implicit_builder_factory(
            subcommand=env["FIERRO_IMPLICIT_PROGRAM"]
        )
    })
    env.FierroImplicit(
        target=["target.stdout"],
        source=["source.yaml"],
    )
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The mpirun absolute or relative path
program_required – Space delimited string of required mpirun options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional mpirun options and arguments that can be freely modified by the user
subcommand – The Fierro absolute or relative path
subcommand_required – Space delimited string of required Fierro options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Fierro options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Fierro implicit builder
- waves.scons_extensions.find_program(env: SConsEnvironment, names: Iterable[str]) str [source]
Search for a program from a list of possible program names.
Returns the absolute path of the first program name found. If path parts contain spaces, the part will be wrapped in double quotes.
Example search for an executable named "program"#
    import waves
    env = Environment()
    env.AddMethod(waves.scons_extensions.find_program, "FindProgram")
    env["PROGRAM"] = env.FindProgram(["program"])
- Parameters:
env – The SCons construction environment object to modify
names – list of string program names. May include an absolute path.
- Returns:
Absolute path of the found program. None if none of the names are found.
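The space-quoting rule for the returned path can be illustrated with a short sketch. The helper name and implementation below are hypothetical, shown only to demonstrate the described behavior of wrapping path parts that contain spaces in double quotes.

```python
import pathlib

def quote_spaces_in_path_sketch(path):
    """Wrap any path part containing a space in double quotes (illustrative sketch)."""
    parts = pathlib.PurePosixPath(path).parts
    quoted = [f'"{part}"' if " " in part else part for part in parts]
    return str(pathlib.PurePosixPath(*quoted))
```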
- waves.scons_extensions.first_target_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = '',
- program_required: str = '',
- program_options: str = '',
- subcommand: str = '',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Template builder factory with WAVES default action behaviors and a task STDOUT file emitter
This builder factory extends waves.scons_extensions.builder_factory() to provide a template action string with placeholder keyword arguments and WAVES builder default behavior. The default behavior will not do anything unless the program or subcommand argument is updated to include an executable program. This builder factory uses the waves.scons_extensions.first_target_emitter(). At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.

Warning

Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter's behavior.

action string construction#${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#${environment} cd ${TARGET.dir.abspath} && ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.action_prefix – This variable is intended to perform directory change operations prior to program execution. By default, SCons performs actions in the parent directory of the SConstruct file. However, many computational science and engineering programs leave output files in the current working directory, so it is convenient and sometimes necessary to change to the target’s parent directory prior to execution.
program – This variable is intended to contain the primary command line executable absolute or relative path
program_required – This variable is intended to contain a space delimited string of required program options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
program_options – This variable is intended to contain a space delimited string of optional program options and arguments that can be freely modified by the user.
subcommand – This variable is intended to contain the program’s subcommand. If the program variable is set to a launch controlling program, e.g.
mpirun
orcharmrun
, then the subcommand may need to contain the full target executable program and any subcommands.subcommand_required – This variable is intended to contain a space delimited string of subcommand required options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – This variable is intended to contain a space delimited string of optional subcommand options and arguments that can be freely modified by the user.
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations. By default, SCons streams all STDOUT and STDERR to the terminal. However, in long or parallel workflows this may clutter the terminal and make it difficult to isolate critical debugging information, so it is convenient to redirect each program’s output to a task specific log file for later inspection and troubleshooting.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
SCons template builder
- waves.scons_extensions.first_target_emitter(
- target: list,
- source: list,
- env: SConsEnvironment,
- suffixes: Iterable[str] | None = None,
- appending_suffixes: Iterable[str] | None = None,
- stdout_extension: str = '.stdout',
SCons emitter function that emits new targets based on the first target
Searches for a file ending in the stdout extension. If none is found, creates a target by appending the stdout extension to the first target in the
target
list. The associated Builder requires at least one target for this reason. The stdout file is always placed at the end of the returned target list.

This is an SCons emitter function and not an emitter factory. The suffix arguments:
suffixes
andappending_suffixes
are only relevant for developers writing new emitters which call this function as a base. The suffixes list emits targets where the suffix replaces the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.suffix
. The appending suffixes list emits targets where the suffix appends the first target’s suffix, e.g. fortarget.ext
emit a new targettarget.ext.appending_suffix
.

The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g.
parameter_set1/target.ext
. When in doubt, provide a STDOUT redirect file with the.stdout
extension as a target, e.g.target.stdout
orparameter_set1/target.stdout
.
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env – The builder’s SCons construction environment object
suffixes – Suffixes which should replace the first target’s extension
appending_suffixes – Suffixes which should append the first target’s extension
stdout_extension – The extension used by the STDOUT/STDERR redirect file
- Returns:
target, source
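The emitter behavior described above can be approximated with a simplified sketch. This is a hypothetical illustration operating on plain strings, assuming the stdout extension replaces the first target's suffix; the real emitter operates on SCons nodes and also supports the suffix arguments.

```python
import pathlib

def first_target_emitter_sketch(target, source, stdout_extension=".stdout"):
    """Ensure a STDOUT redirect file is the last emitted target (illustrative sketch)."""
    stdout_targets = [item for item in target if item.endswith(stdout_extension)]
    if not stdout_targets:
        # Derive the stdout file from the first target, preserving any build
        # subdirectory, e.g. parameter_set1/target.ext -> parameter_set1/target.stdout
        stdout_targets = [str(pathlib.PurePosixPath(target[0]).with_suffix(stdout_extension))]
    other_targets = [item for item in target if not item.endswith(stdout_extension)]
    return other_targets + stdout_targets, source
```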
- waves.scons_extensions.matlab_script(
- program: str = 'matlab',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- environment_suffix: str = '> ${TARGETS[-2].abspath} 2>&1',
Matlab script SCons builder
Warning
Experimental implementation is subject to change
This builder requires that the Matlab script to execute is the first source in the list. The builder returned by this function accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the keyword arguments override the builder returned by this function.
Builder/Task keyword arguments
program: The Matlab command line executable absolute or relative path
matlab_options: The Matlab command line options provided as a string
script_options: The Matlab function interface options in Matlab syntax, provided as a string
action_prefix: Advanced behavior. Most users should accept the defaults
action_suffix: Advanced behavior. Most users should accept the defaults
environment_suffix: Advanced behavior. Most users should accept the defaults
The parent directory absolute path is added to the Matlab path variable prior to execution. All required Matlab files should be co-located in the same source directory.

At least one target must be specified. The first target determines the working directory for the builder’s action, as shown in the action code snippet below. The action changes the working directory to the first target’s parent directory prior to executing the Matlab script.
The Builder emitter will append the builder managed targets automatically. Appends target[0].matlab.env and target[0].stdout to the target list.

The emitter will assume all emitted targets build in the current build directory. If the target(s) must be built in a build subdirectory, e.g. in a parameterized target build, then the first target must be provided with the build subdirectory, e.g. parameter_set1/my_target.ext. When in doubt, provide a STDOUT redirect file as a target, e.g. target.stdout.

Matlab script builder action keywords#
    ${action_prefix} ${program} ${matlab_options} -batch "path(path, '${SOURCE.dir.abspath}'); [fileList, productList] = matlab.codetools.requiredFilesAndProducts('${SOURCE.file}'); disp(cell2table(fileList)); disp(struct2table(productList, 'AsArray', true)); exit;" ${environment_suffix}
    ${action_prefix} ${program} ${matlab_options} -batch "path(path, '${SOURCE.dir.abspath}'); ${SOURCE.filebase}(${script_options})" ${action_suffix}
Matlab script builder action default expansion#
    cd ${TARGET.dir.abspath} && matlab ${matlab_options} -batch "path(path, '${SOURCE.dir.abspath}'); [fileList, productList] = matlab.codetools.requiredFilesAndProducts('${SOURCE.file}'); disp(cell2table(fileList)); disp(struct2table(productList, 'AsArray', true)); exit;" > ${TARGETS[-2].abspath} 2>&1
    cd ${TARGET.dir.abspath} && matlab ${matlab_options} -batch "path(path, '${SOURCE.dir.abspath}'); ${SOURCE.filebase}(${script_options})" > ${TARGETS[-1].abspath} 2>&1
- Parameters:
program – An absolute path or basename string for the Matlab program.
action_prefix – Advanced behavior. Most users should accept the defaults.
action_suffix – Advanced behavior. Most users should accept the defaults.
environment_suffix – Advanced behavior. Most users should accept the defaults.
- Returns:
Matlab script builder
- waves.scons_extensions.parameter_study_sconscript(
- env: SConsEnvironment,
- *args,
- variant_dir=None,
- exports: dict | None = None,
- study=None,
- set_name: str = '',
- subdirectories: bool = False,
- **kwargs,
Wrap the SCons SConscript call to unpack parameter generators
Always overrides the
exports
dictionary withset_name
andparameters
keys. Whenstudy
is a dictionary or parameter generator, theparameters
are overridden. Whenstudy
is a parameter generator, theset_name
is overridden.If the study is a WAVES parameter generator object, call SConscript once per
set_name
andparameters
in the generator’s parameter study dictionary.If the study is a
dict
, call SConscript with the study asparameters
and use theset_name
from the method API.In all other cases, the SConscript call is given the
set_name
from the method API and an emptyparameters
dictionary.
SConstruct#
    import pathlib
    import waves
    env = Environment()
    env.Append(BUILDERS={
        "AbaqusJournal": waves.scons_extensions.abaqus_journal(),
        "AbaqusSolver": waves.scons_extensions.abaqus_solver()
    })
    env.AddMethod(waves.scons_extensions.parameter_study_sconscript, "ParameterStudySConscript")
    parameter_study_file = pathlib.Path("parameter_study.h5")
    parameter_generator = waves.parameter_generators.CartesianProduct(
        {"parameter_one": [1, 2, 3]},
        output_file=parameter_study_file,
        previous_parameter_study=parameter_study_file
    )
    studies = (
        ("SConscript", parameter_generator),
        ("SConscript", {"parameter_one": 1})
    )
    for workflow, study in studies:
        env.ParameterStudySConscript(workflow, variant_dir="build", study=study, subdirectories=True)
SConscript#
    Import("env", "set_name", "parameters")
    env.AbaqusJournal(
        target=["job.inp"],
        source=["journal.py"],
        journal_options="--input=${SOURCE.abspath} --output=${TARGET.abspath} --option ${parameter_one}",
        **parameters
    )
    env.AbaqusSolver(
        target=["job.odb"],
        source=["job.inp"],
        job="job",
        **parameters
    )
- Parameters:
env – SCons construction environment. Do not provide when using this function as a construction environment method, e.g.
env.ParameterStudySConscript
.args – All positional arguments are passed through to the SConscript call directly
variant_dir – The SConscript API variant directory argument
exports – Dictionary of
{key: value}
pairs for theexports
variables. Must use the dictionary style because the calling script’s namespace is not available to the function namespace.study – Parameter generator or dictionary simulation parameters
set_name – Set name to use when not provided a
study
. Overridden by thestudy
set names whenstudy
is a parameter generator.kwargs – All other keyword arguments are passed through to the SConscript call directly
subdirectories – Switch to use parameter generator
study
set names as subdirectories. Ignored whenstudy
is not a parameter generator.
- Returns:
SConscript
Export()
variables. When called with a parameter generator study, the Export() variables are returned as a list with one entry per parameter set.
- Raises:
TypeError – if
exports
is not a dictionary
- waves.scons_extensions.parameter_study_task(
- env: SConsEnvironment,
- builder: Builder,
- *args,
- study=None,
- subdirectories: bool = False,
- **kwargs,
Parameter study pseudo-builder.
SCons Pseudo-Builder aids in task construction for WAVES parameter studies with any SCons builder. Works with WAVES parameter generators or parameter dictionaries to reduce parameter study task definition boilerplate and make nominal workflow definitions directly re-usable in parameter studies.
If the study is a WAVES parameter generator object, loop over the parameter sets and replace
@{set_name}
in any*args
and**kwargs
that are strings, paths, or lists of strings and paths.If the study is a
dict()
, unpack the dictionary as keyword arguments directly into the builder.

In all other cases, the task is passed through unchanged to the builder and the study variable is ignored.
When chaining parameter study tasks, arguments belonging to the parameter study can be prefixed by the template
@{set_name}source.ext
. If the task uses a parameter study, the set name prefix will be replaced to match the task path modifications, e.g.parameter_set0_source.ext
orparameter_set0/source.ext
depending on thesubdirectories
boolean. If the task is not part of a parameter study, the set name will be removed from the source, e.g.source.ext
. The@
symbol is used as the delimiter to reduce clashes with shell variable syntax and SCons substitution syntax.

When pseudo-builders are added to the environment with the SCons AddMethod function they can be accessed with the same syntax as a normal builder. When called from the construction environment, the
env
argument is omitted.

This pseudo-builder is most powerful when used in an SConscript call to a separate workflow configuration file. The SConscript file can then be called with a nominal parameter dictionary or with a parameter generator object. See the example below.
SConstruct#
    import pathlib
    import waves
    env = Environment()
    env.Append(BUILDERS={
        "AbaqusJournal": waves.scons_extensions.abaqus_journal(),
        "AbaqusSolver": waves.scons_extensions.abaqus_solver()
    })
    env.AddMethod(waves.scons_extensions.parameter_study_task, "ParameterStudyTask")
    parameter_study_file = pathlib.Path("parameter_study.h5")
    parameter_generator = waves.parameter_generators.CartesianProduct(
        {"parameter_one": [1, 2, 3]},
        output_file=parameter_study_file,
        previous_parameter_study=parameter_study_file
    )
    studies = (
        ("SConscript", parameter_generator),
        ("SConscript", {"parameter_one": 1})
    )
    for workflow, study in studies:
        SConscript(workflow, exports={"env": env, "study": study})
SConscript#
    Import("env", "study")
    env.ParameterStudyTask(
        env.AbaqusJournal,
        target=["@{set_name}job.inp"],
        source=["journal.py"],
        journal_options="--input=${SOURCE.abspath} --output=${TARGET.abspath} --option ${parameter_one}",
        study=study,
        subdirectories=True,
    )
    env.ParameterStudyTask(
        env.AbaqusSolver,
        target=["@{set_name}job.odb"],
        source=["@{set_name}job.inp"],
        job="job",
        study=study,
        subdirectories=True,
    )
- Parameters:
env – An SCons construction environment to use when defining the targets.
builder – The builder to parameterize
args – All other positional arguments are passed through to the builder after
@{set_name}
string substitutionsstudy – Parameter generator or dictionary parameter set to provide to the builder. Parameter generators are unpacked with set name directory prefixes. Dictionaries are unpacked as keyword arguments.
subdirectories – Switch to use parameter generator
study
set names as subdirectories. Ignored whenstudy
is not a parameter generator.kwargs – all other keyword arguments are passed through to the builder after
@{set_name}
string substitutions
- Returns:
SCons NodeList of target nodes
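The @{set_name} template handling can be illustrated with a small sketch. The helper below is hypothetical and only covers the string case; the pseudo-builder also handles paths and lists of strings and paths.

```python
def substitute_set_name(value, set_name=None, subdirectories=False):
    """Replace the @{set_name} template per the rules described above (sketch)."""
    if set_name is None:
        replacement = ""  # not part of a parameter study: remove the prefix
    elif subdirectories:
        replacement = f"{set_name}/"  # set names become subdirectories
    else:
        replacement = f"{set_name}_"  # set names become file name prefixes
    return value.replace("@{set_name}", replacement)
```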
- waves.scons_extensions.parameter_study_write(
- env: SConsEnvironment,
- parameter_generator,
- **kwargs,
Pseudo-builder to write a parameter generator’s parameter study file
SConstruct#
    import pathlib
    import waves
    env = Environment()
    env.AddMethod(waves.scons_extensions.parameter_study_write, "ParameterStudyWrite")
    parameter_study_file = pathlib.Path("parameter_study.h5")
    parameter_generator = waves.parameter_generators.CartesianProduct(
        {"parameter_one": [1, 2, 3]},
        output_file=parameter_study_file,
        previous_parameter_study=parameter_study_file
    )
    env.ParameterStudyWrite(parameter_generator)
- Parameters:
parameter_generator – WAVES ParameterGenerator class
kwargs – All other keyword arguments are passed directly to the
waves.parameter_generators.ParameterGenerator.write()
method.
- Returns:
SCons NodeList of target nodes
- waves.scons_extensions.print_action_signature_string(s, target, source, env) None [source]
Print the action string used to calculate the action signature
Designed to behave similarly to SCons
--debug=presub
option usingPRINT_CMD_LINE_FUNC
feature: https://scons.org/doc/production/HTML/scons-man.html#cv-PRINT_CMD_LINE_FUNC

SConstruct#
    import waves
    env = Environment(PRINT_CMD_LINE_FUNC=waves.scons_extensions.print_action_signature_string)
    env.Command(
        target=["target.txt"],
        source=["SConstruct"],
        action=["echo 'Hello World!' > ${TARGET.relpath}"]
    )

shell#
    $ scons target.txt
    scons: Reading SConscript files ...
    scons: done reading SConscript files.
    scons: Building targets ...
    Building target.txt with action signature string: echo 'Hello World!' > target.txt_relpath
    echo 'Hello World!' > target.txt
    scons: done building targets.
- waves.scons_extensions.print_build_failures(
- env: ~SCons.Script.SConscript.SConsEnvironment = <SCons.Script.SConscript.SConsEnvironment object>,
- print_stdout: bool = True,
On exit, query the SCons reported build failures and print the associated node’s STDOUT file, if it exists
SConstruct#
    AddOption(
        "--print-build-failures",
        dest="print_build_failures",
        default=False,
        action="store_true",
        help="Print task *.stdout target file(s) on build failures. (default: '%default')"
    )
    env = Environment(
        print_build_failures=GetOption("print_build_failures")
    )
    env.AddMethod(waves.scons_extensions.print_build_failures, "PrintBuildFailures")
    env.PrintBuildFailures(print_stdout=env["print_build_failures"])
- Parameters:
env – SCons construction environment
print_stdout – Boolean to set the exit behavior. If False, don’t modify the exit behavior.
- waves.scons_extensions.project_alias(
- env: SConsEnvironment = None,
- *args,
- description: str = '',
- target_descriptions: dict = {},
- **kwargs,
Wrapper around the SCons Alias method. Appends and returns target descriptions dictionary.
- Parameters:
env – The SCons construction environment object to modify.
args – All other positional arguments are passed to the SCons Alias method.
description – String representing metadata of the alias.
target_descriptions – Mutable dictionary used to keep track of all alias’s metadata. If the function is called with a user-supplied dictionary, the accumulated target descriptions are reset to match the provided dictionary and all previously accumulated descriptions are discarded. If an existing alias is called it will overwrite the previous description.
kwargs – All other keyword arguments are passed to the SCons Alias method.
- Returns:
target descriptions dictionary
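The accumulation behavior described above can be sketched in plain Python. This is an illustrative stand-in for the Alias wrapper, not the WAVES implementation; the function name and signature here mirror the documentation only for readability:

```python
def project_alias(alias=None, description="", target_descriptions={}):
    """Accumulate alias descriptions in a shared dictionary.

    Mimics the documented behavior: repeated calls share the mutable
    default dictionary, a user-supplied dictionary resets the
    accumulated state, and calling with an existing alias overwrites
    the previous description.
    """
    if alias is not None:
        target_descriptions[alias] = description
    return target_descriptions

# Descriptions accumulate across calls sharing the default dictionary
project_alias("build", "Compile all sources")
descriptions = project_alias("docs", "Build the documentation")
# Calling with an existing alias overwrites the previous description
descriptions = project_alias("docs", "Build HTML documentation")
```

The mutable default argument is intentional in this sketch: it is what makes the descriptions accumulate across calls until a caller supplies a fresh dictionary.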
- waves.scons_extensions.project_help(
- env: ~SCons.Script.SConscript.SConsEnvironment = <SCons.Script.SConscript.SConsEnvironment object>,
- append: bool = True,
- local_only: bool = True,
- target_descriptions: dict | None = None,
Add default targets and alias lists to project help message
See the SCons Help documentation for appending behavior. Thin wrapper around
- Parameters:
env – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
local_only – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0
target_descriptions – dictionary containing target metadata.
- waves.scons_extensions.project_help_aliases(
- env: ~SCons.Script.SConscript.SConsEnvironment = <SCons.Script.SConscript.SConsEnvironment object>,
- append: bool = True,
- local_only: bool = True,
- target_descriptions: dict | None = None,
Add the alias list to the project’s help message
See the SCons Help documentation for appending behavior. Adds text to the project help message formatted as
Target Aliases: Alias_1 Alias_2
where the aliases are recovered from
SCons.Node.Alias.default_ans
.- Parameters:
env – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
local_only – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0
target_descriptions – dictionary containing target metadata.
- waves.scons_extensions.project_help_default_targets(
- env: ~SCons.Script.SConscript.SConsEnvironment = <SCons.Script.SConscript.SConsEnvironment object>,
- append: bool = True,
- local_only: bool = True,
- target_descriptions: dict | None = None,
Add a default targets list to the project’s help message
See the SCons Help documentation for appending behavior. Adds text to the project help message formatted as
Default Targets: Default_Target_1 Default_Target_2
where the targets are recovered from
SCons.Script.DEFAULT_TARGETS
.- Parameters:
env – The SCons construction environment object to modify
append – Append to the env.Help message (default). When False, the env.Help message will be overwritten if env.Help has not been previously called.
local_only – Limit help message to the project specific content when True. Only applies to SCons >=4.6.0
target_descriptions – dictionary containing target metadata.
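The help sections assembled by these functions can be sketched in plain Python. This is an illustrative reimplementation of the documented formatting, not the WAVES code:

```python
def format_help_section(title, names):
    """Format one project help section as described above, e.g.

    Default Targets:
        Default_Target_1
        Default_Target_2
    """
    lines = [f"{title}:"] + [f"    {name}" for name in names]
    return "\n".join(lines) + "\n"

# One section per function: default targets and target aliases
message = format_help_section("Default Targets", ["build", "docs"])
message += format_help_section("Target Aliases", ["html", "latexpdf"])
```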
- waves.scons_extensions.python_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'python',
- program_required: str = '',
- program_options: str = '',
- subcommand: str = '${SOURCE.abspath}',
- subcommand_required: str = '',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Python builder factory
This builder factory extends waves.scons_extensions.first_target_builder_factory() and uses the waves.scons_extensions.first_target_emitter() emitter. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.
Warning
Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter's behavior.
With the default options, this builder requires the following source files provided in order:
Python script:
*.py
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && python ${program_required} ${program_options} ${SOURCE.abspath} ${subcommand_required} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
import waves
env = Environment()
env.Append(BUILDERS={"PythonScript": waves.scons_extensions.python_builder_factory()})
env.PythonScript(target=["my_output.stdout"], source=["my_script.py"])
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
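The way the keyword arguments compose into the final action string can be sketched with a plain Python expansion. This is a simplified illustration; the real substitution is performed by SCons, and the helper name here is hypothetical:

```python
def expand_action(**overrides):
    """Join the action string components in factory order, dropping empties.

    Defaults mirror the python_builder_factory documentation above; any
    component may be overridden, just as task keyword arguments override
    builder keyword arguments.
    """
    components = {
        "environment": "",
        "action_prefix": "cd ${TARGET.dir.abspath} &&",
        "program": "python",
        "program_required": "",
        "program_options": "",
        "subcommand": "${SOURCE.abspath}",
        "subcommand_required": "",
        "subcommand_options": "",
        "action_suffix": "> ${TARGETS[-1].abspath} 2>&1",
    }
    components.update(overrides)
    # Insertion order of the dict preserves the documented component order
    return " ".join(value for value in components.values() if value)

default_action = expand_action()
module_action = expand_action(program_options="-m")
```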
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using waves.scons_extensions.ssh_builder_actions() and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The Python interpreter absolute or relative path
program_required – Space delimited string of required Python interpreter options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional Python interpreter options and arguments that can be freely modified by the user
subcommand – The Python script absolute or relative path
subcommand_required – Space delimited string of required Python script options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Python script options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Python builder
- waves.scons_extensions.quinoa_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'charmrun',
- program_required: str = '',
- program_options: str = '+p1',
- subcommand: str = 'inciter',
- subcommand_required: str = '--control ${SOURCES[0].abspath} --input ${SOURCES[1].abspath}',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Quinoa builder factory
This builder factory extends waves.scons_extensions.first_target_builder_factory() and uses the waves.scons_extensions.first_target_emitter() emitter. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.
Warning
Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter's behavior.
With the default options, this builder requires the following source files provided in order:
Quinoa control file:
*.q
Exodus mesh file:
*.exo
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && charmrun ${program_required} +p1 inciter --control ${SOURCES[0].abspath} --input ${SOURCES[1].abspath} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#
import waves
env = waves.scons_extensions.shell_environment("module load quinoa")
env.Append(BUILDERS={
    "QuinoaSolver": waves.scons_extensions.quinoa_builder_factory(),
})
# Serial execution with "+p1"
env.QuinoaSolver(target=["flow.stdout"], source=["flow.lua", "box.exo"])
# Parallel execution with "+p4"
env.QuinoaSolver(target=["flow.stdout"], source=["flow.lua", "box.exo"], program_options="+p4")
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using waves.scons_extensions.ssh_builder_actions() and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The charmrun absolute or relative path
program_required – Space delimited string of required charmrun options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional charmrun options and arguments that can be freely modified by the user
subcommand – The inciter (quinoa executable) absolute or relative path
subcommand_required – Space delimited string of required inciter (quinoa executable) options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional inciter (quinoa executable) options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Quinoa builder
- waves.scons_extensions.sbatch(
- program: str = 'sbatch',
- required: str = '--wait --output=${TARGETS[-1].abspath}',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
-
The builder returned by this function accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
Builder/Task keyword arguments
program: The sbatch command line executable absolute or relative path
required: A space delimited string of sbatch required arguments
slurm_job: The command to submit with sbatch
sbatch_options: Optional sbatch options
action_prefix: Advanced behavior. Most users should accept the defaults
The builder does not use a SLURM batch script. Instead, it requires the slurm_job variable to be defined with the command string to execute.
At least one target must be specified. The first target determines the working directory for the builder's action, as shown in the action code snippet below. The action changes the working directory to the first target's parent directory prior to executing the sbatch command.
The Builder emitter will append the builder managed targets automatically. Appends target[0].stdout to the target list.
SLURM sbatch builder action keywords#
${action_prefix} ${program} ${required} ${sbatch_options} --wrap "${slurm_job}"
SLURM sbatch builder action default expansion#
cd ${TARGET.dir.abspath} && sbatch --wait --output=${TARGETS[-1].abspath} ${sbatch_options} --wrap "${slurm_job}"
SConstruct#
import waves
env = Environment()
env.Append(BUILDERS={"SlurmSbatch": waves.scons_extensions.sbatch()})
env.SlurmSbatch(target=["my_output.stdout"], source=["my_source.input"], slurm_job="cat $SOURCE > $TARGET")
- Parameters:
program – An absolute path or basename string for the sbatch program.
required – A space delimited string of sbatch required arguments
action_prefix – Advanced behavior. Most users should accept the defaults.
- Returns:
SLURM sbatch builder
- waves.scons_extensions.sbatch_abaqus_journal(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.abaqus_journal()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
Sbatch Abaqus journal builder action keywords#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${action_prefix} ${program} -information environment ${environment_suffix} && ${action_prefix} ${program} ${required} ${abaqus_options} -- ${journal_options} ${action_suffix}"
Sbatch Abaqus journal builder action default expansion#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && abaqus cae -noGui ${SOURCE.abspath} ${abaqus_options} -- ${journal_options} > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.sbatch_abaqus_journal_builder_factory(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.abaqus_journal_builder_factory()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
action string construction#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}"
- waves.scons_extensions.sbatch_abaqus_solver(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.abaqus_solver()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
Sbatch Abaqus solver builder action keywords#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${action_prefix} ${program} -job ${job_name} -input ${SOURCE.filebase} ${abaqus_options} ${required} ${action_suffix}"
Sbatch Abaqus solver builder action default expansion#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "cd ${TARGET.dir.abspath} && ${program} -job ${job_name} -input ${SOURCE.filebase} ${abaqus_options} -interactive -ask_delete no > ${TARGETS[-1].abspath} 2>&1"
- waves.scons_extensions.sbatch_abaqus_solver_builder_factory(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.abaqus_solver_builder_factory()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
action string construction#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}"
- waves.scons_extensions.sbatch_python_builder_factory(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.python_builder_factory()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
action string construction#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}"
- waves.scons_extensions.sbatch_quinoa_builder_factory(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.quinoa_builder_factory()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
action string construction#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}"
- waves.scons_extensions.sbatch_sierra_builder_factory(*args, **kwargs)[source]
Thin pass through wrapper of
waves.scons_extensions.sierra_builder_factory()
Catenate the actions and submit with SLURM sbatch. Accepts the sbatch_options builder keyword argument to modify sbatch behavior.
action string construction#
sbatch --wait --output=${TARGET.base}.slurm.out ${sbatch_options} --wrap "${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}"
- waves.scons_extensions.shell_environment(
- command: str,
- shell: str = 'bash',
- cache: str | None = None,
- overwrite_cache: bool = False,
Return an SCons shell environment from a cached file or by running a shell command
If the environment is created successfully and a cache file is requested, the cache file is _always_ written. The overwrite_cache behavior forces the shell command execution, even when the cache file is present. If the command fails (raising a subprocess.CalledProcessError) the captured output is printed to STDERR before re-raising the exception.
Warning
Currently assumes a *nix flavored shell: sh, bash, zsh, csh, tcsh. May work with any shell supporting command construction as below.
{shell} -c "{command} && env -0"
The method may fail if the command produces stdout that does not terminate in a newline. Redirect command output away from stdout if this causes problems, e.g.
command = 'command > /dev/null && command two > /dev/null'
in most shells.
SConstruct#
import waves
env = waves.scons_extensions.shell_environment("source my_script.sh")
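The `env -0` construction above emits null-delimited `KEY=VALUE` records, which can be parsed into a dictionary as sketched below. This is an illustrative stand-in for the output handling, not the WAVES implementation:

```python
def parse_env_null(output: str) -> dict:
    """Parse null-delimited ``KEY=VALUE`` records produced by ``env -0``.

    Null delimiters are used so that environment values containing
    newlines are parsed correctly, which a line-based split would break.
    """
    environment = {}
    for record in output.split("\0"):
        if not record:
            continue  # skip the trailing empty record after the final NUL
        key, _, value = record.partition("=")
        environment[key] = value
    return environment

# A value containing a newline survives the round trip
example = "PATH=/usr/bin:/bin\0MULTILINE=first\nsecond\0"
parsed = parse_env_null(example)
```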
- Parameters:
command – the shell command to execute
shell – the shell to use when executing command by absolute or relative path
cache – absolute or relative path to read/write a shell environment dictionary. Will be written as YAML formatted file regardless of extension.
overwrite_cache – Ignore previously cached files if they exist.
- Returns:
SCons shell environment
- Raises:
subprocess.CalledProcessError – Print the captured output and re-raise exception when the shell command returns a non-zero exit status.
- waves.scons_extensions.sierra_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.abspath} &&',
- program: str = 'sierra',
- program_required: str = '',
- program_options: str = '',
- subcommand: str = 'adagio',
- subcommand_required: str = '-i ${SOURCE.abspath}',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Sierra builder factory
This builder factory extends waves.scons_extensions.first_target_builder_factory() and uses the waves.scons_extensions.first_target_emitter() emitter. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.
Warning
Users overriding the emitter keyword argument are responsible for providing an emitter with equivalent STDOUT file handling behavior as waves.scons_extensions.first_target_emitter() or updating the action_suffix option to match their emitter's behavior.
With the default options, this builder requires the following source files provided in order:
Sierra input file:
*.i
action string construction#
${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#
${environment} cd ${TARGET.dir.abspath} && sierra ${program_required} ${program_options} adagio -i ${SOURCE.abspath} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that can not execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using waves.scons_extensions.ssh_builder_actions() and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The Sierra absolute or relative path
program_required – Space delimited string of required Sierra options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional Sierra options and arguments that can be freely modified by the user
subcommand – The Sierra application absolute or relative path
subcommand_required – Space delimited string of required Sierra application options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Sierra application options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Sierra builder
- waves.scons_extensions.sphinx_build(
- program: str = 'sphinx-build',
- options: str = '',
- builder: str = 'html',
- tags: str = '',
Sphinx builder using the
-b
specifierThis builder does not have an emitter. It requires at least one target.
action#
${program} ${options} -b ${builder} ${TARGET.dir.dir.abspath} ${TARGET.dir.abspath} ${tags}
SConstruct#
import waves
env = Environment()
env.Append(BUILDERS={
    "SphinxBuild": waves.scons_extensions.sphinx_build(options="-W"),
})
sources = ["conf.py", "index.rst"]
targets = ["html/index.html"]
html = env.SphinxBuild(
    target=targets,
    source=sources,
)
env.Clean(html, [Dir("html")] + sources)
env.Alias("html", html)
- Parameters:
program – sphinx-build executable
options – sphinx-build options
builder – builder name. See the Sphinx documentation for options
tags – sphinx-build tags
- Returns:
Sphinx builder
- waves.scons_extensions.sphinx_latexpdf(
- program: str = 'sphinx-build',
- options: str = '',
- builder: str = 'latexpdf',
- tags: str = '',
Sphinx builder using the -M specifier. Intended for latexpdf builds.
This builder does not have an emitter. It requires at least one target.
action#
${program} -M ${builder} ${TARGET.dir.dir.abspath} ${TARGET.dir.dir.abspath} ${tags} ${options}
SConstruct#
import waves
env = Environment()
env.Append(BUILDERS={
    "SphinxPDF": waves.scons_extensions.sphinx_latexpdf(options="-W"),
})
sources = ["conf.py", "index.rst"]
targets = ["latex/project.pdf"]
latexpdf = env.SphinxPDF(
    target=targets,
    source=sources,
)
env.Clean(latexpdf, [Dir("latex")] + sources)
env.Alias("latexpdf", latexpdf)
- Parameters:
program (str) – sphinx-build executable
options (str) – sphinx-build options
builder (str) – builder name. See the Sphinx documentation for options
tags (str) – sphinx-build tags
- Returns:
Sphinx latexpdf builder
- waves.scons_extensions.sphinx_scanner() Scanner [source]
SCons scanner that searches for directives
.. include::
.. literalinclude::
.. image::
.. figure::
.. bibliography::
inside .rst and .txt files
- Returns:
Sphinx source file dependency Scanner
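The directive search can be sketched with a plain regular expression. This illustrates the scanning idea only; it is not the WAVES scanner implementation, and the names below are hypothetical:

```python
import re

# Match the argument of the directives listed above
DIRECTIVE_PATTERN = re.compile(
    r"^\.\. (?:include|literalinclude|image|figure|bibliography)::\s*(\S+)",
    re.MULTILINE,
)

def scan_for_dependencies(text: str) -> list:
    """Return the file paths referenced by supported directives."""
    return DIRECTIVE_PATTERN.findall(text)

document = """
.. include:: common.txt
.. figure:: images/diagram.png
"""
dependencies = scan_for_dependencies(document)
```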
- waves.scons_extensions.ssh_builder_actions(
- builder: Builder,
- remote_server: str = '',
- remote_directory: str = '',
- rsync_push_options: str = '-rlptv',
- rsync_pull_options: str = '-rlptv',
- ssh_options: str = '',
Wrap and modify a builder’s action list with remote copy operations and SSH commands
Warning
This builder does not provide asynchronous server-client behavior. The local/client machine must maintain the SSH connection continuously throughout the duration of the task. If the SSH connection is interrupted, the task will fail. This makes SSH wrapped builders fragile with respect to network connectivity. Users are strongly encouraged to seek solutions that allow full software installation and full workflow execution on the target compute server. If mixed server execution is required, a build directory on a shared network drive and interrupted workflow execution should be preferred over SSH wrapped builders.
If the remote server and remote directory strings are not specified at builder instantiation, then the task definitions must specify these keyword arguments. If a portion of the remote server and/or remote directory is known to be constant across all possible tasks, users may define their own substitution keyword arguments. For example, the following remote directory uses common leading path elements and introduces a new keyword variable task_directory to allow per-task changes to the remote directory: remote_directory="/path/to/base/build/${task_directory}".
.Warning
The waves.scons_extensions.ssh_builder_actions() function is a work-in-progress solution with some assumptions specific to the action construction used by WAVES. It _should_ work for most basic builders, but adapting this function to users' custom builders will probably require some advanced SCons knowledge and inspection of the waves.scons_extensions.ssh_builder_actions() implementation.
implementation.Builder/Task keyword arguments
remote_server: remote server where the original builder's actions should be executed
remote_directory: absolute or relative path where the original builder's actions should be executed
rsync_push_options: rsync options when pushing sources to the remote server
rsync_pull_options: rsync options when pulling remote directory from the remote server
ssh_options: SSH options when running the original builder's actions on the remote server
Design assumptions
Creates the remote_directory with mkdir -p. mkdir must exist on the remote_server.
Copies all source files to a flat remote_directory with rsync. rsync must exist on the local system.
Replaces instances of cd ${TARGET.dir.abspath} && with cd ${remote_directory} && in the original builder actions and keyword arguments.
Replaces instances of SOURCE.abspath or SOURCES.abspath with SOURCE[S].file in the original builder actions and keyword arguments.
Replaces instances of SOURCES[0-9]/TARGETS[0-9].abspath with SOURCES[0-9]/TARGETS[0-9].file in the original builder actions and keyword arguments.
Prefixes all original builder actions with cd ${remote_directory} &&.
All original builder actions are wrapped in single quotes as '{original action}' to preserve the && as part of the remote_server command. Shell variables, e.g. $USER, will not be expanded on the remote_server. If quotes are included in the original builder actions, they should be double quotes.
Returns the entire remote_directory to the original builder ${TARGET.dir.abspath} with rsync. rsync must exist on the local system.
SConstruct#
import getpass
import waves
user = getpass.getuser()
env = Environment()
env.Append(BUILDERS={
    "SSHAbaqusSolver": waves.scons_extensions.ssh_builder_actions(
        waves.scons_extensions.abaqus_solver(
            program="/remote/server/installation/path/of/abaqus"
        ),
        remote_server="myserver.mydomain.com",
        remote_directory="/scratch/${user}/myproject/myworkflow/${task_directory}"
    )
})
env.SSHAbaqusSolver(
    target=["myjob.sta"],
    source=["input.inp"],
    job_name="myjob",
    abaqus_options="-cpus 4",
    task_directory="myjob",
    user=user
)
my_package.py#
import SCons.Builder
import waves

def print_builder_actions(builder):
    for action in builder.action.list:
        print(action.cmd_list)

def cat():
    builder = SCons.Builder.Builder(
        action=[
            "cat ${SOURCES.abspath} | tee ${TARGETS[0].abspath}",
            "echo \"Hello World!\""
        ]
    )
    return builder

build_cat = cat()
ssh_build_cat = waves.scons_extensions.ssh_builder_actions(
    cat(),
    remote_server="myserver.mydomain.com",
    remote_directory="/scratch/roppenheimer/ssh_wrapper"
)
>>> import my_package
>>> my_package.print_builder_actions(my_package.build_cat)
cat ${SOURCES.abspath} | tee ${TARGETS[0].abspath}
echo "Hello World!"
>>> my_package.print_builder_actions(my_package.ssh_build_cat)
ssh ${ssh_options} ${remote_server} "mkdir -p /scratch/roppenheimer/ssh_wrapper"
rsync ${rsync_push_options} ${SOURCES.abspath} ${remote_server}:${remote_directory}
ssh ${ssh_options} ${remote_server} 'cd ${remote_directory} && cat ${SOURCES.file} | tee ${TARGETS[0].file}'
ssh ${ssh_options} ${remote_server} 'cd ${remote_directory} && echo "Hello World!"'
rsync ${rsync_pull_options} ${remote_server}:${remote_directory} ${TARGET.dir.abspath}
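The per-action rewriting rules in the design assumptions can be sketched with plain string substitutions. This is a simplified illustration of the documented behavior, not the WAVES implementation, and the helper name is hypothetical:

```python
import re

def ssh_wrap_action(action: str) -> str:
    """Rewrite one builder action for remote execution over SSH.

    Applies the documented substitutions: the working directory becomes
    the remote directory, absolute source/target paths become bare file
    names, and the whole action is single-quoted so the ``&&`` stays
    part of the remote command.
    """
    action = action.replace("cd ${TARGET.dir.abspath} &&", "cd ${remote_directory} &&")
    # ${SOURCE.abspath}, ${SOURCES.abspath}, ${SOURCES[0].abspath}, ${TARGETS[0].abspath}, ...
    action = re.sub(r"\$\{(SOURCES?|TARGETS?)(\[\d+\])?\.abspath\}", r"${\1\2.file}", action)
    if not action.startswith("cd ${remote_directory} &&"):
        action = "cd ${remote_directory} && " + action
    return f"ssh ${{ssh_options}} ${{remote_server}} '{action}'"

wrapped = ssh_wrap_action("cat ${SOURCES.abspath} | tee ${TARGETS[0].abspath}")
```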
- Parameters:
builder – The SCons builder to modify
remote_server – remote server where the original builder’s actions should be executed
remote_directory – absolute or relative path where the original builder’s actions should be executed.
rsync_push_options – rsync options when pushing sources to the remote server
rsync_pull_options – rsync options when pulling remote directory from the remote server
ssh_options – SSH options when running the original builder’s actions on the remote server
- Returns:
modified builder
- waves.scons_extensions.substitution_syntax(
- env: SConsEnvironment,
- substitution_dictionary: dict,
- prefix: str = '@',
- suffix: str = '@',
Return a dictionary copy with the pre/suffix added to the key strings
Assumes a flat dictionary with keys of type str. Keys that aren’t strings will be converted to their string representation. Nested dictionaries can be supplied, but only the first layer keys will be modified. Dictionary values are unchanged.
SConstruct#
import waves
env = Environment()
env.AddMethod(waves.scons_extensions.substitution_syntax, "SubstitutionSyntax")
original_dictionary = {"key": "value"}
substitution_dictionary = env.SubstitutionSyntax(original_dictionary)
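The key transformation can be sketched in plain Python. The real function is a construction environment method, so only the dictionary handling is illustrated here; this is not the WAVES implementation:

```python
def substitution_syntax(substitution_dictionary, prefix="@", suffix="@"):
    """Return a copy with the pre/suffix added to the key strings.

    Non-string keys are converted to their string representation; only
    the first layer of keys is modified, and values are unchanged.
    """
    return {
        f"{prefix}{key}{suffix}": value
        for key, value in substitution_dictionary.items()
    }

# Non-string keys become strings; values pass through untouched
marked = substitution_syntax({"width": 1.0, 2: "two"})
```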
- Parameters:
substitution_dictionary (dict) – Original dictionary to copy
prefix (string) – String to prepend to all dictionary keys
suffix (string) – String to append to all dictionary keys
- Returns:
Copy of the dictionary with key strings modified by the pre/suffix
- waves.scons_extensions.truchas_builder_factory(
- environment: str = '',
- action_prefix: str = 'cd ${TARGET.dir.dir.abspath} &&',
- program: str = 'mpirun',
- program_required: str = '',
- program_options: str = '-np 1',
- subcommand: str = 'truchas',
- subcommand_required: str = '-f -o:${TARGET.dir.filebase} ${SOURCE.abspath}',
- subcommand_options: str = '',
- action_suffix: str = '> ${TARGETS[-1].abspath} 2>&1',
- emitter=<function first_target_emitter>,
- **kwargs,
Truchas builder factory.
Warning
This is an experimental builder. It is subject to change without warning.
Warning
This builder is not included in the regression test suite yet. Contact the development team if you encounter problems or have recommendations for improved design behavior.
This builder factory extends
waves.scons_extensions.first_target_builder_factory()
. This builder factory uses the
waves.scons_extensions.first_target_emitter()
. At least one task target must be specified in the task definition and the last target will always be the expected STDOUT and STDERR redirection output file, TARGETS[-1], ending in *.stdout.
Warning
Note that this builder’s action prefix is different from other builders. Truchas output control produces a build subdirectory, so the action prefix moves up two directories above the expected output instead of one. All Truchas output targets must include the requested output directory and the output directory name must match the target file basename, e.g.
target/target.log
and parameter_set1/target/target.log.
With the default options this builder requires the following source files provided in the order:
Truchas input file:
*.inp
With the default options this builder requires the following target files provided in the order:
Truchas output log with desired output directory:
target/target.log
action string construction#${environment} ${action_prefix} ${program} ${program_required} ${program_options} ${subcommand} ${subcommand_required} ${subcommand_options} ${action_suffix}
action string default expansion#${environment} cd ${TARGET.dir.dir.abspath} && mpirun ${program_required} -np 1 truchas -f -o:${TARGET.dir.filebase} ${SOURCE.abspath} ${subcommand_options} > ${TARGETS[-1].abspath} 2>&1
SConstruct#import waves env = Environment() env.AddMethod(waves.scons_extensions.add_program, "AddProgram") env["TRUCHAS_PROGRAM"] = env.AddProgram(["truchas"]) env.Append(BUILDERS={ "Truchas": waves.scons_extensions.truchas_builder_factory( subcommand=env["TRUCHAS_PROGRAM"] ) }) env.Truchas( target=[ "target/target.log", "target/target.h5", ], source=["source.inp"], )
The builder returned by this factory accepts all SCons Builder arguments. The arguments of this function are also available as keyword arguments of the builder. When provided during task definition, the task keyword arguments override the builder keyword arguments.
- Parameters:
environment – This variable is intended primarily for use with builders and tasks that cannot execute from an SCons construction environment. For instance, when tasks execute on a remote server with SSH wrapped actions using
waves.scons_extensions.ssh_builder_actions()
and therefore must initialize the remote environment as part of the builder action.
action_prefix – This variable is intended to perform directory change operations prior to program execution
program – The mpirun absolute or relative path
program_required – Space delimited string of required mpirun options and arguments that are crucial to builder behavior and should not be modified except by advanced users
program_options – Space delimited string of optional mpirun options and arguments that can be freely modified by the user
subcommand – The Truchas absolute or relative path
subcommand_required – Space delimited string of required Truchas options and arguments that are crucial to builder behavior and should not be modified except by advanced users.
subcommand_options – Space delimited string of optional Truchas options and arguments that can be freely modified by the user
action_suffix – This variable is intended to perform program STDOUT and STDERR redirection operations.
emitter – An SCons emitter function. This is not a keyword argument in the action string.
kwargs – Any additional keyword arguments are passed directly to the SCons builder object.
- Returns:
Truchas builder
Parameter Generators#
External API module
Will raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack traces and exceptions into STDERR messages and non-zero exit codes.
- class waves.parameter_generators._ScipyGenerator(
- parameter_schema: dict,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'h5',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- write_meta: bool = False,
- **kwargs,
Bases:
ParameterGenerator
,ABC
- _create_parameter_names() None [source]
Construct the parameter names from a distribution parameter schema
- _generate(**kwargs) None [source]
Generate the parameter study definition
All implemented class methods should accept kwargs as
_generate(self, **kwargs)
. The ABC class accepts, but does not use, any kwargs.
Must set the class attributes:
self._samples
: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters). If it’s possible that the samples may be of mixed type,
numpy.array(..., dtype=object)
should be used to preserve the original Python types.
self._set_hashes
: list of parameter set content hashes created by calling self._create_set_hashes after populating the self._samples parameter study values.
self._set_names
: Dictionary mapping parameter set hash to parameter set name strings created by calling self._create_set_names after populating self._set_hashes.
self.parameter_study
: The Xarray Dataset parameter study object, created by calling self._create_parameter_study() after defining self._samples.
Minimum necessary work example:
# Work unique to the parameter generator schema and set generation set_count = 5 # Normally set according to the parameter schema parameter_count = len(self._parameter_names) self._samples = numpy.zeros((set_count, parameter_count)) # Work performed by common ABC methods super()._generate()
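The dtype=object requirement for mixed-type samples noted above can be demonstrated directly with numpy (assuming numpy is available in the environment):

```python
import numpy

# Without dtype=object, numpy casts the mixed row to a common type (here, strings)
cast = numpy.array([[1.0, "a"], [2.0, "b"]])
print(type(cast[0, 0]))  # a numpy string type: the float 1.0 became the string '1.0'

# With dtype=object, the original Python types are preserved
preserved = numpy.array([[1.0, "a"], [2.0, "b"]], dtype=object)
print(type(preserved[0, 0]))  # float
```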
- _generate_distribution_samples(sampler, set_count, parameter_count) None [source]
Create parameter distribution samples
Requires attributes:
self.parameter_distributions
: dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema. Set by waves.parameter_generators._ScipyGenerator._generate_parameter_distributions()
.
Sets attribute(s):
self._samples
: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters).
- _generate_parameter_distributions() dict [source]
Return dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema.
- Returns:
parameter_distributions
- _validate() None [source]
Validate the parameter distribution schema. Executed by class instantiation.
parameter_schema = { 'num_simulations': 4, # Required key. Value must be an integer. 'parameter_1': { 'distribution': 'norm', # Required key. Value must be a valid scipy.stats 'loc': 50, # distribution name. 'scale': 1 }, 'parameter_2': { 'distribution': 'skewnorm', 'a': 4, 'loc': 30, 'scale': 2 } }
- Raises:
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter schema num_simulations key is not an integer
Parameter definition distribution value is not a valid Python identifier
Parameter definition key(s) is not a valid Python identifier
Parameter schema does not have a num_simulations key
Parameter definition does not contain a distribution key
- class waves.parameter_generators.CartesianProduct(
- parameter_schema: dict,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'h5',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- write_meta: bool = False,
- **kwargs,
Bases:
ParameterGenerator
Builds a cartesian product parameter study
Parameters must be scalar valued integers, floats, strings, or booleans
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary -
{parameter_name: schema value}
CartesianProduct expects “schema value” to be an iterable. For example, when read from a YAML file “schema value” will be a Python list. Each parameter’s values must have a consistent data type, but data type may vary between parameters.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
require_previous_parameter_study – Raise a
RuntimeError
if the previous parameter study file is missing.overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and
require_previous_parameter_study
isTrue
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter key is not a supported iterable: set, tuple, list
Example
>>> import waves >>> parameter_schema = { ... 'parameter_1': [1, 2], ... 'parameter_2': ['a', 'b'] ... } >>> parameter_generator = waves.parameter_generators.CartesianProduct(parameter_schema) >>> print(parameter_generator.parameter_study) <xarray.Dataset> Dimensions: (set_hash: 4) Coordinates: set_hash (set_hash) <U32 'de3cb3eaecb767ff63973820b2... * set_name (set_hash) <U14 'parameter_set0' ... 'param... Data variables: parameter_1 (set_hash) object 1 1 2 2 parameter_2 (set_hash) object 'a' 'b' 'a' 'b'
- _conditionally_write_dataset(
- existing_parameter_study: ~pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
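The conditional-write contract described above (write only when contents changed or overwrite is requested) can be sketched as follows. This is an illustration of the documented behavior, not the WAVES implementation, and YAML serialization is simplified to plain text:

```python
from pathlib import Path


def conditionally_write(output_file: Path, new_content: str, overwrite: bool = False) -> bool:
    """Write new_content only when the file is missing, its contents changed,
    or overwrite is requested. Return True when a write occurred."""
    if not overwrite and output_file.exists() and output_file.read_text() == new_content:
        return False  # unchanged: leave the file (and its timestamp) alone
    output_file.write_text(new_content)
    return True
```

Skipping unchanged files preserves file timestamps, which keeps content-tracking build systems from rebuilding downstream targets unnecessarily.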
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from
self._samples.
Creates an md5 hash from the concatenated string representation of parameter
name:value
associations.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names
: parameter names used as columns of parameter study
creates attribute:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
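An illustrative sketch of a content hash built from concatenated name:value string representations. The exact string construction inside WAVES may differ; this only demonstrates the unique-and-repeatable property the docstring describes:

```python
import hashlib


def set_content_hash(parameter_names, sample_row) -> str:
    # Concatenate "name:value" pairs and digest them: equal parameter set
    # contents always produce equal hashes, so set identity is repeatable.
    content = ", ".join(f"{name}:{value!r}" for name, value in zip(parameter_names, sample_row))
    return hashlib.md5(content.encode()).hexdigest()


names = ["parameter_1", "parameter_2"]
print(set_content_hash(names, [1, "a"]) == set_content_hash(names, [1, "a"]))  # True
print(set_content_hash(names, [1, "a"]) == set_content_hash(names, [2, "a"]))  # False
```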
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in
self._samples
Creates the class attribute self._set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names
: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the Cartesian Product parameter sets.
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study’s set-name-to-set-contents associations by dropping the current study’s set names during the merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the
self.previous_parameter_study
attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None [source]
Validate the Cartesian Product parameter schema. Executed by class instantiation.
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study metadata file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items(): ... print(f"{set_name}: {parameters}") ... parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'} parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'} parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'} parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(
- output_file_type: Literal['h5', 'yaml'] | None = None,
- dry_run: bool | None = False,
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default
output_file_template
or output_file
specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1 parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- class waves.parameter_generators.CustomStudy(
- parameter_schema: dict,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'h5',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- write_meta: bool = False,
- **kwargs,
Bases:
ParameterGenerator
Builds a custom parameter study from user-specified values
Parameters must be scalar valued integers, floats, strings, or booleans
- Parameters:
parameter_schema – Dictionary with two keys:
parameter_samples
and parameter_names
. Parameter samples in the form of a 2D array with shape M x N, where M is the number of parameter sets and N is the number of parameters. Parameter names in the form of a 1D array with length N. When creating a parameter_samples array with mixed types (e.g. strings and floats) use dtype=object to preserve the mixed types and avoid casting all values to a common type (e.g. all your floats will become strings). Each parameter’s values must have a consistent data type, but data type may vary between parameters.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
require_previous_parameter_study – Raise a
RuntimeError
if the previous parameter study file is missing.overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and
require_previous_parameter_study
isTrue
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter schema does not contain the parameter_names key
Parameter schema does not contain the parameter_samples key
The parameter_samples value is an improperly shaped array
Example
>>> import waves >>> import numpy >>> parameter_schema = dict( ... parameter_samples = numpy.array([[1.0, 'a', 5], [2.0, 'b', 6]], dtype=object), ... parameter_names = numpy.array(['height', 'prefix', 'index'])) >>> parameter_generator = waves.parameter_generators.CustomStudy(parameter_schema) >>> print(parameter_generator.parameter_study) <xarray.Dataset> Dimensions: (set_hash: 2) Coordinates: set_hash (set_hash) <U32 '50ba1a2716e42f8c4fcc34a90a... * set_name (set_hash) <U14 'parameter_set0' 'parameter... Data variables: height (set_hash) object 1.0 2.0 prefix (set_hash) object 'a' 'b' index (set_hash) object 5 6
- _conditionally_write_dataset(
- existing_parameter_study: ~pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from
self._samples.
Creates an md5 hash from the concatenated string representation of parameter
name:value
associations.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names
: parameter names used as columns of parameter study
creates attribute:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in
self._samples
Creates the class attribute self._set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names
: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the parameter study dataset from the user provided parameter array.
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study’s set-name-to-set-contents associations by dropping the current study’s set names during the merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the
self.previous_parameter_study
attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None [source]
Validate the Custom Study parameter samples and names. Executed by class instantiation.
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study metadata file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items(): ... print(f"{set_name}: {parameters}") ... parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'} parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'} parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'} parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(output_file_type: Literal['h5', 'yaml'] | None = None, dry_run: bool | None = False) None
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default
output_file_template
or output_file
specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type.
parameter_1: 1 parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- waves.parameter_generators.HASH_COORDINATE_KEY: Final[str] = 'set_hash'
The set hash coordinate used in WAVES parameter study Xarray Datasets
- class waves.parameter_generators.LatinHypercube(*args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a Latin-Hypercube parameter study from the scipy Latin Hypercube class
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameter schema and the parameter study samples.
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary -
{parameter_name: schema value}
LatinHypercube expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
require_previous_parameter_study – Raise a
RuntimeError
if the previous parameter study file is missing.overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
- Variables:
self.parameter_distributions – A dictionary mapping parameter names to the scipy.stats distribution
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and
require_previous_parameter_study
isTrue
To produce consistent Latin Hypercubes on repeat instantiations, the **kwargs must include {'seed': <int>}. See the scipy scipy.stats.qmc.LatinHypercube class documentation for details. The d keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves >>> parameter_schema = { ... 'num_simulations': 4, # Required key. Value must be an integer. ... 'parameter_1': { ... 'distribution': 'norm', # Required key. Value must be a valid scipy.stats ... 'loc': 50, # distribution name. ... 'scale': 1 ... }, ... 'parameter_2': { ... 'distribution': 'skewnorm', ... 'a': 4, ... 'loc': 30, ... 'scale': 2 ... } ... } >>> parameter_generator = waves.parameter_generators.LatinHypercube(parameter_schema) >>> print(parameter_generator.parameter_study) <xarray.Dataset> Dimensions: (set_hash: 4) Coordinates: set_hash (set_hash) <U32 '1e8219dae27faa5388328e225a... * set_name (set_hash) <U14 'parameter_set0' ... 'param... Data variables: parameter_1 (set_hash) float64 0.125 ... 51.15 parameter_2 (set_hash) float64 0.625 ... 30.97
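The seed requirement for reproducibility can be checked directly against the underlying scipy sampler (assuming scipy >= 1.7 is available; d is passed explicitly here, though WAVES manages it internally to match the schema):

```python
from scipy.stats import qmc

# Same seed, same dimension: identical Latin Hypercube samples on repeat instantiation
sampler_a = qmc.LatinHypercube(d=2, seed=42)
sampler_b = qmc.LatinHypercube(d=2, seed=42)
print((sampler_a.random(n=4) == sampler_b.random(n=4)).all())  # True
```

Omitting the seed produces a different hypercube on every instantiation, which would defeat content-hash based set name reuse across parameter study merges.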
- _conditionally_write_dataset(
- existing_parameter_study: ~pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_names() None
Construct the parameter names from a distribution parameter schema
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from self._samples.
Creates an md5 hash from the concatenated string representation of parameter name:value associations.
requires:
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names: parameter names used as columns of parameter study
creates attribute:
self._set_hashes: parameter set content hashes identifying rows of parameter study
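The content-hash scheme described above can be sketched with the standard library. This is an illustrative sketch under stated assumptions, not the library's exact implementation; the function name and string formatting here are hypothetical.

```python
import hashlib


def set_content_hash(parameter_names, sample_row):
    # Concatenate repeatable "name:value" associations and hash them, so that
    # identical parameter set contents always produce the same identifier.
    content = ", ".join(
        f"{name}:{repr(value)}"
        for name, value in sorted(zip(parameter_names, sample_row))
    )
    return hashlib.md5(content.encode()).hexdigest()


# Identical parameter sets hash identically; any content change alters the hash
hash_a = set_content_hash(["height", "width"], [1.0, 2.0])
hash_b = set_content_hash(["height", "width"], [1.0, 2.0])
hash_c = set_content_hash(["height", "width"], [1.0, 2.5])
```

Because the hash depends only on set contents, merged studies can recognize previously generated sets regardless of their set names.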
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in self._samples.
Creates the class attribute self._set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._set_hashes: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the Latin Hypercube parameter sets
- _generate_distribution_samples(sampler, set_count, parameter_count) None
Create parameter distribution samples
Requires attributes:
self.parameter_distributions: dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema. Set by waves.parameter_generators._ScipyGenerator._generate_parameter_distributions().
Sets attribute(s):
self._samples: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters).
- _generate_parameter_distributions() dict
Return dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema.
- Returns:
parameter_distributions
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the self.previous_parameter_study attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None
Validate the parameter distribution schema. Executed by class instantiation.
parameter_schema = {
    'num_simulations': 4,  # Required key. Value must be an integer.
    'parameter_1': {
        'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
        'loc': 50,               # distribution name.
        'scale': 1
    },
    'parameter_2': {
        'distribution': 'skewnorm',
        'a': 4,
        'loc': 30,
        'scale': 2
    }
}
- Raises:
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter schema num_simulations key is not an integer
Parameter definition distribution value is not a valid Python identifier
Parameter definition key(s) is not a valid Python identifier
Parameter schema does not have a num_simulations key
Parameter definition does not contain a distribution key
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(
- output_file_type: Literal['h5', 'yaml'] | None = None,
- dry_run: bool | None = False,
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type. An example YAML parameter set file:
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- class waves.parameter_generators.OneAtATime(
- parameter_schema: dict,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'h5',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- write_meta: bool = False,
- **kwargs,
Bases:
ParameterGenerator
Builds a parameter study with single-value changes from a nominal parameter set. The nominal parameter set is created from the first value of every parameter iterable.
Parameters must be scalar valued integers, floats, strings, or booleans
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary - {parameter_name: schema value}. OneAtATime expects “schema value” to be an ordered iterable. For example, when read from a YAML file “schema value” will be a Python list. Each parameter’s values must have a consistent data type, but data type may vary between parameters.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
require_previous_parameter_study – Raise a RuntimeError if the previous parameter study file is missing.
overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and require_previous_parameter_study is True
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter key is not a supported iterable: tuple, list
Parameter key is empty
Example
>>> import waves
>>> parameter_schema = {
...     'parameter_1': [1.0],
...     'parameter_2': ['a', 'b'],
...     'parameter_3': [5, 3, 7]
... }
>>> parameter_generator = waves.parameter_generators.OneAtATime(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:         (set_name: 4)
Coordinates:
    set_hash        (set_name) <U32 '375a9b0b7c00d01bced92d9c5a6d302c' ....
  * set_name        (set_name) <U14 'parameter_set0' ... 'parameter_set3'
    parameter_sets  (set_name) <U14 'parameter_set0' ... 'parameter_set3'
Data variables:
    parameter_1     (set_name) float64 32B 1.0 1.0 1.0 1.0
    parameter_2     (set_name) <U1 16B 'a' 'b' 'a' 'a'
    parameter_3     (set_name) int64 32B 5 5 3 7
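The study size implied by a OneAtATime schema follows a simple rule: one nominal set built from the first value of every parameter, plus one additional set for each remaining value. A quick check of that rule (an illustrative sketch, not the class implementation; the function name is hypothetical):

```python
def one_at_a_time_set_count(schema):
    # One nominal set (first value of every parameter) plus one additional
    # set for each remaining, non-nominal value of each parameter.
    return 1 + sum(len(values) - 1 for values in schema.values())


schema = {
    "parameter_1": [1.0],
    "parameter_2": ["a", "b"],
    "parameter_3": [5, 3, 7],
}
count = one_at_a_time_set_count(schema)
```

For the schema above this gives four sets, matching the `set_name: 4` dimension in the example dataset.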
- _conditionally_write_dataset(
- existing_parameter_study: pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes: parameter set content hashes identifying rows of parameter study
self._parameter_names: parameter names used as columns of parameter study
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from self._samples.
Creates an md5 hash from the concatenated string representation of parameter name:value associations.
requires:
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names: parameter names used as columns of parameter study
creates attribute:
self._set_hashes: parameter set content hashes identifying rows of parameter study
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in self._samples.
Creates the class attribute self._set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._set_hashes: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the parameter sets from the user provided parameter values.
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the self.previous_parameter_study attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None [source]
Validate the One-at-a-Time parameter schema. Executed by class instantiation.
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(output_file_type: Literal['h5', 'yaml'] | None = None, dry_run: bool | None = False) None
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type. An example YAML parameter set file:
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- class waves.parameter_generators.ParameterGenerator(
- parameter_schema: dict,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'h5',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- write_meta: bool = False,
- **kwargs,
Bases:
ABC
Abstract base class for parameter study generators
Parameters must be scalar valued integers, floats, strings, or booleans
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary, e.g. {parameter_name: schema_value}. Validated on class instantiation.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
require_previous_parameter_study – Raise a RuntimeError if the previous parameter study file is missing.
overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and require_previous_parameter_study is True
- _conditionally_write_dataset(
- existing_parameter_study: pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None [source]
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_study() None [source]
Create the standard structure for the parameter study dataset
requires:
self._set_hashes: parameter set content hashes identifying rows of parameter study
self._parameter_names: parameter names used as columns of parameter study
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None [source]
Construct unique, repeatable parameter set content hashes from self._samples.
Creates an md5 hash from the concatenated string representation of parameter name:value associations.
requires:
self._samples: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names: parameter names used as columns of parameter study
creates attribute:
self._set_hashes: parameter set content hashes identifying rows of parameter study
- _create_set_names() None [source]
Construct parameter set names from the set name template and number of parameter sets in self._samples.
Creates the class attribute self._set_names required to populate the _generate() method’s parameter study Xarray dataset object.
requires:
self._set_hashes: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray [source]
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- abstractmethod _generate(**kwargs) None [source]
Generate the parameter study definition
All implemented class methods should accept kwargs as _generate(self, **kwargs). The ABC class accepts, but does not use, any kwargs.
Must set the class attributes:
self._samples: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters). If it’s possible that the samples may be of mixed type, numpy.array(..., dtype=object) should be used to preserve the original Python types.
self._set_hashes: list of parameter set content hashes created by calling self._create_set_hashes after populating the self._samples parameter study values.
self._set_names: Dictionary mapping parameter set hash to parameter set name strings created by calling self._create_set_names after populating self._set_hashes.
self.parameter_study: The Xarray Dataset parameter study object, created by calling self._create_parameter_study() after defining self._samples.
Minimum necessary work example:
# Work unique to the parameter generator schema and set generation
set_count = 5  # Normally set according to the parameter schema
parameter_count = len(self._parameter_names)
self._samples = numpy.zeros((set_count, parameter_count))

# Work performed by common ABC methods
super()._generate()
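The dtype=object requirement for mixed-type samples noted above can be illustrated with a small numpy check; the sample values here are illustrative, not from any real schema.

```python
import numpy

# dtype=object keeps the original Python int and str objects instead of
# coercing the whole array to a single numpy string dtype.
samples = numpy.array([[1, "a"], [2, "b"]], dtype=object)
first_value = samples[0, 0]
second_value = samples[0, 1]
```

Without dtype=object, numpy would coerce every element to a common string dtype and the original Python types would be lost when sets are hashed and written.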
- _merge_parameter_studies() None [source]
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the self.previous_parameter_study attribute is None
- _merge_set_names_array() None [source]
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray [source]
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None [source]
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None [source]
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- abstractmethod _validate() None [source]
Process parameter study input to verify schema
Must set the class attributes:
self._parameter_names
: list of strings containing the parameter study’s parameter names
Minimum necessary work example:
# Work unique to the parameter generator schema. Example matches CartesianProduct schema.
self._parameter_names = list(self.parameter_schema.keys())
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None [source]
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]] [source]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(
- output_file_type: Literal['h5', 'yaml'] | None = None,
- dry_run: bool | None = False,
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default output_file_template or output_file specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by output_file_type. An example YAML parameter set file:
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- class waves.parameter_generators.SALibSampler(sampler_class, *args, **kwargs)[source]
Bases:
ParameterGenerator
,ABC
Builds a SALib sampler parameter study from a SALib.sample sampler_class.
Samplers must use the N sample count argument. Note that in SALib.sample, N is not always equivalent to the number of simulations. The following samplers are tested for parameter study shape and merge behavior:
fast_sampler
finite_diff
latin
sobol
morris
Warning
For small numbers of parameters, some SALib generators produce duplicate parameter sets. These duplicate sets are removed during parameter study generation. This may cause the SALib analyze method(s) to raise errors related to the expected parameter set count.
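The duplicate-set removal described in the warning above can be mimicked with numpy. This is a behavioral sketch under stated assumptions, not the library's internal code; the sample values are illustrative.

```python
import numpy

samples = numpy.array([
    [0.0, 1.0],
    [0.5, 2.0],
    [0.0, 1.0],  # duplicate of the first parameter set
])
# Keep only the first occurrence of each unique row, preserving original order
_, first_indices = numpy.unique(samples, axis=0, return_index=True)
unique_samples = samples[numpy.sort(first_indices)]
```

Dropping duplicate rows this way shrinks the study below the sampler's requested count, which is why the SALib analyze methods may complain about the parameter set count afterward.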
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameter schema and the parameter study samples.
- Parameters:
sampler_class – The SALib.sample sampler class name. Case sensitive.
parameter_schema – The YAML loaded parameter study schema dictionary - {parameter_name: schema value}. SALibSampler expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the @number set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if overwrite is True. output_file_template and output_file are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if overwrite is True. output_file and output_file_template are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by output_file_template, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
- Variables:
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and require_previous_parameter_study is True
waves.exceptions.SchemaValidationError –
If the SALib sobol or SALib morris sampler is specified and there are fewer than 2 parameters
N is not a key of parameter_schema
problem is not a key of parameter_schema
names is not a key of parameter_schema['problem']
parameter_schema is not a dictionary
parameter_schema['N'] is not an integer
parameter_schema['problem'] is not a dictionary
parameter_schema['problem']['names'] is not a YAML compliant iterable (list, set, tuple)
Keyword arguments for the SALib.sample sampler_class sample method.
Example
>>> import waves
>>> parameter_schema = {
...     "N": 4,  # Required key. Value must be an integer.
...     "problem": {  # Required key. See the SALib sampler interface documentation
...         "num_vars": 3,
...         "names": ["parameter_1", "parameter_2", "parameter_3"],
...         "bounds": [[-1, 1], [-2, 2], [-3, 3]]
...     }
... }
>>> parameter_generator = waves.parameter_generators.SALibSampler("sobol", parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:      (set_name: 32)
Coordinates:
    set_hash     (set_name) <U32 'e0cb1990f9d70070eaf5638101dcaf...
  * set_name     (set_name) <U15 'parameter_set0' ... 'parameter...
Data variables:
    parameter_1  (set_name) float64 -0.2029 ... 0.187
    parameter_2  (set_name) float64 -0.801 ... 0.6682
    parameter_3  (set_name) float64 0.4287 ... -2.871
- _conditionally_write_dataset(
- existing_parameter_study: pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_names() None [source]
Construct the parameter names from a distribution parameter schema
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from
self._samples
.
Creates an md5 hash from the concatenated string representation of parameter
name:value
associations.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names
: parameter names used as columns of parameter study
creates attribute:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
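The repeatable content hashing described above can be sketched as follows. The exact string formatting and concatenation order WAVES uses internally are assumptions here; the point is that identical parameter content always produces the identical md5 hash.

```python
import hashlib

def set_hash(parameter_names, sample_row):
    # Concatenate the string representation of sorted name: value pairs,
    # then md5 the result. The exact formatting WAVES uses internally is
    # an assumption; the repeatability property is what matters.
    content = ", ".join(
        f"{name}: {repr(value)}"
        for name, value in sorted(zip(parameter_names, sample_row))
    )
    return hashlib.md5(content.encode()).hexdigest()

# Identical parameter content yields an identical, repeatable hash,
# regardless of the order the name/value pairs arrive in
hash_one = set_hash(["a", "b"], [1.0, 2.0])
hash_two = set_hash(["b", "a"], [2.0, 1.0])
```

Because the hash depends only on parameter content, merged studies can re-associate previous set names with unchanged parameter sets.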
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in
self._samples
Creates the class attribute
self._set_names
required to populate the
_generate()
method’s parameter study Xarray dataset object.
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names
: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the SALib.sample
sampler_class
parameter sets
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the
self.previous_parameter_study
attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _sampler_overrides(override_kwargs: dict | None = None) dict [source]
Provide sampler specific kwarg override dictionaries
sobol produces duplicate parameter sets for two parameters when
calc_second_order
is
True
. Override this kwarg to be
False
if there are only two parameters.
- Parameters:
override_kwargs – any common kwargs to include in the override dictionary
- Returns:
override kwarg dictionary
- _sampler_validation() None [source]
Call sampler specific schema validation check methods
sobol requires at least two parameters
Requires attributes:
self._sampler_class
set by class initiation
self._parameter_names
set by
self._create_parameter_names()
- Raises:
waves.exceptions.SchemaValidationError – A sobol or morris sampler contains fewer than two parameters
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None [source]
Process parameter study input to verify schema
Must set the class attributes:
self._parameter_names
: list of strings containing the parameter study’s parameter names
Minimum necessary work example:
# Work unique to the parameter generator schema. Example matches CartesianProduct schema.
self._parameter_names = list(self.parameter_schema.keys())
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(output_file_type: Literal['h5', 'yaml'] | None = None, dry_run: bool | None = False) None
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default
output_file_template
or
output_file
specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by
output_file_type
.
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- waves.parameter_generators.SET_COORDINATE_KEY: Final[str] = 'set_name'
The set name coordinate used in WAVES parameter study Xarray Datasets
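The module constant above names the coordinate shared by all WAVES parameter study datasets. A minimal sketch of selecting one parameter set by that coordinate, using a stand-in dataset (the variable names and values are illustrative, not WAVES output):

```python
import xarray

# Matches waves.parameter_generators.SET_COORDINATE_KEY
SET_COORDINATE_KEY = "set_name"

# Minimal stand-in for a WAVES parameter study dataset: one data variable
# dimensioned on the set name coordinate
study = xarray.Dataset(
    {"parameter_1": ([SET_COORDINATE_KEY], [1.0, 2.0])},
    coords={SET_COORDINATE_KEY: ["parameter_set0", "parameter_set1"]},
)

# Select a single parameter set's values by its set name coordinate value
subset = study.sel({SET_COORDINATE_KEY: "parameter_set0"})
```

Using the constant instead of the literal string keeps downstream workflow code in sync with the package's coordinate naming.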
- class waves.parameter_generators.ScipySampler(sampler_class, *args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a scipy sampler parameter study from a scipy.stats.qmc
sampler_class
Samplers must use the
d
parameter space dimension keyword argument. The following samplers are tested for parameter study shape and merge behavior:
Sobol
Halton
LatinHypercube
PoissonDisk
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameter schema and the parameter study samples.
- Parameters:
sampler_class – The scipy.stats.qmc sampler class name. Case sensitive.
parameter_schema – The YAML loaded parameter study schema dictionary -
{parameter_name: schema value}
ScipySampler expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the
@number
set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if
overwrite
is True.
output_file_template
and
output_file
are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if
overwrite
is True.
output_file
and
output_file_template
are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by
output_file_template
, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
- Variables:
self.parameter_distributions – A dictionary mapping parameter names to the
scipy.stats
distribution
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and
require_previous_parameter_study
is
True
Keyword arguments for the
scipy.stats.qmc
sampler_class
. The
d
keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves
>>> parameter_schema = {
...     'num_simulations': 4,  # Required key. Value must be an integer.
...     'parameter_1': {
...         'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
...         'loc': 50,               # distribution name.
...         'scale': 1
...     },
...     'parameter_2': {
...         'distribution': 'skewnorm',
...         'a': 4,
...         'loc': 30,
...         'scale': 2
...     }
... }
>>> parameter_generator = waves.parameter_generators.ScipySampler("LatinHypercube", parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:      (set_hash: 4)
Coordinates:
    set_hash     (set_hash) <U32 '1e8219dae27faa5388328e225a...
  * set_name     (set_hash) <U14 'parameter_set0' ... 'param...
Data variables:
    parameter_1  (set_hash) float64 0.125 ... 51.15
    parameter_2  (set_hash) float64 0.625 ... 30.97
- _conditionally_write_dataset(
- existing_parameter_study: ~pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_names() None
Construct the parameter names from a distribution parameter schema
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from
self._samples
.
Creates an md5 hash from the concatenated string representation of parameter
name:value
associations.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names
: parameter names used as columns of parameter study
creates attribute:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in
self._samples
Creates the class attribute
self._set_names
required to populate the
_generate()
method’s parameter study Xarray dataset object.
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names
: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the scipy.stats.qmc
sampler_class
parameter sets
- _generate_distribution_samples(sampler, set_count, parameter_count) None
Create parameter distribution samples
Requires attributes:
self.parameter_distributions
: dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema. Set by
waves.parameter_generators._ScipyGenerator._generate_parameter_distributions()
.
Sets attribute(s):
self._samples
: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters).
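One common way to realize the description above is to map uniform quantile samples through each parameter's inverse CDF. This is a hedged sketch, not the WAVES implementation; the distribution names and quantile values are illustrative assumptions matching the documented schema structure:

```python
import numpy
from scipy import stats

# Hypothetical parameter distributions following the documented schema
# structure; names and values here are illustrative only
parameter_distributions = {
    "parameter_1": stats.norm(loc=50, scale=1),
    "parameter_2": stats.norm(loc=30, scale=2),
}

# Quantiles in [0, 1), e.g. drawn from a scipy.stats.qmc sampler:
# rows are parameter sets, columns are parameters
quantiles = numpy.array([[0.5, 0.5], [0.25, 0.75]])

# Map each column through that parameter's inverse CDF (ppf) to turn
# uniform quantiles into distribution samples
samples = numpy.column_stack(
    [
        distribution.ppf(quantiles[:, column])
        for column, distribution in enumerate(parameter_distributions.values())
    ]
)
```

The resulting 2D array matches the documented samples shape: rows are parameter sets, columns are parameters.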
- _generate_parameter_distributions() dict
Return dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema.
- Returns:
parameter_distributions
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the
self.previous_parameter_study
attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None
Validate the parameter distribution schema. Executed by class initiation.
parameter_schema = {
    'num_simulations': 4,  # Required key. Value must be an integer.
    'parameter_1': {
        'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
        'loc': 50,               # distribution name.
        'scale': 1
    },
    'parameter_2': {
        'distribution': 'skewnorm',
        'a': 4,
        'loc': 30,
        'scale': 2
    }
}
- Raises:
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter schema
num_simulations
key is not an integer
Parameter definition distribution value is not a valid Python identifier
Parameter definition key(s) is not a valid Python identifier
Parameter schema does not have a
num_simulations
key
Parameter definition does not contain a
distribution
key
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(output_file_type: Literal['h5', 'yaml'] | None = None, dry_run: bool | None = False) None
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default
output_file_template
or
output_file
specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by
output_file_type
.
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
- class waves.parameter_generators.SobolSequence(*args, **kwargs)[source]
Bases:
_ScipyGenerator
Builds a Sobol sequence parameter study from the scipy Sobol class
random
method.
Warning
The merged parameter study feature does not check for consistent parameter distributions. Changing the parameter definitions and merging with a previous parameter study will result in incorrect relationships between parameter schema and the parameter study samples.
- Parameters:
parameter_schema – The YAML loaded parameter study schema dictionary -
{parameter_name: schema value}
SobolSequence expects “schema value” to be a dictionary with a strict structure and several required keys. Validated on class instantiation.
output_file_template – Output file name template for multiple file output of the parameter study. Required if parameter sets will be written to files instead of printed to STDOUT. May contain pathseps for an absolute or relative path template. May contain the
@number
set number placeholder in the file basename but not in the path. If the placeholder is not found it will be appended to the template string. Output files are overwritten if the content of the file has changed or if
overwrite
is True.
output_file_template
and
output_file
are mutually exclusive.
output_file – Output file name for single file output of the parameter study. Required if parameter sets will be written to a file instead of printed to STDOUT. May contain pathseps for an absolute or relative path. Output file is overwritten if the content of the file has changed or if
overwrite
is True.
output_file
and
output_file_template
are mutually exclusive.
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
set_name_template – Parameter set name template. Overridden by
output_file_template
, if provided.
previous_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
overwrite – Overwrite existing output files
write_meta – Write a meta file named “parameter_study_meta.txt” containing the parameter set file names. Useful for command line execution with build systems that require an explicit file list for target creation.
kwargs – Any additional keyword arguments are passed through to the sampler method
- Variables:
self.parameter_distributions – A dictionary mapping parameter names to the
scipy.stats
distribution
self.parameter_study – The final parameter study XArray Dataset object
- Raises:
waves.exceptions.MutuallyExclusiveError – If the mutually exclusive output file template and output file options are both specified
waves.exceptions.APIError – If an unknown output file type is requested
RuntimeError – If a previous parameter study file is specified and missing, and
require_previous_parameter_study
is
True
To produce consistent Sobol sequences on repeat instantiations, the
**kwargs
must include either
scramble=False
or
seed=<int>
. See the scipy
scipy.stats.qmc.Sobol
class documentation for details. The
d
keyword argument is internally managed and will be overwritten to match the number of parameters defined in the parameter schema.
Example
>>> import waves
>>> parameter_schema = {
...     'num_simulations': 4,  # Required key. Value must be an integer.
...     'parameter_1': {
...         'distribution': 'uniform',  # Required key. Value must be a valid scipy.stats
...         'loc': 0,                   # distribution name.
...         'scale': 10
...     },
...     'parameter_2': {
...         'distribution': 'uniform',
...         'loc': 2,
...         'scale': 3
...     }
... }
>>> parameter_generator = waves.parameter_generators.SobolSequence(parameter_schema)
>>> print(parameter_generator.parameter_study)
<xarray.Dataset>
Dimensions:      (set_name: 4)
Coordinates:
    set_hash     (set_name) <U32 'c1fa74da12c0991379d1df6541c421...
  * set_name     (set_name) <U14 'parameter_set0' ... 'parameter...
Data variables:
    parameter_1  (set_name) float64 0.0 0.5 ... 7.5 2.5
    parameter_2  (set_name) float64 0.0 0.5 ... 4.25
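The reproducibility requirement noted above (`scramble=False` or a fixed `seed`) can be demonstrated directly with the underlying scipy class; a minimal sketch, independent of WAVES:

```python
from scipy.stats import qmc

# Two scrambled Sobol samplers constructed with the same seed draw
# identical points; without a fixed seed (or scramble=False), repeat
# instantiations would produce different sequences
first = qmc.Sobol(d=2, scramble=True, seed=42).random(4)
second = qmc.Sobol(d=2, scramble=True, seed=42).random(4)
```

The same property is what makes repeat SobolSequence instantiations merge cleanly with a previous parameter study.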
- _conditionally_write_dataset(
- existing_parameter_study: ~pathlib.Path,
- parameter_study: xarray.Dataset,
Write NetCDF file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
existing_parameter_study – A relative or absolute file path to a previously created parameter study Xarray Dataset
parameter_study – Parameter study xarray dataset
- _conditionally_write_yaml(output_file: str | Path, parameter_dictionary: dict) None
Write YAML file over previous study if the datasets have changed or self.overwrite is True
- Parameters:
output_file – A relative or absolute file path to the output YAML file
parameter_dictionary – dictionary containing parameter set data
- _create_parameter_names() None
Construct the parameter names from a distribution parameter schema
- _create_parameter_study() None
Create the standard structure for the parameter study dataset
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
self._parameter_names
: parameter names used as columns of parameter study
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
creates attribute:
self.parameter_study
- _create_set_hashes() None
Construct unique, repeatable parameter set content hashes from
self._samples
.
Creates an md5 hash from the concatenated string representation of parameter
name:value
associations.
requires:
self._samples
: The parameter study samples. Rows are sets. Columns are parameters.
self._parameter_names
: parameter names used as columns of parameter study
creates attribute:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
- _create_set_names() None
Construct parameter set names from the set name template and number of parameter sets in
self._samples
Creates the class attribute
self._set_names
required to populate the
_generate()
method’s parameter study Xarray dataset object.
requires:
self._set_hashes
: parameter set content hashes identifying rows of parameter study
creates attribute:
self._set_names
: Dictionary mapping parameter set hash to parameter set name
- _create_set_names_array() xarray.DataArray
Create an Xarray DataArray with the parameter set names using parameter set hashes as the coordinate
- Returns:
set_names_array
- _generate(**kwargs) None [source]
Generate the parameter study dataset from the user provided parameter array
- _generate_distribution_samples(sampler, set_count, parameter_count) None
Create parameter distribution samples
Requires attributes:
self.parameter_distributions
: dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema. Set by
waves.parameter_generators._ScipyGenerator._generate_parameter_distributions()
.
Sets attribute(s):
self._samples
: The parameter study samples. A 2D numpy array in the shape (number of parameter sets, number of parameters).
- _generate_parameter_distributions() dict
Return dictionary containing the {parameter name: scipy.stats distribution} defined by the parameter schema.
- Returns:
parameter_distributions
- _merge_parameter_studies() None
Merge the current parameter study into a previous parameter study.
Preserve the previous parameter study set name to set contents associations by dropping the current study’s set names during merge. Resets attributes:
self.parameter_study
self._samples
self._set_hashes
self._set_names
- Raises:
RuntimeError – If the
self.previous_parameter_study
attribute is None
- _merge_set_names_array() None
Merge the parameter set names array into the parameter study dataset as a non-index coordinate
- _parameter_study_to_numpy() numpy.ndarray
Return the parameter study data as a 2D numpy array
- Returns:
data
- _scons_write(target: list, source: list, env) None
SCons Python build function wrapper for the parameter generator’s write() function.
Reference: https://scons.org/doc/production/HTML/scons-user/ch17s04.html
Searches for the following keyword arguments in the task construction environment and passes them to the write function:
output_file_type
- Parameters:
target – The target file list of strings
source – The source file list of SCons.Node.FS.File objects
env (SCons.Script.SConscript.SConsEnvironment) – The builder’s SCons construction environment object
- _update_set_names() None
Update the parameter set names after a parameter study dataset merge operation.
Resets attributes:
self.parameter_study
self._set_names
- _validate() None
Validate the parameter distribution schema. Executed by class initiation.
parameter_schema = {
    'num_simulations': 4,  # Required key. Value must be an integer.
    'parameter_1': {
        'distribution': 'norm',  # Required key. Value must be a valid scipy.stats
        'loc': 50,               # distribution name.
        'scale': 1
    },
    'parameter_2': {
        'distribution': 'skewnorm',
        'a': 4,
        'loc': 30,
        'scale': 2
    }
}
- Raises:
waves.exceptions.SchemaValidationError –
Parameter schema is not a dictionary
Parameter schema
num_simulations
key is not an integer
Parameter definition distribution value is not a valid Python identifier
Parameter definition key(s) is not a valid Python identifier
Parameter schema does not have a
num_simulations
key
Parameter definition does not contain a
distribution
key
- _write(
- parameter_study_object,
- parameter_study_iterator,
- conditional_write_function,
- dry_run: bool = False,
Write parameter study formatted output to STDOUT, separate set files, or a single file
Behavior as specified in
waves.parameter_generators.ParameterGenerator.write()
- _write_meta() None
Write the parameter study meta data file.
The parameter study meta file is always overwritten. It should NOT be used to determine if the parameter study target or dependee is out-of-date. Parameter study file paths are written as absolute paths.
- parameter_study_to_dict() Dict[str, Dict[str, Any]]
Return parameter study as a dictionary
Used for iterating on parameter sets in an SCons workflow with parameter substitution dictionaries, e.g.
>>> for set_name, parameters in parameter_generator.parameter_study_to_dict().items():
...     print(f"{set_name}: {parameters}")
...
parameter_set0: {'parameter_1': 1, 'parameter_2': 'a'}
parameter_set1: {'parameter_1': 1, 'parameter_2': 'b'}
parameter_set2: {'parameter_1': 2, 'parameter_2': 'a'}
parameter_set3: {'parameter_1': 2, 'parameter_2': 'b'}
- Returns:
parameter study sets and samples as a dictionary: {set_name: {parameter: value}, …}
- write(output_file_type: Literal['h5', 'yaml'] | None = None, dry_run: bool | None = False) None
Write the parameter study to STDOUT or an output file.
Writes to STDOUT by default. Requires non-default
output_file_template
or
output_file
specification to write to files.
If printing to STDOUT, print all parameter sets together. If printing to files, overwrite when contents of existing files have changed. If overwrite is specified, overwrite all parameter set files. If a dry run is requested, print file-content associations for files that would have been written.
Writes parameter set files in YAML syntax by default. Output formatting is controlled by
output_file_type
.
parameter_1: 1
parameter_2: a
- Parameters:
output_file_type – Output file syntax or type. Options are: ‘yaml’, ‘h5’.
dry_run – Print contents of new parameter study output files to STDOUT and exit
- Raises:
waves.exceptions.ChoicesError – If an unsupported output file type is requested
Exceptions#
Module of package specific exceptions.
The project design intent is to print error messages to STDERR and return non-zero exit codes when used as a command line utility (CLI), but raise exceptions when used as a package (API). The only time a stack trace should be printed when using the CLI is if the exception is unexpected and may represent an internal bug.
Most raised exceptions in the package API should be Python built-ins. However, to support the CLI error handling
described above some exceptions need to be uniquely identifiable as package exceptions and not third-party exceptions.
waves.exceptions.WAVESError()
and RuntimeError
exceptions will be caught by the command line utility and
converted to error messages and non-zero return codes. Third-party exceptions represent truly unexpected behavior that
may be an internal bug and print a stack trace.
This behavior could be supported by limiting package raised exceptions to RuntimeError exceptions; however, more specific exceptions are desirable when using the package API to allow end-users to handle different collections of API exceptions differently.
- exception waves.exceptions.APIError[source]
Bases:
WAVESError
Raised when an API validation fails, e.g. an argument value is outside the list of acceptable choices. Intended to mirror an associated
argparse
CLI option validation.
- exception waves.exceptions.ChoicesError[source]
Bases:
APIError
Raised during API validation that mirrors an
argparse
CLI argument with limited choices.
- exception waves.exceptions.MutuallyExclusiveError[source]
Bases:
APIError
Raised during API validation that mirrors an
argparse
CLI mutually exclusive group.
- exception waves.exceptions.SchemaValidationError[source]
Bases:
APIError
Raised when a WAVES parameter generator schema validation fails
- exception waves.exceptions.WAVESError[source]
Bases:
Exception
The base class for WAVES exceptions. All exceptions that must be caught by the CLI should derive from this class.
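The hierarchy documented above means a single handler for the base class covers every derived API exception. A minimal stand-in sketch (the real classes live in `waves.exceptions`; the `validate` helper is hypothetical):

```python
# Stand-in classes mirroring the documented WAVES exception hierarchy
class WAVESError(Exception):
    """Base class for exceptions the CLI converts to error messages."""

class APIError(WAVESError):
    """Raised when an API validation fails."""

class SchemaValidationError(APIError):
    """Raised when a parameter generator schema validation fails."""

def validate(schema):
    # Hypothetical validation raising the most specific exception
    if not isinstance(schema, dict):
        raise SchemaValidationError("parameter schema is not a dictionary")

# One handler for the base class catches every derived API exception,
# while still exposing the specific type for fine-grained handling
try:
    validate(["not", "a", "dictionary"])
except WAVESError as error:
    caught = type(error).__name__
```

This is why the CLI can catch `WAVESError` once while API users remain free to handle `SchemaValidationError` or `APIError` separately.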
_main.py#
Internal module implementing the command line utility behavior
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow
waves._main.main()
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
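The conversion described above can be sketched as follows. This mirrors the documented design intent, not the actual `waves._main.main()` code: expected exceptions become STDERR messages with non-zero exit codes, while anything else propagates with a stack trace as a probable internal bug.

```python
import sys

def main(subcommand):
    # Convert expected exceptions to a STDERR message and a non-zero
    # exit code; unexpected exceptions propagate with a stack trace
    try:
        subcommand()
    except RuntimeError as error:
        print(f"ERROR: {error}", file=sys.stderr)
        return 1
    return 0

def failing_subcommand():
    raise RuntimeError("expected, user-facing failure")

exit_code = main(failing_subcommand)
clean_exit_code = main(lambda: None)
```

In the real package the `except` clause would also cover `waves.exceptions.WAVESError`, since it is the documented base class for expected CLI failures.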
- waves._main.get_parser() ArgumentParser [source]
Get parser object for command line options
- Returns:
parser
- waves._main.main() None [source]
This is the main function that performs actions based on command line arguments.
_docs.py#
Internal API module implementing the docs
subcommand behavior.
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._docs.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the docs subcommand
- Returns:
parser
- waves._docs.main(documentation_index: Path, print_local_path: bool = False) None [source]
Open the package HTML documentation in the system default web browser or print the path to the documentation index file.
- Parameters:
print_local_path – Flag to print the local path to terminal instead of calling the default web browser
_fetch.py#
Internal API module implementing the fetch
subcommand behavior.
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._fetch.available_files(
- root_directory: str | Path,
- relative_paths: Iterable[str | Path],
Build a list of files at relative_paths with respect to the root_directory directory.
Returns a list of absolute paths and a list of any relative paths that were not found. Falls back to a full recursive search of relative_paths with pathlib.Path.rglob to enable pathlib style pattern matching.
- Parameters:
root_directory – Relative or absolute root path to search. Relative paths are converted to absolute paths with respect to the current working directory before searching.
relative_paths – Relative paths to search for. Directories are searched recursively for files.
- Returns:
available_files, not_found
- waves._fetch.build_copy_tuples(
- destination: str | Path,
- requested_paths_resolved: List[Path],
- overwrite: bool = False,
- Parameters:
destination – String or pathlike object for the destination directory
requested_paths_resolved – List of absolute requested files as path-objects
- Returns:
requested and destination file path pairs
- waves._fetch.build_destination_files(
- destination: str | Path,
- requested_paths: List[Path],
Build destination file paths from the requested paths, truncating the longest possible source prefix path
- Parameters:
destination – String or pathlike object for the destination directory
requested_paths – List of requested files as path-objects
- Returns:
destination files, existing files
- waves._fetch.build_source_files(
- root_directory: str | Path,
- relative_paths: Iterable[str | Path],
- exclude_patterns: Iterable[str] = ['__pycache__', '.pyc', '.sconf_temp', '.sconsign.dblite', 'config.log'],
Wrap available_files() and trim the list based on exclude patterns.
If no source files are found, an empty list is returned.
- Parameters:
root_directory (str) – Relative or absolute root path to search. Relative paths are converted to absolute paths with respect to the current working directory before searching.
relative_paths (list) – Relative paths to search for. Directories are searched recursively for files.
exclude_patterns (list) – list of strings to exclude from the root_directory directory tree if the path contains a matching string.
- Returns:
source_files, not_found
- Return type:
tuple of lists
- waves._fetch.conditional_copy(copy_tuples: List[Tuple[Path, Path]]) None [source]
Copy when destination file doesn’t exist or doesn’t match source file content
Uses Python shutil.copyfile, so metadata isn’t preserved. Creates intermediate parent directories prior to copy, but doesn’t raise exceptions on existing parent directories.
- Parameters:
copy_tuples – Tuple of source, destination pathlib.Path pairs, e.g.
((source, destination), ...)
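The copy-when-different logic described above can be sketched with the standard library. This is an illustrative approximation, not the package implementation:

```python
import filecmp
import shutil
from pathlib import Path
from typing import List, Tuple


def conditional_copy(copy_tuples: List[Tuple[Path, Path]]) -> None:
    """Copy each source to its destination unless the destination already matches."""
    for source, destination in copy_tuples:
        # Skip the copy when the destination exists with identical content
        if destination.exists() and filecmp.cmp(source, destination, shallow=False):
            continue
        # Create intermediate parent directories; tolerate existing ones
        destination.parent.mkdir(parents=True, exist_ok=True)
        # shutil.copyfile copies content only, so metadata is not preserved
        shutil.copyfile(source, destination)
```

Repeated calls with unchanged sources become no-ops, which keeps downstream build tools from seeing spurious file updates.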
- waves._fetch.extend_requested_paths(
- requested_paths: List[Path],
- tutorial: Literal[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
Extend the requested_paths list with the necessary tutorial files.
- Parameters:
requested_paths – list of relative path-like objects that subset the files found in the
root_directory
relative_paths
tutorial – Integer to fetch all necessary files for the specified tutorial number
- Returns:
extended requested paths
- Raises:
ChoicesError – If the requested tutorial number doesn’t exist
- waves._fetch.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the fetch subcommand
- Returns:
parser
- waves._fetch.longest_common_path_prefix(file_list: List[Path]) Path [source]
Return the longest common file path prefix.
The edge case of a single path is handled by returning the parent directory
- Parameters:
file_list – List of path-like objects
- Returns:
longest common path prefix
- Raises:
RuntimeError – When file list is empty
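A minimal sketch of the documented behavior, using os.path.commonpath for the general case (an approximation, not the package source):

```python
import os
from pathlib import Path
from typing import List


def longest_common_path_prefix(file_list: List[Path]) -> Path:
    """Return the longest common path prefix of a list of paths."""
    if not file_list:
        raise RuntimeError("file list is empty")
    if len(file_list) == 1:
        # Single-path edge case: return the parent directory
        return Path(file_list[0]).parent
    return Path(os.path.commonpath(file_list))
```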
- waves._fetch.main(
- subcommand: str,
- root_directory: str | Path,
- relative_paths: Iterable[str | Path],
- destination: str | Path,
- requested_paths: List[Path] | None = None,
- tutorial: Literal[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] | None = None,
- overwrite: bool = False,
- dry_run: bool = False,
- print_available: bool = False,
Thin wrapper on waves.fetch.recursive_copy() to provide subcommand-specific behavior and STDOUT/STDERR handling.
Recursively copy requested paths from root_directory/relative_paths directories into destination directory using the shortest possible shared source prefix.
- Parameters:
subcommand – name of the subcommand to report in STDOUT
root_directory – String or pathlike object for the root_directory directory
relative_paths – List of string or pathlike objects describing relative paths to search for in root_directory
destination – String or pathlike object for the destination directory
requested_paths – list of Path objects that subset the files found in the
root_directory
relative_paths
tutorial – Integer to fetch all necessary files for the specified tutorial number
overwrite – Boolean to overwrite any existing files in destination directory
dry_run – Print the destination tree and exit. Short circuited by
print_available
print_available – Print the available source files and exit. Short circuits
dry_run
- waves._fetch.print_list(
- things_to_print: list,
- prefix: str = '\t',
- stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
Print a list to the specified stream, one line per item
- Parameters:
things_to_print (list) – List of items to print
prefix (str) – prefix to print on each line before printing the item
stream (file-like) – output stream. Defaults to
sys.stdout
.
- waves._fetch.recursive_copy(
- root_directory: str | Path,
- relative_paths: Iterable[str | Path],
- destination: str | Path,
- requested_paths: List[Path] | None = None,
- tutorial: Literal[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] | None = None,
- overwrite: bool = False,
- dry_run: bool = False,
- print_available: bool = False,
Recursively copy requested paths from root_directory/relative_paths directories into destination directory using the shortest possible shared source prefix.
If destination files exist, copy non-conflicting files unless overwrite is specified.
- Parameters:
root_directory – String or pathlike object for the root_directory directory
relative_paths – List of string or pathlike objects describing relative paths to search for in root_directory
destination – String or pathlike object for the destination directory
requested_paths – list of relative path-objects that subset the files found in the
root_directory
relative_paths
tutorial – Integer to fetch all necessary files for the specified tutorial number
overwrite – Boolean to overwrite any existing files in destination directory
dry_run – Print the destination tree and exit. Short circuited by
print_available
print_available – Print the available source files and exit. Short circuits
dry_run
- Raises:
RuntimeError – If none of the requested files exist in the longest common source path
_visualize.py#
Internal API module implementing the visualize
subcommand behavior.
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._visualize.add_node_count(
- graph: networkx.DiGraph,
- text: str = 'Node count: ',
Add an orphan node with the total node count to a directed graph
The graph nodes must contain a layer attribute with integer values. The orphan node is assigned to the minimum layer.
- Parameters:
graph – original graph
text – Leading text for node name and label
- waves._visualize.ancestor_subgraph(graph: networkx.DiGraph, nodes: ~typing.Iterable[str]) networkx.DiGraph [source]
Return a new directed graph containing nodes and their ancestors
- Parameters:
graph – original directed graph
nodes – iterable of nodes name strings
- Returns:
subgraph
- Raises:
RuntimeError – If one or more nodes are missing from the graph
- waves._visualize.check_regex_exclude(
- exclude_regex: str,
- node_name: str,
- current_indent: int,
- exclude_indent: int,
- exclude_node: bool = False,
Excludes node names that match the regular expression
- Parameters:
exclude_regex (str) – Regular expression
node_name (str) – Name of the node
current_indent (int) – Current indent of the parsed output
exclude_indent (int) – Set to current_indent if node is to be excluded
exclude_node (bool) – Indicates whether a node should be excluded
- Returns:
Tuple containing exclude_node and exclude_indent
- waves._visualize.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the visualize subcommand
- Returns:
parser
- waves._visualize.graph_to_graphml(graph: networkx.DiGraph) str [source]
Return the networkx graphml text
- Parameters:
graph – networkx directed graph
- waves._visualize.main(
- targets: List[str],
- scons_args: list | None = None,
- sconstruct: Path = PosixPath('SConstruct'),
- output_file: Path | None = None,
- height: int = 12,
- width: int = 36,
- font_size: int = 10,
- node_color: str = '#5AC7CB',
- edge_color: str = '#B7DEBE',
- exclude_list: List[str] = ['/usr/bin'],
- exclude_regex: str | None = None,
- print_graphml: bool = False,
- print_tree: bool = False,
- vertical: bool = False,
- no_labels: bool = False,
- node_count: bool = False,
- transparent: bool = False,
- break_paths: bool = False,
- input_file: str | Path | None = None,
Visualize the directed acyclic graph created by a SCons build
Uses matplotlib and networkx to build a directed acyclic graph showing the relationships of the various dependencies using boxes and arrows. The visualization can be saved as an SVG, and graphml output can be printed as well.
- Parameters:
targets – Strings specifying SCons targets
scons_args – list of SCons arguments
sconstruct – Path to an SConstruct file or parent directory
output_file – File for saving the visualization
height – Height of visualization if being saved to a file
width – Width of visualization if being saved to a file
font_size – Font size of node labels
exclude_list – exclude nodes starting with strings in this list (e.g. /usr/bin)
exclude_regex – exclude nodes that match this regular expression
print_graphml – Whether to print the graph in graphml format
print_tree – Print the text output of the scons --tree command to the screen
vertical – Specifies a vertical layout of graph instead of the default horizontal layout
no_labels – Don’t print labels on the nodes of the visualization
node_count – Add a node count orphan node
transparent – Use a transparent background
break_paths – Format paths by breaking at path separator with a newline
input_file – Path to text file storing output from SCons tree command
- waves._visualize.parse_output(
- tree_lines: List[str],
- exclude_list: List[str] = ['/usr/bin'],
- exclude_regex: str | None = None,
- no_labels: bool = False,
- break_paths: bool = False,
Parse the string that has the tree output and return as a networkx directed graph
- Parameters:
tree_lines – output of the scons tree command pre-split on newlines to a list of strings
exclude_list – exclude nodes starting with strings in this list (e.g. /usr/bin)
exclude_regex – exclude nodes that match this regular expression
no_labels – Don’t print labels on the nodes of the visualization
break_paths – Format paths by breaking at path separator with a newline
- Returns:
networkx directed graph
- Raises:
RuntimeError – If the parsed input doesn’t contain recognizable SCons nodes
- waves._visualize.plot(
- figure: matplotlib.figure.Figure,
- output_file: ~pathlib.Path | None = None,
- transparent: bool = False,
Open a matplotlib plot or save to file
- Parameters:
figure – The matplotlib figure
output_file – File for saving the visualization
transparent – Use a transparent background
- waves._visualize.visualize(
- graph: networkx.DiGraph,
- height: int = 12,
- width: int = 36,
- font_size: int = 10,
- node_color: str = '#5AC7CB',
- edge_color: str = '#B7DEBE',
- vertical: bool = False,
Create a visualization showing the tree
Nodes in graph require the layer and label attributes.
- Parameters:
graph – networkx directed graph
height – Height of visualization if being saved to a file
width – Width of visualization if being saved to a file
font_size – Font size of file names in points
vertical – Specifies a vertical layout of graph instead of the default horizontal layout
_build.py#
Internal API module implementing the build
subcommand behavior.
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._build.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the build subcommand
- Returns:
parser
- waves._build.main(
- targets: list,
- scons_args: list | None = None,
- max_iterations: int = 5,
- working_directory: str | Path | None = None,
- git_clone_directory: str | Path | None = None,
Submit an iterative SCons command
SCons command is re-submitted until SCons reports that the target ‘is up to date.’ or the iteration count is reached.
- Parameters:
targets – list of SCons targets (positional arguments)
scons_args – list of SCons arguments
max_iterations – Maximum number of iterations before the iterative loop is terminated
working_directory – Change the SCons command working directory
git_clone_directory – Destination directory for a Git clone operation
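The iteration logic above can be sketched with an injectable runner. The names iterative_build and run_scons are hypothetical; the real subcommand shells out to scons and inspects its output:

```python
from typing import Callable


def iterative_build(run_scons: Callable[[], str], max_iterations: int = 5) -> bool:
    """Re-run the build until SCons reports up-to-date targets or iterations run out."""
    for _ in range(max_iterations):
        output = run_scons()
        # SCons prints "`<target>' is up to date." when nothing is left to build
        if "is up to date." in output:
            return True
    return False
```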
_parameter_study.py#
Internal API module implementing the parameter study subcommand(s) behavior.
Thin CLI wrapper around waves.parameter_generators classes
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._parameter_study.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the parameter study subcommand(s)
- Returns:
parser
- waves._parameter_study.main(
- subcommand: str,
- input_file: str | Path | TextIOWrapper | None,
- output_file_template: str | None = None,
- output_file: str | None = None,
- output_file_type: Literal['h5', 'yaml'] = 'yaml',
- set_name_template: str = 'parameter_set@number',
- previous_parameter_study: str | None = None,
- require_previous_parameter_study: bool = False,
- overwrite: bool = False,
- dry_run: bool = False,
- write_meta: bool = False,
Build parameter studies
- Parameters:
subcommand (str) – parameter study type to build
input_file (str) – path to YAML formatted parameter study schema file
output_file_template (str) – output file template name
output_file (str) – relative or absolute output file path
output_file_type (str) – yaml or h5
set_name_template (str) – parameter set name string template. May contain ‘@number’ for the set number.
previous_parameter_study (str) – relative or absolute path to previous parameter study file
overwrite (bool) – overwrite all existing parameter set file(s)
dry_run (bool) – print what files would have been written, but do no work
write_meta (bool) – write a meta file named ‘parameter_study_meta.txt’ containing the parameter set file path(s)
- waves._parameter_study.read_parameter_schema(input_file: str | Path | TextIOWrapper | None) dict [source]
Read a YAML dictionary from STDIN or a file
- Parameters:
input_file – STDIN stream or file path
- Returns:
dictionary
- Raises:
RuntimeError – if not STDIN and the file name does not exist
_print_study.py#
Internal API module implementing the print_study
subcommand behavior.
Should raise RuntimeError
or a derived class of waves.exceptions.WAVESError
to allow the CLI implementation
to convert stack-trace/exceptions into STDERR message and non-zero exit codes.
- waves._print_study.get_parser() ArgumentParser [source]
Return a ‘no-help’ parser for the print_study subcommand
- Returns:
parser
- waves._print_study.main(parameter_study_file: Path) None [source]
Open and print a WAVES parameter study file as a table
- Parameters:
parameter_study_file – The parameter study file to open
- Raises:
RuntimeError – If one or more files fails to open
_utilities.py#
Internal API module storing project utilities.
Functions that are limited in use to a public API should prefer to raise Python built-in exceptions.
Functions that may be used in a CLI implementation should raise RuntimeError
or a derived class of
waves.exceptions.WAVESError
to allow the CLI implementation to convert stack-trace/exceptions into STDERR
message and non-zero exit codes.
- class waves._utilities._AtSignTemplate(template)[source]
Bases:
Template
Use the CMake ‘@’ delimiter in a Python ‘string.Template’ to avoid clashing with bash variable syntax
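The delimiter override is a one-liner with the standard library string.Template; the class name here is illustrative:

```python
from string import Template


class AtSignTemplate(Template):
    # CMake-style '@' delimiter avoids clashing with bash's '$' variable syntax
    delimiter = "@"


# '@set_name' is substituted; '$PATH' is left alone because '$' is no longer special
template = AtSignTemplate("$PATH/@set_name/job.inp")
substituted = template.substitute(set_name="parameter_set0")  # '$PATH/parameter_set0/job.inp'
```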
- waves._utilities.cache_environment(
- command: str,
- shell: str = 'bash',
- cache: str | Path | None = None,
- overwrite_cache: bool = False,
- verbose: bool = False,
Retrieve cached environment dictionary or run a shell command to generate environment dictionary
Warning
Currently assumes a nix flavored shell: sh, bash, zsh, csh, tcsh. May work with any shell supporting command construction as below.
{shell} -c "{command} && env -0"
The method may fail if the command produces stdout that does not terminate in a newline. Redirect command output away from stdout if this causes problems, e.g. command = 'command > /dev/null && command two > /dev/null' in most shells.
If the environment is created successfully and a cache file is requested, the cache file is always written. The overwrite_cache behavior forces the shell command execution, even when the cache file is present. If the command fails (raising a subprocess.CalledProcessError) the captured output is printed to STDERR before re-raising the exception.
- Parameters:
command – the shell command to execute
shell – the shell to use when executing command by absolute or relative path
cache – absolute or relative path to read/write a shell environment dictionary. Will be written as YAML formatted file regardless of extension.
overwrite_cache – Ignore previously cached files if they exist.
verbose – Print SCons configuration-like action messages when True
- Returns:
shell environment dictionary
- Raises:
subprocess.CalledProcessError – Print the captured output and re-raise exception when the shell command returns a non-zero exit status.
- waves._utilities.create_valid_identifier(identifier: str) None [source]
Create a valid Python identifier from an arbitrary string by replacing invalid characters with underscores
- Parameters:
identifier – String to convert to valid Python identifier
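A plausible sketch of the conversion with re.sub; the actual package implementation may differ, and this sketch returns the converted string for illustration:

```python
import re


def create_valid_identifier(identifier: str) -> str:
    """Replace characters that are invalid in a Python identifier with underscores."""
    # \W matches anything outside [a-zA-Z0-9_]; a leading digit is also invalid,
    # so the lookahead inserts an underscore before it
    return re.sub(r"\W|^(?=\d)", "_", identifier)
```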
- waves._utilities.cubit_os_bin() str [source]
Return the OS specific Cubit bin directory name
Making Cubit importable requires putting the Cubit bin directory on PYTHONPATH. On MacOS, the directory is “MacOS”. On other systems it is “bin”.
- Returns:
bin directory name, e.g. “bin” or “MacOS”
- Return type:
str
- waves._utilities.find_command(options: Iterable[str]) str [source]
Return first found command in list of options.
- Parameters:
options – alternate command options
- Returns:
command absolute path
- Raises:
FileNotFoundError – If no matching command is found
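A sketch of the first-found search using shutil.which (an approximation of the documented behavior, not the package source):

```python
import shutil
from typing import Iterable


def find_command(options: Iterable[str]) -> str:
    """Return the absolute path of the first command option found on PATH."""
    options = list(options)  # allow a second pass for the error message
    for option in options:
        absolute_path = shutil.which(option)
        if absolute_path is not None:
            return absolute_path
    raise FileNotFoundError(f"None of the command options were found: {options}")
```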
- waves._utilities.find_cubit_bin(options: Iterable[str], bin_directory: str | None = None) Path [source]
Provided a few options for the Cubit executable, search for the bin directory.
Recommend first checking to see if cubit will import.
- Parameters:
options – Cubit command options
bin_directory – Cubit’s bin directory name. Override the bin directory returned by
waves._utilities.cubit_os_bin()
.
- Returns:
Cubit bin directory absolute path
- Raises:
FileNotFoundError – If the Cubit command or bin directory is not found
- waves._utilities.find_cubit_python(options: Iterable[str], python_command: str = 'python3*') Path [source]
Provided a few options for the Cubit executable, search for the Cubit Python interpreter.
Recommend first checking to see if cubit will import.
- Parameters:
options – Cubit command options
python_command – Cubit’s Python executable file basename or
pathlib.Path.rglob
pattern
- Returns:
Cubit Python interpreter executable absolute path
- Raises:
FileNotFoundError – If the Cubit command or Cubit Python interpreter is not found
- waves._utilities.return_environment(
- command: str,
- shell: str = 'bash',
- string_option: str = '-c',
- separator: str = '&&',
- environment: str = 'env -0',
Run a shell command and return the shell environment as a dictionary
{shell} {string_option} "{command} {separator} {environment}"
Warning
The method may fail if the command produces stdout that does not terminate in a newline. Redirect command output away from stdout if this causes problems, e.g. command = 'command > /dev/null && command two > /dev/null' in most shells.
- Parameters:
command – the shell command to execute
shell – the shell to use when executing command by absolute or relative path
string_option – the shell’s option to execute a string command
separator – the shell’s command separator, e.g. ; or &&.
environment – environment command to print environment on STDOUT with null terminated separators
- Returns:
shell environment dictionary
- Raises:
subprocess.CalledProcessError – When the shell command returns a non-zero exit status
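The command construction and null-terminated parsing can be sketched as follows; parse_null_terminated_environment is a hypothetical helper name, not part of the package API:

```python
import subprocess


def parse_null_terminated_environment(stdout: bytes) -> dict:
    """Split 'env -0' output on null bytes into a {name: value} dictionary."""
    entries = stdout.decode().split("\x00")
    return dict(entry.split("=", 1) for entry in entries if "=" in entry)


def return_environment(
    command: str,
    shell: str = "bash",
    string_option: str = "-c",
    separator: str = "&&",
    environment: str = "env -0",
) -> dict:
    """Run the command in a shell, then dump the resulting environment."""
    full_command = [shell, string_option, f"{command} {separator} {environment}"]
    # check=True raises subprocess.CalledProcessError on non-zero exit status
    result = subprocess.run(full_command, check=True, capture_output=True)
    return parse_null_terminated_environment(result.stdout)
```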
- waves._utilities.search_commands(options: Iterable[str]) str | None [source]
Return the first found command in the list of options. Return None if none are found.
- Parameters:
options (list) – executable path(s) to test
- Returns:
command absolute path
- waves._utilities.set_name_substitution(
- original: Iterable[str | Path] | str | Path,
- replacement: str,
- identifier: str = 'set_name',
- suffix: str = '/',
Replace @identifier with replacement text in a list of strings and pathlib Path objects.
If the original is not a string, Path, or an iterable of strings and Paths, return without modification.
- Parameters:
original – String, Path, or iterable of strings and Paths
replacement – substitution string for the identifier
identifier – template identifier to replace, e.g. @identifier becomes replacement
suffix – suffix to insert after the replacement text
- Returns:
string or list of strings with identifier replacements
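A sketch of the substitution rules described above, using plain string replacement; the package implementation uses a template class, so treat this as an approximation:

```python
from collections.abc import Iterable
from pathlib import Path


def set_name_substitution(original, replacement, identifier="set_name", suffix="/"):
    """Replace '@identifier' with replacement + suffix in strings and Path objects."""
    token = f"@{identifier}"

    def substitute(item):
        if isinstance(item, Path):
            return Path(str(item).replace(token, f"{replacement}{suffix}"))
        return item.replace(token, f"{replacement}{suffix}")

    if isinstance(original, (str, Path)):
        return substitute(original)
    if isinstance(original, Iterable) and all(isinstance(item, (str, Path)) for item in original):
        return [substitute(item) for item in original]
    # Anything else is returned without modification
    return original
```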
- waves._utilities.tee_subprocess(command: List[str], **kwargs) Tuple[int, str] [source]
Stream STDOUT to terminal while saving buffer to variable
- Parameters:
command – Command to execute provided a list of strings
kwargs (dict) – Any additional keyword arguments are passed through to subprocess.Popen
- Returns:
integer return code, string STDOUT
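A minimal sketch of the stream-and-capture pattern with subprocess.Popen (an approximation of the documented behavior):

```python
import subprocess
from typing import List, Tuple


def tee_subprocess(command: List[str], **kwargs) -> Tuple[int, str]:
    """Stream STDOUT to the terminal while also capturing it to a string."""
    with subprocess.Popen(command, stdout=subprocess.PIPE, text=True, **kwargs) as process:
        captured = []
        for line in process.stdout:
            print(line, end="")  # echo to the terminal as it arrives
            captured.append(line)
    # Popen's context manager waits for the process, so returncode is set here
    return process.returncode, "".join(captured)
```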
- waves._utilities.warn_only_once(function)[source]
Decorator to suppress warnings raised by successive function calls
- Parameters:
function – The function to wrap
- Returns:
function wrapped in the warning suppression logic
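One way to implement such a decorator with the warnings module; this is a sketch of the documented behavior, not the package source:

```python
import functools
import warnings


def warn_only_once(function):
    """Allow warnings from the first call; silence warnings on repeat calls."""

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        if wrapper._already_called:
            # Suppress warnings on every call after the first
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                return function(*args, **kwargs)
        wrapper._already_called = True
        return function(*args, **kwargs)

    wrapper._already_called = False
    return wrapper
```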
odb_extract.py#
Extracts data from an Abaqus odb file. Writes two files ‘output_file.h5’ and ‘output_file_datasets.h5’
Calls odbreport feature of Abaqus, parses resultant file, and creates output file. Most simulation data lives in a group path following the instance and set name, e.g. ‘/INSTANCE/FieldOutputs/ELEMENT_SET’, and can be accessed with xarray as
import xarray
xarray.open_dataset("sample.h5", group="/INSTANCE/FieldOutputs/ELEMENT_SET")
You can view all group paths with ‘h5ls -r sample.h5’. Additional ODB information is available in the ‘/odb’ group path. The ‘/xarray/Dataset’ group path contains a list of group paths that contain an xarray dataset.
/ # Top level group required in all hdf5 files
/<instance name>/ # Groups containing data of each instance found in an odb
FieldOutputs/ # Group with multiple xarray datasets for each field output
<field name>/ # Group with datasets containing field output data for a specified set or surface
# If no set or surface is specified, the <field name> will be 'ALL_NODES' or 'ALL_ELEMENTS'
HistoryOutputs/ # Group with multiple xarray datasets for each history output
<region name>/ # Group with datasets containing history output data for specified history region name
# If no history region name is specified, the <region name> will be 'ALL NODES'
Mesh/ # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/ # Group containing data of assembly instance found in an odb
Mesh/ # Group written from an xarray dataset with all mesh information for this instance
/odb/ # Catch all group for data found in the odbreport file not already organized by instance
info/ # Group with datasets that mostly give odb meta-data like name, path, etc.
jobData/ # Group with datasets that contain additional odb meta-data
rootAssembly/ # Group with datasets that match odb file organization per Abaqus documentation
sectionCategories/ # Group with datasets that match odb file organization per Abaqus documentation
/xarray/ # Group with a dataset that lists the location of all data written from xarray datasets
- waves._abaqus.odb_extract.get_odb_report_args(odb_report_args: str, input_file: Path, job_name: Path)[source]
Generates odb_report arguments
- Parameters:
odb_report_args – String of command line options to pass to
abaqus odbreport
.input_file –
.odb
file.job_name – Report file.
- waves._abaqus.odb_extract.get_parser()[source]
Get parser object for command line options
- Returns:
argument parser
- Return type:
parser
- waves._abaqus.odb_extract.odb_extract(
- input_file: list,
- output_file: str,
- output_type: str = 'h5',
- odb_report_args: str | None = None,
- abaqus_command: str = 'abq2024',
- delete_report_file: bool = False,
- verbose: bool = False,
The odb_extract Abaqus data extraction tool. Most users should use the associated command line interface.
- Parameters:
input_file – A list of *.odb files to extract. Current implementation only supports extraction on the first file in the list.
output_file – The output file name to extract to. Extension should match one of the supported output types.
output_type – Output file type. Defaults to h5. Options are: h5, yaml, json.
odb_report_args – String of command line options to pass to abaqus odbreport.
abaqus_command – The abaqus command name or absolute path to the Abaqus executable.
delete_report_file – Boolean to delete the intermediate Abaqus generated report file after producing the
output_file
.verbose – Boolean to print more verbose messages
- waves._abaqus.odb_extract.run_external(cmd)[source]
Execute an external command and get its exitcode, stdout and stderr.
- Parameters:
cmd (str) – command line command to run
- Returns:
output, return_code, error_code
Odb Report File Parser#
- class waves._abaqus.abaqus_file_parser.OdbReportFileParser(input_file, verbose=False, *args, **kwargs)[source]
Bases:
AbaqusFileParser
Class for parsing Abaqus odbreport files. Expected input includes only files that are in the csv format and which have used the ‘blocked’ option.
Results are stored either in a dictionary which mimics the format of the odb file (see Abaqus documentation), or stored in a specialized ‘extract’ format written to an hdf5 file.
Format of HDF5 file#
/                            # Top level group required in all hdf5 files
/<instance name>/            # Groups containing data of each instance found in an odb
    FieldOutputs/            # Group with multiple xarray datasets for each field output
        <field name>/        # Group with datasets containing field output data for a specified set or surface
                             # If no set or surface is specified, the <field name> will be 'ALL_NODES' or 'ALL_ELEMENTS'
    HistoryOutputs/          # Group with multiple xarray datasets for each history output
        <region name>/       # Group with datasets containing history output data for specified history region name
                             # If no history region name is specified, the <region name> will be 'ALL NODES'
    Mesh/                    # Group written from an xarray dataset with all mesh information for this instance
/<instance name>_Assembly/   # Group containing data of assembly instance found in an odb
    Mesh/                    # Group written from an xarray dataset with all mesh information for this instance
/odb/                        # Catch all group for data found in the odbreport file not already organized by instance
    info/                    # Group with datasets that mostly give odb meta-data like name, path, etc.
    jobData/                 # Group with datasets that contain additional odb meta-data
    rootAssembly/            # Group with datasets that match odb file organization per Abaqus documentation
    sectionCategories/       # Group with datasets that match odb file organization per Abaqus documentation
/xarray/                     # Group with a dataset that lists the location of all data written from xarray datasets
    Dataset                  # HDF5 Dataset that lists the location within the hdf5 file of all xarray datasets
- create_extract_format(odb_dict, h5_file, time_stamp)[source]
Format the dictionary with the odb data into something that resembles the previous abaqus extract method
- Parameters:
odb_dict (dict) – Dictionary with odb data
h5_file (str) – Name of h5_file to use for storing data
time_stamp (str) – Time stamp for possibly appending to hdf5 file names
- Returns:
None
- get_position_index(position, position_type, values)[source]
Get the index of the position (node or element) currently used
- Parameters:
position (int) – integer representing a node or element
position_type (str) – string of either ‘nodes’ or ‘elements’
values (dict) – dictionary where values are stored
- Returns:
index, just_added
- Return type:
int, bool
- pad_none_values(step_number, frame_number, position_length, data_length, element_size, values)[source]
Pad the values list with None or lists of None values in the locations indicated by the parameters
- Parameters:
step_number (int) – index of current step
frame_number (int) – index of current frame
position_length (int) – number of nodes or elements
data_length (int) – length of data given in field
element_size (int) – number of element lines that could be listed, e.g. for a hex this value would be 6
values (list) – list that holds the data values
- parse(format='extract', h5_file='extract.h5', time_stamp=None)[source]
- Parse the file and store the results in the self.parsed dictionary.
Can parse csv formatted output with the blocked option from the odbreport command
- Parameters:
format (str) – Format in which to store data can be ‘odb’ or ‘extract’
h5_file (str) – Name of hdf5 file to store data into when using the extract format
time_stamp (str) – Time stamp for possibly appending to hdf5 file names
- Returns:
None
- parse_analytic_surface(f, instance, line)[source]
Parse the section that contains analytic surface
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the analytic surface
line (str) – current line of file
- Returns:
None
- parse_components_of_field(f, line, field)[source]
Parse the section that contains the data for field outputs found after the ‘Components of field’ heading
- Parameters:
f (file object) – open file
line (str) – current line of file
field (dict) – dictionary for storing field output
- Returns:
current line of file
- Return type:
str
- parse_element_classes(f, instance, number_of_element_classes)[source]
Parse the section that contains element classes
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the elements
number_of_element_classes (int) – number of element classes to parse
- Returns:
None
- parse_element_set(f, instance, number_of_element_sets)[source]
Parse the section that contains element sets
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the element sets
number_of_element_sets (int) – number of element sets to parse
- Returns:
None
- parse_elements(f, instance, number_of_elements)[source]
Parse the section that contains elements
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the elements
number_of_elements (int) – number of elements to parse
- Returns:
None
- parse_field_values(f, line, values)[source]
Parse the section that contains the data for field values
- Parameters:
f (file object) – open file
line (str) – current line
values (list) – list for storing the field values
- Returns:
current line of file
- Return type:
str
- parse_fields(f, fields, line)[source]
Parse the section that contains the data for field outputs
- Parameters:
f (file object) – open file
fields (dict) – dictionary for storing the field outputs
line (str) – current line of file
- Returns:
current line of file
- Return type:
str
- parse_frames(f, frames, number_of_frames)[source]
Parse the section that contains the data for frames
- Parameters:
f (file object) – open file
frames (list) – list for storing the frames
number_of_frames (int) – number of frames to parse
- Returns:
current line of file
- Return type:
str
- parse_history_outputs(f, outputs, line)[source]
Parse the section that contains history outputs
- Parameters:
f (file object) – open file
outputs (dict) – dict for storing the history output data
line (str) – current line of file
- Returns:
current line of file
- Return type:
str
- parse_history_regions(f, line, regions, number_of_history_regions)[source]
Parse the section that contains history regions
- Parameters:
f (file object) – open file
line (str) – current line of file
regions (dict) – dict for storing the history region data
number_of_history_regions (int) – number of history regions to parse
- Returns:
current line of file
- Return type:
str
- parse_instances(f, instances, number_of_instances)[source]
Parse the section that contains instances
- Parameters:
f (file object) – open file
instances (dict) – dictionary for storing the instances
number_of_instances (int) – number of instances to parse
- Returns:
None
- parse_node_set(f, instance, number_of_node_sets)[source]
Parse the section that contains node sets
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the node sets
number_of_node_sets (int) – number of node sets to parse
- Returns:
None
- parse_nodes(f, instance, number_of_nodes, embedded_space)[source]
Parse the section that contains nodes
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the nodes
number_of_nodes (int) – number of nodes to parse
embedded_space (str) – type of embedded space
- Returns:
None
- parse_rigid_bodies(f, instance, number_of_rigid_bodies)[source]
Parse the section that contains rigid bodies
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the rigid bodies
number_of_rigid_bodies (int) – number of rigid bodies to parse
- Returns:
None
- parse_section_categories(f, categories, number_of_categories)[source]
Parse the section that contains section categories
- Parameters:
f (file object) – open file
categories (dict) – dictionary for storing the section categories
number_of_categories (int) – number of section categories to parse
- Returns:
None
- parse_steps(f, steps, number_of_steps)[source]
Parse the section that contains the data for steps
- Parameters:
f (file object) – open file
steps (dict) – dictionary for storing the steps
number_of_steps (int) – number of steps to parse
- Returns:
None
- parse_surfaces(f, instance, number_of_surfaces)[source]
Parse the section that contains surfaces
- Parameters:
f (file object) – open file
instance (dict) – dictionary for storing the surfaces
number_of_surfaces (int) – number of surfaces to parse
- Returns:
None
- save_dict_to_group(h5file, path, data_member, output_file)[source]
Recursively save data from a Python dictionary to an hdf5 file.
This method can handle data of type int, float, str, and xarray Dataset, as well as lists or dictionaries of the aforementioned types. Tuples are assumed to contain ints or floats.
- Parameters:
h5file (stream) – file stream to write data into
path (str) – name of hdf5 group to write into
data_member (dict) – member of dictionary
output_file (str) – name of h5 output file
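The recursion pattern can be sketched without an hdf5 dependency by letting nested plain dictionaries stand in for hdf5 groups (the real method writes through an h5py-style file stream; everything below is a simplified illustration, not the WAVES implementation):

```python
# Hypothetical sketch of the recursive dict-to-group save pattern.
# Plain dicts stand in for hdf5 groups; scalars and sequences become "datasets",
# and nested dictionaries recurse into "subgroups".
def save_dict_to_group(group, path, data_member):
    for key, value in data_member.items():
        if isinstance(value, dict):
            subgroup = group.setdefault(key, {})  # create a nested "group"
            save_dict_to_group(subgroup, f"{path}{key}/", value)
        elif isinstance(value, (int, float, str, list, tuple)):
            group[key] = value  # store the leaf value as a "dataset"
        else:
            raise TypeError(f"Unsupported type at {path}{key}: {type(value)}")


root = {}
save_dict_to_group(
    root,
    "/",
    {"steps": {"Step-1": {"frames": [0, 1]}}, "version": "1.0"},
)
```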
- setup_extract_field_format(field, line)[source]
Set up field output formatting for the extract format
- Parameters:
field (dict) – dictionary with field data
line (str) – current line of file
- Returns:
dictionary in which to store the field values
- Return type:
dict
- setup_extract_history_format(output, current_history_output)[source]
Set up history output formatting for the extract format
- Parameters:
output (dict) – dictionary with history output data
current_history_output (int) – current history output count