FlowSieve
3.4.0
FlowSieve Coarse-Graining Documentation
FlowSieve is developed as an open resource by the Complex Flow Group at the University of Rochester, under the sponsorship of the National Science Foundation and the National Aeronautics and Space Administration. Continued support for FlowSieve depends on demonstrable evidence of the code’s value to the scientific community. We kindly request that you cite the code in your publications and presentations. FlowSieve is made available under The Open Software License 3.0 (OSL-3.0) (see the license file or the human-readable summary at the end of the README), which means it is open to use, but requires attribution.
The following citations are suggested:
For journal articles, proceedings, etc., we suggest:
Other articles that may be relevant to the work are:
For presentations, posters, etc., we suggest acknowledging:
Aluie 2018 demonstrated how, when applied appropriately, coarse-graining can be applied not only in a data-processing sense, but also to the governing equations. This provides a physically meaningful and mathematically coherent way to quantify not only how much energy is contained in different length scales, but also how much energy is being transferred between scales.
FlowSieve is a heavily-parallelized coarse-graining codebase that provides tools for spatially filtering both scalar fields and vector fields in Cartesian and spherical geometries. Specifically, filtering velocity vector fields on a sphere provides a high-powered tool for scale-decomposing oceanic and atmospheric flows following the mathematical results in Aluie 2019.
FlowSieve is designed to work in high-performance computing (HPC) environments in order to efficiently analyse large oceanic and atmospheric datasets and extract scientifically meaningful diagnostics, including scale-wise energy content and energy transfer.
The tutorials, in addition to providing introductory instruction to using FlowSieve, also provide a way to verify that your installation is working as expected. The provided Jupyter notebooks include the figures that were generated by the developers, and provide a reference. As always, feel free to contact the developers for assistance (see Community Guidelines below).
A series of basic tutorials is provided to outline various usage cases as well as how to use / process the outputs.
Some details regarding underlying methods are discussed on this page (warning, math content).
For notes about the Helmholtz decomposition, go to this page.
For notes on installation, please see this page.
The coarse-graining codebase uses netcdf files for both input and output. Dimension orderings are assumed to follow the CF-convention of (time, depth, latitude, longitude).
scale_factor, offset, and fill_value attributes are applied to output variables following CF-convention usage.
Where possible, units and variable descriptions (which are provided in constants.hpp) are also included as variable attributes in the output files.
Currently, no other filetypes are supported.
Post-processing (such as region-averaging, Okubo-Weiss histogram binning, time-averaging, etc.) can be enabled and run on-line by setting the APPLY_POSTPROCESS flag in constants.hpp to true.
This will produce an additional output file for each filtering scale.
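For illustration, enabling on-line post-processing is a one-line change in constants.hpp; the exact type qualifiers and surrounding namespace may differ between versions, so treat this as a sketch rather than the literal declaration:

```cpp
// In constants.hpp (sketch; check your copy for the exact declaration):
// enable on-line post-processing (region averages, histograms, etc.)
const bool APPLY_POSTPROCESS = true;
```

Remember that constants.hpp is compiled in, so the code must be rebuilt after changing this flag.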
Various geographic regions of interest can be provided in a netcdf file.
--version
./coarse_grain.x --version
prints a summary of the constants / variables used when compiling.
--help
./coarse_grain.x --help
prints a summary of the run-time command-line arguments, as well as default values (if applicable).
When specifying filtering scales, consider a wide sweep. It can also be beneficial to use logarithmically-spaced scales, for plotting purposes. Python can be helpful for this. For example, numpy.logspace( np.log10(50e3), np.log10(2000e3), 10 )
would produce 10 logarithmically-spaced filter scales between 50km and 2000km.
Hint: to print filter scales to only three significant digits, the numpy.format_float_scientific
function can help.
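Combining the two hints above, a minimal Python sketch (assuming only numpy is available) that generates the ten scales and prints each to three significant digits:

```python
import numpy as np

# Ten logarithmically-spaced filter scales between 50 km and 2000 km (in metres)
scales = np.logspace(np.log10(50e3), np.log10(2000e3), 10)

# precision=2 keeps two digits after the decimal point, i.e. three significant digits
print([np.format_float_scientific(s, precision=2) for s in scales])
```

The resulting strings can be joined with spaces to build the argument for the filtering executable.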
If you are using a bash script (e.g. a job-submission script), an easy way to pass the filter scales on to the coarse-graining executable is to define a variable that has the list of scales, and then just pass that to the executable using the --filter_scales flag.
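As a sketch of that pattern (the scale values are illustrative, and the echo is included only so the command can be previewed without running the executable; check your version's --help output for the expected scale-list separator):

```shell
#!/bin/bash
# Define the list of filter scales (in metres) once...
FILTER_SCALES="50e3 100e3 200e3 500e3 1000e3 2000e3"

# ...then pass the whole list via the --filter_scales flag.
# (Remove the leading echo to actually launch the run.)
echo ./coarse_grain.x --filter_scales "${FILTER_SCALES}"
```

Keeping the scales in one variable means job scripts for different sweeps differ in a single line.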
Some known issues (with solutions where available) are given on this page.
Setting the debug flag in the Makefile specifies how much information is printed during runtime.
This list may not be quite up-to-date. Rule of thumb:
Additionally, setting DEBUG>=1 will result in slower runtime, since it enables bounds-checking in the apply-filter routines (i.e. vector.at() vs vector[]). These routines account for the vast majority of runtime outside of very small filter scales (which are fast enough to not be a concern), and so this optimization was only applied to those routines.
See the function map for the main filtering function to get an overview of the function dependencies.
This is a brief human-readable summary of the OSL-3.0 license, and is not the actual license. See licence.md for the full license details.
You are free:
As long as you: