Persistent Caching #31
Closed
Issue Addressed
This update adds persistent internal state caching of storages for all HBV physical models, enabling these models to be run sequentially in time. This is critical in scenarios where running a model on all timesteps simultaneously in a batched fashion is not possible.
Description
This is the sister PR to mhpi/generic_deltamodel/pull/73, which supports the changes made to the physical models here. We advise reviewing that PR to contextualize the necessity of these changes. TL;DR: we add persistent caching, along with the methods `.get_states()` and `.load_states(<your states>)` to fetch states from, and load states into, these models, respectively. These methods connect to dMG and will eventually enable natively saving states to, and loading states from, a file so that they persist after a runtime has completed. Furthermore, the changes here support simplifying sequential modeling applications like NOAA's NextGen and CIROH-UA's NGIAB.
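To illustrate the sequential-in-time workflow these methods enable, here is a minimal, hypothetical sketch. `ToyBucket` is a stand-in for an HBV physical model, and the `.get_states()`/`.load_states()` signatures shown are illustrative assumptions, not dMG's actual API:

```python
# Hypothetical sketch of the sequential-run pattern this PR enables.
# ToyBucket stands in for an HBV physical model; real dMG models and
# their .get_states()/.load_states() signatures may differ.
class ToyBucket:
    def __init__(self):
        self.storage = 0.0  # internal storage state

    def forward(self, precip):
        """Run a batch of timesteps, returning runoff per step."""
        runoff = []
        for p in precip:
            self.storage += p
            q = 0.5 * self.storage  # half the storage drains each step
            self.storage -= q
            runoff.append(q)
        return runoff

    def get_states(self):
        """Fetch the cached internal states (here, just the storage)."""
        return {"storage": self.storage}

    def load_states(self, states):
        """Restore internal states from a previous run."""
        self.storage = states["storage"]

precip = [1.0, 2.0, 0.5, 3.0]

# One continuous run over all timesteps.
full = ToyBucket()
q_full = full.forward(precip)

# Two sequential runs with a state handoff in between.
first = ToyBucket()
q_a = first.forward(precip[:2])
saved = first.get_states()   # persist (e.g., eventually to a file) ...
second = ToyBucket()
second.load_states(saved)    # ... and resume in a later runtime
q_b = second.forward(precip[2:])

assert q_a + q_b == q_full   # sequential runs match the batched run
```

The key property is the final assertion: splitting the simulation across runtimes reproduces the batched result exactly, provided the storage states are carried over.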
Note: we only consider the models' storage states here. In the second part of this update, parameter caching and saving will also be added for native access in dMG.
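For context on how such a state cache typically interacts with device memory, here is a hedged sketch of a CPU-offload caching pattern. `FakeTensor` is a hypothetical stand-in for `torch.Tensor`, and the class and method names are illustrative, not dMG's actual implementation:

```python
# Hypothetical sketch of a CPU-offload state cache. FakeTensor stands in
# for torch.Tensor; names here are illustrative, not dMG's API.
class FakeTensor:
    def __init__(self, data, device="cuda"):
        self.data = list(data)
        self.device = device

    def to(self, device):
        # In torch, a cross-device .to() copies memory; that copy on
        # every forward call is the potential overhead in question.
        return FakeTensor(self.data, device)

class StateCache:
    def __init__(self):
        self._cache = {}

    def cache_states(self, states):
        # Always store on CPU so cached states do not pin GPU memory.
        self._cache = {k: v.to("cpu") for k, v in states.items()}

    def load_states(self, device):
        # Move cached states back to the model's target device.
        return {k: v.to(device) for k, v in self._cache.items()}

cache = StateCache()
cache.cache_states({"snowpack": FakeTensor([1.0, 2.0], device="cuda")})
restored = cache.load_states("cuda")
assert cache._cache["snowpack"].device == "cpu"
assert restored["snowpack"].device == "cuda"
```

Caching to CPU keeps states alive independently of the GPU, at the cost of a device-to-host copy per cache and a host-to-device copy per load.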
Note Pt. 2: it is not yet clear whether caching states at every forward call adds nontrivial I/O overhead -- this will not be evident in minimal test cases and will require field testing. In particular, state tensors are always moved to CPU memory when cached, then moved back to the model's target device when using `cache_states`; hence the overhead.
Inspiration
These additions were motivated in part by the state-loading work @ZhennanShi1 first introduced for HydroDL and in a later PR, as well as by #72 (for which I hope this begins to enable use in sequential applications). Furthermore, this work was motivated by, and makes great strides toward, simplifying dMG model applications in NOAA's NextGen national water modeling framework and CIROH-UA's NGIAB.
Type of Change
Other (please specify):
Checklist