Description
There are several model_mods and core DART modules that have a fixed-size memory requirement on each processor. Total memory usage is therefore static_mem * num_procs (per-core usage does not shrink as you add processors), and this is a hard limit on the model size DART can handle.
Goal:
memory usage per core = static_mem / num_procs
total memory usage = static_mem
Rather than the current:
memory usage per core = static_mem
total memory usage = static_mem * num_procs
Note the code may need to be sensible about which static data is tiny (fine to replicate on every core) and which is large (worth distributing).
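For scale (hypothetical numbers, not tied to any particular model): a static 3d field of 500 x 500 x 50 grid points in 8-byte reals is ~100 MB. Replicated across 1000 MPI tasks that is ~100 GB of aggregate memory; distributed, the aggregate stays ~100 MB (~0.1 MB per task).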
Static data in DART:
- Static data, the same across the ensemble (candidates for distribution; see the sketch at the end of this section):
- WRF phb (static data the size of a 3d variable). A WRF model_mod version with distributed phb was written in 2014-16 but never released.
- Mesh structures (e.g. MPAS)
- quad_interp utility data structures (particularly for the MOM6 CESM3 workhorse 2/3-degree grid)
- POP interpolation data structures
- get_close data structures
- Per-ensemble-member static data:
This currently gets put into the state, so it is inflated (arguably it should not be). An example (I think) is the CLM fields that are 'no-update'; see bug: inflation files when using 'no copy back' variables #276
In addition (to be tracked as a separate issue), observation sequence files are replicated on every core, which is particularly costly for external forward operators stored in the obs sequence.
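A minimal sketch of one way to distribute a replicated static array (something phb-sized, say) using MPI one-sided communication: each task keeps n_global/num_tasks elements and any task can fetch a remote element with MPI_Get. All names and sizes here are illustrative, not actual DART code.

```fortran
program distributed_static_sketch
  ! Sketch only: distribute a static array so each task stores
  ! n_global/num_tasks elements instead of all n_global of them.
  use mpi
  implicit none

  integer, parameter :: n_global = 1000000     ! illustrative global size
  integer :: ierr, my_task, num_tasks, win
  integer :: n_local, owner, i
  integer(kind=MPI_ADDRESS_KIND) :: win_bytes, target_disp
  real(8), allocatable :: local_chunk(:)       ! this task's slice of the static field
  real(8) :: remote_val(1)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_task, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, num_tasks, ierr)

  ! Assume n_global is divisible by num_tasks to keep the sketch short.
  n_local = n_global / num_tasks
  allocate(local_chunk(n_local))
  local_chunk = real(my_task, 8)               ! stand-in for reading this slice from a file

  ! Expose the local slice in an MPI window so other tasks can read it.
  win_bytes = int(n_local, MPI_ADDRESS_KIND) * 8_MPI_ADDRESS_KIND
  call MPI_Win_create(local_chunk, win_bytes, 8, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

  ! Any task can now look up element i of the conceptual global array.
  i = 42
  owner       = (i - 1) / n_local              ! which task stores element i
  target_disp = mod(i - 1, n_local)            ! 0-based offset within the owner's slice

  call MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, win, ierr)
  call MPI_Get(remote_val, 1, MPI_REAL8, owner, target_disp, 1, MPI_REAL8, win, ierr)
  call MPI_Win_unlock(owner, win, ierr)

  if (my_task == 0) print *, 'element', i, 'is owned by task', owner, 'value', remote_val(1)

  call MPI_Win_free(win, ierr)
  deallocate(local_chunk)
  call MPI_Finalize(ierr)
end program distributed_static_sketch
```

Owning contiguous slices keeps the owner/offset lookup to an integer divide and a modulo; the trade-off is an RMA call on every remote access, which is why tiny structures are still better replicated on every core, as noted above.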