The current lru_cache code is complicated because it mixes several intentions. Let's untangle them.
The two goals are 1) in-memory LRU memoization to reduce remote data fetches; 2) persistent caching, primarily so that tests do not require network access.
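For goal 1, the stdlib already covers in-memory LRU memoization. As a minimal sketch (the `fetch_transcript` function is a hypothetical stand-in for a remote hdp call, not the real interface):

```python
import functools

# Goal 1: in-memory LRU memoization via the stdlib
# (functools.lru_cache, available since Python 3.2).
@functools.lru_cache(maxsize=128)
def fetch_transcript(ac):
    # stand-in for a remote fetch; the real hdp call would go here
    return {"ac": ac}

fetch_transcript("NM_000551.3")  # first call: simulated remote fetch
fetch_transcript("NM_000551.3")  # second call: served from memory
print(fetch_transcript.cache_info())
```

Goal 2 (persistence) is the part the stdlib does not provide, which is why the two concerns deserve separate mechanisms.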
Consequences of mixing these concerns are:
- Can't use external lru_cache code (including functools.lru_cache in Python 3.x)
- Configuration is confusing. uta connect() requires a cache mode that's different from the lru_cache mode, and neither checks whether the supplied value is valid.
- Learning mode is slow (probably because it writes back every time).
- As implemented, the hdp interface is also entangled in caching.
I'm going to park this as a placeholder for discussion. I think I'd like to see us revert to a cache-unaware UTA module, plus a (new) caching hdp that wraps an underlying hdp (e.g., uta) and merely caches its results.
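The wrapper idea above could look roughly like this. Everything here is a hedged sketch, not the real hdp interface: the class name, the `save()` method, and the pickle-backed persistence are assumptions for discussion.

```python
import pickle


class CachingDataProvider:
    """Hypothetical sketch: wraps any hdp-like object and caches its results.

    Method names on the wrapped provider are whatever the real hdp
    interface defines; this class delegates them generically.
    """

    def __init__(self, hdp, cache_path=None):
        self._hdp = hdp
        self._cache = {}
        self._cache_path = cache_path
        if cache_path:
            try:
                with open(cache_path, "rb") as f:
                    self._cache = pickle.load(f)
            except FileNotFoundError:
                pass  # no persisted cache yet; start empty

    def __getattr__(self, name):
        # Delegate every hdp attribute; memoize callables by (method, args).
        target = getattr(self._hdp, name)
        if not callable(target):
            return target

        def cached(*args):
            key = (name, args)
            if key not in self._cache:
                self._cache[key] = target(*args)
            return self._cache[key]

        return cached

    def save(self):
        # Explicit write-back, instead of writing on every learned entry
        # (which is presumably why learning mode is slow today).
        if self._cache_path:
            with open(self._cache_path, "wb") as f:
                pickle.dump(self._cache, f)
```

With this split, UTA stays cache-unaware, tests can wrap a recorded provider, and the write-back policy becomes an explicit, separate decision.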