Description
From my observations, and some feedback from test users, there can be noticeable latency when retrieving file data from a large exposed instance (in this case c. 45 TB of data).
It becomes more apparent as you navigate down a series of directories, with the size/scope growing at each level.
e.g. /root/main title/sub title (27 folders)/possibly a few hundred or more folders/up to 10 folders/content
I don't know whether an index is feasible (one that refreshes itself when a user eventually navigates down, in case of changes, OR one that updates every hour), or whether an existing file cache could be reused (e.g. rclone's, if available); see the sketch below for the hourly-refresh idea.
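To make the suggestion concrete, here is a minimal sketch of the "refresh at most every N minutes" variant, written in Go on the assumption that the server is Go-based. All names here (ListingCache etc.) are illustrative, not Pupcloud's actual API; the point is just that a listing fetched from the slow rclone mount is kept in memory and only re-read once it goes stale.

```go
// Hypothetical sketch: a TTL cache for directory listings, so repeated
// navigation into large folders doesn't hit the rclone mount every time.
package main

import (
	"fmt"
	"os"
	"sync"
	"time"
)

// entry holds one cached directory listing and when it was fetched.
type entry struct {
	names   []string
	fetched time.Time
}

// ListingCache caches os.ReadDir results for a fixed TTL.
type ListingCache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

func NewListingCache(ttl time.Duration) *ListingCache {
	return &ListingCache{ttl: ttl, m: make(map[string]entry)}
}

// List returns the cached listing for dir, re-reading it from the
// (possibly remote) filesystem only when the entry is missing or stale.
func (c *ListingCache) List(dir string) ([]string, error) {
	c.mu.Lock()
	if e, ok := c.m[dir]; ok && time.Since(e.fetched) < c.ttl {
		c.mu.Unlock()
		return e.names, nil
	}
	c.mu.Unlock()

	// Slow path: read the directory outside the lock.
	des, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(des))
	for _, de := range des {
		names = append(names, de.Name())
	}

	c.mu.Lock()
	c.m[dir] = entry{names: names, fetched: time.Now()}
	c.mu.Unlock()
	return names, nil
}

func main() {
	cache := NewListingCache(time.Hour) // refresh at most hourly, per the FR
	names, err := cache.List("/root")   // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(len(names), "entries")
}
```

The trade-off is obvious but probably acceptable here: a listing can be up to one TTL out of date, which is why the alternative "refresh when the user navigates down" behaviour might suit changing data better.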
Use case: Pupcloud on a VPS, with an external cloud data store exposed via an rclone-mounted drive point. Empirically it seems slower than a comparable Filebrowser configuration so far.
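For what it's worth, on the rclone side the mount's own directory cache can be tuned without any change in Pupcloud: something like `rclone mount remote: /mnt/data --dir-cache-time 1h` (remote name and mount path are placeholders for the test environment's actual values) keeps directory listings in memory for an hour instead of the default few minutes, which might mask some of this latency on its own.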
(Some test users thought the application had crashed, which strengthens the case for the "spinner" on network activity requested in the earlier FR.)
Edit: this refers ONLY to the file-structure metadata, NOT the actual file contents.
[If the developer wants access to the referenced test environment, just get in touch.]