feat: raise open file descriptors limit at startup #342
Conversation
What does this PR do? Does it try to raise the number of open files, or does it handle the error in a better way? (Sorry, this isn't clear to me given my limited experience with Rust.)
Yes, this PR tries to increase the number of open files, and that's the better approach for multi-threaded apps. The error is triggered not after the files are being watched but during initialization of the watcher, i.e. it occurs before the actual watching begins, which means the same error will occur with all I/O operations.
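To make the discussion concrete, here is a minimal sketch of the approach being described: raise the process's own soft RLIMIT_NOFILE up to its hard limit at startup. The real PR uses the fdlimit crate; the raw syscall bindings and the RLIMIT_NOFILE constant below are Linux-specific assumptions, not code from the PR.

```rust
use std::io;

// rlimit struct as defined on 64-bit Linux (rlim_t is a u64 there).
#[repr(C)]
struct Rlimit {
    rlim_cur: u64, // soft limit
    rlim_max: u64, // hard limit
}

const RLIMIT_NOFILE: i32 = 7; // Linux value (macOS uses 8)

extern "C" {
    fn getrlimit(resource: i32, rlim: *mut Rlimit) -> i32;
    fn setrlimit(resource: i32, rlim: *const Rlimit) -> i32;
}

/// Returns (old soft limit, new soft limit) on success.
fn raise_nofile_limit() -> io::Result<(u64, u64)> {
    let mut lim = Rlimit { rlim_cur: 0, rlim_max: 0 };
    if unsafe { getrlimit(RLIMIT_NOFILE, &mut lim) } != 0 {
        return Err(io::Error::last_os_error());
    }
    let before = lim.rlim_cur;
    // Only ask for up to the hard limit, so the call cannot fail
    // for exceeding it.
    lim.rlim_cur = lim.rlim_max;
    if unsafe { setrlimit(RLIMIT_NOFILE, &lim) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok((before, lim.rlim_cur))
}

fn main() {
    match raise_nofile_limit() {
        Ok((from, to)) => println!("soft fd limit: {from} -> {to}"),
        Err(e) => eprintln!("could not raise fd limit: {e}"),
    }
}
```

Note that only the soft limit of the current process changes; the system-wide hard limit set by the administrator is untouched.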
I am dubious this is the correct fix: the maximum number of open files is a system property that should be controlled by the user, not the program.
While I'm not sure how you encountered this issue (are there steps to reproduce it?), it seems to me that it's not specific to the watcher but affects all I/O operations within Yazi. When this error occurs, it doesn't only impact the watcher; even regular directory listings are affected. Currently it just becomes apparent during watcher initialization and causes the program to exit prematurely. For apps like Yazi that use multi-threading to accelerate processing, a default soft limit of 256 is quite limiting for large directories. I'm not quite sure what you mean by "raising the number may fail": are you saying that if a user sets the system's hard limit to a very small value, the increase is ineffective? I believe that would be an issue with the user's environment. The hard limit is typically set to "unlimited" or a very large value, and it's unlikely to be exceeded. Please correct me if I'm wrong!
I mean exactly that: it can fail and all that can fail will fail.
Well, I dunno about that, and I doubt it. I think changing this value is not for Yazi to decide, since it has side effects that can affect the user's system in ways the user might not realize (more fds => strained resources). IMO it would be best not to change the value but to gracefully exit with a message on how to increase that number. Btw, I've updated my value from 1024 to 30k (turns out I had commented out the code doing that recently for another reason).
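The graceful-exit alternative proposed here could look roughly like the following. The MIN_FDS threshold, the function name, and the hint text are all hypothetical, not taken from Yazi:

```rust
// Sketch of the alternative: don't change the limit, just refuse to
// start when it is too low, and tell the user how to raise it.
const MIN_FDS: u64 = 1024; // hypothetical minimum

/// `soft` is the current soft RLIMIT_NOFILE, queried elsewhere.
fn check_fd_limit(soft: u64) -> Result<(), String> {
    if soft < MIN_FDS {
        Err(format!(
            "open-file limit is {soft}, but at least {MIN_FDS} is needed; \
             run `ulimit -n {MIN_FDS}` (or raise the limit permanently in \
             limits.conf) and restart yazi"
        ))
    } else {
        Ok(())
    }
}

fn main() {
    // Pretend the queried soft limit is the macOS default of 256.
    if let Err(msg) = check_fd_limit(256) {
        // A real app would std::process::exit(1) after printing this.
        eprintln!("{msg}");
    }
}
```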
The value is specific to the process, and what this PR does is modify Yazi's own maximum available limit. Prompting the user to change it themselves isn't ideal: even if they configure it permanently, it's still not a suitable method, since some single-threaded apps don't need such concurrency, and raising the limit globally can impact overall system performance. I believe this should be the responsibility of the concurrent app itself. It's just making a single call to raise its own process limit.
OK, seems legit in retrospect.
If the request exceeds the hard limit it could fail, so a proper error message and exit would be neat in that case.
No, it won't exceed the hard limit. The crate caps the request at the hard limit, so it should always succeed unless the user has explicitly disabled that system call: https://github.com/paritytech/fdlimit/blob/eee618fbe779452099f1857dd9191ec711067af6/src/lib.rs#L91-L93 Since this error only occurs during I/O operations and is not specific to the watcher, any I/O operation can potentially encounter "Too many open files". One possible approach is to add this check to all I/O operations, but that would be a substantial amount of work, and some I/O happens in async contexts where it's not practically feasible. I'm not sure there's a better solution than simply raising the app's own limit, which should work in most cases.
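The capping behavior pointed at in the linked fdlimit lines amounts to clamping the requested soft limit to the hard limit before calling setrlimit. A tiny sketch (the function name is mine, not the crate's):

```rust
// Clamp a requested soft limit to the hard limit, so the subsequent
// setrlimit call can never ask for more than the system allows.
fn clamp_to_hard_limit(desired: u64, hard: u64) -> u64 {
    desired.min(hard)
}

fn main() {
    // e.g. asking for 1,000,000 fds when the hard limit is 524,288
    println!("{}", clamp_to_hard_limit(1_000_000, 524_288)); // prints 524288
    // a request below the hard limit passes through unchanged
    println!("{}", clamp_to_hard_limit(100, 524_288)); // prints 100
}
```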
Thanks for the explanation, sounds good enough. Also, users can now search and find this PR and ticket, so the solution is easier for everyone to discover. Go ahead!
Closes #340