
Conversation

@sxyazi
Owner

@sxyazi sxyazi commented Nov 6, 2023

Close #340

@teto

teto commented Nov 6, 2023

What does the PR do? Does it try to raise the number of open files, or does it address the error in a better way? (Sorry, this isn't clear to me given my limited experience with Rust.)

@sxyazi
Owner Author

sxyazi commented Nov 6, 2023

Yes, this PR tries to increase the number of open files, and this is a better approach for multi-threaded apps.

That's because the error is triggered not while watching files but during the watcher's initialization, i.e. it occurs before any actual watching begins, which means the same error will occur in all I/O operations.
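For context, here is a minimal sketch of the underlying mechanism, assuming the libc crate; the PR itself delegates to the fdlimit crate, so the code below is illustrative rather than Yazi's actual implementation:

```rust
use std::io;

/// Raise this process's soft RLIMIT_NOFILE (max open files) up to the
/// hard limit, returning the new soft limit on success.
fn raise_nofile_limit() -> io::Result<libc::rlim_t> {
    let mut rl = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    // Read the current soft (rlim_cur) and hard (rlim_max) limits.
    if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rl) } != 0 {
        return Err(io::Error::last_os_error());
    }
    // An unprivileged process may raise its soft limit up to, but not
    // beyond, the hard limit.
    rl.rlim_cur = rl.rlim_max;
    if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rl) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(rl.rlim_cur)
}
```

Note that on macOS the hard limit may be reported as RLIM_INFINITY while the kernel caps the effective value lower; the fdlimit crate accounts for such platform quirks, which this sketch ignores.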

@teto

teto commented Nov 6, 2023

I am dubious this is the correct fix: the max-open-files limit is a system property that should be controlled by the user, not the program.
Also, this just delays the issue: depending on the configuration, raising the number may fail, and if I run even more services, yazi will fail just as it does today.
Is the file watcher necessary for yazi, or can it work in a degraded state? Otherwise I would suggest, as a fix, just failing more gracefully with a message like "please raise the number of open files with ulimit -n XXXX or close some file handles".

@sxyazi
Owner Author

sxyazi commented Nov 6, 2023

While I'm not sure how you encountered this issue (are there any steps I can follow to reproduce it?), it seems to me that it's not specific to the watcher but rather affects all I/O operations within Yazi.

When this error occurs, it impacts not only the watcher; even regular directory listings are affected. Currently it just becomes apparent during watcher initialization and causes the program to exit prematurely. For apps like Yazi that use multi-threading to accelerate processing, a default soft limit of 256 seems quite limiting for large directories.

I'm not quite sure what you mean by "raising the number may fail". Are you saying that if a user sets the system's hard limit to a very small value, the increase is ineffective? I believe that would be an issue with the user's environment. The hard limit should typically be set to "unlimited" or a very large value, and it's unlikely to be exceeded. Please correct me if I'm wrong!

[Screenshot: CleanShot 2023-11-06 at 19 22 14]

@teto

teto commented Nov 8, 2023

> I'm not quite sure what you mean by "raising the number may fail". Are you saying that if a user sets the system's hard limit to a very small value, the increase is ineffective? I believe that would be an issue with the user's environment.

I mean exactly that: it can fail and all that can fail will fail.

> The hard limit should typically be set to "unlimited" or a very large value, and it's unlikely to be exceeded. Please correct me if I'm wrong!

Well, I don't know about that, and I doubt it. I think changing this value is not for yazi to decide, since it has side effects that can affect the user's system in ways the user might not realize (more fds => strained resources). IMO it would be best not to change the value, but to exit gracefully with a message on how to increase that number.

Btw, I've updated my value from 1024 to 30k (it turns out I had commented out the code doing that recently, for another reason).

@sxyazi
Owner Author

sxyazi commented Nov 9, 2023

The value is specific to the process, and what this PR does is modify Yazi's own maximum available limit. Prompting the user to run ulimit -n 1024 manually is not a reasonable approach; it would require the user to do this in every shell session before using Yazi.

Even if it's configured permanently, it's still not a suitable method: some single-threaded apps don't need such concurrency, and raising the limit for everything can impact overall system performance. I believe this should be the responsibility of the concurrent app itself.

It's just making a setrlimit system call with a completely legal value; I'm not sure why it would fail. Maybe the user has disabled this system call? That would be an issue with the user's environment.

@teto

teto commented Nov 10, 2023

ok, seems legit in retrospect

> It's just making a setrlimit system call with a completely legal value; I'm not sure why it would fail.

If the request exceeds the hard limit it could fail, so a proper error message and a clean exit would be neat in that case.

@sxyazi
Owner Author

sxyazi commented Nov 10, 2023

> If the request exceeds the hard limit it could fail, so a proper error message and a clean exit would be neat in that case.

No, it won't exceed the hard limit. The crate caps the request at the hard limit, so the call should always succeed unless the user has explicitly disabled that system call: https://github.com/paritytech/fdlimit/blob/eee618fbe779452099f1857dd9191ec711067af6/src/lib.rs#L91-L93
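For the curious, a hedged sketch of the clamping in that linked snippet (variable names are mine, not the crate's):

```rust
// Illustrative only: the requested soft limit is capped at the hard limit
// before setrlimit is called, so the request never exceeds what the kernel
// allows for this process.
let mut rl = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rl) };
let wanted: libc::rlim_t = 16_384; // hypothetical target value
rl.rlim_cur = wanted.min(rl.rlim_max); // clamp to the hard limit
unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rl) };
```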

Since this error only occurs during I/O operations and is not specific to the watcher, any I/O operation can potentially encounter "Too many open files."

One possible approach is to add this check to all I/O operations, but that would be a substantial amount of work, and some I/O happens in async contexts where it's not practically feasible. I'm not sure there's a better solution than simply raising the app's own limit, which should work in most cases.
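For a sense of what that per-operation check would entail, here is a hedged sketch; the helper names are hypothetical rather than actual Yazi code, and this pattern would have to be repeated across every I/O path:

```rust
use std::{fs, io, path::Path};

// Hypothetical helper: detect EMFILE ("Too many open files") on a failed
// I/O operation so the call site can surface a hint to the user.
fn is_too_many_open_files(err: &io::Error) -> bool {
    err.raw_os_error() == Some(libc::EMFILE)
}

fn read_dir_with_hint(path: &Path) -> io::Result<fs::ReadDir> {
    fs::read_dir(path).map_err(|e| {
        if is_too_many_open_files(&e) {
            eprintln!("Too many open files; try `ulimit -n 4096` or close some handles");
        }
        e
    })
}
```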

@teto

teto commented Nov 10, 2023

Thanks for the explanation, that sounds good enough. Also, users can now search for and find this PR and ticket, so the solution is easier for everyone to discover. Go ahead!

@sxyazi sxyazi merged commit 34d4be4 into main Nov 10, 2023
@sxyazi sxyazi deleted the pr-a9c266c7 branch November 10, 2023 15:03
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 10, 2024

Development

Successfully merging this pull request may close these issues.

yazi crashes once the limit of open files is reached
