cephfs qos: implement cephfs qos based on token bucket algorithm. #29266
Conversation
|
Please also include this PR description in the commit message itself, along with any pertinent information you also shared in your mailing list post. A tracker ticket would also be appropriate so we can account for this feature during release credits. |
|
@batrick Thanks for your suggestion. The tracker URL is: https://tracker.ceph.com/issues/40986 |
The basic idea is as follows:
All clients use the same QoS setting, just as implemented in this PR. Since there may be multiple mount points, limiting the total IO would also limit the number of mount points, so this implementation does not limit the total IOPS & BPS.
The QoS info is set as one of the dir's xattrs; all clients that can access the same dir share the same QoS setting.
The config flow follows Quota's: when the MDS receives a QoS setting, it also broadcasts the message to all clients, so the limit can be changed online.
[support]:
limit && burst config
[usage]:
setfattr -n ceph.qos.limit.iops -v 200 /mnt/cephfs/testdirs/
setfattr -n ceph.qos.burst.read_bps -v 200 /mnt/cephfs/testdirs/
getfattr -n ceph.qos.limit.iops /mnt/cephfs/testdirs/
getfattr -n ceph.qos /mnt/cephfs/testdirs/
[problems]:
Because there is no queue in the CephFS IO path, if the BPS limit is lower than a request's block size, the whole client will be blocked until it gets enough tokens.
Signed-off-by: Wang Songbo <[email protected]>
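For illustration only, here is a minimal token-bucket sketch (hypothetical class and names, not the code in this PR) showing how separate limit (refill rate) and burst (bucket capacity) parameters shape admission, and why a request that needs far more tokens than the rate supplies per second blocks the caller for a long time, as described under [problems] above.

#include <algorithm>
#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical sketch of a token bucket, not the PR's implementation.
// "limit" is the steady refill rate (tokens per second) and "burst" is
// the bucket capacity. A caller is charged immediately and sleeps off
// any deficit, so the long-run rate never exceeds "limit".
class TokenBucket {
public:
  TokenBucket(double limit, double burst)
      : rate_(limit), capacity_(burst), tokens_(burst),
        last_(std::chrono::steady_clock::now()) {}

  // Consume `need` tokens; block the caller until any deficit is repaid.
  void get(double need) {
    refill();
    tokens_ -= need;               // may go negative (deficit)
    if (tokens_ < 0.0) {
      // With no queue, the whole caller blocks here; this mirrors the
      // situation described under [problems] when the BPS limit is
      // smaller than a request's block size.
      std::this_thread::sleep_for(
          std::chrono::duration<double>(-tokens_ / rate_));
    }
  }

private:
  void refill() {
    auto now = std::chrono::steady_clock::now();
    double elapsed = std::chrono::duration<double>(now - last_).count();
    last_ = now;
    tokens_ = std::min(capacity_, tokens_ + elapsed * rate_);
  }

  double rate_;      // e.g. the value of ceph.qos.limit.iops / *_bps
  double capacity_;  // e.g. the value of ceph.qos.burst.read_bps
  double tokens_;
  std::chrono::steady_clock::time_point last_;
};

int main() {
  TokenBucket iops(200.0, 200.0);  // 200 IOPS limit, burst of 200
  for (int i = 0; i < 5; ++i) {
    iops.get(1.0);                 // one token per IO request
    std::cout << "request " << i << " admitted\n";
  }
  return 0;
}

In this sketch the caller sleeps off the deficit, so a single request much larger than the per-second rate only returns after a long wait; the PR hits the analogous problem because there is no queue in the client IO path to absorb it.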
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
Sorry for the delay reviewing this. We'll take a look in the next month as things cool down for Octopus. |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
This pull request has been automatically closed because there has been no activity for 90 days. Please feel free to reopen this pull request (or open a new one) if the proposed change is still appropriate. Thank you for your contribution! |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
@cephfs ping? |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
Is there anyone still working on this? I wonder if there is an available and mature CephFS QoS mechanism. If we let all clients accessing the same dir have the same QoS setting, what would happen if a dir has thousands of mounts? |
|
@batrick ping? |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
@batrick I'm also interested in the status of QoS for CephFS. What has to be done for this PR, or a similar one, to be merged (e.g., is there anything that I or someone else can help with)? |
|
Also interested in QoS for CephFS. Although the risk of abuse may be minimal in practice, it would be nice to be able to assign at least some kind of resource limit in shared CephFS environments. |
|
Is there anyone still working on this? Is there any available and mature CephFS QoS mechanism? |
|
Also looking forward to having a per-directory QoS setting to limit the IO rate issued by CephFS clients. |
|
Hi, is there any plan to get this merged? We are also looking forward to having a QoS setting to limit the IO rate. |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
ping /unstale |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
Ping. |
|
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days. |
|
This pull request has been automatically closed because there has been no activity for 90 days. Please feel free to reopen this pull request (or open a new one) if the proposed change is still appropriate. Thank you for your contribution! |
|
Ping, can we please re-open this? |