
Commit eb29860

dmatlack authored and bonzini committed
KVM: x86/mmu: Do not recover dirty-tracked NX Huge Pages
Do not recover (i.e. zap) an NX Huge Page that is being dirty tracked, as it will just be faulted back in at the same 4KiB granularity when accessed by a vCPU. This may need to be changed if KVM ever supports 2MiB (or larger) dirty tracking granularity, or faulting huge pages during dirty tracking for reads/executes. However, for now these zaps are entirely wasteful.

In order to check if this commit increases the CPU usage of the NX recovery worker thread, I used a modified version of execute_perf_test [1] that supports splitting guest memory into multiple slots and reports /proc/pid/schedstat:se.sum_exec_runtime for the NX recovery worker just before tearing down the VM. The goal was to force a large number of NX Huge Page recoveries and see if the recovery worker used any more CPU.

Test Setup:

  echo 1000 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
  echo 10 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio

Test Command:

  ./execute_perf_test -v64 -s anonymous_hugetlb_1gb -x 16 -o

          | kvm-nx-lpage-re:se.sum_exec_runtime
          | ----------------------------------------
  Run     | Before             | After
  ------- | ------------------ | -------------------
  1       | 730.084105         | 724.375314
  2       | 728.751339         | 740.581988
  3       | 736.264720         | 757.078163

Comparing the median results, this commit results in about a 1% increase in the CPU usage of the NX recovery worker when testing a VM with 16 slots. However, the effect is negligible with the default NX huge page halving time of 1 hour, rather than the 10 seconds given by period_ms = 1000 and ratio = 10.

[1] https://lore.kernel.org/kvm/[email protected]/

Signed-off-by: David Matlack <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
1 parent 63d28a2 commit eb29860

File tree

1 file changed: +16, -1 lines changed

arch/x86/kvm/mmu/mmu.c

Lines changed: 16 additions & 1 deletion
@@ -6841,6 +6841,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
 static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 {
 	unsigned long nx_lpage_splits = kvm->stat.nx_lpage_splits;
+	struct kvm_memory_slot *slot;
 	int rcu_idx;
 	struct kvm_mmu_page *sp;
 	unsigned int ratio;
@@ -6875,7 +6876,21 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 				      struct kvm_mmu_page,
 				      possible_nx_huge_page_link);
 		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
-		if (is_tdp_mmu_page(sp))
+		WARN_ON_ONCE(!sp->role.direct);
+
+		slot = gfn_to_memslot(kvm, sp->gfn);
+		WARN_ON_ONCE(!slot);
+
+		/*
+		 * Unaccount and do not attempt to recover any NX Huge Pages
+		 * that are being dirty tracked, as they would just be faulted
+		 * back in as 4KiB pages. The NX Huge Pages in this slot will be
+		 * recovered, along with all the other huge pages in the slot,
+		 * when dirty logging is disabled.
+		 */
+		if (slot && kvm_slot_dirty_track_enabled(slot))
+			unaccount_nx_huge_page(kvm, sp);
+		else if (is_tdp_mmu_page(sp))
 			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
 		else
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
