KVM: arm/arm64: Check pagesize when allocating a hugepage at Stage 2
KVM only supports PMD hugepages at stage 2 but doesn't actually check
that the provided hugepage memory pagesize is PMD_SIZE before populating
stage 2 entries.
In cases where the backing hugepage size is smaller than PMD_SIZE (such
as when using contiguous hugepages), KVM can end up creating stage 2
mappings that extend beyond the supplied memory.
Fix this by checking the pagesize of the userspace vma before creating a
PMD hugepage at stage 2.
Fixes: 66b3923a1a ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: <stable@vger.kernel.org> # v4.5+
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
commit c507babf10
parent 0eb7c33cad
@@ -1310,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (is_vm_hugetlb_page(vma) && !logging_active) {
+	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {