KVM fixes for v4.17-rc3

ARM:
  - PSCI selection API, a leftover from 4.16 (for stable)
  - Kick vcpu on active interrupt affinity change
  - Plug a VMID allocation race on oversubscribed systems
  - Silence debug messages
  - Update Christoffer's email address (linaro -> arm)
 
x86:
  - Expose userspace-relevant bits of a newly added feature
  - Fix TLB flushing on VMX with VPID, but without EPT
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABCAAGBQJa44lQAAoJEED/6hsPKofo1dIH/3n9AZSWvavgL2V3j6agT8Yy
 hxF4nHCFEJd5aqDNwbG9QEzivKw88r3o3mdB2XAQESB2MlCYR1jkTONm7yvVJTs/
 /P9gj+DEQbCj2AgT//u3BGsAsZDKFhB9JwfmV2Mp4zDIqWFa6oCOGeq/iPVAGDcN
 vUpuYeIicuH9SRoxH7de3z+BEXW0O+gCABXQtvA93FKTMz35yFTgmbDVCnvaV0zL
 3B+3/4/jdbTRICW8EX6Li43+gEBUMtnVNkdqxLPTuCtDG8iuPUGfgF02gH99/9gj
 hliV3Q4VUZKkSABW5AqKPe4+9rbsHCh9eL0LpHFGI9y+6LeUIOXAX4CtohR8gWE=
 =W9Vz
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Radim Krčmář:
 "ARM:
   - PSCI selection API, a leftover from 4.16 (for stable)
   - Kick vcpu on active interrupt affinity change
   - Plug a VMID allocation race on oversubscribed systems
   - Silence debug messages
   - Update Christoffer's email address (linaro -> arm)

  x86:
   - Expose userspace-relevant bits of a newly added feature
   - Fix TLB flushing on VMX with VPID, but without EPT"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  x86/headers/UAPI: Move DISABLE_EXITS KVM capability bits to the UAPI
  kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use
  arm/arm64: KVM: Add PSCI version selection API
  KVM: arm/arm64: vgic: Kick new VCPU on interrupt migration
  arm64: KVM: Demote SVE and LORegion warnings to debug only
  MAINTAINERS: Update e-mail address for Christoffer Dall
  KVM: arm/arm64: Close VMID generation race
Linus Torvalds 2018-04-27 16:13:31 -07:00
commit 46dc111dfe
17 changed files with 189 additions and 32 deletions

Documentation/virtual/kvm/api.txt

@@ -1960,6 +1960,9 @@ ARM 32-bit VFP control registers have the following id bit patterns:
 ARM 64-bit FP registers have the following id bit patterns:
   0x4030 0000 0012 0 <regno:12>
 
+ARM firmware pseudo-registers have the following bit pattern:
+  0x4030 0000 0014 <regno:16>
+
 arm64 registers are mapped using the lower 32 bits. The upper 16 of
 that is the register group type, or coprocessor number:
 
@@ -1976,6 +1979,9 @@ arm64 CCSIDR registers are demultiplexed by CSSELR value:
 arm64 system registers have the following id bit patterns:
   0x6030 0000 0013 <op0:2> <op1:3> <crn:4> <crm:4> <op2:3>
 
+arm64 firmware pseudo-registers have the following bit pattern:
+  0x6030 0000 0014 <regno:16>
+
 MIPS registers are mapped using the lower 32 bits. The upper 16 of that is
 the register group type:
 
@@ -2510,7 +2516,8 @@ Possible features:
 	  and execute guest code when KVM_RUN is called.
 	- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
-	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
+	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision
+	  backward compatible with v0.2) for the CPU.
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.

Documentation/virtual/kvm/arm/psci.txt (new file)

@@ -0,0 +1,30 @@
+KVM implements the PSCI (Power State Coordination Interface)
+specification in order to provide services such as CPU on/off, reset
+and power-off to the guest.
+
+The PSCI specification is regularly updated to provide new features,
+and KVM implements these updates if they make sense from a virtualization
+point of view.
+
+This means that a guest booted on two different versions of KVM can
+observe two different "firmware" revisions. This could cause issues if
+a given guest is tied to a particular PSCI revision (unlikely), or if
+a migration causes a different PSCI version to be exposed out of the
+blue to an unsuspecting guest.
+
+In order to remedy this situation, KVM exposes a set of "firmware
+pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
+interface. These registers can be saved/restored by userspace, and set
+to a convenient value if required.
+
+The following register is defined:
+
+* KVM_REG_ARM_PSCI_VERSION:
+
+  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
+    (and thus has already been initialized)
+  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
+    highest PSCI version implemented by KVM and compatible with v0.2)
+  - Allows any PSCI version implemented by KVM and compatible with
+    v0.2 to be set with SET_ONE_REG
+  - Affects the whole VM (even if the register view is per-vcpu)
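
As a usage illustration (not part of this commit): a VMM that wants the guest to keep seeing one consistent "firmware" revision across a migration can save KVM_REG_ARM_PSCI_VERSION on the source and restore it on the destination through the standard one-reg ioctls. A minimal sketch, assuming an arm/arm64 host and a vcpu fd whose vcpu was initialized with the KVM_ARM_VCPU_PSCI_0_2 feature:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Read the PSCI version KVM currently exposes, then write the same
 * value back; on a migration target this pins the version the guest
 * saw on the source. Error handling is minimal on purpose. */
static int pin_psci_version(int vcpu_fd)
{
	uint64_t version;
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_PSCI_VERSION,
		.addr = (uintptr_t)&version,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	/* SET_ONE_REG rejects versions KVM cannot honour (-EINVAL). */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}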

MAINTAINERS

@@ -7744,7 +7744,7 @@ F:	arch/x86/include/asm/svm.h
 F:	arch/x86/kvm/svm.c
 
 KERNEL VIRTUAL MACHINE FOR ARM (KVM/arm)
-M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Christoffer Dall <christoffer.dall@arm.com>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
@@ -7758,7 +7758,7 @@ F:	virt/kvm/arm/
 F:	include/kvm/arm_*
 
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
-M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Christoffer Dall <christoffer.dall@arm.com>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu

arch/arm/include/asm/kvm_host.h

@@ -77,6 +77,9 @@ struct kvm_arch {
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
 	int max_vcpus;
+
+	/* Mandated version of PSCI */
+	u32 psci_version;
 };
 
 #define KVM_NR_MEM_OBJS     40

arch/arm/include/uapi/asm/kvm.h

@@ -195,6 +195,12 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_VFP_FPINST		0x1009
 #define KVM_REG_ARM_VFP_FPINST2	0x100A
 
+/* KVM-as-firmware specific pseudo-registers */
+#define KVM_REG_ARM_FW			(0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM | KVM_REG_SIZE_U64 | \
+					 KVM_REG_ARM_FW | ((r) & 0xffff))
+#define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1

arch/arm/kvm/guest.c

@@ -22,6 +22,7 @@
 #include <linux/module.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
+#include <kvm/arm_psci.h>
 #include <asm/cputype.h>
 #include <linux/uaccess.h>
 #include <asm/kvm.h>
@@ -176,6 +177,7 @@ static unsigned long num_core_regs(void)
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
 	return num_core_regs() + kvm_arm_num_coproc_regs(vcpu)
+		+ kvm_arm_get_fw_num_regs(vcpu)
 		+ NUM_TIMER_REGS;
 }
 
@@ -196,6 +198,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		uindices++;
 	}
 
+	ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
+	if (ret)
+		return ret;
+	uindices += kvm_arm_get_fw_num_regs(vcpu);
+
 	ret = copy_timer_indices(vcpu, uindices);
 	if (ret)
 		return ret;
@@ -214,6 +221,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return get_core_reg(vcpu, reg);
 
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_get_fw_reg(vcpu, reg);
+
 	if (is_timer_reg(reg->id))
 		return get_timer_reg(vcpu, reg);
@@ -230,6 +240,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return set_core_reg(vcpu, reg);
 
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_set_fw_reg(vcpu, reg);
+
 	if (is_timer_reg(reg->id))
 		return set_timer_reg(vcpu, reg);

arch/arm64/include/asm/kvm_host.h

@@ -75,6 +75,9 @@ struct kvm_arch {
 
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
+
+	/* Mandated version of PSCI */
+	u32 psci_version;
 };
 
 #define KVM_NR_MEM_OBJS     40

arch/arm64/include/uapi/asm/kvm.h

@@ -206,6 +206,12 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_TIMER_CNT		ARM64_SYS_REG(3, 3, 14, 3, 2)
 #define KVM_REG_ARM_TIMER_CVAL		ARM64_SYS_REG(3, 3, 14, 0, 2)
 
+/* KVM-as-firmware specific pseudo-registers */
+#define KVM_REG_ARM_FW			(0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
+					 KVM_REG_ARM_FW | ((r) & 0xffff))
+#define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1

arch/arm64/kvm/guest.c

@@ -25,6 +25,7 @@
 #include <linux/module.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
+#include <kvm/arm_psci.h>
 #include <asm/cputype.h>
 #include <linux/uaccess.h>
 #include <asm/kvm.h>
@@ -205,7 +206,7 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
 	return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu)
-		+ NUM_TIMER_REGS;
+		+ kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS;
 }
 
 /**
@@ -225,6 +226,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		uindices++;
 	}
 
+	ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
+	if (ret)
+		return ret;
+	uindices += kvm_arm_get_fw_num_regs(vcpu);
+
 	ret = copy_timer_indices(vcpu, uindices);
 	if (ret)
 		return ret;
@@ -243,6 +249,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return get_core_reg(vcpu, reg);
 
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_get_fw_reg(vcpu, reg);
+
 	if (is_timer_reg(reg->id))
 		return get_timer_reg(vcpu, reg);
@@ -259,6 +268,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return set_core_reg(vcpu, reg);
 
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_set_fw_reg(vcpu, reg);
+
 	if (is_timer_reg(reg->id))
 		return set_timer_reg(vcpu, reg);

arch/arm64/kvm/sys_regs.c

@@ -996,14 +996,12 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 
 	if (id == SYS_ID_AA64PFR0_EL1) {
 		if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT))
-			pr_err_once("kvm [%i]: SVE unsupported for guests, suppressing\n",
-				    task_pid_nr(current));
+			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
-			pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
-				    task_pid_nr(current));
+			kvm_debug("LORegions unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64MMFR1_LOR_SHIFT);
 	}

arch/x86/kvm/vmx.c

@@ -4544,12 +4544,6 @@ static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
 }
 
-static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)
-{
-	if (enable_ept)
-		vmx_flush_tlb(vcpu, true);
-}
-
 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
@@ -9278,7 +9272,7 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 	} else {
 		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
 		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 	vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
@@ -9306,7 +9300,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
 	    !nested_cpu_has2(get_vmcs12(&vmx->vcpu),
 			     SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 }
@@ -11220,7 +11214,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		}
 	} else if (nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/*
@@ -12073,7 +12067,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	} else if (!nested_cpu_has_ept(vmcs12) &&
 		   nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/* This is needed for same reason as it was needed in prepare_vmcs02 */

arch/x86/kvm/x86.h

@@ -302,13 +302,6 @@ static inline u64 nsec_to_cycles(struct kvm_vcpu *vcpu, u64 nsec)
 	    __rem;						\
 	})
 
-#define KVM_X86_DISABLE_EXITS_MWAIT          (1 << 0)
-#define KVM_X86_DISABLE_EXITS_HTL            (1 << 1)
-#define KVM_X86_DISABLE_EXITS_PAUSE          (1 << 2)
-#define KVM_X86_DISABLE_VALID_EXITS          (KVM_X86_DISABLE_EXITS_MWAIT | \
-                                              KVM_X86_DISABLE_EXITS_HTL | \
-                                              KVM_X86_DISABLE_EXITS_PAUSE)
-
 static inline bool kvm_mwait_in_guest(struct kvm *kvm)
 {
 	return kvm->arch.mwait_in_guest;

include/kvm/arm_psci.h

@@ -37,10 +37,15 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
 	 * Our PSCI implementation stays the same across versions from
 	 * v0.2 onward, only adding the few mandatory functions (such
 	 * as FEATURES with 1.0) that are required by newer
-	 * revisions. It is thus safe to return the latest.
+	 * revisions. It is thus safe to return the latest, unless
+	 * userspace has instructed us otherwise.
 	 */
-	if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+	if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {
+		if (vcpu->kvm->arch.psci_version)
+			return vcpu->kvm->arch.psci_version;
+
 		return KVM_ARM_PSCI_LATEST;
+	}
 
 	return KVM_ARM_PSCI_0_1;
 }
@@ -48,4 +53,11 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
 
 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
 
+struct kvm_one_reg;
+
+int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu);
+int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
+int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+
 #endif /* __KVM_ARM_PSCI_H__ */

include/uapi/linux/kvm.h

@@ -676,6 +676,13 @@ struct kvm_ioeventfd {
 	__u8  pad[36];
 };
 
+#define KVM_X86_DISABLE_EXITS_MWAIT          (1 << 0)
+#define KVM_X86_DISABLE_EXITS_HTL            (1 << 1)
+#define KVM_X86_DISABLE_EXITS_PAUSE          (1 << 2)
+#define KVM_X86_DISABLE_VALID_EXITS          (KVM_X86_DISABLE_EXITS_MWAIT | \
+                                              KVM_X86_DISABLE_EXITS_HTL | \
+                                              KVM_X86_DISABLE_EXITS_PAUSE)
+
 /* for KVM_ENABLE_CAP */
 struct kvm_enable_cap {
 	/* in */
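
For illustration (again, not part of the diff): moving these bits into the UAPI header is what lets userspace name them when enabling the corresponding capability. A hedged sketch of such a caller, assuming the kernel advertises KVM_CAP_X86_DISABLE_EXITS and a VM fd is at hand:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Ask KVM to let the guest execute MWAIT natively instead of exiting.
 * Callers should first confirm the capability via KVM_CHECK_EXTENSION. */
static int disable_mwait_exits(int vm_fd)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_X86_DISABLE_EXITS;
	/* must stay within KVM_X86_DISABLE_VALID_EXITS */
	cap.args[0] = KVM_X86_DISABLE_EXITS_MWAIT;

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}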

virt/kvm/arm/arm.c

@@ -63,7 +63,7 @@ static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u32 kvm_next_vmid;
 static unsigned int kvm_vmid_bits __read_mostly;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_RWLOCK(kvm_vmid_lock);
 
 static bool vgic_present;
 
@@ -473,11 +473,16 @@ static void update_vttbr(struct kvm *kvm)
 {
 	phys_addr_t pgd_phys;
 	u64 vmid;
+	bool new_gen;
 
-	if (!need_new_vmid_gen(kvm))
+	read_lock(&kvm_vmid_lock);
+	new_gen = need_new_vmid_gen(kvm);
+	read_unlock(&kvm_vmid_lock);
+
+	if (!new_gen)
 		return;
 
-	spin_lock(&kvm_vmid_lock);
+	write_lock(&kvm_vmid_lock);
 
 	/*
 	 * We need to re-check the vmid_gen here to ensure that if another vcpu
@@ -485,7 +490,7 @@ static void update_vttbr(struct kvm *kvm)
 	 * use the same vmid.
 	 */
 	if (!need_new_vmid_gen(kvm)) {
-		spin_unlock(&kvm_vmid_lock);
+		write_unlock(&kvm_vmid_lock);
 		return;
 	}
 
@@ -519,7 +524,7 @@ static void update_vttbr(struct kvm *kvm)
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
 	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;
 
-	spin_unlock(&kvm_vmid_lock);
+	write_unlock(&kvm_vmid_lock);
 }
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
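
The fix above is double-checked locking around a generation counter: the common "VMID still valid" test now takes the lock only for reading, and the writer re-checks under the write lock because another vcpu may have rolled the generation first. A minimal userspace analogue with POSIX rwlocks (names are illustrative, not the kernel's):

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t gen_lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long global_gen = 1;

struct vm { unsigned long gen; };

static bool needs_new_gen(struct vm *vm)
{
	return vm->gen != global_gen;
}

static void refresh_gen(struct vm *vm)
{
	bool stale;

	/* Fast path: many vcpus can check the generation concurrently. */
	pthread_rwlock_rdlock(&gen_lock);
	stale = needs_new_gen(vm);
	pthread_rwlock_unlock(&gen_lock);
	if (!stale)
		return;

	/* Slow path: re-check under the write lock, since another
	 * thread may have refreshed the generation in the meantime. */
	pthread_rwlock_wrlock(&gen_lock);
	if (needs_new_gen(vm))
		vm->gen = global_gen;	/* VMID allocation would go here */
	pthread_rwlock_unlock(&gen_lock);
}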

virt/kvm/arm/psci.c

@@ -18,6 +18,7 @@
 #include <linux/arm-smccc.h>
 #include <linux/preempt.h>
 #include <linux/kvm_host.h>
+#include <linux/uaccess.h>
 #include <linux/wait.h>
 
 #include <asm/cputype.h>
@@ -427,3 +428,62 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 	smccc_set_retval(vcpu, val, 0, 0, 0);
 	return 1;
 }
+
+int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
+{
+	return 1;		/* PSCI version */
+}
+
+int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices))
+		return -EFAULT;
+
+	return 0;
+}
+
+int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
+		void __user *uaddr = (void __user *)(long)reg->addr;
+		u64 val;
+
+		val = kvm_psci_version(vcpu, vcpu->kvm);
+		if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
+			return -EFAULT;
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
+		void __user *uaddr = (void __user *)(long)reg->addr;
+		bool wants_02;
+		u64 val;
+
+		if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
+			return -EFAULT;
+
+		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
+
+		switch (val) {
+		case KVM_ARM_PSCI_0_1:
+			if (wants_02)
+				return -EINVAL;
+			vcpu->kvm->arch.psci_version = val;
+			return 0;
+		case KVM_ARM_PSCI_0_2:
+		case KVM_ARM_PSCI_1_0:
+			if (!wants_02)
+				return -EINVAL;
+			vcpu->kvm->arch.psci_version = val;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}

virt/kvm/arm/vgic/vgic.c

@@ -600,6 +600,7 @@ retry:
 
 	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
 		struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB;
+		bool target_vcpu_needs_kick = false;
 
 		spin_lock(&irq->irq_lock);
 
@@ -670,11 +671,18 @@ retry:
 			list_del(&irq->ap_list);
 			irq->vcpu = target_vcpu;
 			list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+			target_vcpu_needs_kick = true;
 		}
 
 		spin_unlock(&irq->irq_lock);
 		spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
 		spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+
+		if (target_vcpu_needs_kick) {
+			kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
+			kvm_vcpu_kick(target_vcpu);
+		}
+
 		goto retry;
 	}