alistair23-linux/arch/arm/kernel/asm-offsets.c

/*
* Copyright (C) 1995-2003 Russell King
* 2001-2002 Keith Owens
*
* Generate definitions needed by assembly language modules.
* This code generates raw asm output which is post-processed to extract
* and format the required data.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
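/*
 * The constants emitted here end up in include/generated/asm-offsets.h and
 * are consumed from assembly, for example (illustrative only; the register
 * alias and the consumer shown are hypothetical, not taken from this file):
 *
 *	ldr	r1, [tsk, #TI_FLAGS]	@ fetch thread_info->flags
 */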
#include <linux/compiler.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#ifdef CONFIG_KVM_ARM_HOST
#include <linux/kvm_host.h>
#endif
#include <asm/cacheflush.h>
#include <asm/glue-df.h>
#include <asm/glue-pf.h>
#include <asm/mach/arch.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/procinfo.h>
#include <asm/suspend.h>
#include <asm/vdso_datapage.h>
#include <asm/hardware/cache-l2x0.h>
#include <linux/kbuild.h>
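/*
 * DEFINE() and BLANK() come from <linux/kbuild.h>.  Roughly (a sketch of the
 * mechanism, not the exact macro body), DEFINE(sym, val) expands to an asm()
 * statement that writes a marker line such as
 *
 *	->TI_FLAGS #<offset> offsetof(struct thread_info, flags)
 *
 * into the compiler's assembly output; a sed rule in the Kbuild makefiles
 * then turns each marker into a #define in include/generated/asm-offsets.h.
 */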
/*
* Make sure that the compiler and target are compatible.
*/
#if defined(__APCS_26__)
#error Sorry, your compiler targets APCS-26 but this kernel requires APCS-32
#endif
/*
* GCC 3.0, 3.1: general bad code generation.
* GCC 3.2.0: incorrect function argument offset calculation.
* GCC 3.2.x: miscompiles NEW_AUX_ENT in fs/binfmt_elf.c
* (http://gcc.gnu.org/PR8896) and incorrect structure
* initialisation in fs/jffs2/erase.c
* GCC 4.8.0-4.8.2: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58854
* miscompiles find_get_entry(), and can result in EXT3 and EXT4
* filesystem corruption (possibly other FS too).
*/
#ifdef __GNUC__
#if (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#error Your compiler is too buggy; it is known to miscompile kernels.
#error Known good compilers: 3.3, 4.x
#endif
#if GCC_VERSION >= 40800 && GCC_VERSION < 40803
#error Your compiler is too buggy; it is known to miscompile kernels
#error and result in filesystem corruption and oopses.
#endif
#endif
int main(void)
{
DEFINE(TSK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
#ifdef CONFIG_CC_STACKPROTECTOR
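/*
 * Per-task stack protector canary; presumably reloaded by the context-switch
 * assembly so the canary follows the incoming task.
 */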
DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary));
#endif
BLANK();
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
DEFINE(TI_TASK, offsetof(struct thread_info, task));
DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
DEFINE(TI_CPU_DOMAIN, offsetof(struct thread_info, cpu_domain));
DEFINE(TI_CPU_SAVE, offsetof(struct thread_info, cpu_context));
DEFINE(TI_USED_CP, offsetof(struct thread_info, used_cp));
DEFINE(TI_TP_VALUE, offsetof(struct thread_info, tp_value));
DEFINE(TI_FPSTATE, offsetof(struct thread_info, fpstate));
#ifdef CONFIG_VFP
DEFINE(TI_VFPSTATE, offsetof(struct thread_info, vfpstate));
#ifdef CONFIG_SMP
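/*
 * hard.cpu records the CPU that last held this thread's VFP state, letting
 * the lazy VFP switching code detect migration to another CPU.
 */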
DEFINE(VFP_CPU, offsetof(union vfp_state, hard.cpu));
#endif
#endif
#ifdef CONFIG_ARM_THUMBEE
DEFINE(TI_THUMBEE_STATE, offsetof(struct thread_info, thumbee_state));
#endif
#ifdef CONFIG_IWMMXT
DEFINE(TI_IWMMXT_STATE, offsetof(struct thread_info, fpstate.iwmmxt));
#endif
#ifdef CONFIG_CRUNCH
DEFINE(TI_CRUNCH_STATE, offsetof(struct thread_info, crunchstate));
#endif
BLANK();
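/*
 * Offsets into struct pt_regs, the saved register frame built by the
 * exception entry assembly.
 */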
DEFINE(S_R0, offsetof(struct pt_regs, ARM_r0));
DEFINE(S_R1, offsetof(struct pt_regs, ARM_r1));
DEFINE(S_R2, offsetof(struct pt_regs, ARM_r2));
DEFINE(S_R3, offsetof(struct pt_regs, ARM_r3));
DEFINE(S_R4, offsetof(struct pt_regs, ARM_r4));
DEFINE(S_R5, offsetof(struct pt_regs, ARM_r5));
DEFINE(S_R6, offsetof(struct pt_regs, ARM_r6));
DEFINE(S_R7, offsetof(struct pt_regs, ARM_r7));
DEFINE(S_R8, offsetof(struct pt_regs, ARM_r8));
DEFINE(S_R9, offsetof(struct pt_regs, ARM_r9));
DEFINE(S_R10, offsetof(struct pt_regs, ARM_r10));
DEFINE(S_FP, offsetof(struct pt_regs, ARM_fp));
DEFINE(S_IP, offsetof(struct pt_regs, ARM_ip));
DEFINE(S_SP, offsetof(struct pt_regs, ARM_sp));
DEFINE(S_LR, offsetof(struct pt_regs, ARM_lr));
DEFINE(S_PC, offsetof(struct pt_regs, ARM_pc));
DEFINE(S_PSR, offsetof(struct pt_regs, ARM_cpsr));
DEFINE(S_OLD_R0, offsetof(struct pt_regs, ARM_ORIG_r0));
DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
DEFINE(SVC_DACR, offsetof(struct svc_pt_regs, dacr));
DEFINE(SVC_ADDR_LIMIT, offsetof(struct svc_pt_regs, addr_limit));
DEFINE(SVC_REGS_SIZE, sizeof(struct svc_pt_regs));
BLANK();
#ifdef CONFIG_CACHE_L2X0
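/*
 * Saved L2X0/PL310 controller state; these offsets let low-level resume code
 * reprogram the cache controller from struct l2x0_regs.  The exact consumers
 * are platform specific.
 */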
DEFINE(L2X0_R_PHY_BASE, offsetof(struct l2x0_regs, phy_base));
DEFINE(L2X0_R_AUX_CTRL, offsetof(struct l2x0_regs, aux_ctrl));
DEFINE(L2X0_R_TAG_LATENCY, offsetof(struct l2x0_regs, tag_latency));
DEFINE(L2X0_R_DATA_LATENCY, offsetof(struct l2x0_regs, data_latency));
DEFINE(L2X0_R_FILTER_START, offsetof(struct l2x0_regs, filter_start));
DEFINE(L2X0_R_FILTER_END, offsetof(struct l2x0_regs, filter_end));
DEFINE(L2X0_R_PREFETCH_CTRL, offsetof(struct l2x0_regs, prefetch_ctrl));
DEFINE(L2X0_R_PWR_CTRL, offsetof(struct l2x0_regs, pwr_ctrl));
BLANK();
#endif
#ifdef CONFIG_CPU_HAS_ASID
DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id.counter));
BLANK();
#endif
DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
DEFINE(VMA_VM_FLAGS, offsetof(struct vm_area_struct, vm_flags));
BLANK();
DEFINE(VM_EXEC, VM_EXEC);
BLANK();
DEFINE(PAGE_SZ, PAGE_SIZE);
BLANK();
DEFINE(SYS_ERROR0, 0x9f0000);
BLANK();
DEFINE(SIZEOF_MACHINE_DESC, sizeof(struct machine_desc));
DEFINE(MACHINFO_TYPE, offsetof(struct machine_desc, nr));
DEFINE(MACHINFO_NAME, offsetof(struct machine_desc, name));
BLANK();
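/*
 * struct proc_info_list offsets, used by the early boot assembly to locate
 * the per-CPU setup function and MMU flags before the MMU is enabled.
 */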
DEFINE(PROC_INFO_SZ, sizeof(struct proc_info_list));
DEFINE(PROCINFO_INITFUNC, offsetof(struct proc_info_list, __cpu_flush));
DEFINE(PROCINFO_MM_MMUFLAGS, offsetof(struct proc_info_list, __cpu_mm_mmu_flags));
DEFINE(PROCINFO_IO_MMUFLAGS, offsetof(struct proc_info_list, __cpu_io_mmu_flags));
BLANK();
#ifdef MULTI_DABORT
DEFINE(PROCESSOR_DABT_FUNC, offsetof(struct processor, _data_abort));
#endif
#ifdef MULTI_PABORT
DEFINE(PROCESSOR_PABT_FUNC, offsetof(struct processor, _prefetch_abort));
#endif
#ifdef MULTI_CPU
DEFINE(CPU_SLEEP_SIZE, offsetof(struct processor, suspend_size));
DEFINE(CPU_DO_SUSPEND, offsetof(struct processor, do_suspend));
DEFINE(CPU_DO_RESUME, offsetof(struct processor, do_resume));
#endif
#ifdef MULTI_CACHE
DEFINE(CACHE_FLUSH_KERN_ALL, offsetof(struct cpu_cache_fns, flush_kern_all));
#endif
#ifdef CONFIG_ARM_CPU_SUSPEND
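/*
 * struct sleep_save_sp stashes the virtual and physical base addresses of
 * the context pointer array used by cpu_suspend/cpu_resume; the physical
 * address matters because cpu_resume runs with the MMU off.
 */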
DEFINE(SLEEP_SAVE_SP_SZ, sizeof(struct sleep_save_sp));
DEFINE(SLEEP_SAVE_SP_PHYS, offsetof(struct sleep_save_sp, save_ptr_stash_phys));
DEFINE(SLEEP_SAVE_SP_VIRT, offsetof(struct sleep_save_sp, save_ptr_stash));
#endif
BLANK();
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
BLANK();
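/*
 * Cache writeback geometry, used among others by the MCPM low-level code to
 * keep each synchronisation variable on its own cache line so cached and
 * non-cached writers cannot interfere with each other.
 */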
DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
BLANK();
#ifdef CONFIG_KVM_ARM_HOST
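/*
 * Offsets used by the HYP world-switch assembly to save and restore guest
 * and host register state on the kvm_vcpu / kvm_cpu_context structures.
 */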
DEFINE(VCPU_GUEST_CTXT, offsetof(struct kvm_vcpu, arch.ctxt));
DEFINE(VCPU_HOST_CTXT, offsetof(struct kvm_vcpu, arch.host_cpu_context));
DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
DEFINE(CPU_CTXT_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
DEFINE(GP_REGS_USR, offsetof(struct kvm_regs, usr_regs));
#endif
BLANK();
#ifdef CONFIG_VDSO
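/*
 * Size of the vdso data store shared with user space; the VDSO reads this
 * data page to service gettimeofday()/clock_gettime() without entering the
 * kernel.
 */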
DEFINE(VDSO_DATA_SIZE, sizeof(union vdso_data_store));
#endif
return 0;
}