Commit graph

124 commits

Author SHA1 Message Date
Martin Schwidefsky 86ed42f401 s390: use local symbol names in entry[64].S
To improve the output of the perf tool, hide most of the symbols
from entry[64].S by using the '.L' prefix.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-12-08 09:42:38 +01:00
Jan Willeke 2a0a5b2299 s390/uprobes: architecture backend for uprobes
Signed-off-by: Jan Willeke <willeke@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-09-25 10:52:17 +02:00
Linus Torvalds b05d59dfce At over 200 commits, covering almost all supported architectures, this
was a pretty active cycle for KVM.  Changes include:
 
 - a lot of s390 changes: optimizations, support for migration,
   GDB support and more
 
 - ARM changes are pretty small: support for the PSCI 0.2 hypercall
   interface on both the guest and the host (the latter acked by Catalin)
 
 - initial POWER8 and little-endian host support
 
 - support for running u-boot on embedded POWER targets
 
 - pretty large changes to MIPS too, completing the userspace interface
   and improving the handling of virtualized timer hardware
 
 - for x86, a larger set of changes is scheduled for 3.17.  Still,
   we have a few emulator bugfixes and support for running nested
   fully-virtualized Xen guests (para-virtualized Xen guests have
   always worked).  And some optimizations too.
 
 The only missing architecture here is ia64.  It's not a coincidence
 that support for KVM on ia64 is scheduled for removal in 3.17.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm into next

Pull KVM updates from Paolo Bonzini:
 "At over 200 commits, covering almost all supported architectures, this
  was a pretty active cycle for KVM.  Changes include:

   - a lot of s390 changes: optimizations, support for migration, GDB
     support and more

   - ARM changes are pretty small: support for the PSCI 0.2 hypercall
     interface on both the guest and the host (the latter acked by
     Catalin)

   - initial POWER8 and little-endian host support

   - support for running u-boot on embedded POWER targets

   - pretty large changes to MIPS too, completing the userspace
     interface and improving the handling of virtualized timer hardware

   - for x86, a larger set of changes is scheduled for 3.17.  Still, we
     have a few emulator bugfixes and support for running nested
     fully-virtualized Xen guests (para-virtualized Xen guests have
     always worked).  And some optimizations too.

  The only missing architecture here is ia64.  It's not a coincidence
  that support for KVM on ia64 is scheduled for removal in 3.17"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (203 commits)
  KVM: add missing cleanup_srcu_struct
  KVM: PPC: Book3S PR: Rework SLB switching code
  KVM: PPC: Book3S PR: Use SLB entry 0
  KVM: PPC: Book3S HV: Fix machine check delivery to guest
  KVM: PPC: Book3S HV: Work around POWER8 performance monitor bugs
  KVM: PPC: Book3S HV: Make sure we don't miss dirty pages
  KVM: PPC: Book3S HV: Fix dirty map for hugepages
  KVM: PPC: Book3S HV: Put huge-page HPTEs in rmap chain for base address
  KVM: PPC: Book3S HV: Fix check for running inside guest in global_invalidates()
  KVM: PPC: Book3S: Move KVM_REG_PPC_WORT to an unused register number
  KVM: PPC: Book3S: Add ONE_REG register names that were missed
  KVM: PPC: Add CAP to indicate hcall fixes
  KVM: PPC: MPIC: Reset IRQ source private members
  KVM: PPC: Graciously fail broken LE hypercalls
  PPC: ePAPR: Fix hypercall on LE guest
  KVM: PPC: BOOK3S: Remove open coded make_dsisr in alignment handler
  KVM: PPC: BOOK3S: Always use the saved DAR value
  PPC: KVM: Make NX bit available with magic page
  KVM: PPC: Disable NX for old magic page using guests
  KVM: PPC: BOOK3S: HV: Add mixed page-size support for guest
  ...
2014-06-04 08:47:12 -07:00
Martin Schwidefsky d3a73acbc2 s390: split TIF bits into CIF, PIF and TIF bits
The oi and ni instructions used in entry[64].S to set and clear bits
in the thread-flags are not guaranteed to be atomic in regard to other
CPUs. Split the TIF bits into CPU, pt_regs and thread-info specific
bits. Updates on the TIF bits are done with atomic instructions,
updates on CPU and pt_regs bits are done with non-atomic instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-05-20 08:58:47 +02:00
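
As a rough illustration of the split described in the commit above, here is a
small user-space C model (bit numbers, structure names and helpers are
hypothetical, not the kernel's definitions): TIF bits may be modified by other
CPUs and therefore need atomic read-modify-write updates, while CIF and PIF
bits are only ever touched locally and can use plain instructions.

  /* Hypothetical sketch of the CIF/PIF/TIF split, compiles as C11. */
  #include <stdatomic.h>
  #include <stdio.h>

  #define TIF_SIGPENDING 0    /* thread-info flag: other cpus may set it     */
  #define CIF_ASCE       0    /* cpu flag: only the owning cpu touches it    */
  #define PIF_SYSCALL    0    /* pt_regs flag: only touched on the local cpu */

  struct thread_info { _Atomic unsigned long flags; };
  struct cpu_flags   { unsigned long flags; };   /* non-atomic on purpose */
  struct pt_regs     { unsigned long flags; };   /* non-atomic on purpose */

  static void set_tif(struct thread_info *ti, int bit)
  {
      atomic_fetch_or(&ti->flags, 1UL << bit);   /* atomic: cross-cpu access */
  }

  static void set_cif(struct cpu_flags *cf, int bit)
  {
      cf->flags |= 1UL << bit;                   /* plain: local access only */
  }

  int main(void)
  {
      struct thread_info ti = { 0 };
      struct cpu_flags cf = { 0 };

      set_tif(&ti, TIF_SIGPENDING);
      set_cif(&cf, CIF_ASCE);
      printf("tif=%lx cif=%lx\n", atomic_load(&ti.flags), cf.flags);
      return 0;
  }
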
Martin Schwidefsky beef560b4c s390/uaccess: simplify control register updates
Always switch to the kernel ASCE in switch_mm. Load the secondary
space ASCE in finish_arch_post_lock_switch after checking that
any pending page table operations have completed. The primary
ASCE is loaded in entry[64].S. With this the update_primary_asce
call can be removed from the switch_to macro and from the start
of switch_mm function. Remove the load_primary argument from
update_user_asce/clear_user_asce, rename update_user_asce to
set_user_asce and rename update_primary_asce to load_kernel_asce.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-05-20 08:58:46 +02:00
Jens Freimann 21ee7ffd17 s390: rename and split lowcore field per_perc_atmid
per_perc_atmid is currently a two-byte field that combines two
fields, the PER code and the PER Addressing-and-Translation-Mode
Identification (ATMID).

Let's make them accessible independently and also rename per_cause to
per_code.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-04-22 13:24:48 +02:00
Heiko Carstens 457f218095 s390/uaccess: rework uaccess code - fix locking issues
The current uaccess code uses a page table walk in some circumstances,
e.g. in case of in-atomic futex operations or if running on old
hardware which doesn't support the mvcos instruction.

However it turned out that the page table walk code does not correctly
lock page tables when accessing page table entries.
In other words: a different cpu may invalidate a page table entry while
the current cpu inspects the pte. This may lead to random data corruption.

Adding correct locking however isn't trivial for all uaccess operations.
Especially copy_in_user() is problematic since it requires holding at
least two locks, but must be protected against ABBA deadlock when a
different cpu also performs a copy_in_user() operation.

So the solution is a different approach where we change address spaces:

User space runs in primary address mode, or access register mode within
vdso code, like it currently already does.

The kernel usually also runs in home space mode, however when accessing
user space the kernel switches to primary or secondary address mode if
the mvcos instruction is not available or if a compare-and-swap (futex)
instruction on a user space address is performed.
KVM however is special, since that requires the kernel to run in home
address space while implicitly accessing user space with the sie
instruction.

So we end up with:

User space:
- runs in primary or access register mode
- cr1 contains the user asce
- cr7 contains the user asce
- cr13 contains the kernel asce

Kernel space:
- runs in home space mode
- cr1 contains the user or kernel asce
  -> the kernel asce is loaded when a uaccess requires primary or
     secondary address mode
- cr7 contains the user or kernel asce, (changed with set_fs())
- cr13 contains the kernel asce

In case of uaccess the kernel changes to:
- primary space mode in case of a uaccess (copy_to_user) and uses
  e.g. the mvcp instruction to access user space. However the kernel
  will stay in home space mode if the mvcos instruction is available
- secondary space mode in case of futex atomic operations, so that the
  instructions come from primary address space and data from secondary
  space

In case of kvm the kernel runs in home space mode, but cr1 gets switched
to contain the gmap asce before the sie instruction gets executed. When
the sie instruction is finished cr1 will be switched back to contain the
user asce.

A context switch between two processes will always load the kernel asce
for the next process in cr1. So the first exit to user space is a bit
more expensive (one extra load control register instruction) than before,
however keeps the code rather simple.

In sum this means there is no need to perform any error prone page table
walks anymore when accessing user space.

The patch seems to be rather large, however it mainly removes the
page table walk code and restores the previously deleted "standard"
uaccess code, with a couple of changes.

The uaccess without mvcos mode can be enforced with the "uaccess_primary"
kernel parameter.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-04-03 14:31:04 +02:00
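
As a rough sketch of the dispatch just described (user-space stand-in code;
the function names, the facility flag and the memcpy bodies are made up, the
real kernel code issues mvcos or switches address modes and uses mvcp/mvcs):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>

  static bool have_mvcos = true;  /* assumed detected from the facility bits */

  static size_t copy_to_user_mvcos(void *to, const void *from, size_t n)
  {
      /* per the commit: with mvcos the kernel stays in home space mode */
      memcpy(to, from, n);
      return 0;                   /* convention: number of bytes not copied */
  }

  static size_t copy_to_user_mvcp(void *to, const void *from, size_t n)
  {
      /* per the commit: load the kernel asce into cr1, switch to primary
       * space mode, copy with mvcp, then switch back */
      memcpy(to, from, n);
      return 0;
  }

  static size_t copy_to_user(void *to, const void *from, size_t n)
  {
      return have_mvcos ? copy_to_user_mvcos(to, from, n)
                        : copy_to_user_mvcp(to, from, n);
  }

  int main(void)
  {
      char dst[8];

      printf("uncopied bytes: %zu\n", copy_to_user(dst, "s390", 5));
      return 0;
  }
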
Martin Schwidefsky 53e857f308 s390/mm,tlb: race of lazy TLB flush vs. recreation of TLB entries
Git commit 050eef364a "[S390] fix tlb flushing vs. concurrent
/proc accesses" introduced the attach counter to avoid using the
mm_users value to decide between IPTE for every PTE and lazy TLB
flushing with IDTE. That fixed the problem with mm_users but it
introduced another subtle race, fortunately one that is very hard
to hit.
The background is the requirement of the architecture that a valid
PTE may not be changed while it can be used concurrently by another
cpu. The decision between IPTE and lazy TLB flushing needs to be
done while the PTE is still valid. Now if the virtual cpu is
temporarily stopped after the decision to use lazy TLB flushing but
before the invalid bit of the PTE has been set, another cpu can attach
the mm, find that flush_mm is set, do the IDTE, return to userspace,
and recreate a TLB that uses the PTE in question. When the first,
stopped cpu continues it will change the PTE while it is attached on
another cpu. The first cpu will do another IDTE shortly after the
modification of the PTE which makes the race window quite short.

To fix this race the CPU that wants to attach the address space of a
user space thread needs to wait for the end of the PTE modification.
The number of concurrent TLB flushers for an mm is tracked in the
upper 16 bits of the attach_count and finish_arch_post_lock_switch
is used to wait for the end of the flush operation if required.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-02-21 08:50:18 +01:00
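
A toy model of the counter packing described above (the field split and the
0x10000 increment are illustrative; the real accounting lives in the s390 mm
context and entry code):

  /* Two 16-bit counters packed into one atomic word: attached cpus in the
   * low half, concurrent TLB flushers in the high half. */
  #include <stdatomic.h>
  #include <stdio.h>

  #define FLUSHER_ONE 0x10000u         /* one flusher in the upper 16 bits */

  static _Atomic unsigned int attach_count;

  static void flush_begin(void) { atomic_fetch_add(&attach_count, FLUSHER_ONE); }
  static void flush_end(void)   { atomic_fetch_sub(&attach_count, FLUSHER_ONE); }

  static void attach_mm(void)
  {
      atomic_fetch_add(&attach_count, 1);    /* one more attached cpu */
      /* finish_arch_post_lock_switch role: wait until no flush is running */
      while (atomic_load(&attach_count) >> 16)
          ;                                  /* cpu_relax() in the kernel */
  }

  int main(void)
  {
      flush_begin();
      flush_end();
      attach_mm();
      printf("attach_count=%#x\n", atomic_load(&attach_count));
      return 0;
  }
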
Hendrik Brueckner b4a960159e s390: Fix misspellings using 'codespell' tool
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-01-16 16:40:13 +01:00
Heiko Carstens 61aa4884b7 s390: use IS_ENABLED to check if a CONFIG is set to y or m
This is shorter and should be used instead of the longer form
which checks for both possible config options.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-12-16 14:38:04 +01:00
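
For reference, the before/after pattern and the preprocessor trick behind
IS_ENABLED() can be demonstrated in user space; the macro bodies below are a
reconstruction from memory, and CONFIG_FOO/CONFIG_BAR are placeholder options:

  #include <stdio.h>

  #define CONFIG_FOO 1                  /* pretend CONFIG_FOO=y */
                                        /* CONFIG_BAR is left unset */

  /* an option defined to 1 selects the 1, anything else selects the 0 */
  #define __ARG_PLACEHOLDER_1 0,
  #define config_enabled(cfg) _config_enabled(cfg)
  #define _config_enabled(value) __config_enabled(__ARG_PLACEHOLDER_##value)
  #define __config_enabled(arg1_or_junk) ___config_enabled(arg1_or_junk 1, 0)
  #define ___config_enabled(__ignored, val, ...) val

  /* true for =y and =m, replacing the longer
   * #if defined(CONFIG_FOO) || defined(CONFIG_FOO_MODULE) form */
  #define IS_ENABLED(option) \
      (config_enabled(option) || config_enabled(option##_MODULE))

  int main(void)
  {
      printf("FOO enabled: %d\n", IS_ENABLED(CONFIG_FOO));   /* prints 1 */
      printf("BAR enabled: %d\n", IS_ENABLED(CONFIG_BAR));   /* prints 0 */
      return 0;
  }
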
Martin Schwidefsky dbbfe487e5 s390: fix system call restart after inferior call
Git commit 616498813b "s390: system call path micro optimization"
introduced a regression in regard to system call restarting and inferior
function calls via the ptrace interface. The pointer to the system call
table needs to be loaded in sysc_sigpending if do_signal returns with
TIF_SYSCALL set after it restored a system call context.

Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-09-30 13:04:40 +02:00
Martin Schwidefsky 0587d409ec s390/time: return with irqs disabled from psw_idle
Modify the psw_idle waiting logic in entry[64].S to return with
interrupts disabled. This avoids potential issues with udelay
and interrupt loops as interrupts are not reenabled after
clock comparator interrupts.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-08-28 09:19:23 +02:00
Martin Schwidefsky 1f44a22577 s390: convert interrupt handling to use generic hardirq
With the introduction of PCI it became apparent that s390 should
convert to generic hardirqs as too many drivers do not have the
correct dependency for GENERIC_HARDIRQS. On the architecture
level s390 does not have irq lines. It has external interrupts,
I/O interrupts and adapter interrupts. This patch hard-codes all
external interrupts as irq #1, all I/O interrupts as irq #2 and
all adapter interrupts as irq #3. The additional information from
the lowcore associated with the interrupt is stored in the
pt_regs of the interrupt frame, where the interrupt handler can
pick it up. For PCI/MSI interrupts the adapter interrupt handler
scans the relevant bit fields and calls generic_handle_irq with
the virtual irq number for the MSI interrupt.

Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-08-22 12:20:04 +02:00
Linus Torvalds fe489bf450 KVM fixes for 3.11
On the x86 side, there are some optimizations and documentation updates.
 The big ARM/KVM change for 3.11, support for AArch64, will come through
 Catalin Marinas's tree.  s390 and PPC have misc cleanups and bugfixes.
 
 There is a conflict due to "s390/pgtable: fix ipte notify bit" having
 entered 3.10 through Martin Schwidefsky's s390 tree.  This pull request
 has additional changes on top, so this tree's version is the correct one.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "On the x86 side, there are some optimizations and documentation
  updates.  The big ARM/KVM change for 3.11, support for AArch64, will
  come through Catalin Marinas's tree.  s390 and PPC have misc cleanups
  and bugfixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (87 commits)
  KVM: PPC: Ignore PIR writes
  KVM: PPC: Book3S PR: Invalidate SLB entries properly
  KVM: PPC: Book3S PR: Allow guest to use 1TB segments
  KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match
  KVM: PPC: Book3S PR: Fix invalidation of SLB entry 0 on guest entry
  KVM: PPC: Book3S PR: Fix proto-VSID calculations
  KVM: PPC: Guard doorbell exception with CONFIG_PPC_DOORBELL
  KVM: Fix RTC interrupt coalescing tracking
  kvm: Add a tracepoint write_tsc_offset
  KVM: MMU: Inform users of mmio generation wraparound
  KVM: MMU: document fast invalidate all mmio sptes
  KVM: MMU: document fast invalidate all pages
  KVM: MMU: document fast page fault
  KVM: MMU: document mmio page fault
  KVM: MMU: document write_flooding_count
  KVM: MMU: document clear_spte_count
  KVM: MMU: drop kvm_mmu_zap_mmio_sptes
  KVM: MMU: init kvm generation close to mmio wrap-around value
  KVM: MMU: add tracepoint for check_mmio_spte
  KVM: MMU: fast invalidate all mmio sptes
  ...
2013-07-03 13:21:40 -07:00
Martin Schwidefsky 48f6b00c6e s390/irq: store interrupt information in pt_regs
Copy the interrupt parameters from the lowcore to the pt_regs structure
in entry[64].S and reduce the arguments of the low level interrupt handler
to the pt_regs pointer only. In addition move the test-pending-interrupt
loop from do_IRQ to entry[64].S to make sure that interrupt information
is always delivered via pt_regs.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-06-26 21:10:23 +02:00
Heinz Graalfs b764bb1c50 KVM: s390,perf: Detect if perf samples belong to KVM host or guest
This patch is based on an original patch of David Hildenbrand.

The perf core implementation calls architecture specific code in order
to ask for specific information for a particular sample:

perf_instruction_pointer()
When perf core code asks for the instruction pointer, architecture
specific code must detect if a KVM guest was running when the sample
was taken. A sample can be associated with a KVM guest when the PSW
supervisor state bit is set and the PSW instruction pointer part
contains the address of 'sie_exit'.
A KVM guest's instruction pointer information is then retrieved via
gpsw entry pointed to by the sie control-block.

perf_misc_flags()
The perf core code calls this function in order to associate the kernel
vs. user state information with a particular sample. Architecture
specific code must also first detect if a KVM guest was running
at the time the sample was taken.

Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-17 17:10:23 +02:00
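
A minimal user-space model of the detection logic described above (the PSW
flag bits, the structures and the sie_exit address are simplified stand-ins,
not the kernel's types):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PSW_SUPERVISOR (1u << 0)     /* illustrative flags, not the real PSW */
  #define PSW_PROBLEM    (1u << 1)

  struct psw       { unsigned int flags; uint64_t addr; };
  struct sie_block { struct psw gpsw; };   /* guest PSW kept in the SIE block */

  static uint64_t sie_exit_addr = 0x1000;  /* address of the sie_exit label */

  static bool sample_in_guest(const struct psw *host_psw)
  {
      /* supervisor state and PSW address pointing at sie_exit -> guest */
      return (host_psw->flags & PSW_SUPERVISOR) &&
             host_psw->addr == sie_exit_addr;
  }

  static uint64_t sample_ip(const struct psw *host_psw, const struct sie_block *sie)
  {
      /* for guest samples report the guest instruction pointer from gpsw */
      return sample_in_guest(host_psw) ? sie->gpsw.addr : host_psw->addr;
  }

  int main(void)
  {
      struct sie_block sie = { .gpsw = { PSW_PROBLEM, 0x2000 } };
      struct psw host = { PSW_SUPERVISOR, sie_exit_addr };

      printf("guest=%d ip=%#llx\n", sample_in_guest(&host),
             (unsigned long long)sample_ip(&host, &sie));
      return 0;
  }
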
Martin Schwidefsky 7c470539c9 s390/kvm: avoid automatic sie reentry
Do not automatically restart the sie instruction in entry64.S after an
interrupt, return to the caller with a reason code instead. That allows
the C code to deal with RCU and other conditions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
2013-05-21 11:55:26 +03:00
Christian Borntraeger 49b99e1e0d s390/kvm: Provide a way to prevent reentering SIE
Let's provide functions to prevent KVM from reentering SIE and
to kick cpus out of SIE. We cannot use the common kvm_vcpu_kick code,
since we need to kick out guests in places that hold architecture
specific locks (e.g. the pgste lock) which might be needed on the
other cpus - so no waiting is possible.

So let's provide a bit in a private field of the sie control block
that acts as a gatekeeper after we have claimed to be in SIE.
Please note that we do not reuse prog0c, since we want to access
that bit without atomic ops.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
2013-05-21 11:55:23 +03:00
Christian Borntraeger 95d38fd0bc s390/kvm: Mark if a cpu is in SIE
Let's track in a private bit whether the sie control block is active.
We want to track this as closely as possible, so we also have to
instrument the interrupt and program check handlers. Let's use the
existing HANDLE_SIE_INTERCEPT macro.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
2013-05-21 11:55:21 +03:00
Martin Schwidefsky 616498813b s390: system call path micro optimization
Add a pointer to the system call table to the thread_info structure.
The TIF_31BIT bit is set or cleared by SET_PERSONALITY exactly once
for the lifetime of a process. With the pointer to the correct system
call table in thread_info, the system call path in entry64.S can
drop the check for TIF_31BIT which saves a couple of instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-04-26 09:07:05 +02:00
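
The idea of caching the table pointer per task can be sketched in plain C
(the tables, types and SET_PERSONALITY stand-in below are illustrative only):

  #include <stdio.h>

  typedef long (*syscall_fn)(long);

  static long sys_hello(long arg) { return arg + 1; }

  static syscall_fn table_64[] = { sys_hello };
  static syscall_fn table_31[] = { sys_hello };   /* compat table stand-in */

  struct thread_info { syscall_fn *sys_call_table; };

  static void set_personality(struct thread_info *ti, int is_31bit)
  {
      /* done once per process; the hot syscall path no longer tests TIF_31BIT */
      ti->sys_call_table = is_31bit ? table_31 : table_64;
  }

  int main(void)
  {
      struct thread_info ti;

      set_personality(&ti, 0);
      printf("%ld\n", ti.sys_call_table[0](41));   /* prints 42 */
      return 0;
  }
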
Martin Schwidefsky dc7ee00d47 s390: lowcore stack pointer offsets
Store the stack pointers in the lowcore for the kernel stack, the async
stack and the panic stack with the offset required for the first user.
This avoids an unnecessary add instruction on the system call path.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-04-26 09:07:01 +02:00
Martin Schwidefsky 6551fbdfd8 s390: critical section cleanup vs. machine checks
The current machine check code uses the registers stored by the machine
in the lowcore at __LC_GPREGS_SAVE_AREA as the registers of the interrupted
context. The registers 0-7 of a user process can get clobbered if a machine
check interrupts the execution of a critical section in entry[64].S.

The reason is that the critical section cleanup code may need to modify
the PSW and the registers for the previous context to get to the end of a
critical section. If registers 0-7 have to be replaced the relevant copy
will be in the registers, which invalidates the copy in the lowcore. The
machine check handler needs to explicitly store registers 0-7 to the stack.

Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-03-05 10:21:35 +01:00
Hendrik Brueckner 23d18e8d93 s390/cleanup: rename SPP to LPP
The set-program-parameter (SPP) instruction has been renamed to
load-program-parameter (LPP) (see SA23-2260).  Reflect this change
and rename all macro/instruction references.

Also remove the duplicate SPP/LPP entry in the kernel disassembler
instruction list.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-02-14 15:55:24 +01:00
Linus Torvalds c7708fac5a Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 update from Martin Schwidefsky:
 "Add support to generate code for the latest machine zEC12, MOD and XOR
  instruction support for the BPF jit compiler, the dasd safe offline
  feature and the big one: the s390 architecture gets PCI support!!
  Right before the world ends on the 21st ;-)"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (41 commits)
  s390/qdio: rename the misleading PCI flag of qdio devices
  s390/pci: remove obsolete email addresses
  s390/pci: speed up __iowrite64_copy by using pci store block insn
  s390/pci: enable NEED_DMA_MAP_STATE
  s390/pci: no msleep in potential IRQ context
  s390/pci: fix potential NULL pointer dereference in dma_free_seg_table()
  s390/pci: use kmem_cache_zalloc instead of kmem_cache_alloc/memset
  s390/bpf,jit: add support for XOR instruction
  s390/bpf,jit: add support MOD instruction
  s390/cio: fix pgid reserved check
  vga: compile fix, disable vga for s390
  s390/pci: add PCI Kconfig options
  s390/pci: s390 specific PCI sysfs attributes
  s390/pci: PCI hotplug support via SCLP
  s390/pci: CHSC PCI support for error and availability events
  s390/pci: DMA support
  s390/pci: PCI adapter interrupts for MSI/MSI-X
  s390/bitops: find leftmost bit instruction support
  s390/pci: CLP interface
  s390/pci: base support
  ...
2012-12-13 14:20:19 -08:00
Christian Borntraeger ce6a04ac1b s390/kvm: Fix address space mixup
I was chasing down a bug of random validity intercepts on s390.
(guest prefix page not mapped in the host virtual address space). Turns out
that the problem was a wrong address space control element. The
cause was quite complex:

During paging activity a DAT protection during SIE caused a program
interrupt. Normally, the sie retry loop tries to catch all
interrupts during and shortly before sie to rerun the setup. The
problem is now that protection causes a suppressing program interrupt,
causing the PSW to point to the instruction AFTER SIE in case of DAT
protection. This confused the logic of the retry loop into not triggering;
instead we jumped directly back to SIE after return from the program
interrupt (the protection fault handler itself did a rewind of the psw).
This usually works quite well, but:

If now the protection fault handler has to wait, another program
might be scheduled in. Later on the sie process will be scheduled
in again. In that case the content of CR1 (primary address space)
will be wrong because switch_to will put the user space ASCE into CR1
and not the guest ASCE.

In addition the program parameter is also wrong for every protection
fault of a guest, since we don't issue the SPP instruction.

So let's also check for PSW == instruction after SIE in the program
check handler. Instead of expensively checking all program
interruption codes that might be suppressing, we assume that a program
interrupt pointing after SIE was always a program interrupt in SIE.
(Otherwise we have a kernel bug anyway).

We also have to compensate for the rewinding, since the C-level handlers
will do that. Therefore we need to add a nop with the same length
as SIE before the sie_loop.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
CC: stable@vger.kernel.org
CC: Heiko Carstens <heiko.carstens@de.ibm.com>
2012-11-23 11:14:34 +01:00
Martin Schwidefsky 39efd4ec9a s390/ptrace: race of single stepping vs signal delivery
The current single step code is racy in regard to concurrent delivery
of signals. If a signal is delivered after a PER program check occurred
but before the TIF_PER_TRAP bit has been checked in entry[64].S the code
clears TIF_PER_TRAP and then calls do_signal. This is wrong; if the
instruction completed (or has been suppressed) a SIGTRAP should be
delivered to the debugger in any case. Only if the instruction has been
nullified may the SIGTRAP not be sent.

The new logic always sets TIF_PER_TRAP if the program check indicates PER
tracing but removes it again for all program checks that are nullifying.
The effect is that for each change in the PSW address we now get a
single SIGTRAP.

Reported-by: Andreas Arnez <arnez@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-11-23 11:14:33 +01:00
Heiko Carstens b01a37a749 s390/traps: preinitialize program check table
Preinitialize the program check table, so we can put it into the
read-only data section.
Also use only four byte entries for the table, since each program
check handler resides within the first 2GB. Therefore this reduces
the size of the table by 50% on 64 bit builds.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-11-23 11:14:25 +01:00
Al Viro 30dcb0996e s390: switch to saner kernel_execve() semantics
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-10-29 10:54:37 -04:00
Linus Torvalds e1b28147f6 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull second s390 update from Martin Schwidefsky:
 "The big thing in this pull request is the UAPI patch from David, and
  worth mentioning is the page table dumper.  The rest are small
  improvements and bug fixes."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/entry: fix svc number for TIF_SYSCALL system call restart
  s390/mm,vmem: fix vmem_add_mem()/vmem_remove_range()
  s390/vmalloc: have separate modules area
  s390/zcrypt: remove duplicated include from zcrypt_pcixcc.c
  s390/css_chars: remove superfluous ifdef
  s390/chsc: make headers usable
  s390/mm: let kernel text section always begin at 1MB
  s390/mm: fix mapping of read-only kernel text section
  s390/mm: add page table dumper
  s390: add support to start the kernel in 64 bit mode.
  s390/mm,pageattr: remove superfluous EXPORT_SYMBOLs
  s390/mm,pageattr: add more page table walk sanity checks
  s390/mm: fix pmd_huge() usage for kernel mapping
  s390/dcssblk: cleanup device attribute usage
  s390/mm: use pfmf instruction to initialize storage keys
  s390/facilities: cleanup PFMF and HPAGE machine facility detection
  UAPI: (Scripted) Disintegrate arch/s390/include/asm
2012-10-10 19:48:33 +09:00
Linus Torvalds 42859eea96 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal
Pull generic execve() changes from Al Viro:
 "This introduces the generic kernel_thread() and kernel_execve()
  functions, and switches x86, arm, alpha, um and s390 over to them."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (26 commits)
  s390: convert to generic kernel_execve()
  s390: switch to generic kernel_thread()
  s390: fold kernel_thread_helper() into ret_from_fork()
  s390: fold execve_tail() into start_thread(), convert to generic sys_execve()
  um: switch to generic kernel_thread()
  x86, um/x86: switch to generic sys_execve and kernel_execve
  x86: split ret_from_fork
  alpha: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
  alpha: switch to generic kernel_thread()
  alpha: switch to generic sys_execve()
  arm: get rid of execve wrapper, switch to generic execve() implementation
  arm: optimized current_pt_regs()
  arm: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
  arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]
  generic sys_execve()
  generic kernel_execve()
  new helper: current_pt_regs()
  preparation for generic kernel_thread()
  um: kill thread->forking
  um: let signal_delivered() do SIGTRAP on singlestepping into handler
  ...
2012-10-10 12:02:25 +09:00
Martin Schwidefsky 450e47da67 s390/entry: fix svc number for TIF_SYSCALL system call restart
The load of the svc number in the TIF_SYSCALL restart path needs to be
done with an instruction that loads all 64 bits of %r1, 'lh' only loads
32 bits. If the upper half of %r1 is not zero and has the msb set,
entry64.S will try to execute an svc with a really large number.
What will be in the upper half of %r1 depends on the code generated by
gcc for the functions on the do_signal() callchain.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-10-09 14:17:02 +02:00
Al Viro f322220d61 s390: convert to generic kernel_execve()
same situation as with alpha and arm - only massage needed

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-30 23:03:04 -04:00
Al Viro 37fe5d41f6 s390: fold kernel_thread_helper() into ret_from_fork()
... and don't bother with syscall return path in case of kernel
threads.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-30 23:03:03 -04:00
Al Viro 65f22a906e s390: fold execve_tail() into start_thread(), convert to generic sys_execve()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-30 23:03:02 -04:00
Heiko Carstens eb608fb366 s390/exceptions: switch to relative exception table entries
This is the s390 port of 70627654 "x86, extable: Switch to relative
exception table entries".
Reduces the size of our exception tables by 50% on 64 bit builds.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-09-26 15:45:10 +02:00
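
The space saving comes from storing 32-bit offsets relative to the table entry
instead of 64-bit absolute addresses; a self-contained sketch of the idea (not
the kernel's struct or lookup code):

  #include <stdint.h>
  #include <stdio.h>

  struct extable_entry { int32_t insn; int32_t fixup; };

  static uintptr_t extable_insn(const struct extable_entry *e)
  {
      return (uintptr_t)&e->insn + e->insn;      /* add the offset back */
  }

  static uintptr_t extable_fixup(const struct extable_entry *e)
  {
      return (uintptr_t)&e->fixup + e->fixup;
  }

  int main(void)
  {
      static char insn_site, fixup_site;         /* stand-ins for code labels */
      static struct extable_entry e;

      e.insn  = (int32_t)((uintptr_t)&insn_site  - (uintptr_t)&e.insn);
      e.fixup = (int32_t)((uintptr_t)&fixup_site - (uintptr_t)&e.fixup);

      printf("insn ok: %d, fixup ok: %d\n",
             extable_insn(&e)  == (uintptr_t)&insn_site,
             extable_fixup(&e) == (uintptr_t)&fixup_site);
      return 0;
  }
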
Martin Schwidefsky d35339a42d s390: add support for transactional memory
Allow user-space processes to use transactional execution (TX).
If the TX facility is available user space programs can use
transactions for fine-grained serialization based on the data
objects that are referenced during a transaction. This is
useful for lockless data structures and speculative compiler
optimizations.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-09-26 15:45:02 +02:00
Martin Schwidefsky 27f6b41662 s390/vtimer: rework virtual timer interface
The current virtual timer interface is inherently per-cpu and hard to
use. The sole user of the interface is appldata which uses it to execute
a function after a specific amount of cputime has been used over all cpus.

Rework the virtual timer interface to hook into the cputime accounting.
This makes the interface independent from the CPU timer interrupts, and
makes the virtual timers global as opposed to per-cpu.
Overall the code is greatly simplified. The downside is that the accuracy
is not as good as with the original implementation, but it is still good enough
for appldata.

Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-07-20 11:15:08 +02:00
Heiko Carstens a53c8fab3f s390/comments: unify copyright messages and remove file names
Remove the file name from the comment at top of many files. In most
cases the file name was wrong anyway, so it's rather pointless.

Also unify the IBM copyright statement. We did have a lot of slightly
different statements and wanted to change them one after another
whenever a file gets touched. However, that never happened. Instead,
people started to take the old/"wrong" statements as a template
for new files.
So unify all of them in one go.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2012-07-20 11:15:04 +02:00
Heiko Carstens fbe765680d s390/smp: make absolute lowcore / cpu restart parameter accesses more robust
Setting the cpu restart parameters is done in three different fashions:
- directly setting the four parameters individually
- copying the four parameters with memcpy (using 4 * sizeof(long))
- copying the four parameters using a private structure

In addition code in entry*.S relies on a certain order of the restart
members of struct _lowcore.

Make all of this more robust to future changes by adding a
mem_absolute_assign(dest, val) define, which assigns val to dest
using absolute addressing mode. Also the load multiple instructions
in entry*.S have been split into separate load instructions so the
order of the struct _lowcore members doesn't matter anymore.

In addition move the prototypes of memcpy_real/absolute from uaccess.h
to processor.h. These memcpy* variants are not related to uaccess at all.
string.h doesn't seem to match as well, so let's use processor.h.

Also replace the eight byte array in struct _lowcore which represents a
misaligned u64 with a u64. The compiler will always create code that
handles the misaligned u64 correctly.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-06-14 09:09:02 +02:00
Heiko Carstens eb546195a7 s390/sigp: use sigp order code defines in assembly code
Use sigp order code defines in assembly code as well.
With this change all places that use sigp constants should
have been converted to use self describing defines instead
of directly using constants.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-06-05 13:23:36 +02:00
Christian Borntraeger 569ba7d208 s390/kvm: get rid of duplicate instruction
After
commit 5e8010cb50
    s390: replace TIF_SIE with PF_VCPU
there is no need to load the thread info before sie_loop where
it is also loaded.
Get rid of this duplicate instruction.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-06-05 13:23:35 +02:00
Martin Schwidefsky eda0c6d6b0 s390: fix race on TIF_MCCK_PENDING
There is a small race window in the __switch_to code in regard to
the transfer of the TIF_MCCK_PENDING bit from the previous to the
next task. The bit is transferred before the task struct pointer
and the thread-info pointer for the next task has been stored to
lowcore. If a machine check sets the TIF_MCCK_PENDING bit between
the transfer code and the store of current/thread_info the bit
is still set for the previous task. And if the previous task has
terminated it can get lost. The effect is that a pending CRW is
not retrieved until the next machine checks sets TIF_MCCK_PENDING.
To fix this reorder __switch_to to first store the task struct
and thread-info pointer and then do the transfer of the bit.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:46 +02:00
Heiko Carstens d652d59682 s390/time: simplify Kconfig dependency
Use HAVE_MARCH_Z9_109_FEATURES to figure out if stckf is available
at compile time.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:44 +02:00
Heiko Carstens 9ee39e4b84 s390/entry64: avoid SPP code duplication
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:43 +02:00
Heiko Carstens 473e66baad s390/time: always use stckf instead of stck if available
The store clock fast instruction saves a couple of instructions compared
to the store clock instruction. Always use stckf instead of stck if it
is available.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:41 +02:00
Martin Schwidefsky 5e8010cb50 s390: replace TIF_SIE with PF_VCPU
Replace the check for TIF_SIE in the fault handler by a check for PF_VCPU.
With the last user of TIF_SIE gone we can now remove the bit.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:39 +02:00
Martin Schwidefsky 6022afc060 s390: make sie intercept independent of thread_info
HANDLE_SIE_INTERCEPT is called early; use supervisor state and
instruction address to decide if the reset of the PSW to sie_loop
is required.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:38 +02:00
Michael Holzheu 0de9db37f0 s390: initialize backchain for ext_int_handler()
To allow correct stack backtraces the backchain for the external
interrupt handler is now initialized with zero like it is already
done for example by io_int_handler().

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-16 14:42:37 +02:00
Martin Schwidefsky 4c1051e37a [S390] rework idle code
Whenever the cpu loads an enabled wait PSW it will appear as idle to the
underlying host system. The code in default_idle calls vtime_stop_cpu
which does the necessary voodoo to get the cpu time accounting right.
The udelay code just loads an enabled wait PSW. To correct this, rework
the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts
to entry[64].S; vtime_stop_cpu can now be called from anywhere and
vtime_start_cpu is gone. The correction of the cpu time during wakeup
from an enabled wait PSW is done with a critical section in entry[64].S.
As vtime_start_cpu is gone, s390_idle_check can be removed as well.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-03-11 11:59:28 -04:00
Martin Schwidefsky 8b646bd759 [S390] rework smp code
Define struct pcpu and merge some of the NR_CPUS arrays into it, including
__cpu_logical_map, current_set and smp_cpu_state. Split smp related
functions to those operating on physical cpus and the functions operating
on a logical cpu number. Make the functions for physical cpus use a
pointer to a struct pcpu. This hides the knowledge about cpu addresses in
smp.c, entry[64].S and swsusp_asm64.S, thus remove the sigp.h header.

The PSW restart mechanism is used to start secondary cpus, calling a
function on an online cpu, calling a function on the ipl cpu, and for
the nmi signal. Replace the different assembler functions with a
single function restart_int_handler. The new entry point calls a function
whose pointer is stored in the lowcore of the target cpu and it can wait
for the source cpu to stop. This covers all existing use cases.

Overall the code is now simpler and there are ~380 lines less code.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-03-11 11:59:28 -04:00