Commit Graph

69 Commits (redonkable)

Author SHA1 Message Date
Kirill A. Shutemov dcddffd41d mm: do not pass mm_struct into handle_mm_fault
We always have vma->vm_mm around.
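
For callers, the change looks roughly like this (illustrative, not the
full diff):

    /* before */
    ret = handle_mm_fault(mm, vma, address, flags);

    /* after: the mm is taken from vma->vm_mm inside handle_mm_fault() */
    ret = handle_mm_fault(vma, address, flags);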

Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
David Woodhouse 4692400827 iommu/vt-d: Clear PPR bit to ensure we get more page request interrupts
According to the VT-d specification we need to clear the PPR bit in
the Page Request Status register when handling page requests, or the
hardware won't generate any more interrupts.

This wasn't actually necessary on SKL/KBL (which may well be the
subject of a hardware erratum, although it's harmless enough). But
other implementations do appear to get it right, and we only ever get
one interrupt unless we clear the PPR bit.
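
A minimal sketch of the fix, assuming the Page Request Status register
offset (DMAR_PRS_REG) and the PPR bit definition (DMA_PRS_PPR) from
include/linux/intel-iommu.h:

    /* After draining the page request queue, clear the Pending Page
     * Request bit (write-1-to-clear) so the hardware will raise an
     * interrupt for the next batch of requests. */
    if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PPR)
            writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);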

Reported-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Cc: stable@vger.kernel.org
2016-02-15 12:42:38 +00:00
David Woodhouse e57e58bd39 iommu/vt-d: Fix mm refcounting to hold mm_count not mm_users
Holding mm_users works OK for graphics, which was the first user of SVM
with VT-d. However, it works less well for other devices, where we actually
do a mmap() from the file descriptor to which the SVM PASID state is tied.

In this case on process exit we end up with a recursive reference count:
 - The MM remains alive until the file is closed and the driver's release()
   call ends up unbinding the PASID.
 - The VMA corresponding to the mmap() remains intact until the MM is
   destroyed.
 - Thus the file isn't closed, even when exit_files() runs, because the
   VMA is still holding a reference to it. And the MM remains alive…

To address this issue, we *stop* holding mm_users while the PASID is bound.
We already hold mm_count by virtue of the MMU notifier, and that can be
made to be sufficient.

It means that for a period during process exit, the fun part of mmput()
has happened and exit_mmap() has been called so the MM is basically
defunct. But the PGD still exists and the PASID is still bound to it.

During this period, we have to be very careful — exit_mmap() doesn't use
mm->mmap_sem because it doesn't expect anyone else to be touching the MM
(quite reasonably, since mm_users is zero). So we also need to fix the
fault handler to just report failure if mm_users is already zero, and to
temporarily bump mm_users while handling any faults.
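
A sketch of that guard in the fault path ('svm' stands for the driver's
per-PASID state; atomic_inc_not_zero() on mm_users was the idiom of the
day, spelled mmget_not_zero() in later kernels):

    if (!atomic_inc_not_zero(&svm->mm->mm_users)) {
            /* mm already defunct: report the fault as failed */
    } else {
            down_read(&svm->mm->mmap_sem);
            /* ... look up the VMA and call handle_mm_fault() ... */
            up_read(&svm->mm->mmap_sem);
            mmput(svm->mm);         /* drop the temporary reference */
    }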

Additionally, exit_mmap() calls mmu_notifier_release() *before* it tears
down the page tables, which is too early for us to flush the IOTLB for
this PASID. And __mmu_notifier_release() removes every notifier from the
list, so when exit_mmap() finally *does* tear down the mappings and
clear the page tables, we don't get notified. So we work around this by
clearing the PASID table entry in our MMU notifier release() callback.
That way, the hardware *can't* get any pages back from the page tables
before they get cleared.
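
The shape of that workaround, as a rough sketch (the struct layout and
the intel_clear_pasid_entry() helper name are illustrative only):

    static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
    {
            struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);

            /* Clear the PASID table entry first, so the device can no
             * longer walk these page tables; exit_mmap() tears them
             * down after this callback, without notifying us again. */
            intel_clear_pasid_entry(svm);
    }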

Hardware designers have confirmed that the resulting 'PASID not present'
faults should be handled just as gracefully as 'page not present' faults,
the important criterion being that they don't perturb the operation for
any *other* PASID in the system.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Cc: stable@vger.kernel.org
2016-01-13 21:05:46 +00:00
Joerg Roedel 7f8312a3b3 iommu/vt-d: Do access checks before calling handle_mm_fault()
Not doing so is a bug and might trigger a BUG_ON in
handle_mm_fault(). So add the proper permission checks
before calling into mm code.
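
One way such a check can look, comparing the requested access against
the VMA's permissions (the page_req_dsc field names are assumptions):

    static bool access_error(struct vm_area_struct *vma, struct page_req_dsc *req)
    {
            unsigned long requested = 0;

            if (req->exe_req)
                    requested |= VM_EXEC;
            if (req->rd_req)
                    requested |= VM_READ;
            if (req->wr_req)
                    requested |= VM_WRITE;

            /* refuse the fault if it asks for more than the VMA allows */
            return (requested & ~vma->vm_flags) != 0;
    }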

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-By: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-14 15:37:55 +01:00
David Woodhouse 0bdec95ce5 iommu/vt-d: Fix rwxp flags in SVM device fault callback
This is the downside of using bitfields in the struct definition, rather
than doing all the explicit masking and shifting.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-28 15:14:09 +09:00
David Woodhouse 5a10ba27d9 iommu/vt-d: Handle Caching Mode implementations of SVM
Not entirely clear why, but it seems we need to reserve PASID zero and
flush it when we make a PASID entry present.

Quite why we couldn't use the true PASID value isn't clear.
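
A hedged sketch of the allocation side, assuming an IDR-based PASID
allocator ('pasid_idr' and 'pasid_max' are illustrative names):

    /* Under Caching Mode, start at 1 so PASID zero stays reserved; a
     * PASID-cache flush is then issued when the new table entry is
     * made present. */
    pasid = idr_alloc(&iommu->pasid_idr, svm,
                      cap_caching_mode(iommu->cap) ? 1 : 0,
                      pasid_max, GFP_KERNEL);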

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-24 21:06:39 +02:00
David Woodhouse 5d52f482eb iommu/vt-d: Fix SVM IOTLB flush handling
Change the 'pages' parameter to 'unsigned long' to avoid overflow.

Fix the device-IOTLB flush parameter calculation — the size of the IOTLB
flush is indicated by the position of the least significant zero bit in
the address field. For example, a value of 0x12345f000 will flush from
0x123440000 to 0x12347ffff (256KiB).
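
The encoding in that example can be reproduced with a small helper
(illustrative only, not the driver's actual function):

    /* Encode a flush of 'npages' (a power of two; 'base' aligned to
     * that size): set the low bits of the address so that the least
     * significant zero bit marks the size.  base 0x123440000 with
     * npages == 64 yields 0x12345f000, i.e. the 256KiB range above. */
    static u64 svm_flush_addr(u64 base, unsigned long npages)
    {
            unsigned int order = ilog2(npages);

            if (!order)
                    return base;    /* single-page flush */
            return base | (((1ULL << (order - 1)) - 1) << VTD_PAGE_SHIFT);
    }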

Finally, cap_pgsel_inv() is not relevant to SVM; the spec says that
*all* implementations must support page-selective invalidation for
"first-level" translations. So don't check for it.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-20 16:26:21 +01:00
Dan Carpenter 3c7c2f3288 iommu/vt-d: fix a loop in prq_event_thread()
There is an extra semi-colon on this if statement so we always break on
the first iteration.
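
The pattern, for illustration (list and field names approximate):

    list_for_each_entry_rcu(svm, &global_svm_list, list) {
            if (svm->pasid == req->pasid);  /* <-- stray semicolon */
                    break;                  /* always taken */
    }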

Fixes: 0204a49609 ('iommu/vt-d: Add callback to device driver on page faults')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-18 15:26:04 +01:00
David Woodhouse e034992160 iommu/vt-d: Fix IOTLB flushing for global pages
When flushing kernel-mode PASIDs, we need to flush global pages too.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-16 19:37:04 +01:00
David Woodhouse 7f92a2e910 iommu/vt-d: Fix address shifting in page request handler
This really should be VTD_PAGE_SHIFT, not PAGE_SHIFT. Not that we ever
really anticipate seeing this used on IA64, but we should get it right
anyway.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-16 17:22:34 +01:00
Dan Carpenter 95fb6144bb iommu/vt-d: shift wrapping bug in prq_event_thread()
The "req->addr" variable is a bit field declared as "u64 addr:52;".
The "address" variable is a u64.  We need to cast "req->addr" to a u64
before the shift or the result is truncated to 52 bits.
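
The fix is the cast itself:

    /* Without the cast, the result of the shift is truncated to the
     * bit-field's 52 bits, as described above. */
    address = (u64)req->addr << VTD_PAGE_SHIFT;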

Fixes: a222a7f0bb ('iommu/vt-d: Implement page request handling')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 21:16:47 +01:00
David Woodhouse 26322ab55a iommu/vt-d: Fix NULL pointer dereference in page request error case
Dan Carpenter pointed out an error path which could lead to us
dereferencing the 'svm' pointer after we know it to be NULL because the
PASID lookup failed. Fix that, and make it less likely to happen again.

Fixes: a222a7f0bb ('iommu/vt-d: Implement page request handling')
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 21:16:22 +01:00
David Woodhouse 5cec753709 iommu/vt-d: Implement SVM_FLAG_SUPERVISOR_MODE for kernel access
This is only usable for the static 1:1 mapping of physical memory.

Any access to vmalloc or module regions will require some way of doing
an IOTLB flush. It's theoretically possible to hook into the
tlb_flush_kernel_range() function, but that seems like overkill — most
of the addresses accessed through a kernel PASID *will* be in the 1:1
mapping.

If we really need to allow access to more interesting kernel regions,
then the answer will probably be an explicit IOTLB flush call after use,
akin to the DMA API's unmap function.

In fact, it might be worth introducing that sooner rather than later, and
making it just BUG() if the address isn't in the static 1:1 mapping.
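
A hedged usage sketch, assuming the intel_svm_bind_mm() interface
declared in include/linux/intel-svm.h:

    int pasid, ret;

    /* Bind a supervisor PASID (no user mm); DMA tagged with it may
     * only touch the kernel's static 1:1 mapping of physical memory. */
    ret = intel_svm_bind_mm(dev, &pasid, SVM_FLAG_SUPERVISOR_MODE, NULL);
    if (!ret) {
            /* program 'pasid' into the device for kernel-mode DMA */
    }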

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 15:52:21 +01:00
David Woodhouse 569e4f7782 iommu/vt-d: Implement SVM_FLAG_PRIVATE_PASID to allocate unique PASIDs
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 15:35:32 +01:00
David Woodhouse 0204a49609 iommu/vt-d: Add callback to device driver on page faults
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 15:35:28 +01:00
David Woodhouse a222a7f0bb iommu/vt-d: Implement page request handling
Largely based on the driver-mode implementation by Jesse Barnes.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 15:35:19 +01:00
David Woodhouse 907fea3491 iommu/vt-d: Implement deferred invalidate for SVM
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 13:22:35 +01:00
David Woodhouse 2f26e0a9c9 iommu/vt-d: Add basic SVM PASID support
This provides basic PASID support for endpoint devices, tested with a
version of the i915 driver.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 12:55:45 +01:00
David Woodhouse 8a94ade4ce iommu/vt-d: Add initial support for PASID tables
Add CONFIG_INTEL_IOMMU_SVM, and allocate PASID tables on supported hardware.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 11:24:51 +01:00