Commit Graph

357 Commits (d834c5ab83febf9624ad3b16c3c348aa1e02014c)

Joerg Roedel e3d10af112 iommu: Make iommu_device_link/unlink take a struct iommu_device
This makes the interface more consistent with
iommu_device_sysfs_add/remove.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-02-10 13:44:57 +01:00
Joerg Roedel 39ab9555c2 iommu: Add sysfs bindings for struct iommu_device
There is currently support for iommu sysfs bindings, but
those need to be implemented in the IOMMU drivers. Add a
more generic version of this by adding a struct device to
struct iommu_device and using it for the sysfs bindings.

Also convert the AMD and Intel IOMMU drivers to make use of
it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-02-10 13:44:57 +01:00
Joerg Roedel b0119e8708 iommu: Introduce new 'struct iommu_device'
This struct represents one hardware iommu in the iommu core
code. For now it only has the iommu-ops associated with it,
but that will be extended soon.

The register/unregister interface is also added, and the
Intel and AMD IOMMU drivers are converted to make use of it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-02-10 13:44:57 +01:00
David Dillow f7116e115a iommu/vt-d: Don't over-free page table directories
dma_pte_free_level() recurses down the IOMMU page tables and frees
directory pages that are entirely contained in the given PFN range.
Unfortunately, it incorrectly calculates the starting address covered
by the PTE under consideration, which can lead to it clearing an entry
that is still in use.

This occurs if we have a scatterlist with an entry that has a length
greater than 1026 MB and is aligned to 2 MB for both the IOMMU and
physical addresses. For example, if __domain_mapping() is asked to map a
two-entry scatterlist with 2 MB and 1028 MB segments to PFN 0xffff80000,
and dma_pte_free_pagetable() is then asked to unmap the PFNs from
0xffff80200 to 0xffffc05ff, it will also incorrectly clear the PFNs from
0xffff80000 to 0xffff801ff because of this issue. The current code will
set level_pfn to 0xffff80200, and 0xffff80200-0xffffc01ff fits inside
the range being cleared. Properly setting the level_pfn for the current
level under consideration catches that this PTE is outside of the range
being cleared.
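
As a rough illustration of the address arithmetic in the example above (a
standalone sketch built from the figures in this message, not the driver's
code; the "one level down" mask is an assumption used only to show the effect):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The PTE in the example covers 0x40000 PFNs (1 GiB of 4 KiB
         * pages), so its starting PFN must be masked with that size. */
        uint64_t pfn        = 0xffff80200ULL;
        uint64_t level_size = 0x40000ULL; /* PFNs covered by this PTE    */
        uint64_t lower_size = 0x200ULL;   /* a smaller (lower level) size */

        uint64_t correct = pfn & ~(level_size - 1); /* 0xffff80000 */
        uint64_t buggy   = pfn & ~(lower_size - 1); /* 0xffff80200 */

        /* The buggy start makes the PTE look fully contained in the range
         * 0xffff80200..0xffffc05ff being cleared, even though it also maps
         * 0xffff80000..0xffff801ff. */
        printf("correct level_pfn %#llx, buggy %#llx\n",
               (unsigned long long)correct, (unsigned long long)buggy);
        return 0;
    }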

This patch also changes the value passed into dma_pte_free_level() when
it recurses. This only affects the first PTE of the range being cleared,
and is handled by the existing code that ensures we start our cursor no
lower than start_pfn.

This was found when using dma_map_sg() to map large chunks of contiguous
memory, which immediately led to faults on the first access of the
erroneously-deleted mappings.

Fixes: 3269ee0bd6 ("intel-iommu: Fix leaks in pagetable freeing")
Reviewed-by: Benjamin Serebrin <serebrin@google.com>
Signed-off-by: David Dillow <dillow@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-01-31 12:50:05 +01:00
Ashok Raj 21e722c4c8 iommu/vt-d: Tylersburg isoch identity map check is done too late.
The check to set the identity map for Tylersburg is done too late. It
needs to happen before the check for the identity_map domain.

To: Joerg Roedel <joro@8bytes.org>
To: David Woodhouse <dwmw2@infradead.org>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: Ashok Raj <ashok.raj@intel.com>

Fixes: 86080ccc22 ("iommu/vt-d: Allocate si_domain in init_dmars()")
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Reported-by: Yunhong Jiang <yunhong.jiang@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-01-31 12:50:05 +01:00
Eric Auger 0659b8dc45 iommu/vt-d: Implement reserved region get/put callbacks
This patch registers the [FEE0_0000h - FEF0_0000h] 1MB MSI
range as a reserved region and RMRR regions as direct regions.

This allows those reserved regions to be reported in the
iommu-group sysfs.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-23 11:48:17 +00:00
Jacob Pan 65ca7f5f7d iommu/vt-d: Fix pasid table size encoding
Different encodings are used to represent supported PASID bits
and number of PASID table entries.
The current code assigns ecap_pss directly to the extended context
table entry PTS field, which is wrong and could result in writing
non-zero bits to the reserved fields. IOMMU fault reason
11 will be reported when reserved bits are nonzero.
This patch converts ecap_pss to the extended context entry PTS encoding
based on VT-d spec Chapter 9.4, as follows:
 - number of PASID bits = ecap_pss + 1
 - number of PASID table entries = 2^(pts + 5)
The software-assigned pasid_max limit is also respected, to
match the allocation limit of the PASID table.
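
A small standalone sketch of that encoding arithmetic (illustrative only, not
the driver's helper; the pasid_max clamp mentioned above is left out):

    #include <assert.h>

    /* entries = 2^(pts + 5) and PASID bits = ecap_pss + 1, so a table
     * holding 2^(pss + 1) entries needs pts = (pss + 1) - 5. */
    static unsigned int pts_for_pasid_bits(unsigned int pasid_bits)
    {
        return pasid_bits > 5 ? pasid_bits - 5 : 0;
    }

    int main(void)
    {
        /* e.g. ecap_pss = 19 -> 20 PASID bits -> 2^20 entries -> pts = 15 */
        assert(pts_for_pasid_bits(19 + 1) == 15);
        return 0;
    }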

cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
cc: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Tested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Fixes: 2f26e0a9c9 ('iommu/vt-d: Add basic SVM PASID support')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-01-04 15:18:57 +01:00
Xunlei Pang aec0e86172 iommu/vt-d: Flush old iommu caches for kdump when the device gets context mapped
We hit DMAR faults on both hpsa P420i and P421 SmartArray controllers
under kdump. This can be reproduced reliably on several different machines;
the dmesg log looks like:
HP HPSA Driver (v 3.4.16-0)
hpsa 0000:02:00.0: using doorbell to reset controller
hpsa 0000:02:00.0: board ready after hard reset.
hpsa 0000:02:00.0: Waiting for controller to respond to no-op
DMAR: Setting identity map for device 0000:02:00.0 [0xe8000 - 0xe8fff]
DMAR: Setting identity map for device 0000:02:00.0 [0xf4000 - 0xf4fff]
DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6e000 - 0xbdf6efff]
DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6f000 - 0xbdf7efff]
DMAR: Setting identity map for device 0000:02:00.0 [0xbdf7f000 - 0xbdf82fff]
DMAR: Setting identity map for device 0000:02:00.0 [0xbdf83000 - 0xbdf84fff]
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Read] Request device [02:00.0] fault addr fffff000 [fault reason 06] PTE Read access is not set
hpsa 0000:02:00.0: controller message 03:00 timed out
hpsa 0000:02:00.0: no-op failed; re-trying

After some debugging, we found that the faulting address comes from DMA
initiated at the driver probe stage after reset (not in-flight DMA), and the
corresponding PTE value is correct; the fault is likely due to stale IOMMU
caches left over from the in-flight DMA that preceded it.

Thus we need to flush the old caches after context mapping is set up for the
device, at which point the device is expected to have finished its reset
during driver probe and no in-flight DMA exists anymore.

It is not clear whether the hardware is responsible for invalidating all the
related caches previously allocated in the IOMMU, but that seems not to be
the case for hpsa; in practice many device drivers have problems properly
resetting their hardware. In any case, flushing (again) by software in the
kdump kernel when the device gets context mapped, which is a quite infrequent
operation, does little harm.

With this patch, the problematic machine can survive the kdump tests.

CC: Myron Stowe <myron.stowe@gmail.com>
CC: Joseph Szczypek <jszczype@redhat.com>
CC: Don Brace <don.brace@microsemi.com>
CC: Baoquan He <bhe@redhat.com>
CC: Dave Young <dyoung@redhat.com>
Fixes: 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel")
Fixes: dbcd861f25 ("iommu/vt-d: Do not re-use domain-ids from the old kernel")
Fixes: cf484d0e69 ("iommu/vt-d: Mark copied context entries")
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Tested-by: Don Brace <don.brace@microsemi.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-01-04 15:14:04 +01:00
Linus Torvalds e71c3978d6 Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull smp hotplug updates from Thomas Gleixner:
 "This is the final round of converting the notifier mess to the state
  machine. The removal of the notifiers and the related infrastructure
  will happen around rc1, as there are conversions outstanding in other
  trees.

  The whole exercise removed about 2000 lines of code in total and in
  course of the conversion several dozen bugs got fixed. The new
  mechanism allows to test almost every hotplug step standalone, so
  usage sites can exercise all transitions extensively.

  There is more room for improvement, like integrating all the
  pointlessly different architecture mechanisms of synchronizing,
  setting cpus online etc into the core code"

* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
  tracing/rb: Init the CPU mask on allocation
  soc/fsl/qbman: Convert to hotplug state machine
  soc/fsl/qbman: Convert to hotplug state machine
  zram: Convert to hotplug state machine
  KVM/PPC/Book3S HV: Convert to hotplug state machine
  arm64/cpuinfo: Convert to hotplug state machine
  arm64/cpuinfo: Make hotplug notifier symmetric
  mm/compaction: Convert to hotplug state machine
  iommu/vt-d: Convert to hotplug state machine
  mm/zswap: Convert pool to hotplug state machine
  mm/zswap: Convert dst-mem to hotplug state machine
  mm/zsmalloc: Convert to hotplug state machine
  mm/vmstat: Convert to hotplug state machine
  mm/vmstat: Avoid on each online CPU loops
  mm/vmstat: Drop get_online_cpus() from init_cpu_node_state/vmstat_cpu_dead()
  tracing/rb: Convert to hotplug state machine
  oprofile/nmi timer: Convert to hotplug state machine
  net/iucv: Use explicit clean up labels in iucv_init()
  x86/pci/amd-bus: Convert to hotplug state machine
  x86/oprofile/nmi: Convert to hotplug state machine
  ...
2016-12-12 19:25:04 -08:00
Anna-Maria Gleixner 21647615db iommu/vt-d: Convert to hotplug state machine
Install the callbacks via the state machine.

Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org
Cc: rt@linutronix.de
Cc: David Woodhouse <dwmw2@infradead.org>
Link: http://lkml.kernel.org/r/20161126231350.10321-14-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-12-02 00:52:37 +01:00
Linus Torvalds 105ecadc6d Merge git://git.infradead.org/intel-iommu
Pull IOMMU fixes from David Woodhouse:
 "Two minor fixes.

  The first fixes the assignment of SR-IOV virtual functions to the
  correct IOMMU unit, and the second fixes the excessively large (and
  physically contiguous) PASID tables used with SVM"

* git://git.infradead.org/intel-iommu:
  iommu/vt-d: Fix PASID table allocation
  iommu/vt-d: Fix IOMMU lookup for SR-IOV Virtual Functions
2016-11-27 08:24:46 -08:00
Joerg Roedel bea64033dd iommu/vt-d: Fix dead-locks in disable_dmar_iommu() path
It turns out that the disable_dmar_iommu() code-path tried
to take the device_domain_lock recursively, which will
deadlock when this code runs on DMAR removal. Fix both
code-paths that could lead to the deadlock.

Fixes: 55d940430a ('iommu/vt-d: Get rid of domain->iommu_lock')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-11-08 15:08:26 +01:00
Ashok Raj 1c387188c6 iommu/vt-d: Fix IOMMU lookup for SR-IOV Virtual Functions
The VT-d specification (§8.3.3) says:
    ‘Virtual Functions’ of a ‘Physical Function’ are under the scope
    of the same remapping unit as the ‘Physical Function’.

The BIOS is not required to list all the possible VFs in the scope
tables, and arguably *shouldn't* make any attempt to do so, since there
could be a huge number of them.

This has been broken basically forever: the VF is never going to match
against a specific unit's scope, so it ends up being assigned to the
INCLUDE_ALL IOMMU. That was always actually correct by coincidence, but
now that we're looking at Root-Complex integrated devices with SR-IOV
support it's going to start being wrong.

Fix it to simply use pci_physfn() before doing the lookup for PCI devices.
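
A minimal sketch of that idea (an illustrative fragment, not the exact diff;
'dev' is assumed to be the struct device being looked up):

    /* Resolve a VF to its PF before matching it against DRHD device
     * scopes, since the spec puts VFs under the same remapping unit
     * as their PF. */
    struct pci_dev *pdev = NULL;

    if (dev_is_pci(dev))
            pdev = pci_physfn(to_pci_dev(dev));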

Cc: stable@vger.kernel.org
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2016-10-30 05:32:51 -06:00
Joerg Roedel 1c5ebba95b iommu/vt-d: Make sure RMRRs are mapped before domain goes public
When a domain is allocated through the get_valid_domain_for_dev
path, it will be context-mapped before the RMRR regions are
mapped in the page-table. This opens a short time window
where device-accesses to these regions fail and cause DMAR
faults.

Fix this by mapping the RMRR regions before the domain is
context-mapped.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-09-05 13:00:28 +02:00
Joerg Roedel 76208356a0 iommu/vt-d: Split up get_domain_for_dev function
Split out the search for an already existing domain and the
context mapping of the device to the new domain.

This allows possible RMRR regions to be mapped into the domain
before it is context mapped.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-09-05 13:00:28 +02:00
Krzysztof Kozlowski 00085f1efa dma-mapping: use unsigned long for dma_attrs
The dma-mapping core and the implementations do not change the DMA
attributes passed by pointer.  Thus the pointer can point to const data.
However the attributes do not have to be a bitfield.  Instead unsigned
long will do fine:

1. This is just simpler.  Both in terms of reading the code and setting
   attributes.  Instead of initializing local attributes on the stack
   and passing pointer to it to dma_set_attr(), just set the bits.

2. It brings safety and allows const-correctness checking because the
   attributes are passed by value.
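
A before/after sketch of the caller-side change described in point 1 above
(an illustrative fragment, not part of this patch; dev, buf, len and handle
are hypothetical locals):

    /* before: attributes live in a struct on the stack */
    DEFINE_DMA_ATTRS(attrs);
    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
    handle = dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);

    /* after: attributes are plain bits in an unsigned long */
    handle = dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE,
                                  DMA_ATTR_WRITE_COMBINE);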

Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;

    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
     )

and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;

    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
     )

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-08-04 08:50:07 -04:00
Linus Torvalds dd9671172a IOMMU Updates for Linux v4.8
In the updates:
 
 	* Big endian support and preparation for deferred probing for the
 	  Exynos IOMMU driver
 
 	* Simplifications in iommu-group id handling
 
 	* Support for Mediatek generation one IOMMU hardware
 
 	* Conversion of the AMD IOMMU driver to use the generic IOVA
 	  allocator. This driver now also benefits from the recent
 	  scalability improvements in the IOVA code.
 
 	* Preparations to use generic DMA mapping code in the Rockchip
 	  IOMMU driver
 
 	* Device tree adaption and conversion to use generic page-table
 	  code for the MSM IOMMU driver
 
 	* An iova_to_phys optimization in the ARM-SMMU driver to greatly
 	  improve page-table teardown performance with VFIO
 
 	* Various other small fixes and conversions
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABAgAGBQJXl3e+AAoJECvwRC2XARrjMIgP/1Mm9qIfcaAxKY4ByqbVfrH8
 313PO6rpwUhhywUmnf/1F/x+JbuLv8MmRXfSc106mdB1rq9NXpkORYKrqVxs0cSq
 6u6TzZWbF6WN1ipqXxDITNFBSy7u97K1VuFaKyYFfLbg8xrkcdkMZJ7BqM2xIEdk
 rnRKcfHo6wsmCXJ6InsUPmKAqU6AfMewZTGjO+v77Gce0rZEbsJ8n7BRKC9vO2bc
 akvN2W+zzEUSyhbuyYQBG+agpmC5GJvz4u+6QvAP5sxTWfAsnwAoPpP4xxR+/KjT
 eicHlja4v0YK6Hr4AJaMxoKfKIrCdqpWm0D2tg/edyWZCeg98AW/w7/s0I8OD3ao
 Otj6IqC8nPk0pYciOeEPQ7aqPbvKAqU2FYWt7lWamrdr98u2R3p2nXGl0KthoAj6
 JqzrCZXvBS7sj1IPLlGpj939yvbKbjpE0p7y1qhI1VEBXoBWFNvlKydkYx76BTGK
 F6paGVqn2Zwy00AqAsylTEkvIK063zwShZ6nPqz4bMdVlgzjrjCzdDecjfbHr8Ic
 6D2oCwyF+RJ8qw+Ecm9EmWFik80sgb+iUTeeYEXNf+YzLYt5McIj7fi3N+sUPel3
 YJ4S4x0sIpgUZZ1i+rOo8ZPAFHRU6SRPYV+ewaeYKrMt+Un5dTn9SddpqrJdbiUu
 YrF36BaQjc123IRGKrSd
 =xiS2
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

 - big-endian support and preparation for deferred probing for the Exynos
   IOMMU driver

 - simplifications in iommu-group id handling

 - support for Mediatek generation one IOMMU hardware

 - conversion of the AMD IOMMU driver to use the generic IOVA allocator.
   This driver now also benefits from the recent scalability
   improvements in the IOVA code.

 - preparations to use generic DMA mapping code in the Rockchip IOMMU
   driver

 - device tree adaption and conversion to use generic page-table code
   for the MSM IOMMU driver

 - an iova_to_phys optimization in the ARM-SMMU driver to greatly
   improve page-table teardown performance with VFIO

 - various other small fixes and conversions

* tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (59 commits)
  iommu/amd: Initialize dma-ops domains with 3-level page-table
  iommu/amd: Update Alias-DTE in update_device_table()
  iommu/vt-d: Return error code in domain_context_mapping_one()
  iommu/amd: Use container_of to get dma_ops_domain
  iommu/amd: Flush iova queue before releasing dma_ops_domain
  iommu/amd: Handle IOMMU_DOMAIN_DMA in ops->domain_free call-back
  iommu/amd: Use dev_data->domain in get_domain()
  iommu/amd: Optimize map_sg and unmap_sg
  iommu/amd: Introduce dir2prot() helper
  iommu/amd: Implement timeout to flush unmap queues
  iommu/amd: Implement flush queue
  iommu/amd: Allow NULL pointer parameter for domain_flush_complete()
  iommu/amd: Set up data structures for flush queue
  iommu/amd: Remove align-parameter from __map_single()
  iommu/amd: Remove other remains of old address allocator
  iommu/amd: Make use of the generic IOVA allocator
  iommu/amd: Remove special mapping code for dma_ops path
  iommu/amd: Pass gfp-flags to iommu_map_page()
  iommu/amd: Implement apply_dm_region call-back
  iommu/amd: Create a list of reserved iova addresses
  ...
2016-08-01 07:25:10 -04:00
Linus Torvalds 194dc870a5 Add braces to avoid "ambiguous ‘else’" compiler warnings
Some of our "for_each_xyz()" macro constructs make gcc unhappy about
lack of braces around if-statements inside or outside the loop, because
the loop construct itself has an "if-then-else" statement inside of it.

The resulting warnings look something like this:

  drivers/gpu/drm/i915/i915_debugfs.c: In function ‘i915_dump_lrc’:
  drivers/gpu/drm/i915/i915_debugfs.c:2103:6: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wparentheses]
     if (ctx != dev_priv->kernel_context)
        ^

even if the code itself is fine.

Since the warning is fairly easy to avoid by adding a braces around the
if-statement near the for_each_xyz() construct, do so, rather than
disabling the otherwise potentially useful warning.

(The if-then-else statements used in the "for_each_xyz()" constructs are
designed to be inherently safe even with no braces, but in this case
it's quite understandable that gcc isn't really able to tell that).

This finally leaves the standard "allmodconfig" build with just a
handful of remaining warnings, so new and valid warnings hopefully will
stand out.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-27 20:03:31 -07:00
Joerg Roedel f360d3241f Merge branches 'x86/amd', 'x86/vt-d', 'arm/exynos', 'arm/mediatek', 'arm/msm', 'arm/rockchip', 'arm/smmu' and 'core' into next 2016-07-26 16:02:37 +02:00
Wei Yang 5c365d18a7 iommu/vt-d: Return error code in domain_context_mapping_one()
In commit 55d940430ab9 ("iommu/vt-d: Get rid of domain->iommu_lock"),
the error handling path was changed a little, which makes the function
always return 0.

This patch fixes that.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Fixes: 55d940430a ('iommu/vt-d: Get rid of domain->iommu_lock')
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-07-14 10:26:30 +02:00
Aaron Campbell 0caa7616a6 iommu/vt-d: Fix infinite loop in free_all_cpu_cached_iovas
Per VT-d spec Section 10.4.2 ("Capability Register"), the maximum
number of possible domains is 64K; indeed this is the maximum value
that the cap_ndoms() macro will expand to.  Since the value 65536
will not fit in a u16, the 'did' variable must be promoted to an
int, otherwise the test for < 65536 will always be true and the
loop will never end.
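
The type issue can be reproduced in a few lines of standalone C (illustrative
only):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t did = 0;

        /* A 16-bit 'did' wraps from 65535 back to 0 and never reaches
         * 65536, so a loop like the one below would never terminate
         * (left commented out for that reason):
         *
         *     for (did = 0; did < 65536; did++) { ... }
         */
        printf("u16 maximum is %u\n", (unsigned int)UINT16_MAX);
        (void)did;
        return 0;
    }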

The symptom, in my case, was a hung machine during suspend.

Fixes: 3bd4f9112f ("iommu/vt-d: Fix overflow of iommu->domains array")
Signed-off-by: Aaron Campbell <aaron@monkey.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-07-04 13:34:52 +02:00
Jan Niehusmann 3bd4f9112f iommu/vt-d: Fix overflow of iommu->domains array
The valid range of 'did' in get_iommu_domain(*iommu, did)
is 0..cap_ndoms(iommu->cap), so don't exceed that
range in free_all_cpu_cached_iovas().

The user-visible impact of the out-of-bounds access is the machine
hanging on suspend-to-ram. It is, in fact, a kernel panic, but due
to already suspended devices, that's often not visible to the user.

Fixes: 22e2f9fa63 ("iommu/vt-d: Use per-cpu IOVA caching")
Signed-off-by: Jan Niehusmann <jan@gondor.com>
Tested-By: Marius Vlad <marius.c.vlad@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-06-27 13:21:37 +02:00
Joerg Roedel a4c34ff1c0 iommu/vt-d: Enable QI on all IOMMUs before setting root entry
This seems to be required on some X58 chipsets on systems
with more than one IOMMU. QI does not work until it is
enabled on all IOMMUs in the system.

Reported-by: Dheeraj CVR <cvr.dheeraj@gmail.com>
Tested-by: Dheeraj CVR <cvr.dheeraj@gmail.com>
Fixes: 5f0a7f7614 ('iommu/vt-d: Make root entry visible for hardware right after allocation')
Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-06-17 11:29:48 +02:00
Wei Yang 86f004c77c iommu/vt-d: Reduce extra first level entry in iommu->domains
In commit <8bf478163e69> ("iommu/vt-d: Split up iommu->domains array"),
iommu->domains is split into two levels, where each first level entry
holds 256 second level entries. In case ndomains is an exact multiple
of 256, the current implementation allocates one extra first level entry.

This patch refines the calculation to drop the extra first level entry.
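
A standalone sketch of the arithmetic (illustrative only; the "shift and add
one" form is an assumption about the old calculation, shown just to make the
off-by-one visible):

    #include <assert.h>

    int main(void)
    {
        unsigned int ndomains = 512;  /* an exact multiple of 256 */

        unsigned int with_extra = (ndomains >> 8) + 1;    /* 3 first level entries */
        unsigned int exact      = (ndomains + 255) / 256; /* 2 first level entries */

        assert(with_extra == 3 && exact == 2);
        return 0;
    }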

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-06-15 13:36:58 +02:00
Linus Torvalds 2566278551 Merge git://git.infradead.org/intel-iommu
Pull intel IOMMU updates from David Woodhouse:
 "This patchset improves the scalability of the Intel IOMMU code by
  resolving two spinlock bottlenecks and eliminating the linearity of
  the IOVA allocator, yielding up to ~5x performance improvement and
  approaching 'iommu=off' performance"

* git://git.infradead.org/intel-iommu:
  iommu/vt-d: Use per-cpu IOVA caching
  iommu/iova: introduce per-cpu caching to iova allocation
  iommu/vt-d: change intel-iommu to use IOVA frame numbers
  iommu/vt-d: avoid dev iotlb logic for domains with no dev iotlbs
  iommu/vt-d: only unmap mapped entries
  iommu/vt-d: correct flush_unmaps pfn usage
  iommu/vt-d: per-cpu deferred invalidation queues
  iommu/vt-d: refactoring of deferred flush entries
2016-05-27 13:49:24 -07:00
Joerg Roedel 6c0b43df74 Merge branches 'arm/io-pgtable', 'arm/rockchip', 'arm/omap', 'x86/vt-d', 'ppc/pamu', 'core' and 'x86/amd' into next 2016-05-09 19:39:17 +02:00
Omer Peleg 22e2f9fa63 iommu/vt-d: Use per-cpu IOVA caching
Commit 9257b4a2 ('iommu/iova: introduce per-cpu caching to iova allocation')
introduced per-CPU IOVA caches to massively improve scalability. Use them.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
[dwmw2: split out VT-d part into a separate patch]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:44:48 -04:00
Omer Peleg 2aac630429 iommu/vt-d: change intel-iommu to use IOVA frame numbers
Make intel-iommu map/unmap/invalidate work with IOVA pfns instead of
pointers to "struct iova". This avoids using the iova struct from the IOVA
red-black tree and the resulting explicit find_iova() on unmap.

This patch will allow us to cache IOVAs in the next patch, in order to
avoid rbtree operations for the majority of map/unmap operations.

Note: In eliminating the find_iova() operation, we have also eliminated
the sanity check previously done in the unmap flow. Arguably, this was
overhead that is better avoided in production code, but it could be
brought back as a debug option for driver development.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, fixed to not break iova api, and reworded
 the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:07:22 -04:00
Omer Peleg 0824c5920b iommu/vt-d: avoid dev iotlb logic for domains with no dev iotlbs
This patch avoids taking the device_domain_lock in iommu_flush_dev_iotlb()
for domains with no dev iotlb devices.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[gvdl@google.com: fixed locking issues]
Signed-off-by: Godfrey van der Linden <gvdl@google.com>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:06:15 -04:00
Omer Peleg 769530e4ba iommu/vt-d: only unmap mapped entries
Current unmap implementation unmaps the entire area covered by the IOVA
range, which is a power-of-2 aligned region. The corresponding map,
however, only maps those pages originally mapped by the user. This
discrepancy can lead to unmapping of already unmapped entries, which is
unneeded work.

With this patch, only mapped pages are unmapped. This is also a baseline
for a map/unmap implementation based on IOVAs and not iova structures,
which will allow caching.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:06:01 -04:00
Omer Peleg f5c0c08b1e iommu/vt-d: correct flush_unmaps pfn usage
Change flush_unmaps() to correctly pass iommu_flush_iotlb_psi()
dma addresses.  (x86_64 mm and dma have the same size for pages
at the moment, but this usage improves consistency.)

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:56 -04:00
Omer Peleg aa4732406e iommu/vt-d: per-cpu deferred invalidation queues
The IOMMU's IOTLB invalidation is a costly process.  When iommu mode
is not set to "strict", it is done asynchronously. Current code
amortizes the cost of invalidating IOTLB entries by batching all the
invalidations in the system and performing a single global invalidation
instead. The code queues pending invalidations in a global queue that
is accessed under the global "async_umap_flush_lock" spinlock, which
can result in significant spinlock contention.

This patch splits this deferred queue into multiple per-cpu deferred
queues, and thus gets rid of the "async_umap_flush_lock" and its
contention.  To keep existing deferred invalidation behavior, it still
invalidates the pending invalidations of all CPUs whenever a CPU
reaches its watermark or a timeout occurs.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:24 -04:00
Omer Peleg 314f1dc140 iommu/vt-d: refactoring of deferred flush entries
Currently, the information for a deferred flush is spread across several
lists in the flush tables. Instead, move all information about a specific
flush to a single entry in this table.

This patch does not introduce any functional change.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:20 -04:00
Dan Carpenter 0b74ecdfbe iommu/vt-d: Silence an uninitialized variable warning
My static checker complains that "dma_alias" is uninitialized unless we
are dealing with a pci device.  This is true but harmless.  Anyway, we
can flip the condition around to silence the warning.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 14:51:47 +02:00
Michael S. Tsirkin 3d1a2442d2 x86/vt-d: Fix comment for dma_pte_free_pagetable()
dma_pte_free_pagetable no longer depends on last level ptes
being clear; it clears them itself. Fix up the comment to
match.

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:00:37 +02:00
Joerg Roedel e6a8c9b337 iommu/vt-d: Use BUS_NOTIFY_REMOVED_DEVICE in hotplug path
In the PCI hotplug path of the Intel IOMMU driver, replace
the usage of the BUS_NOTIFY_DEL_DEVICE notifier, which is
executed before the driver is unbound from the device, with
BUS_NOTIFY_REMOVED_DEVICE, which runs after that.

This fixes a kernel BUG being triggered in the VT-d code
when the device driver tries to unmap DMA buffers and the
VT-d driver already destroyed all mappings.

Reported-by: Stefani Seibold <stefani@seibold.net>
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-02-29 23:55:16 +01:00
Jeremy McNicoll da972fb13b iommu/vt-d: Don't skip PCI devices when disabling IOTLB
Fix a simple typo when disabling IOTLB on PCI(e) devices.

Fixes: b16d0cb9e2 ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
Cc: stable@vger.kernel.org  # v4.4
Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29 12:18:13 +01:00
Dan Williams 3e6110fd54 Revert "scatterlist: use sg_phys()"
commit db0fa0cb01 "scatterlist: use sg_phys()" did replacements of
the form (the original line followed by its replacement):

    phys_addr_t phys = page_to_phys(sg_page(s));
    phys_addr_t phys = sg_phys(s) & PAGE_MASK;

However, this breaks platforms where sizeof(phys_addr_t) >
sizeof(unsigned long).  Revert for 4.3 and 4.4 to make room for a
combined helper in 4.5.

Cc: <stable@vger.kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Fixes: db0fa0cb01 ("scatterlist: use sg_phys()")
Suggested-by: Joerg Roedel <joro@8bytes.org>
Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2015-12-15 12:54:06 -08:00
Mel Gorman d0164adc89 mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
__GFP_WAIT has been used to identify atomic context in callers that hold
spinlocks or are in interrupts.  They are expected to be high priority and
have access to one of two watermarks lower than "min" which can be referred
to as the "atomic reserve".  __GFP_HIGH users get access to the first
lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options
were available.  Some have abused __GFP_WAIT leading to a situation where
an optimistic allocation with a fallback option can access atomic
reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
cannot sleep and have no alternative.  High priority users continue to use
__GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
redefined as a caller that is willing to enter direct reclaim and wake
kswapd for background reclaim.

This patch then converts a number of sites

o __GFP_ATOMIC is used by callers that are high priority and have memory
  pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear
  __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
  into this category where kswapd will still be woken but atomic reserves
  are not used as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the
  helper gfpflags_allow_blocking() where possible. This is because
  checking for __GFP_WAIT as was done historically now can trigger false
  positives. Some exceptions like dm-crypt.c exist where the code intent
  is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
  flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL
  and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT
and were depending on access to atomic reserves for inconspicuous reasons.
In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of
GFP flags instead of starting with something like GFP_KERNEL.  They may
now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
if it's missed in most cases as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-11-06 17:50:42 -08:00
Linus Torvalds 39cf7c3981 IOMMU Updates for Linux v4.4
This time including:
 
 	* A new IOMMU driver for s390 pci devices
 
 	* Common dma-ops support based on iommu-api for ARM64. The plan is to
 	  use this as a basis for ARM32 and hopefully other architectures as
 	  well in the future.
 
 	* MSI support for ARM-SMMUv3
 
 	* Cleanups and dead code removal in the AMD IOMMU driver
 
 	* Better RMRR handling for the Intel VT-d driver
 
 	* Various other cleanups and small fixes
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJWOz7hAAoJECvwRC2XARrjbvYQALwtITTA5iTm0y/ApwNMxI7n
 pZpjZVPoBPNsGBc4t/MT8pVhUSdmpBOljbV4Y4CayL1mSSB6Bl2gooZjd66m7Z81
 qMJYEVWhFQqVsIKkCSNOgaO7W5y+xt3rTgqN6vCu86/CCDfKrTPP/+CRl1T/z9bo
 1J8ioM3KnZG9KzG8JuXYFg5wwbKToaBh6swSmj+O4U9hru7zV/ILP7ikcc9pyMji
 12WbzCqchRORsJZD65xMRYAqRaPNN/3IlDejs00TOFhY3qpWgEgFUucyeRJBJ/+q
 K4U8T5vZsnr1a04l7/BeYbLmP7y/9Qv0N0xMGtTyoy/w/BieGqRWu4hHhqf/44NO
 EhCSXcEThMNCGTjP2VWC4dnQ/s7Y8OmSW9nCreUcFVxHoE5LfDoh8RngA2fpeNuS
 ixb3OwP+YXHN9Ck+1BQqQCeBznsPTLuDxlhRjCJsWntIfMSkXebOkz83YxyZ9b0Q
 gFvptfuknU7cotUwWa3dg8RiUB8kNlKJyEEByaVpWEbEOabnONKEMkstvuBx6Ots
 kA63wbe7QcPgbUYuq7g0nijDw6E2aEtf0nx2Xx4ZDL932qjg/xUkiBpmbDXHw4Gu
 nimNXVQtbCzF74SyTvxEtupiijOTm5eHtoKtg0mYnqPZ+V9eOwEvW8IHaFFf8XHD
 SecikoTtH1Q4RVtqOcAQ
 =jLlB
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:
 "This time including:

   - A new IOMMU driver for s390 pci devices

   - Common dma-ops support based on iommu-api for ARM64.  The plan is
     to use this as a basis for ARM32 and hopefully other architectures
     as well in the future.

   - MSI support for ARM-SMMUv3

   - Cleanups and dead code removal in the AMD IOMMU driver

   - Better RMRR handling for the Intel VT-d driver

   - Various other cleanups and small fixes"

* tag 'iommu-updates-v4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (41 commits)
  iommu/vt-d: Fix return value check of parse_ioapics_under_ir()
  iommu/vt-d: Propagate error-value from ir_parse_ioapic_hpet_scope()
  iommu/vt-d: Adjust the return value of the parse_ioapics_under_ir
  iommu: Move default domain allocation to iommu_group_get_for_dev()
  iommu: Remove is_pci_dev() fall-back from iommu_group_get_for_dev
  iommu/arm-smmu: Switch to device_group call-back
  iommu/fsl: Convert to device_group call-back
  iommu: Add device_group call-back to x86 iommu drivers
  iommu: Add generic_device_group() function
  iommu: Export and rename iommu_group_get_for_pci_dev()
  iommu: Revive device_group iommu-ops call-back
  iommu/amd: Remove find_last_devid_on_pci()
  iommu/amd: Remove first/last_device handling
  iommu/amd: Initialize amd_iommu_last_bdf for DEV_ALL
  iommu/amd: Cleanup buffer allocation
  iommu/amd: Remove cmd_buf_size and evt_buf_size from struct amd_iommu
  iommu/amd: Align DTE flag definitions
  iommu/amd: Remove old alias handling code
  iommu/amd: Set alias DTE in do_attach/do_detach
  iommu/amd: WARN when __[attach|detach]_device are called with irqs enabled
  ...
2015-11-05 16:12:10 -08:00
Linus Torvalds ab1228e42e Merge git://git.infradead.org/intel-iommu
Pull intel iommu updates from David Woodhouse:
 "This adds "Shared Virtual Memory" (aka PASID support) for the Intel
  IOMMU.  This allows devices to do DMA using process address space,
  translated through the normal CPU page tables for the relevant mm.

  With corresponding support added to the i915 driver, this has been
  tested with the graphics device on Skylake.  We don't have the
  required TLP support in our PCIe root ports for supporting discrete
  devices yet, so it's only integrated devices that can do it so far"

* git://git.infradead.org/intel-iommu: (23 commits)
  iommu/vt-d: Fix rwxp flags in SVM device fault callback
  iommu/vt-d: Expose struct svm_dev_ops without CONFIG_INTEL_IOMMU_SVM
  iommu/vt-d: Clean up pasid_enabled() and ecs_enabled() dependencies
  iommu/vt-d: Handle Caching Mode implementations of SVM
  iommu/vt-d: Fix SVM IOTLB flush handling
  iommu/vt-d: Use dev_err(..) in intel_svm_device_to_iommu(..)
  iommu/vt-d: fix a loop in prq_event_thread()
  iommu/vt-d: Fix IOTLB flushing for global pages
  iommu/vt-d: Fix address shifting in page request handler
  iommu/vt-d: shift wrapping bug in prq_event_thread()
  iommu/vt-d: Fix NULL pointer dereference in page request error case
  iommu/vt-d: Implement SVM_FLAG_SUPERVISOR_MODE for kernel access
  iommu/vt-d: Implement SVM_FLAG_PRIVATE_PASID to allocate unique PASIDs
  iommu/vt-d: Add callback to device driver on page faults
  iommu/vt-d: Implement page request handling
  iommu/vt-d: Generalise DMAR MSI setup to allow for page request events
  iommu/vt-d: Implement deferred invalidate for SVM
  iommu/vt-d: Add basic SVM PASID support
  iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS
  iommu/vt-d: Add initial support for PASID tables
  ...
2015-11-05 16:06:52 -08:00
Joerg Roedel b67ad2f7c7 Merge branches 'x86/vt-d', 'arm/omap', 'arm/smmu', 's390', 'core' and 'x86/amd' into next
Conflicts:
	drivers/iommu/amd_iommu_types.h
2015-11-02 20:03:34 +09:00
David Woodhouse d42fde7084 iommu/vt-d: Clean up pasid_enabled() and ecs_enabled() dependencies
When booted with intel_iommu=ecs_off we were still allocating the PASID
tables even though we couldn't actually use them. We really want to make
the pasid_enabled() macro depend on ecs_enabled().

Which is unfortunate, because currently they're the other way round to
cope with the Broadwell/Skylake problems with ECS.

Instead of having ecs_enabled() depend on pasid_enabled(), which was never
something that made me happy anyway, make it depend in the normal case
on the "broken PASID" bit 28 *not* being set.

Then pasid_enabled() can depend on ecs_enabled() as it should. And we also
don't need to mess with it if we ever see an implementation that has some
features requiring ECS (like PRI) but which *doesn't* have PASID support.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-24 21:33:01 +02:00
Joerg Roedel a960fadbe6 iommu: Add device_group call-back to x86 iommu drivers
Set the device_group call-back to pci_device_group() for the
Intel VT-d and the AMD IOMMU driver.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-22 00:00:49 +02:00
Linus Torvalds 8a70dd2669 Merge tag 'for-linus-20151021' of git://git.infradead.org/intel-iommu
Pull intel-iommu bugfix from David Woodhouse:
 "This contains a single fix, for when the IOMMU API is used to overlay
  an existing mapping comprised of 4KiB pages, with a mapping that can
  use superpages.

  For the *first* superpage in the new mapping, we were correctly¹
  freeing the old bottom-level page table page and clearing the link to
  it, before installing the superpage.  For subsequent superpages,
  however, we weren't.  This causes a memory leak, and a warning about
  setting a PTE which is already set.

  ¹ Well, not *entirely* correctly.  We just free the page table pages
    right there and then, which is wrong.  In fact they should only be
    freed *after* the IOTLB is flushed so we know the hardware will no
    longer be looking at them....  and in fact I note that the IOTLB
    flush is completely missing from the intel_iommu_map() code path,
    although it needs to be there if it's permitted to overwrite
    existing mappings.

    Fixing those is somewhat more intrusive though, and will probably
    need to wait for 4.4 at this point"

* tag 'for-linus-20151021' of git://git.infradead.org/intel-iommu:
  iommu/vt-d: fix range computation when making room for large pages
2015-10-22 06:32:48 +09:00
Sudeep Dutt b9997e385e iommu/vt-d: Use dev_err(..) in intel_svm_device_to_iommu(..)
This will give a little bit of assistance to those developing drivers
using SVM. It might cause a slight annoyance to end-users whose kernel
disables the IOMMU when drivers are trying to use it. But the fix there
is to fix the kernel to enable the IOMMU.

Signed-off-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-19 15:03:00 +01:00
David Woodhouse a222a7f0bb iommu/vt-d: Implement page request handling
Largely based on the driver-mode implementation by Jesse Barnes.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 15:35:19 +01:00
David Woodhouse 907fea3491 iommu/vt-d: Implement deferred invalidate for SVM
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 13:22:35 +01:00
David Woodhouse 2f26e0a9c9 iommu/vt-d: Add basic SVM PASID support
This provides basic PASID support for endpoint devices, tested with a
version of the i915 driver.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 12:55:45 +01:00
David Woodhouse b16d0cb9e2 iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS
The behaviour is undefined if you enable PASID support after ATS. So we
have to enable it first, even if we don't know whether we'll need it.

This is safe enough; unless we set up a context that permits it, the device
can't actually *do* anything with it.

Also shift the feature detection to dmar_insert_one_dev_info() as it only
needs to happen once.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 12:05:39 +01:00
David Woodhouse 8a94ade4ce iommu/vt-d: Add initial support for PASID tables
Add CONFIG_INTEL_IOMMU_SVM, and allocate PASID tables on supported hardware.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 11:24:51 +01:00
David Woodhouse ae853ddb9a iommu/vt-d: Introduce intel_iommu=pasid28, and pasid_enabled() macro
As long as we use an identity mapping to work around the worst of the
hardware bugs which caused us to defeature it and change the definition
of the capability bit, we *can* use PASID support on the devices which
advertised it in bit 28 of the Extended Capability Register.

Allow people to do so with 'intel_iommu=pasid28' on the command line.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 11:24:45 +01:00
David Woodhouse d14053b3c7 iommu/vt-d: Fix ATSR handling for Root-Complex integrated endpoints
The VT-d specification says that "Software must enable ATS on endpoint
devices behind a Root Port only if the Root Port is reported as
supporting ATS transactions."

We walk up the tree to find a Root Port, but for integrated devices we
don't find one — we get to the host bridge. In that case we *should*
allow ATS. Currently we don't, which means that we are incorrectly
failing to use ATS for the integrated graphics. Fix that.

We should never break out of this loop "naturally" with bus==NULL,
since we'll always find bridge==NULL in that case (and now return 1).

So remove the check for (!bridge) after the loop, since it can never
happen. If it did, it would be worthy of a BUG_ON(!bridge). But since
it'll oops anyway in that case, that'll do just as well.

Cc: stable@vger.kernel.org
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-15 09:28:56 +01:00
Dan Williams dfddb969ed iommu/vt-d: Switch from ioremap_cache to memremap
In preparation for deprecating ioremap_cache() convert its usage in
intel-iommu to memremap.  This also eliminates the mishandling of the
__iomem annotation in the implementation.

Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-14 15:22:06 +02:00
Christian Zander ba2374fd2b iommu/vt-d: fix range computation when making room for large pages
In preparation for the installation of a large page, any small page
tables that may still exist in the target IOV address range are
removed.  However, if a scatter/gather list entry is large enough to
fit more than one large page, the address space for any subsequent
large pages is not cleared of conflicting small page tables.

This can cause legitimate mapping requests to fail with errors of the
form below, potentially followed by a series of IOMMU faults:

ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)

In this example, a 4MiB scatter/gather list entry resulted in the
successful installation of a large page @ vPFN 0xfdc00, followed by
a failed attempt to install another large page @ vPFN 0xfde00, due to
the presence of a pointer to a small page table @ 0x7f83a4000.

To address this problem, compute the number of large pages that fit
into a given scatter/gather list entry, and use it to derive the
last vPFN covered by the large page(s).
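
A standalone sketch of that computation, using the figures from this message
(illustrative only, not the driver's code):

    #include <assert.h>

    int main(void)
    {
        unsigned long start_vpfn = 0xfdc00;            /* 4 KiB page frames        */
        unsigned long nr_pages   = (4UL << 20) >> 12;  /* 4 MiB entry = 1024 pages */
        unsigned long sp_pages   = 1UL << 9;           /* 512 pages per 2 MiB page */

        unsigned long nr_superpages = nr_pages / sp_pages;                   /* 2 */
        unsigned long last_vpfn = start_vpfn + nr_superpages * sp_pages - 1;

        /* Small page tables must be cleared for 0xfdc00..0xfdfff, covering
         * both superpages (0xfdc00 and 0xfde00), not only the first one. */
        assert(last_vpfn == 0xfdfff);
        return 0;
    }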

Cc: stable@vger.kernel.org
Signed-off-by: Christian Zander <christian@nervanasys.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-10-13 20:32:50 +01:00
Linus Torvalds 7554225312 IOMMU Fixes for Linux v4.3-rc5
A few fixes piled up:
 
 	* Fix for a suspend/resume issue where PCI probing code overwrote
 	  dev->irq for the MSI irq of the AMD IOMMU.
 
 	* Fix for a kernel crash when a 32 bit PCI device was assigned to a KVM
 	  guest.
 
 	* Fix for a possible memory leak in the VT-d driver
 
 	* A couple of fixes for the ARM-SMMU driver
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJWHNdbAAoJECvwRC2XARrjB/YQAKouJaRMjBaehx6kbaZMhMJy
 hXDsh8Xl6TtCe6kLD2uXrvjLZAdu32kjrtzhhcM21EO5Ms2Weq6A60/98LwnJ4Eg
 AqftjfxQsIwf2G1PvHb+xepgcFxIAhW6a3nORzx6d2AGrNWmMtUhbLTSncYjmojf
 Td4dscuRmRPenJUV1JhcJQBR62QonknIHV99QmevaCSAoUdyuMH+t5kQVEgPjx7C
 GlMPNEZZmGl7J3NXSWRtDSkUxFZ1OU8MTKc1LmPPHHAOZk37wbePihQbLLySlHPH
 v4G1R05e2hG7C66yu959fyOleL87lDToUXhwQNFJMqEc+e7IzBzZsB3ANEHjpLQH
 UJC9COU+sf8mPafja4ge/KbyGDmgDg/OMQJDhU6+DSXUflwymeWJmXr7sLFQex6O
 nZO/SVzkbKj+PKxV7UnGD0sTeAAk0X6vfhFCL0l/acPpQg0T6Fpky5D5fUMv5dWS
 xxxvxfwBcDoI44fxWBhfPYvmLFT9f5da+bpbzeeGjVSNezOkPJ65AJcVk5An4kQu
 PRzJGoq3XpZHOeg5+O7IKzeuJ+3qc7Tz4wAzMxcaNFpVBl2qp1RUkTbmS9/YV1b5
 ZOcIFBMLuUROE1ExsU19c5Uo0j1Bvh9jtdy6lNFCagQYzihtA0Jk19ucllx1jIjD
 sdv2hgDIauRToKF1d9xz
 =v5G4
 -----END PGP SIGNATURE-----

Merge tag 'iommu-fixes-v4.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU fixes from Joerg Roedel:
 "A few fixes piled up:

   - Fix for a suspend/resume issue where PCI probing code overwrote
     dev->irq for the MSI irq of the AMD IOMMU.

   - Fix for a kernel crash when a 32 bit PCI device was assigned to a
     KVM guest.

   - Fix for a possible memory leak in the VT-d driver

   - A couple of fixes for the ARM-SMMU driver"

* tag 'iommu-fixes-v4.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Fix NULL pointer deref on device detach
  iommu/amd: Prevent binding other PCI drivers to IOMMU PCI devices
  iommu/vt-d: Fix memory leak in dmar_insert_one_dev_info()
  iommu/arm-smmu: Use correct address mask for CMD_TLBI_S2_IPA
  iommu/arm-smmu: Ensure IAS is set correctly for AArch32-capable SMMUs
  iommu/io-pgtable-arm: Don't use dma_to_phys()
2015-10-13 10:09:59 -07:00
Joerg Roedel b1ce5b79ae iommu/vt-d: Create RMRR mappings in newly allocated domains
Currently the RMRR entries are created only at boot time.
This means they will vanish when the domain allocated at
boot time is destroyed.
This patch makes sure that newly allocated domains will
also get RMRR mappings.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-05 17:39:21 +02:00
Joerg Roedel d66ce54b46 iommu/vt-d: Split iommu_prepare_identity_map
Split out the part of the function that fetches the domain
and put the rest into domain_prepare_identity_map(), so
that the code can also be used when the domain is
already known.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-10-05 17:38:47 +02:00
Linus Torvalds 8c25ab8b5a Merge git://git.infradead.org/intel-iommu
Pull IOVA fixes from David Woodhouse:
 "The main fix here is the first one, fixing the over-allocation of
   size-aligned requests.  The other patches simply make the existing
  IOVA code available to users other than the Intel VT-d driver, with no
  functional change.

  I concede the latter really *should* have been submitted during the
  merge window, but since it's basically risk-free and people are
  waiting to build on top of it and it's my fault I didn't get it in, I
  (and they) would be grateful if you'd take it"

* git://git.infradead.org/intel-iommu:
  iommu: Make the iova library a module
  iommu: iova: Export symbols
  iommu: iova: Move iova cache management to the iova library
  iommu/iova: Avoid over-allocating when size-aligned
2015-10-02 07:59:29 -04:00
Sudip Mukherjee 499f3aa432 iommu/vt-d: Fix memory leak in dmar_insert_one_dev_info()
We are returning NULL if we are not able to attach the iommu
to the domain, but on that return path we missed freeing 'info'.

Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-09-29 15:45:50 +02:00
Linus Torvalds 9a9952bbd7 IOMMU Updates for Linux v4.3
This time the IOMMU updates are mostly cleanups or fixes. No big new
 features or drivers this time. In particular the changes include:
 
 	* Bigger cleanup of the Domain<->IOMMU data structures and the
 	  code that manages them in the Intel VT-d driver. This makes
 	  the code easier to understand and maintain, and also easier to
 	  keep the data structures in sync. It is also a preparation
 	  step to make use of default domains from the IOMMU core in the
 	  Intel VT-d driver.
 
 	* Fixes for a couple of DMA-API misuses in ARM IOMMU drivers,
 	  namely in the ARM and Tegra SMMU drivers.
 
 	* Fix for a potential buffer overflow in the OMAP iommu driver's
 	  debug code
 
 	* A couple of smaller fixes and cleanups in various drivers
 
 	* One small new feature: Report domain-id usage in the Intel
 	  VT-d driver to make it easier to detect bugs where these are leaked.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJV7sCEAAoJECvwRC2XARrjz3YP/Au4IIfqykfPvmI0cmPhVnAV
 Q72tltwkbK2u2iP+pHheveaMngJtAshsZrnhBon4KJRIt/KTLZQvsFplHDaRhPfY
 yw3LIxhC5kLG/S6irY9Ozb0+uTMdQ3BU2uS23pyoFVfCz+RngBrAwDBcTKqZDCDG
 8dNd+T21XlzxuyeGr58h9upz2VFtq6feoGFhLU5PNxTlf4JWZe77D7NlbSvx6Nwy
 7Ai8dVRgpV9ciUP7w8FXrCUvbMZQDIoTMiWGNSlogVMgA0dllGES91UZYhWf3pil
 abuX6DeFul/cOhEOnH2xa+j5zz2O/upe9stU4wAFw6IhPiAELTHc2NKlWAhwb0SY
 bpDRf7dgLnUfqpmZLpWjTwN4jllc0qS2MIHj+eUu0uhdFi4Z0BuH2wSCdbR7xkqk
 u5u0Jq7hDNKs5FmQTSsWSiAdjakMsRjIN7jMrBbOeZnBSmUnLx74KGPLTb63ncR3
 WIOi4Iyu+LSXBIvZDiLu3lIIh7Atzd+y7IDnb8KXdyqfy+h53OZZOJNbP/qTWHgT
 ZUdm/qrqjIQpTQfleOEadC7vY/y3fR5sBtOQHUamfntni3oYCc4AMRlNdf3eV9lb
 Tyss6F699mU7d/vennTaIToBgVwaXdLYtmvGWjnoT/kqOMclyDf3cIUtZGtp2rJR
 ddmzDA3vBUC5pGj8Hd8R
 =yoGE
 -----END PGP SIGNATURE-----

Merge tag 'iommu-updates-v4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:
 "This time the IOMMU updates are mostly cleanups or fixes.  No big new
  features or drivers this time.  In particular the changes include:

   - Bigger cleanup of the Domain<->IOMMU data structures and the code
     that manages them in the Intel VT-d driver.  This makes the code
     easier to understand and maintain, and also easier to keep the data
     structures in sync.  It is also a preparation step to make use of
     default domains from the IOMMU core in the Intel VT-d driver.

   - Fixes for a couple of DMA-API misuses in ARM IOMMU drivers, namely
     in the ARM and Tegra SMMU drivers.

   - Fix for a potential buffer overflow in the OMAP iommu driver's
     debug code

   - A couple of smaller fixes and cleanups in various drivers

   - One small new feature: Report domain-id usage in the Intel VT-d
     driver to make it easier to detect bugs where these are leaked"

* tag 'iommu-updates-v4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (83 commits)
  iommu/vt-d: Really use upper context table when necessary
  x86/vt-d: Fix documentation of DRHD
  iommu/fsl: Really fix init section(s) content
  iommu/io-pgtable-arm: Unmap and free table when overwriting with block
  iommu/io-pgtable-arm: Move init-fn declarations to io-pgtable.h
  iommu/msm: Use BUG_ON instead of if () BUG()
  iommu/vt-d: Access iomem correctly
  iommu/vt-d: Make two functions static
  iommu/vt-d: Use BUG_ON instead of if () BUG()
  iommu/vt-d: Return false instead of 0 in irq_remapping_cap()
  iommu/amd: Use BUG_ON instead of if () BUG()
  iommu/amd: Make a symbol static
  iommu/amd: Simplify allocation in irq_remapping_alloc()
  iommu/tegra-smmu: Parameterize number of TLB lines
  iommu/tegra-smmu: Factor out tegra_smmu_set_pde()
  iommu/tegra-smmu: Extract tegra_smmu_pte_get_use()
  iommu/tegra-smmu: Use __GFP_ZERO to allocate zeroed pages
  iommu/tegra-smmu: Remove PageReserved manipulation
  iommu/tegra-smmu: Convert to use DMA API
  iommu/tegra-smmu: smmu_flush_ptc() wants device addresses
  ...
2015-09-08 17:22:35 -07:00
Linus Torvalds d975f309a8 Merge branch 'for-4.3/sg' of git://git.kernel.dk/linux-block
Pull SG updates from Jens Axboe:
 "This contains a set of scatter-gather related changes/fixes for 4.3:

   - Add support for limited chaining of sg tables even for
     architectures that do not set ARCH_HAS_SG_CHAIN.  From Christoph.

   - Add sg chain support to target_rd.  From Christoph.

   - Fixup open coded sg->page_link in crypto/omap-sham.  From
     Christoph.

   - Fixup open coded crypto ->page_link manipulation.  From Dan.

   - Also from Dan, automated fixup of manual sg_unmark_end()
     manipulations.

   - Also from Dan, automated fixup of open coded sg_phys()
     implementations.

   - From Robert Jarzmik, addition of an sg table splitting helper that
     drivers can use"

* 'for-4.3/sg' of git://git.kernel.dk/linux-block:
  lib: scatterlist: add sg splitting function
  scatterlist: use sg_phys()
  crypto/omap-sham: remove an open coded access to ->page_link
  scatterlist: remove open coded sg_unmark_end instances
  crypto: replace scatterwalk_sg_chain with sg_chain
  target/rd: always chain S/G list
  scatterlist: allow limited chaining without ARCH_HAS_SG_CHAIN
2015-09-02 13:22:38 -07:00
Linus Torvalds 26f8b7edc9 PCI changes for the v4.3 merge window:
Enumeration
     Allocate ATS struct during enumeration (Bjorn Helgaas)
     Embed ATS info directly into struct pci_dev (Bjorn Helgaas)
     Reduce size of ATS structure elements (Bjorn Helgaas)
     Stop caching ATS Invalidate Queue Depth (Bjorn Helgaas)
     iommu/vt-d: Cache PCI ATS state and Invalidate Queue Depth (Bjorn Helgaas)
     Move MPS configuration check to pci_configure_device() (Bjorn Helgaas)
     Set MPS to match upstream bridge (Keith Busch)
     ARM/PCI: Set MPS before pci_bus_add_devices() (Murali Karicheri)
     Add pci_scan_root_bus_msi() (Lorenzo Pieralisi)
     ARM/PCI, designware, xilinx: Use pci_scan_root_bus_msi() (Lorenzo Pieralisi)
 
   Resource management
     Call pci_read_bridge_bases() from core instead of arch code (Lorenzo Pieralisi)
 
   PCI device hotplug
     pciehp: Remove unused interrupt events (Bjorn Helgaas)
     pciehp: Remove ignored MRL sensor interrupt events (Bjorn Helgaas)
     pciehp: Handle invalid data when reading from non-existent devices (Jarod Wilson)
     pciehp: Simplify pcie_poll_cmd() (Yijing Wang)
     Use "slot" and "pci_slot" for struct hotplug_slot and struct pci_slot (Yijing Wang)
     Protect pci_bus->slots with pci_slot_mutex, not pci_bus_sem (Yijing Wang)
     Hold pci_slot_mutex while searching bus->slots list (Yijing Wang)
 
   Power management
     Disable async suspend/resume for JMicron multi-function SATA/AHCI (Zhang Rui)
 
   Virtualization
     Add ACS quirks for Intel I219-LM/V (Alex Williamson)
     Restore ACS configuration as part of pci_restore_state() (Alexander Duyck)
 
   MSI
     Add pcibios_alloc_irq() and pcibios_free_irq() (Jiang Liu)
     x86: Implement pcibios_alloc_irq() and pcibios_free_irq() (Jiang Liu)
     Add helpers to manage pci_dev->irq and pci_dev->irq_managed (Jiang Liu)
     Free legacy IRQ when enabling MSI/MSI-X (Jiang Liu)
     ARM/PCI: Remove msi_controller from struct pci_sys_data (Lorenzo Pieralisi)
     Remove unused pcibios_msi_controller() hook (Lorenzo Pieralisi)
 
   Generic host bridge driver
     Remove dependency on ARM-specific struct hw_pci (Jayachandran C)
     Build setup-irq.o for arm64 (Jayachandran C)
     Add arm64 support (Jayachandran C)
 
   APM X-Gene host bridge driver
     Add APM X-Gene PCIe 64-bit prefetchable window (Duc Dang)
     Add support for a 64-bit prefetchable memory window (Duc Dang)
     Drop owner assignment from platform_driver (Krzysztof Kozlowski)
 
   Broadcom iProc host bridge driver
     Allow BCMA bus driver to be built as module (Hauke Mehrtens)
     Delete unnecessary checks before phy calls (Markus Elfring)
     Add arm64 support (Ray Jui)
 
   Synopsys DesignWare host bridge driver
     Don't complain missing *config* reg space if va_cfg0 is set (Murali Karicheri)
 
   TI DRA7xx host bridge driver
     Disable pm_runtime on get_sync failure (Kishon Vijay Abraham I)
     Add PM support (Kishon Vijay Abraham I)
     Clear MSE bit during suspend so clocks will idle (Kishon Vijay Abraham I)
     Add support to make GPIO drive PERST# line (Kishon Vijay Abraham I)
 
   Xilinx AXI host bridge driver
     Check for MSI interrupt flag before handling as INTx (Russell Joyce)
 
   Miscellaneous
     Fix Intersil/Techwell TW686[4589] AV capture class code (Krzysztof Hałasa)
     Use PCI_CLASS_SERIAL_USB instead of bare number (Bjorn Helgaas)
     Fix generic NCR 53c810 class code quirk (Bjorn Helgaas)
     Fix TI816X class code quirk (Bjorn Helgaas)
     Remove unused "pci_probe" flags (Bjorn Helgaas)
     Host bridge driver code simplifications (Fabio Estevam)
     Add dev_flags bit to access VPD through function 0 (Mark Rustad)
     Add VPD function 0 quirk for Intel Ethernet devices (Mark Rustad)
     Kill off set_irq_flags() usage (Rob Herring)
     Remove Intel Cherrytrail D3 delays (Srinidhi Kasagar)
     Clean up pci_find_capability() (Wei Yang)

Merge tag 'pci-v4.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "PCI changes for the v4.3 merge window:

  Enumeration:
   - Allocate ATS struct during enumeration (Bjorn Helgaas)
   - Embed ATS info directly into struct pci_dev (Bjorn Helgaas)
   - Reduce size of ATS structure elements (Bjorn Helgaas)
   - Stop caching ATS Invalidate Queue Depth (Bjorn Helgaas)
   - iommu/vt-d: Cache PCI ATS state and Invalidate Queue Depth (Bjorn Helgaas)
   - Move MPS configuration check to pci_configure_device() (Bjorn Helgaas)
   - Set MPS to match upstream bridge (Keith Busch)
   - ARM/PCI: Set MPS before pci_bus_add_devices() (Murali Karicheri)
   - Add pci_scan_root_bus_msi() (Lorenzo Pieralisi)
   - ARM/PCI, designware, xilinx: Use pci_scan_root_bus_msi() (Lorenzo Pieralisi)

  Resource management:
   - Call pci_read_bridge_bases() from core instead of arch code (Lorenzo Pieralisi)

  PCI device hotplug:
   - pciehp: Remove unused interrupt events (Bjorn Helgaas)
   - pciehp: Remove ignored MRL sensor interrupt events (Bjorn Helgaas)
   - pciehp: Handle invalid data when reading from non-existent devices (Jarod Wilson)
   - pciehp: Simplify pcie_poll_cmd() (Yijing Wang)
   - Use "slot" and "pci_slot" for struct hotplug_slot and struct pci_slot (Yijing Wang)
   - Protect pci_bus->slots with pci_slot_mutex, not pci_bus_sem (Yijing Wang)
   - Hold pci_slot_mutex while searching bus->slots list (Yijing Wang)

  Power management:
   - Disable async suspend/resume for JMicron multi-function SATA/AHCI (Zhang Rui)

  Virtualization:
   - Add ACS quirks for Intel I219-LM/V (Alex Williamson)
   - Restore ACS configuration as part of pci_restore_state() (Alexander Duyck)

  MSI:
   - Add pcibios_alloc_irq() and pcibios_free_irq() (Jiang Liu)
   - x86: Implement pcibios_alloc_irq() and pcibios_free_irq() (Jiang Liu)
   - Add helpers to manage pci_dev->irq and pci_dev->irq_managed (Jiang Liu)
   - Free legacy IRQ when enabling MSI/MSI-X (Jiang Liu)
   - ARM/PCI: Remove msi_controller from struct pci_sys_data (Lorenzo Pieralisi)
   - Remove unused pcibios_msi_controller() hook (Lorenzo Pieralisi)

  Generic host bridge driver:
   - Remove dependency on ARM-specific struct hw_pci (Jayachandran C)
   - Build setup-irq.o for arm64 (Jayachandran C)
   - Add arm64 support (Jayachandran C)

  APM X-Gene host bridge driver:
   - Add APM X-Gene PCIe 64-bit prefetchable window (Duc Dang)
   - Add support for a 64-bit prefetchable memory window (Duc Dang)
   - Drop owner assignment from platform_driver (Krzysztof Kozlowski)

  Broadcom iProc host bridge driver:
   - Allow BCMA bus driver to be built as module (Hauke Mehrtens)
   - Delete unnecessary checks before phy calls (Markus Elfring)
   - Add arm64 support (Ray Jui)

  Synopsys DesignWare host bridge driver:
   - Don't complain missing *config* reg space if va_cfg0 is set (Murali Karicheri)

  TI DRA7xx host bridge driver:
   - Disable pm_runtime on get_sync failure (Kishon Vijay Abraham I)
   - Add PM support (Kishon Vijay Abraham I)
   - Clear MSE bit during suspend so clocks will idle (Kishon Vijay Abraham I)
   - Add support to make GPIO drive PERST# line (Kishon Vijay Abraham I)

  Xilinx AXI host bridge driver:
   - Check for MSI interrupt flag before handling as INTx (Russell Joyce)

  Miscellaneous:
   - Fix Intersil/Techwell TW686[4589] AV capture class code (Krzysztof Hałasa)
   - Use PCI_CLASS_SERIAL_USB instead of bare number (Bjorn Helgaas)
   - Fix generic NCR 53c810 class code quirk (Bjorn Helgaas)
   - Fix TI816X class code quirk (Bjorn Helgaas)
   - Remove unused "pci_probe" flags (Bjorn Helgaas)
   - Host bridge driver code simplifications (Fabio Estevam)
   - Add dev_flags bit to access VPD through function 0 (Mark Rustad)
   - Add VPD function 0 quirk for Intel Ethernet devices (Mark Rustad)
   - Kill off set_irq_flags() usage (Rob Herring)
   - Remove Intel Cherrytrail D3 delays (Srinidhi Kasagar)
   - Clean up pci_find_capability() (Wei Yang)"

* tag 'pci-v4.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (72 commits)
  PCI: Disable async suspend/resume for JMicron multi-function SATA/AHCI
  PCI: Set MPS to match upstream bridge
  PCI: Move MPS configuration check to pci_configure_device()
  PCI: Drop references acquired by of_parse_phandle()
  PCI/MSI: Remove unused pcibios_msi_controller() hook
  ARM/PCI: Remove msi_controller from struct pci_sys_data
  ARM/PCI, designware, xilinx: Use pci_scan_root_bus_msi()
  PCI: Add pci_scan_root_bus_msi()
  ARM/PCI: Replace panic with WARN messages on failures
  PCI: generic: Add arm64 support
  PCI: Build setup-irq.o for arm64
  PCI: generic: Remove dependency on ARM-specific struct hw_pci
  PCI: imx6: Simplify a trivial if-return sequence
  PCI: spear: Use BUG_ON() instead of condition followed by BUG()
  PCI: dra7xx: Remove unneeded use of IS_ERR_VALUE()
  PCI: Remove pci_ats_enabled()
  PCI: Stop caching ATS Invalidate Queue Depth
  PCI: Move ATS declarations to linux/pci.h so they're all together
  PCI: Clean up ATS error handling
  PCI: Use pci_physfn() rather than looking up physfn by hand
  ...
2015-08-31 17:14:39 -07:00
Joerg Roedel 4df4eab168 iommu/vt-d: Really use upper context table when necessary
There is a bug in iommu_context_addr() which will always use
the lower context table, even when the upper context table
needs to be used. Fix this issue.

Fixes: 03ecc32c52 ("iommu/vt-d: support extended root and context entries")
Reported-by: Xiao, Nan <nan.xiao@hp.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-25 11:39:27 +02:00
Dan Williams db0fa0cb01 scatterlist: use sg_phys()
Coccinelle cleanup to replace open coded sg to physical address
translations.  This is in preparation for introducing scatterlists that
reference __pfn_t.

// sg_phys.cocci: convert usage page_to_phys(sg_page(sg)) to sg_phys(sg)
// usage: make coccicheck COCCI=sg_phys.cocci MODE=patch

virtual patch

@@
struct scatterlist *sg;
@@

- page_to_phys(sg_page(sg)) + sg->offset
+ sg_phys(sg)

@@
struct scatterlist *sg;
@@

- page_to_phys(sg_page(sg))
+ sg_phys(sg) & PAGE_MASK

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-17 08:13:26 -06:00
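
For illustration, the conversion above boils down to the following sketch
of a hypothetical driver loop (not taken from any of the touched files;
program_dma() is a made-up stand-in):

    #include <linux/scatterlist.h>

    /* Hypothetical example of the transformation, not from the patch. */
    static void map_segments(struct device *dev, struct scatterlist *sgl,
                             int nents)
    {
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
            /* Before: open-coded translation */
            /* phys_addr_t phys = page_to_phys(sg_page(sg)) + sg->offset; */

            /* After: sg_phys() already accounts for sg->offset */
            phys_addr_t phys = sg_phys(sg);

            program_dma(dev, phys, sg->length);  /* program_dma() is made up */
        }
    }
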
Joerg Roedel 543c8dcf1d iommu/vt-d: Access iomem correctly
This fixes wrong accesses to iomem introduced by the kdump
fixes.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-13 19:49:56 +02:00
Joerg Roedel b690420a40 iommu/vt-d: Make two functions static
These functions are only used in that file and can be
static.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-13 19:49:51 +02:00
Joerg Roedel dc02e46e8d iommu/vt-d: Use BUG_ON instead of if () BUG()
Found by a coccicheck script.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-13 19:49:46 +02:00
Joerg Roedel f303e50766 iommu/vt-d: Avoid duplicate device_domain_info structures
When a 'struct device_domain_info' is created as an alias
for another device, this struct will not be re-used when the
real device is encountered. Fix that to avoid duplicate
device_domain_info structures being added.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:37 +02:00
Joerg Roedel 08a7f456a7 iommu/vt-d: Only insert alias dev_info if there is an alias
For devices without a PCI alias, two device_domain_info
structures would be added. Prevent that by checking whether
the alias differs from the device.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel 127c761598 iommu/vt-d: Pass device_domain_info to __dmar_remove_one_dev_info
This struct contains all necessary information for the
function already. Also handle the info->dev == NULL case
while at it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel 2309bd793e iommu/vt-d: Remove dmar_global_lock from device_notifier
The code in the locked section does not touch anything
protected by the dmar_global_lock. Remove it from there.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel 55d940430a iommu/vt-d: Get rid of domain->iommu_lock
When this lock is held the device_domain_lock is also
required to make sure the device_domain_info does not vanish
while in use. So this lock can be removed as it gives no
additional protection.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel de7e888646 iommu/vt-d: Only call domain_remove_one_dev_info to detach old domain
There is no need to differentiate between VM and non-VM
domains here, so simplify this code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel d160aca527 iommu/vt-d: Unify domain->iommu attach/detachment
Move the code to attach/detach domains to iommus and vice
versa into a single function to make sure there are no
dangling references.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel c6c2cebd66 iommu/vt-d: Establish domain<->iommu link in dmar_insert_one_dev_info
This makes domain attachment more synchronous with domain
detachment. The domain<->iommu link is released in
dmar_remove_one_dev_info.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel dc534b25d1 iommu/vt-d: Pass an iommu pointer to domain_init()
This allows the domain->iommu attachment to be done after
domain_init() has run.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:36 +02:00
Joerg Roedel 2452d9db12 iommu/vt-d: Rename iommu_detach_dependent_devices()
Rename this function and the ones further down its
call-chain to domain_context_clear_*. In particular this
means:

	iommu_detach_dependent_devices -> domain_context_clear
		   iommu_detach_dev_cb -> domain_context_clear_one_cb
		      iommu_detach_dev -> domain_context_clear_one

These names match their domain_context_mapping counterparts
a lot better.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:35 +02:00
Joerg Roedel e6de0f8dfc iommu/vt-d: Rename domain_remove_one_dev_info()
Rename the function to dmar_remove_one_dev_info to match its
name better with its dmar_insert_one_dev_info counterpart.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:35 +02:00
Joerg Roedel 5db31569e9 iommu/vt-d: Rename dmar_insert_dev_info()
Rename this function to dmar_insert_one_dev_info() to match
its name better with its counterpart function
domain_remove_one_dev_info().

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:35 +02:00
Joerg Roedel cc4e2575cc iommu/vt-d: Move context-mapping into dmar_insert_dev_info
Do the context-mapping of devices from a single place in the
call-path and clean up the other call-sites.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:35 +02:00
Joerg Roedel 76f45fe35c iommu/vt-d: Simplify domain_remove_dev_info()
Just call domain_remove_one_dev_info() for all devices in
the domain instead of reimplementing the functionality.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:35 +02:00
Joerg Roedel b608ac3b6d iommu/vt-d: Simplify domain_remove_one_dev_info()
Simplify this function as much as possible with the new
iommu_refcnt field.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
Joerg Roedel 42e8c186b5 iommu/vt-d: Simplify io/tlb flushing in intel_iommu_unmap
We don't need to do an expensive search for domain-ids
anymore, as we keep track of per-iommu domain-ids.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
Joerg Roedel 29a27719ab iommu/vt-d: Replace iommu_bmp with a refcount
This replaces the dmar_domain->iommu_bmp with a similar
reference count array. This allows us to keep track of how
many devices behind each iommu are attached to the domain.

This is necessary for further simplifications and
optimizations to the iommu<->domain attachment code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
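
Schematically, the change replaces a per-iommu "attached" bit with a
per-iommu counter; the sketch below is illustrative only (the names and
the MAX_IOMMUS bound are assumptions, not the driver's actual code):

    /* Illustrative sketch: one refcount per iommu instead of one bit. */
    #define MAX_IOMMUS 128          /* assumed bound for the sketch */

    struct example_domain {
        int iommu_refcnt[MAX_IOMMUS];
    };

    static void example_attach_iommu(struct example_domain *domain, int seq)
    {
        if (domain->iommu_refcnt[seq]++ == 0) {
            /* first device behind this iommu: allocate a per-iommu
             * domain-id, program the hardware, etc. */
        }
    }

    static void example_detach_iommu(struct example_domain *domain, int seq)
    {
        if (--domain->iommu_refcnt[seq] == 0) {
            /* last device behind this iommu gone: release the
             * per-iommu domain-id */
        }
    }
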
Joerg Roedel af1089ce38 iommu/vt-d: Kill dmar_domain->id
This field is now obsolete because all places use the
per-iommu domain-ids. Kill the remaining uses of this field
and remove it.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
Joerg Roedel 0dc7971594 iommu/vt-d: Don't pre-allocate domain ids for si_domain
There is no reason for this special handling of the
si_domain. The per-iommu domain-id can be allocated
on-demand like for any other domain. So remove the
pre-allocation code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
Joerg Roedel a1ddcbe930 iommu/vt-d: Pass dmar_domain directly into iommu_flush_iotlb_psi
This function can figure out the domain-id to use by itself
from the iommu_did array. This is more reliable across
different domain types and brings us one step closer to
removing the domain->id field.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:34 +02:00
Joerg Roedel de24e55395 iommu/vt-d: Simplify domain_context_mapping_one
Get rid of the special cases for VM domains vs. non-VM
domains and simplify the code further to just handle the
hardware passthrough vs. page-table case.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:33 +02:00
Joerg Roedel 28ccce0d95 iommu/vt-d: Calculate translation in domain_context_mapping_one
There is no reason to pass the translation type through
multiple layers; it can be determined directly in the
domain_context_mapping_one function.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:33 +02:00
Joerg Roedel e2411427f7 iommu/vt-d: Get rid of iommu_attach_vm_domain()
The special case for VM domains is not needed, as other
domains could be attached to the iommu in the same way. So
get rid of this special case.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:33 +02:00
Joerg Roedel 8bf478163e iommu/vt-d: Split up iommu->domains array
This array is indexed by the domain-id and contains the
pointers to the domains attached to this iommu. Modern
systems support 65536 domain ids, so this array has a size
of 512kb per iommu.

This is a huge waste of space, as the array is usually
sparsely populated. This patch makes the array
two-dimensional and allocates the memory for the domain
pointers on-demand.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:33 +02:00
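
The saving comes from indexing the domain pointer in two steps and
allocating the second level only when a domain-id in that range is first
used; roughly as below (sizes and names are illustrative, not the
driver's actual layout):

    #include <linux/types.h>

    /* Illustrative two-level lookup: 256 pages of 256 pointers each cover
     * 65536 domain-ids, but a page is only allocated once a domain-id in
     * its range is actually used. */
    struct example_iommu {
        struct dmar_domain ***domains;  /* domains[did >> 8][did & 0xff] */
    };

    static struct dmar_domain *example_get_domain(struct example_iommu *iommu,
                                                  u16 did)
    {
        int idx = did >> 8;

        if (!iommu->domains[idx])
            return NULL;            /* nothing in this range yet */
        return iommu->domains[idx][did & 0xff];
    }
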
Joerg Roedel 9452d5bfe5 iommu/vt-d: Add access functions for iommu->domains
This makes it easier to change the layout of the data
structure later.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:33 +02:00
Joerg Roedel c0e8a6c803 iommu/vt-d: Keep track of per-iommu domain ids
Instead of searching in the domain array for already
allocated domain ids, keep track of them explicitly.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-12 16:23:32 +02:00
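
With the ids tracked explicitly, allocation only needs a bitmap lookup
instead of a scan over the domain pointer array; for instance (a sketch
using the generic bitmap helpers, not the exact driver code):

    #include <linux/bitmap.h>
    #include <linux/errno.h>

    /* Sketch: one domain-id bitmap per iommu, sized to what the hardware
     * supports (cap_ndoms() in the real driver). */
    static int example_alloc_domain_id(unsigned long *domain_ids,
                                       unsigned int ndomains)
    {
        unsigned int id = find_first_zero_bit(domain_ids, ndomains);

        if (id >= ndomains)
            return -ENOSPC;
        set_bit(id, domain_ids);
        return id;
    }

    static void example_free_domain_id(unsigned long *domain_ids,
                                       unsigned int id)
    {
        clear_bit(id, domain_ids);
    }
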
Alex Williamson 2238c0827a iommu/vt-d: Report domain usage in sysfs
Debugging domain ID leakage typically requires long-running tests in
order to exhaust the domain ID space, or kernel instrumentation to
track the setting and clearing of bits.  A couple of trivial
intel-iommu-specific sysfs extensions make it much easier to expose
the IOMMU capabilities and current usage.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-03 16:30:57 +02:00
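
A read-only sysfs attribute of this kind can be as small as the sketch
below (illustrative only; it assumes the iommu is reachable through the
sysfs device's drvdata and that a per-iommu domain-id bitmap exists):

    #include <linux/device.h>
    #include <linux/bitmap.h>
    #include <linux/intel-iommu.h>

    /* Illustrative sketch of a "domains_used" attribute. */
    static ssize_t domains_used_show(struct device *dev,
                                     struct device_attribute *attr, char *buf)
    {
        struct intel_iommu *iommu = dev_get_drvdata(dev);  /* assumption */

        return sprintf(buf, "%d\n",
                       bitmap_weight(iommu->domain_ids,
                                     cap_ndoms(iommu->cap)));
    }
    static DEVICE_ATTR_RO(domains_used);
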
Kees Cook 2439d4aa92 iommu/vt-d: Avoid format string leaks into iommu_device_create
This makes sure it won't be possible to accidentally leak format
strings into iommu device names. Current name allocations are safe,
but this makes the "%s" explicit.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-08-03 16:15:47 +02:00
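
The class of bug being ruled out is the usual format-string one; shown
here with kasprintf() purely as an illustration of the pattern (the
wrapper function is made up):

    #include <linux/slab.h>

    static char *make_label(const char *name)
    {
        /* Risky if 'name' can ever contain '%' conversions: it is
         * used as the format string. */
        /* return kasprintf(GFP_KERNEL, name); */

        /* Safe: 'name' is treated as plain data. */
        return kasprintf(GFP_KERNEL, "%s", name);
    }
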
Sakari Ailus ae1ff3d623 iommu: iova: Move iova cache management to the iova library
This is necessary to separate intel-iommu from the iova library.

Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-07-28 15:47:58 +01:00
Robin Murphy 8f6429c7cb iommu/iova: Avoid over-allocating when size-aligned
Currently, allocating a size-aligned IOVA region quietly adjusts the
actual allocation size in the process, returning a rounded-up
power-of-two-sized allocation. This results in mismatched behaviour in
the IOMMU driver if the original size was not a power of two, where the
original size is mapped, but the rounded-up IOVA size is unmapped.

Whilst some IOMMUs will happily unmap already-unmapped pages, others
consider this an error, so fix it by computing the necessary alignment
padding without altering the actual allocation size. Also clean up by
making pad_size unsigned, since its callers always pass unsigned values
and negative padding makes little sense here anyway.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2015-07-28 15:47:56 +01:00
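
The difference is between growing the allocation to the next power of
two and merely padding its placement so that the start is aligned; in
rough terms (a sketch of the arithmetic, not the iova allocator's actual
top-down allocation code):

    #include <linux/log2.h>

    /* Sketch: highest start pfn at or below (limit_pfn - size + 1) that
     * is aligned to the next power of two >= size.  Only 'size' pfns are
     * recorded as allocated; the skipped space is just padding. */
    static unsigned long example_aligned_start(unsigned long limit_pfn,
                                               unsigned long size)
    {
        unsigned long align = roundup_pow_of_two(size);

        return (limit_pfn - size + 1) & ~(align - 1);
    }
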
Alex Williamson 46ebb7af7b iommu/vt-d: Fix VM domain ID leak
This continues the attempt to fix commit fb170fb4c5 ("iommu/vt-d:
Introduce helper functions to make code symmetric for readability").
The previous attempt in commit 7168440690 ("iommu/vt-d: Detach
domain *only* from attached iommus") overlooked the fact that
dmar_domain.iommu_bmp gets cleared for VM domains when devices are
detached:

intel_iommu_detach_device
  domain_remove_one_dev_info
    domain_detach_iommu

The domain is detached from the iommu, but the iommu is still attached
to the domain, for whatever reason.  Thus when we get to domain_exit(),
we can't rely on iommu_bmp for VM domains to find the active iommus,
we must check them all.  Without that, the corresponding bit in
intel_iommu.domain_ids doesn't get cleared and repeated VM domain
creation and destruction will run out of domain IDs.  Meanwhile we
still can't call iommu_detach_domain() on arbitrary non-VM domains or
we risk clearing in-use domain IDs, as 7168440690 attempted to
address.

It's tempting to modify iommu_detach_domain() to test the domain
iommu_bmp, but the call ordering from domain_remove_one_dev_info()
prevents it being able to work as fb170fb4c5 seems to have intended.
Caching of unused VM domains on the iommu object seems to be the root
of the problem, but this code is far too fragile for that kind of
rework to be proposed for stable, so we simply revert this chunk to
its state prior to fb170fb4c5.

Fixes: fb170fb4c5 ("iommu/vt-d: Introduce helper functions to make
                      code symmetric for readability")
Fixes: 7168440690 ("iommu/vt-d: Detach domain *only* from attached
                      iommus")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: stable@vger.kernel.org # v3.17+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-07-23 14:17:39 +02:00
Bjorn Helgaas fb0cc3aa55 iommu/vt-d: Cache PCI ATS state and Invalidate Queue Depth
We check the ATS state (enabled/disabled) and fetch the PCI ATS Invalidate
Queue Depth in performance-sensitive paths.  It's easy to cache these,
which removes dependencies on PCI.

Remember the ATS enabled state.  When enabling, read the queue depth once
and cache it in the device_domain_info struct.  This is similar to what
amd_iommu.c does.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Joerg Roedel <jroedel@suse.de>
2015-07-20 11:49:46 -05:00
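
In effect the per-device bookkeeping grows two cached fields that are
filled in once on the enable path, so the hot paths never have to touch
PCI config space; schematically (structure and function names below are
illustrative, not the driver's):

    #include <linux/pci.h>
    #include <linux/pci-ats.h>

    /* Illustrative sketch of caching the ATS state per device. */
    struct example_dev_info {
        struct pci_dev *pdev;
        u8 ats_enabled;     /* set once when ATS is enabled */
        u8 ats_qdep;        /* invalidate queue depth, read once */
    };

    static void example_enable_ats(struct example_dev_info *info)
    {
        if (pci_enable_ats(info->pdev, PAGE_SHIFT) == 0) {
            info->ats_enabled = 1;
            info->ats_qdep = pci_ats_queue_depth(info->pdev);
        }
    }
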