
dma-mapping updates for 4.18:

  - replace the force_dma flag with a dma_configure bus method.
    (Nipun Gupta, although one patch is incorrectly attributed to me
     due to a git rebase bug)
  - use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)
  - remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
    right thing for bounce buffering.
  - move dma-debug initialization to common code, and apply a few cleanups
    to the dma-debug code.
  - clean up the Kconfig mess around swiotlb selection
  - swiotlb comment fixup (Yisheng Xie)
  - a trivial swiotlb fix. (Dan Carpenter)
  - support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)
  - add a new generic dma-noncoherent dma_map_ops implementation and use
    it for arc, c6x and nds32.
  - improve scatterlist validity checking in dma-debug. (Robin Murphy)
  - add a struct device quirk to limit the dma-mask to 32-bit due to
    bridge/system issues, and switch x86 to use it instead of a local
    hack for VIA bridges.
  - handle devices without a dma_mask more gracefully in the dma-direct
    code.
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCAApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAlsU1hwLHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYPraxAAocC7JiFKW133/VugCtGA1x9uE8DPHealtsWTAeEq
 KOOB3GxWMU2hKqQ4km5tcfdWoGJvvab6hmDXcitzZGi2JajO7Ae0FwIy3yvxSIKm
 iH/ON7c4sJt8gKrXYsLVylmwDaimNs4a6xfODoCRgnWuovI2QrrZzupnlzPNsiOC
 lv8ezzcW+Ay/gvDD/r72psO+w3QELETif/OzR/qTOtvLrVabM06eHmPQ8Wb98smu
 /UPMMv6/3XwQnxpxpdyqN+p/gUdneXithzT261wTeZ+8gDXmcWBwHGcMBCimcoBi
 FklW52moazIPIsTysqoNlVFsLGJTeS4p2D3BLAp5NwWYsLv+zHUVZsI1JY/8u5Ox
 mM11LIfvu9JtUzaqD9SvxlxIeLhhYZZGnUoV3bQAkpHSQhN/xp2YXd5NWSo5ac2O
 dch83+laZkZgd6ryw6USpt/YTPM/UHBYy7IeGGHX/PbmAke0ZlvA6Rae7kA5DG59
 7GaLdwQyrHp8uGFgwze8P+R4POSk1ly73HHLBT/pFKnDD7niWCPAnBzuuEQGJs00
 0zuyWLQyzOj1l6HCAcMNyGnYSsMp8Fx0fvEmKR/EYs8O83eJKXi6L9aizMZx4v1J
 0wTolUWH6SIIdz474YmewhG5YOLY7mfe9E8aNr8zJFdwRZqwaALKoteRGUxa3f6e
 zUE=
 =6Acj
 -----END PGP SIGNATURE-----

Merge tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

 - replace the force_dma flag with a dma_configure bus method; a sketch
   of the new bus hook follows this list. (Nipun Gupta, although one
   patch is incorrectly attributed to me due to a git rebase bug)

 - use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)

 - remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
   right thing for bounce buffering.

 - move dma-debug initialization to common code, and apply a few
   cleanups to the dma-debug code.

 - clean up the Kconfig mess around swiotlb selection

 - swiotlb comment fixup (Yisheng Xie)

 - a trivial swiotlb fix. (Dan Carpenter)

 - support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)

 - add a new generic dma-noncoherent dma_map_ops implementation and use
   it for arc, c6x and nds32; the required arch hooks are sketched
   after this list.

 - improve scatterlist validity checking in dma-debug. (Robin Murphy)

 - add a struct device quirk to limit the dma-mask to 32-bit due to
   bridge/system issues, and switch x86 to use it instead of a local
   hack for VIA bridges (a hedged example follows this list).

 - handle devices without a dma_mask more gracefully in the dma-direct
   code.

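As a quick illustration of the first item above: rather than the driver
core consulting a force_dma flag, each bus type now supplies its own
DMA setup hook. A minimal sketch, assuming the bus_type member is named
dma_configure as in this series; my_bus_type and my_bus_dma_configure
are hypothetical names:

#include <linux/device.h>
#include <linux/of_device.h>

/* hypothetical bus: derive DMA ops and masks from the device's OF node */
static int my_bus_dma_configure(struct device *dev)
{
	/* the force-DMA decision moves into the bus, as a plain argument */
	return of_dma_configure(dev, dev->of_node, true);
}

struct bus_type my_bus_type = {
	.name		= "my_bus",
	.dma_configure	= my_bus_dma_configure,
};
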
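The generic dma-noncoherent implementation reduces each architecture to
a few hooks; the arc, c6x and nds32 hunks below all converge on this
shape. A minimal sketch, with my_cache_wback() and my_cache_inv()
standing in for the architecture's real cache-maintenance primitives
(arch_dma_alloc()/arch_dma_free() complete the set for coherent
allocations, as the arc and c6x diffs show):

#include <linux/dma-noncoherent.h>

void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
		size_t size, enum dma_data_direction dir)
{
	/* CPU may hold dirty lines: write them back before the device reads */
	my_cache_wback(paddr, size);
}

void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
		size_t size, enum dma_data_direction dir)
{
	/* the device wrote the buffer: drop stale lines before the CPU reads */
	my_cache_inv(paddr, size);
}
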
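For the 32-bit quirk, a PCI fixup tags the affected device and
dma-direct then rejects any wider mask, replacing the x86-local VIA
hack. A hedged sketch: the struct device field name (dma_32bit_limit)
is taken from this series but should be verified against the tree, and
the VIA vendor ID is only illustrative:

#include <linux/pci.h>

static void quirk_dma_32bit(struct pci_dev *pdev)
{
	/* dma-direct will now refuse dma_set_mask() requests above 32 bits */
	pdev->dev.dma_32bit_limit = true;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_ANY_ID, quirk_dma_32bit);
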
* tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping: (48 commits)
  dma-direct: don't crash on device without dma_mask
  nds32: use generic dma_noncoherent_ops
  nds32: implement the unmap_sg DMA operation
  nds32: consolidate DMA cache maintainance routines
  x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
  x86/pci-dma: remove the explicit nodac and allowdac option
  x86/pci-dma: remove the experimental forcesac boot option
  Documentation/x86: remove a stray reference to pci-nommu.c
  core, dma-direct: add a flag 32-bit dma limits
  dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs
  dma-debug: check scatterlist segments
  c6x: use generic dma_noncoherent_ops
  arc: use generic dma_noncoherent_ops
  arc: fix arc_dma_{map,unmap}_page
  arc: fix arc_dma_sync_sg_for_{cpu,device}
  arc: simplify arc_dma_sync_single_for_{cpu,device}
  dma-mapping: provide a generic dma-noncoherent implementation
  dma-mapping: simplify Kconfig dependencies
  riscv: add swiotlb support
  riscv: only enable ZONE_DMA32 for 64-bit
  ...
Linus Torvalds 2018-06-04 10:58:12 -07:00
commit e5a594643a
141 changed files with 625 additions and 1393 deletions

View File

@ -1705,7 +1705,6 @@
nopanic
merge
nomerge
forcesac
soft
pt [x86, IA-64]
nobypass [PPC/POWERNV]

View File

@ -1,31 +0,0 @@
#
# Feature name: dma-api-debug
# Kconfig: HAVE_DMA_API_DEBUG
# description: arch supports DMA debug facilities
#
-----------------------
| arch |status|
-----------------------
| alpha: | TODO |
| arc: | TODO |
| arm: | ok |
| arm64: | ok |
| c6x: | ok |
| h8300: | TODO |
| hexagon: | TODO |
| ia64: | ok |
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
| powerpc: | ok |
| s390: | ok |
| sh: | ok |
| sparc: | ok |
| um: | TODO |
| unicore32: | TODO |
| x86: | ok |
| xtensa: | ok |
-----------------------

View File

@ -187,9 +187,9 @@ PCI
IOMMU (input/output memory management unit)
Currently four x86-64 PCI-DMA mapping implementations exist:
Multiple x86-64 PCI-DMA mapping implementations exist, for example:
1. <arch/x86_64/kernel/pci-nommu.c>: use no hardware/software IOMMU at all
1. <lib/dma-direct.c>: use no hardware/software IOMMU at all
(e.g. because you have < 3 GB memory).
Kernel boot message: "PCI-DMA: Disabling IOMMU"
@ -208,7 +208,7 @@ IOMMU (input/output memory management unit)
Kernel boot message: "PCI-DMA: Using Calgary IOMMU"
iommu=[<size>][,noagp][,off][,force][,noforce][,leak[=<nr_of_leak_pages>]
[,memaper[=<order>]][,merge][,forcesac][,fullflush][,nomerge]
[,memaper[=<order>]][,merge][,fullflush][,nomerge]
[,noaperture][,calgary]
General iommu options:
@ -235,14 +235,7 @@ IOMMU (input/output memory management unit)
(experimental).
nomerge Don't do scatter-gather (SG) merging.
noaperture Ask the IOMMU not to touch the aperture for AGP.
forcesac Force single-address cycle (SAC) mode for masks <40bits
(experimental).
noagp Don't initialize the AGP driver and use full aperture.
allowdac Allow double-address cycle (DAC) mode, i.e. DMA >4GB.
DAC is used with 32-bit PCI to push a 64-bit address in
two cycles. When off all DMA over >4GB is forced through
an IOMMU or software bounce buffering.
nodac Forbid DAC mode, i.e. DMA >4GB.
panic Always panic when IOMMU overflows.
calgary Use the Calgary IOMMU if it is available

View File

@ -4330,12 +4330,14 @@ W: http://git.infradead.org/users/hch/dma-mapping.git
S: Supported
F: lib/dma-debug.c
F: lib/dma-direct.c
F: lib/dma-noncoherent.c
F: lib/dma-virt.c
F: drivers/base/dma-mapping.c
F: drivers/base/dma-coherent.c
F: include/asm-generic/dma-mapping.h
F: include/linux/dma-direct.h
F: include/linux/dma-mapping.h
F: include/linux/dma-noncoherent.h
DME1737 HARDWARE MONITOR DRIVER
M: Juerg Haefliger <juergh@gmail.com>

View File

@ -278,9 +278,6 @@ config HAVE_CLK
The <linux/clk.h> calls support software clock gating and
thus are a key power management tool on many systems.
config HAVE_DMA_API_DEBUG
bool
config HAVE_HW_BREAKPOINT
bool
depends on PERF_EVENTS

View File

@ -10,6 +10,8 @@ config ALPHA
select HAVE_OPROFILE
select HAVE_PCSPKR_PLATFORM
select HAVE_PERF_EVENTS
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
select VIRT_TO_BUS
select GENERIC_IRQ_PROBE
select AUTO_IRQ_AFFINITY if SMP
@ -64,15 +66,6 @@ config ZONE_DMA
bool
default y
config ARCH_DMA_ADDR_T_64BIT
def_bool y
config NEED_DMA_MAP_STATE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config GENERIC_ISA_DMA
bool
default y
@ -346,9 +339,6 @@ config PCI_DOMAINS
config PCI_SYSCALL
def_bool PCI
config IOMMU_HELPER
def_bool PCI
config ALPHA_NONAME
bool
depends on ALPHA_BOOK1 || ALPHA_NONAME_CH

View File

@ -56,11 +56,6 @@ struct pci_controller {
/* IOMMU controls. */
/* The PCI address space does not equal the physical memory address space.
The networking and block device layers use this boolean for bounce buffer
decisions. */
#define PCI_DMA_BUS_IS_PHYS 0
/* TODO: integrate with include/asm-generic/pci.h ? */
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{

View File

@ -9,11 +9,15 @@
config ARC
def_bool y
select ARC_TIMERS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_SG_CHAIN
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
select COMMON_CLK
select DMA_NONCOHERENT_OPS
select DMA_NONCOHERENT_MMAP
select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC)
select GENERIC_CLOCKEVENTS
select GENERIC_FIND_FIRST_BIT
@ -453,16 +457,11 @@ config ARC_HAS_PAE40
default n
depends on ISA_ARCV2
select HIGHMEM
select PHYS_ADDR_T_64BIT
help
Enable access to physical memory beyond 4G, only supported on
ARC cores with 40 bit Physical Addressing support
config ARCH_PHYS_ADDR_T_64BIT
def_bool ARC_HAS_PAE40
config ARCH_DMA_ADDR_T_64BIT
bool
config ARC_KVADDR_SIZE
int "Kernel Virtual Address Space size (MB)"
range 0 512

View File

@ -2,6 +2,7 @@
generic-y += bugs.h
generic-y += device.h
generic-y += div64.h
generic-y += dma-mapping.h
generic-y += emergency-restart.h
generic-y += extable.h
generic-y += fb.h

View File

@ -1,21 +0,0 @@
/*
* DMA Mapping glue for ARC
*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_ARC_DMA_MAPPING_H
#define ASM_ARC_DMA_MAPPING_H
extern const struct dma_map_ops arc_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &arc_dma_ops;
}
#endif

View File

@ -16,12 +16,6 @@
#define PCIBIOS_MIN_MEM 0x100000
#define pcibios_assign_all_busses() 1
/*
* The PCI address space does equal the physical memory address space.
* The networking and block device layers use this boolean for bounce
* buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS 1
#endif /* __KERNEL__ */

View File

@ -16,13 +16,12 @@
* The default DMA address == Phy address which is 0x8000_0000 based.
*/
#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>
#include <asm/cache.h>
#include <asm/cacheflush.h>
static void *arc_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t gfp, unsigned long attrs)
{
unsigned long order = get_order(size);
struct page *page;
@ -89,7 +88,7 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
return kvaddr;
}
static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
void arch_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs)
{
phys_addr_t paddr = dma_handle;
@ -105,9 +104,9 @@ static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
__free_pages(page, get_order(size));
}
static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs)
int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs)
{
unsigned long user_count = vma_pages(vma);
unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
@ -130,149 +129,14 @@ static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
return ret;
}
/*
* streaming DMA Mapping API...
* CPU accesses page via normal paddr, thus needs to explicitly made
* consistent before each use
*/
static void _dma_cache_sync(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
switch (dir) {
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %pa[p]\n", dir, &paddr);
}
dma_cache_wback(paddr, size);
}
/*
* arc_dma_map_page - map a portion of a page for streaming DMA
*
* Ensure that any data held in the cache is appropriately discarded
* or written back.
*
* The device owns this memory once this call has completed. The CPU
* can regain ownership by calling dma_unmap_page().
*
* Note: while it takes struct page as arg, caller can "abuse" it to pass
* a region larger than PAGE_SIZE, provided it is physically contiguous
* and this still works correctly
*/
static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
phys_addr_t paddr = page_to_phys(page) + offset;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
_dma_cache_sync(paddr, size, dir);
return paddr;
dma_cache_inv(paddr, size);
}
/*
* arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
*
* After this call, reads by the CPU to the buffer are guaranteed to see
* whatever the device wrote there.
*
* Note: historically this routine was not implemented for ARC
*/
static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
phys_addr_t paddr = handle;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
_dma_cache_sync(paddr, size, dir);
}
static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir,
unsigned long attrs)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
attrs);
}
static void arc_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
}
static void arc_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
}
static void arc_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync(sg_phys(sg), sg->length, dir);
}
static void arc_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync(sg_phys(sg), sg->length, dir);
}
static int arc_dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
const struct dma_map_ops arc_dma_ops = {
.alloc = arc_dma_alloc,
.free = arc_dma_free,
.mmap = arc_dma_mmap,
.map_page = arc_dma_map_page,
.unmap_page = arc_dma_unmap_page,
.map_sg = arc_dma_map_sg,
.unmap_sg = arc_dma_unmap_sg,
.sync_single_for_device = arc_dma_sync_single_for_device,
.sync_single_for_cpu = arc_dma_sync_single_for_cpu,
.sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,
.sync_sg_for_device = arc_dma_sync_sg_for_device,
.dma_supported = arc_dma_supported,
};
EXPORT_SYMBOL(arc_dma_ops);

View File

@ -60,7 +60,6 @@ config ARM
select HAVE_CONTEXT_TRACKING
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
@ -96,6 +95,7 @@ config ARM
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select IRQ_FORCED_THREADING
select MODULES_USE_ELF_REL
select NEED_DMA_MAP_STATE
select NO_BOOTMEM
select OF_EARLY_FLATTREE if OF
select OF_RESERVED_MEM if OF
@ -119,9 +119,6 @@ config ARM_HAS_SG_CHAIN
select ARCH_HAS_SG_CHAIN
bool
config NEED_SG_DMA_LENGTH
bool
config ARM_DMA_USE_IOMMU
bool
select ARM_HAS_SG_CHAIN
@ -224,9 +221,6 @@ config ARCH_MAY_HAVE_PC_FDC
config ZONE_DMA
bool
config NEED_DMA_MAP_STATE
def_bool y
config ARCH_SUPPORTS_UPROBES
def_bool y
@ -1778,12 +1772,6 @@ config SECCOMP
and the task is only allowed to execute a few safe syscalls
defined by each seccomp mode.
config SWIOTLB
def_bool y
config IOMMU_HELPER
def_bool SWIOTLB
config PARAVIRT
bool "Enable paravirtualization code"
help
@ -1815,6 +1803,7 @@ config XEN
depends on MMU
select ARCH_DMA_ADDR_T_64BIT
select ARM_PSCI
select SWIOTLB
select SWIOTLB_XEN
select PARAVIRT
help

View File

@ -19,13 +19,6 @@ static inline int pci_proc_domain(struct pci_bus *bus)
}
#endif /* CONFIG_PCI_DOMAINS */
/*
* The PCI address space does equal the physical memory address space.
* The networking and block device layers use this boolean for bounce
* buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
#define HAVE_PCI_MMAP
#define ARCH_GENERIC_PCI_MMAP_RESOURCE

View File

@ -754,7 +754,7 @@ int __init arm_add_memory(u64 start, u64 size)
else
size -= aligned_start - start;
#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
#ifndef CONFIG_PHYS_ADDR_T_64BIT
if (aligned_start > ULONG_MAX) {
pr_crit("Ignoring memory at 0x%08llx outside 32-bit physical address space\n",
(long long)start);

View File

@ -2,7 +2,6 @@
config ARCH_AXXIA
bool "LSI Axxia platforms"
depends on ARCH_MULTI_V7 && ARM_LPAE
select ARCH_DMA_ADDR_T_64BIT
select ARM_AMBA
select ARM_GIC
select ARM_TIMER_SP804

View File

@ -211,7 +211,6 @@ config ARCH_BRCMSTB
select BRCMSTB_L2_IRQ
select BCM7120_L2_IRQ
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select ZONE_DMA if ARM_LPAE
select SOC_BRCMSTB
select SOC_BUS

View File

@ -112,7 +112,6 @@ config SOC_EXYNOS5440
bool "SAMSUNG EXYNOS5440"
default y
depends on ARCH_EXYNOS5
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select HAVE_ARM_ARCH_TIMER
select AUTO_ZRELADDR
select PINCTRL_EXYNOS5440

View File

@ -1,7 +1,6 @@
config ARCH_HIGHBANK
bool "Calxeda ECX-1000/2000 (Highbank/Midway)"
depends on ARCH_MULTI_V7
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_SUPPORTS_BIG_ENDIAN
select ARM_AMBA

View File

@ -3,7 +3,6 @@ config ARCH_ROCKCHIP
depends on ARCH_MULTI_V7
select PINCTRL
select PINCTRL_ROCKCHIP
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select ARCH_HAS_RESET_CONTROLLER
select ARM_AMBA
select ARM_GIC

View File

@ -29,7 +29,6 @@ config ARCH_RMOBILE
menuconfig ARCH_RENESAS
bool "Renesas ARM SoCs"
depends on ARCH_MULTI_V7 && MMU
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select ARCH_SHMOBILE
select ARM_GIC
select GPIOLIB

View File

@ -15,6 +15,5 @@ menuconfig ARCH_TEGRA
select RESET_CONTROLLER
select SOC_BUS
select ZONE_DMA if ARM_LPAE
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
help
This enables support for NVIDIA Tegra based systems.

View File

@ -661,6 +661,7 @@ config ARM_LPAE
bool "Support for the Large Physical Address Extension"
depends on MMU && CPU_32v7 && !CPU_32v6 && !CPU_32v5 && \
!CPU_32v4 && !CPU_32v3
select PHYS_ADDR_T_64BIT
help
Say Y if you have an ARMv7 processor supporting the LPAE page
table format and you would like to access memory beyond the
@ -673,12 +674,6 @@ config ARM_PV_FIXUP
def_bool y
depends on ARM_LPAE && ARM_PATCH_PHYS_VIRT && ARCH_KEYSTONE
config ARCH_PHYS_ADDR_T_64BIT
def_bool ARM_LPAE
config ARCH_DMA_ADDR_T_64BIT
bool
config ARM_THUMB
bool "Support Thumb user binaries" if !CPU_THUMBONLY && EXPERT
depends on CPU_THUMB_CAPABLE

View File

@ -241,12 +241,3 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
void arch_teardown_dma_ops(struct device *dev)
{
}
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
core_initcall(dma_debug_do_init);

View File

@ -1151,15 +1151,6 @@ int arm_dma_supported(struct device *dev, u64 mask)
return __dma_supported(dev, mask, false);
}
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
core_initcall(dma_debug_do_init);
#ifdef CONFIG_ARM_DMA_USE_IOMMU
static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)

View File

@ -105,7 +105,6 @@ config ARM64
select HAVE_CONTEXT_TRACKING
select HAVE_DEBUG_BUGVERBOSE
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_EFFICIENT_UNALIGNED_ACCESS
@ -133,6 +132,8 @@ config ARM64
select IRQ_FORCED_THREADING
select MODULES_USE_ELF_RELA
select MULTI_IRQ_HANDLER
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
@ -142,6 +143,7 @@ config ARM64
select POWER_SUPPLY
select REFCOUNT_FULL
select SPARSE_IRQ
select SWIOTLB
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
help
@ -150,9 +152,6 @@ config ARM64
config 64BIT
def_bool y
config ARCH_PHYS_ADDR_T_64BIT
def_bool y
config MMU
def_bool y
@ -237,24 +236,9 @@ config ZONE_DMA32
config HAVE_GENERIC_GUP
def_bool y
config ARCH_DMA_ADDR_T_64BIT
def_bool y
config NEED_DMA_MAP_STATE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config SMP
def_bool y
config SWIOTLB
def_bool y
config IOMMU_HELPER
def_bool SWIOTLB
config KERNEL_MODE_NEON
def_bool y

View File

@ -18,11 +18,6 @@
#define pcibios_assign_all_busses() \
(pci_has_flag(PCI_REASSIGN_ALL_BUS))
/*
* PCI address space differs from physical memory address space
*/
#define PCI_DMA_BUS_IS_PHYS (0)
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
extern int isa_dma_bridge_buggy;

View File

@ -508,16 +508,6 @@ static int __init arm64_dma_init(void)
}
arch_initcall(arm64_dma_init);
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_debug_do_init);
#ifdef CONFIG_IOMMU_DMA
#include <linux/dma-iommu.h>
#include <linux/platform_device.h>

View File

@ -6,11 +6,13 @@
config C6X
def_bool y
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select CLKDEV_LOOKUP
select DMA_NONCOHERENT_OPS
select GENERIC_ATOMIC64
select GENERIC_IRQ_SHOW
select HAVE_ARCH_TRACEHOOK
select HAVE_DMA_API_DEBUG
select HAVE_MEMBLOCK
select SPARSE_IRQ
select IRQ_DOMAIN

View File

@ -5,6 +5,7 @@ generic-y += current.h
generic-y += device.h
generic-y += div64.h
generic-y += dma.h
generic-y += dma-mapping.h
generic-y += emergency-restart.h
generic-y += exec.h
generic-y += extable.h

View File

@ -1,28 +0,0 @@
/*
* Port on Texas Instruments TMS320C6x architecture
*
* Copyright (C) 2004, 2009, 2010, 2011 Texas Instruments Incorporated
* Author: Aurelien Jacquiot <aurelien.jacquiot@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H
extern const struct dma_map_ops c6x_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &c6x_dma_ops;
}
extern void coherent_mem_init(u32 start, u32 size);
void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, unsigned long attrs);
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs);
#endif /* _ASM_C6X_DMA_MAPPING_H */

View File

@ -28,5 +28,7 @@ extern unsigned char c6x_fuse_mac[6];
extern void machine_init(unsigned long dt_ptr);
extern void time_init(void);
extern void coherent_mem_init(u32 start, u32 size);
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_C6X_SETUP_H */

View File

@ -8,6 +8,6 @@ extra-y := head.o vmlinux.lds
obj-y := process.o traps.o irq.o signal.o ptrace.o
obj-y += setup.o sys_c6x.o time.o devicetree.o
obj-y += switch_to.o entry.o vectors.o c6x_ksyms.o
obj-y += soc.o dma.o
obj-y += soc.o
obj-$(CONFIG_MODULES) += module.o

View File

@ -1,149 +0,0 @@
/*
* Copyright (C) 2011 Texas Instruments Incorporated
* Author: Mark Salter <msalter@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/scatterlist.h>
#include <asm/cacheflush.h>
static void c6x_dma_sync(dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
unsigned long paddr = handle;
BUG_ON(!valid_dma_direction(dir));
switch (dir) {
case DMA_FROM_DEVICE:
L2_cache_block_invalidate(paddr, paddr + size);
break;
case DMA_TO_DEVICE:
L2_cache_block_writeback(paddr, paddr + size);
break;
case DMA_BIDIRECTIONAL:
L2_cache_block_writeback_invalidate(paddr, paddr + size);
break;
default:
break;
}
}
static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
dma_addr_t handle = virt_to_phys(page_address(page) + offset);
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(handle, size, dir);
return handle;
}
static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(handle, size, dir);
}
static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i) {
sg->dma_address = sg_phys(sg);
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
c6x_dma_sync(sg->dma_address, sg->length, dir);
}
return nents;
}
static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *sg;
int i;
if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
return;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
}
static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
static void c6x_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
static void c6x_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir);
}
static void c6x_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
sg->length, dir);
}
const struct dma_map_ops c6x_dma_ops = {
.alloc = c6x_dma_alloc,
.free = c6x_dma_free,
.map_page = c6x_dma_map_page,
.unmap_page = c6x_dma_unmap_page,
.map_sg = c6x_dma_map_sg,
.unmap_sg = c6x_dma_unmap_sg,
.sync_single_for_device = c6x_dma_sync_single_for_device,
.sync_single_for_cpu = c6x_dma_sync_single_for_cpu,
.sync_sg_for_device = c6x_dma_sync_sg_for_device,
.sync_sg_for_cpu = c6x_dma_sync_sg_for_cpu,
};
EXPORT_SYMBOL(c6x_dma_ops);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);

View File

@ -19,10 +19,12 @@
#include <linux/bitops.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>
#include <linux/memblock.h>
#include <asm/cacheflush.h>
#include <asm/page.h>
#include <asm/setup.h>
/*
* DMA coherent memory management, can be redefined using the memdma=
@ -73,7 +75,7 @@ static void __free_dma_pages(u32 addr, int order)
* Allocate DMA coherent memory space and return both the kernel
* virtual and DMA address for that space.
*/
void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, unsigned long attrs)
{
u32 paddr;
@ -98,7 +100,7 @@ void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
/*
* Free DMA coherent memory as defined by the above mapping.
*/
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
void arch_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs)
{
int order;
@ -139,3 +141,35 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
dma_bitmap = phys_to_virt(bitmap_phys);
memset(dma_bitmap, 0, dma_pages * PAGE_SIZE);
}
static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
switch (dir) {
case DMA_FROM_DEVICE:
L2_cache_block_invalidate(paddr, paddr + size);
break;
case DMA_TO_DEVICE:
L2_cache_block_writeback(paddr, paddr + size);
break;
case DMA_BIDIRECTIONAL:
L2_cache_block_writeback_invalidate(paddr, paddr + size);
break;
default:
break;
}
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
return c6x_dma_sync(dev, paddr, size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
return c6x_dma_sync(dev, paddr, size, dir);
}

View File

@ -15,6 +15,4 @@ static inline void pcibios_penalize_isa_irq(int irq, int active)
/* We don't do dynamic PCI IRQ allocation */
}
#define PCI_DMA_BUS_IS_PHYS (1)
#endif /* _ASM_H8300_PCI_H */

View File

@ -19,6 +19,7 @@ config HEXAGON
select GENERIC_IRQ_SHOW
select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
select NEED_SG_DMA_LENGTH
select NO_IOPORT_MAP
select GENERIC_IOMAP
select GENERIC_SMP_IDLE_THREAD
@ -63,9 +64,6 @@ config GENERIC_CSUM
config GENERIC_IRQ_PROBE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config RWSEM_GENERIC_SPINLOCK
def_bool n

View File

@ -208,7 +208,6 @@ const struct dma_map_ops hexagon_dma_ops = {
.sync_single_for_cpu = hexagon_sync_single_for_cpu,
.sync_single_for_device = hexagon_sync_single_for_device,
.mapping_error = hexagon_mapping_error,
.is_phys = 1,
};
void __init hexagon_dma_init(void)

View File

@ -29,7 +29,6 @@ config IA64
select HAVE_FUNCTION_TRACER
select TTY
select HAVE_ARCH_TRACEHOOK
select HAVE_DMA_API_DEBUG
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_VIRT_CPU_ACCOUNTING
@ -54,6 +53,8 @@ config IA64
select MODULES_USE_ELF_RELA
select ARCH_USE_CMPXCHG_LOCKREF
select HAVE_ARCH_AUDITSYSCALL
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
default y
help
The Itanium Processor Family is Intel's 64-bit successor to
@ -78,18 +79,6 @@ config MMU
bool
default y
config ARCH_DMA_ADDR_T_64BIT
def_bool y
config NEED_DMA_MAP_STATE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config SWIOTLB
bool
config STACKTRACE_SUPPORT
def_bool y
@ -146,7 +135,6 @@ config IA64_GENERIC
bool "generic"
select NUMA
select ACPI_NUMA
select DMA_DIRECT_OPS
select SWIOTLB
select PCI_MSI
help
@ -167,7 +155,6 @@ config IA64_GENERIC
config IA64_DIG
bool "DIG-compliant"
select DMA_DIRECT_OPS
select SWIOTLB
config IA64_DIG_VTD
@ -183,7 +170,6 @@ config IA64_HP_ZX1
config IA64_HP_ZX1_SWIOTLB
bool "HP-zx1/sx1000 with software I/O TLB"
select DMA_DIRECT_OPS
select SWIOTLB
help
Build a kernel that runs on HP zx1 and sx1000 systems even when they
@ -207,7 +193,6 @@ config IA64_SGI_UV
bool "SGI-UV"
select NUMA
select ACPI_NUMA
select DMA_DIRECT_OPS
select SWIOTLB
help
Selecting this option will optimize the kernel for use on UV based
@ -218,7 +203,6 @@ config IA64_SGI_UV
config IA64_HP_SIM
bool "Ski-simulator"
select DMA_DIRECT_OPS
select SWIOTLB
depends on !PM
@ -613,6 +597,3 @@ source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"
config IOMMU_HELPER
def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)

View File

@ -1845,9 +1845,6 @@ static void ioc_init(unsigned long hpa, struct ioc *ioc)
ioc_resource_init(ioc);
ioc_sac_init(ioc);
if ((long) ~iovp_mask > (long) ia64_max_iommu_merge_mask)
ia64_max_iommu_merge_mask = ~iovp_mask;
printk(KERN_INFO PFX
"%s %d.%d HPA 0x%lx IOVA space %dMb at 0x%lx\n",
ioc->name, (ioc->rev >> 4) & 0xF, ioc->rev & 0xF,

View File

@ -30,23 +30,6 @@ struct pci_vector_struct {
#define PCIBIOS_MIN_IO 0x1000
#define PCIBIOS_MIN_MEM 0x10000000
/*
* PCI_DMA_BUS_IS_PHYS should be set to 1 if there is _necessarily_ a direct
* correspondence between device bus addresses and CPU physical addresses.
* Platforms with a hardware I/O MMU _must_ turn this off to suppress the
* bounce buffer handling code in the block and network device layers.
* Platforms with separate bus address spaces _must_ turn this off and provide
* a device DMA mapping implementation that takes care of the necessary
* address translation.
*
* For now, the ia64 platforms which may have separate/multiple bus address
* spaces all have I/O MMUs which support the merging of physically
* discontiguous buffers, so we can use that as the sole factor to determine
* the setting of PCI_DMA_BUS_IS_PHYS.
*/
extern unsigned long ia64_max_iommu_merge_mask;
#define PCI_DMA_BUS_IS_PHYS (ia64_max_iommu_merge_mask == ~0UL)
#define HAVE_PCI_MMAP
#define ARCH_GENERIC_PCI_MMAP_RESOURCE
#define arch_can_pci_mmap_wc() 1

View File

@ -9,16 +9,6 @@ int iommu_detected __read_mostly;
const struct dma_map_ops *dma_ops;
EXPORT_SYMBOL(dma_ops);
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);
const struct dma_map_ops *dma_get_ops(struct device *dev)
{
return dma_ops;

View File

@ -123,18 +123,6 @@ unsigned long ia64_i_cache_stride_shift = ~0;
#define CACHE_STRIDE_SHIFT 5
unsigned long ia64_cache_stride_shift = ~0;
/*
* The merge_mask variable needs to be set to (max(iommu_page_size(iommu)) - 1). This
* mask specifies a mask of address bits that must be 0 in order for two buffers to be
* mergeable by the I/O MMU (i.e., the end address of the first buffer and the start
* address of the second buffer must be aligned to (merge_mask+1) in order to be
* mergeable). By default, we assume there is no I/O MMU which can merge physically
* discontiguous buffers, so we set the merge_mask to ~0UL, which corresponds to a iommu
* page-size of 2^64.
*/
unsigned long ia64_max_iommu_merge_mask = ~0UL;
EXPORT_SYMBOL(ia64_max_iommu_merge_mask);
/*
* We use a special marker for the end of memory and it uses the extra (+1) slot
*/

View File

@ -480,11 +480,6 @@ sn_io_early_init(void)
tioca_init_provider();
tioce_init_provider();
/*
* This is needed to avoid bounce limit checks in the blk layer
*/
ia64_max_iommu_merge_mask = ~PAGE_MASK;
sn_irq_lh_init();
INIT_LIST_HEAD(&sn_sysdata_list);
sn_init_cpei_timer();

View File

@ -4,12 +4,6 @@
#include <asm-generic/pci.h>
/* The PCI address space does equal the physical memory
* address space. The networking and block device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
#define pcibios_assign_all_busses() 1
#define PCIBIOS_MIN_IO 0x00000100

View File

@ -19,7 +19,6 @@ config MICROBLAZE
select HAVE_ARCH_HASH
select HAVE_ARCH_KGDB
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER

View File

@ -62,12 +62,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
#define HAVE_PCI_LEGACY 1
/* The PCI address space does equal the physical memory
* address space (no IOMMU). The IDE and SCSI device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
extern void pcibios_claim_one_bus(struct pci_bus *b);
extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);

View File

@ -184,14 +184,3 @@ const struct dma_map_ops dma_nommu_ops = {
.sync_sg_for_device = dma_nommu_sync_sg_for_device,
};
EXPORT_SYMBOL(dma_nommu_ops);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);

View File

@ -42,7 +42,6 @@ config MIPS
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_EXIT_THREAD
@ -132,7 +131,7 @@ config MIPS_GENERIC
config MIPS_ALCHEMY
bool "Alchemy processor based machines"
select ARCH_PHYS_ADDR_T_64BIT
select PHYS_ADDR_T_64BIT
select CEVT_R4K
select CSRC_R4K
select IRQ_MIPS_CPU
@ -890,7 +889,7 @@ config CAVIUM_OCTEON_SOC
bool "Cavium Networks Octeon SoC based boards"
select CEVT_R4K
select ARCH_HAS_PHYS_TO_DMA
select ARCH_PHYS_ADDR_T_64BIT
select PHYS_ADDR_T_64BIT
select DMA_COHERENT
select SYS_SUPPORTS_64BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
@ -912,6 +911,7 @@ config CAVIUM_OCTEON_SOC
select MIPS_NR_CPU_NR_MAP_1024
select BUILTIN_DTB
select MTD_COMPLEX_MAPPINGS
select SWIOTLB
select SYS_SUPPORTS_RELOCATABLE
help
This option supports all of the Octeon reference boards from Cavium
@ -936,7 +936,7 @@ config NLM_XLR_BOARD
select SWAP_IO_SPACE
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_64BIT_KERNEL
select ARCH_PHYS_ADDR_T_64BIT
select PHYS_ADDR_T_64BIT
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_HIGHMEM
select DMA_COHERENT
@ -962,7 +962,7 @@ config NLM_XLP_BOARD
select HW_HAS_PCI
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_64BIT_KERNEL
select ARCH_PHYS_ADDR_T_64BIT
select PHYS_ADDR_T_64BIT
select GPIOLIB
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_LITTLE_ENDIAN
@ -1101,9 +1101,6 @@ config GPIO_TXX9
config FW_CFE
bool
config ARCH_DMA_ADDR_T_64BIT
def_bool (HIGHMEM && ARCH_PHYS_ADDR_T_64BIT) || 64BIT
config ARCH_SUPPORTS_UPROBES
bool
@ -1122,9 +1119,6 @@ config DMA_NONCOHERENT
bool
select NEED_DMA_MAP_STATE
config NEED_DMA_MAP_STATE
bool
config SYS_HAS_EARLY_PRINTK
bool
@ -1373,6 +1367,7 @@ config CPU_LOONGSON3
select MIPS_PGD_C0_CONTEXT
select MIPS_L1_CACHE_SHIFT_6
select GPIOLIB
select SWIOTLB
help
The Loongson 3 processor implements the MIPS64R2 instruction
set with many extensions.
@ -1770,7 +1765,7 @@ config CPU_MIPS32_R5_XPA
depends on SYS_SUPPORTS_HIGHMEM
select XPA
select HIGHMEM
select ARCH_PHYS_ADDR_T_64BIT
select PHYS_ADDR_T_64BIT
default n
help
Choose this option if you want to enable the Extended Physical
@ -2402,9 +2397,6 @@ config SB1_PASS_2_1_WORKAROUNDS
default y
config ARCH_PHYS_ADDR_T_64BIT
bool
choice
prompt "SmartMIPS or microMIPS ASE support"

View File

@ -67,18 +67,6 @@ config CAVIUM_OCTEON_LOCK_L2_MEMCPY
help
Lock the kernel's implementation of memcpy() into L2.
config IOMMU_HELPER
bool
config NEED_SG_DMA_LENGTH
bool
config SWIOTLB
def_bool y
select DMA_DIRECT_OPS
select IOMMU_HELPER
select NEED_SG_DMA_LENGTH
config OCTEON_ILM
tristate "Module to measure interrupt latency using Octeon CIU Timer"
help

View File

@ -121,13 +121,6 @@ extern unsigned long PCIBIOS_MIN_MEM;
#include <linux/string.h>
#include <asm/io.h>
/*
* The PCI address space does equal the physical memory address space.
* The networking and block device layers use this boolean for bounce
* buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
#ifdef CONFIG_PCI_DOMAINS_GENERIC
static inline int pci_proc_domain(struct pci_bus *bus)
{

View File

@ -130,21 +130,6 @@ config LOONGSON_UART_BASE
default y
depends on EARLY_PRINTK || SERIAL_8250
config IOMMU_HELPER
bool
config NEED_SG_DMA_LENGTH
bool
config SWIOTLB
bool "Soft IOMMU Support for All-Memory DMA"
default y
depends on CPU_LOONGSON3
select DMA_DIRECT_OPS
select IOMMU_HELPER
select NEED_SG_DMA_LENGTH
select NEED_DMA_MAP_STATE
config PHYS48_TO_HT40
bool
default y if CPU_LOONGSON3

View File

@ -402,13 +402,3 @@ static const struct dma_map_ops mips_default_dma_map_ops = {
const struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops;
EXPORT_SYMBOL(mips_dma_map_ops);
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init mips_dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(mips_dma_init);

View File

@ -83,10 +83,4 @@ endif
config NLM_COMMON
bool
config IOMMU_HELPER
bool
config NEED_SG_DMA_LENGTH
bool
endif

View File

@ -5,10 +5,13 @@
config NDS32
def_bool y
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_WANT_FRAME_POINTERS if FTRACE
select CLKSRC_MMIO
select CLONE_BACKWARDS
select COMMON_CLK
select DMA_NONCOHERENT_OPS
select GENERIC_ASHLDI3
select GENERIC_ASHRDI3
select GENERIC_LSHRDI3

View File

@ -13,6 +13,7 @@ generic-y += cputime.h
generic-y += device.h
generic-y += div64.h
generic-y += dma.h
generic-y += dma-mapping.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h

View File

@ -1,14 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef ASMNDS32_DMA_MAPPING_H
#define ASMNDS32_DMA_MAPPING_H
extern struct dma_map_ops nds32_dma_ops;
static inline struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &nds32_dma_ops;
}
#endif

View File

@ -3,17 +3,14 @@
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/export.h>
#include <linux/string.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>
#include <linux/io.h>
#include <linux/cache.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/dma-mapping.h>
#include <asm/proc-fns.h>
/*
@ -22,11 +19,6 @@
static pte_t *consistent_pte;
static DEFINE_RAW_SPINLOCK(consistent_lock);
enum master_type {
FOR_CPU = 0,
FOR_DEVICE = 1,
};
/*
* VM region handling support.
*
@ -124,10 +116,8 @@ out:
return c;
}
/* FIXME: attrs is not used. */
static void *nds32_dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * handle, gfp_t gfp,
unsigned long attrs)
void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, unsigned long attrs)
{
struct page *page;
struct arch_vm_region *c;
@ -232,8 +222,8 @@ no_page:
return NULL;
}
static void nds32_dma_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t handle, unsigned long attrs)
void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t handle, unsigned long attrs)
{
struct arch_vm_region *c;
unsigned long flags, addr;
@ -333,145 +323,69 @@ static int __init consistent_init(void)
}
core_initcall(consistent_init);
static void consistent_sync(void *vaddr, size_t size, int direction, int master_type);
static dma_addr_t nds32_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
unsigned long attrs)
static inline void cache_op(phys_addr_t paddr, size_t size,
void (*fn)(unsigned long start, unsigned long end))
{
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
consistent_sync((void *)(page_address(page) + offset), size, dir, FOR_DEVICE);
return page_to_phys(page) + offset;
}
struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);
unsigned offset = paddr & ~PAGE_MASK;
size_t left = size;
unsigned long start;
static void nds32_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
consistent_sync(phys_to_virt(handle), size, dir, FOR_CPU);
}
do {
size_t len = left;
/*
* Make an area consistent for devices.
*/
static void consistent_sync(void *vaddr, size_t size, int direction, int master_type)
{
unsigned long start = (unsigned long)vaddr;
unsigned long end = start + size;
if (master_type == FOR_CPU) {
switch (direction) {
case DMA_TO_DEVICE:
break;
case DMA_FROM_DEVICE:
case DMA_BIDIRECTIONAL:
cpu_dma_inval_range(start, end);
break;
default:
BUG();
}
} else {
/* FOR_DEVICE */
switch (direction) {
case DMA_FROM_DEVICE:
break;
case DMA_TO_DEVICE:
case DMA_BIDIRECTIONAL:
cpu_dma_wb_range(start, end);
break;
default:
BUG();
}
}
}
static int nds32_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir,
unsigned long attrs)
{
int i;
for (i = 0; i < nents; i++, sg++) {
void *virt;
unsigned long pfn;
struct page *page = sg_page(sg);
sg->dma_address = sg_phys(sg);
pfn = page_to_pfn(page) + sg->offset / PAGE_SIZE;
page = pfn_to_page(pfn);
if (PageHighMem(page)) {
virt = kmap_atomic(page);
consistent_sync(virt, sg->length, dir, FOR_CPU);
kunmap_atomic(virt);
void *addr;
if (offset + len > PAGE_SIZE) {
if (offset >= PAGE_SIZE) {
page += offset >> PAGE_SHIFT;
offset &= ~PAGE_MASK;
}
len = PAGE_SIZE - offset;
}
addr = kmap_atomic(page);
start = (unsigned long)(addr + offset);
fn(start, start + len);
kunmap_atomic(addr);
} else {
if (sg->offset > PAGE_SIZE)
panic("sg->offset:%08x > PAGE_SIZE\n",
sg->offset);
virt = page_address(page) + sg->offset;
consistent_sync(virt, sg->length, dir, FOR_CPU);
start = (unsigned long)phys_to_virt(paddr);
fn(start, start + size);
}
}
return nents;
offset = 0;
page++;
left -= len;
} while (left);
}
static void nds32_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction dir,
unsigned long attrs)
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
}
static void
nds32_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_CPU);
}
static void
nds32_dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_DEVICE);
}
static void
nds32_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir)
{
int i;
for (i = 0; i < nents; i++, sg++) {
char *virt =
page_address((struct page *)sg->page_link) + sg->offset;
consistent_sync(virt, sg->length, dir, FOR_CPU);
switch (dir) {
case DMA_FROM_DEVICE:
break;
case DMA_TO_DEVICE:
case DMA_BIDIRECTIONAL:
cache_op(paddr, size, cpu_dma_wb_range);
break;
default:
BUG();
}
}
static void
nds32_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
int i;
for (i = 0; i < nents; i++, sg++) {
char *virt =
page_address((struct page *)sg->page_link) + sg->offset;
consistent_sync(virt, sg->length, dir, FOR_DEVICE);
switch (dir) {
case DMA_TO_DEVICE:
break;
case DMA_FROM_DEVICE:
case DMA_BIDIRECTIONAL:
cache_op(paddr, size, cpu_dma_inval_range);
break;
default:
BUG();
}
}
struct dma_map_ops nds32_dma_ops = {
.alloc = nds32_dma_alloc_coherent,
.free = nds32_dma_free,
.map_page = nds32_dma_map_page,
.unmap_page = nds32_dma_unmap_page,
.map_sg = nds32_dma_map_sg,
.unmap_sg = nds32_dma_unmap_sg,
.sync_single_for_device = nds32_dma_sync_single_for_device,
.sync_single_for_cpu = nds32_dma_sync_single_for_cpu,
.sync_sg_for_cpu = nds32_dma_sync_sg_for_cpu,
.sync_sg_for_device = nds32_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(nds32_dma_ops);

View File

@ -247,14 +247,3 @@ const struct dma_map_ops or1k_dma_map_ops = {
.sync_single_for_device = or1k_sync_single_for_device,
};
EXPORT_SYMBOL(or1k_dma_map_ops);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);

View File

@ -51,6 +51,8 @@ config PARISC
select GENERIC_CLOCKEVENTS
select ARCH_NO_COHERENT_DMA_MMAP
select CPU_NO_EFFICIENT_FFS
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
help
The PA-RISC microprocessor is designed by Hewlett-Packard and used
@ -111,12 +113,6 @@ config PM
config STACKTRACE_SUPPORT
def_bool y
config NEED_DMA_MAP_STATE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config ISA_DMA_API
bool

View File

@ -87,29 +87,6 @@ struct pci_hba_data {
#define PCI_F_EXTEND 0UL
#endif /* !CONFIG_64BIT */
/*
* If the PCI device's view of memory is the same as the CPU's view of memory,
* PCI_DMA_BUS_IS_PHYS is true. The networking and block device layers use
* this boolean for bounce buffer decisions.
*/
#ifdef CONFIG_PA20
/* All PA-2.0 machines have an IOMMU. */
#define PCI_DMA_BUS_IS_PHYS 0
#define parisc_has_iommu() do { } while (0)
#else
#if defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA)
extern int parisc_bus_is_phys; /* in arch/parisc/kernel/setup.c */
#define PCI_DMA_BUS_IS_PHYS parisc_bus_is_phys
#define parisc_has_iommu() do { parisc_bus_is_phys = 0; } while (0)
#else
#define PCI_DMA_BUS_IS_PHYS 1
#define parisc_has_iommu() do { } while (0)
#endif
#endif /* !CONFIG_PA20 */
/*
** Most PCI devices (eg Tulip, NCR720) also export the same registers
** to both MMIO and I/O port space. Due to poor performance of I/O Port

View File

@ -58,11 +58,6 @@ struct proc_dir_entry * proc_runway_root __read_mostly = NULL;
struct proc_dir_entry * proc_gsc_root __read_mostly = NULL;
struct proc_dir_entry * proc_mckinley_root __read_mostly = NULL;
#if !defined(CONFIG_PA20) && (defined(CONFIG_IOMMU_CCIO) || defined(CONFIG_IOMMU_SBA))
int parisc_bus_is_phys __read_mostly = 1; /* Assume no IOMMU is present */
EXPORT_SYMBOL(parisc_bus_is_phys);
#endif
void __init setup_cmdline(char **cmdline_p)
{
extern unsigned int boot_args[];

View File

@ -13,12 +13,6 @@ config 64BIT
bool
default y if PPC64
config ARCH_PHYS_ADDR_T_64BIT
def_bool PPC64 || PHYS_64BIT
config ARCH_DMA_ADDR_T_64BIT
def_bool ARCH_PHYS_ADDR_T_64BIT
config MMU
bool
default y
@ -187,7 +181,6 @@ config PPC
select HAVE_CONTEXT_TRACKING if PPC64
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_API_DEBUG
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL
select HAVE_EBPF_JIT if PPC64
@ -223,9 +216,11 @@ config PPC
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
select HAVE_IRQ_TIME_ACCOUNTING
select IOMMU_HELPER if PPC64
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select MODULES_USE_ELF_RELA
select NEED_SG_DMA_LENGTH
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
@ -478,19 +473,6 @@ config MPROFILE_KERNEL
depends on PPC64 && CPU_LITTLE_ENDIAN
def_bool !DISABLE_MPROFILE_KERNEL
config IOMMU_HELPER
def_bool PPC64
config SWIOTLB
bool "SWIOTLB support"
default n
select IOMMU_HELPER
---help---
Support for IO bounce buffering for systems without an IOMMU.
This allows us to DMA to the full physical address space on
platforms where the size of a physical address is larger
than the bus address. Not all platforms support this.
config HOTPLUG_CPU
bool "Support for enabling/disabling CPUs"
depends on SMP && (PPC_PSERIES || \
@ -913,9 +895,6 @@ config ZONE_DMA
config NEED_DMA_MAP_STATE
def_bool (PPC64 || NOT_COHERENT_CACHE)
config NEED_SG_DMA_LENGTH
def_bool y
config GENERIC_ISA_DMA
bool
depends on ISA_DMA_API

View File

@ -92,24 +92,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
#define HAVE_PCI_LEGACY 1
#ifdef CONFIG_PPC64
/* The PCI address space does not equal the physical memory address
* space (we have an IOMMU). The IDE and SCSI device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (0)
#else /* 32-bit */
/* The PCI address space does equal the physical memory
* address space (no IOMMU). The IDE and SCSI device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
#endif /* CONFIG_PPC64 */
extern void pcibios_claim_one_bus(struct pci_bus *b);
extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);

View File

@ -309,8 +309,6 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
}
EXPORT_SYMBOL(dma_set_coherent_mask);
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (ppc_md.dma_set_mask)
@ -361,7 +359,6 @@ EXPORT_SYMBOL_GPL(dma_get_required_mask);
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
#ifdef CONFIG_PCI
dma_debug_add_bus(&pci_bus_type);
#endif

View File

@ -222,6 +222,7 @@ config PTE_64BIT
config PHYS_64BIT
bool 'Large physical address support' if E500 || PPC_86xx
depends on (44x || E500 || PPC_86xx) && !PPC_83xx && !PPC_82xx
select PHYS_ADDR_T_64BIT
---help---
This option enables kernel support for larger than 32-bit physical
addresses. This feature may not be available on all cores.

View File

@ -3,8 +3,16 @@
# see Documentation/kbuild/kconfig-language.txt.
#
config 64BIT
bool
config 32BIT
bool
config RISCV
def_bool y
# even on 32-bit, physical (and DMA) addresses are > 32-bits
select PHYS_ADDR_T_64BIT
select OF
select OF_EARLY_FLATTREE
select OF_IRQ
@ -22,7 +30,6 @@ config RISCV
select GENERIC_ATOMIC64 if !64BIT || !RISCV_ISA_A
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_GENERIC_DMA_COHERENT
select IRQ_DOMAIN
@ -39,16 +46,9 @@ config RISCV
config MMU
def_bool y
# even on 32-bit, physical (and DMA) addresses are > 32-bits
config ARCH_PHYS_ADDR_T_64BIT
def_bool y
config ZONE_DMA32
bool
default y
config ARCH_DMA_ADDR_T_64BIT
def_bool y
default y if 64BIT
config PAGE_OFFSET
hex
@ -101,7 +101,6 @@ choice
config ARCH_RV32I
bool "RV32I"
select CPU_SUPPORTS_32BIT_KERNEL
select 32BIT
select GENERIC_ASHLDI3
select GENERIC_ASHRDI3
@ -109,13 +108,13 @@ config ARCH_RV32I
config ARCH_RV64I
bool "RV64I"
select CPU_SUPPORTS_64BIT_KERNEL
select 64BIT
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select SWIOTLB
endchoice
@ -171,11 +170,6 @@ config NR_CPUS
depends on SMP
default "8"
config CPU_SUPPORTS_32BIT_KERNEL
bool
config CPU_SUPPORTS_64BIT_KERNEL
bool
choice
prompt "CPU Tuning"
default TUNE_GENERIC
@ -202,24 +196,6 @@ endmenu
menu "Kernel type"
choice
prompt "Kernel code model"
default 64BIT
config 32BIT
bool "32-bit kernel"
depends on CPU_SUPPORTS_32BIT_KERNEL
help
Select this option to build a 32-bit kernel.
config 64BIT
bool "64-bit kernel"
depends on CPU_SUPPORTS_64BIT_KERNEL
help
Select this option to build a 64-bit kernel.
endchoice
source "mm/Kconfig"
source "kernel/Kconfig.preempt"

View File

@ -0,0 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
#ifndef _RISCV_ASM_DMA_MAPPING_H
#define _RISCV_ASM_DMA_MAPPING_H 1
#ifdef CONFIG_SWIOTLB
#include <linux/swiotlb.h>
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &swiotlb_dma_ops;
}
#else
#include <asm-generic/dma-mapping.h>
#endif /* CONFIG_SWIOTLB */
#endif /* _RISCV_ASM_DMA_MAPPING_H */

View File

@ -26,9 +26,6 @@
/* RISC-V shim does not initialize PCI bus */
#define pcibios_assign_all_busses() 1
/* We do not have an IOMMU */
#define PCI_DMA_BUS_IS_PHYS 1
extern int isa_dma_bridge_buggy;
#ifdef CONFIG_PCI

View File

@ -29,6 +29,7 @@
#include <linux/of_fdt.h>
#include <linux/of_platform.h>
#include <linux/sched/task.h>
#include <linux/swiotlb.h>
#include <asm/setup.h>
#include <asm/sections.h>
@ -206,6 +207,7 @@ void __init setup_arch(char **cmdline_p)
setup_bootmem();
paging_init();
unflatten_device_tree();
swiotlb_init(1);
#ifdef CONFIG_SMP
setup_smp();

View File

@ -35,9 +35,6 @@ config GENERIC_BUG
config GENERIC_BUG_RELATIVE_POINTERS
def_bool y
config ARCH_DMA_ADDR_T_64BIT
def_bool y
config GENERIC_LOCKBREAK
def_bool y if SMP && PREEMPT
@ -133,7 +130,6 @@ config S390
select HAVE_CMPXCHG_LOCAL
select HAVE_COPY_THREAD_TLS
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select DMA_DIRECT_OPS
select HAVE_DYNAMIC_FTRACE
@ -709,7 +705,11 @@ config QDIO
menuconfig PCI
bool "PCI support"
select PCI_MSI
select IOMMU_HELPER
select IOMMU_SUPPORT
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
help
Enable PCI support.
@ -733,15 +733,6 @@ config PCI_DOMAINS
config HAS_IOMEM
def_bool PCI
config IOMMU_HELPER
def_bool PCI
config NEED_SG_DMA_LENGTH
def_bool PCI
config NEED_DMA_MAP_STATE
def_bool PCI
config CHSC_SCH
def_tristate m
prompt "Support for CHSC subchannels"

View File

@ -2,8 +2,6 @@
#ifndef __ASM_S390_PCI_H
#define __ASM_S390_PCI_H
/* must be set before including asm-generic/pci.h */
#define PCI_DMA_BUS_IS_PHYS (0)
/* must be set before including pci_clp.h */
#define PCI_BAR_COUNT 6

View File

@ -668,15 +668,6 @@ void zpci_dma_exit(void)
kmem_cache_destroy(dma_region_table_cache);
}
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init dma_debug_do_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_debug_do_init);
const struct dma_map_ops s390_pci_dma_ops = {
.alloc = s390_dma_alloc,
.free = s390_dma_free,
@@ -685,8 +676,6 @@ const struct dma_map_ops s390_pci_dma_ops = {
.map_page = s390_dma_map_pages,
.unmap_page = s390_dma_unmap_pages,
.mapping_error = s390_mapping_error,
/* if we support direct DMA this must be conditional */
.is_phys = 0,
/* dma_supported is unconditionally true without a callback */
};
EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
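The "dma_supported is unconditionally true without a callback" remark relies on the generic dispatch in the core header, which in this era reads approximately as follows (simplified from include/linux/dma-mapping.h; not part of this diff):

    static inline int dma_supported(struct device *dev, u64 mask)
    {
            const struct dma_map_ops *ops = get_dma_ops(dev);

            if (!ops)
                    return 0;
            if (!ops->dma_supported)
                    return 1;       /* no callback: every mask is accepted */
            return ops->dma_supported(dev, mask);
    }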

@@ -14,7 +14,6 @@ config SUPERH
select HAVE_OPROFILE
select HAVE_GENERIC_DMA_COHERENT
select HAVE_ARCH_TRACEHOOK
select HAVE_DMA_API_DEBUG
select HAVE_PERF_EVENTS
select HAVE_DEBUG_BUGVERBOSE
select ARCH_HAVE_CUSTOM_GPIO_H
@@ -51,6 +50,9 @@ config SUPERH
select HAVE_ARCH_AUDITSYSCALL
select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_NMI
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
@@ -161,12 +163,6 @@ config DMA_COHERENT
config DMA_NONCOHERENT
def_bool !DMA_COHERENT
config NEED_DMA_MAP_STATE
def_bool DMA_NONCOHERENT
config NEED_SG_DMA_LENGTH
def_bool y
config PGTABLE_LEVELS
default 3 if X2TLB
default 2

@@ -71,12 +71,6 @@ extern unsigned long PCIBIOS_MIN_IO, PCIBIOS_MIN_MEM;
* SuperH has everything mapped statically like x86.
*/
/* The PCI address space does equal the physical memory
* address space. The networking and block device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (dma_ops->is_phys)
#ifdef CONFIG_PCI
/*
* None of the SH PCI controllers support MWI, it is always treated as a

@@ -78,7 +78,6 @@ const struct dma_map_ops nommu_dma_ops = {
.sync_single_for_device = nommu_sync_single_for_device,
.sync_sg_for_device = nommu_sync_sg_for_device,
#endif
.is_phys = 1,
};
void __init no_iommu_init(void)

@@ -20,18 +20,9 @@
#include <asm/cacheflush.h>
#include <asm/addrspace.h>
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
const struct dma_map_ops *dma_ops;
EXPORT_SYMBOL(dma_ops);
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);
void *dma_generic_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp,
unsigned long attrs)

@@ -25,7 +25,6 @@ config SPARC
select RTC_CLASS
select RTC_DRV_M48T59
select RTC_SYSTOHC
select HAVE_DMA_API_DEBUG
select HAVE_ARCH_JUMP_LABEL if SPARC64
select GENERIC_IRQ_SHOW
select ARCH_WANT_IPC_PARSE_VERSION
@@ -44,6 +43,8 @@ config SPARC
select ARCH_HAS_SG_CHAIN
select CPU_NO_EFFICIENT_FFS
select LOCKDEP_SMALL if LOCKDEP
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
config SPARC32
def_bool !64BIT
@@ -67,6 +68,7 @@ config SPARC64
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_CONTEXT_TRACKING
select HAVE_DEBUG_KMEMLEAK
select IOMMU_HELPER
select SPARSE_IRQ
select RTC_DRV_CMOS
select RTC_DRV_BQ4802
@@ -102,14 +104,6 @@ config ARCH_ATU
bool
default y if SPARC64
config ARCH_DMA_ADDR_T_64BIT
bool
default y if ARCH_ATU
config IOMMU_HELPER
bool
default y if SPARC64
config STACKTRACE_SUPPORT
bool
default y if SPARC64
@@ -146,12 +140,6 @@ config ZONE_DMA
bool
default y if SPARC32
config NEED_DMA_MAP_STATE
def_bool y
config NEED_SG_DMA_LENGTH
def_bool y
config GENERIC_ISA_DMA
bool
default y if SPARC32

@@ -17,7 +17,7 @@
#define IOPTE_WRITE 0x0000000000000002UL
#define IOMMU_NUM_CTXS 4096
#include <linux/iommu-common.h>
#include <asm/iommu-common.h>
struct iommu_arena {
unsigned long *map;

@@ -17,10 +17,6 @@
#define PCI_IRQ_NONE 0xffffffff
/* Dynamic DMA mapping stuff.
*/
#define PCI_DMA_BUS_IS_PHYS (0)
#endif /* __KERNEL__ */
#ifndef CONFIG_LEON_PCI

@@ -17,12 +17,6 @@
#define PCI_IRQ_NONE 0xffffffff
/* The PCI address space does not equal the physical memory
* address space. The networking and block device layers use
* this boolean for bounce buffer decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (0)
/* PCI IOMMU mapping bypass support. */
/* PCI 64-bit addressing works for all slots on all controller

@@ -59,7 +59,7 @@ obj-$(CONFIG_SPARC32) += leon_pmc.o
obj-$(CONFIG_SPARC64) += reboot.o
obj-$(CONFIG_SPARC64) += sysfs.o
obj-$(CONFIG_SPARC64) += iommu.o
obj-$(CONFIG_SPARC64) += iommu.o iommu-common.o
obj-$(CONFIG_SPARC64) += central.o
obj-$(CONFIG_SPARC64) += starfire.o
obj-$(CONFIG_SPARC64) += power.o
@@ -74,8 +74,6 @@ obj-$(CONFIG_SPARC64) += pcr.o
obj-$(CONFIG_SPARC64) += nmi.o
obj-$(CONFIG_SPARC64_SMP) += cpumap.o
obj-y += dma.o
obj-$(CONFIG_PCIC_PCI) += pcic.o
obj-$(CONFIG_LEON_PCI) += leon_pci.o
obj-$(CONFIG_SPARC_GRPCI2)+= leon_pci_grpci2.o

@@ -1,13 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/dma-mapping.h>
#include <linux/dma-debug.h>
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 15)
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);

@@ -8,9 +8,9 @@
#include <linux/bitmap.h>
#include <linux/bug.h>
#include <linux/iommu-helper.h>
#include <linux/iommu-common.h>
#include <linux/dma-mapping.h>
#include <linux/hash.h>
#include <asm/iommu-common.h>
static unsigned long iommu_large_alloc = 15;
@@ -93,7 +93,6 @@ void iommu_tbl_pool_init(struct iommu_map_table *iommu,
p->hint = p->start;
p->end = num_entries;
}
EXPORT_SYMBOL(iommu_tbl_pool_init);
unsigned long iommu_tbl_range_alloc(struct device *dev,
struct iommu_map_table *iommu,
@@ -224,7 +223,6 @@ bail:
return n;
}
EXPORT_SYMBOL(iommu_tbl_range_alloc);
static struct iommu_pool *get_pool(struct iommu_map_table *tbl,
unsigned long entry)
@@ -264,4 +262,3 @@ void iommu_tbl_range_free(struct iommu_map_table *iommu, u64 dma_addr,
bitmap_clear(iommu->map, entry, npages);
spin_unlock_irqrestore(&(pool->lock), flags);
}
EXPORT_SYMBOL(iommu_tbl_range_free);

@@ -14,7 +14,7 @@
#include <linux/errno.h>
#include <linux/iommu-helper.h>
#include <linux/bitmap.h>
#include <linux/iommu-common.h>
#include <asm/iommu-common.h>
#ifdef CONFIG_PCI
#include <linux/pci.h>

@@ -16,7 +16,7 @@
#include <linux/list.h>
#include <linux/init.h>
#include <linux/bitmap.h>
#include <linux/iommu-common.h>
#include <asm/iommu-common.h>
#include <asm/hypervisor.h>
#include <asm/iommu.h>

@@ -16,7 +16,7 @@
#include <linux/export.h>
#include <linux/log2.h>
#include <linux/of_device.h>
#include <linux/iommu-common.h>
#include <asm/iommu-common.h>
#include <asm/iommu.h>
#include <asm/irq.h>

@@ -19,6 +19,8 @@ config UNICORE32
select ARCH_WANT_FRAME_POINTERS
select GENERIC_IOMAP
select MODULES_USE_ELF_REL
select NEED_DMA_MAP_STATE
select SWIOTLB
help
UniCore-32 is 32-bit Instruction Set Architecture,
including a series of low-power-consumption RISC chip
@@ -61,9 +63,6 @@ config ARCH_MAY_HAVE_PC_FDC
config ZONE_DMA
def_bool y
config NEED_DMA_MAP_STATE
def_bool y
source "init/Kconfig"
source "kernel/Kconfig.freezer"

@@ -39,14 +39,3 @@ config CPU_TLB_SINGLE_ENTRY_DISABLE
default y
help
Say Y here to disable the TLB single entry operations.
config SWIOTLB
def_bool y
select DMA_DIRECT_OPS
config IOMMU_HELPER
def_bool SWIOTLB
config NEED_SG_DMA_LENGTH
def_bool SWIOTLB

@@ -28,6 +28,8 @@ config X86_64
select ARCH_USE_CMPXCHG_LOCKREF
select HAVE_ARCH_SOFT_DIRTY
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select SWIOTLB
select X86_DEV_DMA_OPS
select ARCH_HAS_SYSCALL_WRAPPER
@@ -134,7 +136,6 @@ config X86
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
@@ -184,6 +185,7 @@ config X86
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_USER_RETURN_NOTIFIER
select IRQ_FORCED_THREADING
select NEED_SG_DMA_LENGTH
select PCI_LOCKLESS_CONFIG
select PERF_EVENTS
select RTC_LIB
@@ -236,13 +238,6 @@ config ARCH_MMAP_RND_COMPAT_BITS_MAX
config SBUS
bool
config NEED_DMA_MAP_STATE
def_bool y
depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB
config NEED_SG_DMA_LENGTH
def_bool y
config GENERIC_ISA_DMA
def_bool y
depends on ISA_DMA_API
@@ -875,6 +870,7 @@ config DMI
config GART_IOMMU
bool "Old AMD GART IOMMU support"
select IOMMU_HELPER
select SWIOTLB
depends on X86_64 && PCI && AMD_NB
---help---
@@ -896,6 +892,7 @@ config GART_IOMMU
config CALGARY_IOMMU
bool "IBM Calgary IOMMU support"
select IOMMU_HELPER
select SWIOTLB
depends on X86_64 && PCI
---help---
@@ -923,20 +920,6 @@ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
Calgary anyway, pass 'iommu=calgary' on the kernel command line.
If unsure, say Y.
# need this always selected by IOMMU for the VIA workaround
config SWIOTLB
def_bool y if X86_64
---help---
Support for software bounce buffers used on x86-64 systems
which don't have a hardware IOMMU. Using this PCI devices
which can only access 32-bits of memory can be used on systems
with more than 3 GB of memory.
If unsure, say Y.
config IOMMU_HELPER
def_bool y
depends on CALGARY_IOMMU || GART_IOMMU || SWIOTLB || AMD_IOMMU
config MAXSMP
bool "Enable Maximum number of SMP Processors and NUMA Nodes"
depends on X86_64 && SMP && DEBUG_KERNEL
@@ -1458,6 +1441,7 @@ config HIGHMEM
config X86_PAE
bool "PAE (Physical Address Extension) Support"
depends on X86_32 && !HIGHMEM4G
select PHYS_ADDR_T_64BIT
select SWIOTLB
---help---
PAE is required for NX support, and furthermore enables
@@ -1485,14 +1469,6 @@ config X86_5LEVEL
Say N if unsure.
config ARCH_PHYS_ADDR_T_64BIT
def_bool y
depends on X86_64 || X86_PAE
config ARCH_DMA_ADDR_T_64BIT
def_bool y
depends on X86_64 || HIGHMEM64G
config X86_DIRECT_GBPAGES
def_bool y
depends on X86_64 && !DEBUG_PAGEALLOC

@@ -30,10 +30,7 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
}
int arch_dma_supported(struct device *dev, u64 mask);
#define arch_dma_supported arch_dma_supported
bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
bool arch_dma_alloc_attrs(struct device **dev);
#define arch_dma_alloc_attrs arch_dma_alloc_attrs
#endif
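The gfp argument can be dropped from arch_dma_alloc_attrs() because zone selection now happens in the common dma-direct allocator rather than in the x86 hook. Condensed from the lib/dma-direct.c of this series (a sketch, not this diff), the allocator derives the GFP zone from the coherent mask roughly like this:

    /* Condensed sketch of the zone selection in dma_direct_alloc() */
    if (dev->coherent_dma_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
            gfp |= GFP_DMA;
    if (dev->coherent_dma_mask <= DMA_BIT_MASK(32) && !(gfp & GFP_DMA))
            gfp |= GFP_DMA32;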

@@ -117,9 +117,6 @@ void native_restore_msi_irqs(struct pci_dev *dev);
#define native_setup_msi_irqs NULL
#define native_teardown_msi_irq NULL
#endif
#define PCI_DMA_BUS_IS_PHYS (dma_ops->is_phys)
#endif /* __KERNEL__ */
#ifdef CONFIG_X86_64

@@ -15,13 +15,11 @@
#include <asm/x86_init.h>
#include <asm/iommu_table.h>
static int forbid_dac __read_mostly;
static bool disable_dac_quirk __read_mostly;
const struct dma_map_ops *dma_ops = &dma_direct_ops;
EXPORT_SYMBOL(dma_ops);
static int iommu_sac_force __read_mostly;
#ifdef CONFIG_IOMMU_DEBUG
int panic_on_overflow __read_mostly = 1;
int force_iommu __read_mostly = 1;
@@ -55,9 +53,6 @@ struct device x86_dma_fallback_dev = {
};
EXPORT_SYMBOL(x86_dma_fallback_dev);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES 65536
void __init pci_iommu_alloc(void)
{
struct iommu_table_entry *p;
@@ -76,7 +71,7 @@ void __init pci_iommu_alloc(void)
}
}
bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp)
bool arch_dma_alloc_attrs(struct device **dev)
{
if (!*dev)
*dev = &x86_dma_fallback_dev;
@@ -125,13 +120,13 @@ static __init int iommu_setup(char *p)
if (!strncmp(p, "nomerge", 7))
iommu_merge = 0;
if (!strncmp(p, "forcesac", 8))
iommu_sac_force = 1;
pr_warn("forcesac option ignored.\n");
if (!strncmp(p, "allowdac", 8))
forbid_dac = 0;
pr_warn("allowdac option ignored.\n");
if (!strncmp(p, "nodac", 5))
forbid_dac = 1;
pr_warn("nodac option ignored.\n");
if (!strncmp(p, "usedac", 6)) {
forbid_dac = -1;
disable_dac_quirk = true;
return 1;
}
#ifdef CONFIG_SWIOTLB
@@ -156,40 +151,9 @@ static __init int iommu_setup(char *p)
}
early_param("iommu", iommu_setup);
int arch_dma_supported(struct device *dev, u64 mask)
{
#ifdef CONFIG_PCI
if (mask > 0xffffffff && forbid_dac > 0) {
dev_info(dev, "PCI: Disallowing DAC for device\n");
return 0;
}
#endif
/* Tell the device to use SAC when IOMMU force is on. This
allows the driver to use cheaper accesses in some cases.
Problem with this is that if we overflow the IOMMU area and
return DAC as fallback address the device may not handle it
correctly.
As a special case some controllers have a 39bit address
mode that is as efficient as 32bit (aic79xx). Don't force
SAC for these. Assume all masks <= 40 bits are of this
type. Normally this doesn't make any difference, but gives
more gentle handling of IOMMU overflow. */
if (iommu_sac_force && (mask >= DMA_BIT_MASK(40))) {
dev_info(dev, "Force SAC with mask %Lx\n", mask);
return 0;
}
return 1;
}
EXPORT_SYMBOL(arch_dma_supported);
static int __init pci_iommu_init(void)
{
struct iommu_table_entry *p;
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
#ifdef CONFIG_PCI
dma_debug_add_bus(&pci_bus_type);
@@ -209,11 +173,17 @@ rootfs_initcall(pci_iommu_init);
#ifdef CONFIG_PCI
/* Many VIA bridges seem to corrupt data for DAC. Disable it here */
static int via_no_dac_cb(struct pci_dev *pdev, void *data)
{
pdev->dev.dma_32bit_limit = true;
return 0;
}
static void via_no_dac(struct pci_dev *dev)
{
if (forbid_dac == 0) {
if (!disable_dac_quirk) {
dev_info(&dev->dev, "disabling DAC on VIA PCI bridge\n");
forbid_dac = 1;
pci_walk_bus(dev->subordinate, via_no_dac_cb, NULL);
}
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_VIA, PCI_ANY_ID,

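The quirk above only marks the devices behind a VIA bridge with dma_32bit_limit; enforcement lives on the dma-direct side of this series, which rejects wider masks for flagged devices, roughly (condensed sketch, not shown in this truncated diff):

    int dma_direct_supported(struct device *dev, u64 mask)
    {
            /* ... DMA zone checks elided ... */

            /*
             * Various bridges have broken support for > 32bit DMA even
             * if the device itself might support it.
             */
            if (dev->dma_32bit_limit && mask > DMA_BIT_MASK(32))
                    return 0;
            return 1;
    }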
@@ -19,7 +19,6 @@ config XTENSA
select HAVE_ARCH_KASAN if MMU
select HAVE_CC_STACKPROTECTOR
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_EXIT_THREAD
select HAVE_FUNCTION_TRACER

@@ -42,8 +42,6 @@ extern struct pci_controller* pcibios_alloc_controller(void);
* decisions.
*/
#define PCI_DMA_BUS_IS_PHYS (1)
/* Tell PCI code what kind of PCI resource mappings we support */
#define HAVE_PCI_MMAP 1
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1

@@ -261,12 +261,3 @@ const struct dma_map_ops xtensa_dma_map_ops = {
.mapping_error = xtensa_dma_mapping_error,
};
EXPORT_SYMBOL(xtensa_dma_map_ops);
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
static int __init xtensa_dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(xtensa_dma_init);

@@ -20,6 +20,7 @@
#include <linux/sizes.h>
#include <linux/limits.h>
#include <linux/clk/clk-conf.h>
#include <linux/platform_device.h>
#include <asm/irq.h>
@@ -193,14 +194,16 @@ static const struct dev_pm_ops amba_pm = {
/*
* Primecells are part of the Advanced Microcontroller Bus Architecture,
* so we call the bus "amba".
* DMA configuration for platform and AMBA bus is same. So here we reuse
* platform's DMA config routine.
*/
struct bus_type amba_bustype = {
.name = "amba",
.dev_groups = amba_dev_groups,
.match = amba_match,
.uevent = amba_uevent,
.dma_configure = platform_dma_configure,
.pm = &amba_pm,
.force_dma = true,
};
static int __init amba_init(void)

@@ -329,36 +329,13 @@ void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
#endif
/*
* Common configuration to enable DMA API use for a device
* enables DMA API use for a device
*/
#include <linux/pci.h>
int dma_configure(struct device *dev)
{
struct device *bridge = NULL, *dma_dev = dev;
enum dev_dma_attr attr;
int ret = 0;
if (dev_is_pci(dev)) {
bridge = pci_get_host_bridge_device(to_pci_dev(dev));
dma_dev = bridge;
if (IS_ENABLED(CONFIG_OF) && dma_dev->parent &&
dma_dev->parent->of_node)
dma_dev = dma_dev->parent;
}
if (dma_dev->of_node) {
ret = of_dma_configure(dev, dma_dev->of_node);
} else if (has_acpi_companion(dma_dev)) {
attr = acpi_get_dma_attr(to_acpi_device_node(dma_dev->fwnode));
if (attr != DEV_DMA_NOT_SUPPORTED)
ret = acpi_dma_configure(dev, attr);
}
if (bridge)
pci_put_host_bridge_device(bridge);
return ret;
if (dev->bus->dma_configure)
return dev->bus->dma_configure(dev);
return 0;
}
void dma_deconfigure(struct device *dev)

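With dma_configure() reduced to the bus callback above, a bus type opts in simply by filling the new method. A minimal hypothetical bus (the foo_* names are illustrative, assuming OF-described devices; this is a sketch, not part of the patch):

    #include <linux/device.h>
    #include <linux/of_device.h>

    static int foo_dma_configure(struct device *dev)
    {
            /* the explicit last argument replaces the old bus->force_dma */
            if (dev->of_node)
                    return of_dma_configure(dev, dev->of_node, true);
            return 0;
    }

    struct bus_type foo_bus_type = {
            .name           = "foo",
            .dma_configure  = foo_dma_configure,
    };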
@@ -1130,6 +1130,22 @@ int platform_pm_restore(struct device *dev)
#endif /* CONFIG_HIBERNATE_CALLBACKS */
int platform_dma_configure(struct device *dev)
{
enum dev_dma_attr attr;
int ret = 0;
if (dev->of_node) {
ret = of_dma_configure(dev, dev->of_node, true);
} else if (has_acpi_companion(dev)) {
attr = acpi_get_dma_attr(to_acpi_device_node(dev->fwnode));
if (attr != DEV_DMA_NOT_SUPPORTED)
ret = acpi_dma_configure(dev, attr);
}
return ret;
}
static const struct dev_pm_ops platform_dev_pm_ops = {
.runtime_suspend = pm_generic_runtime_suspend,
.runtime_resume = pm_generic_runtime_resume,
@@ -1141,8 +1157,8 @@ struct bus_type platform_bus_type = {
.dev_groups = platform_dev_groups,
.match = platform_match,
.uevent = platform_uevent,
.dma_configure = platform_dma_configure,
.pm = &platform_dev_pm_ops,
.force_dma = true,
};
EXPORT_SYMBOL_GPL(platform_bus_type);
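Note the explicit third argument in the platform_dma_configure() hunk above: the per-bus force_dma flag deleted from platform_bus_type is replaced by a boolean that each dma_configure implementation now passes directly:

    /* Signature after this series (declared in include/linux/of_device.h) */
    int of_dma_configure(struct device *dev, struct device_node *np,
                         bool force_dma);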

Some files were not shown because too many files have changed in this diff.