Commit graph

155 commits

Author SHA1 Message Date
Laura Abbott 36d0fd2198 arm: use genalloc for the atomic pool
ARM currently uses a bitmap for tracking atomic allocations.  genalloc
already handles this type of memory pool allocation so switch to using
that instead.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Riley <davidriley@chromium.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-09 22:25:52 -04:00
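A minimal sketch of the genalloc-backed pool the commit above describes (illustrative names and sizes, not the exact kernel code):

#include <linux/genalloc.h>
#include <linux/dma-mapping.h>

static struct gen_pool *atomic_pool;

/* Set up the pool over an already-remapped region (vaddr/phys/size). */
static int atomic_pool_setup(void *vaddr, phys_addr_t phys, size_t size)
{
	atomic_pool = gen_pool_create(PAGE_SHIFT, -1);	/* page-sized chunks, any NUMA node */
	if (!atomic_pool)
		return -ENOMEM;

	/* Hand the virtual<->physical region to genalloc to carve up. */
	if (gen_pool_add_virt(atomic_pool, (unsigned long)vaddr, phys, size, -1)) {
		gen_pool_destroy(atomic_pool);
		return -ENOMEM;
	}
	return 0;
}

/* Atomic allocation path: no hand-rolled bitmap, genalloc does the tracking. */
static void *atomic_pool_alloc(size_t size, dma_addr_t *dma_handle)
{
	unsigned long vaddr = gen_pool_alloc(atomic_pool, size);

	if (!vaddr)
		return NULL;
	*dma_handle = gen_pool_virt_to_phys(atomic_pool, vaddr);
	return (void *)vaddr;
}

static void atomic_pool_free(void *vaddr, size_t size)
{
	gen_pool_free(atomic_pool, (unsigned long)vaddr, size);
}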
Laura Abbott 513510ddba common: dma-mapping: introduce common remapping functions
For architectures without coherent DMA, memory for DMA may need to be
remapped with coherent attributes.  Factor out the remapping code from
arm and put it in a common location to reduce code duplication.

As part of this, the arm APIs are now migrated away from
ioremap_page_range to the common APIs, which use map_vm_area for remapping.
This should be an equivalent change, and using map_vm_area is more correct:
ioremap_page_range is intended to bring I/O addresses into the CPU address
space, not regular kernel-managed memory.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Riley <davidriley@chromium.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-09 22:25:52 -04:00
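A hedged sketch of the kind of remapping helper this commit factors out, using get_vm_area()/map_vm_area() with the signatures of kernels from that era; names, flags and error handling are simplified:

#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Remap an array of RAM pages with the given pgprot into vmalloc space,
 * mirroring the idea behind the new common remap helpers. */
static void *remap_pages(struct page **pages, size_t count,
			 unsigned long vm_flags, pgprot_t prot)
{
	struct vm_struct *area;

	area = get_vm_area(count << PAGE_SHIFT, vm_flags);
	if (!area)
		return NULL;

	/* map_vm_area(), unlike ioremap_page_range(), is meant for RAM pages. */
	if (map_vm_area(area, prot, pages)) {
		vunmap(area->addr);
		return NULL;
	}
	return area->addr;
}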
Joonsoo Kim a254129e86 CMA: generalize CMA reserved area management functionality
Currently, there are two users of the CMA functionality: the DMA
subsystem and KVM on powerpc.  They each have their own code to manage
the CMA reserved area, even though the two look really similar.  My guess
is that this is caused by differing needs in bitmap management: the KVM
side wants to maintain its bitmap at a granularity larger than one page,
and ends up using a bitmap where one bit represents 64 pages.

When implementing CMA-related patches, I had to change both places, which
was painful.  I want to change this situation and reduce future code
maintenance overhead with this patch.

This change also helps developers who want to use CMA in new feature
development, since they can use CMA easily without copying and pasting
the reserved-area management code.

Previous patches prepared the features needed to generalize CMA reserved
area management, and now it is time to do it.  This patch moves the core
functions to mm/cma.c and changes the DMA APIs to use them.

There is no functional change in the DMA APIs.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Gleb Natapov <gleb@kernel.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:16 -07:00
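The generalized interface that lands in mm/cma.c can be used roughly as sketched below (signatures as introduced by this series; the region size, alignment and the my_cma handle are illustrative):

#include <linux/cma.h>
#include <linux/sizes.h>

static struct cma *my_cma;	/* illustrative region handle */

/* Reserve a 64 MiB region early in boot; base 0 lets the allocator choose. */
static int __init my_cma_reserve(void)
{
	return cma_declare_contiguous(0, SZ_64M, 0, 0, 0, false, &my_cma);
}

/* Later: hand out physically contiguous pages from that region... */
static struct page *my_cma_get(int count)
{
	return cma_alloc(my_cma, count, 0);
}

/* ...and give them back. */
static void my_cma_put(struct page *pages, int count)
{
	cma_release(my_cma, pages, count);
}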
Russell King 6b076991dc ARM: DMA: ensure that old section mappings are flushed from the TLB
When setting up the CMA region, we must ensure that the old section
mappings are flushed from the TLB before replacing them with page
tables, otherwise we can suffer from mismatched aliases if the CPU
speculatively prefetches from these mappings at an inopportune time.

A mismatched alias can occur when the TLB contains a section mapping,
but a subsequent prefetch causes it to load a page table mapping,
resulting in the possibility of the TLB containing two matching
mappings for the same virtual address region.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-17 19:26:08 +01:00
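A simplified sketch of the ordering this fix enforces (the real code lives in the ARM CMA remap path; the function names here are illustrative):

#include <asm/pgtable.h>
#include <asm/tlbflush.h>

static void remap_cma_section(pmd_t *pmd, unsigned long start, unsigned long end)
{
	/* 1. Remove the old 1:1 section mapping from the page tables... */
	pmd_clear(pmd);

	/* 2. ...make sure no stale section entry survives in the TLB... */
	flush_tlb_kernel_range(start, end);

	/* 3. ...and only then install the replacement page-table mapping. */
	/* install_page_table_mapping(start, end);   placeholder */
}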
Linus Torvalds eb3d3ec567 Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm into next
Pull ARM updates from Russell King:

 - Major clean-up of the L2 cache support code.  The existing mess was
   becoming rather unmaintainable through all the additions that others
   have done over time.  This turns it into a much nicer structure, and
   implements a few performance improvements as well.

 - Clean up some of the CP15 control register tweaks for alignment
   support, moving some code and data into alignment.c

 - DMA properties for ARM, from Santosh and reviewed by DT people.  This
   adds DT properties to specify bus translations we can't discover
   automatically, and to indicate whether devices are coherent.

 - Hibernation support for ARM

 - Make ftrace work with read-only text in modules

 - add suspend support for PJ4B CPUs

 - rework interrupt masking for undefined instruction handling, which
   allows us to enable interrupts earlier in the handling of these
   exceptions.

 - support for big endian page tables

 - fix stacktrace support to exclude stacktrace functions from the
   trace, and add save_stack_trace_regs() implementation so that kprobes
   can record stack traces.

 - Add support for the Cortex-A17 CPU.

 - Remove last vestiges of ARM710 support.

 - Removal of ARM "meminfo" structure, finally converting us solely to
   memblock to handle the early memory initialisation.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (142 commits)
  ARM: ensure C page table setup code follows assembly code (part II)
  ARM: ensure C page table setup code follows assembly code
  ARM: consolidate last remaining open-coded alignment trap enable
  ARM: remove global cr_no_alignment
  ARM: remove CPU_CP15 conditional from alignment.c
  ARM: remove unused adjust_cr() function
  ARM: move "noalign" command line option to alignment.c
  ARM: provide common method to clear bits in CPU control register
  ARM: 8025/1: Get rid of meminfo
  ARM: 8060/1: mm: allow sub-architectures to override PCI I/O memory type
  ARM: 8066/1: correction for ARM patch 8031/2
  ARM: 8049/1: ftrace/add save_stack_trace_regs() implementation
  ARM: 8065/1: remove last use of CONFIG_CPU_ARM710
  ARM: 8062/1: Modify ldrt fixup handler to re-execute the userspace instruction
  ARM: 8047/1: rwsem: use asm-generic rwsem implementation
  ARM: l2c: trial at enabling some Cortex-A9 optimisations
  ARM: l2c: add warnings for stuff modifying aux_ctrl register values
  ARM: l2c: print a warning with L2C-310 caches if the cache size is modified
  ARM: l2c: remove old .set_debug method
  ARM: l2c: kill L2X0_AUX_CTRL_MASK before anyone else makes use of this
  ...
2014-06-05 15:57:04 -07:00
Russell King bd63ce27d9 Merge branch 'devel-stable' into for-next 2014-06-05 12:36:22 +01:00
Russell King 1fb333489f Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next 2014-06-05 12:35:52 +01:00
Russell King 6b74f61a47 DT support for 'dma-ranges' and 'dma-coherent' properties with ARM updates
- The 'dma-ranges' property helps take care of a few DMA-able system memory
  restrictions by use of dma_pfn_offset, which is maintained per device.
  Arch code then uses it for DMA address translations in such cases.  We
  update dma_pfn_offset accordingly during the DT device creation process.
- The 'dma-coherent' property is used to set up the arch's coherent dma_ops.

Merge tag 'dt-dma-properties-for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into devel-stable

2014-05-23 12:30:52 +01:00
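For reference, a simplified version of the per-device offset translation described above, modelled on the ARM dma-mapping helpers of that era:

/* The per-device offset parsed from 'dma-ranges' shifts the PFN before it
 * becomes a bus address (simplified from arch/arm/include/asm/dma-mapping.h). */
static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
{
	if (dev)
		pfn -= dev->dma_pfn_offset;
	return (dma_addr_t)__pfn_to_bus(pfn);
}

static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
{
	unsigned long pfn = __bus_to_pfn(addr);

	if (dev)
		pfn += dev->dma_pfn_offset;
	return pfn;
}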
Russell King deace4a6b4 ARM: dma-mapping: avoid calling dma_cache_maint_page() on dev=>cpu
Avoid calling dma_cache_maint_page() when unmapping a DMA_TO_DEVICE
buffer.  The L1 cache ops never do anything in this circumstance, nor
do they ever need to - all that matters for this case is that the data
written is visible to the device before DMA starts.  What happens during
the transfer (provided the buffer is not written to) is of no real
consequence.

We already do this optimisation for the L2 cache.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-05-22 16:33:14 +01:00
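A hedged sketch of the dev-to-cpu path after the change above (simplified from arch/arm/mm/dma-mapping.c; D-cache bookkeeping and corner cases omitted):

/* A DMA_TO_DEVICE buffer was only read by the device, so nothing needs to
 * be made visible to the CPU again on unmap. */
static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
		size_t size, enum dma_data_direction dir)
{
	phys_addr_t paddr = page_to_phys(page) + off;

	if (dir != DMA_TO_DEVICE) {
		outer_inv_range(paddr, paddr + size);
		dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
	}
	/* "mark pages clean" bookkeeping omitted in this sketch */
}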
Gioh Kim e464ef16c4 arm: dma-mapping: add checking cma area initialized
If CMA is turned on but its size is set to zero, the kernel should
behave as if CMA was not enabled at compile time.
Every DMA allocation should therefore check for the existence of the
CMA area before requesting memory from it.

Signed-off-by: Gioh Kim <gioh.kim@lge.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
[mszyprow: removed redundant empty line from the patch]
Signed-off-by: <m.szyprowski@samsung.com>
2014-05-22 08:09:31 +02:00
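A minimal sketch of the kind of guard this change introduces (the helper name can_use_cma() is illustrative):

#include <linux/device.h>
#include <linux/dma-contiguous.h>

/* With CMA compiled in but sized to zero there is no usable CMA area,
 * so allocation paths must fall back instead of assuming one exists. */
static bool can_use_cma(struct device *dev)
{
	return dev_get_cma_area(dev) != NULL;
}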
Ritesh Harjani 006f841db1 arm: dma-iommu: Clean up redundant variable
mapping->size can be derived from mapping->bits << PAGE_SHIFT,
which makes mapping->size redundant.

Clean this up.

Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-05-20 13:43:26 +02:00
Santosh Shilimkar 2161c2485d ARM: dma: use phys_addr_t in __dma_page_[cpu_to_dev/dev_to_cpu]
On a 32-bit ARM architecture with the LPAE extension, physical addresses
cannot fit into an unsigned long variable.

So fix it by using phys_addr_t instead of unsigned long.

Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
2014-05-07 09:21:45 -04:00
Ritesh Harjani 59f0f119e8 arm: dma-mapping: Fix mapping size value
Commit 68efd7d2fb ("arm: dma-mapping: remove order parameter from
arm_iommu_create_mapping()") causes a kernel panic
because it wrongly sets the value of mapping->size:

Unable to handle kernel NULL pointer dereference at virtual
address 000000a0
pgd = e7a84000
[000000a0] *pgd=00000000
...
PC is at bitmap_clear+0x48/0xd0
LR is at __iommu_remove_mapping+0x130/0x164

Fix it by correcting the mapping->size value.

Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-04-23 15:07:00 +02:00
Linus Torvalds 2d1eb87ae1 Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM changes from Russell King:

 - Perf updates from Will Deacon:
   - Support for Qualcomm Krait processors (run perf on your phone!)
   - Support for Cortex-A12 (run perf stat on your FPGA!)
   - Support for perf_sample_event_took, allowing us to automatically decrease
     the sample rate if we can't handle the PMU interrupts quickly enough
     (run perf record on your FPGA!).

 - Basic uprobes support from David Long:
     This patch series adds basic uprobes support to ARM. It is based on
     patches developed earlier by Rabin Vincent. That approach of adding
     hooks into the kprobes instruction parsing code was not well received.
     This approach separates the ARM instruction parsing code in kprobes out
     into a separate set of functions which can be used by both kprobes and
     uprobes. Both kprobes and uprobes then provide their own semantic action
     tables to process the results of the parsing.

 - ARMv7M (microcontroller) updates from Uwe Kleine-König

 - OMAP DMA updates (recently added Vinod's Ack even though they've been
   sitting in linux-next for a few months) to reduce the reliance of
   omap-dma on the code in arch/arm.

 - SA11x0 changes from Dmitry Eremin-Solenikov and Alexander Shiyan

 - Support for Cortex-A12 CPU

 - Align support for ARMv6 with ARMv7 so they can cooperate better in a
   single zImage.

 - Addition of first AT_HWCAP2 feature bits for ARMv8 crypto support.

 - Removal of IRQ_DISABLED from various ARM files

 - Improved efficiency of virt_to_page() for single zImage

 - Patch from Ulf Hansson to permit runtime PM callbacks to be available for
   AMBA devices for suspend/resume as well.

 - Finally kill asm/system.h on ARM.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (89 commits)
  dmaengine: omap-dma: more consolidation of CCR register setup
  dmaengine: omap-dma: move IRQ handling to omap-dma
  dmaengine: omap-dma: move register read/writes into omap-dma.c
  ARM: omap: dma: get rid of 'p' allocation and clean up
  ARM: omap: move dma channel allocation into plat-omap code
  ARM: omap: dma: get rid of errata global
  ARM: omap: clean up DMA register accesses
  ARM: omap: remove almost-const variables
  ARM: omap: remove references to disable_irq_lch
  dmaengine: omap-dma: cleanup errata 3.3 handling
  dmaengine: omap-dma: provide register read/write functions
  dmaengine: omap-dma: use cached CCR value when enabling DMA
  dmaengine: omap-dma: move barrier to omap_dma_start_desc()
  dmaengine: omap-dma: move clnk_ctrl setting to preparation functions
  dmaengine: omap-dma: improve efficiency loading C.SA/C.EI/C.FI registers
  dmaengine: omap-dma: consolidate clearing channel status register
  dmaengine: omap-dma: move CCR buffering disable errata out of the fast path
  dmaengine: omap-dma: provide register definitions
  dmaengine: omap-dma: consolidate setup of CCR
  dmaengine: omap-dma: consolidate setup of CSDP
  ...
2014-04-05 13:20:43 -07:00
Russell King bce5669be3 Merge branch 'devel-stable' into for-next 2014-04-04 00:33:49 +01:00
Marek Szyprowski 68efd7d2fb arm: dma-mapping: remove order parameter from arm_iommu_create_mapping()
The 'order' parameter of the IOMMU-aware dma-mapping implementation was
introduced mainly as a hack to reduce the size of the bitmap used for
tracking the IO virtual address space. Since it is now possible to
dynamically resize the bitmap, this hack is no longer needed and can be
removed without any impact on client devices. This makes the parameters
of arm_iommu_create_mapping() much easier to understand: the 'size'
parameter now means the maximum supported IO address space size.

The code will allocate (resize) the bitmap in chunks, ensuring that a
single chunk is not larger than a single memory page, to avoid unreliable
allocations larger than PAGE_SIZE in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-02-28 11:55:18 +01:00
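Hedged usage sketch of the simplified API after this change (base address, size and the helper name are illustrative):

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <asm/dma-iommu.h>

/* Create a 128 MiB IO address space at bus address 0x80000000 and attach
 * a device to it -- note there is no 'order' argument any more. */
static int attach_iommu_mapping(struct device *dev)
{
	struct dma_iommu_mapping *mapping;

	mapping = arm_iommu_create_mapping(&platform_bus_type, 0x80000000, SZ_128M);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	if (arm_iommu_attach_device(dev, mapping)) {
		arm_iommu_release_mapping(mapping);
		return -ENODEV;
	}
	return 0;
}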
Andreas Herrmann 4d852ef8c2 arm: dma-mapping: Add support to extend DMA IOMMU mappings
Instead of using just one bitmap to keep track of IO virtual addresses
(handed out for IOMMU use), introduce an array of bitmaps. This allows
us to extend existing mappings when running out of IOVA space in the
initial mapping, etc.

If there is not enough space in the mapping to service an IO virtual
address allocation request, __alloc_iova() tries to extend the mapping
-- by allocating another bitmap -- and makes another allocation
attempt using the freshly allocated bitmap.

This allows ARM IOMMU drivers to start with a decent initial size when
a dma_iommu_mapping is created and still avoid running out of IO
virtual addresses for the mapping.

Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[mszyprow: removed extensions parameter to arm_iommu_create_mapping()
 function, which will be modified in the next patch anyway, also some
 debug messages about extending bitmap]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-02-28 11:55:18 +01:00
Steven Capper 6ea41c8011 ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator
The Coherent DMA allocator allocates pages of high order and then splits
them up into smaller pages.

This splitting logic would run into problems if the allocator were
given compound pages. Thus the Coherent DMA allocator was originally
incompatible with compound pages and, by extension, huge
pages. A compile #error was put in place whenever huge pages were
enabled.

Compatibility with compound pages has since been introduced by the
following commit (which merely excludes __GFP_COMP pages from being
requested by the coherent DMA allocator):
  ea2e705 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations

When huge page support was introduced to ARM, the compile #error in
dma-mapping.c was replaced by a #warning when it should have been
removed instead.

This patch removes the compile #warning in dma-mapping.c when huge
pages are enabled.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-18 19:42:47 +00:00
Marek Szyprowski 10c8562f93 ARM: dma-mapping: fix GFP_ATOMIC macro usage
GFP_ATOMIC is not a single gfp flag, but a macro which expands to other
flags, with the __GFP_WAIT flag absent. To check whether the caller wanted to
perform an atomic allocation, the code must test for the presence of the
__GFP_WAIT flag. This patch fixes an issue introduced in v3.6-rc5.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: stable@vger.kernel.org
2014-02-11 09:40:05 +01:00
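A minimal sketch of the correct test (valid for kernels of this era, where __GFP_WAIT still exists; the helper name is illustrative):

#include <linux/gfp.h>

/* GFP_ATOMIC is defined by the *absence* of __GFP_WAIT, so comparing
 * against GFP_ATOMIC or masking with it is the wrong test. The real
 * question is "may this allocation sleep?": */
static bool is_atomic_allocation(gfp_t gfp)
{
	return !(gfp & __GFP_WAIT);
}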
Russell King 6f14d778c1 Merge branches 'amba', 'fixes', 'kees', 'misc' and 'unstable/sa11x0' into for-next 2014-01-21 21:26:33 +00:00
Russell King 71b55663c5 ARM: fix executability of CMA mappings
The CMA region was being marked executable:

0xdc04e000-0xdc050000           8K     RW x      MEM/CACHED/WBRA
0xdc060000-0xdc100000         640K     RW x      MEM/CACHED/WBRA
0xdc4f5000-0xdc500000          44K     RW x      MEM/CACHED/WBRA
0xdcce9000-0xe0000000       52316K     RW x      MEM/CACHED/WBRA

This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but
there are also a few other places in dma-mapping which should be
corrected to use the right constant.  Fix all these places:

0xdc04e000-0xdc050000           8K     RW NX     MEM/CACHED/WBRA
0xdc060000-0xdc100000         640K     RW NX     MEM/CACHED/WBRA
0xdc280000-0xdc300000         512K     RW NX     MEM/CACHED/WBRA
0xdc6fc000-0xe0000000       58384K     RW NX     MEM/CACHED/WBRA

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-11 09:53:22 +00:00
Russell King 9f28cde0bc ARM: another fix for the DMA mapping checks
Peter reports that OMAP audio broke with the recent fix for these
checks, caused by OMAP audio using a 64-bit DMA mask.  We should
allow 64-bit DMA masks even with 32-bit dma_addr_t if we can be sure
the amount of RAM we have won't allow the 32-bit dma_addr_t to
overflow.  Unfortunately, the checks to detect overflow were not
correct.

Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-09 23:24:26 +00:00
Russell King 11a5aa3256 ARM: dma-mapping: check DMA mask against available memory
Some buses have negative offsets, which causes the DMA mask checks to
falsely fail.  Fix this by using the actual amount of memory fitted in
the system.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-11-30 14:45:29 +00:00
Linus Torvalds f47671e2d8 Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm
Pull ARM updates from Russell King:
 "Included in this series are:

   1. BE8 (modern big endian) changes for ARM from Ben Dooks
   2. big.Little support from Nicolas Pitre and Dave Martin
   3. support for LPAE systems with all system memory above 4GB
   4. Perf updates from Will Deacon
   5. Additional prefetching and other performance improvements from Will.
   6. Neon-optimised AES implementation from Ard.
   7. A number of smaller fixes scattered around the place.

  There is a rather horrid merge conflict in tools/perf - I was never
  notified of the conflict because it originally occurred between Will's
  tree and other stuff.  Consequently I have a resolution which Will
  forwarded me, which I'll forward on immediately after sending this
  mail.

  The other notable thing is I'm expecting some build breakage in the
  crypto stuff on ARM only with Ard's AES patches.  These were merged
  into a stable git branch which others had already pulled, so there's
  little I can do about this.  The problem is caused because these
  patches have a dependency on some code in the crypto git tree - I
  tried requesting a branch I can pull to resolve these, and all I got
  each time from the crypto people was "we'll revert our patches then"
  which would only make things worse since I still don't have the
  dependent patches.  I've no idea what's going on there or how to
  resolve that, and since I can't split these patches from the rest of
  this pull request, I'm rather stuck with pushing this as-is or
  reverting Ard's patches.

  Since it should "come out in the wash" I've left them in - the only
  build problems they seem to cause at the moment are with randconfigs,
  and since it's a new feature anyway.  However, if by -rc1 the
  dependencies aren't in, I think it'd be best to revert Ard's patches"

I resolved the perf conflict roughly as per the patch sent by Russell,
but there may be some differences.  Any errors are likely mine.  Let's
see how the crypto issues work out..

* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
  ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
  ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
  ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
  ARM: 7871/1: amba: Extend number of IRQS
  ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
  ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
  ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
  ARM: 7878/1: nommu: Implement dummy early_paging_init()
  ARM: 7876/1: clear Thumb-2 IT state on exception handling
  ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
  ARM: footbridge: fix build warnings for netwinder
  ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
  ARM: fix misplaced arch_virt_to_idmap()
  ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
  ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
  ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
  ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
  ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
  ARM: 7862/1: pcpu: replace __get_cpu_var_uses
  ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
  ...
2013-11-14 08:51:29 +09:00
Linus Torvalds 8ceafbfa91 Merge branch 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm
Pull DMA mask updates from Russell King:
 "This series cleans up the handling of DMA masks in a lot of drivers,
  fixing some bugs as we go.

  Some of the more serious errors include:
   - drivers which only set their coherent DMA mask if the attempt to
     set the streaming mask fails.
   - drivers which test for a NULL dma mask pointer, and then set the
     dma mask pointer to a location in their module .data section -
     which will cause problems if the module is reloaded.

  To counter these, I have introduced two helper functions:
   - dma_set_mask_and_coherent() takes care of setting both the
     streaming and coherent masks at the same time, with the correct
     error handling as specified by the API.
   - dma_coerce_mask_and_coherent() which resolves the problem of
     drivers forcefully setting DMA masks.  This is more a marker for
     future work to further clean these locations up - the code which
     creates the devices really should be initialising these, but to fix
     that in one go along with this change could potentially be very
     disruptive.

  The last thing this series does is prise away some of Linux's addiction
  to "DMA addresses are physical addresses and RAM always starts at
  zero".  We have ARM LPAE systems where all system memory is above 4GB
  physical, hence having DMA masks interpreted by (eg) the block layers
  as describing physical addresses in the range 0..DMAMASK fails on
  these platforms.  Santosh Shilimkar addresses this in this series; the
  patches were copied to the appropriate people multiple times but were
  ignored.

  Fixing this also gets rid of some ARM weirdness in the setup of the
  max*pfn variables, and brings ARM into line with every other Linux
  architecture as far as those go"

* 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm: (52 commits)
  ARM: 7805/1: mm: change max*pfn to include the physical offset of memory
  ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
  ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations
  ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function
  ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()
  ARM: DMA-API: better handing of DMA masks for coherent allocations
  ARM: 7857/1: dma: imx-sdma: setup dma mask
  DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks
  DMA-API: dcdbas: update DMA mask handing
  DMA-API: dma: edma.c: no need to explicitly initialize DMA masks
  DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks
  DMA-API: crypto: remove last references to 'static struct device *dev'
  DMA-API: crypto: fix ixp4xx crypto platform device support
  DMA-API: others: use dma_set_coherent_mask()
  DMA-API: staging: use dma_set_coherent_mask()
  DMA-API: usb: use new dma_coerce_mask_and_coherent()
  DMA-API: usb: use dma_set_coherent_mask()
  DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()
  DMA-API: net: octeon: use dma_coerce_mask_and_coherent()
  DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()
  ...
2013-11-14 07:55:21 +09:00
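Typical driver-side usage of the first helper described above, as a hedged sketch (the function name is illustrative):

#include <linux/dma-mapping.h>

/* Set the streaming and coherent masks together, with the error handling
 * the DMA API requires. */
static int setup_dma_masks(struct device *dev)
{
	int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

	if (ret)
		dev_err(dev, "no suitable DMA available\n");
	return ret;
}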
Russell King 42cbe8271c Merge branches 'fixes', 'mmci' and 'sa11x0' into for-next 2013-11-12 10:59:08 +00:00
Russell King 4dcfa60071 ARM: DMA-API: better handing of DMA masks for coherent allocations
We need to start treating DMA masks as something which is specific to
the bus that the device resides on, otherwise we're going to hit all
sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit
systems, where memory is offset from PFN 0.

In order to start doing this, we convert the DMA mask to a PFN using
the device specific dma_to_pfn() macro.  This is the reverse of the
pfn_to_dma() macro which is used to get the DMA address for the device.

This gives us a PFN mask, which we can then check against the PFN
limit of the DMA zone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31 14:49:21 +00:00
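A hedged, simplified sketch of the check described above (max_pfn and ARM's arm_dma_pfn_limit are existing kernel globals; off-by-one adjustments from later fixes are omitted):

/* The device's mask is converted to a PFN with the bus-specific
 * dma_to_pfn() and must reach the highest DMA-able PFN. */
static int dma_mask_supported(struct device *dev, u64 mask)
{
	unsigned long limit_pfn = min(max_pfn, arm_dma_pfn_limit);

	return dma_to_pfn(dev, mask) >= limit_pfn;
}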
Russell King 0ea1ec713f ARM: dma-mapping: don't allow DMA mappings to be marked executable
DMA mapping permissions were being derived from pgprot_kernel directly
without using PAGE_KERNEL.  This causes them to be marked with executable
permission, which is not what we want.  Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-24 11:17:27 +01:00
Andreas Herrmann c9b24996d5 ARM: dma-mapping: Always pass proper prot flags to iommu_map()
... otherwise it is impossible for the low level iommu driver to
figure out which pte flags should be used.

In __map_sg_chunk we can derive the flags from dma_data_direction.

In __iommu_create_mapping we should treat the memory like
DMA_BIDIRECTIONAL and pass both IOMMU_READ and IOMMU_WRITE to
iommu_map.
__iommu_create_mapping is used during dma_alloc_coherent (via
arm_iommu_alloc_attrs).  AFAIK dma_alloc_coherent is responsible for
allocation _and_ mapping.  I think this implies that access to the
mapped pages should be allowed.

Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-10-02 13:23:11 +02:00
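A hedged sketch of the direction-to-protection translation this commit relies on (the function name here is illustrative; the ARM code has an equivalent internal helper):

#include <linux/iommu.h>
#include <linux/dma-direction.h>

/* Derive iommu_map() prot flags from the DMA direction; coherent
 * allocations are treated as bidirectional. */
static int dir_to_iommu_prot(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return IOMMU_READ;
	case DMA_FROM_DEVICE:
		return IOMMU_WRITE;
	default:
		return 0;
	}
}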
Linus Torvalds 2e03285224 Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm
Pull ARM updates from Russell King:
 "This set includes adding support for Neon acceleration of RAID6 XOR
  code from Ard Biesheuvel, cache flushing and barrier updates from Will
  Deacon, and a cleanup to the ARM debug code which reduces the amount
  of code by about 500 lines.

  A few other cleanups, such as constifying the machine descriptors
  which already shouldn't be written to, cleaning up the printing of the
  L2 cache size"

* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (55 commits)
  ARM: 7826/1: debug: support debug ll on hisilicon soc
  ARM: 7830/1: delay: don't bother reporting bogomips in /proc/cpuinfo
  ARM: 7829/1: Add ".text.unlikely" and ".text.hot" to arm unwind tables
  ARM: 7828/1: ARMv7-M: implement restart routine common to all v7-M machines
  ARM: 7827/1: highbank: fix debug uart virtual address for LPAE
  ARM: 7823/1: errata: workaround Cortex-A15 erratum 773022
  ARM: 7806/1: allow DEBUG_UNCOMPRESS for Tegra
  ARM: 7793/1: debug: use generic option for ep93xx PL10x debug port
  ARM: debug: move SPEAr debug to generic PL01x code
  ARM: debug: move davinci debug to generic 8250 code
  ARM: debug: move keystone debug to generic 8250 code
  ARM: debug: remove DEBUG_ROCKCHIP_UART
  ARM: debug: provide generic option choices for 8250 and PL01x ports
  ARM: debug: move PL01X debug include into arch/arm/include/debug/
  ARM: debug: provide PL01x debug uart phys/virt address configuration options
  ARM: debug: add support for word accesses to debug/8250.S
  ARM: debug: move 8250 debug include into arch/arm/include/debug/
  ARM: debug: provide 8250 debug uart phys/virt address configuration options
  ARM: debug: provide 8250 debug uart register shift configuration option
  ARM: debug: provide 8250 debug uart flow control configuration option
  ...
2013-09-05 18:07:32 -07:00
Alexander Graf bf550fc93d Merge remote-tracking branch 'origin/next' into kvm-ppc-next
Conflicts:
	mm/Kconfig

CMA DMA split and ZSWAP introduction were conflicting, fix up manually.
2013-08-29 00:41:59 +02:00
Will Deacon 792a843a9f ARM: mm: remove redundant dsb() prior to range TLB invalidation
The kernel TLB range invalidation functions already contain dsb
instructions before and after the maintenance, so there is no need to
introduce additional barriers.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-08-12 12:25:44 +01:00
Alexander Graf 20f7462aac Merge remote-tracking branch 'cmadma/for-v3.12-cma-dma' into kvm-ppc-next
Add prerequisite patch for CMA RMA allocation patches
2013-07-08 16:16:56 +02:00
Linus Torvalds 8b70a90cab Merge branch 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping
Pull ARM DMA mapping updates from Marek Szyprowski:
 "This contains important bugfixes and an update for IOMMU integration
  support for ARM architecture"

* 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
  ARM: dma: Drop __GFP_COMP for iommu dma memory allocations
  ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
  ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach
  ARM: dma-mapping: convert DMA direction into IOMMU protection attributes
  ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool
2013-07-06 12:41:54 -07:00
Aneesh Kumar K.V f825c736e7 mm/cma: Move dma contiguous changes into a seperate config
We want to use CMA for allocating the hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without requiring DMA contiguous.

Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[removed defconfig changes]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-07-02 10:08:22 +02:00
Russell King 3c0c01ab74 Merge branch 'devel-stable' into for-next
Conflicts:
	arch/arm/Makefile
	arch/arm/include/asm/glue-proc.h
2013-06-29 11:44:43 +01:00
Richard Zhao 5b91a98c61 ARM: dma: Drop __GFP_COMP for iommu dma memory allocations
__iommu_alloc_buffer wants to split pages after allocation in order to
reduce the memory footprint. This does not work well with __GFP_COMP
pages, so drop this flag before allocation.

One failure example is snd_malloc_dev_pages calling dma_alloc_coherent with
__GFP_COMP.

Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-06-28 15:14:29 +02:00
Ming Lei 63c181922f ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
It is common for one sg to include many pages, so mark all these
pages as clean to avoid unnecessary flushing on them in
set_pte_at() or update_mmu_cache().

The patch might improve the loading performance of application code a bit.

With the test code below, which reads a file (~1 GByte) from a USB mass
storage disk into a buffer created with mmap(PROT_READ | PROT_EXEC) on a
Pandaboard, an average ~1% improvement can be observed with the patch over
10 test runs.

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>

unsigned int sum = 0;

static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
{
	return (tv2->tv_sec - tv1->tv_sec) * 1000000 +
		(tv2->tv_usec - tv1->tv_usec);
}

int main(int argc, char *argv[])
{
	char *mbuffer;
	int fd;
	int i;
	unsigned long page_size, size;
	struct stat stat;
	struct timeval t1, t2;

	page_size = getpagesize();
	fd = open(argv[1], O_RDONLY);
	assert(fd >= 0);

	fstat(fd, &stat);
	size = stat.st_size;
	printf("%s: file %s, file size %lu, page size %lu\n", argv[0],
		argv[1], size, page_size);

	gettimeofday(&t1, NULL);
	mbuffer = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
	/* touch one byte per page so every page is faulted in */
	for (i = 0 ; i < size ; i += page_size)
		sum += mbuffer[i];
	munmap(mbuffer, size);
	gettimeofday(&t2, NULL);
	printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));

	close(fd);
	return 0;
}

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-06-28 15:14:28 +02:00
Will Deacon 9e4b259d4f ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach
The current code only clobbers a local variable, so the device is left
with a stale mapping pointer.

Cc: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-06-28 15:14:27 +02:00
Will Deacon 13987d68bc ARM: dma-mapping: convert DMA direction into IOMMU protection attributes
IOMMU mappings take a prot parameter, identifying the protection bits
to enforce on the newly created mapping (READ or WRITE). The ARM
dma-mapping framework currently just passes 0 as the prot argument,
resulting in faulting mappings.

This patch infers the protection attributes based on the direction of
the DMA transfer.

Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-06-28 15:14:27 +02:00
YoungJun Cho 836bfa0d29 ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool
In __iommu_get_pages(), the cpu_addr is checked for whether it is in the
atomic_pool range or not. So if the cpu_addr is already known to be in the
atomic_pool range, there is no need to check it twice.

Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-06-28 15:14:27 +02:00
Catalin Marinas 1355e2a6eb ARM: mm: HugeTLB support for LPAE systems.
This patch adds support for hugetlbfs based on the x86 implementation.
It allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt
for usage). The 64K pages configuration is not supported (section size
is 512MB in this case).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants replace numbers in places.
Split up into multiple files, to simplify future non-LPAE support,
removed huge_pmd_share code, as this is very rarely executed,
Added PROT_NONE support].
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-06-04 16:52:37 +01:00
Ming Lei b2a234ed64 ARM: 7730/1: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
It is common for one sg to include many pages, so mark all these
pages as clean to avoid unnecessary flushing on them in
set_pte_at() or update_mmu_cache().

The patch might improve the loading performance of application code a bit.

With the test code below, which reads a file (~1 GByte) from a USB mass
storage disk into a buffer created with mmap(PROT_READ | PROT_EXEC) on a
Pandaboard, an average ~1% improvement can be observed with the patch over
10 test runs.

unsigned int sum = 0;
static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
{
	return (tv2->tv_sec - tv1->tv_sec) * 1000000 + (tv2->tv_usec - tv1->tv_usec);
}
int main(int argc, char *argv[])
{
	char *mbuffer;
	int fd;
	int i;
	unsigned long page_size, size;
	struct stat stat;
	struct timeval t1, t2;

	page_size = getpagesize();
	fd = open(argv[1], O_RDONLY);
	assert(fd >= 0);

	fstat(fd, &stat);
	size = stat.st_size;
	printf("%s: file %s, file size %lu, page size %lu\n", argv[0],
	        argv[1], size, page_size);

	gettimeofday(&t1, NULL);
	mbuffer = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
	for (i = 0 ; i < size ; i += page_size)
	        sum += mbuffer[i];
	munmap(mbuffer, size);
	gettimeofday(&t2, NULL);
	printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));

	close(fd);
}

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-05-23 00:09:45 +01:00
Russell King 946342d03e Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus 2013-05-02 21:30:36 +01:00
Joonsoo Kim dd0f67f474 ARM: 7693/1: mm: clean-up in order to reduce to call kmap_high_get()
In kmap_atomic(), kmap_high_get() is invoked to check for an already
mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
we explicitly call kmap_high_get() before kmap_atomic()
when cache_is_vipt(), so kmap_high_get() can be invoked twice.
This is a useless operation, so remove one call.

v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to
be self-documenting

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-17 16:55:01 +01:00
Marek Szyprowski 9d1400cf79 ARM: DMA-mapping: add missing GFP_DMA flag for atomic buffer allocation
The atomic pool should always be allocated from the DMA zone if such a
zone is available in the system, to avoid issues caused by the limited
DMA mask of any of the devices used for making an atomic allocation.

Reported-by: Krzysztof Halasa <khc@pm.waw.pl>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Stable <stable@vger.kernel.org>	[v3.6+]
2013-03-14 09:25:19 +01:00
Marek Szyprowski d589829107 ARM: DMA-mapping: fix memory leak in IOMMU dma-mapping implementation
This patch removes the page_address() usage in the IOMMU-aware dma-mapping
implementation and replaces it with direct use of the CPU virtual address
provided by the caller. page_address() returned an incorrect address for
pages remapped in the atomic pool, which caused a memory leak.

Reported-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2013-02-25 15:30:44 +01:00
Seung-Woo Kim 60460abffc ARM: dma-mapping: Add maximum alignment order for dma iommu buffers
The alignment order for a DMA IOMMU buffer is set by the buffer size. For
a large buffer, this wastes IOMMU address space, so a configurable
parameter limiting the maximum alignment order can reduce the waste.

Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin.park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25 15:30:43 +01:00
Marek Szyprowski f8669bef11 ARM: dma-mapping: use himem for DMA buffers for IOMMU-mapped devices
An IOMMU can provide access to any memory page, so there is no point in
limiting the allocated pages to lowmem, now that other parts of the
dma-mapping subsystem correctly support highmem pages.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25 15:30:43 +01:00
Marek Szyprowski 9848e48f4c ARM: dma-mapping: add support for CMA regions placed in highmem zone
This patch adds the missing pieces to correctly support memory pages served
from CMA regions placed in high memory zones. Please note that the default
global CMA area is still put into lowmem and is limited by the optional
architecture-specific DMA zone. One can, however, put device-specific CMA
regions in the high memory zone to reduce lowmem usage.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
2013-02-25 15:30:42 +01:00