alistair23-linux/arch/metag/mm/maccess.c
James Hogan 373cd784d0 metag: Memory handling
Meta has instructions for accessing:
 - bytes        - GETB (1 byte)
 - words        - GETW (2 bytes)
 - doublewords  - GETD (4 bytes)
 - longwords    - GETL (8 bytes)

All accesses must be aligned. Unaligned accesses can be detected and
made to fault on Meta2; however, it isn't possible to fix up unaligned
writes, so we don't bother fixing up reads either.

This patch adds metag memory handling code including:
 - I/O memory (io.h, ioremap.c): In fact any virtual memory can be
   accessed with these helpers. A part of the non-MMUable address space
   is used for memory mapped I/O, and the ioremap() function is
   implemented as a one-to-one mapping for non-MMUable addresses.
 - User memory (uaccess.h, usercopy.c): User memory is directly
   accessible from privileged code.
 - Kernel memory (maccess.c): probe_kernel_write() needs to be
   overridden to use the I/O functions when doing a simple aligned
   write to non-writecombined memory; otherwise the write may be split
   by the generic version.

Note that due to the fact that a portion of the virtual address space is
non-MMUable, and therefore always maps directly to the physical address
space, metag specific I/O functions are made available (metag_in32,
metag_out32 etc). These cast the address argument to a pointer so that
they can be used with raw physical addresses. These accessors are only
to be used for accessing fixed core Meta architecture registers in the
non-MMU region, and not for any SoC/peripheral registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:19 +00:00


/*
 * safe read and write memory routines callable while atomic
 *
 * Copyright 2012 Imagination Technologies
 */

#include <linux/uaccess.h>
#include <asm/io.h>

/*
 * The generic probe_kernel_write() uses the user copy code which can split the
 * writes if the source is unaligned, and repeats writes to make exceptions
 * precise. We override it here to avoid these things happening to memory mapped
 * IO memory where they could have undesired effects.
 * Due to the use of CACHERD instruction this only works on Meta2 onwards.
 */
#ifdef CONFIG_METAG_META21
long probe_kernel_write(void *dst, const void *src, size_t size)
{
	unsigned long ldst = (unsigned long)dst;
	void __iomem *iodst = (void __iomem *)dst;
	unsigned long lsrc = (unsigned long)src;
	const u8 *psrc = (u8 *)src;
	unsigned int pte, i;
	u8 bounce[8] __aligned(8);

	if (!size)
		return 0;

	/* Use the write combine bit to decide if the destination is MMIO. */
	pte = __builtin_meta2_cacherd(dst);

	/* Check the mapping is valid and writeable. */
	if ((pte & (MMCU_ENTRY_WR_BIT | MMCU_ENTRY_VAL_BIT))
	    != (MMCU_ENTRY_WR_BIT | MMCU_ENTRY_VAL_BIT))
		return -EFAULT;

	/* Fall back to generic version for cases we're not interested in. */
	if (pte & MMCU_ENTRY_WRC_BIT ||	/* write combined memory */
	    (ldst & (size - 1)) ||	/* destination unaligned */
	    size > 8 ||			/* more than max write size */
	    (size & (size - 1)))	/* non power of 2 size */
		return __probe_kernel_write(dst, src, size);

	/* If src is unaligned, copy to the aligned bounce buffer first. */
	if (lsrc & (size - 1)) {
		for (i = 0; i < size; ++i)
			bounce[i] = psrc[i];
		psrc = bounce;
	}

	switch (size) {
	case 1:
		writeb(*psrc, iodst);
		break;
	case 2:
		writew(*(const u16 *)psrc, iodst);
		break;
	case 4:
		writel(*(const u32 *)psrc, iodst);
		break;
	case 8:
		writeq(*(const u64 *)psrc, iodst);
		break;
	}
	return 0;
}
#endif