Commit Graph

17 Commits (5691e22711a4ee70f473e909d7aae76a966d819d)

Author SHA1 Message Date
Kees Cook 47b6d269db ARM: 8958/1: rename missed uaccess .fixup section
commit f87b1c49bc upstream.

When the uaccess .fixup section was renamed to .text.fixup, one case was
missed. Under ld.bfd, the orphaned section was moved close to .text
(since they share the "ax" bits), so things would work normally on
uaccess faults. Under ld.lld, the orphaned section was placed outside
the .text section, making it unreachable.
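
For illustration, the fix amounts to a one-line section-directive change of
roughly this shape (a sketch; the exact file containing the leftover
reference is not spelled out here):

	@ before: ends up as an orphan ".fixup" section under ld.lld
	.pushsection .fixup,"ax"

	@ after: ".text.fixup" is kept adjacent to .text by the linker script
	.pushsection .text.fixup,"ax"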

Link: https://github.com/ClangBuiltLinux/linux/issues/282
Link: https://bugs.chromium.org/p/chromium/issues/detail?id=1020633#c44
Link: https://lore.kernel.org/r/nycvar.YSQ.7.76.1912032147340.17114@knanqh.ubzr
Link: https://lore.kernel.org/lkml/202002071754.F5F073F1D@keescook/

Fixes: c4a84ae39b ("ARM: 8322/1: keep .text and .fixup regions closer together")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-21 08:11:58 +01:00
Thomas Gleixner d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

Extracted by the scancode license scanner, the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).
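
For illustration, the replacement is a single SPDX tag at the top of each
file; in an assembly source it looks like:

	/* SPDX-License-Identifier: GPL-2.0-only */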

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Stefan Agner a216376add ARM: 8841/1: use unified assembler in macros
Use unified assembler syntax (UAL) in macros. Divided syntax is
considered deprecated. This will also allow building the kernel
with LLVM's integrated assembler.
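
For example, divided syntax places the condition code before the size
qualifier, while UAL places it last (registers here are illustrative):

	@ divided (deprecated) syntax
	ldrgtb	r3, [r1], #1
	@ unified (UAL) syntax, also accepted by LLVM's integrated assembler
	ldrbgt	r3, [r1], #1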

Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26 11:25:43 +00:00
Vincent Whitchurch f441882a52 ARM: 8812/1: Optimise copy_{from/to}_user for !CPU_USE_DOMAINS
ARMv6+ processors do not use CONFIG_CPU_USE_DOMAINS and use privileged
ldr/str instructions in copy_{from/to}_user.  They are currently
unnecessarily using single ldr/str instructions and can use ldm/stm
instructions instead, like memcpy does (but with appropriate fixup
tables).
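
Roughly, the change turns word-at-a-time user accesses into multi-word
bursts; a simplified sketch (register lists are illustrative, and the real
code adds a fixup-table entry covering each access):

	@ before: one word per iteration, each load/store individually fixed up
	ldr	r3, [r1], #4
	str	r3, [r0], #4

	@ after: several words per iteration
	ldmia	r1!, {r3, r4, r5, r6}
	stmia	r0!, {r3, r4, r5, r6}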

This speeds up a "dd if=foo of=bar bs=32k" on a tmpfs filesystem by
about 4% on my Cortex-A9.

before:134217728 bytes (128.0MB) copied, 0.543848 seconds, 235.4MB/s
before:134217728 bytes (128.0MB) copied, 0.538610 seconds, 237.6MB/s
before:134217728 bytes (128.0MB) copied, 0.544356 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.544364 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.537130 seconds, 238.3MB/s
before:134217728 bytes (128.0MB) copied, 0.533443 seconds, 240.0MB/s
before:134217728 bytes (128.0MB) copied, 0.545691 seconds, 234.6MB/s
before:134217728 bytes (128.0MB) copied, 0.534695 seconds, 239.4MB/s
before:134217728 bytes (128.0MB) copied, 0.540561 seconds, 236.8MB/s
before:134217728 bytes (128.0MB) copied, 0.541025 seconds, 236.6MB/s

 after:134217728 bytes (128.0MB) copied, 0.520445 seconds, 245.9MB/s
 after:134217728 bytes (128.0MB) copied, 0.527846 seconds, 242.5MB/s
 after:134217728 bytes (128.0MB) copied, 0.519510 seconds, 246.4MB/s
 after:134217728 bytes (128.0MB) copied, 0.527231 seconds, 242.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.525030 seconds, 243.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.524236 seconds, 244.2MB/s
 after:134217728 bytes (128.0MB) copied, 0.523659 seconds, 244.4MB/s
 after:134217728 bytes (128.0MB) copied, 0.525018 seconds, 243.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.519249 seconds, 246.5MB/s
 after:134217728 bytes (128.0MB) copied, 0.518527 seconds, 246.9MB/s

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-11-12 10:51:59 +00:00
Julien Thierry afaf6838f4 ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization
Introduce C and asm helpers to sanitize user addresses, taking the
address range they target into account.

Use asm helper for existing sanitization in __copy_from_user().
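
A sketch of the general shape of such an asm helper (macro and operand
names are illustrative; "limit" is the exclusive upper bound of the user
address space):

	.macro	mask_range_ptr, addr:req, size:req, limit:req, tmp:req
	sub	\tmp, \limit, #1
	subs	\tmp, \tmp, \addr	@ tmp = (limit - 1) - addr
	addhs	\tmp, \tmp, #1		@ if (addr < limit) tmp = limit - addr
	subshs	\tmp, \tmp, \size	@ if (addr < limit) tmp -= size
	movlo	\addr, #0		@ range exceeds limit: force a NULL pointer
	csdb				@ no speculative use of the raw pointer
	.endm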

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-10-05 10:51:15 +01:00
Russell King a3c0f84765 ARM: spectre-v1: mitigate user accesses
Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made
to memory which userspace can detect whether cache lines have been
loaded.  On 32-bit ARM, this must be either user accessible memory, or
a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data,
and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential
problem if the data it has loaded is used in a subsequent access.  This
also applies for the access() if the data loaded by that access is used
by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out
of bounds addresses to a NULL pointer.  This prevents get_user() being
used as the load() step above.  As a side effect, put_user() will also
be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the
arm_copy_from_user() code, and NULLing the pointer if out of bounds.
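
Conceptually, the get_user() hardening compares the pointer against the
current limit and substitutes NULL for anything out of range before the
access is made; a simplified sketch (registers illustrative):

	cmp	r0, r1		@ r0 = user pointer, r1 = address limit
	movcs	r0, #0		@ pointer >= limit: use NULL instead
	csdb			@ speculation barrier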

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-08-02 17:41:38 +01:00
Russell King 8478132a87 Revert "arm: move exports to definitions"
This reverts commit 4dd1837d75.

Moving the exports for assembly code into the assembly files breaks
KSYM trimming, but also breaks modversions.

While fixing the KSYM trimming is trivial, fixing modversions brings
us to a technically worse position than we had prior to the above
change:

- We end up with the prototype definitions divorced from everything
  else, which means that adding or removing assembly-level ksyms
  becomes more fragile:
  * if adding a new assembly ksyms export, a missed prototype in
    asm-prototypes.h results in a successful build if no module in
    the selected configuration makes use of the symbol.
  * when removing a ksyms export, asm-prototypes.h will get forgotten;
    with armksyms.c, you'll get a build error if you forget to touch
    the file.

- We end up with the same number of include files and prototypes,
  they're just in a header file instead of a .c file with their
  exports.

As for lines of code, we don't get much of a size reduction:
 (original commit)
 47 files changed, 131 insertions(+), 208 deletions(-)
 (fix for ksyms trimming)
 7 files changed, 18 insertions(+), 5 deletions(-)
 (two fixes for modversions)
 1 file changed, 34 insertions(+)
 3 files changed, 7 insertions(+), 2 deletions(-)
which results in a net total of only 25 lines deleted.

As there does not seem to be much benefit from this change of approach,
revert the change.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-11-23 10:00:03 +00:00
Linus Torvalds b26b5ef5ec Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull more misc uaccess and vfs updates from Al Viro:
 "The rest of the stuff from -next (more uaccess work) + assorted fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  score: traps: Add missing include file to fix build error
  fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
  fs/super.c: fix race between freeze_super() and thaw_super()
  overlayfs: Fix setting IOP_XATTR flag
  iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
  blackfin: no access_ok() for __copy_{to,from}_user()
  arm64: don't zero in __copy_from_user{,_inatomic}
  arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
  arc: don't leak bits of kernel stack into coredump
  alpha: get rid of tail-zeroing in __copy_user()
2016-10-14 18:19:05 -07:00
Al Viro 91344493b7 arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
adjust copy_from_user(), obviously

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-09-15 19:51:56 -04:00
Al Viro 4dd1837d75 arm: move exports to definitions
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-07 23:47:21 -04:00
Russell King 3fba7e23f7 ARM: uaccess: provide uaccess_save_and_enable() and uaccess_restore()
Provide uaccess_save_and_enable() and uaccess_restore() to permit
control of userspace visibility to the kernel, and hook these into
the appropriate places in the kernel where we need to access
userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-25 16:14:43 +01:00
Lin Yongting 279f487e0b ARM: 8225/1: Add unwinding support for memory copy functions
The memory copy functions (memcpy, __copy_from_user, __copy_to_user)
never had unwinding annotations added. Currently, when one of these
functions accesses an invalid pointer, the backtrace shown will stop
at these functions or at some completely unrelated function.
Add unwinding annotations in hopes of getting a more useful backtrace
in the following cases:
1. die on accessing an invalid pointer in these functions
2. kprobe trapped at any instruction within these functions
3. interrupted at any instruction within these functions
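
The annotations use the kernel's UNWIND() wrapper around the ARM EHABI
assembler directives; a minimal sketch (function name and saved registers
are illustrative):

	ENTRY(copy_something)
	UNWIND(	.fnstart	)
	UNWIND(	.save	{r0, r4, lr}	)
	stmfd	sp!, {r0, r4, lr}
	@ ... copy loop ...
	ldmfd	sp!, {r0, r4, pc}
	UNWIND(	.fnend	)
	ENDPROC(copy_something)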

Signed-off-by: Lin Yongting <linyongting@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-11-27 16:00:25 +00:00
Russell King 4260415f6a ARM: fix build error in arch/arm/kernel/process.c
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

This is caused because:

	.section .data
	.section .text
	.section .text
	.previous

does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.

Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.
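
The safer pairing keeps an explicit section stack, so control always
returns to the section that was actually active beforehand:

	@ fragile: which section .previous returns to depends on the
	@ earlier history of .section directives
	.section .fixup,"ax"
	@ ... fixup code ...
	.previous

	@ robust: .pushsection/.popsection nest correctly
	.pushsection .fixup,"ax"
	@ ... fixup code ...
	.popsection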

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-21 08:45:21 +01:00
Catalin Marinas 8b592783a2 Thumb-2: Implement the unified arch/arm/lib functions
This patch adds the ARM/Thumb-2 unified support for the arch/arm/lib/*
files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:32:57 +01:00
Catalin Marinas 93ed397011 [ARM] 5227/1: Add the ENDPROC declarations to the .S files
This declaration specifies the "function" type and size for various
assembly functions, mainly needed for generating the correct branch
instructions in Thumb-2.
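
ENDPROC() pairs with ENTRY() and marks the symbol as a function with a
proper size, which is what lets correct interworking branches be generated
when the code is built as Thumb-2; a minimal sketch (name illustrative):

	ENTRY(do_copy)
	@ ... function body ...
	bx	lr
	ENDPROC(do_copy)	@ marks do_copy as a sized %function symbol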

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-01 12:06:34 +01:00
Russell King 02fcb97436 [ARM] Remove the __arch_* layer from uaccess.h
Back in the days when we had armo (26-bit) and armv (32-bit) combined,
we had an additional layer to the uaccess macros to ensure correct
typing.  Since we no longer have 26-bit in this tree, we no longer
need this layer, so eliminate it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-06-28 17:53:27 +01:00
Nicolas Pitre fadab0943d [ARM] 2948/1: new preemption safe copy_{to|from}_user implementation
Patch from Nicolas Pitre

This patch provides a preemption safe implementation of copy_to_user
and copy_from_user based on the copy template also used for memcpy.
It is enabled unconditionally when CONFIG_PREEMPT=y.  Otherwise, if the
configured architecture is not ARMv3, it is enabled as well, since it
gives better performance, at least on StrongARM and XScale cores.  If
ARMv3 is not too affected, or if it doesn't matter too much, then
uaccess.S could be removed altogether.
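
The template approach lets copy_{to,from}_user and memcpy share a single
copy loop, with the per-caller load/store primitives supplied as macros; a
rough sketch of the copy_from_user side (macro details are illustrative):

	.macro	ldr1w ptr reg abort
100:	ldrt	\reg, [\ptr], #4	@ user-mode load
	.pushsection __ex_table,"a"
	.align	3
	.long	100b, \abort		@ on a fault, jump to the abort handler
	.popsection
	.endm

	@ the shared copy loop is then pulled in:
	#include "copy_template.S"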

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2005-11-01 19:52:24 +00:00