Commit Graph

15 Commits (4ab328f06e305bf3ea254f4e3c94bb4d820998c1)

Author SHA1 Message Date
Ard Biesheuvel 0777e3e172 ARM: 8125/1: crypto: enable NEON SHA-1 for big endian
This tweaks the SHA-1 NEON code slightly so it works correctly under big
endian, and removes the Kconfig condition preventing it from being
selected if CONFIG_CPU_BIG_ENDIAN is set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-27 15:44:11 +01:00
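
For context on the commit above: SHA-1 operates on the message as a
sequence of 32-bit big-endian words, which is why the word loads in the
NEON code must be made conditional on the CPU's endianness. A minimal C
illustration of the issue (a sketch, not the NEON code itself):

#include <stdint.h>

/*
 * SHA-1 consumes 32-bit big-endian words. A little-endian CPU must
 * byte-swap each word on load (the NEON code does this with a vector
 * byte-reverse); a big-endian CPU can use the bytes as they are.
 * A portable load therefore looks like this:
 */
static uint32_t sha1_load_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
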
Linus Torvalds c489d98c8c Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
 "Included in this update:

   - perf updates from Will Deacon:

     The main changes are callchain stability fixes from Jean Pihet and
     event mapping and PMU name rework from Mark Rutland.

     The latter is preparatory work for enabling some code re-use with
     arm64 in the future.

   - updates for nommu from Uwe Kleine-König:

     Two different fixes for the same problem, which has kept some ARM
     nommu configurations from booting since 3.6-rc1.  The problem is
     that user_addr_max() returned the biggest available RAM address,
     which made some copy_from_user() variants fail to read from XIP
     memory.

   - deprecate the legacy OMAP DMA API, in preparation for its removal.

     The popular drivers have been converted over, leaving a very small
     number of rarely used drivers, which can hopefully be converted
     during the next cycle with a bit more visibility (and people
     popping out of the woodwork to help test)

   - more tweaks for BE systems, particularly with the kernel image
     format.  In connection with this, I've cleaned up the way we
     generate the linker script for the decompressor.

   - removal of hard-coded assumptions of the kernel stack size, making
     everywhere depend on the value of THREAD_SIZE_ORDER.

   - MCPM updates from Nicolas Pitre.

   - Make it easier to perform proper CPU part number checks (which
     should always include the vendor field).

   - Assembly code optimisation - use the "bx" instruction when
     returning from a function on ARMv6+ rather than "mov pc, reg".

   - Save the last kernel misaligned fault location and report it via
     the procfs alignment file.

   - Clean up the way we create the initial stack frame, which is a
     repeated pattern in several different locations.

   - Support for 8-byte get_user(), needed for some DRM implementations
     (see the sketch after this entry).

   - MCS locking from Will Deacon.

   - Save and restore a few more Cortex-A9 registers (for errata
     workarounds)

   - Fix various aspects of the SWP emulation, and the ELF hwcap for the
     SWP instruction.

   - Update LPAE logic for pte_write and pmd_write to make it more
     correct.

   - Support for Broadcom Brahma15 CPU cores.

   - ARM assembly crypto updates from Ard Biesheuvel"

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (53 commits)
  ARM: add comments to the early page table remap code
  ARM: 8122/1: smp_scu: enable SCU standby support
  ARM: 8121/1: smp_scu: use macro for SCU enable bit
  ARM: 8120/1: crypto: sha512: add ARM NEON implementation
  ARM: 8119/1: crypto: sha1: add ARM NEON implementation
  ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures
  ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>
  ARM: 8111/1: Enable erratum 798181 for Broadcom Brahma-B15
  ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 cores
  ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE
  ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear
  ARM: hwcap: disable HWCAP_SWP if the CPU advertises it has exclusives
  ARM: SWP emulation: only initialise on ARMv7 CPUs
  ARM: SWP emulation: always enable when SMP is enabled
  ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resume
  ARM: 8098/1: mcs lock: implement wfe-based polling for MCS locking
  ARM: 8091/2: add get_user() support for 8 byte types
  ARM: 8097/1: unistd.h: relocate comments back to place
  ARM: 8096/1: Describe required sort order for textofs-y (TEXT_OFFSET)
  ARM: 8090/1: add revision info for PL310 errata 588369 and 727915
  ...
2014-08-05 10:05:29 -07:00
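
On the 8-byte get_user() item in the pull request above: with that
support in place, a 32-bit ARM driver can fetch a 64-bit user-space
value in a single call. A hedged sketch (the helper and its names are
hypothetical):

#include <linux/types.h>
#include <linux/uaccess.h>

/* hypothetical helper: fetch a 64-bit value supplied by user space */
static int fetch_user_offset(const u64 __user *uptr, u64 *out)
{
	u64 val;

	/* with 8-byte get_user() support this works for u64 too;
	 * it returns -EFAULT if the address is inaccessible */
	if (get_user(val, uptr))
		return -EFAULT;

	*out = val;
	return 0;
}
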
Jussi Kivilinna c8611d712a ARM: 8120/1: crypto: sha512: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-512 and SHA-384
algorithms.

tcrypt benchmark results on Cortex-A8, sha512-generic vs sha512-neon-asm:

block-size      bytes/update    old-vs-new
16              16              2.99x
64              16              2.67x
64              64              3.00x
256             16              2.64x
256             64              3.06x
256             256             3.33x
1024            16              2.53x
1024            256             3.39x
1024            1024            3.52x
2048            16              2.50x
2048            256             3.41x
2048            1024            3.54x
2048            2048            3.57x
4096            16              2.49x
4096            256             3.42x
4096            1024            3.56x
4096            4096            3.59x
8192            16              2.48x
8192            256             3.42x
8192            1024            3.56x
8192            4096            3.60x
8192            8192            3.60x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:50 +01:00
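
Glue code for NEON implementations like the one above cannot touch the
NEON register file at arbitrary points in the kernel; the usual pattern
brackets the assembly with kernel_neon_begin()/kernel_neon_end() and
falls back to a scalar path when SIMD is unavailable. A sketch of that
pattern, with hypothetical function names and partial-block buffering
omitted for brevity:

#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <asm/neon.h>	/* kernel_neon_begin()/kernel_neon_end() */
#include <asm/simd.h>	/* may_use_simd() */

/* hypothetical entry points, to illustrate the glue pattern only */
int sha512_update_scalar(struct shash_desc *desc, const u8 *data,
			 unsigned int len);
void sha512_transform_neon(void *state, const u8 *data,
			   unsigned int blocks);

static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
			      unsigned int len)
{
	/* NEON may only be used from process context */
	if (!may_use_simd())
		return sha512_update_scalar(desc, data, len);

	kernel_neon_begin();
	sha512_transform_neon(shash_desc_ctx(desc), data,
			      len / SHA512_BLOCK_SIZE);
	kernel_neon_end();
	return 0;
}
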
Jussi Kivilinna 604682551a ARM: 8119/1: crypto: sha1: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-1 algorithm.

tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:

block-size      bytes/update    old-vs-new
16              16              1.04x
64              16              1.02x
64              64              1.05x
256             16              1.03x
256             64              1.04x
256             256             1.30x
1024            16              1.03x
1024            256             1.36x
1024            1024            1.52x
2048            16              1.03x
2048            256             1.39x
2048            1024            1.55x
2048            2048            1.59x
4096            16              1.03x
4096            256             1.40x
4096            1024            1.57x
4096            4096            1.62x
8192            16              1.03x
8192            256             1.40x
8192            1024            1.58x
8192            4096            1.63x
8192            8192            1.63x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:47 +01:00
Jussi Kivilinna 1f8673d31a ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing.

This patch changes SHA-1/ARM glue code to use these structures.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02 08:51:46 +01:00
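
For reference, the shared definitions give every SHA-1 driver the same
state layout and initial values. A sketch of an init routine built on
them (close to, though not necessarily identical to, the resulting glue
code):

#include <crypto/internal/hash.h>
#include <crypto/sha.h>

static int sha1_init(struct shash_desc *desc)
{
	struct sha1_state *sctx = shash_desc_ctx(desc);

	/* state layout, the SHA1_H0..H4 constants and the sizes all
	 * come from <crypto/sha.h> instead of per-driver definitions */
	*sctx = (struct sha1_state){
		.state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
	};
	return 0;
}
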
Mikulas Patocka f3c400ef47 crypto: arm-aes - fix encryption of unaligned data
Fix the same alignment bug as in arm64: we need to pass the number of
unprocessed (residue) bytes as the last argument to blkcipher_walk_done().

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org	# 3.13+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-28 22:01:03 +08:00
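
The shape of the fix above: the blkcipher walk hands the driver chunks
that need not be block-aligned, and the third argument of
blkcipher_walk_done() tells the walk how many bytes were left
unprocessed so they can be re-presented later. A sketch under those
assumptions (the NEON primitive is hypothetical; handling of a final
partial block, as needed for CTR, is omitted):

#include <crypto/aes.h>
#include <crypto/algapi.h>

/* hypothetical NEON multi-block CBC-decrypt primitive */
void aes_cbc_decrypt_blocks(u8 *dst, const u8 *src, const u32 *rk,
			    int rounds, int blocks, u8 *iv);

static int cbc_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
		       struct scatterlist *src, unsigned int nbytes)
{
	struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
	struct blkcipher_walk walk;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt(desc, &walk);

	while (walk.nbytes >= AES_BLOCK_SIZE) {
		aes_cbc_decrypt_blocks(walk.dst.virt.addr, walk.src.virt.addr,
				       ctx->key_dec, 6 + ctx->key_length / 4,
				       walk.nbytes / AES_BLOCK_SIZE, walk.iv);
		/* the fix: report the residue instead of 0, so bytes of
		 * a partial block are not silently dropped */
		err = blkcipher_walk_done(desc, &walk,
					  walk.nbytes % AES_BLOCK_SIZE);
	}
	return err;
}
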
Russell King 6ebbf2ce43 ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls.  Recent CPUs perform better when the
"bx lr" instruction is used rather than "mov pc, lr", and its use is
strongly recommended by the ARM Architecture Reference Manual
(section A.4.1.1).

We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.

Rather than doing this piecemeal, and missing some instances, change
all the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code.  This allows us to detect
the "mov pc, lr" case and fix it up, and also gives us the possibility
of deploying this for other registers depending on the CPU selection.

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18 12:29:04 +01:00
Russell King d2eca20d77 CRYPTO: Fix more AES build errors
Building a multi-arch kernel results in:

arch/arm/crypto/built-in.o: In function `aesbs_xts_decrypt':
sha1_glue.c:(.text+0x15c8): undefined reference to `bsaes_xts_decrypt'
arch/arm/crypto/built-in.o: In function `aesbs_xts_encrypt':
sha1_glue.c:(.text+0x1664): undefined reference to `bsaes_xts_encrypt'
arch/arm/crypto/built-in.o: In function `aesbs_ctr_encrypt':
sha1_glue.c:(.text+0x184c): undefined reference to `bsaes_ctr32_encrypt_blocks'
arch/arm/crypto/built-in.o: In function `aesbs_cbc_decrypt':
sha1_glue.c:(.text+0x19b4): undefined reference to `bsaes_cbc_encrypt'

This code is already runtime-conditional on NEON being supported, so
there's no point compiling it out depending on the minimum build
architecture.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-01-05 13:59:56 +00:00
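
The runtime condition referred to above is the module init bailing out
on CPUs without NEON, so nothing NEON-specific ever runs on them
regardless of the minimum architecture the kernel was built for. A
sketch (the algorithm array is a placeholder):

#include <linux/crypto.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/neon.h>	/* cpu_has_neon() */

static struct crypto_alg aesbs_algs[] = {
	/* CBC/CTR/XTS algorithm definitions go here */
};

static int __init aesbs_mod_init(void)
{
	/* decided at runtime, not by the minimum build architecture */
	if (!cpu_has_neon())
		return -ENODEV;

	return crypto_register_algs(aesbs_algs, ARRAY_SIZE(aesbs_algs));
}
module_init(aesbs_mod_init);
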
Russell King 75c912a367 ARM: add .gitignore entry for aesbs-core.S
This avoids this file being incorrectly added to git.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-07 15:43:53 +01:00
Ard Biesheuvel e4e7f10bfc ARM: add support for bit sliced AES using NEON instructions
Bit sliced AES gives around 45% speedup on Cortex-A15 for encryption
and around 25% for decryption. This implementation of the AES algorithm
does not rely on any lookup tables so it is believed to be invulnerable
to cache timing attacks.

This algorithm processes up to 8 blocks in parallel in constant time. This
means that it is not usable by chaining modes that are strictly sequential
in nature, such as CBC encryption. CBC decryption, however, can benefit from
this implementation and runs about 25% faster. The other chaining modes
implemented in this module, XTS and CTR, can execute fully in parallel in
both directions.

The core code has been adopted from the OpenSSL project (in collaboration
with the original author, on cc). For ease of maintenance, this version is
identical to the upstream OpenSSL code, i.e., all modifications that were
required to make it suitable for inclusion into the kernel have been made
upstream. The original can be found here:

    http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130

Note to integrators:
While this implementation is significantly faster than the existing
table-based ones (generic or ARM asm), especially in CTR mode, the
effects on power efficiency are as yet unclear. This code fundamentally
does more work, calculating values that the table-based code obtains by
a simple lookup; it is only by doing all of that work in a SIMD fashion
that it manages to perform better.

Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2013-10-04 20:48:38 +02:00
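
To make the chaining-mode point above concrete: in CBC encryption every
block's input depends on the previous ciphertext block, so there is
never more than one block ready to encrypt and an 8-way implementation
has nothing to batch; in CBC decryption all the block decryptions are
independent. A plain C illustration (the one-block primitive is a
hypothetical stand-in for the real cipher):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* hypothetical one-block AES primitive, for illustration only */
void aes_encrypt_block(uint8_t out[16], const uint8_t in[16],
		       const void *key);

/*
 * CBC encryption: C[i] = E(P[i] ^ C[i-1]). Each iteration needs the
 * previous output, so blocks cannot be processed in parallel.
 */
void cbc_encrypt(uint8_t *c, const uint8_t *p, size_t nblocks,
		 const void *key, uint8_t iv[16])
{
	uint8_t x[16];

	for (size_t i = 0; i < nblocks; i++, p += 16, c += 16) {
		for (int j = 0; j < 16; j++)
			x[j] = p[j] ^ iv[j];
		aes_encrypt_block(c, x, key);
		memcpy(iv, c, 16);	/* the chain: next block waits on this */
	}
}

/*
 * CBC decryption: P[i] = D(C[i]) ^ C[i-1]. The D(C[i]) calls are all
 * independent, so eight of them can run in parallel with the XORs
 * applied afterwards; that is why decryption benefits here.
 */
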
Ard Biesheuvel 5ce26f3b5a ARM: move AES typedefs and function prototypes to separate header
Put the struct definitions for AES keys and the asm function prototypes in a
separate header and export the asm functions from the module.
This allows other drivers to use them directly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2013-10-04 09:26:54 +02:00
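
A sketch of what such a header might look like, modelled on OpenSSL's
AES naming since the ARM asm code derives from OpenSSL (see the
bit-sliced AES commit above); treat the exact names and field layout as
illustrative, not authoritative:

/* arch/arm/crypto/aes_glue.h (sketch) */
#include <linux/linkage.h>
#include <linux/types.h>

#define AES_MAXNR	14

struct AES_KEY {
	unsigned int rd_key[4 * (AES_MAXNR + 1)];
	int rounds;
};

asmlinkage void AES_encrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage void AES_decrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage int private_AES_set_encrypt_key(const unsigned char *userKey,
					   const int bits, struct AES_KEY *key);
asmlinkage int private_AES_set_decrypt_key(const unsigned char *userKey,
					   const int bits, struct AES_KEY *key);
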
Ard Biesheuvel 40190c85f4 ARM: 7837/3: fix Thumb-2 bug in AES assembler code
Patch 638591c enabled building the AES assembler code in Thumb-2 mode.
However, this code used arithmetic involving the PC rather than adr{l}
instructions to generate PC-relative references to the lookup tables,
and this needs to take into account the different PC offset when
running in Thumb mode (the PC reads as the current instruction's
address plus 8 in ARM state, but only plus 4 in Thumb state).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-22 11:43:38 +01:00
Ard Biesheuvel 934fc24df1 ARM: 7723/1: crypto: sha1-armv4-large.S: fix SP handling
Make the SHA1 asm code ABI conformant by making sure all stack
accesses occur above the stack pointer.

Origin:
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=1a9d60d2

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-05-22 22:01:35 +01:00
Dave Martin 638591cd7b ARM: 7626/1: arm/crypto: Make asm SHA-1 and AES code Thumb-2 compatible
This patch fixes aes-armv4.S and sha1-armv4-large.S to work
natively in Thumb.  This allows ARM/Thumb interworking workarounds
to be removed.

I also take the opportunity to convert some explicit assembler
directives for exported functions to the standard
ENTRY()/ENDPROC().

For the code itself:

  * In sha1_block_data_order, use of TEQ with sp is deprecated in
    ARMv7 and not supported in Thumb.  For the branches back to
    .L_00_15 and .L_40_59, the TEQ is converted to a CMP, under the
    assumption that clobbering the C flag here will not cause
    incorrect behaviour (CMP, unlike TEQ, updates the C flag).

    For the first branch back to .L_20_39_or_60_79 the C flag is
    important, so sp is moved temporarily into another register so
    that TEQ can be used for the comparison.

  * In the AES code, most forms of register-indexed addressing with
    shifts and rotates are not permitted for loads and stores in
    Thumb, so the address calculation is done using a separate
    instruction for the Thumb case.

The resulting code is unlikely to be optimally scheduled, but it
should not have a large impact given the overall size of the code.
I haven't run any benchmarks.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Tested-by: David McCullough <ucdevel@gmail.com> (ARM only)
Acked-by: David McCullough <ucdevel@gmail.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-01-13 12:41:22 +00:00
David McCullough f0be44f4fb arm/crypto: Add optimized AES and SHA1 routines
Add assembler versions of AES and SHA1 for ARM platforms.  This has provided
up to a 50% improvement in IPsec/TCP throughput for tunnels using AES128/SHA1.

Platform   CPU Speed    Endian   Before (bps)   After (bps)   Improvement

IXP425      533 MHz      big     11217042        15566294        ~38%
KS8695      166 MHz     little    3828549         5795373        ~51%

Signed-off-by: David McCullough <ucdevel@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-09-07 04:17:02 +08:00