
powerpc updates for 4.16

Highlights:
 
  - Enable support for memory protection keys aka "pkeys" on Power7/8/9 when
    using the hash table MMU.
 
  - Extend our interrupt soft masking to support masking PMU interrupts as well
    as "normal" interrupts, and then use that to implement local_t for a ~4x
    speedup vs the current atomics-based implementation.
 
  - A new driver "ocxl" for "Open Coherent Accelerator Processor Interface
    (OpenCAPI)" devices.
 
  - Support for new device tree properties on PowerVM to describe hotpluggable
    memory and devices.
 
  - Add support for CLOCK_{REALTIME/MONOTONIC}_COARSE to the 64-bit VDSO.
 
  - Freescale updates from Scott:
      "Contains fixes for CPM GPIO and an FSL PCI erratum workaround, plus a
       minor cleanup patch."
 
 As well as quite a lot of other changes all over the place, and small fixes and
 cleanups as always.
 
 Thanks to:
   Alan Modra, Alastair D'Silva, Alexey Kardashevskiy, Alistair Popple, Andreas
   Schwab, Andrew Donnellan, Aneesh Kumar K.V, Anju T Sudhakar, Anshuman
   Khandual, Anton Blanchard, Arnd Bergmann, Balbir Singh, Benjamin
   Herrenschmidt, Bhaktipriya Shridhar, Bryant G. Ly, Cédric Le Goater,
   Christophe Leroy, Christophe Lombard, Cyril Bur, David Gibson, Desnes A. Nunes
   do Rosario, Dmitry Torokhov, Frederic Barrat, Geert Uytterhoeven, Guilherme G.
   Piccoli, Gustavo A. R. Silva, Gustavo Romero, Ivan Mikhaylov, Joakim
   Tjernlund, Joe Perches, Josh Poimboeuf, Juan J. Alvarez, Julia Cartwright,
   Kamalesh Babulal, Madhavan Srinivasan, Mahesh Salgaonkar, Mathieu Malaterre,
   Michael Bringmann, Michael Hanselmann, Michael Neuling, Nathan Fontenot,
   Naveen N. Rao, Nicholas Piggin, Paul Mackerras, Philippe Bergheaud, Ram Pai,
   Russell Currey, Santosh Sivaraj, Scott Wood, Seth Forshee, Simon Guo, Stewart
   Smith, Sukadev Bhattiprolu, Thiago Jung Bauermann, Vaibhav Jain, Vasyl
   Gomonovych.
 -----BEGIN PGP SIGNATURE-----
 
 iQIwBAABCAAaBQJadF6wExxtcGVAZWxsZXJtYW4uaWQuYXUACgkQUevqPMjhpYA2
 nBAAnguCEyAIYpc+ffE3WU9xJEWxa6bKuVufHcUFVntGiGD+igmMS+SHp4ay3Aos
 HcA4WFrpzNb2KZ++kmFWtAKWnMfCiW9xuYJNicjr7X5ZiVBEhLWN/mQCwBKs3p6L
 5+HhvytcdkKVbEcyVjEGvRL40AyxXNOI02o6Co9X8vanHsmWB4q0eWe4PHstZqlg
 6K6kazMp+NTvEFYwKNXDOvuHouKSL57l14SLROH7CpJkNTOQ9s+W59/LmnuCjRlu
 o70b7iWOAEbF9tvMma1ksDZVNj7mSyaymLYCyOXu4CkuuleJacZYJ9oQGNddoIbC
 wk7l93vPT/yze7DYg8x3uXpKcaDEvEepPuQ/ubz+UXFQWuJtl5ej6Cv+0eOmyZIs
 +bjWhGHKdNttnsiPlTRCX/gWD13RE1dB6xXJlfOJ7Oz9OnXXK8ZKc1NTREbQXRWM
 8tClAwf9upWpm86GHPVnyrgYbgZo5b1os4SoS8e3kESzakrQVQP7J376u2DtccRq
 2AGqjJ+tl5tYPnhm8zG1cNrpqHHpgkNGqLS7DvWRg3EPmEKVQcltN1b/0aKaAjHA
 aTRofjrVo+jJ4MX1uyEo59yNCEQPfjkmHRQdLwm+xjWTzEPfIMzpWyXm14tawDQf
 OjcAe90W/qQ18brw4z+2BI14J76XziOSX/QcunOn1u/sqaM=
 =3rYn
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Highlights:

   - Enable support for memory protection keys aka "pkeys" on Power7/8/9
     when using the hash table MMU.

   - Extend our interrupt soft masking to support masking PMU interrupts
     as well as "normal" interrupts, and then use that to implement
     local_t for a ~4x speedup vs the current atomics-based
     implementation.

   - A new driver "ocxl" for "Open Coherent Accelerator Processor
     Interface (OpenCAPI)" devices.

   - Support for new device tree properties on PowerVM to describe
     hotpluggable memory and devices.

   - Add support for CLOCK_{REALTIME/MONOTONIC}_COARSE to the 64-bit
     VDSO.

   - Freescale updates from Scott: fixes for CPM GPIO and an FSL PCI
     erratum workaround, plus a minor cleanup patch.

  As well as quite a lot of other changes all over the place, and small
  fixes and cleanups as always.

  Thanks to: Alan Modra, Alastair D'Silva, Alexey Kardashevskiy,
  Alistair Popple, Andreas Schwab, Andrew Donnellan, Aneesh Kumar K.V,
  Anju T Sudhakar, Anshuman Khandual, Anton Blanchard, Arnd Bergmann,
  Balbir Singh, Benjamin Herrenschmidt, Bhaktipriya Shridhar, Bryant G.
  Ly, Cédric Le Goater, Christophe Leroy, Christophe Lombard, Cyril Bur,
  David Gibson, Desnes A. Nunes do Rosario, Dmitry Torokhov, Frederic
  Barrat, Geert Uytterhoeven, Guilherme G. Piccoli, Gustavo A. R. Silva,
  Gustavo Romero, Ivan Mikhaylov, Joakim Tjernlund, Joe Perches, Josh
  Poimboeuf, Juan J. Alvarez, Julia Cartwright, Kamalesh Babulal,
  Madhavan Srinivasan, Mahesh Salgaonkar, Mathieu Malaterre, Michael
  Bringmann, Michael Hanselmann, Michael Neuling, Nathan Fontenot,
  Naveen N. Rao, Nicholas Piggin, Paul Mackerras, Philippe Bergheaud,
  Ram Pai, Russell Currey, Santosh Sivaraj, Scott Wood, Seth Forshee,
  Simon Guo, Stewart Smith, Sukadev Bhattiprolu, Thiago Jung Bauermann,
  Vaibhav Jain, Vasyl Gomonovych"

* tag 'powerpc-4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (199 commits)
  powerpc/mm/radix: Fix build error when RADIX_MMU=n
  macintosh/ams-input: Use true and false for boolean values
  macintosh: change some data types from int to bool
  powerpc/watchdog: Print the NIP in soft_nmi_interrupt()
  powerpc/watchdog: regs can't be null in soft_nmi_interrupt()
  powerpc/watchdog: Tweak watchdog printks
  powerpc/cell: Remove axonram driver
  rtc-opal: Fix handling of firmware error codes, prevent busy loops
  powerpc/mpc52xx_gpt: make use of raw_spinlock variants
  macintosh/adb: Properly mark continued kernel messages
  powerpc/pseries: Fix cpu hotplug crash with memoryless nodes
  powerpc/numa: Ensure nodes initialized for hotplug
  powerpc/numa: Use ibm,max-associativity-domains to discover possible nodes
  powerpc/kernel: Block interrupts when updating TIDR
  powerpc/powernv/idoa: Remove unnecessary pcidev from pci_dn
  powerpc/mm/nohash: do not flush the entire mm when range is a single page
  powerpc/pseries: Add Initialization of VF Bars
  powerpc/pseries/pci: Associate PEs to VFs in configure SR-IOV
  powerpc/eeh: Add EEH notify resume sysfs
  powerpc/eeh: Add EEH operations to notify resume
  ...
Linus Torvalds 2018-02-02 10:01:04 -08:00
commit 03f51d4efa
320 changed files with 11619 additions and 3066 deletions


@ -0,0 +1,35 @@
What: /sys/class/ocxl/<afu name>/afu_version
Date: January 2018
Contact: linuxppc-dev@lists.ozlabs.org
Description: read only
Version of the AFU, in the format <major>:<minor>
Reflects what is read in the configuration space of the AFU
What: /sys/class/ocxl/<afu name>/contexts
Date: January 2018
Contact: linuxppc-dev@lists.ozlabs.org
Description: read only
Number of contexts for the AFU, in the format <n>/<max>
where:
n: number of currently active contexts, for debug
max: maximum number of contexts supported by the AFU
What: /sys/class/ocxl/<afu name>/pp_mmio_size
Date: January 2018
Contact: linuxppc-dev@lists.ozlabs.org
Description: read only
Size of the per-process mmio area, as defined in the
configuration space of the AFU
What: /sys/class/ocxl/<afu name>/global_mmio_size
Date: January 2018
Contact: linuxppc-dev@lists.ozlabs.org
Description: read only
Size of the global mmio area, as defined in the
configuration space of the AFU
What: /sys/class/ocxl/<afu name>/global_mmio_area
Date: January 2018
Contact: linuxppc-dev@lists.ozlabs.org
Description: read/write
Gives access to the global mmio area for the AFU
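
Taken together, these attributes can be read with ordinary file I/O. The short sketch below is only an illustration: the AFU directory name used in the path is a hypothetical placeholder, and only the attribute names documented above are assumed to exist.

    #include <stdio.h>

    int main(void)
    {
        /* Attribute names as documented above; the AFU directory name
         * ("IBM,MEMCPY3.0004:00:00.1.0") is a made-up placeholder. */
        const char *attrs[] = { "afu_version", "contexts",
                                "pp_mmio_size", "global_mmio_size" };
        char path[256], line[64];
        int i;

        for (i = 0; i < 4; i++) {
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/class/ocxl/IBM,MEMCPY3.0004:00:00.1.0/%s",
                     attrs[i]);
            f = fopen(path, "r");
            if (!f)
                continue;   /* attribute or device not present */
            if (fgets(line, sizeof(line), f))
                printf("%s: %s", attrs[i], line);
            fclose(f);
        }
        return 0;
    }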


@ -0,0 +1,160 @@
========================================================
OpenCAPI (Open Coherent Accelerator Processor Interface)
========================================================
OpenCAPI is an interface between processors and accelerators. It aims
at being low-latency and high-bandwidth. The specification is
developed by the `OpenCAPI Consortium <http://opencapi.org/>`_.
It allows an accelerator (which could be an FPGA, an ASIC, ...) to access
the host memory coherently, using virtual addresses. An OpenCAPI
device can also host its own memory, that can be accessed from the
host.
OpenCAPI is known in Linux as 'ocxl', as the open, processor-agnostic
evolution of 'cxl' (the driver for the IBM CAPI interface for
powerpc), which was named that way to avoid confusion with the ISDN
CAPI subsystem.
High-level view
===============
OpenCAPI defines a Data Link Layer (DL) and Transaction Layer (TL), to
be implemented on top of a physical link. Any processor or device
implementing the DL and TL can start sharing memory.
::
   +-----------+                  +-------------+
   |           |                  |             |
   |           |                  | Accelerated |
   | Processor |                  |  Function   |
   |           |  +--------+      |    Unit     |  +--------+
   |           |--| Memory |      |    (AFU)    |--| Memory |
   |           |  +--------+      |             |  +--------+
   +-----------+                  +-------------+
         |                               |
   +-----------+                  +-------------+
   |    TL     |                  |     TLX     |
   +-----------+                  +-------------+
         |                               |
   +-----------+                  +-------------+
   |    DL     |                  |     DLX     |
   +-----------+                  +-------------+
         |                               |
         |              PHY              |
         +-------------------------------+
Device discovery
================
OpenCAPI relies on a PCI-like configuration space, implemented on the
device. So the host can discover AFUs by querying the config space.
OpenCAPI devices in Linux are treated like PCI devices (with a few
caveats). The firmware is expected to abstract the hardware as if it
was a PCI link. A lot of the existing PCI infrastructure is reused:
devices are scanned and BARs are assigned during the standard PCI
enumeration. Commands like 'lspci' can therefore be used to see what
devices are available.
The configuration space defines the AFU(s) that can be found on the
physical adapter, such as its name, how many memory contexts it can
work with, the size of its MMIO areas, ...
MMIO
====
OpenCAPI defines two MMIO areas for each AFU:
* the global MMIO area, with registers pertinent to the whole AFU.
* a per-process MMIO area, which has a fixed size for each context.
AFU interrupts
==============
OpenCAPI includes the possibility for an AFU to send an interrupt to a
host process. It is done through an 'intrp_req' defined in the
Transaction Layer, specifying a 64-bit object handle which defines the
interrupt.
The driver allows a process to allocate an interrupt and obtain its
64-bit object handle, that can be passed to the AFU.
char devices
============
The driver creates one char device per AFU found on the physical
device. A physical device may have multiple functions and each
function can have multiple AFUs. At the time of writing, though, it has
only been tested with devices exporting a single AFU.
Char devices can be found in /dev/ocxl/ and are named as:
/dev/ocxl/<AFU name>.<location>.<index>
where <AFU name> is a max 20-character long name, as found in the
config space of the AFU.
<location> is added by the driver and can help distinguish devices
when a system has more than one instance of the same OpenCAPI device.
<index> also helps distinguish AFUs in the unlikely case where a
device carries multiple copies of the same AFU.
Sysfs class
===========
An ocxl class is added for the devices representing the AFUs. See
/sys/class/ocxl. The layout is described in
Documentation/ABI/testing/sysfs-class-ocxl
User API
========
open
----
Based on the AFU definition found in the config space, an AFU may
support working with more than one memory context, in which case the
associated char device may be opened multiple times by different
processes.
ioctl
-----
OCXL_IOCTL_ATTACH:
Attach the memory context of the calling process to the AFU so that
the AFU can access its memory.
OCXL_IOCTL_IRQ_ALLOC:
Allocate an AFU interrupt and return an identifier.
OCXL_IOCTL_IRQ_FREE:
Free a previously allocated AFU interrupt.
OCXL_IOCTL_IRQ_SET_FD:
Associate an event fd to an AFU interrupt so that the user process
can be notified when the AFU sends an interrupt.
mmap
----
A process can mmap the per-process MMIO area for interactions with the
AFU.
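
Putting the pieces above together, a minimal user-space flow looks roughly like the sketch below. It is only an illustration: the device path is a made-up placeholder, the per-process MMIO size is hard-coded instead of being read from sysfs, mapping it at offset 0 is an assumption, and the ioctl argument types are taken from the uapi header added by this series (include/uapi/misc/ocxl.h, installed as <misc/ocxl.h>), which should be treated as the authoritative reference.

::

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <misc/ocxl.h>   /* OCXL_IOCTL_* and their argument structs */

    int main(void)
    {
        struct ocxl_ioctl_attach attach;
        uint64_t irq_handle = 0;
        void *pp_mmio;
        int fd;

        /* <AFU name>.<location>.<index>, as described in "char devices";
         * the name used here is a hypothetical placeholder */
        fd = open("/dev/ocxl/IBM,MEMCPY3.0004:00:00.1.0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* OCXL_IOCTL_ATTACH: attach this process's memory context */
        memset(&attach, 0, sizeof(attach));
        if (ioctl(fd, OCXL_IOCTL_ATTACH, &attach)) {
            perror("OCXL_IOCTL_ATTACH");
            return 1;
        }

        /* OCXL_IOCTL_IRQ_ALLOC: obtain the 64-bit handle the AFU will use
         * to trigger the interrupt (how it is handed to the AFU is
         * AFU-specific, typically via a per-process MMIO register) */
        if (ioctl(fd, OCXL_IOCTL_IRQ_ALLOC, &irq_handle))
            perror("OCXL_IOCTL_IRQ_ALLOC");

        /* Map the per-process MMIO area; its real size is advertised in
         * /sys/class/ocxl/<afu name>/pp_mmio_size (hard-coded here) */
        pp_mmio = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (pp_mmio == MAP_FAILED)
            perror("mmap");

        /* ... program the AFU through pp_mmio, pass irq_handle to it ... */

        if (pp_mmio != MAP_FAILED)
            munmap(pp_mmio, 4096);
        if (irq_handle)
            ioctl(fd, OCXL_IOCTL_IRQ_FREE, &irq_handle);
        close(fd);
        return 0;
    }

An event fd registered with OCXL_IOCTL_IRQ_SET_FD can then be polled to learn
when the AFU has raised the interrupt.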


@ -1309,7 +1309,7 @@ number and level/sense information. All interrupt children in an
OpenPIC interrupt domain use 2 cells per interrupt in their interrupts
property.
The PCI bus binding specifies a #interrupt-cell value of 1 to encode
The PCI bus binding specifies a #interrupt-cells value of 1 to encode
which interrupt pin (INTA,INTB,INTC,INTD) is used.
2) interrupt-parent property


@ -46,7 +46,6 @@ stall the CPU for an extended period, you should also not attempt to
implement direct_access.
These block devices may be used for inspiration:
- axonram: Axon DDR2 device driver
- brd: RAM backed block device driver
- dcssblk: s390 dcss block device driver
- pmem: NVDIMM persistent memory driver


@ -326,6 +326,7 @@ Code Seq#(hex) Include File Comments
0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org>
0xC0 00-0F linux/usb/iowarrior.h
0xCA 00-0F uapi/misc/cxl.h
0xCA 10-2F uapi/misc/ocxl.h
0xCA 80-BF uapi/scsi/cxlflash_ioctl.h
0xCB 00-1F CBM serial IEC bus in development:
<mailto:michael.klein@puffin.lb.shuttle.de>


@ -9909,6 +9909,18 @@ M: Josh Poimboeuf <jpoimboe@redhat.com>
S: Supported
F: tools/objtool/
OCXL (Open Coherent Accelerator Processor Interface OpenCAPI) DRIVER
M: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
M: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
F: arch/powerpc/platforms/powernv/ocxl.c
F: arch/powerpc/include/asm/pnv-ocxl.h
F: drivers/misc/ocxl/
F: include/misc/ocxl*
F: include/uapi/misc/ocxl.h
F: Documentation/accelerators/ocxl.txt
OMAP AUDIO SUPPORT
M: Peter Ujfalusi <peter.ujfalusi@ti.com>
M: Jarkko Nikula <jarkko.nikula@bitmer.com>


@ -143,6 +143,7 @@ config PPC
select ARCH_HAS_PMEM_API if PPC64
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE if PPC64
select ARCH_HAS_UBSAN_SANITIZE_ALL
@ -150,6 +151,7 @@ config PPC
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
@ -180,8 +182,6 @@ config PPC
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select HAVE_CBPF_JIT if !PPC64
select HAVE_CONTEXT_TRACKING if PPC64
select HAVE_DEBUG_KMEMLEAK
@ -868,6 +868,21 @@ config SECCOMP
If unsure, say Y. Only embedded should say N here.
config PPC_MEM_KEYS
prompt "PowerPC Memory Protection Keys"
def_bool y
depends on PPC_BOOK3S_64
select ARCH_USES_HIGH_VMA_FLAGS
select ARCH_HAS_PKEYS
help
Memory Protection Keys provides a mechanism for enforcing
page-based protections, but without requiring modification of the
page tables when an application changes protection domains.
For details, see Documentation/vm/protection-keys.txt
If unsure, say y.
endmenu
config ISA_DMA_API
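
For context, the protection-keys mechanism selected by PPC_MEM_KEYS above is consumed from user space through the pkey system calls. A minimal sketch, assuming a glibc recent enough to provide the pkey_alloc()/pkey_mprotect()/pkey_free() wrappers, is shown below; the program itself is illustrative only.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Allocate a key whose current access rights forbid writes */
        int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
        if (pkey < 0) {
            perror("pkey_alloc");   /* no kernel/CPU pkey support */
            return 1;
        }

        /* Tag the page with the key; changing the key's access rights
         * later does not require the page tables to be rewritten, which
         * is the point of the feature described in the help text above */
        if (pkey_mprotect(buf, page, PROT_READ | PROT_WRITE, pkey)) {
            perror("pkey_mprotect");
            return 1;
        }

        printf("read through the key is still allowed: %d\n", buf[0]);
        /* buf[0] = 1; would now fault, since PKEY_DISABLE_WRITE is set */

        pkey_free(pkey);
        return 0;
    }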


@ -90,6 +90,10 @@ config MSI_BITMAP_SELFTEST
depends on DEBUG_KERNEL
default n
config PPC_IRQ_SOFT_MASK_DEBUG
bool "Include extra checks for powerpc irq soft masking"
default n
config XMON
bool "Include xmon kernel debugger"
depends on DEBUG_KERNEL
@ -368,7 +372,7 @@ config PPC_PTDUMP
config PPC_HTDUMP
def_bool y
depends on PPC_PTDUMP && PPC_BOOK3S
depends on PPC_PTDUMP && PPC_BOOK3S_64
config PPC_FAST_ENDIAN_SWITCH
bool "Deprecated fast endian-switch syscall"


@ -63,6 +63,7 @@ UTS_MACHINE := $(subst $(space),,$(machine-y))
ifdef CONFIG_PPC32
KBUILD_LDFLAGS_MODULE += arch/powerpc/lib/crtsavres.o
else
KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/powerpc/kernel/module.lds
ifeq ($(call ld-ifversion, -ge, 225000000, y),y)
# Have the linker provide sfpr if possible.
# There is a corresponding test in arch/powerpc/lib/Makefile


@ -108,10 +108,10 @@ src-wlib-y := string.S crt0.S stdio.c decompress.c main.c \
$(libfdt) libfdt-wrapper.c \
ns16550.c serial.c simple_alloc.c div64.S util.S \
elf_util.c $(zlib-y) devtree.c stdlib.c \
oflib.c ofconsole.c cuboot.c cpm-serial.c \
uartlite.c opal.c
oflib.c ofconsole.c cuboot.c
src-wlib-$(CONFIG_PPC_MPC52XX) += mpc52xx-psc.c
src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) += opal-calls.S
src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) += opal-calls.S opal.c
ifndef CONFIG_PPC64_BOOT_WRAPPER
src-wlib-y += crtsavres.S
endif
@ -120,6 +120,8 @@ src-wlib-$(CONFIG_44x) += 4xx.c ebony.c bamboo.c
src-wlib-$(CONFIG_PPC_8xx) += mpc8xx.c planetcore.c fsl-soc.c
src-wlib-$(CONFIG_PPC_82xx) += pq2.c fsl-soc.c planetcore.c
src-wlib-$(CONFIG_EMBEDDED6xx) += mpsc.c mv64x60.c mv64x60_i2c.c ugecon.c fsl-soc.c
src-wlib-$(CONFIG_XILINX_VIRTEX) += uartlite.c
src-wlib-$(CONFIG_CPM) += cpm-serial.c
src-plat-y := of.c epapr.c
src-plat-$(CONFIG_40x) += fixed-head.S ep405.c cuboot-hotfoot.c \


@ -105,24 +105,24 @@
reg = <0 0x0 0x02000000>;
compatible = "cfi-flash";
bank-width = <2>;
partition@0x0 {
partition@0 {
label = "u-boot";
reg = <0x00000000 0x00040000>;
read-only;
};
partition@0x00040000 {
partition@40000 {
label = "env";
reg = <0x00040000 0x00020000>;
};
partition@0x00060000 {
partition@60000 {
label = "dtb";
reg = <0x00060000 0x00020000>;
};
partition@0x00080000 {
partition@80000 {
label = "kernel";
reg = <0x00080000 0x00500000>;
};
partition@0x00580000 {
partition@580000 {
label = "root";
reg = <0x00580000 0x00A80000>;
};


@ -216,7 +216,7 @@
interrupts = <39 2>;
};
IIC0: i2c@00000000 {
IIC0: i2c@0 {
compatible = "ibm,iic-476gtr", "ibm,iic";
reg = <0x0 0x00000020>;
interrupt-parent = <&MPIC>;
@ -229,7 +229,7 @@
};
};
IIC1: i2c@00000100 {
IIC1: i2c@100 {
compatible = "ibm,iic-476gtr", "ibm,iic";
reg = <0x100 0x00000020>;
interrupt-parent = <&MPIC>;


@ -276,14 +276,14 @@
>;
};
cpu-error@0070 {
cpu-error@70 {
compatible = "marvell,mv64360-cpu-error";
reg = <0x0070 0x10 0x0128 0x28>;
interrupts = <3>;
interrupt-parent = <&PIC>;
};
sram-ctrl@0380 {
sram-ctrl@380 {
compatible = "marvell,mv64360-sram-ctrl";
reg = <0x0380 0x80>;
interrupts = <13>;
@ -311,7 +311,7 @@
interrupt-parent = <&PIC>;
};
/* Devices attached to the device controller */
devicebus@045c {
devicebus@45c {
#address-cells = <2>;
#size-cells = <1>;
compatible = "marvell,mv64306-devctrl";


@ -108,7 +108,7 @@
reg = <0x50000000 0x4>;
};
IIC0: i2c@00000000 {
IIC0: i2c@0 {
compatible = "ibm,iic-currituck", "ibm,iic";
reg = <0x0 0x00000014>;
interrupt-parent = <&MPIC>;


@ -126,7 +126,7 @@
par_io@e0100 {
num-ports = <7>;
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x4 0xa 0x1 0x0 0x2 0x0 /* TxD0 */
@ -154,7 +154,7 @@
0x1 0x1f 0x2 0x0 0x3 0x0>; /* GTX125 */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x5 0xa 0x1 0x0 0x2 0x0 /* TxD0 */
@ -228,22 +228,22 @@
/* These are the same PHYs as on
* gianfar's MDIO bus */
qe_phy0: ethernet-phy@07 {
qe_phy0: ethernet-phy@7 {
interrupt-parent = <&mpic>;
interrupts = <1 1 0 0>;
reg = <0x7>;
};
qe_phy1: ethernet-phy@01 {
qe_phy1: ethernet-phy@1 {
interrupt-parent = <&mpic>;
interrupts = <2 1 0 0>;
reg = <0x1>;
};
qe_phy2: ethernet-phy@02 {
qe_phy2: ethernet-phy@2 {
interrupt-parent = <&mpic>;
interrupts = <1 1 0 0>;
reg = <0x2>;
};
qe_phy3: ethernet-phy@03 {
qe_phy3: ethernet-phy@3 {
interrupt-parent = <&mpic>;
interrupts = <2 1 0 0>;
reg = <0x3>;


@ -141,7 +141,7 @@
gpio-controller;
};
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x2 0x1f 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -161,7 +161,7 @@
0x2 0x14 0x1 0x0 0x2 0x0>; /* ENET1_GTXCLK */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x2 0x1f 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -181,7 +181,7 @@
0x2 0x2 0x1 0x0 0x2 0x0>; /* ENET2_GTXCLK */
};
pio3: ucc_pin@03 {
pio3: ucc_pin@3 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x2 0x1f 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -201,7 +201,7 @@
0x2 0x19 0x1 0x0 0x2 0x0>; /* ENET3_GTXCLK */
};
pio4: ucc_pin@04 {
pio4: ucc_pin@4 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x2 0x1f 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -272,30 +272,30 @@
reg = <0x2120 0x18>;
compatible = "fsl,ucc-mdio";
qe_phy0: ethernet-phy@07 {
qe_phy0: ethernet-phy@7 {
interrupt-parent = <&mpic>;
interrupts = <1 1 0 0>;
reg = <0x7>;
};
qe_phy1: ethernet-phy@01 {
qe_phy1: ethernet-phy@1 {
interrupt-parent = <&mpic>;
interrupts = <2 1 0 0>;
reg = <0x1>;
};
qe_phy2: ethernet-phy@02 {
qe_phy2: ethernet-phy@2 {
interrupt-parent = <&mpic>;
interrupts = <3 1 0 0>;
reg = <0x2>;
};
qe_phy3: ethernet-phy@03 {
qe_phy3: ethernet-phy@3 {
interrupt-parent = <&mpic>;
interrupts = <4 1 0 0>;
reg = <0x3>;
};
qe_phy5: ethernet-phy@04 {
qe_phy5: ethernet-phy@4 {
reg = <0x04>;
};
qe_phy7: ethernet-phy@06 {
qe_phy7: ethernet-phy@6 {
reg = <0x6>;
};
tbi1: tbi-phy@11 {


@ -202,7 +202,7 @@
ranges = <0x0 0xe0100 0x60>;
device_type = "par_io";
num-ports = <3>;
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -225,7 +225,7 @@
0x0 0x10 0x2 0x0 0x2 0x0>; /* ENET1_COL */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -296,7 +296,7 @@
interrupts = <4 1 0 0>;
reg = <0x0>;
};
qe_phy1: ethernet-phy@03 {
qe_phy1: ethernet-phy@3 {
interrupt-parent = <&mpic>;
interrupts = <5 1 0 0>;
reg = <0x3>;


@ -245,7 +245,7 @@
ranges = <0x0 0xe0100 0x60>;
device_type = "par_io";
num-ports = <3>;
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -268,7 +268,7 @@
0x0 0x10 0x2 0x0 0x2 0x0>; /* ENET1_COL */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -283,7 +283,7 @@
0x1 0x8 0x2 0x0 0x2 0x0>; /* ENET5_RX_ER_SER5_CD_B */
};
pio3: ucc_pin@03 {
pio3: ucc_pin@3 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x0 0x16 0x2 0x0 0x2 0x0 /* SER7_CD_B*/
@ -293,7 +293,7 @@
0x0 0x15 0x1 0x0 0x2 0x0>; /* SER7_TXD0*/
};
pio4: ucc_pin@04 {
pio4: ucc_pin@4 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x0 0x2 0x0 0x2 0x0 /* SER3_CD_B*/


@ -106,7 +106,7 @@
interrupts = <4 1 0 0>;
reg = <0x6>;
};
qe_phy1: ethernet-phy@03 {
qe_phy1: ethernet-phy@3 {
interrupt-parent = <&mpic>;
interrupts = <5 1 0 0>;
reg = <0x3>;


@ -172,7 +172,7 @@
ranges = <0x0 0xe0100 0x60>;
device_type = "par_io";
num-ports = <3>;
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -195,7 +195,7 @@
0x0 0x10 0x2 0x0 0x2 0x0>; /* ENET1_COL */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x13 0x1 0x0 0x1 0x0 /* QE_MUX_MDC */
@ -210,7 +210,7 @@
0x1 0x8 0x2 0x0 0x2 0x0>; /* ENET5_RX_ER_SER5_CD_B */
};
pio3: ucc_pin@03 {
pio3: ucc_pin@3 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x0 0x16 0x2 0x0 0x2 0x0 /* SER7_CD_B*/
@ -220,7 +220,7 @@
0x0 0x15 0x1 0x0 0x2 0x0>; /* SER7_TXD0*/
};
pio4: ucc_pin@04 {
pio4: ucc_pin@4 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0x1 0x0 0x2 0x0 0x2 0x0 /* SER3_CD_B*/


@ -61,7 +61,7 @@
};
mdio@fc000 {
phy_sgmii_2: ethernet-phy@03 {
phy_sgmii_2: ethernet-phy@3 {
reg = <0x03>;
};
};


@ -77,23 +77,23 @@
};
mdio0: mdio@fc000 {
phy_sgmii_0: ethernet-phy@02 {
phy_sgmii_0: ethernet-phy@2 {
reg = <0x02>;
};
phy_sgmii_1: ethernet-phy@03 {
phy_sgmii_1: ethernet-phy@3 {
reg = <0x03>;
};
phy_sgmii_2: ethernet-phy@01 {
phy_sgmii_2: ethernet-phy@1 {
reg = <0x01>;
};
phy_rgmii_0: ethernet-phy@04 {
phy_rgmii_0: ethernet-phy@4 {
reg = <0x04>;
};
phy_rgmii_1: ethernet-phy@05 {
phy_rgmii_1: ethernet-phy@5 {
reg = <0x05>;
};
};


@ -59,7 +59,7 @@
};
mdio@fc000 {
phy_sgmii_2: ethernet-phy@03 {
phy_sgmii_2: ethernet-phy@3 {
reg = <0x03>;
};
};


@ -148,15 +148,15 @@
};
mdio0: mdio@fc000 {
phy_sgmii_2: ethernet-phy@03 {
phy_sgmii_2: ethernet-phy@3 {
reg = <0x03>;
};
phy_rgmii_0: ethernet-phy@01 {
phy_rgmii_0: ethernet-phy@1 {
reg = <0x01>;
};
phy_rgmii_1: ethernet-phy@02 {
phy_rgmii_1: ethernet-phy@2 {
reg = <0x02>;
};
};


@ -583,21 +583,21 @@
};
};
OHCI1: ohci@02040000 {
OHCI1: ohci@2040000 {
compatible = "ohci-le";
reg = <0x02040000 0xa0>;
interrupt-parent = <&UIC1_3>;
interrupts = <28 0x8 29 0x8>;
};
OHCI2: ohci@02080000 {
OHCI2: ohci@2080000 {
compatible = "ohci-le";
reg = <0x02080000 0xa0>;
interrupt-parent = <&UIC1_3>;
interrupts = <30 0x8 31 0x8>;
};
EHCI: ehci@02000000 {
EHCI: ehci@2000000 {
compatible = "usb-ehci";
reg = <0x02000000 0xa4>;
interrupt-parent = <&UIC1_3>;


@ -54,13 +54,13 @@
ranges = <0x0c000000 0x0c000000 0x00010000>;
interrupt-parent = <&PIC>;
video@0c002000 {
video@c002000 {
compatible = "nintendo,flipper-vi";
reg = <0x0c002000 0x100>;
interrupts = <8>;
};
processor-interface@0c003000 {
processor-interface@c003000 {
compatible = "nintendo,flipper-pi";
reg = <0x0c003000 0x100>;
@ -71,7 +71,7 @@
};
};
dsp@0c005000 {
dsp@c005000 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "nintendo,flipper-dsp";
@ -84,26 +84,26 @@
};
};
disk@0c006000 {
disk@c006000 {
compatible = "nintendo,flipper-di";
reg = <0x0c006000 0x40>;
interrupts = <2>;
};
audio@0c006c00 {
audio@c006c00 {
compatible = "nintendo,flipper-ai";
reg = <0x0c006c00 0x20>;
interrupts = <6>;
};
gamepad-controller@0c006400 {
gamepad-controller@c006400 {
compatible = "nintendo,flipper-si";
reg = <0x0c006400 0x100>;
interrupts = <3>;
};
/* External Interface bus */
exi@0c006800 {
exi@c006800 {
compatible = "nintendo,flipper-exi";
reg = <0x0c006800 0x40>;
virtual-reg = <0x0c006800>;


@ -237,7 +237,7 @@
};
};
PCIE0: pciex@0a0000000 {
PCIE0: pciex@a0000000 {
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;


@ -322,7 +322,7 @@
};
};
PCIE0: pciex@0a0000000 {
PCIE0: pciex@a0000000 {
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;
@ -363,7 +363,7 @@
0x0 0x0 0x0 0x4 &UIC2 0x3 0x4 /* swizzled int D */>;
};
PCIE1: pciex@0c0000000 {
PCIE1: pciex@c0000000 {
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;


@ -434,27 +434,27 @@
compatible = "fsl,ucc-mdio";
/* Piggy2 (UCC4, MDIO 0x00, RMII) */
phy_piggy2: ethernet-phy@00 {
phy_piggy2: ethernet-phy@0 {
reg = <0x0>;
};
/* Eth-1 (UCC5, MDIO 0x08, RMII) */
phy_eth1: ethernet-phy@08 {
phy_eth1: ethernet-phy@8 {
reg = <0x08>;
};
/* Eth-2 (UCC6, MDIO 0x09, RMII) */
phy_eth2: ethernet-phy@09 {
phy_eth2: ethernet-phy@9 {
reg = <0x09>;
};
/* Eth-3 (UCC7, MDIO 0x0a, RMII) */
phy_eth3: ethernet-phy@0a {
phy_eth3: ethernet-phy@a {
reg = <0x0a>;
};
/* Eth-4 (UCC8, MDIO 0x0b, RMII) */
phy_eth4: ethernet-phy@0b {
phy_eth4: ethernet-phy@b {
reg = <0x0b>;
};


@ -268,7 +268,7 @@
};
};
PCIE0: pciex@0a0000000 {
PCIE0: pciex@a0000000 {
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;
@ -309,7 +309,7 @@
0x0 0x0 0x0 0x4 &UIC2 0x3 0x4 /* swizzled int D */>;
};
PCIE1: pciex@0c0000000 {
PCIE1: pciex@c0000000 {
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;


@ -186,7 +186,7 @@
device_type = "par_io";
num-ports = <7>;
pio3: ucc_pin@03 {
pio3: ucc_pin@3 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
3 4 3 0 2 0 /* MDIO */
@ -208,7 +208,7 @@
1 12 1 0 1 0 /* TX_EN */
1 13 2 0 1 0>; /* CRS */
};
pio4: ucc_pin@04 {
pio4: ucc_pin@4 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
3 31 2 0 1 0 /* RX_CLK (CLK7) */
@ -228,7 +228,7 @@
1 30 1 0 1 0 /* TX_EN */
1 31 2 0 1 0>; /* CRS */
};
pio5: ucc_pin@05 {
pio5: ucc_pin@5 {
pio-map = <
/*
* open has
@ -352,12 +352,12 @@
reg = <0x2320 0x18>;
compatible = "fsl,ucc-mdio";
phy3: ethernet-phy@03 {
phy3: ethernet-phy@3 {
interrupt-parent = <&ipic>;
interrupts = <17 0x8>;
reg = <0x3>;
};
phy4: ethernet-phy@04 {
phy4: ethernet-phy@4 {
interrupt-parent = <&ipic>;
interrupts = <18 0x8>;
reg = <0x4>;


@ -175,7 +175,7 @@
gpio-controller;
};
ucc2pio:ucc_pin@02 {
ucc2pio:ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
3 4 3 0 2 0 /* MDIO */
@ -197,7 +197,7 @@
0 30 1 0 1 0 /* TX_EN */
0 31 2 0 1 0>; /* CRS */
};
ucc3pio:ucc_pin@03 {
ucc3pio:ucc_pin@3 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0 13 2 0 1 0 /* RX_CLK (CLK9) */
@ -310,12 +310,12 @@
reg = <0x3120 0x18>;
compatible = "fsl,ucc-mdio";
phy00:ethernet-phy@00 {
phy00:ethernet-phy@0 {
interrupt-parent = <&ipic>;
interrupts = <0>;
reg = <0x0>;
};
phy04:ethernet-phy@04 {
phy04:ethernet-phy@4 {
interrupt-parent = <&ipic>;
interrupts = <0>;
reg = <0x4>;


@ -228,7 +228,7 @@
gpio-controller;
};
pio1: ucc_pin@01 {
pio1: ucc_pin@1 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0 3 1 0 1 0 /* TxD0 */
@ -255,7 +255,7 @@
2 9 1 0 3 0 /* GTX_CLK - CLK10 */
2 8 2 0 1 0>; /* GTX125 - CLK9 */
};
pio2: ucc_pin@02 {
pio2: ucc_pin@2 {
pio-map = <
/* port pin dir open_drain assignment has_irq */
0 17 1 0 1 0 /* TxD0 */
@ -393,12 +393,12 @@
reg = <0x2120 0x18>;
compatible = "fsl,ucc-mdio";
phy0: ethernet-phy@00 {
phy0: ethernet-phy@0 {
interrupt-parent = <&ipic>;
interrupts = <17 0x8>;
reg = <0x0>;
};
phy1: ethernet-phy@01 {
phy1: ethernet-phy@1 {
interrupt-parent = <&ipic>;
interrupts = <18 0x8>;
reg = <0x1>;


@ -40,12 +40,12 @@
compatible = "intel,JS28F128", "cfi-flash";
bank-width = <4>;
device-width = <1>;
partition@0x0 {
partition@0 {
label = "space";
/* FC000000 -> FFEFFFFF */
reg = <0x00000000 0x03f00000>;
};
partition@0x03f00000 {
partition@3f00000 {
label = "bootloader";
/* FFF00000 -> FFFFFFFF */
reg = <0x03f00000 0x00100000>;
@ -95,12 +95,12 @@
reg = <0x6 0x0 0x800000>;
bank-width = <1>;
device-width = <1>;
partition@0x0 {
partition@0 {
label = "space";
/* EF800000 -> EFF9FFFF */
reg = <0x00000000 0x007a0000>;
};
partition@0x7a0000 {
partition@7a0000 {
label = "bootloader";
/* EFFA0000 -> EFFFFFFF */
reg = <0x007a0000 0x00060000>;


@ -38,12 +38,12 @@
reg = <0x0 0x0 0x800000>;
bank-width = <1>;
device-width = <1>;
partition@0x0 {
partition@0 {
label = "space";
/* FF800000 -> FFF9FFFF */
reg = <0x00000000 0x007a0000>;
};
partition@0x7a0000 {
partition@7a0000 {
label = "bootloader";
/* FFFA0000 -> FFFFFFFF */
reg = <0x007a0000 0x00060000>;
@ -92,12 +92,12 @@
compatible = "intel,JS28F128", "cfi-flash";
bank-width = <4>;
device-width = <1>;
partition@0x0 {
partition@0 {
label = "space";
/* EC000000 -> EFEFFFFF */
reg = <0x00000000 0x03f00000>;
};
partition@0x03f00000 {
partition@3f00000 {
label = "bootloader";
/* EFF00000 -> EFFFFFFF */
reg = <0x03f00000 0x00100000>;


@ -65,14 +65,14 @@
0x0d800000 0x0d800000 0x00800000>;
interrupt-parent = <&PIC0>;
video@0c002000 {
video@c002000 {
compatible = "nintendo,hollywood-vi",
"nintendo,flipper-vi";
reg = <0x0c002000 0x100>;
interrupts = <8>;
};
processor-interface@0c003000 {
processor-interface@c003000 {
compatible = "nintendo,hollywood-pi",
"nintendo,flipper-pi";
reg = <0x0c003000 0x100>;
@ -84,7 +84,7 @@
};
};
dsp@0c005000 {
dsp@c005000 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "nintendo,hollywood-dsp",
@ -93,14 +93,14 @@
interrupts = <6>;
};
gamepad-controller@0d006400 {
gamepad-controller@d006400 {
compatible = "nintendo,hollywood-si",
"nintendo,flipper-si";
reg = <0x0d006400 0x100>;
interrupts = <3>;
};
audio@0c006c00 {
audio@c006c00 {
compatible = "nintendo,hollywood-ai",
"nintendo,flipper-ai";
reg = <0x0d006c00 0x20>;
@ -108,7 +108,7 @@
};
/* External Interface bus */
exi@0d006800 {
exi@d006800 {
compatible = "nintendo,hollywood-exi",
"nintendo,flipper-exi";
reg = <0x0d006800 0x40>;
@ -116,7 +116,7 @@
interrupts = <4>;
};
usb@0d040000 {
usb@d040000 {
compatible = "nintendo,hollywood-usb-ehci",
"usb-ehci";
reg = <0x0d040000 0x100>;
@ -124,7 +124,7 @@
interrupt-parent = <&PIC1>;
};
usb@0d050000 {
usb@d050000 {
compatible = "nintendo,hollywood-usb-ohci",
"usb-ohci";
reg = <0x0d050000 0x100>;
@ -132,7 +132,7 @@
interrupt-parent = <&PIC1>;
};
usb@0d060000 {
usb@d060000 {
compatible = "nintendo,hollywood-usb-ohci",
"usb-ohci";
reg = <0x0d060000 0x100>;
@ -140,7 +140,7 @@
interrupt-parent = <&PIC1>;
};
sd@0d070000 {
sd@d070000 {
compatible = "nintendo,hollywood-sdhci",
"sdhci";
reg = <0x0d070000 0x200>;
@ -148,7 +148,7 @@
interrupt-parent = <&PIC1>;
};
sdio@0d080000 {
sdio@d080000 {
compatible = "nintendo,hollywood-sdhci",
"sdhci";
reg = <0x0d080000 0x200>;
@ -156,14 +156,14 @@
interrupt-parent = <&PIC1>;
};
ipc@0d000000 {
ipc@d000000 {
compatible = "nintendo,hollywood-ipc";
reg = <0x0d000000 0x10>;
interrupts = <30>;
interrupt-parent = <&PIC1>;
};
PIC1: pic1@0d800030 {
PIC1: pic1@d800030 {
#interrupt-cells = <1>;
compatible = "nintendo,hollywood-pic";
reg = <0x0d800030 0x10>;
@ -171,7 +171,7 @@
interrupts = <14>;
};
GPIO: gpio@0d8000c0 {
GPIO: gpio@d8000c0 {
#gpio-cells = <2>;
compatible = "nintendo,hollywood-gpio";
reg = <0x0d8000c0 0x40>;
@ -203,12 +203,12 @@
*/
};
control@0d800100 {
control@d800100 {
compatible = "nintendo,hollywood-control";
reg = <0x0d800100 0x300>;
};
disk@0d806000 {
disk@d806000 {
compatible = "nintendo,hollywood-di";
reg = <0x0d806000 0x40>;
interrupts = <2>;


@ -124,20 +124,26 @@ int serial_console_init(void)
else if (dt_is_compatible(devp, "marvell,mv64360-mpsc"))
rc = mpsc_console_init(devp, &serial_cd);
#endif
#ifdef CONFIG_CPM
else if (dt_is_compatible(devp, "fsl,cpm1-scc-uart") ||
dt_is_compatible(devp, "fsl,cpm1-smc-uart") ||
dt_is_compatible(devp, "fsl,cpm2-scc-uart") ||
dt_is_compatible(devp, "fsl,cpm2-smc-uart"))
rc = cpm_console_init(devp, &serial_cd);
#endif
#ifdef CONFIG_PPC_MPC52XX
else if (dt_is_compatible(devp, "fsl,mpc5200-psc-uart"))
rc = mpc5200_psc_console_init(devp, &serial_cd);
#endif
#ifdef CONFIG_XILINX_VIRTEX
else if (dt_is_compatible(devp, "xlnx,opb-uartlite-1.00.b") ||
dt_is_compatible(devp, "xlnx,xps-uartlite-1.00.a"))
rc = uartlite_console_init(devp, &serial_cd);
#endif
#ifdef CONFIG_PPC64_BOOT_WRAPPER
else if (dt_is_compatible(devp, "ibm,opal-console-raw"))
rc = opal_console_init(devp, &serial_cd);
#endif
/* Add other serial console driver calls here */


@ -13,7 +13,6 @@ CONFIG_EXPERT=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_MPC86XADS=y
CONFIG_8xx_COPYBACK=y
CONFIG_8xx_CPU6=y
CONFIG_GEN_RTC=y
CONFIG_HZ_1000=y
CONFIG_MATH_EMULATION=y


@ -96,6 +96,7 @@ CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_NVME=y
CONFIG_IDE=y
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_GENERIC=y
@ -112,6 +113,7 @@ CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_BE2ISCSI=m
CONFIG_SCSI_AACRAID=y
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0


@ -311,6 +311,29 @@ static inline int pte_present(pte_t pte)
return pte_val(pte) & _PAGE_PRESENT;
}
/*
* We only find page table entry in the last level
* Hence no need for other accessors
*/
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
unsigned long pteval = pte_val(pte);
/*
* A read-only access is controlled by _PAGE_USER bit.
* We have _PAGE_READ set for WRITE and EXECUTE
*/
unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
if (write)
need_pte_bits |= _PAGE_WRITE;
if ((pteval & need_pte_bits) != need_pte_bits)
return false;
return true;
}
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*


@ -17,6 +17,12 @@
#define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE)
#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE)
#define H_PAGE_F_GIX_SHIFT 53
#define H_PAGE_F_SECOND _RPAGE_RPN44 /* HPTE is in 2ndary HPTEG */
#define H_PAGE_F_GIX (_RPAGE_RPN43 | _RPAGE_RPN42 | _RPAGE_RPN41)
#define H_PAGE_BUSY _RPAGE_RSV1 /* software: PTE & hash are busy */
#define H_PAGE_HASHPTE _RPAGE_RSV2 /* software: PTE & hash are busy */
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
H_PAGE_F_SECOND | H_PAGE_F_GIX)
@ -49,6 +55,20 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
}
#endif
/*
* 4K PTE format is different from 64K PTE format. Saving the hash_slot is just
* a matter of returning the PTE bits that need to be modified. On 64K PTE,
* things are a little more involved and hence need many more parameters to
* accomplish the same. However, we want to abstract this out from the caller by
* keeping the prototype consistent across the two formats.
*/
static inline unsigned long pte_set_hidx(pte_t *ptep, real_pte_t rpte,
unsigned int subpg_index, unsigned long hidx)
{
return (hidx << H_PAGE_F_GIX_SHIFT) &
(H_PAGE_F_SECOND | H_PAGE_F_GIX);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline char *get_hpte_slot_array(pmd_t *pmdp)


@ -13,21 +13,17 @@
*/
#define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */
#define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */
#define H_PAGE_BUSY _RPAGE_RPN44 /* software: PTE & hash are busy */
#define H_PAGE_HASHPTE _RPAGE_RPN43 /* PTE has associated HPTE */
/*
* We need to differentiate between explicit huge page and THP huge
* page, since THP huge page also need to track real subpage details
*/
#define H_PAGE_THP_HUGE H_PAGE_4K_PFN
/*
* Used to track subpage group valid if H_PAGE_COMBO is set
* This overloads H_PAGE_F_GIX and H_PAGE_F_SECOND
*/
#define H_PAGE_COMBO_VALID (H_PAGE_F_GIX | H_PAGE_F_SECOND)
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO)
/*
* we support 16 fragments per PTE page of 64K size.
*/
@ -55,24 +51,57 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
unsigned long *hidxp;
rpte.pte = pte;
rpte.hidx = 0;
if (pte_val(pte) & H_PAGE_COMBO) {
/*
* Make sure we order the hidx load against the H_PAGE_COMBO
* check. The store side ordering is done in __hash_page_4K
*/
smp_rmb();
hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
rpte.hidx = *hidxp;
}
/*
* Ensure that we do not read the hidx before we read the PTE. Because
* the writer side is expected to finish writing the hidx first followed
* by the PTE, by using smp_wmb(). pte_set_hash_slot() ensures that.
*/
smp_rmb();
hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
rpte.hidx = *hidxp;
return rpte;
}
/*
* Shift the hidx representation by one-modulo-0xf; i.e. hidx 0 is represented
* as 1, 1 as 2, ..., and 0xf as 0. This convention lets us represent an
* invalid hidx 0xf with a 0x0 bit value. PTEs are anyway zero'd when
* allocated. We don't have to zero them again; thus save on the initialization.
*/
#define HIDX_UNSHIFT_BY_ONE(x) ((x + 0xfUL) & 0xfUL) /* shift backward by one */
#define HIDX_SHIFT_BY_ONE(x) ((x + 0x1UL) & 0xfUL) /* shift forward by one */
#define HIDX_BITS(x, index) (x << (index << 2))
#define BITS_TO_HIDX(x, index) ((x >> (index << 2)) & 0xfUL)
#define INVALID_RPTE_HIDX 0x0UL
static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
if ((pte_val(rpte.pte) & H_PAGE_COMBO))
return (rpte.hidx >> (index<<2)) & 0xf;
return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
return HIDX_UNSHIFT_BY_ONE(BITS_TO_HIDX(rpte.hidx, index));
}
/*
* Commit the hidx and return the PTE bits that need to be modified. The caller is
* expected to modify the PTE bits accordingly and commit the PTE to memory.
*/
static inline unsigned long pte_set_hidx(pte_t *ptep, real_pte_t rpte,
unsigned int subpg_index, unsigned long hidx)
{
unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
rpte.hidx &= ~HIDX_BITS(0xfUL, subpg_index);
*hidxp = rpte.hidx | HIDX_BITS(HIDX_SHIFT_BY_ONE(hidx), subpg_index);
/*
* Anyone reading PTE must ensure hidx bits are read after reading the
* PTE by using the read-side barrier smp_rmb(). __real_pte() can be
* used for that.
*/
smp_wmb();
/* No PTE bits to be modified, return 0x0UL */
return 0x0UL;
}
#define __rpte_to_pte(r) ((r).pte)


@ -9,11 +9,6 @@
*
*/
#define H_PTE_NONE_MASK _PAGE_HPTEFLAGS
#define H_PAGE_F_GIX_SHIFT 56
#define H_PAGE_BUSY _RPAGE_RSV1 /* software: PTE & hash are busy */
#define H_PAGE_F_SECOND _RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */
#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44)
#define H_PAGE_HASHPTE _RPAGE_RPN43 /* PTE has associated HPTE */
#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/hash-64k.h>
@ -167,6 +162,9 @@ static inline int hash__pte_none(pte_t pte)
return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0;
}
unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
int ssize, real_pte_t rpte, unsigned int subpg_index);
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
* an horrible mess that I'm not going to try to clean up now but


@ -90,6 +90,8 @@
#define HPTE_R_PP0 ASM_CONST(0x8000000000000000)
#define HPTE_R_TS ASM_CONST(0x4000000000000000)
#define HPTE_R_KEY_HI ASM_CONST(0x3000000000000000)
#define HPTE_R_KEY_BIT0 ASM_CONST(0x2000000000000000)
#define HPTE_R_KEY_BIT1 ASM_CONST(0x1000000000000000)
#define HPTE_R_RPN_SHIFT 12
#define HPTE_R_RPN ASM_CONST(0x0ffffffffffff000)
#define HPTE_R_RPN_3_0 ASM_CONST(0x01fffffffffff000)
@ -104,6 +106,9 @@
#define HPTE_R_C ASM_CONST(0x0000000000000080)
#define HPTE_R_R ASM_CONST(0x0000000000000100)
#define HPTE_R_KEY_LO ASM_CONST(0x0000000000000e00)
#define HPTE_R_KEY_BIT2 ASM_CONST(0x0000000000000800)
#define HPTE_R_KEY_BIT3 ASM_CONST(0x0000000000000400)
#define HPTE_R_KEY_BIT4 ASM_CONST(0x0000000000000200)
#define HPTE_R_KEY (HPTE_R_KEY_LO | HPTE_R_KEY_HI)
#define HPTE_V_1TB_SEG ASM_CONST(0x4000000000000000)


@ -108,6 +108,16 @@ typedef struct {
#ifdef CONFIG_SPAPR_TCE_IOMMU
struct list_head iommu_group_mem_list;
#endif
#ifdef CONFIG_PPC_MEM_KEYS
/*
* Each bit represents one protection key.
* bit set -> key allocated
* bit unset -> key available for allocation
*/
u32 pkey_allocation_map;
s16 execute_only_pkey; /* key holding execute-only protection */
#endif
} mm_context_t;
/*


@ -14,8 +14,9 @@
*/
#define _PAGE_BIT_SWAP_TYPE 0
#define _PAGE_NA 0
#define _PAGE_RO 0
#define _PAGE_SHARED 0
#define _PAGE_USER 0
#define _PAGE_EXEC 0x00001 /* execute permission */
#define _PAGE_WRITE 0x00002 /* write access allowed */
@ -39,6 +40,7 @@
#define _RPAGE_RSV2 0x0800000000000000UL
#define _RPAGE_RSV3 0x0400000000000000UL
#define _RPAGE_RSV4 0x0200000000000000UL
#define _RPAGE_RSV5 0x00040UL
#define _PAGE_PTE 0x4000000000000000UL /* distinguishes PTEs from pointers */
#define _PAGE_PRESENT 0x8000000000000000UL /* pte contains a translation */
@ -58,6 +60,25 @@
/* Max physical address bit as per radix table */
#define _RPAGE_PA_MAX 57
#ifdef CONFIG_PPC_MEM_KEYS
#ifdef CONFIG_PPC_64K_PAGES
#define H_PTE_PKEY_BIT0 _RPAGE_RSV1
#define H_PTE_PKEY_BIT1 _RPAGE_RSV2
#else /* CONFIG_PPC_64K_PAGES */
#define H_PTE_PKEY_BIT0 0 /* _RPAGE_RSV1 is not available */
#define H_PTE_PKEY_BIT1 0 /* _RPAGE_RSV2 is not available */
#endif /* CONFIG_PPC_64K_PAGES */
#define H_PTE_PKEY_BIT2 _RPAGE_RSV3
#define H_PTE_PKEY_BIT3 _RPAGE_RSV4
#define H_PTE_PKEY_BIT4 _RPAGE_RSV5
#else /* CONFIG_PPC_MEM_KEYS */
#define H_PTE_PKEY_BIT0 0
#define H_PTE_PKEY_BIT1 0
#define H_PTE_PKEY_BIT2 0
#define H_PTE_PKEY_BIT3 0
#define H_PTE_PKEY_BIT4 0
#endif /* CONFIG_PPC_MEM_KEYS */
/*
* Max physical address bit we will use for now.
*
@ -121,13 +142,16 @@
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
_PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
_PAGE_SOFT_DIRTY)
#define H_PTE_PKEY (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
/*
* Mask of bits returned by pte_pgprot()
*/
#define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
_PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
_PAGE_SOFT_DIRTY)
_PAGE_SOFT_DIRTY | H_PTE_PKEY)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
@ -546,6 +570,40 @@ static inline int pte_present(pte_t pte)
{
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT));
}
#ifdef CONFIG_PPC_MEM_KEYS
extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
#else
static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
{
return true;
}
#endif /* CONFIG_PPC_MEM_KEYS */
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
unsigned long pteval = pte_val(pte);
/* Also check for pte_user */
unsigned long clear_pte_bits = _PAGE_PRIVILEGED;
/*
* _PAGE_READ is needed for any access and will be
* cleared for PROT_NONE
*/
unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_READ;
if (write)
need_pte_bits |= _PAGE_WRITE;
if ((pteval & need_pte_bits) != need_pte_bits)
return false;
if ((pteval & clear_pte_bits) == clear_pte_bits)
return false;
return arch_pte_access_permitted(pte_val(pte), write, 0);
}
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
@ -850,6 +908,11 @@ static inline int pud_bad(pud_t pud)
return hash__pud_bad(pud);
}
#define pud_access_permitted pud_access_permitted
static inline bool pud_access_permitted(pud_t pud, bool write)
{
return pte_access_permitted(pud_pte(pud), write);
}
#define pgd_write(pgd) pte_write(pgd_pte(pgd))
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
@ -889,6 +952,12 @@ static inline int pgd_bad(pgd_t pgd)
return hash__pgd_bad(pgd);
}
#define pgd_access_permitted pgd_access_permitted
static inline bool pgd_access_permitted(pgd_t pgd, bool write)
{
return pte_access_permitted(pgd_pte(pgd), write);
}
extern struct page *pgd_page(pgd_t pgd);
/* Pointers in the page table tree are physical addresses */
@ -1009,6 +1078,12 @@ static inline int pmd_protnone(pmd_t pmd)
#define __pmd_write(pmd) __pte_write(pmd_pte(pmd))
#define pmd_savedwrite(pmd) pte_savedwrite(pmd_pte(pmd))
#define pmd_access_permitted pmd_access_permitted
static inline bool pmd_access_permitted(pmd_t pmd, bool write)
{
return pte_access_permitted(pmd_pte(pmd), write);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);


@ -51,6 +51,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
#define arch_flush_lazy_mmu_mode() do {} while (0)
extern void hash__tlbiel_all(unsigned int action);
extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize,
int ssize, unsigned long flags);


@ -11,6 +11,12 @@ static inline int mmu_get_ap(int psize)
return mmu_psize_defs[psize].ap;
}
#ifdef CONFIG_PPC_RADIX_MMU
extern void radix__tlbiel_all(unsigned int action);
#else
static inline void radix__tlbiel_all(unsigned int action) { WARN_ON(1); };
#endif
extern void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
extern void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start,
@ -47,4 +53,5 @@ extern void radix__flush_tlb_lpid(unsigned long lpid);
extern void radix__flush_tlb_all(void);
extern void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
unsigned long address);
#endif


@ -8,6 +8,44 @@
#include <asm/book3s/64/tlbflush-hash.h>
#include <asm/book3s/64/tlbflush-radix.h>
/* TLB flush actions. Used as argument to tlbiel_all() */
enum {
TLB_INVAL_SCOPE_GLOBAL = 0, /* invalidate all TLBs */
TLB_INVAL_SCOPE_LPID = 1, /* invalidate TLBs for current LPID */
};
#ifdef CONFIG_PPC_NATIVE
static inline void tlbiel_all(void)
{
/*
* This is used for host machine check and bootup.
*
* This uses early_radix_enabled and implementations use
* early_cpu_has_feature etc because that works early in boot
* and this is the machine check path which is not performance
* critical.
*/
if (early_radix_enabled())
radix__tlbiel_all(TLB_INVAL_SCOPE_GLOBAL);
else
hash__tlbiel_all(TLB_INVAL_SCOPE_GLOBAL);
}
#else
static inline void tlbiel_all(void) { BUG(); };
#endif
static inline void tlbiel_all_lpid(bool radix)
{
/*
* This is used for guest machine check.
*/
if (radix)
radix__tlbiel_all(TLB_INVAL_SCOPE_LPID);
else
hash__tlbiel_all(TLB_INVAL_SCOPE_LPID);
}
#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
static inline void flush_pmd_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)


@ -133,9 +133,11 @@ struct pt_regs;
extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long);
extern void bad_page_fault(struct pt_regs *, unsigned long, int);
extern void _exception(int, struct pt_regs *, int, unsigned long);
extern void _exception_pkey(int, struct pt_regs *, int, unsigned long, int);
extern void die(const char *, struct pt_regs *, long);
extern bool die_will_crash(void);
extern void panic_flush_kmsg_start(void);
extern void panic_flush_kmsg_end(void);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */


@ -31,8 +31,10 @@ unsigned int create_cond_branch(const unsigned int *addr,
unsigned long target, int flags);
int patch_branch(unsigned int *addr, unsigned long target, int flags);
int patch_instruction(unsigned int *addr, unsigned int instr);
int raw_patch_instruction(unsigned int *addr, unsigned int instr);
int instr_is_relative_branch(unsigned int instr);
int instr_is_relative_link_branch(unsigned int instr);
int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
unsigned long branch_target(const unsigned int *instr);
unsigned int translate_branch(const unsigned int *dest,


@ -166,6 +166,6 @@ static inline int cpm_command(u32 command, u8 opcode)
}
#endif /* CONFIG_CPM */
int cpm2_gpiochip_add32(struct device_node *np);
int cpm2_gpiochip_add32(struct device *dev);
#endif


@ -605,5 +605,7 @@ enum cpm_clk {
};
int cpm1_clk_setup(enum cpm_clk_target target, int clock, int mode);
int cpm1_gpiochip_add16(struct device *dev);
int cpm1_gpiochip_add32(struct device *dev);
#endif /* __CPM1__ */


@ -107,12 +107,6 @@ struct cpu_spec {
* called in real mode to handle SLB and TLB errors.
*/
long (*machine_check_early)(struct pt_regs *regs);
/*
* Processor specific routine to flush tlbs.
*/
void (*flush_tlb)(unsigned int action);
};
extern struct cpu_spec *cur_cpu_spec;
@ -133,12 +127,6 @@ extern void cpu_feature_keys_init(void);
static inline void cpu_feature_keys_init(void) { }
#endif
/* TLB flush actions. Used as argument to cpu_spec.flush_tlb() hook */
enum {
TLB_INVAL_SCOPE_GLOBAL = 0, /* invalidate all TLBs */
TLB_INVAL_SCOPE_LPID = 1, /* invalidate TLBs for current LPID */
};
#endif /* __ASSEMBLY__ */
/* CPU kernel features */
@ -207,7 +195,7 @@ enum {
#define CPU_FTR_STCX_CHECKS_ADDRESS LONG_ASM_CONST(0x0004000000000000)
#define CPU_FTR_POPCNTB LONG_ASM_CONST(0x0008000000000000)
#define CPU_FTR_POPCNTD LONG_ASM_CONST(0x0010000000000000)
/* Free LONG_ASM_CONST(0x0020000000000000) */
#define CPU_FTR_PKEY LONG_ASM_CONST(0x0020000000000000)
#define CPU_FTR_VMX_COPY LONG_ASM_CONST(0x0040000000000000)
#define CPU_FTR_TM LONG_ASM_CONST(0x0080000000000000)
#define CPU_FTR_CFAR LONG_ASM_CONST(0x0100000000000000)
@ -454,7 +442,7 @@ enum {
CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | \
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX)
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX | CPU_FTR_PKEY)
#define CPU_FTRS_POWER8 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
CPU_FTR_MMCRA | CPU_FTR_SMT | \
@ -464,7 +452,7 @@ enum {
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP)
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_PKEY)
#define CPU_FTRS_POWER8E (CPU_FTRS_POWER8 | CPU_FTR_PMAO_BUG)
#define CPU_FTRS_POWER8_DD1 (CPU_FTRS_POWER8 & ~CPU_FTR_DBELL)
#define CPU_FTRS_POWER9 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
@ -476,7 +464,8 @@ enum {
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300)
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
CPU_FTR_PKEY)
#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
(~CPU_FTR_SAO))
#define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9


@ -0,0 +1,102 @@
/*
* drmem.h: Power specific logical memory block representation
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _ASM_POWERPC_LMB_H
#define _ASM_POWERPC_LMB_H
struct drmem_lmb {
u64 base_addr;
u32 drc_index;
u32 aa_index;
u32 flags;
};
struct drmem_lmb_info {
struct drmem_lmb *lmbs;
int n_lmbs;
u32 lmb_size;
};
extern struct drmem_lmb_info *drmem_info;
#define for_each_drmem_lmb_in_range(lmb, start, end) \
for ((lmb) = (start); (lmb) <= (end); (lmb)++)
#define for_each_drmem_lmb(lmb) \
for_each_drmem_lmb_in_range((lmb), \
&drmem_info->lmbs[0], \
&drmem_info->lmbs[drmem_info->n_lmbs - 1])
/*
* The of_drconf_cell_v1 struct defines the layout of the LMB data
* specified in the ibm,dynamic-memory device tree property.
* The property itself is a 32-bit value specifying the number of
* LMBs followed by an array of of_drconf_cell_v1 entries, one
* per LMB.
*/
struct of_drconf_cell_v1 {
__be64 base_addr;
__be32 drc_index;
__be32 reserved;
__be32 aa_index;
__be32 flags;
};
/*
* Version 2 of the ibm,dynamic-memory property is defined as a
* 32-bit value specifying the number of LMB sets followed by an
* array of of_drconf_cell_v2 entries, one per LMB set.
*/
struct of_drconf_cell_v2 {
u32 seq_lmbs;
u64 base_addr;
u32 drc_index;
u32 aa_index;
u32 flags;
} __packed;
#define DRCONF_MEM_ASSIGNED 0x00000008
#define DRCONF_MEM_AI_INVALID 0x00000040
#define DRCONF_MEM_RESERVED 0x00000080
static inline u32 drmem_lmb_size(void)
{
return drmem_info->lmb_size;
}
#define DRMEM_LMB_RESERVED 0x80000000
static inline void drmem_mark_lmb_reserved(struct drmem_lmb *lmb)
{
lmb->flags |= DRMEM_LMB_RESERVED;
}
static inline void drmem_remove_lmb_reservation(struct drmem_lmb *lmb)
{
lmb->flags &= ~DRMEM_LMB_RESERVED;
}
static inline bool drmem_lmb_reserved(struct drmem_lmb *lmb)
{
return lmb->flags & DRMEM_LMB_RESERVED;
}
u64 drmem_lmb_memory_max(void);
void __init walk_drmem_lmbs(struct device_node *dn,
void (*func)(struct drmem_lmb *, const __be32 **));
int drmem_update_dt(void);
#ifdef CONFIG_PPC_PSERIES
void __init walk_drmem_lmbs_early(unsigned long node,
void (*func)(struct drmem_lmb *, const __be32 **));
#endif
#endif /* _ASM_POWERPC_LMB_H */
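
As a usage sketch for the iterator and flag definitions above (the function and its caller are hypothetical, not part of this series), kernel code could walk the LMB array like this:

    #include <asm/drmem.h>

    /* Count the logical memory blocks currently assigned to the partition,
     * using for_each_drmem_lmb() and DRCONF_MEM_ASSIGNED from drmem.h */
    static unsigned long count_assigned_lmbs(void)
    {
        struct drmem_lmb *lmb;
        unsigned long count = 0;

        for_each_drmem_lmb(lmb) {
            if (lmb->flags & DRCONF_MEM_ASSIGNED)
                count++;
        }

        return count;
    }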


@ -214,6 +214,7 @@ struct eeh_ops {
int (*write_config)(struct pci_dn *pdn, int where, int size, u32 val);
int (*next_error)(struct eeh_pe **pe);
int (*restore_config)(struct pci_dn *pdn);
int (*notify_resume)(struct pci_dn *pdn);
};
extern int eeh_subsystem_flags;
@ -297,6 +298,7 @@ int eeh_pe_reset(struct eeh_pe *pe, int option);
int eeh_pe_configure(struct eeh_pe *pe);
int eeh_pe_inject_err(struct eeh_pe *pe, int type, int func,
unsigned long addr, unsigned long mask);
int eeh_restore_vf_config(struct pci_dn *pdn);
/**
* EEH_POSSIBLE_ERROR() -- test for possible MMIO failure.


@ -251,18 +251,40 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
std r10,area+EX_R10(r13); /* save r10 - r12 */ \
OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
#define __EXCEPTION_PROLOG_1(area, extra, vec) \
#define __EXCEPTION_PROLOG_1_PRE(area) \
OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR); \
OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR); \
SAVE_CTR(r10, area); \
mfcr r9; \
extra(vec); \
mfcr r9;
#define __EXCEPTION_PROLOG_1_POST(area) \
std r11,area+EX_R11(r13); \
std r12,area+EX_R12(r13); \
GET_SCRATCH0(r10); \
std r10,area+EX_R13(r13)
/*
* This version of EXCEPTION_PROLOG_1 carries an
* additional parameter called "bitmask" to support
* checking of the interrupt maskable level in SOFTEN_TEST.
* Intended to be used in the MASKABLE_EXCEPTION_* macros.
*/
#define MASKABLE_EXCEPTION_PROLOG_1(area, extra, vec, bitmask) \
__EXCEPTION_PROLOG_1_PRE(area); \
extra(vec, bitmask); \
__EXCEPTION_PROLOG_1_POST(area);
/*
* This version of the EXCEPTION_PROLOG_1 is intended
* to be used in STD_EXCEPTION* macros
*/
#define _EXCEPTION_PROLOG_1(area, extra, vec) \
__EXCEPTION_PROLOG_1_PRE(area); \
extra(vec); \
__EXCEPTION_PROLOG_1_POST(area);
#define EXCEPTION_PROLOG_1(area, extra, vec) \
__EXCEPTION_PROLOG_1(area, extra, vec)
_EXCEPTION_PROLOG_1(area, extra, vec)
#define __EXCEPTION_PROLOG_PSERIES_1(label, h) \
ld r10,PACAKMSR(r13); /* get MSR value for kernel */ \
@ -485,7 +507,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
mflr r9; /* Get LR, later save to stack */ \
ld r2,PACATOC(r13); /* get kernel TOC into r2 */ \
std r9,_LINK(r1); \
lbz r10,PACASOFTIRQEN(r13); \
lbz r10,PACAIRQSOFTMASK(r13); \
mfspr r11,SPRN_XER; /* save XER in stackframe */ \
std r10,SOFTE(r1); \
std r11,_XER(r1); \
@ -549,22 +571,23 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
#define SOFTEN_VALUE_0xe80 PACA_IRQ_DBELL
#define SOFTEN_VALUE_0xe60 PACA_IRQ_HMI
#define SOFTEN_VALUE_0xea0 PACA_IRQ_EE
#define SOFTEN_VALUE_0xf00 PACA_IRQ_PMI
#define __SOFTEN_TEST(h, vec) \
lbz r10,PACASOFTIRQEN(r13); \
cmpwi r10,0; \
#define __SOFTEN_TEST(h, vec, bitmask) \
lbz r10,PACAIRQSOFTMASK(r13); \
andi. r10,r10,bitmask; \
li r10,SOFTEN_VALUE_##vec; \
beq masked_##h##interrupt
bne masked_##h##interrupt
#define _SOFTEN_TEST(h, vec) __SOFTEN_TEST(h, vec)
#define _SOFTEN_TEST(h, vec, bitmask) __SOFTEN_TEST(h, vec, bitmask)
#define SOFTEN_TEST_PR(vec) \
#define SOFTEN_TEST_PR(vec, bitmask) \
KVMTEST(EXC_STD, vec); \
_SOFTEN_TEST(EXC_STD, vec)
_SOFTEN_TEST(EXC_STD, vec, bitmask)
#define SOFTEN_TEST_HV(vec) \
#define SOFTEN_TEST_HV(vec, bitmask) \
KVMTEST(EXC_HV, vec); \
_SOFTEN_TEST(EXC_HV, vec)
_SOFTEN_TEST(EXC_HV, vec, bitmask)
#define KVMTEST_PR(vec) \
KVMTEST(EXC_STD, vec)
@ -572,53 +595,57 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
#define KVMTEST_HV(vec) \
KVMTEST(EXC_HV, vec)
#define SOFTEN_NOTEST_PR(vec) _SOFTEN_TEST(EXC_STD, vec)
#define SOFTEN_NOTEST_HV(vec) _SOFTEN_TEST(EXC_HV, vec)
#define SOFTEN_NOTEST_PR(vec, bitmask) _SOFTEN_TEST(EXC_STD, vec, bitmask)
#define SOFTEN_NOTEST_HV(vec, bitmask) _SOFTEN_TEST(EXC_HV, vec, bitmask)
#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra) \
#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask) \
SET_SCRATCH0(r13); /* save r13 */ \
EXCEPTION_PROLOG_0(PACA_EXGEN); \
__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec); \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask); \
EXCEPTION_PROLOG_PSERIES_1(label, h);
#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra) \
__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask) \
__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label) \
#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label, bitmask) \
_MASKABLE_EXCEPTION_PSERIES(vec, label, \
EXC_STD, SOFTEN_TEST_PR)
EXC_STD, SOFTEN_TEST_PR, bitmask)
#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label) \
EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec); \
#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label, bitmask) \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec, bitmask);\
EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
#define MASKABLE_EXCEPTION_HV(loc, vec, label) \
#define MASKABLE_EXCEPTION_HV(loc, vec, label, bitmask) \
_MASKABLE_EXCEPTION_PSERIES(vec, label, \
EXC_HV, SOFTEN_TEST_HV)
EXC_HV, SOFTEN_TEST_HV, bitmask)
#define MASKABLE_EXCEPTION_HV_OOL(vec, label) \
EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec); \
#define MASKABLE_EXCEPTION_HV_OOL(vec, label, bitmask) \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, bitmask);\
EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra) \
#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask) \
SET_SCRATCH0(r13); /* save r13 */ \
EXCEPTION_PROLOG_0(PACA_EXGEN); \
__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec); \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask); \
EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)
#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra) \
__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)\
__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label) \
#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label, bitmask) \
_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, \
EXC_STD, SOFTEN_NOTEST_PR)
EXC_STD, SOFTEN_NOTEST_PR, bitmask)
#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label) \
#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label, bitmask) \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec, bitmask);\
EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD);
#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, bitmask) \
_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, \
EXC_HV, SOFTEN_TEST_HV)
EXC_HV, SOFTEN_TEST_HV, bitmask)
#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label) \
EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec); \
#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label, bitmask) \
MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec, bitmask);\
EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
/*

View File

@ -51,6 +51,8 @@
#define FW_FEATURE_BEST_ENERGY ASM_CONST(0x0000000080000000)
#define FW_FEATURE_TYPE1_AFFINITY ASM_CONST(0x0000000100000000)
#define FW_FEATURE_PRRN ASM_CONST(0x0000000200000000)
#define FW_FEATURE_DRMEM_V2 ASM_CONST(0x0000000400000000)
#define FW_FEATURE_DRC_INFO ASM_CONST(0x0000000800000000)
#ifndef __ASSEMBLY__
@ -67,7 +69,8 @@ enum {
FW_FEATURE_CMO | FW_FEATURE_VPHN | FW_FEATURE_XCMO |
FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
FW_FEATURE_HPT_RESIZE,
FW_FEATURE_HPT_RESIZE | FW_FEATURE_DRMEM_V2 |
FW_FEATURE_DRC_INFO,
FW_FEATURE_PSERIES_ALWAYS = 0,
FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
FW_FEATURE_POWERNV_ALWAYS = 0,

View File

@ -29,6 +29,7 @@ DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
#define local_softirq_pending() __this_cpu_read(irq_stat.__softirq_pending)
#define __ARCH_SET_SOFTIRQ_PENDING
#define __ARCH_IRQ_EXIT_IRQS_DISABLED
#define set_softirq_pending(x) __this_cpu_write(irq_stat.__softirq_pending, (x))
#define or_softirq_pending(x) __this_cpu_or(irq_stat.__softirq_pending, (x))

View File

@ -178,7 +178,7 @@ name:
* TRAMP_REAL_* - real, unrelocated helpers (virt can call these)
* TRAMP_VIRT_* - virt, unreloc helpers (in practice, real can use)
* TRAMP_KVM - KVM handlers that get put into real, unrelocated
* EXC_COMMON_* - virt, relocated common handlers
* EXC_COMMON - virt, relocated common handlers
*
* The EXC handlers are given a name, and branch to name_common, or the
* appropriate KVM or masking function. Vector handler varieties are as
@ -211,7 +211,6 @@ name:
* EXC_COMMON_BEGIN/END - used to open-code the handler
* EXC_COMMON
* EXC_COMMON_ASYNC
* EXC_COMMON_HV
*
* TRAMP_REAL and TRAMP_VIRT can be used with BEGIN/END. KVM
* and OOL handlers are implemented as types of TRAMP and TRAMP_VIRT handlers.
@ -269,14 +268,14 @@ name:
STD_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
EXC_VIRT_END(name, start, size);
#define EXC_REAL_MASKABLE(name, start, size) \
#define EXC_REAL_MASKABLE(name, start, size, bitmask) \
EXC_REAL_BEGIN(name, start, size); \
MASKABLE_EXCEPTION_PSERIES(start, start, name##_common); \
MASKABLE_EXCEPTION_PSERIES(start, start, name##_common, bitmask);\
EXC_REAL_END(name, start, size);
#define EXC_VIRT_MASKABLE(name, start, size, realvec) \
#define EXC_VIRT_MASKABLE(name, start, size, realvec, bitmask) \
EXC_VIRT_BEGIN(name, start, size); \
MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common, bitmask);\
EXC_VIRT_END(name, start, size);
#define EXC_REAL_HV(name, start, size) \
@ -305,13 +304,13 @@ name:
#define __EXC_REAL_OOL_MASKABLE(name, start, size) \
__EXC_REAL_OOL(name, start, size);
#define __TRAMP_REAL_OOL_MASKABLE(name, vec) \
#define __TRAMP_REAL_OOL_MASKABLE(name, vec, bitmask) \
TRAMP_REAL_BEGIN(tramp_real_##name); \
MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common); \
MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common, bitmask); \
#define EXC_REAL_OOL_MASKABLE(name, start, size) \
#define EXC_REAL_OOL_MASKABLE(name, start, size, bitmask) \
__EXC_REAL_OOL_MASKABLE(name, start, size); \
__TRAMP_REAL_OOL_MASKABLE(name, start);
__TRAMP_REAL_OOL_MASKABLE(name, start, bitmask);
#define __EXC_REAL_OOL_HV_DIRECT(name, start, size, handler) \
EXC_REAL_BEGIN(name, start, size); \
@ -332,13 +331,13 @@ name:
#define __EXC_REAL_OOL_MASKABLE_HV(name, start, size) \
__EXC_REAL_OOL(name, start, size);
#define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec) \
#define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec, bitmask) \
TRAMP_REAL_BEGIN(tramp_real_##name); \
MASKABLE_EXCEPTION_HV_OOL(vec, name##_common); \
MASKABLE_EXCEPTION_HV_OOL(vec, name##_common, bitmask); \
#define EXC_REAL_OOL_MASKABLE_HV(name, start, size) \
#define EXC_REAL_OOL_MASKABLE_HV(name, start, size, bitmask) \
__EXC_REAL_OOL_MASKABLE_HV(name, start, size); \
__TRAMP_REAL_OOL_MASKABLE_HV(name, start);
__TRAMP_REAL_OOL_MASKABLE_HV(name, start, bitmask);
#define __EXC_VIRT_OOL(name, start, size) \
EXC_VIRT_BEGIN(name, start, size); \
@ -356,13 +355,13 @@ name:
#define __EXC_VIRT_OOL_MASKABLE(name, start, size) \
__EXC_VIRT_OOL(name, start, size);
#define __TRAMP_VIRT_OOL_MASKABLE(name, realvec) \
#define __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask) \
TRAMP_VIRT_BEGIN(tramp_virt_##name); \
MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common); \
MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common, bitmask);\
#define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec) \
#define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec, bitmask) \
__EXC_VIRT_OOL_MASKABLE(name, start, size); \
__TRAMP_VIRT_OOL_MASKABLE(name, realvec);
__TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask);
#define __EXC_VIRT_OOL_HV(name, start, size) \
__EXC_VIRT_OOL(name, start, size);
@ -378,13 +377,13 @@ name:
#define __EXC_VIRT_OOL_MASKABLE_HV(name, start, size) \
__EXC_VIRT_OOL(name, start, size);
#define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec) \
#define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask) \
TRAMP_VIRT_BEGIN(tramp_virt_##name); \
MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common); \
MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common, bitmask);\
#define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec) \
#define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec, bitmask) \
__EXC_VIRT_OOL_MASKABLE_HV(name, start, size); \
__TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec);
__TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask);
#define TRAMP_KVM(area, n) \
TRAMP_KVM_BEGIN(do_kvm_##n); \
@ -413,10 +412,6 @@ name:
EXC_COMMON_BEGIN(name); \
STD_EXCEPTION_COMMON_ASYNC(realvec, name, hdlr); \
#define EXC_COMMON_HV(name, realvec, hdlr) \
EXC_COMMON_BEGIN(name); \
STD_EXCEPTION_COMMON(realvec + 0x2, name, hdlr); \
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_HEAD_64_H */
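The bitmask argument threaded through the MASKABLE_* macros above is what lets each vector choose which soft-mask bit gates it. Illustrative declarations in the style of arch/powerpc/kernel/exceptions-64s.S (the vector sizes here are assumptions, shown only to make the parameter flow concrete): ordinary interrupts pass IRQS_DISABLED, while the performance monitor vector passes IRQS_PMI_DISABLED so it can be soft-masked independently.

/* Illustrative only; the authoritative users live in exceptions-64s.S. */
EXC_REAL_MASKABLE(decrementer, 0x900, 0x80, IRQS_DISABLED)
EXC_VIRT_MASKABLE(decrementer, 0x4900, 0x80, 0x900, IRQS_DISABLED)
EXC_REAL_OOL_MASKABLE(performance_monitor, 0xf00, 0x20, IRQS_PMI_DISABLED)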

View File

@ -42,4 +42,8 @@ extern void wait_for_tb_resync(void);
static inline void wait_for_subcore_guest_exit(void) { }
static inline void wait_for_tb_resync(void) { }
#endif
struct pt_regs;
extern long hmi_handle_debugtrig(struct pt_regs *regs);
#endif /* __ASM_PPC64_HMI_H__ */

View File

@ -47,8 +47,7 @@ static inline pte_t *hugepd_page(hugepd_t hpd)
{
BUG_ON(!hugepd_ok(hpd));
#ifdef CONFIG_PPC_8xx
return (pte_t *)__va(hpd_val(hpd) &
~(_PMD_PAGE_MASK | _PMD_PRESENT_MASK));
return (pte_t *)__va(hpd_val(hpd) & ~HUGEPD_SHIFT_MASK);
#else
return (pte_t *)((hpd_val(hpd) &
~HUGEPD_SHIFT_MASK) | PD_HUGE);

View File

@ -27,6 +27,15 @@
#define PACA_IRQ_DEC 0x08 /* Or FIT */
#define PACA_IRQ_EE_EDGE 0x10 /* BookE only */
#define PACA_IRQ_HMI 0x20
#define PACA_IRQ_PMI 0x40
/*
* flags for paca->irq_soft_mask
*/
#define IRQS_ENABLED 0
#define IRQS_DISABLED 1 /* local_irq_disable() interrupts */
#define IRQS_PMI_DISABLED 2
#define IRQS_ALL_DISABLED (IRQS_DISABLED | IRQS_PMI_DISABLED)
#endif /* CONFIG_PPC64 */
@ -43,46 +52,112 @@ extern void unknown_exception(struct pt_regs *regs);
#ifdef CONFIG_PPC64
#include <asm/paca.h>
static inline unsigned long arch_local_save_flags(void)
static inline notrace unsigned long irq_soft_mask_return(void)
{
unsigned long flags;
asm volatile(
"lbz %0,%1(13)"
: "=r" (flags)
: "i" (offsetof(struct paca_struct, soft_enabled)));
: "i" (offsetof(struct paca_struct, irq_soft_mask)));
return flags;
}
static inline unsigned long arch_local_irq_disable(void)
/*
* The "memory" clobber acts as both a compiler barrier
* for the critical section and as a clobber because
* we changed paca->irq_soft_mask
*/
static inline notrace void irq_soft_mask_set(unsigned long mask)
{
unsigned long flags, zero;
#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
/*
* The irq mask must always include the STD bit if any are set,
* and interrupts don't get replayed until the standard
* interrupt (local_irq_disable()) is unmasked.
*
* Other masks must only provide additional masking beyond
* the standard, and they are also not replayed until the
* standard interrupt becomes unmasked.
*
* This could be changed, but it will require partial
* unmasks to be replayed, among other things. For now, take
* the simple approach.
*/
WARN_ON(mask && !(mask & IRQS_DISABLED));
#endif
asm volatile(
"li %1,0; lbz %0,%2(13); stb %1,%2(13)"
: "=r" (flags), "=&r" (zero)
: "i" (offsetof(struct paca_struct, soft_enabled))
"stb %0,%1(13)"
:
: "r" (mask),
"i" (offsetof(struct paca_struct, irq_soft_mask))
: "memory");
}
static inline notrace unsigned long irq_soft_mask_set_return(unsigned long mask)
{
unsigned long flags;
#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
WARN_ON(mask && !(mask & IRQS_DISABLED));
#endif
asm volatile(
"lbz %0,%1(13); stb %2,%1(13)"
: "=&r" (flags)
: "i" (offsetof(struct paca_struct, irq_soft_mask)),
"r" (mask)
: "memory");
return flags;
}
static inline notrace unsigned long irq_soft_mask_or_return(unsigned long mask)
{
unsigned long flags, tmp;
asm volatile(
"lbz %0,%2(13); or %1,%0,%3; stb %1,%2(13)"
: "=&r" (flags), "=r" (tmp)
: "i" (offsetof(struct paca_struct, irq_soft_mask)),
"r" (mask)
: "memory");
#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
WARN_ON((mask | flags) && !((mask | flags) & IRQS_DISABLED));
#endif
return flags;
}
static inline unsigned long arch_local_save_flags(void)
{
return irq_soft_mask_return();
}
static inline void arch_local_irq_disable(void)
{
irq_soft_mask_set(IRQS_DISABLED);
}
extern void arch_local_irq_restore(unsigned long);
static inline void arch_local_irq_enable(void)
{
arch_local_irq_restore(1);
arch_local_irq_restore(IRQS_ENABLED);
}
static inline unsigned long arch_local_irq_save(void)
{
return arch_local_irq_disable();
return irq_soft_mask_set_return(IRQS_DISABLED);
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
return flags == 0;
return flags & IRQS_DISABLED;
}
static inline bool arch_irqs_disabled(void)
@ -90,6 +165,55 @@ static inline bool arch_irqs_disabled(void)
return arch_irqs_disabled_flags(arch_local_save_flags());
}
#ifdef CONFIG_PPC_BOOK3S
/*
* To support disabling and enabling of irqs with PMI, a set of
* new powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
* macros is added. These macros are implemented using the generic
* linux local_irq_* code from include/linux/irqflags.h.
*/
#define raw_local_irq_pmu_save(flags) \
do { \
typecheck(unsigned long, flags); \
flags = irq_soft_mask_or_return(IRQS_DISABLED | \
IRQS_PMI_DISABLED); \
} while(0)
#define raw_local_irq_pmu_restore(flags) \
do { \
typecheck(unsigned long, flags); \
arch_local_irq_restore(flags); \
} while(0)
#ifdef CONFIG_TRACE_IRQFLAGS
#define powerpc_local_irq_pmu_save(flags) \
do { \
raw_local_irq_pmu_save(flags); \
trace_hardirqs_off(); \
} while(0)
#define powerpc_local_irq_pmu_restore(flags) \
do { \
if (raw_irqs_disabled_flags(flags)) { \
raw_local_irq_pmu_restore(flags); \
trace_hardirqs_off(); \
} else { \
trace_hardirqs_on(); \
raw_local_irq_pmu_restore(flags); \
} \
} while(0)
#else
#define powerpc_local_irq_pmu_save(flags) \
do { \
raw_local_irq_pmu_save(flags); \
} while(0)
#define powerpc_local_irq_pmu_restore(flags) \
do { \
raw_local_irq_pmu_restore(flags); \
} while (0)
#endif /* CONFIG_TRACE_IRQFLAGS */
#endif /* CONFIG_PPC_BOOK3S */
#ifdef CONFIG_PPC_BOOK3E
#define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory")
#define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory")
@ -98,14 +222,13 @@ static inline bool arch_irqs_disabled(void)
#define __hard_irq_disable() __mtmsrd(local_paca->kernel_msr, 1)
#endif
#define hard_irq_disable() do { \
u8 _was_enabled; \
__hard_irq_disable(); \
_was_enabled = local_paca->soft_enabled; \
local_paca->soft_enabled = 0; \
local_paca->irq_happened |= PACA_IRQ_HARD_DIS; \
if (_was_enabled) \
trace_hardirqs_off(); \
#define hard_irq_disable() do { \
unsigned long flags; \
__hard_irq_disable(); \
flags = irq_soft_mask_set_return(IRQS_ALL_DISABLED); \
local_paca->irq_happened |= PACA_IRQ_HARD_DIS; \
if (!arch_irqs_disabled_flags(flags)) \
trace_hardirqs_off(); \
} while(0)
static inline bool lazy_irq_pending(void)
@ -127,7 +250,7 @@ static inline void may_hard_irq_enable(void)
static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
{
return !regs->softe;
return (regs->softe & IRQS_DISABLED);
}
extern bool prep_irq_for_idle(void);
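With the soft-enable byte replaced by a mask, code that must keep the PMU interrupt out of a critical section uses the new Book3S-only helpers. A minimal sketch with a hypothetical function name (not from this merge):

/* Sketch: masks both "normal" interrupts and the PMI around the update. */
static void bump_counter(unsigned long *counter, long delta)
{
	unsigned long flags;

	powerpc_local_irq_pmu_save(flags);	/* IRQS_DISABLED | IRQS_PMI_DISABLED */
	*counter += delta;
	powerpc_local_irq_pmu_restore(flags);
}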

View File

@ -34,6 +34,13 @@
#define THREAD_IMC_LDBAR_MASK 0x0003ffffffffe000ULL
#define THREAD_IMC_ENABLE 0x8000000000000000ULL
/*
* Offsets for the debugfs interface to imc-mode and imc-command
*/
#define IMC_CNTL_BLK_OFFSET 0x3FC00
#define IMC_CNTL_BLK_CMD_OFFSET 8
#define IMC_CNTL_BLK_MODE_OFFSET 32
/*
* Structure to hold memory address information for imc units.
*/
@ -71,7 +78,7 @@ struct imc_events {
struct imc_pmu {
struct pmu pmu;
struct imc_mem_info *mem_info;
struct imc_events **events;
struct imc_events *events;
/*
* Attribute groups for the PMU. Slot 0 used for
* format attribute, slot 1 used for cpusmask attribute,

View File

@ -47,14 +47,14 @@
* be clobbered.
*/
#define RECONCILE_IRQ_STATE(__rA, __rB) \
lbz __rA,PACASOFTIRQEN(r13); \
lbz __rA,PACAIRQSOFTMASK(r13); \
lbz __rB,PACAIRQHAPPENED(r13); \
cmpwi cr0,__rA,0; \
li __rA,0; \
andi. __rA,__rA,IRQS_DISABLED; \
li __rA,IRQS_DISABLED; \
ori __rB,__rB,PACA_IRQ_HARD_DIS; \
stb __rB,PACAIRQHAPPENED(r13); \
beq 44f; \
stb __rA,PACASOFTIRQEN(r13); \
bne 44f; \
stb __rA,PACAIRQSOFTMASK(r13); \
TRACE_DISABLE_INTS; \
44:
@ -64,9 +64,9 @@
#define RECONCILE_IRQ_STATE(__rA, __rB) \
lbz __rA,PACAIRQHAPPENED(r13); \
li __rB,0; \
li __rB,IRQS_DISABLED; \
ori __rA,__rA,PACA_IRQ_HARD_DIS; \
stb __rB,PACASOFTIRQEN(r13); \
stb __rB,PACAIRQSOFTMASK(r13); \
stb __rA,PACAIRQHAPPENED(r13)
#endif
#endif

View File

@ -73,6 +73,8 @@ extern void kexec_smp_wait(void); /* get and clear naca physid, wait for
master to copy new code to 0 */
extern int crashing_cpu;
extern void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *));
extern void crash_ipi_callback(struct pt_regs *);
extern int crash_wake_offline;
struct kimage;
struct pt_regs;

View File

@ -873,7 +873,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
/* Only need to enable IRQs by hard enabling them after this */
local_paca->irq_happened = 0;
local_paca->soft_enabled = 1;
irq_soft_mask_set(IRQS_ENABLED);
#endif
}

View File

@ -2,76 +2,64 @@
#ifndef _ARCH_POWERPC_LOCAL_H
#define _ARCH_POWERPC_LOCAL_H
#ifdef CONFIG_PPC_BOOK3S_64
#include <linux/percpu.h>
#include <linux/atomic.h>
#include <linux/irqflags.h>
#include <asm/hw_irq.h>
typedef struct
{
atomic_long_t a;
long v;
} local_t;
#define LOCAL_INIT(i) { ATOMIC_LONG_INIT(i) }
#define LOCAL_INIT(i) { (i) }
#define local_read(l) atomic_long_read(&(l)->a)
#define local_set(l,i) atomic_long_set(&(l)->a, (i))
#define local_add(i,l) atomic_long_add((i),(&(l)->a))
#define local_sub(i,l) atomic_long_sub((i),(&(l)->a))
#define local_inc(l) atomic_long_inc(&(l)->a)
#define local_dec(l) atomic_long_dec(&(l)->a)
static __inline__ long local_add_return(long a, local_t *l)
static __inline__ long local_read(local_t *l)
{
long t;
__asm__ __volatile__(
"1:" PPC_LLARX(%0,0,%2,0) " # local_add_return\n\
add %0,%1,%0\n"
PPC405_ERR77(0,%2)
PPC_STLCX "%0,0,%2 \n\
bne- 1b"
: "=&r" (t)
: "r" (a), "r" (&(l->a.counter))
: "cc", "memory");
return t;
return READ_ONCE(l->v);
}
static __inline__ void local_set(local_t *l, long i)
{
WRITE_ONCE(l->v, i);
}
#define LOCAL_OP(op, c_op) \
static __inline__ void local_##op(long i, local_t *l) \
{ \
unsigned long flags; \
\
powerpc_local_irq_pmu_save(flags); \
l->v c_op i; \
powerpc_local_irq_pmu_restore(flags); \
}
#define LOCAL_OP_RETURN(op, c_op) \
static __inline__ long local_##op##_return(long a, local_t *l) \
{ \
long t; \
unsigned long flags; \
\
powerpc_local_irq_pmu_save(flags); \
t = (l->v c_op a); \
powerpc_local_irq_pmu_restore(flags); \
\
return t; \
}
#define LOCAL_OPS(op, c_op) \
LOCAL_OP(op, c_op) \
LOCAL_OP_RETURN(op, c_op)
LOCAL_OPS(add, +=)
LOCAL_OPS(sub, -=)
#define local_add_negative(a, l) (local_add_return((a), (l)) < 0)
static __inline__ long local_sub_return(long a, local_t *l)
{
long t;
__asm__ __volatile__(
"1:" PPC_LLARX(%0,0,%2,0) " # local_sub_return\n\
subf %0,%1,%0\n"
PPC405_ERR77(0,%2)
PPC_STLCX "%0,0,%2 \n\
bne- 1b"
: "=&r" (t)
: "r" (a), "r" (&(l->a.counter))
: "cc", "memory");
return t;
}
static __inline__ long local_inc_return(local_t *l)
{
long t;
__asm__ __volatile__(
"1:" PPC_LLARX(%0,0,%1,0) " # local_inc_return\n\
addic %0,%0,1\n"
PPC405_ERR77(0,%1)
PPC_STLCX "%0,0,%1 \n\
bne- 1b"
: "=&r" (t)
: "r" (&(l->a.counter))
: "cc", "xer", "memory");
return t;
}
#define local_inc_return(l) local_add_return(1LL, l)
#define local_inc(l) local_inc_return(l)
/*
* local_inc_and_test - increment and test
@ -81,28 +69,39 @@ static __inline__ long local_inc_return(local_t *l)
* and returns true if the result is zero, or false for all
* other cases.
*/
#define local_inc_and_test(l) (local_inc_return(l) == 0)
#define local_inc_and_test(l) (local_inc_return(l) == 0)
static __inline__ long local_dec_return(local_t *l)
#define local_dec_return(l) local_sub_return(1LL, l)
#define local_dec(l) local_dec_return(l)
#define local_sub_and_test(a, l) (local_sub_return((a), (l)) == 0)
#define local_dec_and_test(l) (local_dec_return((l)) == 0)
static __inline__ long local_cmpxchg(local_t *l, long o, long n)
{
long t;
unsigned long flags;
__asm__ __volatile__(
"1:" PPC_LLARX(%0,0,%1,0) " # local_dec_return\n\
addic %0,%0,-1\n"
PPC405_ERR77(0,%1)
PPC_STLCX "%0,0,%1\n\
bne- 1b"
: "=&r" (t)
: "r" (&(l->a.counter))
: "cc", "xer", "memory");
powerpc_local_irq_pmu_save(flags);
t = l->v;
if (t == o)
l->v = n;
powerpc_local_irq_pmu_restore(flags);
return t;
}
#define local_cmpxchg(l, o, n) \
(cmpxchg_local(&((l)->a.counter), (o), (n)))
#define local_xchg(l, n) (xchg_local(&((l)->a.counter), (n)))
static __inline__ long local_xchg(local_t *l, long n)
{
long t;
unsigned long flags;
powerpc_local_irq_pmu_save(flags);
t = l->v;
l->v = n;
powerpc_local_irq_pmu_restore(flags);
return t;
}
/**
* local_add_unless - add unless the number is a given value
@ -115,62 +114,35 @@ static __inline__ long local_dec_return(local_t *l)
*/
static __inline__ int local_add_unless(local_t *l, long a, long u)
{
long t;
unsigned long flags;
int ret = 0;
__asm__ __volatile__ (
"1:" PPC_LLARX(%0,0,%1,0) " # local_add_unless\n\
cmpw 0,%0,%3 \n\
beq- 2f \n\
add %0,%2,%0 \n"
PPC405_ERR77(0,%2)
PPC_STLCX "%0,0,%1 \n\
bne- 1b \n"
" subf %0,%2,%0 \n\
2:"
: "=&r" (t)
: "r" (&(l->a.counter)), "r" (a), "r" (u)
: "cc", "memory");
powerpc_local_irq_pmu_save(flags);
if (l->v != u) {
l->v += a;
ret = 1;
}
powerpc_local_irq_pmu_restore(flags);
return t != u;
return ret;
}
#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
#define local_sub_and_test(a, l) (local_sub_return((a), (l)) == 0)
#define local_dec_and_test(l) (local_dec_return((l)) == 0)
/*
* Atomically test *l and decrement if it is greater than 0.
* The function returns the old value of *l minus 1.
*/
static __inline__ long local_dec_if_positive(local_t *l)
{
long t;
__asm__ __volatile__(
"1:" PPC_LLARX(%0,0,%1,0) " # local_dec_if_positive\n\
cmpwi %0,1\n\
addi %0,%0,-1\n\
blt- 2f\n"
PPC405_ERR77(0,%1)
PPC_STLCX "%0,0,%1\n\
bne- 1b"
"\n\
2:" : "=&b" (t)
: "r" (&(l->a.counter))
: "cc", "memory");
return t;
}
#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
/* Use these for per-cpu local_t variables: on some archs they are
* much more efficient than these naive implementations. Note they take
* a variable, not an address.
*/
#define __local_inc(l) ((l)->a.counter++)
#define __local_dec(l) ((l)->a.counter++)
#define __local_add(i,l) ((l)->a.counter+=(i))
#define __local_sub(i,l) ((l)->a.counter-=(i))
#define __local_inc(l) ((l)->v++)
#define __local_dec(l) ((l)->v++)
#define __local_add(i,l) ((l)->v+=(i))
#define __local_sub(i,l) ((l)->v-=(i))
#else /* CONFIG_PPC64 */
#include <asm-generic/local.h>
#endif /* CONFIG_PPC64 */
#endif /* _ARCH_POWERPC_LOCAL_H */
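The rewrite above is where the ~4x local_t speedup comes from: each operation is now a plain load/modify/store bracketed by PMI soft-masking instead of a larx/stcx. loop. Usage is unchanged; a small sketch with a hypothetical per-CPU counter:

#include <linux/percpu.h>
#include <asm/local.h>

static DEFINE_PER_CPU(local_t, example_events);	/* hypothetical counter */

static void note_event(void)
{
	/* Safe against interrupt/PMI reentry on this CPU only --
	 * that is the local_t contract.
	 */
	local_inc(this_cpu_ptr(&example_events));
}

static long sum_events(void)
{
	long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += local_read(per_cpu_ptr(&example_events, cpu));
	return sum;
}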

View File

@ -173,11 +173,19 @@ struct machdep_calls {
/* Called after scan and before resource survey */
void (*pcibios_fixup_phb)(struct pci_controller *hose);
/*
* Called after device has been added to bus and
* before sysfs has been created.
*/
void (*pcibios_bus_add_device)(struct pci_dev *pdev);
resource_size_t (*pcibios_default_alignment)(void);
#ifdef CONFIG_PCI_IOV
void (*pcibios_fixup_sriov)(struct pci_dev *pdev);
resource_size_t (*pcibios_iov_resource_alignment)(struct pci_dev *, int resno);
int (*pcibios_sriov_enable)(struct pci_dev *pdev, u16 num_vfs);
int (*pcibios_sriov_disable)(struct pci_dev *pdev);
#endif /* CONFIG_PCI_IOV */
/* Called to shutdown machine specific hardware not already controlled

View File

@ -13,6 +13,7 @@
#include <asm/cputable.h>
#include <linux/mm.h>
#include <linux/pkeys.h>
#include <asm/cpu_has_feature.h>
/*
@ -22,13 +23,23 @@
static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
unsigned long pkey)
{
return (prot & PROT_SAO) ? VM_SAO : 0;
#ifdef CONFIG_PPC_MEM_KEYS
return (((prot & PROT_SAO) ? VM_SAO : 0) | pkey_to_vmflag_bits(pkey));
#else
return ((prot & PROT_SAO) ? VM_SAO : 0);
#endif
}
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
{
#ifdef CONFIG_PPC_MEM_KEYS
return (vm_flags & VM_SAO) ?
__pgprot(_PAGE_SAO | vmflag_to_pte_pkey_bits(vm_flags)) :
__pgprot(0 | vmflag_to_pte_pkey_bits(vm_flags));
#else
return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
#endif
}
#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)

View File

@ -29,17 +29,25 @@
#define MI_Kp 0x40000000 /* Should always be set */
/*
* All pages' PP exec bits are set to 000, which means Execute for Supervisor
* and no Execute for User.
* Then we use the APG to say whether accesses are according to Page rules,
* "all Supervisor" rules (Exec for all) and "all User" rules (Exec for noone)
* Therefore, we define 4 APG groups. msb is _PAGE_EXEC, lsb is _PAGE_USER
* 0 (00) => Not User, no exec => 11 (all accesses performed as user)
* 1 (01) => User but no exec => 11 (all accesses performed as user)
* 2 (10) => Not User, exec => 01 (rights according to page definition)
* 3 (11) => User, exec => 00 (all accesses performed as supervisor)
* All pages' PP data bits are set to either 001 or 011 by copying _PAGE_EXEC
* into bit 21 in the ITLBmiss handler (bit 21 is the middle bit), which means
* respectively NA for All or X for Supervisor and no access for User.
* Then we use the APG to say whether accesses are according to Page rules or
* "all Supervisor" rules (Access to all)
* We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
* When that bit is not set access is done iaw "all user"
* which means no access iaw page rules.
* Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
* 0x => No access => 11 (all accesses performed as user iaw page definition)
* 10 => No user => 01 (all accesses performed according to page definition)
* 11 => User => 00 (all accesses performed as supervisor iaw page definition)
* We define all 16 groups so that all other bits of APG can take any value
*/
#define MI_APG_INIT 0xf4ffffff
#ifdef CONFIG_SWAP
#define MI_APG_INIT 0xf4f4f4f4
#else
#define MI_APG_INIT 0x44444444
#endif
/* The effective page number register. When read, contains the information
* about the last instruction TLB miss. When MI_RPN is written, bits in
@ -102,17 +110,25 @@
#define MD_Kp 0x40000000 /* Should always be set */
/*
* All pages' PP data bits are set to either 000 or 011, which means
* All pages' PP data bits are set to either 000 or 011 or 001, which means
* respectively RW for Supervisor and no access for User, or RO for
* Supervisor and no access for user.
* Supervisor and no access for user and NA for ALL.
* Then we use the APG to say whether accesses are according to Page rules or
* "all Supervisor" rules (Access to all)
* Therefore, we define 2 APG groups. lsb is _PAGE_USER
* 0 => No user => 01 (all accesses performed according to page definition)
* 1 => User => 00 (all accesses performed as supervisor
* according to page definition)
* We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
* When that bit is not set access is done iaw "all user"
* which means no access iaw page rules.
* Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
* 0x => No access => 11 (all accesses performed as user iaw page definition)
* 10 => No user => 01 (all accesses performed according to page definition)
* 11 => User => 00 (all accesses performed as supervisor iaw page definition)
* We define all 16 groups so that all other bits of APG can take any value
*/
#define MD_APG_INIT 0x4fffffff
#ifdef CONFIG_SWAP
#define MD_APG_INIT 0xf4f4f4f4
#else
#define MD_APG_INIT 0x44444444
#endif
/* The effective page number register. When read, contains the information
* about the last instruction TLB miss. When MD_RPN is written, bits in
@ -164,6 +180,12 @@
*/
#define SPRN_M_TW 799
/* APGs */
#define M_APG0 0x00000000
#define M_APG1 0x00000020
#define M_APG2 0x00000040
#define M_APG3 0x00000060
#ifndef __ASSEMBLY__
typedef struct {
unsigned int id;

View File

@ -260,6 +260,15 @@ static inline bool early_radix_enabled(void)
}
#endif
#ifdef CONFIG_PPC_MEM_KEYS
extern u16 get_mm_addr_key(struct mm_struct *mm, unsigned long address);
#else
static inline u16 get_mm_addr_key(struct mm_struct *mm, unsigned long address)
{
return 0;
}
#endif /* CONFIG_PPC_MEM_KEYS */
#endif /* !__ASSEMBLY__ */
/* The kernel use the constants below to index in the page sizes array.

View File

@ -187,11 +187,33 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
{
}
#ifdef CONFIG_PPC_MEM_KEYS
bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
bool execute, bool foreign);
#else /* CONFIG_PPC_MEM_KEYS */
static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
bool write, bool execute, bool foreign)
{
/* by default, allow everything */
return true;
}
#define pkey_mm_init(mm)
#define thread_pkey_regs_save(thread)
#define thread_pkey_regs_restore(new_thread, old_thread)
#define thread_pkey_regs_init(thread)
static inline int vma_pkey(struct vm_area_struct *vma)
{
return 0;
}
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
{
return 0x0UL;
}
#endif /* CONFIG_PPC_MEM_KEYS */
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */

View File

@ -29,17 +29,17 @@ struct mpic_timer {
#ifdef CONFIG_MPIC_TIMER
struct mpic_timer *mpic_request_timer(irq_handler_t fn, void *dev,
const struct timeval *time);
time64_t time);
void mpic_start_timer(struct mpic_timer *handle);
void mpic_stop_timer(struct mpic_timer *handle);
void mpic_get_remain_time(struct mpic_timer *handle, struct timeval *time);
void mpic_get_remain_time(struct mpic_timer *handle, time64_t *time);
void mpic_free_timer(struct mpic_timer *handle);
#else
struct mpic_timer *mpic_request_timer(irq_handler_t fn, void *dev,
const struct timeval *time) { return NULL; }
time64_t time) { return NULL; }
void mpic_start_timer(struct mpic_timer *handle) { }
void mpic_stop_timer(struct mpic_timer *handle) { }
void mpic_get_remain_time(struct mpic_timer *handle, struct timeval *time) { }
void mpic_get_remain_time(struct mpic_timer *handle, time64_t *time) { }
void mpic_free_timer(struct mpic_timer *handle) { }
#endif
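The timer API now takes a plain time64_t timeout (in seconds, as used by the converted callers) instead of a struct timeval, dropping a y2038-unsafe type. A hedged usage sketch; the handler, device cookie, and 30-second timeout are placeholders:

/* Sketch only. */
static void arm_wakeup_timer(irq_handler_t handler, void *dev)
{
	struct mpic_timer *t = mpic_request_timer(handler, dev, 30);

	if (!t)
		return;
	mpic_start_timer(t);
}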

View File

@ -4,10 +4,6 @@
#ifdef CONFIG_PPC_WATCHDOG
extern void arch_touch_nmi_watchdog(void);
extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
bool exclude_self);
#define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
#else
static inline void arch_touch_nmi_watchdog(void) {}
#endif

View File

@ -61,7 +61,8 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
pgtable_t pte_page)
{
*pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
*pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_USER |
_PMD_PRESENT);
}
#define pmd_pgtable(pmd) pmd_page(pmd)

View File

@ -282,7 +282,7 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
{
unsigned long set = pte_val(entry) &
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
unsigned long clr = ~pte_val(entry) & _PAGE_RO;
unsigned long clr = ~pte_val(entry) & (_PAGE_RO | _PAGE_NA);
pte_update(ptep, clr, set);
}

View File

@ -31,37 +31,34 @@
/* Definitions for 8xx embedded chips. */
#define _PAGE_PRESENT 0x0001 /* Page is valid */
#define _PAGE_NO_CACHE 0x0002 /* I: cache inhibit */
#define _PAGE_SHARED 0x0004 /* No ASID (context) compare */
#define _PAGE_SPECIAL 0x0008 /* SW entry, forced to 0 by the TLB miss */
#define _PAGE_PRIVILEGED 0x0004 /* No ASID (context) compare */
#define _PAGE_HUGE 0x0008 /* SPS: Small Page Size (1 if 16k, 512k or 8M)*/
#define _PAGE_DIRTY 0x0100 /* C: page changed */
/* These 4 software bits must be masked out when the L2 entry is loaded
* into the TLB.
*/
#define _PAGE_GUARDED 0x0010 /* Copied to L1 G entry in DTLB */
#define _PAGE_USER 0x0020 /* Copied to L1 APG lsb */
#define _PAGE_EXEC 0x0040 /* Copied to L1 APG */
#define _PAGE_WRITETHRU 0x0080 /* software: caching is write through */
#define _PAGE_ACCESSED 0x0800 /* software: page referenced */
#define _PAGE_SPECIAL 0x0020 /* SW entry */
#define _PAGE_EXEC 0x0040 /* Copied to PP (bit 21) in ITLB */
#define _PAGE_ACCESSED 0x0080 /* software: page referenced */
#define _PAGE_NA 0x0200 /* Supervisor NA, User no access */
#define _PAGE_RO 0x0600 /* Supervisor RO, User no access */
#define _PMD_PRESENT 0x0001
#define _PMD_BAD 0x0ff0
#define _PMD_BAD 0x0fd0
#define _PMD_PAGE_MASK 0x000c
#define _PMD_PAGE_8M 0x000c
#define _PMD_PAGE_512K 0x0004
#define _PMD_USER 0x0020 /* APG 1 */
/* Until my rework is finished, 8xx still needs atomic PTE updates */
#define PTE_ATOMIC_UPDATES 1
/* We need to add _PAGE_SHARED to kernel pages */
#define _PAGE_KERNEL_RO (_PAGE_SHARED | _PAGE_RO)
#define _PAGE_KERNEL_ROX (_PAGE_SHARED | _PAGE_RO | _PAGE_EXEC)
#define _PAGE_KERNEL_RW (_PAGE_SHARED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE)
#define _PAGE_KERNEL_RWX (_PAGE_SHARED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE | _PAGE_EXEC)
#ifdef CONFIG_PPC_16K_PAGES
#define _PAGE_PSIZE _PAGE_HUGE
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_32_PTE_8xx_H */

View File

@ -45,6 +45,29 @@ static inline int pte_present(pte_t pte)
return pte_val(pte) & _PAGE_PRESENT;
}
/*
* Page table entries are only found at the last level,
* hence no need for other accessors.
*/
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
unsigned long pteval = pte_val(pte);
/*
* A user access requires _PAGE_PRESENT and _PAGE_USER;
* a write access additionally requires _PAGE_WRITE.
*/
unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
if (write)
need_pte_bits |= _PAGE_WRITE;
if ((pteval & need_pte_bits) != need_pte_bits)
return false;
return true;
}
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*
@ -103,7 +126,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
static inline pte_t pte_mkhuge(pte_t pte)
{
return pte;
return __pte(pte_val(pte) | _PAGE_HUGE);
}
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
@ -212,8 +235,10 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long addre
#define pgprot_cached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT))
#if _PAGE_WRITETHRU != 0
#define pgprot_cached_wthru(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT | _PAGE_WRITETHRU))
#endif
#define pgprot_cached_noncoherent(prot) \
(__pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL))

View File

@ -55,6 +55,7 @@
#define _PAGE_KERNEL_RWX (_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)
#define _PAGE_KERNEL_ROX (_PAGE_BAP_SR | _PAGE_BAP_SX)
#define _PAGE_USER (_PAGE_BAP_UR | _PAGE_BAP_SR) /* Can be read */
#define _PAGE_PRIVILEGED (_PAGE_BAP_SR)
#define _PAGE_HASHPTE 0
#define _PAGE_BUSY 0

View File

@ -201,7 +201,10 @@
#define OPAL_SET_POWER_SHIFT_RATIO 155
#define OPAL_SENSOR_GROUP_CLEAR 156
#define OPAL_PCI_SET_P2P 157
#define OPAL_LAST 157
#define OPAL_NPU_SPA_SETUP 159
#define OPAL_NPU_SPA_CLEAR_CACHE 160
#define OPAL_NPU_TL_SET 161
#define OPAL_LAST 161
/* Device tree flags */

View File

@ -34,6 +34,12 @@ int64_t opal_npu_init_context(uint64_t phb_id, int pasid, uint64_t msr,
uint64_t bdf);
int64_t opal_npu_map_lpar(uint64_t phb_id, uint64_t bdf, uint64_t lparid,
uint64_t lpcr);
int64_t opal_npu_spa_setup(uint64_t phb_id, uint32_t bdfn,
uint64_t addr, uint64_t PE_mask);
int64_t opal_npu_spa_clear_cache(uint64_t phb_id, uint32_t bdfn,
uint64_t PE_handle);
int64_t opal_npu_tl_set(uint64_t phb_id, uint32_t bdfn, long cap,
uint64_t rate_phys, uint32_t size);
int64_t opal_console_write(int64_t term_number, __be64 *length,
const uint8_t *buffer);
int64_t opal_console_read(int64_t term_number, __be64 *length,

View File

@ -159,7 +159,7 @@ struct paca_struct {
u64 saved_r1; /* r1 save for RTAS calls or PM */
u64 saved_msr; /* MSR saved here by enter_rtas */
u16 trap_save; /* Used when bad stack is encountered */
u8 soft_enabled; /* irq soft-enable flag */
u8 irq_soft_mask; /* mask for irq soft masking */
u8 irq_happened; /* irq happened while soft-disabled */
u8 io_sync; /* writel() needs spin_unlock sync */
u8 irq_work_pending; /* IRQ_WORK interrupt while soft-disable */
@ -239,8 +239,7 @@ struct paca_struct {
*/
u64 exrfi[EX_SIZE] __aligned(0x80);
void *rfi_flush_fallback_area;
u64 l1d_flush_congruence;
u64 l1d_flush_sets;
u64 l1d_flush_size;
#endif
};

View File

@ -197,25 +197,22 @@ struct pci_dn {
struct iommu_table_group *table_group; /* for phb's or bridges */
int pci_ext_config_space; /* for pci devices */
struct pci_dev *pcidev; /* back-pointer to the pci device */
#ifdef CONFIG_EEH
struct eeh_dev *edev; /* eeh device */
#endif
#define IODA_INVALID_PE 0xFFFFFFFF
#ifdef CONFIG_PPC_POWERNV
unsigned int pe_number;
int vf_index; /* VF index in the PF */
#ifdef CONFIG_PCI_IOV
int vf_index; /* VF index in the PF */
u16 vfs_expanded; /* number of VFs IOV BAR expanded */
u16 num_vfs; /* number of VFs enabled*/
unsigned int *pe_num_map; /* PE# for the first VF PE or array */
bool m64_single_mode; /* Use M64 BAR in Single Mode */
#define IODA_INVALID_M64 (-1)
int (*m64_map)[PCI_SRIOV_NUM_BARS];
int (*m64_map)[PCI_SRIOV_NUM_BARS]; /* Only used on powernv */
int last_allow_rc; /* Only used on pseries */
#endif /* CONFIG_PCI_IOV */
int mps; /* Maximum Payload Size */
#endif
struct list_head child_list;
struct list_head list;
struct resource holes[PCI_SRIOV_NUM_BARS];

View File

@ -121,6 +121,8 @@ extern int remove_phb_dynamic(struct pci_controller *phb);
extern struct pci_dev *of_create_pci_dev(struct device_node *node,
struct pci_bus *bus, int devfn);
extern unsigned int pci_parse_of_flags(u32 addr0, int bridge);
extern void of_scan_pci_bridge(struct pci_dev *dev);
extern void of_scan_bus(struct device_node *node, struct pci_bus *bus);

View File

@ -0,0 +1,218 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* PowerPC Memory Protection Keys management
*
* Copyright 2017, Ram Pai, IBM Corporation.
*/
#ifndef _ASM_POWERPC_KEYS_H
#define _ASM_POWERPC_KEYS_H
#include <linux/jump_label.h>
#include <asm/firmware.h>
DECLARE_STATIC_KEY_TRUE(pkey_disabled);
extern int pkeys_total; /* total pkeys as per device tree */
extern u32 initial_allocation_mask; /* bits set for reserved keys */
/*
* Define these here temporarily so we're not dependent on patching linux/mm.h.
* Once it's updated we can drop these.
*/
#ifndef VM_PKEY_BIT0
# define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
# define VM_PKEY_BIT0 VM_HIGH_ARCH_0
# define VM_PKEY_BIT1 VM_HIGH_ARCH_1
# define VM_PKEY_BIT2 VM_HIGH_ARCH_2
# define VM_PKEY_BIT3 VM_HIGH_ARCH_3
# define VM_PKEY_BIT4 VM_HIGH_ARCH_4
#endif
#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
VM_PKEY_BIT3 | VM_PKEY_BIT4)
/* Override any generic PKEY permission defines */
#define PKEY_DISABLE_EXECUTE 0x4
#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS | \
PKEY_DISABLE_WRITE | \
PKEY_DISABLE_EXECUTE)
static inline u64 pkey_to_vmflag_bits(u16 pkey)
{
return (((u64)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
}
static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
{
if (static_branch_likely(&pkey_disabled))
return 0x0UL;
return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT4 : 0x0UL) |
((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT3 : 0x0UL) |
((vm_flags & VM_PKEY_BIT2) ? H_PTE_PKEY_BIT2 : 0x0UL) |
((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT1 : 0x0UL) |
((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT0 : 0x0UL));
}
static inline int vma_pkey(struct vm_area_struct *vma)
{
if (static_branch_likely(&pkey_disabled))
return 0;
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
}
#define arch_max_pkey() pkeys_total
static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
{
return (((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL));
}
static inline u16 pte_to_pkey_bits(u64 pteflags)
{
return (((pteflags & H_PTE_PKEY_BIT0) ? 0x10 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT1) ? 0x8 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT2) ? 0x4 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT3) ? 0x2 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT4) ? 0x1 : 0x0UL));
}
#define pkey_alloc_mask(pkey) (0x1 << pkey)
#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map)
#define __mm_pkey_allocated(mm, pkey) { \
mm_pkey_allocation_map(mm) |= pkey_alloc_mask(pkey); \
}
#define __mm_pkey_free(mm, pkey) { \
mm_pkey_allocation_map(mm) &= ~pkey_alloc_mask(pkey); \
}
#define __mm_pkey_is_allocated(mm, pkey) \
(mm_pkey_allocation_map(mm) & pkey_alloc_mask(pkey))
#define __mm_pkey_is_reserved(pkey) (initial_allocation_mask & \
pkey_alloc_mask(pkey))
static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
{
/* A reserved key is never considered as 'explicitly allocated' */
return ((pkey < arch_max_pkey()) &&
!__mm_pkey_is_reserved(pkey) &&
__mm_pkey_is_allocated(mm, pkey));
}
extern void __arch_activate_pkey(int pkey);
extern void __arch_deactivate_pkey(int pkey);
/*
* Returns a positive, 5-bit key on success, or -1 on failure.
* Relies on the mmap_sem to protect against concurrency in mm_pkey_alloc() and
* mm_pkey_free().
*/
static inline int mm_pkey_alloc(struct mm_struct *mm)
{
/*
* Note: this is the one and only place we make sure that the pkey is
* valid as far as the hardware is concerned. The rest of the kernel
* trusts that only good, valid pkeys come out of here.
*/
u32 all_pkeys_mask = (u32)(~(0x0));
int ret;
if (static_branch_likely(&pkey_disabled))
return -1;
/*
* Are we out of pkeys? We must handle this specially because ffz()
* behavior is undefined if there are no zeros.
*/
if (mm_pkey_allocation_map(mm) == all_pkeys_mask)
return -1;
ret = ffz((u32)mm_pkey_allocation_map(mm));
__mm_pkey_allocated(mm, ret);
/*
* Enable the key in the hardware
*/
if (ret > 0)
__arch_activate_pkey(ret);
return ret;
}
static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
{
if (static_branch_likely(&pkey_disabled))
return -1;
if (!mm_pkey_is_allocated(mm, pkey))
return -EINVAL;
/*
* Disable the key in the hardware
*/
__arch_deactivate_pkey(pkey);
__mm_pkey_free(mm, pkey);
return 0;
}
/*
* Try to dedicate one of the protection keys to be used as an
* execute-only protection key.
*/
extern int __execute_only_pkey(struct mm_struct *mm);
static inline int execute_only_pkey(struct mm_struct *mm)
{
if (static_branch_likely(&pkey_disabled))
return -1;
return __execute_only_pkey(mm);
}
extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
int prot, int pkey);
static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
int prot, int pkey)
{
if (static_branch_likely(&pkey_disabled))
return 0;
/*
* Is this an mprotect_pkey() call? If so, never override the value that
* came from the user.
*/
if (pkey != -1)
return pkey;
return __arch_override_mprotect_pkey(vma, prot, pkey);
}
extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
unsigned long init_val);
static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
unsigned long init_val)
{
if (static_branch_likely(&pkey_disabled))
return -EINVAL;
return __arch_set_user_pkey_access(tsk, pkey, init_val);
}
static inline bool arch_pkeys_enabled(void)
{
return !static_branch_likely(&pkey_disabled);
}
extern void pkey_mm_init(struct mm_struct *mm);
extern bool arch_supports_pkeys(int cap);
extern unsigned int arch_usable_pkeys(void);
extern void thread_pkey_regs_save(struct thread_struct *thread);
extern void thread_pkey_regs_restore(struct thread_struct *new_thread,
struct thread_struct *old_thread);
extern void thread_pkey_regs_init(struct thread_struct *thread);
#endif /*_ASM_POWERPC_KEYS_H */
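This header is the kernel-side plumbing; what it ultimately enables is the pkey_alloc/pkey_free/pkey_mprotect system calls wired up later in this series, plus the powerpc-specific PKEY_DISABLE_EXECUTE bit. A hedged userspace sketch using raw syscall() (the SYS_pkey_* numbers and PKEY_DISABLE_WRITE are assumed to come from the installed libc/uapi headers):

/* Userspace sketch, not from this merge: deny writes to a mapping
 * through a protection key rather than via mprotect().
 */
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int protect_readonly(void *buf, size_t len)
{
	int pkey = syscall(SYS_pkey_alloc, 0, PKEY_DISABLE_WRITE);

	if (pkey < 0)
		return -1;
	if (syscall(SYS_pkey_mprotect, buf, len, PROT_READ | PROT_WRITE, pkey) < 0) {
		syscall(SYS_pkey_free, pkey);
		return -1;
	}
	return pkey;	/* writes now fault until the key's rights are relaxed */
}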

View File

@ -0,0 +1,36 @@
// SPDX-License-Identifier: GPL-2.0+
// Copyright 2017 IBM Corp.
#ifndef _ASM_PNV_OCXL_H
#define _ASM_PNV_OCXL_H
#include <linux/pci.h>
#define PNV_OCXL_TL_MAX_TEMPLATE 63
#define PNV_OCXL_TL_BITS_PER_RATE 4
#define PNV_OCXL_TL_RATE_BUF_SIZE ((PNV_OCXL_TL_MAX_TEMPLATE+1) * PNV_OCXL_TL_BITS_PER_RATE / 8)
extern int pnv_ocxl_get_actag(struct pci_dev *dev, u16 *base, u16 *enabled,
u16 *supported);
extern int pnv_ocxl_get_pasid_count(struct pci_dev *dev, int *count);
extern int pnv_ocxl_get_tl_cap(struct pci_dev *dev, long *cap,
char *rate_buf, int rate_buf_size);
extern int pnv_ocxl_set_tl_conf(struct pci_dev *dev, long cap,
uint64_t rate_buf_phys, int rate_buf_size);
extern int pnv_ocxl_get_xsl_irq(struct pci_dev *dev, int *hwirq);
extern void pnv_ocxl_unmap_xsl_regs(void __iomem *dsisr, void __iomem *dar,
void __iomem *tfc, void __iomem *pe_handle);
extern int pnv_ocxl_map_xsl_regs(struct pci_dev *dev, void __iomem **dsisr,
void __iomem **dar, void __iomem **tfc,
void __iomem **pe_handle);
extern int pnv_ocxl_spa_setup(struct pci_dev *dev, void *spa_mem, int PE_mask,
void **platform_data);
extern void pnv_ocxl_spa_release(void *platform_data);
extern int pnv_ocxl_spa_remove_pe(void *platform_data, int pe_handle);
extern int pnv_ocxl_alloc_xive_irq(u32 *irq, u64 *trigger_addr);
extern void pnv_ocxl_free_xive_irq(u32 irq);
#endif /* _ASM_PNV_OCXL_H */
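These entry points back the new ocxl driver's OPAL calls. A sketch of the transaction-layer negotiation step only, with error handling trimmed and the buffer handling assumed (the authoritative user is drivers/misc/ocxl):

#include <linux/slab.h>

static int ocxl_tl_negotiate_sketch(struct pci_dev *dev)
{
	char *rate_buf;
	long cap;
	int rc;

	rate_buf = kzalloc(PNV_OCXL_TL_RATE_BUF_SIZE, GFP_KERNEL);
	if (!rate_buf)
		return -ENOMEM;

	/* Query the supported templates and rates, then apply a configuration. */
	rc = pnv_ocxl_get_tl_cap(dev, &cap, rate_buf, PNV_OCXL_TL_RATE_BUF_SIZE);
	if (!rc)
		rc = pnv_ocxl_set_tl_conf(dev, cap, __pa(rate_buf),
					  PNV_OCXL_TL_RATE_BUF_SIZE);
	kfree(rate_buf);
	return rc;
}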

View File

@ -236,6 +236,7 @@
#define PPC_INST_RFCI 0x4c000066
#define PPC_INST_RFDI 0x4c00004e
#define PPC_INST_RFMCI 0x4c00004c
#define PPC_INST_MFSPR 0x7c0002a6
#define PPC_INST_MFSPR_DSCR 0x7c1102a6
#define PPC_INST_MFSPR_DSCR_MASK 0xfc1ffffe
#define PPC_INST_MTSPR_DSCR 0x7c1103a6
@ -383,6 +384,7 @@
#define __PPC_ME64(s) __PPC_MB64(s)
#define __PPC_BI(s) (((s) & 0x1f) << 16)
#define __PPC_CT(t) (((t) & 0x0f) << 21)
#define __PPC_SPR(r) ((((r) & 0x1f) << 16) | ((((r) >> 5) & 0x1f) << 11))
/*
* Only use the larx hint bit on 64bit CPUs. e500v1/v2 based CPUs will treat a

View File

@ -309,6 +309,11 @@ struct thread_struct {
struct thread_vr_state ckvr_state; /* Checkpointed VR state */
unsigned long ckvrsave; /* Checkpointed VRSAVE */
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
#ifdef CONFIG_PPC_MEM_KEYS
unsigned long amr;
unsigned long iamr;
unsigned long uamor;
#endif
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void* kvm_shadow_vcpu; /* KVM internal data */
#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */

View File

@ -80,21 +80,20 @@ extern void of_instantiate_rtc(void);
extern int of_get_ibm_chip_id(struct device_node *np);
/* The of_drconf_cell struct defines the layout of the LMB array
* specified in the device tree property
* ibm,dynamic-reconfiguration-memory/ibm,dynamic-memory
*/
struct of_drconf_cell {
u64 base_addr;
u32 drc_index;
u32 reserved;
u32 aa_index;
u32 flags;
struct of_drc_info {
char *drc_type;
char *drc_name_prefix;
u32 drc_index_start;
u32 drc_name_suffix_start;
u32 num_sequential_elems;
u32 sequential_inc;
u32 drc_power_domain;
u32 last_drc_index;
};
#define DRCONF_MEM_ASSIGNED 0x00000008
#define DRCONF_MEM_AI_INVALID 0x00000040
#define DRCONF_MEM_RESERVED 0x00000080
extern int of_read_drc_info_cell(struct property **prop,
const __be32 **curval, struct of_drc_info *data);
/*
* There are two methods for telling firmware what our capabilities are.
@ -159,6 +158,7 @@ struct of_drconf_cell {
#define OV5_PFO_HW_842 0x1140 /* PFO Compression Accelerator */
#define OV5_PFO_HW_ENCR 0x1120 /* PFO Encryption Accelerator */
#define OV5_SUB_PROCESSORS 0x1501 /* 1,2,or 4 Sub-Processors supported */
#define OV5_DRMEM_V2 0x1680 /* ibm,dynamic-reconfiguration-v2 */
#define OV5_XIVE_SUPPORT 0x17C0 /* XIVE Exploitation Support Mask */
#define OV5_XIVE_LEGACY 0x1700 /* XIVE legacy mode Only */
#define OV5_XIVE_EXPLOIT 0x1740 /* XIVE exploitation mode Only */
@ -175,6 +175,7 @@ struct of_drconf_cell {
#define OV5_HASH_GTSE 0x1940 /* Guest Translation Shoot Down Avail */
/* Radix Table Extensions */
#define OV5_RADIX_GTSE 0x1A40 /* Guest Translation Shoot Down Avail */
#define OV5_DRC_INFO 0x1640 /* Redef Prop Structures: drc-info */
/* Option Vector 6: IBM PAPR hints */
#define OV6_LINUX 0x02 /* Linux is our OS */

View File

@ -8,9 +8,6 @@
#ifndef _PAGE_HASHPTE
#define _PAGE_HASHPTE 0
#endif
#ifndef _PAGE_SHARED
#define _PAGE_SHARED 0
#endif
#ifndef _PAGE_HWWRITE
#define _PAGE_HWWRITE 0
#endif
@ -45,6 +42,20 @@
#ifndef _PAGE_PTE
#define _PAGE_PTE 0
#endif
/* At least one of _PAGE_PRIVILEGED or _PAGE_USER must be defined */
#ifndef _PAGE_PRIVILEGED
#define _PAGE_PRIVILEGED 0
#else
#ifndef _PAGE_USER
#define _PAGE_USER 0
#endif
#endif
#ifndef _PAGE_NA
#define _PAGE_NA 0
#endif
#ifndef _PAGE_HUGE
#define _PAGE_HUGE 0
#endif
#ifndef _PMD_PRESENT_MASK
#define _PMD_PRESENT_MASK _PMD_PRESENT
@ -53,17 +64,22 @@
#define _PMD_SIZE 0
#define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE()
#endif
#ifndef _PMD_USER
#define _PMD_USER 0
#endif
#ifndef _PAGE_KERNEL_RO
#define _PAGE_KERNEL_RO (_PAGE_RO)
#define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_RO)
#endif
#ifndef _PAGE_KERNEL_ROX
#define _PAGE_KERNEL_ROX (_PAGE_EXEC | _PAGE_RO)
#define _PAGE_KERNEL_ROX (_PAGE_PRIVILEGED | _PAGE_RO | _PAGE_EXEC)
#endif
#ifndef _PAGE_KERNEL_RW
#define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
#define _PAGE_KERNEL_RW (_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE)
#endif
#ifndef _PAGE_KERNEL_RWX
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE | _PAGE_EXEC)
#define _PAGE_KERNEL_RWX (_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE | _PAGE_EXEC)
#endif
#ifndef _PAGE_HPTEFLAGS
#define _PAGE_HPTEFLAGS _PAGE_HASHPTE
@ -85,7 +101,7 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
*/
static inline bool pte_user(pte_t pte)
{
return (pte_val(pte) & _PAGE_USER) == _PAGE_USER;
return (pte_val(pte) & (_PAGE_USER | _PAGE_PRIVILEGED)) == _PAGE_USER;
}
#endif /* __ASSEMBLY__ */
@ -115,7 +131,8 @@ static inline bool pte_user(pte_t pte)
/* Mask of bits returned by pte_pgprot() */
#define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
_PAGE_WRITETHRU | _PAGE_ENDIAN | _PAGE_4K_PFN | \
_PAGE_USER | _PAGE_ACCESSED | _PAGE_RO | \
_PAGE_USER | _PAGE_ACCESSED | _PAGE_RO | _PAGE_NA | \
_PAGE_PRIVILEGED | \
_PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | _PAGE_EXEC)
/*
@ -142,7 +159,7 @@ static inline bool pte_user(pte_t pte)
*
* Note due to the way vm flags are laid out, the bits are XWR
*/
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_NA)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
_PAGE_EXEC)

View File

@ -312,7 +312,6 @@
DSISR_BAD_EXT_CTRL)
#define DSISR_BAD_FAULT_64S (DSISR_BAD_FAULT_32S | \
DSISR_ATTR_CONFLICT | \
DSISR_KEYFAULT | \
DSISR_UNSUPP_MMU | \
DSISR_PRTABLE_FAULT | \
DSISR_ICSWX_NO_CT | \
@ -432,8 +431,9 @@
#define SPRN_LPID 0x13F /* Logical Partition Identifier */
#endif
#define LPID_RSVD 0x3ff /* Reserved LPID for partn switching */
#define SPRN_HMER 0x150 /* Hardware m? error recovery */
#define SPRN_HMEER 0x151 /* Hardware m? enable error recovery */
#define SPRN_HMER 0x150 /* Hypervisor maintenance exception reg */
#define HMER_DEBUG_TRIG (1ul << (63 - 17)) /* Debug trigger */
#define SPRN_HMEER 0x151 /* Hyp maintenance exception enable reg */
#define SPRN_PCR 0x152 /* Processor compatibility register */
#define PCR_VEC_DIS (1ul << (63-0)) /* Vec. disable (bit NA since POWER8) */
#define PCR_VSX_DIS (1ul << (63-1)) /* VSX disable (bit NA since POWER8) */

View File

@ -66,86 +66,4 @@
#define DC_DFWT 0x40000000 /* Data cache is forced write through */
#define DC_LES 0x20000000 /* Caches are little endian mode */
#ifdef CONFIG_8xx_CPU6
#define do_mtspr_cpu6(rn, rn_addr, v) \
do { \
int _reg_cpu6 = rn_addr, _tmp_cpu6; \
asm volatile("stw %0, %1;" \
"lwz %0, %1;" \
"mtspr " __stringify(rn) ",%2" : \
: "r" (_reg_cpu6), "m"(_tmp_cpu6), \
"r" ((unsigned long)(v)) \
: "memory"); \
} while (0)
#define do_mtspr(rn, v) asm volatile("mtspr " __stringify(rn) ",%0" : \
: "r" ((unsigned long)(v)) \
: "memory")
#define mtspr(rn, v) \
do { \
if (rn == SPRN_IMMR) \
do_mtspr_cpu6(rn, 0x3d30, v); \
else if (rn == SPRN_IC_CST) \
do_mtspr_cpu6(rn, 0x2110, v); \
else if (rn == SPRN_IC_ADR) \
do_mtspr_cpu6(rn, 0x2310, v); \
else if (rn == SPRN_IC_DAT) \
do_mtspr_cpu6(rn, 0x2510, v); \
else if (rn == SPRN_DC_CST) \
do_mtspr_cpu6(rn, 0x3110, v); \
else if (rn == SPRN_DC_ADR) \
do_mtspr_cpu6(rn, 0x3310, v); \
else if (rn == SPRN_DC_DAT) \
do_mtspr_cpu6(rn, 0x3510, v); \
else if (rn == SPRN_MI_CTR) \
do_mtspr_cpu6(rn, 0x2180, v); \
else if (rn == SPRN_MI_AP) \
do_mtspr_cpu6(rn, 0x2580, v); \
else if (rn == SPRN_MI_EPN) \
do_mtspr_cpu6(rn, 0x2780, v); \
else if (rn == SPRN_MI_TWC) \
do_mtspr_cpu6(rn, 0x2b80, v); \
else if (rn == SPRN_MI_RPN) \
do_mtspr_cpu6(rn, 0x2d80, v); \
else if (rn == SPRN_MI_CAM) \
do_mtspr_cpu6(rn, 0x2190, v); \
else if (rn == SPRN_MI_RAM0) \
do_mtspr_cpu6(rn, 0x2390, v); \
else if (rn == SPRN_MI_RAM1) \
do_mtspr_cpu6(rn, 0x2590, v); \
else if (rn == SPRN_MD_CTR) \
do_mtspr_cpu6(rn, 0x3180, v); \
else if (rn == SPRN_M_CASID) \
do_mtspr_cpu6(rn, 0x3380, v); \
else if (rn == SPRN_MD_AP) \
do_mtspr_cpu6(rn, 0x3580, v); \
else if (rn == SPRN_MD_EPN) \
do_mtspr_cpu6(rn, 0x3780, v); \
else if (rn == SPRN_M_TWB) \
do_mtspr_cpu6(rn, 0x3980, v); \
else if (rn == SPRN_MD_TWC) \
do_mtspr_cpu6(rn, 0x3b80, v); \
else if (rn == SPRN_MD_RPN) \
do_mtspr_cpu6(rn, 0x3d80, v); \
else if (rn == SPRN_M_TW) \
do_mtspr_cpu6(rn, 0x3f80, v); \
else if (rn == SPRN_MD_CAM) \
do_mtspr_cpu6(rn, 0x3190, v); \
else if (rn == SPRN_MD_RAM0) \
do_mtspr_cpu6(rn, 0x3390, v); \
else if (rn == SPRN_MD_RAM1) \
do_mtspr_cpu6(rn, 0x3590, v); \
else if (rn == SPRN_DEC) \
do_mtspr_cpu6(rn, 0x2c00, v); \
else if (rn == SPRN_TBWL) \
do_mtspr_cpu6(rn, 0x3880, v); \
else if (rn == SPRN_TBWU) \
do_mtspr_cpu6(rn, 0x3a80, v); \
else if (rn == SPRN_DPDR) \
do_mtspr_cpu6(rn, 0x2d30, v); \
else \
do_mtspr(rn, v); \
} while (0)
#endif
#endif /* _ASM_POWERPC_REG_8xx_H */

View File

@ -389,3 +389,6 @@ COMPAT_SYS_SPU(preadv2)
COMPAT_SYS_SPU(pwritev2)
SYSCALL(kexec_file_load)
SYSCALL(statx)
SYSCALL(pkey_alloc)
SYSCALL(pkey_free)
SYSCALL(pkey_mprotect)

View File

@ -12,14 +12,10 @@
#include <uapi/asm/unistd.h>
#define NR_syscalls 384
#define NR_syscalls 387
#define __NR__exit __NR_exit
#define __IGNORE_pkey_mprotect
#define __IGNORE_pkey_alloc
#define __IGNORE_pkey_free
#ifndef __ASSEMBLY__
#include <linux/types.h>

View File

@ -9,6 +9,41 @@
#ifndef _ASM_POWERPC_XIVE_REGS_H
#define _ASM_POWERPC_XIVE_REGS_H
/*
* "magic" Event State Buffer (ESB) MMIO offsets.
*
* Each interrupt source has a 2-bit state machine called ESB
* which can be controlled by MMIO. It's made of 2 bits, P and
* Q. P indicates that an interrupt is pending (has been sent
* to a queue and is waiting for an EOI). Q indicates that the
* interrupt has been triggered while pending.
*
* This acts as a coalescing mechanism in order to guarantee
* that a given interrupt only occurs at most once in a queue.
*
* When doing an EOI, the Q bit will indicate if the interrupt
* needs to be re-triggered.
*
* The following offsets into the ESB MMIO allow the PQ bits to be
* read or manipulated. They must be used with an 8-byte load
* instruction. They all return the previous state of the
* interrupt (atomically).
*
* Additionally, some ESB pages support doing an EOI via a
* store at 0 and some ESBs support doing a trigger via a
* separate trigger page.
*/
#define XIVE_ESB_STORE_EOI 0x400 /* Store */
#define XIVE_ESB_LOAD_EOI 0x000 /* Load */
#define XIVE_ESB_GET 0x800 /* Load */
#define XIVE_ESB_SET_PQ_00 0xc00 /* Load */
#define XIVE_ESB_SET_PQ_01 0xd00 /* Load */
#define XIVE_ESB_SET_PQ_10 0xe00 /* Load */
#define XIVE_ESB_SET_PQ_11 0xf00 /* Load */
#define XIVE_ESB_VAL_P 0x2
#define XIVE_ESB_VAL_Q 0x1
/*
* Thread Management (aka "TM") registers
*/
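
The ESB comment added above describes a small MMIO protocol: an 8-byte load
at one of the listed offsets returns the source's previous P/Q state, and
the SET_PQ_xx offsets additionally update it. Below is a minimal sketch of a
consumer, assuming a hypothetical esb_mmio pointer already mapped to the
source's ESB page; the helper names are invented for illustration, and the
in-tree XIVE code under arch/powerpc/sysdev/xive/ is the real implementation
and differs in detail.

#include <linux/io.h>
#include <linux/types.h>
#include <asm/xive-regs.h>

/*
 * 8-byte MMIO load at the given ESB offset; per the comment above, the
 * load returns the previous P/Q state of the interrupt source.
 */
static u8 esb_load(void __iomem *esb_mmio, u32 offset)
{
	return in_be64(esb_mmio + offset) & (XIVE_ESB_VAL_P | XIVE_ESB_VAL_Q);
}

/* Read the current P/Q bits without changing them. */
static u8 esb_get_pq(void __iomem *esb_mmio)
{
	return esb_load(esb_mmio, XIVE_ESB_GET);
}

/* Atomically set P/Q to 00 and learn what the state was beforehand. */
static u8 esb_reset(void __iomem *esb_mmio)
{
	return esb_load(esb_mmio, XIVE_ESB_SET_PQ_00);
}

Because every load hands back the previous state atomically, the SET_PQ_xx
offsets double as a test-and-set, which is what the coalescing logic relies on.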


@@ -58,6 +58,9 @@ struct xive_irq_data {
#define XIVE_IRQ_FLAG_EOI_FW 0x10
#define XIVE_IRQ_FLAG_H_INT_ESB 0x20
/* Special flag set by KVM for escalation interrupts */
#define XIVE_IRQ_NO_EOI 0x80
#define XIVE_INVALID_CHIP_ID -1
/* A queue tracking structure in a CPU */
@@ -72,41 +75,6 @@ struct xive_q {
atomic_t pending_count;
};
/*
* "magic" Event State Buffer (ESB) MMIO offsets.
*
* Each interrupt source has a 2-bit state machine called ESB
* which can be controlled by MMIO. It's made of 2 bits, P and
* Q. P indicates that an interrupt is pending (has been sent
* to a queue and is waiting for an EOI). Q indicates that the
* interrupt has been triggered while pending.
*
* This acts as a coalescing mechanism in order to guarantee
* that a given interrupt only occurs at most once in a queue.
*
* When doing an EOI, the Q bit will indicate if the interrupt
* needs to be re-triggered.
*
* The following offsets into the ESB MMIO allow the PQ bits to be
* read or manipulated. They must be used with an 8-byte load
* instruction. They all return the previous state of the
* interrupt (atomically).
*
* Additionally, some ESB pages support doing an EOI via a
* store at 0 and some ESBs support doing a trigger via a
* separate trigger page.
*/
#define XIVE_ESB_STORE_EOI 0x400 /* Store */
#define XIVE_ESB_LOAD_EOI 0x000 /* Load */
#define XIVE_ESB_GET 0x800 /* Load */
#define XIVE_ESB_SET_PQ_00 0xc00 /* Load */
#define XIVE_ESB_SET_PQ_01 0xd00 /* Load */
#define XIVE_ESB_SET_PQ_10 0xe00 /* Load */
#define XIVE_ESB_SET_PQ_11 0xf00 /* Load */
#define XIVE_ESB_VAL_P 0x2
#define XIVE_ESB_VAL_Q 0x1
/* Global enable flags for the XIVE support */
extern bool __xive_enabled;
@@ -154,7 +122,7 @@ static inline bool xive_enabled(void) { return false; }
static inline bool xive_spapr_init(void) { return false; }
static inline bool xive_native_init(void) { return false; }
static inline void xive_smp_probe(void) { }
extern inline int xive_smp_prepare_cpu(unsigned int cpu) { return -EINVAL; }
static inline int xive_smp_prepare_cpu(unsigned int cpu) { return -EINVAL; }
static inline void xive_smp_setup_cpu(void) { }
static inline void xive_smp_disable_cpu(void) { }
static inline void xive_kexec_teardown_cpu(int secondary) { }


@@ -97,6 +97,7 @@
#define ELF_NTMSPRREG 3 /* include tfhar, tfiar, texasr */
#define ELF_NEBB 3 /* includes ebbrr, ebbhr, bescr */
#define ELF_NPMU 5 /* includes siar, sdar, sier, mmcr2, mmcr0 */
#define ELF_NPKEY 3 /* includes amr, iamr, uamor */
typedef unsigned long elf_greg_t64;
typedef elf_greg_t64 elf_gregset_t64[ELF_NGREG];


@@ -30,4 +30,10 @@
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
/* Override any generic PKEY permission defines */
#define PKEY_DISABLE_EXECUTE 0x4
#undef PKEY_ACCESS_MASK
#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
			  PKEY_DISABLE_WRITE |\
			  PKEY_DISABLE_EXECUTE)
#endif /* _UAPI_ASM_POWERPC_MMAN_H */


@@ -395,5 +395,8 @@
#define __NR_pwritev2 381
#define __NR_kexec_file_load 382
#define __NR_statx 383
#define __NR_pkey_alloc 384
#define __NR_pkey_free 385
#define __NR_pkey_mprotect 386
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
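
The three syscall numbers above, together with the powerpc-specific
PKEY_DISABLE_EXECUTE added to mman.h earlier in this diff, are the
userspace-visible side of the pkeys support. A minimal userspace sketch,
assuming a libc without pkey_* wrappers (so the syscalls are invoked
directly); error handling is omitted:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Syscall numbers from the uapi unistd.h hunk above (powerpc);
 * newer toolchains may already define them. */
#ifndef __NR_pkey_alloc
#define __NR_pkey_alloc		384
#define __NR_pkey_free		385
#define __NR_pkey_mprotect	386
#endif

/* Generic uapi value; defined here in case the libc headers lack it. */
#ifndef PKEY_DISABLE_WRITE
#define PKEY_DISABLE_WRITE	0x2
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Allocate a key whose initial access rights forbid writes... */
	long pkey = syscall(__NR_pkey_alloc, 0, PKEY_DISABLE_WRITE);

	/* ...and tag the mapping with it. */
	syscall(__NR_pkey_mprotect, p, page, PROT_READ | PROT_WRITE, pkey);

	/*
	 * Stores through p now fault until the key's access rights are
	 * relaxed (on powerpc, by updating the AMR register).
	 */

	syscall(__NR_pkey_free, pkey);
	return 0;
}

pkey_mprotect() behaves like mprotect() but also tags the range with the key,
whose access rights can later be changed from userspace (via the AMR on
powerpc) without a further syscall.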

Some files were not shown because too many files have changed in this diff.