
powerpc updates for 4.17

Notable changes:
 
  - Support for 4PB user address space on 64-bit, opt-in via mmap().
 
  - Removal of POWER4 support, which was accidentally broken in 2016 without
    anyone noticing, and which blocked the use of some modern instructions.
 
  - Workarounds so that the hypervisor can enable Transactional Memory on Power9.
 
  - A series to disable the DAWR (Data Address Watchpoint Register) on Power9.
 
  - More information displayed in the meltdown/spectre_v1/v2 sysfs files.
 
  - A vpermxor (Power8 Altivec) implementation for the raid6 Q Syndrome.
 
  - A big series to make the allocation of our pacas (per cpu area), kernel page
    tables, and per-cpu stacks NUMA aware when using the Radix MMU on Power9.
 
 And as usual many fixes, reworks and cleanups.
 
 Thanks to:
   Aaro Koskinen, Alexandre Belloni, Alexey Kardashevskiy, Alistair Popple, Andy
   Shevchenko, Aneesh Kumar K.V, Anshuman Khandual, Balbir Singh, Benjamin
   Herrenschmidt, Christophe Leroy, Christophe Lombard, Cyril Bur, Daniel Axtens,
   Dave Young, Finn Thain, Frederic Barrat, Gustavo Romero, Horia Geantă,
   Jonathan Neuschäfer, Kees Cook, Larry Finger, Laurent Dufour, Laurent Vivier,
   Logan Gunthorpe, Madhavan Srinivasan, Mark Greer, Mark Hairgrove, Markus
   Elfring, Mathieu Malaterre, Matt Brown, Matt Evans, Mauricio Faria de
   Oliveira, Michael Neuling, Naveen N. Rao, Nicholas Piggin, Paul Mackerras,
   Philippe Bergheaud, Ram Pai, Rob Herring, Sam Bobroff, Segher Boessenkool,
   Simon Guo, Simon Horman, Stewart Smith, Sukadev Bhattiprolu, Suraj Jitindar
   Singh, Thiago Jung Bauermann, Vaibhav Jain, Vaidyanathan Srinivasan, Vasant
   Hegde, Wei Yongjun.
 -----BEGIN PGP SIGNATURE-----
 
 iQIwBAABCAAaBQJayKxDExxtcGVAZWxsZXJtYW4uaWQuYXUACgkQUevqPMjhpYAr
 JQ/6A9Xs4zHDn9OeT9esEIxciETqUlrP0Wp64c4JVC7EkG1E7xRDZ4Xb4m8R2nNt
 9sPhtNO1yCtEk6kFQtPNB0N8v6pud4I6+aMcYnn+tP8mJRYQ4x9bYaF3Hw98IKmE
 Kd6TglmsUQvh2GpwPiF93KpzzWu1HB2kZzzqJcAMTMh7C79Qz00BjrTJltzXB2jx
 tJ+B4lVy8BeU8G5nDAzJEEwb5Ypkn8O40rS/lpAwVTYOBJ8Rbyq8Fj82FeREK9YO
 4EGaEKPkC/FdzX7OJV3v2/nldCd8pzV471fAoGuBUhJiJBMBoBybcTHIdDex7LlL
 zMLV1mUtGo8iolRPhL8iCH+GGifZz2WzstYCozz7hgIraWtc/frq9rZp6q0LdH/K
 trk7UbPGlVb92ecWZVpZyEcsMzKrCgZqnAe9wRNh1uEKScEdzd/bmRaMhENUObRh
 Hili6AVvmSKExpy7k2sZP/oUMaeC15/xz8Lk7l8a/iCkYhNmPYh5iSXM5+UKpcRT
 FYOcO0o3DwXsN46Whow3nJ7TqAsDy9/ecPUG71JQi3ZrHnRrm8jxkn8MCG5pZ1Fi
 KvKDxlg6RiJo3DF9/fSOpJUokvMwqBS5dJo4eh5eiDy94aBTqmBKFecvPxQm7a0L
 l3uXCF/6JuXEvMukFjGBO4RiYhw8i+B2uKsh81XUh7HKrgE=
 =HAB1
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Notable changes:

   - Support for 4PB user address space on 64-bit, opt-in via mmap().

   - Removal of POWER4 support, which was accidentally broken in 2016
     without anyone noticing, and which blocked the use of some modern
     instructions.

   - Workarounds so that the hypervisor can enable Transactional Memory
     on Power9.

   - A series to disable the DAWR (Data Address Watchpoint Register) on
     Power9.

   - More information displayed in the meltdown/spectre_v1/v2 sysfs
     files.

   - A vpermxor (Power8 Altivec) implementation for the raid6 Q
     Syndrome.

   - A big series to make the allocation of our pacas (per cpu area),
     kernel page tables, and per-cpu stacks NUMA aware when using the
     Radix MMU on Power9.

  And as usual many fixes, reworks and cleanups.

  Thanks to: Aaro Koskinen, Alexandre Belloni, Alexey Kardashevskiy,
  Alistair Popple, Andy Shevchenko, Aneesh Kumar K.V, Anshuman Khandual,
  Balbir Singh, Benjamin Herrenschmidt, Christophe Leroy, Christophe
  Lombard, Cyril Bur, Daniel Axtens, Dave Young, Finn Thain, Frederic
  Barrat, Gustavo Romero, Horia Geantă, Jonathan Neuschäfer, Kees Cook,
  Larry Finger, Laurent Dufour, Laurent Vivier, Logan Gunthorpe,
  Madhavan Srinivasan, Mark Greer, Mark Hairgrove, Markus Elfring,
  Mathieu Malaterre, Matt Brown, Matt Evans, Mauricio Faria de Oliveira,
  Michael Neuling, Naveen N. Rao, Nicholas Piggin, Paul Mackerras,
  Philippe Bergheaud, Ram Pai, Rob Herring, Sam Bobroff, Segher
  Boessenkool, Simon Guo, Simon Horman, Stewart Smith, Sukadev
  Bhattiprolu, Suraj Jitindar Singh, Thiago Jung Bauermann, Vaibhav
  Jain, Vaidyanathan Srinivasan, Vasant Hegde, Wei Yongjun"

* tag 'powerpc-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (207 commits)
  powerpc/64s/idle: Fix restore of AMOR on POWER9 after deep sleep
  powerpc/64s: Fix POWER9 DD2.2 and above in cputable features
  powerpc/64s: Fix pkey support in dt_cpu_ftrs, add CPU_FTR_PKEY bit
  powerpc/64s: Fix dt_cpu_ftrs to have restore_cpu clear unwanted LPCR bits
  Revert "powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead"
  powerpc: iomap.c: introduce io{read|write}64_{lo_hi|hi_lo}
  powerpc: io.h: move iomap.h include so that it can use readq/writeq defs
  cxl: Fix possible deadlock when processing page faults from cxllib
  powerpc/hw_breakpoint: Only disable hw breakpoint if cpu supports it
  powerpc/mm/radix: Update command line parsing for disable_radix
  powerpc/mm/radix: Parse disable_radix commandline correctly.
  powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb
  powerpc/mm/radix: Update pte fragment count from 16 to 256 on radix
  powerpc/mm/keys: Update documentation and remove unnecessary check
  powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead
  powerpc/64s/idle: Consolidate power9_offline_stop()/power9_idle_stop()
  powerpc/powernv: Always stop secondaries before reboot/shutdown
  powerpc: hard disable irqs in smp_send_stop loop
  powerpc: use NMI IPI for smp_send_stop
  powerpc/powernv: Fix SMT4 forcing idle code
  ...
Linus Torvalds 2018-04-07 12:08:19 -07:00
commit 49a695ba72
275 changed files with 4647 additions and 2432 deletions
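
For context on the "Support for 4PB user address space on 64-bit, opt-in via mmap()" item above: a process keeps the traditional 128TB layout unless it explicitly asks for more by passing an mmap() hint address above that boundary. A minimal userspace sketch (the hint and mapping size are illustrative values only, not anything mandated by the interface):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        /* Hint above the default 128TB boundary; illustrative value only. */
        void *hint = (void *)(1UL << 48);               /* 256TB */
        void *p = mmap(hint, 1UL << 20, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                perror("mmap");
        else
                printf("mapped at %p\n", p);
        return 0;
}

Without such a hint the kernel keeps handing out addresses below 128TB, so existing binaries see no change.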

View File

@ -141,11 +141,18 @@ AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
endif
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc))
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions)
CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 $(MULTIPLEWORD)
CFLAGS-$(CONFIG_PPC32) += $(call cc-option,-mno-readonly-in-sdata)
ifeq ($(CONFIG_PPC_BOOK3S_64),y)
CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,-mtune=power4)
CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power4
ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power8
CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power9,-mtune=power8)
else
CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
endif
else
CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
endif
@ -166,11 +173,11 @@ ifdef CONFIG_MPROFILE_KERNEL
endif
CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
CFLAGS-$(CONFIG_POWER6_CPU) += $(call cc-option,-mcpu=power6)
CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7)
CFLAGS-$(CONFIG_POWER8_CPU) += $(call cc-option,-mcpu=power8)
CFLAGS-$(CONFIG_POWER9_CPU) += $(call cc-option,-mcpu=power9)
# Altivec option not allowed with e500mc64 in GCC.
ifeq ($(CONFIG_ALTIVEC),y)
@ -243,6 +250,7 @@ endif
cpu-as-$(CONFIG_4xx) += -Wa,-m405
cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec)
cpu-as-$(CONFIG_E200) += -Wa,-me200
cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4
KBUILD_AFLAGS += $(cpu-as-y)
KBUILD_CFLAGS += $(cpu-as-y)
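
A hedged reading of the Makefile hunks above, taken together with the "Removal of POWER4 support" item in the commit message: generic (CONFIG_GENERIC_CPU) Book3S-64 builds now target -mcpu=power8 (plus -mtune=power9 where the compiler supports it) on little-endian, fall back through -mcpu=power5 / -mtune=power7 on big-endian, and assemble all Book3S-64 objects with -Wa,-mpower4 as the baseline, while non-Book3S-64 64-bit builds keep -mcpu=powerpc64.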

View File

@ -219,6 +219,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -178,6 +178,6 @@
};
chosen {
linux,stdout-path = &console;
stdout-path = &console;
};
};

View File

@ -177,6 +177,6 @@
};
chosen {
linux,stdout-path = &console;
stdout-path = &console;
};
};

View File

@ -410,6 +410,6 @@
};
chosen {
linux,stdout-path = &UART0;
stdout-path = &UART0;
};
};

View File

@ -168,6 +168,6 @@
};
chosen {
linux,stdout-path = "/pci@80000000/isa@7/serial@3f8";
stdout-path = "/pci@80000000/isa@7/serial@3f8";
};
};

View File

@ -304,7 +304,7 @@
chosen {
bootargs = "console=ttyS0,38400 root=/dev/mtdblock3 rootfstype=jffs2";
linux,stdout-path = &serial0;
stdout-path = &serial0;
};
};

View File

@ -295,6 +295,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -361,6 +361,6 @@
};
};
chosen {
linux,stdout-path = &MPSC0;
stdout-path = &MPSC0;
};
};

View File

@ -237,6 +237,6 @@
};
chosen {
linux,stdout-path = &UART0;
stdout-path = &UART0;
};
};

View File

@ -78,7 +78,7 @@
};
rtc@56 {
compatible = "mc,rv3029c2";
compatible = "microcrystal,rv3029";
reg = <0x56>;
};

View File

@ -332,6 +332,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@40000200";
stdout-path = "/plb/opb/serial@40000200";
};
};

View File

@ -421,7 +421,7 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600200";
stdout-path = "/plb/opb/serial@ef600200";
};
};

View File

@ -225,6 +225,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -146,7 +146,7 @@
};
chosen {
linux,stdout-path = &serial0;
stdout-path = &serial0;
};
};

View File

@ -607,7 +607,7 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@b0020000";
stdout-path = "/plb/opb/serial@b0020000";
bootargs = "console=ttyS0,115200 rw log_buf_len=32768 debug";
};
};

View File

@ -191,6 +191,6 @@
};
chosen {
linux,stdout-path = "/tsi109@c0000000/serial@7808";
stdout-path = "/tsi109@c0000000/serial@7808";
};
};

View File

@ -291,6 +291,6 @@
};
chosen {
linux,stdout-path = &UART0;
stdout-path = &UART0;
};
};

View File

@ -442,6 +442,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@f0000200";
stdout-path = "/plb/opb/serial@f0000200";
};
};

View File

@ -150,6 +150,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@40000200";
stdout-path = "/plb/opb/serial@40000200";
};
};

View File

@ -111,6 +111,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@40000200";
stdout-path = "/plb/opb/serial@40000200";
};
};

View File

@ -505,6 +505,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@f0000200";
stdout-path = "/plb/opb/serial@f0000200";
};
};

View File

@ -222,6 +222,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@50001000";
stdout-path = "/plb/opb/serial@50001000";
};
};

View File

@ -339,6 +339,6 @@
chosen {
linux,stdout-path = "/soc/cpm/serial@91a00";
stdout-path = "/soc/cpm/serial@91a00";
};
};

View File

@ -25,7 +25,7 @@
};
chosen {
linux,stdout-path = &console;
stdout-path = &console;
};
cpus {

View File

@ -262,6 +262,6 @@
};
chosen {
linux,stdout-path = "/soc/cpm/serial@11a00";
stdout-path = "/soc/cpm/serial@11a00";
};
};

View File

@ -185,6 +185,6 @@
};
chosen {
linux,stdout-path = "/soc/cpm/serial@a80";
stdout-path = "/soc/cpm/serial@a80";
};
};

View File

@ -227,6 +227,6 @@
};
chosen {
linux,stdout-path = "/soc/cpm/serial@a80";
stdout-path = "/soc/cpm/serial@a80";
};
};

View File

@ -179,7 +179,7 @@
};
chosen {
linux,stdout-path = &serial0;
stdout-path = &serial0;
};
};

View File

@ -309,6 +309,6 @@
};
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600200";
stdout-path = "/plb/opb/serial@ef600200";
};
};

View File

@ -242,6 +242,6 @@
};
chosen {
linux,stdout-path = "/soc/cpm/serial@11a00";
stdout-path = "/soc/cpm/serial@11a00";
};
};

View File

@ -344,7 +344,7 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
bootargs = "console=ttyS0,115200";
};
};

View File

@ -381,7 +381,7 @@
chosen {
linux,stdout-path = "/plb/opb/serial@ef600200";
stdout-path = "/plb/opb/serial@ef600200";
};
};

View File

@ -288,6 +288,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -406,7 +406,7 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
bootargs = "console=ttyS0,115200";
};
};

View File

@ -137,6 +137,6 @@
};
chosen {
linux,stdout-path = &serial0;
stdout-path = &serial0;
};
};

View File

@ -422,6 +422,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@40000300";
stdout-path = "/plb/opb/serial@40000300";
};
};

View File

@ -32,7 +32,7 @@
} ;
chosen {
bootargs = "console=ttyS0 root=/dev/ram";
linux,stdout-path = &RS232_Uart_1;
stdout-path = &RS232_Uart_1;
} ;
cpus {
#address-cells = <1>;

View File

@ -26,7 +26,7 @@
} ;
chosen {
bootargs = "console=ttyS0 root=/dev/ram";
linux,stdout-path = "/plb@0/serial@83e00000";
stdout-path = "/plb@0/serial@83e00000";
} ;
cpus {
#address-cells = <1>;

View File

@ -241,6 +241,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -304,6 +304,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -13,6 +13,7 @@
*/
/dts-v1/;
#include <dt-bindings/gpio/gpio.h>
/*
* This is commented-out for now.
@ -176,6 +177,15 @@
compatible = "nintendo,hollywood-gpio";
reg = <0x0d8000c0 0x40>;
gpio-controller;
ngpios = <24>;
gpio-line-names =
"POWER", "SHUTDOWN", "FAN", "DC_DC",
"DI_SPIN", "SLOT_LED", "EJECT_BTN", "SLOT_IN",
"SENSOR_BAR", "DO_EJECT", "EEP_CS", "EEP_CLK",
"EEP_MOSI", "EEP_MISO", "AVE_SCL", "AVE_SDA",
"DEBUG0", "DEBUG1", "DEBUG2", "DEBUG3",
"DEBUG4", "DEBUG5", "DEBUG6", "DEBUG7";
/*
* This is commented out while a standard binding
@ -214,5 +224,16 @@
interrupts = <2>;
};
};
gpio-leds {
compatible = "gpio-leds";
/* This is the blue LED in the disk drive slot */
drive-slot {
label = "wii:blue:drive_slot";
gpios = <&GPIO 5 GPIO_ACTIVE_HIGH>;
panic-indicator;
};
};
};

View File

@ -503,6 +503,6 @@
/* Needed for dtbImage boot wrapper compatibility */
chosen {
linux,stdout-path = &serial0;
stdout-path = &serial0;
};
};

View File

@ -327,6 +327,6 @@
};
chosen {
linux,stdout-path = "/plb/opb/serial@ef600300";
stdout-path = "/plb/opb/serial@ef600300";
};
};

View File

@ -7,8 +7,6 @@
#include "of.h"
typedef u32 uint32_t;
typedef u64 uint64_t;
typedef unsigned long uintptr_t;
typedef __be16 fdt16_t;

View File

@ -62,6 +62,7 @@ void RunModeException(struct pt_regs *regs);
void single_step_exception(struct pt_regs *regs);
void program_check_exception(struct pt_regs *regs);
void alignment_exception(struct pt_regs *regs);
void slb_miss_bad_addr(struct pt_regs *regs);
void StackOverflow(struct pt_regs *regs);
void nonrecoverable_exception(struct pt_regs *regs);
void kernel_fp_unavailable_exception(struct pt_regs *regs);
@ -88,7 +89,18 @@ int sys_swapcontext(struct ucontext __user *old_ctx,
long sys_swapcontext(struct ucontext __user *old_ctx,
struct ucontext __user *new_ctx,
int ctx_size, int r6, int r7, int r8, struct pt_regs *regs);
int sys_debug_setcontext(struct ucontext __user *ctx,
int ndbg, struct sig_dbg_op __user *dbg,
int r6, int r7, int r8,
struct pt_regs *regs);
int
ppc_select(int n, fd_set __user *inp, fd_set __user *outp, fd_set __user *exp, struct timeval __user *tvp);
unsigned long __init early_init(unsigned long dt_ptr);
void __init machine_init(u64 dt_ptr);
#endif
long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
u32 len_high, u32 len_low);
long sys_switch_endian(void);
notrace unsigned int __check_irq_replay(void);
void notrace restore_interrupts(void);
@ -126,4 +138,7 @@ extern int __ucmpdi2(u64, u64);
void _mcount(void);
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);
void pnv_power9_force_smt4_catch(void);
void pnv_power9_force_smt4_release(void);
#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */

View File

@ -35,7 +35,8 @@
#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")
#ifdef __SUBARCH_HAS_LWSYNC
/* The sub-arch has lwsync */
#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
# define SMPWMB LWSYNC
#else
# define SMPWMB eieio

View File

@ -11,6 +11,12 @@
#define H_PUD_INDEX_SIZE 9
#define H_PGD_INDEX_SIZE 9
/*
* Each context is 512TB. But on 4k we restrict our max TASK size to 64TB
* Hence also limit max EA bits to 64TB.
*/
#define MAX_EA_BITS_PER_CONTEXT 46
#ifndef __ASSEMBLY__
#define H_PTE_TABLE_SIZE (sizeof(pte_t) << H_PTE_INDEX_SIZE)
#define H_PMD_TABLE_SIZE (sizeof(pmd_t) << H_PMD_INDEX_SIZE)
@ -34,6 +40,14 @@
#define H_PAGE_COMBO 0x0
#define H_PTE_FRAG_NR 0
#define H_PTE_FRAG_SIZE_SHIFT 0
/* memory key bits, only 8 keys supported */
#define H_PTE_PKEY_BIT0 0
#define H_PTE_PKEY_BIT1 0
#define H_PTE_PKEY_BIT2 _RPAGE_RSV3
#define H_PTE_PKEY_BIT3 _RPAGE_RSV4
#define H_PTE_PKEY_BIT4 _RPAGE_RSV5
/*
* On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
*/

View File

@ -4,9 +4,15 @@
#define H_PTE_INDEX_SIZE 8
#define H_PMD_INDEX_SIZE 10
#define H_PUD_INDEX_SIZE 7
#define H_PUD_INDEX_SIZE 10
#define H_PGD_INDEX_SIZE 8
/*
* Each context is 512TB in size. SLB misses for the first (default) context
* are handled in the hot path.
*/
#define MAX_EA_BITS_PER_CONTEXT 49
/*
* A 64K aligned address frees up a few of the lower bits of the RPN for us.
* We steal those here. For more details look at pte_pfn/pfn_pte()
@ -16,6 +22,13 @@
#define H_PAGE_BUSY _RPAGE_RPN44 /* software: PTE & hash are busy */
#define H_PAGE_HASHPTE _RPAGE_RPN43 /* PTE has associated HPTE */
/* memory key bits. */
#define H_PTE_PKEY_BIT0 _RPAGE_RSV1
#define H_PTE_PKEY_BIT1 _RPAGE_RSV2
#define H_PTE_PKEY_BIT2 _RPAGE_RSV3
#define H_PTE_PKEY_BIT3 _RPAGE_RSV4
#define H_PTE_PKEY_BIT4 _RPAGE_RSV5
/*
* We need to differentiate between explicit huge page and THP huge
* page, since THP huge page also need to track real subpage details
@ -24,16 +37,14 @@
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO)
/*
* we support 16 fragments per PTE page of 64K size.
*/
#define H_PTE_FRAG_NR 16
/*
* We use a 2K PTE page fragment and another 2K for storing
* real_pte_t hash index
* 8 bytes per each pte entry and another 8 bytes for storing
* slot details.
*/
#define H_PTE_FRAG_SIZE_SHIFT 12
#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
#define H_PTE_FRAG_SIZE_SHIFT (H_PTE_INDEX_SIZE + 3 + 1)
#define H_PTE_FRAG_NR (PAGE_SIZE >> H_PTE_FRAG_SIZE_SHIFT)
#ifndef __ASSEMBLY__
#include <asm/errno.h>
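
Working the new formula through with H_PTE_INDEX_SIZE = 8 (defined near the top of this header): H_PTE_FRAG_SIZE_SHIFT = 8 + 3 + 1 = 12, so each fragment is 2^12 = 4KB (2KB of PTEs, i.e. 256 entries of 8 bytes, plus 2KB of slot details), and H_PTE_FRAG_NR = 64KB >> 12 = 16, the same values that the literal definitions (shift 12, 16 fragments) encoded before.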

View File

@ -212,7 +212,7 @@ extern int __meminit hash__vmemmap_create_mapping(unsigned long start,
extern void hash__vmemmap_remove_mapping(unsigned long start,
unsigned long page_size);
int hash__create_section_mapping(unsigned long start, unsigned long end);
int hash__create_section_mapping(unsigned long start, unsigned long end, int nid);
int hash__remove_section_mapping(unsigned long start, unsigned long end);
#endif /* !__ASSEMBLY__ */

View File

@ -80,8 +80,29 @@ struct spinlock;
/* Maximum possible number of NPUs in a system. */
#define NV_MAX_NPUS 8
/*
* One bit per slice. We have lower slices which cover 256MB segments
* up to the 4GB range. That gets us 16 low slices. For the rest we track slices
* in 1TB units.
*/
struct slice_mask {
u64 low_slices;
DECLARE_BITMAP(high_slices, SLICE_NUM_HIGH);
};
typedef struct {
mm_context_id_t id;
union {
/*
* We use id as the PIDR content for radix. On hash we can use
* more than one id. The extended ids are used when we start
* having addresses above 512TB. We allocate one extended id
* for each 512TB. The new id is then used with the 49 bit
* EA to build a new VA. We always use ESID_BITS_1T_MASK bits
* from EA and new context ids to build the new VAs.
*/
mm_context_id_t id;
mm_context_id_t extended_id[TASK_SIZE_USER64/TASK_CONTEXT_SIZE];
};
u16 user_psize; /* page size index */
/* Number of bits in the mm_cpumask */
@ -94,9 +115,18 @@ typedef struct {
struct npu_context *npu_context;
#ifdef CONFIG_PPC_MM_SLICES
u64 low_slices_psize; /* SLB page size encodings */
/* SLB page size encodings*/
unsigned char low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
unsigned long slb_addr_limit;
# ifdef CONFIG_PPC_64K_PAGES
struct slice_mask mask_64k;
# endif
struct slice_mask mask_4k;
# ifdef CONFIG_HUGETLB_PAGE
struct slice_mask mask_16m;
struct slice_mask mask_16g;
# endif
#else
u16 sllp; /* SLB page size encoding */
#endif
@ -177,5 +207,25 @@ extern void radix_init_pseries(void);
static inline void radix_init_pseries(void) { };
#endif
static inline int get_ea_context(mm_context_t *ctx, unsigned long ea)
{
int index = ea >> MAX_EA_BITS_PER_CONTEXT;
if (likely(index < ARRAY_SIZE(ctx->extended_id)))
return ctx->extended_id[index];
/* should never happen */
WARN_ON(1);
return 0;
}
static inline unsigned long get_user_vsid(mm_context_t *ctx,
unsigned long ea, int ssize)
{
unsigned long context = get_ea_context(ctx, ea);
return get_vsid(context, ea, ssize);
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
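
A small standalone illustration (not kernel code) of the index arithmetic used by get_ea_context() above, assuming the 64K hash configuration where MAX_EA_BITS_PER_CONTEXT is 49, i.e. each context covers 512TB:

#include <stdio.h>

#define MAX_EA_BITS_PER_CONTEXT 49      /* 64K hash: one context per 512TB */

int main(void)
{
        unsigned long ea_default = 0x00007fffdeadb000UL;    /* below 512TB   */
        unsigned long ea_big     = 0x0002000000000000UL;    /* exactly 512TB */

        /* index 0 selects the default id, higher indexes the extended ids */
        printf("ea %#lx -> extended_id[%lu]\n",
               ea_default, ea_default >> MAX_EA_BITS_PER_CONTEXT);  /* 0 */
        printf("ea %#lx -> extended_id[%lu]\n",
               ea_big, ea_big >> MAX_EA_BITS_PER_CONTEXT);          /* 1 */
        return 0;
}

Index 0 aliases the default id through the union in mm_context_t, so only accesses above 512TB consume one of the extended ids.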

View File

@ -80,8 +80,18 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
pgtable_gfp_flags(mm, GFP_KERNEL));
/*
* With hugetlb, we don't clear the second half of the page table.
* If we share the same slab cache with the pmd or pud level table,
* we need to make sure we zero out the full table on alloc.
* With 4K we don't store slot in the second half. Hence we don't
* need to do this for 4k.
*/
#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_PPC_64K_PAGES) && \
((H_PGD_INDEX_SIZE == H_PUD_CACHE_INDEX) || \
(H_PGD_INDEX_SIZE == H_PMD_CACHE_INDEX))
memset(pgd, 0, PGD_TABLE_SIZE);
#endif
return pgd;
}

View File

@ -60,25 +60,6 @@
/* Max physical address bit as per radix table */
#define _RPAGE_PA_MAX 57
#ifdef CONFIG_PPC_MEM_KEYS
#ifdef CONFIG_PPC_64K_PAGES
#define H_PTE_PKEY_BIT0 _RPAGE_RSV1
#define H_PTE_PKEY_BIT1 _RPAGE_RSV2
#else /* CONFIG_PPC_64K_PAGES */
#define H_PTE_PKEY_BIT0 0 /* _RPAGE_RSV1 is not available */
#define H_PTE_PKEY_BIT1 0 /* _RPAGE_RSV2 is not available */
#endif /* CONFIG_PPC_64K_PAGES */
#define H_PTE_PKEY_BIT2 _RPAGE_RSV3
#define H_PTE_PKEY_BIT3 _RPAGE_RSV4
#define H_PTE_PKEY_BIT4 _RPAGE_RSV5
#else /* CONFIG_PPC_MEM_KEYS */
#define H_PTE_PKEY_BIT0 0
#define H_PTE_PKEY_BIT1 0
#define H_PTE_PKEY_BIT2 0
#define H_PTE_PKEY_BIT3 0
#define H_PTE_PKEY_BIT4 0
#endif /* CONFIG_PPC_MEM_KEYS */
/*
* Max physical address bit we will use for now.
*

View File

@ -9,5 +9,10 @@
#define RADIX_PMD_INDEX_SIZE 9 /* 1G huge page */
#define RADIX_PUD_INDEX_SIZE 9
#define RADIX_PGD_INDEX_SIZE 13
/*
* One fragment per page
*/
#define RADIX_PTE_FRAG_SIZE_SHIFT (RADIX_PTE_INDEX_SIZE + 3)
#define RADIX_PTE_FRAG_NR (PAGE_SIZE >> RADIX_PTE_FRAG_SIZE_SHIFT)
#endif /* _ASM_POWERPC_PGTABLE_RADIX_4K_H */

View File

@ -10,4 +10,10 @@
#define RADIX_PUD_INDEX_SIZE 9
#define RADIX_PGD_INDEX_SIZE 13
/*
* We use a 256 byte PTE page fragment in radix
* 8 bytes per each PTE entry.
*/
#define RADIX_PTE_FRAG_SIZE_SHIFT (RADIX_PTE_INDEX_SIZE + 3)
#define RADIX_PTE_FRAG_NR (PAGE_SIZE >> RADIX_PTE_FRAG_SIZE_SHIFT)
#endif /* _ASM_POWERPC_PGTABLE_RADIX_64K_H */
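
Following the comment above, a 256 byte fragment holds 32 PTEs of 8 bytes each, so a 64K page yields 64KB / 256B = 256 fragments, which is the "Update pte fragment count from 16 to 256 on radix" change listed in the shortlog at the top of this page.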

View File

@ -313,7 +313,7 @@ static inline unsigned long radix__get_tree_size(void)
}
#ifdef CONFIG_MEMORY_HOTPLUG
int radix__create_section_mapping(unsigned long start, unsigned long end);
int radix__create_section_mapping(unsigned long start, unsigned long end, int nid);
int radix__remove_section_mapping(unsigned long start, unsigned long end);
#endif /* CONFIG_MEMORY_HOTPLUG */
#endif /* __ASSEMBLY__ */

View File

@ -0,0 +1,27 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H
#define _ASM_POWERPC_BOOK3S_64_SLICE_H
#ifdef CONFIG_PPC_MM_SLICES
#define SLICE_LOW_SHIFT 28
#define SLICE_LOW_TOP (0x100000000ul)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define SLICE_HIGH_SHIFT 40
#define SLICE_NUM_HIGH (H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
#define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT)
#else /* CONFIG_PPC_MM_SLICES */
#define get_slice_psize(mm, addr) ((mm)->context.user_psize)
#define slice_set_user_psize(mm, psize) \
do { \
(mm)->context.user_psize = (psize); \
(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
} while (0)
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
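
To make the slice geometry in this new header concrete, here is a standalone sketch (not kernel code) of the index calculations it defines, with 256MB slices below 4GB and 1TB slices above:

#include <stdio.h>

#define SLICE_LOW_SHIFT   28                                  /* 256MB low slices     */
#define SLICE_LOW_TOP     (0x100000000ul)                     /* low slices cover 4GB */
#define SLICE_NUM_LOW     (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)  /* 16                   */
#define SLICE_HIGH_SHIFT  40                                  /* 1TB high slices      */

int main(void)
{
        unsigned long addr_low  = 0x30000000UL;       /* 768MB */
        unsigned long addr_high = 0x20000000000UL;    /* 2TB   */

        printf("%lu low slices of 256MB cover the first 4GB\n",
               (unsigned long)SLICE_NUM_LOW);                   /* 16 */
        printf("768MB falls in low slice %lu\n",
               addr_low >> SLICE_LOW_SHIFT);                    /* 3  */
        printf("2TB falls in high slice %lu\n",
               addr_high >> SLICE_HIGH_SHIFT);                  /* 2  */
        return 0;
}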

View File

@ -99,7 +99,6 @@ static inline void invalidate_dcache_range(unsigned long start,
#ifdef CONFIG_PPC64
extern void flush_dcache_range(unsigned long start, unsigned long stop);
extern void flush_inval_dcache_range(unsigned long start, unsigned long stop);
extern void flush_dcache_phys_range(unsigned long start, unsigned long stop);
#endif
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \

View File

@ -131,41 +131,48 @@ static inline void cpu_feature_keys_init(void) { }
/* CPU kernel features */
/* Retain the 32b definitions all use bottom half of word */
/* Definitions for features that we have on both 32-bit and 64-bit chips */
#define CPU_FTR_COHERENT_ICACHE ASM_CONST(0x00000001)
#define CPU_FTR_L2CR ASM_CONST(0x00000002)
#define CPU_FTR_SPEC7450 ASM_CONST(0x00000004)
#define CPU_FTR_ALTIVEC ASM_CONST(0x00000008)
#define CPU_FTR_TAU ASM_CONST(0x00000010)
#define CPU_FTR_CAN_DOZE ASM_CONST(0x00000020)
#define CPU_FTR_USE_TB ASM_CONST(0x00000040)
#define CPU_FTR_L2CSR ASM_CONST(0x00000080)
#define CPU_FTR_601 ASM_CONST(0x00000100)
#define CPU_FTR_DBELL ASM_CONST(0x00000200)
#define CPU_FTR_CAN_NAP ASM_CONST(0x00000400)
#define CPU_FTR_L3CR ASM_CONST(0x00000800)
#define CPU_FTR_L3_DISABLE_NAP ASM_CONST(0x00001000)
#define CPU_FTR_NAP_DISABLE_L2_PR ASM_CONST(0x00002000)
#define CPU_FTR_DUAL_PLL_750FX ASM_CONST(0x00004000)
#define CPU_FTR_NO_DPM ASM_CONST(0x00008000)
#define CPU_FTR_476_DD2 ASM_CONST(0x00010000)
#define CPU_FTR_NEED_COHERENT ASM_CONST(0x00020000)
#define CPU_FTR_NO_BTIC ASM_CONST(0x00040000)
#define CPU_FTR_DEBUG_LVL_EXC ASM_CONST(0x00080000)
#define CPU_FTR_NODSISRALIGN ASM_CONST(0x00100000)
#define CPU_FTR_PPC_LE ASM_CONST(0x00200000)
#define CPU_FTR_REAL_LE ASM_CONST(0x00400000)
#define CPU_FTR_FPU_UNAVAILABLE ASM_CONST(0x00800000)
#define CPU_FTR_UNIFIED_ID_CACHE ASM_CONST(0x01000000)
#define CPU_FTR_SPE ASM_CONST(0x02000000)
#define CPU_FTR_NEED_PAIRED_STWCX ASM_CONST(0x04000000)
#define CPU_FTR_LWSYNC ASM_CONST(0x08000000)
#define CPU_FTR_NOEXECUTE ASM_CONST(0x10000000)
#define CPU_FTR_INDEXED_DCR ASM_CONST(0x20000000)
#define CPU_FTR_EMB_HV ASM_CONST(0x40000000)
#define CPU_FTR_ALTIVEC ASM_CONST(0x00000002)
#define CPU_FTR_DBELL ASM_CONST(0x00000004)
#define CPU_FTR_CAN_NAP ASM_CONST(0x00000008)
#define CPU_FTR_DEBUG_LVL_EXC ASM_CONST(0x00000010)
#define CPU_FTR_NODSISRALIGN ASM_CONST(0x00000020)
#define CPU_FTR_FPU_UNAVAILABLE ASM_CONST(0x00000040)
#define CPU_FTR_LWSYNC ASM_CONST(0x00000080)
#define CPU_FTR_NOEXECUTE ASM_CONST(0x00000100)
#define CPU_FTR_EMB_HV ASM_CONST(0x00000200)
/* Definitions for features that only exist on 32-bit chips */
#ifdef CONFIG_PPC32
#define CPU_FTR_601 ASM_CONST(0x00001000)
#define CPU_FTR_L2CR ASM_CONST(0x00002000)
#define CPU_FTR_SPEC7450 ASM_CONST(0x00004000)
#define CPU_FTR_TAU ASM_CONST(0x00008000)
#define CPU_FTR_CAN_DOZE ASM_CONST(0x00010000)
#define CPU_FTR_USE_RTC ASM_CONST(0x00020000)
#define CPU_FTR_L3CR ASM_CONST(0x00040000)
#define CPU_FTR_L3_DISABLE_NAP ASM_CONST(0x00080000)
#define CPU_FTR_NAP_DISABLE_L2_PR ASM_CONST(0x00100000)
#define CPU_FTR_DUAL_PLL_750FX ASM_CONST(0x00200000)
#define CPU_FTR_NO_DPM ASM_CONST(0x00400000)
#define CPU_FTR_476_DD2 ASM_CONST(0x00800000)
#define CPU_FTR_NEED_COHERENT ASM_CONST(0x01000000)
#define CPU_FTR_NO_BTIC ASM_CONST(0x02000000)
#define CPU_FTR_PPC_LE ASM_CONST(0x04000000)
#define CPU_FTR_UNIFIED_ID_CACHE ASM_CONST(0x08000000)
#define CPU_FTR_SPE ASM_CONST(0x10000000)
#define CPU_FTR_NEED_PAIRED_STWCX ASM_CONST(0x20000000)
#define CPU_FTR_INDEXED_DCR ASM_CONST(0x40000000)
#else /* CONFIG_PPC32 */
/* Define these to 0 for the sake of tests in common code */
#define CPU_FTR_601 (0)
#define CPU_FTR_PPC_LE (0)
#endif
/*
* Add the 64-bit processor unique features in the top half of the word;
* Definitions for the 64-bit processor unique features;
* on 32-bit, make the names available but defined to be 0.
*/
#ifdef __powerpc64__
@ -174,38 +181,40 @@ static inline void cpu_feature_keys_init(void) { }
#define LONG_ASM_CONST(x) 0
#endif
#define CPU_FTR_HVMODE LONG_ASM_CONST(0x0000000100000000)
#define CPU_FTR_ARCH_201 LONG_ASM_CONST(0x0000000200000000)
#define CPU_FTR_ARCH_206 LONG_ASM_CONST(0x0000000400000000)
#define CPU_FTR_ARCH_207S LONG_ASM_CONST(0x0000000800000000)
#define CPU_FTR_ARCH_300 LONG_ASM_CONST(0x0000001000000000)
#define CPU_FTR_MMCRA LONG_ASM_CONST(0x0000002000000000)
#define CPU_FTR_CTRL LONG_ASM_CONST(0x0000004000000000)
#define CPU_FTR_SMT LONG_ASM_CONST(0x0000008000000000)
#define CPU_FTR_PAUSE_ZERO LONG_ASM_CONST(0x0000010000000000)
#define CPU_FTR_PURR LONG_ASM_CONST(0x0000020000000000)
#define CPU_FTR_CELL_TB_BUG LONG_ASM_CONST(0x0000040000000000)
#define CPU_FTR_SPURR LONG_ASM_CONST(0x0000080000000000)
#define CPU_FTR_DSCR LONG_ASM_CONST(0x0000100000000000)
#define CPU_FTR_VSX LONG_ASM_CONST(0x0000200000000000)
#define CPU_FTR_SAO LONG_ASM_CONST(0x0000400000000000)
#define CPU_FTR_CP_USE_DCBTZ LONG_ASM_CONST(0x0000800000000000)
#define CPU_FTR_UNALIGNED_LD_STD LONG_ASM_CONST(0x0001000000000000)
#define CPU_FTR_ASYM_SMT LONG_ASM_CONST(0x0002000000000000)
#define CPU_FTR_STCX_CHECKS_ADDRESS LONG_ASM_CONST(0x0004000000000000)
#define CPU_FTR_POPCNTB LONG_ASM_CONST(0x0008000000000000)
#define CPU_FTR_POPCNTD LONG_ASM_CONST(0x0010000000000000)
#define CPU_FTR_PKEY LONG_ASM_CONST(0x0020000000000000)
#define CPU_FTR_VMX_COPY LONG_ASM_CONST(0x0040000000000000)
#define CPU_FTR_TM LONG_ASM_CONST(0x0080000000000000)
#define CPU_FTR_CFAR LONG_ASM_CONST(0x0100000000000000)
#define CPU_FTR_HAS_PPR LONG_ASM_CONST(0x0200000000000000)
#define CPU_FTR_DAWR LONG_ASM_CONST(0x0400000000000000)
#define CPU_FTR_DABRX LONG_ASM_CONST(0x0800000000000000)
#define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x1000000000000000)
#define CPU_FTR_P9_TLBIE_BUG LONG_ASM_CONST(0x2000000000000000)
#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x4000000000000000)
#define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x8000000000000000)
#define CPU_FTR_REAL_LE LONG_ASM_CONST(0x0000000000001000)
#define CPU_FTR_HVMODE LONG_ASM_CONST(0x0000000000002000)
#define CPU_FTR_ARCH_206 LONG_ASM_CONST(0x0000000000008000)
#define CPU_FTR_ARCH_207S LONG_ASM_CONST(0x0000000000010000)
#define CPU_FTR_ARCH_300 LONG_ASM_CONST(0x0000000000020000)
#define CPU_FTR_MMCRA LONG_ASM_CONST(0x0000000000040000)
#define CPU_FTR_CTRL LONG_ASM_CONST(0x0000000000080000)
#define CPU_FTR_SMT LONG_ASM_CONST(0x0000000000100000)
#define CPU_FTR_PAUSE_ZERO LONG_ASM_CONST(0x0000000000200000)
#define CPU_FTR_PURR LONG_ASM_CONST(0x0000000000400000)
#define CPU_FTR_CELL_TB_BUG LONG_ASM_CONST(0x0000000000800000)
#define CPU_FTR_SPURR LONG_ASM_CONST(0x0000000001000000)
#define CPU_FTR_DSCR LONG_ASM_CONST(0x0000000002000000)
#define CPU_FTR_VSX LONG_ASM_CONST(0x0000000004000000)
#define CPU_FTR_SAO LONG_ASM_CONST(0x0000000008000000)
#define CPU_FTR_CP_USE_DCBTZ LONG_ASM_CONST(0x0000000010000000)
#define CPU_FTR_UNALIGNED_LD_STD LONG_ASM_CONST(0x0000000020000000)
#define CPU_FTR_ASYM_SMT LONG_ASM_CONST(0x0000000040000000)
#define CPU_FTR_STCX_CHECKS_ADDRESS LONG_ASM_CONST(0x0000000080000000)
#define CPU_FTR_POPCNTB LONG_ASM_CONST(0x0000000100000000)
#define CPU_FTR_POPCNTD LONG_ASM_CONST(0x0000000200000000)
#define CPU_FTR_PKEY LONG_ASM_CONST(0x0000000400000000)
#define CPU_FTR_VMX_COPY LONG_ASM_CONST(0x0000000800000000)
#define CPU_FTR_TM LONG_ASM_CONST(0x0000001000000000)
#define CPU_FTR_CFAR LONG_ASM_CONST(0x0000002000000000)
#define CPU_FTR_HAS_PPR LONG_ASM_CONST(0x0000004000000000)
#define CPU_FTR_DAWR LONG_ASM_CONST(0x0000008000000000)
#define CPU_FTR_DABRX LONG_ASM_CONST(0x0000010000000000)
#define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x0000020000000000)
#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x0000040000000000)
#define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
#define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
#define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
#define CPU_FTR_P9_TLBIE_BUG LONG_ASM_CONST(0x0000400000000000)
#ifndef __ASSEMBLY__
@ -286,21 +295,19 @@ static inline void cpu_feature_keys_init(void) { }
#endif
#define CPU_FTRS_PPC601 (CPU_FTR_COMMON | CPU_FTR_601 | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE)
#define CPU_FTRS_603 (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE | CPU_FTR_USE_RTC)
#define CPU_FTRS_603 (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE)
#define CPU_FTRS_604 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | CPU_FTR_PPC_LE)
#define CPU_FTRS_604 (CPU_FTR_COMMON | CPU_FTR_PPC_LE)
#define CPU_FTRS_740_NOTAU (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE)
#define CPU_FTRS_740 (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \
CPU_FTR_TAU | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_PPC_LE)
#define CPU_FTRS_750 (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \
CPU_FTR_TAU | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_PPC_LE)
#define CPU_FTRS_750CL (CPU_FTRS_750)
@ -309,125 +316,114 @@ static inline void cpu_feature_keys_init(void) { }
#define CPU_FTRS_750FX (CPU_FTRS_750 | CPU_FTR_DUAL_PLL_750FX)
#define CPU_FTRS_750GX (CPU_FTRS_750FX)
#define CPU_FTRS_7400_NOTAU (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \
CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE)
#define CPU_FTRS_7400 (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | CPU_FTR_L2CR | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_L2CR | \
CPU_FTR_TAU | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_PPC_LE)
#define CPU_FTRS_7450_20 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7450_21 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | \
CPU_FTR_NAP_DISABLE_L2_PR | CPU_FTR_L3_DISABLE_NAP | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7450_23 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | \
CPU_FTR_NAP_DISABLE_L2_PR | CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE)
#define CPU_FTRS_7455_1 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | CPU_FTR_L3CR | \
CPU_FTR_SPEC7450 | CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE)
#define CPU_FTRS_7455_20 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_NEED_PAIRED_STWCX | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | \
CPU_FTR_NAP_DISABLE_L2_PR | CPU_FTR_L3_DISABLE_NAP | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE)
#define CPU_FTRS_7455 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7447_10 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \
CPU_FTR_NEED_COHERENT | CPU_FTR_NO_BTIC | CPU_FTR_PPC_LE | \
CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7447 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_L3CR | CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7447A (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \
CPU_FTR_NEED_COHERENT | CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_7448 (CPU_FTR_COMMON | \
CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_L2CR | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_SPEC7450 | CPU_FTR_NAP_DISABLE_L2_PR | \
CPU_FTR_PPC_LE | CPU_FTR_NEED_PAIRED_STWCX)
#define CPU_FTRS_82XX (CPU_FTR_COMMON | \
CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB)
#define CPU_FTRS_82XX (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE)
#define CPU_FTRS_G2_LE (CPU_FTR_COMMON | CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_USE_TB | CPU_FTR_MAYBE_CAN_NAP)
CPU_FTR_MAYBE_CAN_NAP)
#define CPU_FTRS_E300 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_USE_TB | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_COMMON)
#define CPU_FTRS_E300C2 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_USE_TB | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_COMMON | CPU_FTR_FPU_UNAVAILABLE)
#define CPU_FTRS_CLASSIC32 (CPU_FTR_COMMON | CPU_FTR_USE_TB)
#define CPU_FTRS_8XX (CPU_FTR_USE_TB | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_40X (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_44X (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_440x6 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE | \
#define CPU_FTRS_CLASSIC32 (CPU_FTR_COMMON)
#define CPU_FTRS_8XX (CPU_FTR_NOEXECUTE)
#define CPU_FTRS_40X (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_44X (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_440x6 (CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE | \
CPU_FTR_INDEXED_DCR)
#define CPU_FTRS_47X (CPU_FTRS_440x6)
#define CPU_FTRS_E200 (CPU_FTR_USE_TB | CPU_FTR_SPE_COMP | \
#define CPU_FTRS_E200 (CPU_FTR_SPE_COMP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_UNIFIED_ID_CACHE | CPU_FTR_NOEXECUTE | \
CPU_FTR_DEBUG_LVL_EXC)
#define CPU_FTRS_E500 (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
#define CPU_FTRS_E500 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
#define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500MC (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
#define CPU_FTRS_E500MC (CPU_FTR_NODSISRALIGN | \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV)
/*
* e5500/e6500 erratum A-006958 is a timebase bug that can use the
* same workaround as CPU_FTR_CELL_TB_BUG.
*/
#define CPU_FTRS_E5500 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
#define CPU_FTRS_E5500 (CPU_FTR_NODSISRALIGN | \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_CELL_TB_BUG)
#define CPU_FTRS_E6500 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
#define CPU_FTRS_E6500 (CPU_FTR_NODSISRALIGN | \
CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_CELL_TB_BUG | CPU_FTR_SMT)
#define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
/* 64-bit CPUs */
#define CPU_FTRS_POWER4 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_PPC970 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_MMCRA | CPU_FTR_CP_USE_DCBTZ | \
CPU_FTR_STCX_CHECKS_ADDRESS)
#define CPU_FTRS_PPC970 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_201 | \
CPU_FTR_ALTIVEC_COMP | CPU_FTR_CAN_NAP | CPU_FTR_MMCRA | \
CPU_FTR_CP_USE_DCBTZ | CPU_FTR_STCX_CHECKS_ADDRESS | \
CPU_FTR_HVMODE | CPU_FTR_DABRX)
#define CPU_FTRS_POWER5 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER5 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_PURR | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_DABRX)
#define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER6 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
@ -435,7 +431,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR | \
CPU_FTR_DABRX)
#define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER7 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
@ -444,7 +440,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | \
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX | CPU_FTR_PKEY)
#define CPU_FTRS_POWER8 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER8 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
@ -456,7 +452,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_PKEY)
#define CPU_FTRS_POWER8E (CPU_FTRS_POWER8 | CPU_FTR_PMAO_BUG)
#define CPU_FTRS_POWER8_DD1 (CPU_FTRS_POWER8 & ~CPU_FTR_DBELL)
#define CPU_FTRS_POWER9 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER9 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
@ -464,33 +460,45 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_DSCR | CPU_FTR_SAO | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
CPU_FTR_PKEY | CPU_FTR_P9_TLBIE_BUG)
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
CPU_FTR_P9_TLBIE_BUG)
#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
(~CPU_FTR_SAO))
#define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
#define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
#define CPU_FTRS_CELL (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \
CPU_FTR_P9_TM_HV_ASSIST | \
CPU_FTR_P9_TM_XER_SO_BUG)
#define CPU_FTRS_CELL (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_PAUSE_ZERO | CPU_FTR_CELL_TB_BUG | CPU_FTR_CP_USE_DCBTZ | \
CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_DABRX)
#define CPU_FTRS_PA6T (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
#define CPU_FTRS_PA6T (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP | \
CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX)
#define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2)
#define CPU_FTRS_COMPATIBLE (CPU_FTR_PPCAS_ARCH_V2)
#ifdef __powerpc64__
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_POSSIBLE (CPU_FTRS_E6500 | CPU_FTRS_E5500)
#else
#ifdef CONFIG_CPU_LITTLE_ENDIAN
#define CPU_FTRS_POSSIBLE \
(CPU_FTRS_POWER4 | CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
(CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \
CPU_FTRS_POWER8_DD1 | CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | \
CPU_FTRS_POWER9 | CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
CPU_FTRS_POWER9_DD2_2)
#else
#define CPU_FTRS_POSSIBLE \
(CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
CPU_FTRS_POWER8 | CPU_FTRS_POWER8_DD1 | CPU_FTRS_CELL | \
CPU_FTRS_PA6T | CPU_FTR_VSX | CPU_FTRS_POWER9 | \
CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1)
CPU_FTRS_PA6T | CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | \
CPU_FTRS_POWER9 | CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
CPU_FTRS_POWER9_DD2_2)
#endif /* CONFIG_CPU_LITTLE_ENDIAN */
#endif
#else
enum {
@ -537,12 +545,19 @@ enum {
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_ALWAYS (CPU_FTRS_E6500 & CPU_FTRS_E5500)
#else
#ifdef CONFIG_CPU_LITTLE_ENDIAN
#define CPU_FTRS_ALWAYS \
(CPU_FTRS_POSSIBLE & ~CPU_FTR_HVMODE & CPU_FTRS_POWER7 & \
CPU_FTRS_POWER8E & CPU_FTRS_POWER8 & CPU_FTRS_POWER8_DD1 & \
CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD1 & CPU_FTRS_POWER9_DD2_1)
#else
#define CPU_FTRS_ALWAYS \
(CPU_FTRS_POWER4 & CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
(CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \
CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \
CPU_FTRS_POWER8_DD1 & ~CPU_FTR_HVMODE & CPU_FTRS_POSSIBLE & \
CPU_FTRS_POWER9)
CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD1 & CPU_FTRS_POWER9_DD2_1)
#endif /* CONFIG_CPU_LITTLE_ENDIAN */
#endif
#else
enum {

View File

@ -47,6 +47,7 @@ static inline int debugger_fault_handler(struct pt_regs *regs) { return 0; }
void set_breakpoint(struct arch_hw_breakpoint *brk);
void __set_breakpoint(struct arch_hw_breakpoint *brk);
bool ppc_breakpoint_available(void);
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
extern void do_send_trap(struct pt_regs *regs, unsigned long address,
unsigned long error_code, int brkpt);

View File

@ -256,6 +256,12 @@ static inline void eeh_serialize_unlock(unsigned long flags)
raw_spin_unlock_irqrestore(&confirm_error_lock, flags);
}
static inline bool eeh_state_active(int state)
{
return (state & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE))
== (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE);
}
typedef void *(*eeh_traverse_func)(void *data, void *flag);
void eeh_set_pe_aux_size(int size);
int eeh_phb_pe_create(struct pci_controller *phb);

View File

@ -34,7 +34,8 @@ struct eeh_event {
int eeh_event_init(void);
int eeh_send_failure_event(struct eeh_pe *pe);
void eeh_remove_event(struct eeh_pe *pe, bool force);
void eeh_handle_event(struct eeh_pe *pe);
void eeh_handle_normal_event(struct eeh_pe *pe);
void eeh_handle_special_event(void);
#endif /* __KERNEL__ */
#endif /* ASM_POWERPC_EEH_EVENT_H */

View File

@ -466,17 +466,17 @@ static inline unsigned long epapr_hypercall(unsigned long *in,
unsigned long *out,
unsigned long nr)
{
unsigned long register r0 asm("r0");
unsigned long register r3 asm("r3") = in[0];
unsigned long register r4 asm("r4") = in[1];
unsigned long register r5 asm("r5") = in[2];
unsigned long register r6 asm("r6") = in[3];
unsigned long register r7 asm("r7") = in[4];
unsigned long register r8 asm("r8") = in[5];
unsigned long register r9 asm("r9") = in[6];
unsigned long register r10 asm("r10") = in[7];
unsigned long register r11 asm("r11") = nr;
unsigned long register r12 asm("r12");
register unsigned long r0 asm("r0");
register unsigned long r3 asm("r3") = in[0];
register unsigned long r4 asm("r4") = in[1];
register unsigned long r5 asm("r5") = in[2];
register unsigned long r6 asm("r6") = in[3];
register unsigned long r7 asm("r7") = in[4];
register unsigned long r8 asm("r8") = in[5];
register unsigned long r9 asm("r9") = in[6];
register unsigned long r10 asm("r10") = in[7];
register unsigned long r11 asm("r11") = nr;
register unsigned long r12 asm("r12");
asm volatile("bl epapr_hypercall_start"
: "=r"(r0), "=r"(r3), "=r"(r4), "=r"(r5), "=r"(r6),

View File

@ -89,17 +89,17 @@ pte_t *huge_pte_offset_and_shift(struct mm_struct *mm,
void flush_dcache_icache_hugepage(struct page *page);
#if defined(CONFIG_PPC_MM_SLICES)
int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
unsigned long len);
#else
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled())
return slice_is_hugepage_only_range(mm, addr, len);
return 0;
}
#endif
void book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea,
pte_t pte);

View File

@ -88,6 +88,7 @@
#define H_P8 -61
#define H_P9 -62
#define H_TOO_BIG -64
#define H_UNSUPPORTED -67
#define H_OVERLAP -68
#define H_INTERRUPT -69
#define H_BAD_DATA -70
@ -337,6 +338,9 @@
#define H_CPU_CHAR_L1D_FLUSH_ORI30 (1ull << 61) // IBM bit 2
#define H_CPU_CHAR_L1D_FLUSH_TRIG2 (1ull << 60) // IBM bit 3
#define H_CPU_CHAR_L1D_THREAD_PRIV (1ull << 59) // IBM bit 4
#define H_CPU_CHAR_BRANCH_HINTS_HONORED (1ull << 58) // IBM bit 5
#define H_CPU_CHAR_THREAD_RECONFIG_CTRL (1ull << 57) // IBM bit 6
#define H_CPU_CHAR_COUNT_CACHE_DISABLED (1ull << 56) // IBM bit 7
#define H_CPU_BEHAV_FAVOUR_SECURITY (1ull << 63) // IBM bit 0
#define H_CPU_BEHAV_L1D_FLUSH_PR (1ull << 62) // IBM bit 1
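
The three added H_CPU_CHAR_* bits above are further characteristics returned by the H_GET_CPU_CHARACTERISTICS hcall; together with the existing flush and behaviour bits they feed the expanded meltdown/spectre_v1/v2 sysfs reporting mentioned in the commit message.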

View File

@ -66,6 +66,7 @@ extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
unsigned long val, void *data);
int arch_install_hw_breakpoint(struct perf_event *bp);
void arch_uninstall_hw_breakpoint(struct perf_event *bp);
void arch_unregister_hw_breakpoint(struct perf_event *bp);
void hw_breakpoint_pmu_read(struct perf_event *bp);
extern void flush_ptrace_hw_breakpoint(struct task_struct *tsk);
@ -79,9 +80,11 @@ static inline void hw_breakpoint_disable(void)
brk.address = 0;
brk.type = 0;
brk.len = 0;
__set_breakpoint(&brk);
if (ppc_breakpoint_available())
__set_breakpoint(&brk);
}
extern void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs);
int hw_breakpoint_handler(struct die_args *args);
#else /* CONFIG_HAVE_HW_BREAKPOINT */
static inline void hw_breakpoint_disable(void) { }

View File

@ -33,8 +33,6 @@ extern struct pci_dev *isa_bridge_pcidev;
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm-generic/iomap.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#endif
@ -663,6 +661,8 @@ static inline void name at \
#define writel_relaxed(v, addr) writel(v, addr)
#define writeq_relaxed(v, addr) writeq(v, addr)
#include <asm-generic/iomap.h>
#ifdef CONFIG_PPC32
#define mmiowb()
#else

View File

@ -66,6 +66,7 @@ extern void irq_ctx_init(void);
extern void call_do_softirq(struct thread_info *tp);
extern void call_do_irq(struct pt_regs *regs, struct thread_info *tp);
extern void do_IRQ(struct pt_regs *regs);
extern void __init init_IRQ(void);
extern void __do_irq(struct pt_regs *regs);
int irq_choose_cpu(const struct cpumask *mask);

View File

@ -6,5 +6,6 @@ static inline bool arch_irq_work_has_interrupt(void)
{
return true;
}
extern void arch_irq_work_raise(void);
#endif /* _ASM_POWERPC_IRQ_WORK_H */

View File

@ -108,6 +108,8 @@
/* book3s_hv */
#define BOOK3S_INTERRUPT_HV_SOFTPATCH 0x1500
/*
* Special trap used to indicate to host that this is a
* passthrough interrupt that could not be handled

View File

@ -241,6 +241,10 @@ extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
unsigned long mask);
extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr);
extern int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu);
extern int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu);
extern void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu);
extern void kvmppc_entry_trampoline(void);
extern void kvmppc_hv_entry_trampoline(void);
extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);

View File

@ -472,6 +472,49 @@ static inline void set_dirty_bits_atomic(unsigned long *map, unsigned long i,
set_bit_le(i, map);
}
static inline u64 sanitize_msr(u64 msr)
{
msr &= ~MSR_HV;
msr |= MSR_ME;
return msr;
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
{
vcpu->arch.cr = vcpu->arch.cr_tm;
vcpu->arch.xer = vcpu->arch.xer_tm;
vcpu->arch.lr = vcpu->arch.lr_tm;
vcpu->arch.ctr = vcpu->arch.ctr_tm;
vcpu->arch.amr = vcpu->arch.amr_tm;
vcpu->arch.ppr = vcpu->arch.ppr_tm;
vcpu->arch.dscr = vcpu->arch.dscr_tm;
vcpu->arch.tar = vcpu->arch.tar_tm;
memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
sizeof(vcpu->arch.gpr));
vcpu->arch.fp = vcpu->arch.fp_tm;
vcpu->arch.vr = vcpu->arch.vr_tm;
vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
}
static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
{
vcpu->arch.cr_tm = vcpu->arch.cr;
vcpu->arch.xer_tm = vcpu->arch.xer;
vcpu->arch.lr_tm = vcpu->arch.lr;
vcpu->arch.ctr_tm = vcpu->arch.ctr;
vcpu->arch.amr_tm = vcpu->arch.amr;
vcpu->arch.ppr_tm = vcpu->arch.ppr;
vcpu->arch.dscr_tm = vcpu->arch.dscr;
vcpu->arch.tar_tm = vcpu->arch.tar;
memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
sizeof(vcpu->arch.gpr));
vcpu->arch.fp_tm = vcpu->arch.fp;
vcpu->arch.vr_tm = vcpu->arch.vr;
vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
#endif /* __ASM_KVM_BOOK3S_64_H__ */
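
These checkpoint helpers belong to the Power9 Transactional Memory workarounds called out in the commit message: with the new CPU_FTR_P9_TM_HV_ASSIST feature bit (added in the cputable hunk earlier on this page), KVM emulates the TM state transitions itself and has to copy register state between the live registers and their checkpointed *_tm shadows.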

View File

@ -119,6 +119,7 @@ struct kvmppc_host_state {
u8 host_ipi;
u8 ptid; /* thread number within subcore when split */
u8 tid; /* thread number within whole core */
u8 fake_suspend;
struct kvm_vcpu *kvm_vcpu;
struct kvmppc_vcore *kvm_vcore;
void __iomem *xics_phys;

View File

@ -610,6 +610,7 @@ struct kvm_vcpu_arch {
u64 tfhar;
u64 texasr;
u64 tfiar;
u64 orig_texasr;
u32 cr_tm;
u64 xer_tm;

View File

@ -436,15 +436,15 @@ struct openpic;
extern void kvm_cma_reserve(void) __init;
static inline void kvmppc_set_xics_phys(int cpu, unsigned long addr)
{
paca[cpu].kvm_hstate.xics_phys = (void __iomem *)addr;
paca_ptrs[cpu]->kvm_hstate.xics_phys = (void __iomem *)addr;
}
static inline void kvmppc_set_xive_tima(int cpu,
unsigned long phys_addr,
void __iomem *virt_addr)
{
paca[cpu].kvm_hstate.xive_tima_phys = (void __iomem *)phys_addr;
paca[cpu].kvm_hstate.xive_tima_virt = virt_addr;
paca_ptrs[cpu]->kvm_hstate.xive_tima_phys = (void __iomem *)phys_addr;
paca_ptrs[cpu]->kvm_hstate.xive_tima_virt = virt_addr;
}
static inline u32 kvmppc_get_xics_latch(void)
@ -458,7 +458,7 @@ static inline u32 kvmppc_get_xics_latch(void)
static inline void kvmppc_set_host_ipi(int cpu, u8 host_ipi)
{
paca[cpu].kvm_hstate.host_ipi = host_ipi;
paca_ptrs[cpu]->kvm_hstate.host_ipi = host_ipi;
}
static inline void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)

View File

@ -34,16 +34,19 @@
#include <linux/threads.h>
#include <asm/types.h>
#include <asm/mmu.h>
#include <asm/firmware.h>
/*
* We only have to have statically allocated lppaca structs on
* legacy iSeries, which supports at most 64 cpus.
*/
#define NR_LPPACAS 1
/*
* The Hypervisor barfs if the lppaca crosses a page boundary. A 1k
* alignment is sufficient to prevent this
* The lppaca is the "virtual processor area" registered with the hypervisor,
* H_REGISTER_VPA etc.
*
* According to PAPR, the structure is 640 bytes long, must be L1 cache line
* aligned, and must not cross a 4kB boundary. Its size field must be at
* least 640 bytes (but may be more).
*
* Pre-v4.14 KVM hypervisors reject the VPA if its size field is smaller than
* 1kB, so we dynamically allocate 1kB and advertise size as 1kB, but keep
* this structure as the canonical 640 byte size.
*/
struct lppaca {
/* cacheline 1 contains read-only data */
@ -97,13 +100,11 @@ struct lppaca {
__be32 page_ins; /* CMO Hint - # page ins by OS */
u8 reserved11[148];
volatile __be64 dtl_idx; /* Dispatch Trace Log head index */
volatile __be64 dtl_idx; /* Dispatch Trace Log head index */
u8 reserved12[96];
} __attribute__((__aligned__(0x400)));
} ____cacheline_aligned;
extern struct lppaca lppaca[];
#define lppaca_of(cpu) (*paca[cpu].lppaca_ptr)
#define lppaca_of(cpu) (*paca_ptrs[cpu]->lppaca_ptr)
/*
* We are using a non architected field to determine if a partition is
@ -114,6 +115,8 @@ extern struct lppaca lppaca[];
static inline bool lppaca_shared_proc(struct lppaca *l)
{
if (!firmware_has_feature(FW_FEATURE_SPLPAR))
return false;
return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
}

View File

@ -186,11 +186,32 @@
#define M_APG2 0x00000040
#define M_APG3 0x00000060
#ifdef CONFIG_PPC_MM_SLICES
#include <asm/nohash/32/slice.h>
#define SLICE_ARRAY_SIZE (1 << (32 - SLICE_LOW_SHIFT - 1))
#endif
#ifndef __ASSEMBLY__
struct slice_mask {
u64 low_slices;
DECLARE_BITMAP(high_slices, 0);
};
typedef struct {
unsigned int id;
unsigned int active;
unsigned long vdso_base;
#ifdef CONFIG_PPC_MM_SLICES
u16 user_psize; /* page size index */
unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
unsigned char high_slices_psize[0];
unsigned long slb_addr_limit;
struct slice_mask mask_base_psize; /* 4k or 16k */
# ifdef CONFIG_HUGETLB_PAGE
struct slice_mask mask_512k;
struct slice_mask mask_8m;
# endif
#endif
} mm_context_t;
#define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)

View File

@ -111,9 +111,9 @@
/* MMU feature bit sets for various CPUs */
#define MMU_FTRS_DEFAULT_HPTE_ARCH_V2 \
MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
#define MMU_FTRS_POWER4 MMU_FTRS_DEFAULT_HPTE_ARCH_V2
#define MMU_FTRS_PPC970 MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA
#define MMU_FTRS_POWER5 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
#define MMU_FTRS_POWER MMU_FTRS_DEFAULT_HPTE_ARCH_V2
#define MMU_FTRS_PPC970 MMU_FTRS_POWER | MMU_FTR_TLBIE_CROP_VA
#define MMU_FTRS_POWER5 MMU_FTRS_POWER | MMU_FTR_LOCKLESS_TLBIE
#define MMU_FTRS_POWER6 MMU_FTRS_POWER5 | MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA
#define MMU_FTRS_POWER7 MMU_FTRS_POWER6
#define MMU_FTRS_POWER8 MMU_FTRS_POWER6

View File

@ -60,12 +60,51 @@ extern int hash__alloc_context_id(void);
extern void hash__reserve_context_id(int id);
extern void __destroy_context(int context_id);
static inline void mmu_context_init(void) { }
static inline int alloc_extended_context(struct mm_struct *mm,
unsigned long ea)
{
int context_id;
int index = ea >> MAX_EA_BITS_PER_CONTEXT;
context_id = hash__alloc_context_id();
if (context_id < 0)
return context_id;
VM_WARN_ON(mm->context.extended_id[index]);
mm->context.extended_id[index] = context_id;
return context_id;
}
static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
{
int context_id;
context_id = get_ea_context(&mm->context, ea);
if (!context_id)
return true;
return false;
}
#else
extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk);
extern unsigned long __init_new_context(void);
extern void __destroy_context(unsigned long context_id);
extern void mmu_context_init(void);
static inline int alloc_extended_context(struct mm_struct *mm,
unsigned long ea)
{
/* should never be called on non-book3s_64 */
WARN_ON(1);
return -ENOMEM;
}
static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
{
return false;
}
#endif
#if defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE) && defined(CONFIG_PPC_RADIX_MMU)
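
need_extra_context() and alloc_extended_context() are meant to be used together by the fault path: first check whether the effective address is already covered, and only then allocate an extra context ID for that range. A hedged sketch of that calling pattern; the wrapper name is illustrative, not the kernel's:

/* Illustrative wrapper showing the intended call order. */
static int ensure_context_for_ea(struct mm_struct *mm, unsigned long ea)
{
	if (!need_extra_context(mm, ea))
		return 0;			/* already covered by an existing context */

	if (alloc_extended_context(mm, ea) < 0)
		return -ENOMEM;			/* let the caller signal the task */

	return 0;
}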

View File

@ -0,0 +1,18 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_32_SLICE_H
#define _ASM_POWERPC_NOHASH_32_SLICE_H
#ifdef CONFIG_PPC_MM_SLICES
#define SLICE_LOW_SHIFT 26 /* 64 slices */
#define SLICE_LOW_TOP (0x100000000ull)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define SLICE_HIGH_SHIFT 0
#define SLICE_NUM_HIGH 0ul
#define GET_HIGH_SLICE_INDEX(addr) (addr & 0)
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
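
Restating the arithmetic behind the "64 slices" comment: with SLICE_LOW_SHIFT at 26 each low slice covers 64MB, so the 4GB 32-bit address space yields 64 of them. A standalone restatement of the macros for a quick check:

#include <assert.h>

#define SLICE_LOW_SHIFT		26
#define SLICE_LOW_TOP		(0x100000000ull)
#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)

int main(void)
{
	assert(SLICE_NUM_LOW == 64);				/* 4GB / 64MB */
	assert(GET_LOW_SLICE_INDEX(0x0C000000ul) == 3);		/* fourth 64MB slice */
	return 0;
}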

View File

@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_64_SLICE_H
#define _ASM_POWERPC_NOHASH_64_SLICE_H
#ifdef CONFIG_PPC_64K_PAGES
#define get_slice_psize(mm, addr) MMU_PAGE_64K
#else /* CONFIG_PPC_64K_PAGES */
#define get_slice_psize(mm, addr) MMU_PAGE_4K
#endif /* !CONFIG_PPC_64K_PAGES */
#define slice_set_user_psize(mm, psize) do { BUG(); } while (0)
#endif /* _ASM_POWERPC_NOHASH_64_SLICE_H */

View File

@ -204,7 +204,9 @@
#define OPAL_NPU_SPA_SETUP 159
#define OPAL_NPU_SPA_CLEAR_CACHE 160
#define OPAL_NPU_TL_SET 161
#define OPAL_LAST 161
#define OPAL_PCI_GET_PBCQ_TUNNEL_BAR 164
#define OPAL_PCI_SET_PBCQ_TUNNEL_BAR 165
#define OPAL_LAST 165
/* Device tree flags */

View File

@ -204,6 +204,8 @@ int64_t opal_unregister_dump_region(uint32_t id);
int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
int64_t opal_pci_get_pbcq_tunnel_bar(uint64_t phb_id, uint64_t *addr);
int64_t opal_pci_set_pbcq_tunnel_bar(uint64_t phb_id, uint64_t addr);
int64_t opal_ipmi_send(uint64_t interface, struct opal_ipmi_msg *msg,
uint64_t msg_len);
int64_t opal_ipmi_recv(uint64_t interface, struct opal_ipmi_msg *msg,
@ -323,7 +325,7 @@ struct rtc_time;
extern unsigned long opal_get_boot_time(void);
extern void opal_nvram_init(void);
extern void opal_flash_update_init(void);
extern void opal_flash_term_callback(void);
extern void opal_flash_update_print_message(void);
extern int opal_elog_init(void);
extern void opal_platform_dump_init(void);
extern void opal_sys_param_init(void);

View File

@ -32,6 +32,7 @@
#include <asm/accounting.h>
#include <asm/hmi.h>
#include <asm/cpuidle.h>
#include <asm/atomic.h>
register struct paca_struct *local_paca asm("r13");
@ -46,7 +47,10 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */
#define get_paca() local_paca
#endif
#ifdef CONFIG_PPC_PSERIES
#define get_lppaca() (get_paca()->lppaca_ptr)
#endif
#define get_slb_shadow() (get_paca()->slb_shadow_ptr)
struct task_struct;
@ -58,7 +62,7 @@ struct task_struct;
* processor.
*/
struct paca_struct {
#ifdef CONFIG_PPC_BOOK3S
#ifdef CONFIG_PPC_PSERIES
/*
* Because hw_cpu_id, unlike other paca fields, is accessed
* routinely from other CPUs (from the IRQ code), we stick to
@ -67,7 +71,8 @@ struct paca_struct {
*/
struct lppaca *lppaca_ptr; /* Pointer to LpPaca for PLIC */
#endif /* CONFIG_PPC_BOOK3S */
#endif /* CONFIG_PPC_PSERIES */
/*
* MAGIC: the spinlock functions in arch/powerpc/lib/locks.c
* load lock_token and paca_index with a single lwz
@ -141,7 +146,7 @@ struct paca_struct {
#ifdef CONFIG_PPC_BOOK3S
mm_context_id_t mm_ctx_id;
#ifdef CONFIG_PPC_MM_SLICES
u64 mm_ctx_low_slices_psize;
unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
unsigned long mm_ctx_slb_addr_limit;
#else
@ -160,10 +165,14 @@ struct paca_struct {
u64 saved_msr; /* MSR saved here by enter_rtas */
u16 trap_save; /* Used when bad stack is encountered */
u8 irq_soft_mask; /* mask for irq soft masking */
u8 soft_enabled; /* irq soft-enable flag */
u8 irq_happened; /* irq happened while soft-disabled */
u8 io_sync; /* writel() needs spin_unlock sync */
u8 irq_work_pending; /* IRQ_WORK interrupt while soft-disabled */
u8 nap_state_lost; /* NV GPR values lost in power7_idle */
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
u8 pmcregs_in_use; /* pseries puts this in lppaca */
#endif
u64 sprg_vdso; /* Saved user-visible sprg */
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
u64 tm_scratch; /* TM scratch area for reclaim */
@ -177,6 +186,8 @@ struct paca_struct {
u8 thread_mask;
/* Mask to denote subcore sibling threads */
u8 subcore_sibling_mask;
/* Flag to request this thread not to stop */
atomic_t dont_stop;
/*
* Pointer to an array which contains pointer
* to the sibling threads' paca.
@ -241,18 +252,20 @@ struct paca_struct {
void *rfi_flush_fallback_area;
u64 l1d_flush_size;
#endif
};
} ____cacheline_aligned;
extern void copy_mm_to_paca(struct mm_struct *mm);
extern struct paca_struct *paca;
extern struct paca_struct **paca_ptrs;
extern void initialise_paca(struct paca_struct *new_paca, int cpu);
extern void setup_paca(struct paca_struct *new_paca);
extern void allocate_pacas(void);
extern void allocate_paca_ptrs(void);
extern void allocate_paca(int cpu);
extern void free_unused_pacas(void);
#else /* CONFIG_PPC64 */
static inline void allocate_pacas(void) { };
static inline void allocate_paca_ptrs(void) { };
static inline void allocate_paca(int cpu) { };
static inline void free_unused_pacas(void) { };
#endif /* CONFIG_PPC64 */
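
The allocation interface now comes in three pieces: allocate_paca_ptrs() for the pointer array itself, allocate_paca(cpu) once a CPU's identity is known, and free_unused_pacas() to trim the rest. A hedged sketch of the boot-time ordering these prototypes imply; the wrapper and its nr_possible_cpus argument are illustrative, not the literal setup code:

/* Illustrative ordering only. */
static void sketch_setup_pacas(int nr_possible_cpus)
{
	int cpu;

	allocate_paca_ptrs();			/* the paca_ptrs[] array */

	for (cpu = 0; cpu < nr_possible_cpus; cpu++)
		allocate_paca(cpu);		/* one paca per possible CPU */

	free_unused_pacas();			/* release anything not needed */
}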

View File

@ -126,7 +126,15 @@ extern long long virt_phys_offset;
#ifdef CONFIG_FLATMEM
#define ARCH_PFN_OFFSET ((unsigned long)(MEMORY_START >> PAGE_SHIFT))
#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
#ifndef __ASSEMBLY__
extern unsigned long max_mapnr;
static inline bool pfn_valid(unsigned long pfn)
{
unsigned long min_pfn = ARCH_PFN_OFFSET;
return pfn >= min_pfn && pfn < max_mapnr;
}
#endif
#endif
#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
@ -344,5 +352,6 @@ typedef struct page *pgtable_t;
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>
#endif /* _ASM_POWERPC_PAGE_H */
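
pfn_valid() is now a static inline taking an unsigned long and returning bool, so the argument is evaluated once and type-checked. A small hypothetical caller, just to show the shape of the new interface:

/* Hypothetical walker; not kernel code. */
static unsigned long count_valid_pfns(unsigned long start, unsigned long end)
{
	unsigned long pfn, count = 0;

	for (pfn = start; pfn < end; pfn++)
		if (pfn_valid(pfn))
			count++;
	return count;
}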

View File

@ -86,65 +86,6 @@ extern u64 ppc64_pft_size;
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_PPC_MM_SLICES
#define SLICE_LOW_SHIFT 28
#define SLICE_HIGH_SHIFT 40
#define SLICE_LOW_TOP (0x100000000ul)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define SLICE_NUM_HIGH (H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT)
#ifndef __ASSEMBLY__
struct mm_struct;
extern unsigned long slice_get_unmapped_area(unsigned long addr,
unsigned long len,
unsigned long flags,
unsigned int psize,
int topdown);
extern unsigned int get_slice_psize(struct mm_struct *mm,
unsigned long addr);
extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
#endif /* __ASSEMBLY__ */
#else
#define slice_init()
#ifdef CONFIG_PPC_BOOK3S_64
#define get_slice_psize(mm, addr) ((mm)->context.user_psize)
#define slice_set_user_psize(mm, psize) \
do { \
(mm)->context.user_psize = (psize); \
(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
} while (0)
#else /* !CONFIG_PPC_BOOK3S_64 */
#ifdef CONFIG_PPC_64K_PAGES
#define get_slice_psize(mm, addr) MMU_PAGE_64K
#else /* CONFIG_PPC_64K_PAGES */
#define get_slice_psize(mm, addr) MMU_PAGE_4K
#endif /* !CONFIG_PPC_64K_PAGES */
#define slice_set_user_psize(mm, psize) do { BUG(); } while(0)
#endif /* CONFIG_PPC_BOOK3S_64 */
#define slice_set_range_psize(mm, start, len, psize) \
slice_set_user_psize((mm), (psize))
#endif /* CONFIG_PPC_MM_SLICES */
#ifdef CONFIG_HUGETLB_PAGE
#ifdef CONFIG_PPC_MM_SLICES
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif
#endif /* !CONFIG_HUGETLB_PAGE */
#define VM_DATA_DEFAULT_FLAGS \
(is_32bit_task() ? \
VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)

View File

@ -53,6 +53,8 @@ struct power_pmu {
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX];
int n_blacklist_ev;
int *blacklist_ev;
/* BHRB entries in the PMU */
int bhrb_nr;
};

View File

@ -2,6 +2,8 @@
#ifndef _ASM_POWERPC_PLPAR_WRAPPERS_H
#define _ASM_POWERPC_PLPAR_WRAPPERS_H
#ifdef CONFIG_PPC_PSERIES
#include <linux/string.h>
#include <linux/irqflags.h>
@ -9,14 +11,6 @@
#include <asm/paca.h>
#include <asm/page.h>
/* Get state of physical CPU from query_cpu_stopped */
int smp_query_cpu_stopped(unsigned int pcpu);
#define QCSS_STOPPED 0
#define QCSS_STOPPING 1
#define QCSS_NOT_STOPPED 2
#define QCSS_HARDWARE_ERROR -1
#define QCSS_HARDWARE_BUSY -2
static inline long poll_pending(void)
{
return plpar_hcall_norets(H_POLL_PENDING);
@ -311,17 +305,17 @@ static inline long enable_little_endian_exceptions(void)
return plpar_set_mode(1, H_SET_MODE_RESOURCE_LE, 0, 0);
}
static inline long plapr_set_ciabr(unsigned long ciabr)
static inline long plpar_set_ciabr(unsigned long ciabr)
{
return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_CIABR, ciabr, 0);
}
static inline long plapr_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
static inline long plpar_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
{
return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR, dawr0, dawrx0);
}
static inline long plapr_signal_sys_reset(long cpu)
static inline long plpar_signal_sys_reset(long cpu)
{
return plpar_hcall_norets(H_SIGNAL_SYS_RESET, cpu);
}
@ -340,4 +334,12 @@ static inline long plpar_get_cpu_characteristics(struct h_cpu_char_result *p)
return rc;
}
#else /* !CONFIG_PPC_PSERIES */
static inline long plpar_set_ciabr(unsigned long ciabr)
{
return 0;
}
#endif /* CONFIG_PPC_PSERIES */
#endif /* _ASM_POWERPC_PLPAR_WRAPPERS_H */

View File

@ -31,10 +31,21 @@ void ppc_enable_pmcs(void);
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/lppaca.h>
#include <asm/firmware.h>
static inline void ppc_set_pmu_inuse(int inuse)
{
get_lppaca()->pmcregs_in_use = inuse;
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE)
if (firmware_has_feature(FW_FEATURE_LPAR)) {
#ifdef CONFIG_PPC_PSERIES
get_lppaca()->pmcregs_in_use = inuse;
#endif
} else {
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
get_paca()->pmcregs_in_use = inuse;
#endif
}
#endif
}
extern void power4_enable_pmcs(void);

View File

@ -29,6 +29,12 @@ extern int pnv_pci_set_power_state(uint64_t id, uint8_t state,
extern int pnv_pci_set_p2p(struct pci_dev *initiator, struct pci_dev *target,
u64 desc);
extern int pnv_pci_enable_tunnel(struct pci_dev *dev, uint64_t *asnind);
extern int pnv_pci_disable_tunnel(struct pci_dev *dev);
extern int pnv_pci_set_tunnel_bar(struct pci_dev *dev, uint64_t addr,
int enable);
extern int pnv_pci_get_as_notify_info(struct task_struct *task, u32 *lpid,
u32 *pid, u32 *tid);
int pnv_phb_to_cxl_mode(struct pci_dev *dev, uint64_t mode);
int pnv_cxl_ioda_msi_setup(struct pci_dev *dev, unsigned int hwirq,
unsigned int virq);

View File

@ -40,6 +40,7 @@ static inline int pnv_npu2_handle_fault(struct npu_context *context,
}
static inline void pnv_tm_init(void) { }
static inline void pnv_power9_force_smt4(void) { }
#endif
#endif /* _ASM_POWERNV_H */

View File

@ -232,6 +232,7 @@
#define PPC_INST_MSGSYNC 0x7c0006ec
#define PPC_INST_MSGSNDP 0x7c00011c
#define PPC_INST_MSGCLRP 0x7c00015c
#define PPC_INST_MTMSRD 0x7c000164
#define PPC_INST_MTTMR 0x7c0003dc
#define PPC_INST_NOP 0x60000000
#define PPC_INST_PASTE 0x7c20070d
@ -239,8 +240,10 @@
#define PPC_INST_POPCNTB_MASK 0xfc0007fe
#define PPC_INST_POPCNTD 0x7c0003f4
#define PPC_INST_POPCNTW 0x7c0002f4
#define PPC_INST_RFEBB 0x4c000124
#define PPC_INST_RFCI 0x4c000066
#define PPC_INST_RFDI 0x4c00004e
#define PPC_INST_RFID 0x4c000024
#define PPC_INST_RFMCI 0x4c00004c
#define PPC_INST_MFSPR 0x7c0002a6
#define PPC_INST_MFSPR_DSCR 0x7c1102a6
@ -271,12 +274,14 @@
#define PPC_INST_TLBSRX_DOT 0x7c0006a5
#define PPC_INST_VPMSUMW 0x10000488
#define PPC_INST_VPMSUMD 0x100004c8
#define PPC_INST_VPERMXOR 0x1000002d
#define PPC_INST_XXLOR 0xf0000490
#define PPC_INST_XXSWAPD 0xf0000250
#define PPC_INST_XVCPSGNDP 0xf0000780
#define PPC_INST_TRECHKPT 0x7c0007dd
#define PPC_INST_TRECLAIM 0x7c00075d
#define PPC_INST_TABORT 0x7c00071d
#define PPC_INST_TSR 0x7c0005dd
#define PPC_INST_NAP 0x4c000364
#define PPC_INST_SLEEP 0x4c0003a4
@ -517,6 +522,11 @@
#define XVCPSGNDP(t, a, b) stringify_in_c(.long (PPC_INST_XVCPSGNDP | \
VSX_XX3((t), (a), (b))))
#define VPERMXOR(vrt, vra, vrb, vrc) \
stringify_in_c(.long (PPC_INST_VPERMXOR | \
___PPC_RT(vrt) | ___PPC_RA(vra) | \
___PPC_RB(vrb) | (((vrc) & 0x1f) << 6)))
#define PPC_NAP stringify_in_c(.long PPC_INST_NAP)
#define PPC_SLEEP stringify_in_c(.long PPC_INST_SLEEP)
#define PPC_WINKLE stringify_in_c(.long PPC_INST_WINKLE)
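
The VPERMXOR() macro above builds the raw 32-bit instruction word by OR-ing the register fields into PPC_INST_VPERMXOR. A standalone sketch of the same encoding, assuming the usual ___PPC_RT/___PPC_RA/___PPC_RB field positions (shifts of 21, 16 and 11) used elsewhere in this header:

#include <stdint.h>
#include <stdio.h>

#define PPC_INST_VPERMXOR	0x1000002d

/* Assumed field positions; mirrors the VPERMXOR() macro above. */
static uint32_t vpermxor_word(unsigned vrt, unsigned vra, unsigned vrb, unsigned vrc)
{
	return PPC_INST_VPERMXOR |
	       ((vrt & 0x1f) << 21) |
	       ((vra & 0x1f) << 16) |
	       ((vrb & 0x1f) << 11) |
	       ((vrc & 0x1f) << 6);
}

int main(void)
{
	printf("vpermxor v0,v1,v2,v3 -> 0x%08x\n", vpermxor_word(0, 1, 2, 3));
	return 0;
}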

View File

@ -439,14 +439,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
/* The following stops all load and store data streams associated with stream
* ID (i.e. streams created explicitly). The embedded and server mnemonics for
* dcbt are different so we use machine "power4" here explicitly.
* dcbt are different so this must only be used for server.
*/
#define DCBT_STOP_ALL_STREAM_IDS(scratch) \
.machine push ; \
.machine "power4" ; \
lis scratch,0x60000000@h; \
dcbt 0,scratch,0b01010; \
.machine pop
#define DCBT_BOOK3S_STOP_ALL_STREAM_IDS(scratch) \
lis scratch,0x60000000@h; \
dcbt 0,scratch,0b01010
/*
* toreal/fromreal/tophys/tovirt macros. 32-bit BookE makes them

View File

@ -109,6 +109,13 @@ void release_thread(struct task_struct *);
#define TASK_SIZE_64TB (0x0000400000000000UL)
#define TASK_SIZE_128TB (0x0000800000000000UL)
#define TASK_SIZE_512TB (0x0002000000000000UL)
#define TASK_SIZE_1PB (0x0004000000000000UL)
#define TASK_SIZE_2PB (0x0008000000000000UL)
/*
* With 52 bits in the address we can support
* up to 4PB of range.
*/
#define TASK_SIZE_4PB (0x0010000000000000UL)
/*
* For now 512TB is only supported with book3s and 64K linux page size.
@ -117,11 +124,17 @@ void release_thread(struct task_struct *);
/*
* Max value currently used:
*/
#define TASK_SIZE_USER64 TASK_SIZE_512TB
#define TASK_SIZE_USER64 TASK_SIZE_4PB
#define DEFAULT_MAP_WINDOW_USER64 TASK_SIZE_128TB
#define TASK_CONTEXT_SIZE TASK_SIZE_512TB
#else
#define TASK_SIZE_USER64 TASK_SIZE_64TB
#define DEFAULT_MAP_WINDOW_USER64 TASK_SIZE_64TB
/*
* We don't need to allocate extended context ids for 4K page size, because
* we limit the max effective address on this config to 64TB.
*/
#define TASK_CONTEXT_SIZE TASK_SIZE_64TB
#endif
/*
@ -505,6 +518,7 @@ extern int powersave_nap; /* set if nap mode can be used in idle loop */
extern unsigned long power7_idle_insn(unsigned long type); /* PNV_THREAD_NAP/etc*/
extern void power7_idle_type(unsigned long type);
extern unsigned long power9_idle_stop(unsigned long psscr_val);
extern unsigned long power9_offline_stop(unsigned long psscr_val);
extern void power9_idle_type(unsigned long stop_psscr_val,
unsigned long stop_psscr_mask);
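
Because DEFAULT_MAP_WINDOW_USER64 stays at 128TB while TASK_SIZE_USER64 grows to TASK_SIZE_4PB, a process only receives mappings above 128TB when it passes mmap() a hint address beyond that window. A minimal userspace sketch of that opt-in; the addresses are chosen purely for illustration:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* A hint above the 128TB default window requests the extended range. */
	void *hint = (void *)(1ull << 48);		/* 256TB */
	void *p = mmap(hint, 1 << 20, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		perror("mmap");
	else
		printf("mapped at %p\n", p);
	return 0;
}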

View File

@ -156,6 +156,8 @@
#define PSSCR_SD 0x00400000 /* Status Disable */
#define PSSCR_PLS 0xf000000000000000 /* Power-saving Level Status */
#define PSSCR_GUEST_VIS 0xf0000000000003ff /* Guest-visible PSSCR fields */
#define PSSCR_FAKE_SUSPEND 0x00000400 /* Fake-suspend bit (P9 DD2.2) */
#define PSSCR_FAKE_SUSPEND_LG 10 /* Fake-suspend bit position */
/* Floating Point Status and Control Register (FPSCR) Fields */
#define FPSCR_FX 0x80000000 /* FPU exception summary */
@ -237,7 +239,12 @@
#define SPRN_TFIAR 0x81 /* Transaction Failure Inst Addr */
#define SPRN_TEXASR 0x82 /* Transaction EXception & Summary */
#define SPRN_TEXASRU 0x83 /* '' '' '' Upper 32 */
#define TEXASR_ABORT __MASK(63-31) /* terminated by tabort or treclaim */
#define TEXASR_SUSP __MASK(63-32) /* tx failed in suspended state */
#define TEXASR_HV __MASK(63-34) /* MSR[HV] when failure occurred */
#define TEXASR_PR __MASK(63-35) /* MSR[PR] when failure occurred */
#define TEXASR_FS __MASK(63-36) /* TEXASR Failure Summary */
#define TEXASR_EXACT __MASK(63-37) /* TFIAR value is exact */
#define SPRN_TFHAR 0x80 /* Transaction Failure Handler Addr */
#define SPRN_TIDR 144 /* Thread ID register */
#define SPRN_CTRLF 0x088

View File

@ -0,0 +1,74 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Security related feature bit definitions.
*
* Copyright 2018, Michael Ellerman, IBM Corporation.
*/
#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
#define _ASM_POWERPC_SECURITY_FEATURES_H
extern unsigned long powerpc_security_features;
extern bool rfi_flush;
static inline void security_ftr_set(unsigned long feature)
{
powerpc_security_features |= feature;
}
static inline void security_ftr_clear(unsigned long feature)
{
powerpc_security_features &= ~feature;
}
static inline bool security_ftr_enabled(unsigned long feature)
{
return !!(powerpc_security_features & feature);
}
// Features indicating support for Spectre/Meltdown mitigations
// The L1-D cache can be flushed with ori r30,r30,0
#define SEC_FTR_L1D_FLUSH_ORI30 0x0000000000000001ull
// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
#define SEC_FTR_L1D_FLUSH_TRIG2 0x0000000000000002ull
// ori r31,r31,0 acts as a speculation barrier
#define SEC_FTR_SPEC_BAR_ORI31 0x0000000000000004ull
// Speculation past bctr is disabled
#define SEC_FTR_BCCTRL_SERIALISED 0x0000000000000008ull
// Entries in L1-D are private to a SMT thread
#define SEC_FTR_L1D_THREAD_PRIV 0x0000000000000010ull
// Indirect branch prediction cache disabled
#define SEC_FTR_COUNT_CACHE_DISABLED 0x0000000000000020ull
// Features indicating need for Spectre/Meltdown mitigations
// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
#define SEC_FTR_L1D_FLUSH_HV 0x0000000000000040ull
// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
#define SEC_FTR_L1D_FLUSH_PR 0x0000000000000080ull
// A speculation barrier should be used for bounds checks (Spectre variant 1)
#define SEC_FTR_BNDS_CHK_SPEC_BAR 0x0000000000000100ull
// Firmware configuration indicates user favours security over performance
#define SEC_FTR_FAVOUR_SECURITY 0x0000000000000200ull
// Features enabled by default
#define SEC_FTR_DEFAULT \
(SEC_FTR_L1D_FLUSH_HV | \
SEC_FTR_L1D_FLUSH_PR | \
SEC_FTR_BNDS_CHK_SPEC_BAR | \
SEC_FTR_FAVOUR_SECURITY)
#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
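
The helpers above are simple bit operations on powerpc_security_features. A hedged sketch of the kind of consumer check they enable; the exact policy shown is illustrative, not the kernel's setup code:

/* Illustrative: flush L1-D on kernel->user transitions only when firmware
 * both asks for it and favours security over performance. */
static bool want_rfi_flush_sketch(void)
{
	return security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
	       security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR);
}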

View File

@ -23,6 +23,7 @@ extern void reloc_got2(unsigned long);
#define PTRRELOC(x) ((typeof(x)) add_reloc_offset((unsigned long)(x)))
void check_for_initrd(void);
void mem_topology_setup(void);
void initmem_init(void);
void setup_panic(void);
#define ARCH_PANIC_TIMEOUT 180
@ -49,7 +50,7 @@ enum l1d_flush_type {
L1D_FLUSH_MTTRIG = 0x8,
};
void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
void setup_rfi_flush(enum l1d_flush_type, bool enable);
void do_rfi_flush_fixups(enum l1d_flush_type types);
#endif /* !__ASSEMBLY__ */

View File

@ -0,0 +1,40 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_SLICE_H
#define _ASM_POWERPC_SLICE_H
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/slice.h>
#elif defined(CONFIG_PPC64)
#include <asm/nohash/64/slice.h>
#elif defined(CONFIG_PPC_MMU_NOHASH)
#include <asm/nohash/32/slice.h>
#endif
#ifdef CONFIG_PPC_MM_SLICES
#ifdef CONFIG_HUGETLB_PAGE
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#ifndef __ASSEMBLY__
struct mm_struct;
unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
unsigned long flags, unsigned int psize,
int topdown);
unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
void slice_init_new_context_exec(struct mm_struct *mm);
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_SLICE_H */

View File

@ -31,6 +31,7 @@
extern int boot_cpuid;
extern int spinning_secondaries;
extern u32 *cpu_to_phys_id;
extern void cpu_die(void);
extern int cpu_to_chip_id(int cpu);
@ -170,12 +171,12 @@ static inline const struct cpumask *cpu_sibling_mask(int cpu)
#ifdef CONFIG_PPC64
static inline int get_hard_smp_processor_id(int cpu)
{
return paca[cpu].hw_cpu_id;
return paca_ptrs[cpu]->hw_cpu_id;
}
static inline void set_hard_smp_processor_id(int cpu, int phys)
{
paca[cpu].hw_cpu_id = phys;
paca_ptrs[cpu]->hw_cpu_id = phys;
}
#else
/* 32-bit */

View File

@ -17,7 +17,7 @@
#endif /* CONFIG_SPARSEMEM */
#ifdef CONFIG_MEMORY_HOTPLUG
extern int create_section_mapping(unsigned long start, unsigned long end);
extern int create_section_mapping(unsigned long start, unsigned long end, int nid);
extern int remove_section_mapping(unsigned long start, unsigned long end);
#ifdef CONFIG_PPC_BOOK3S_64

Some files were not shown because too many files have changed in this diff.