
powerpc updates for 4.20

Notable changes:
 
  - A large series to rewrite our SLB miss handling, replacing a lot of fairly
    complicated asm with far fewer lines of C.
 
  - Following on from that, we now maintain a cache of SLB entries for each
    process and preload them on context switch, leading to a 27% speedup for
    our context switch benchmark on Power9.
 
  - Improvements to our handling of SLB multi-hit errors. We now print more debug
    information when they occur, and try to continue running by flushing the SLB
    and reloading, rather than treating them as fatal.
 
  - Enable THP migration on 64-bit Book3S machines (eg. Power7/8/9).
 
  - Add support for physical memory up to 2PB in the linear mapping on 64-bit
    Book3S. We only support up to 512TB as regular system memory, otherwise the
    percpu allocator runs out of vmalloc space.
 
  - Add stack protector support for 32 and 64-bit, with a per-task canary.
 
  - Add support for PTRACE_SYSEMU and PTRACE_SYSEMU_SINGLESTEP.
 
  - Support recognising "big cores" on Power9, where two SMT4 cores are presented
    to us as a single SMT8 core.
 
  - A large series to clean up some of our ioremap handling and PTE flags.
 
  - Add a driver for the PAPR SCM (storage class memory) interface, allowing
    guests to operate on SCM devices (acked by Dan).
 
  - Changes to our ftrace code to handle very large kernels, where we need to use
    a trampoline to get to ftrace_caller().
 
 Many other smaller enhancements and cleanups.
 
 Thanks to:
   Alan Modra, Alistair Popple, Aneesh Kumar K.V, Anton Blanchard, Aravinda
   Prasad, Bartlomiej Zolnierkiewicz, Benjamin Herrenschmidt, Breno Leitao,
   Cédric Le Goater, Christophe Leroy, Christophe Lombard, Dan Carpenter, Daniel
   Axtens, Finn Thain, Gautham R. Shenoy, Gustavo Romero, Haren Myneni, Hari
   Bathini, Jia Hongtao, Joel Stanley, John Allen, Laurent Dufour, Madhavan
   Srinivasan, Mahesh Salgaonkar, Mark Hairgrove, Masahiro Yamada, Michael
   Bringmann, Michael Neuling, Michal Suchanek, Murilo Opsfelder Araujo, Nathan
   Fontenot, Naveen N. Rao, Nicholas Piggin, Nick Desaulniers, Oliver O'Halloran,
   Paul Mackerras, Petr Vorel, Rashmica Gupta, Reza Arbab, Rob Herring, Sam
   Bobroff, Samuel Mendoza-Jonas, Scott Wood, Stan Johnson, Stephen Rothwell,
   Stewart Smith, Suraj Jitindar Singh, Tyrel Datwyler, Vaibhav Jain, Vasant
   Hegde, YueHaibing, zhong jiang,
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJb01vTAAoJEFHr6jzI4aWADsEP/jqL3+2qxs098ra80tpXCpXJ
 tgXCosEs4b35sGtyHeUWZZZfWXeisaPAIlP8zTx1n50HACZduDYRAl0Ew9XB7Xdw
 enDHRVccD21FsmHBOx/Ii1rVJlovWlj6EQCWHKeZmNjeRoFuClVZ7CYmf+mBifKR
 sw2Db2fKA/59wMTq2zIMy5pqYgqlAs4jTWS6uN5hKPoBmO/82ARnNG+qgLuloD3Z
 O8zSDM9QQ7PpuyDgTjO9SAo2YjmEfXlEG6cOCCejsU3DMctaEAK5PUZ+blsHYHBH
 BYZYKs/x4pcw0SO41GtTh0M2YqDYBVuBIpRw8lLZap97Xo9ucSkAm5WD3rGxk4CY
 YeZKEPUql6MHN3+DKl8mx2F0V+Et/tio2HNqc9KReR1tfoolZAbe+SFZHfgmc/Rq
 RD9nnG8KRd4K2K1BTqpkTmI1EtE7jPtPJPSV8gMGhgL/N5vPmH3mql/qyOtYx48E
 6/hPzWESgs16VRZ/opLh8VvjlY1HBDODQhehhhl+o23/Vb8qEgRf8Uqhq50rQW1H
 EeOqyyYQ90txSU31Sgy1kQkvOgIFAsBObWT1ZCJ3RbfGbB4/tdEAvZqTZRlXo2OY
 7P0Sqcw/9Le5eJkHIlLtBv0TF7y1OYemCbLgRQzFlcRP+UKtYyg8eFnFjqbPEEmP
 ulwhn/BfFVSgaYKQ503u
 =I0pj
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-4.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Notable changes:

   - A large series to rewrite our SLB miss handling, replacing a lot of
     fairly complicated asm with far fewer lines of C.

   - Following on from that, we now maintain a cache of SLB entries for
     each process and preload them on context switch, leading to a 27%
     speedup for our context switch benchmark on Power9.

   - Improvements to our handling of SLB multi-hit errors. We now print
     more debug information when they occur, and try to continue running
     by flushing the SLB and reloading, rather than treating them as
     fatal.

   - Enable THP migration on 64-bit Book3S machines (eg. Power7/8/9).

   - Add support for physical memory up to 2PB in the linear mapping on
     64-bit Book3S. We only support up to 512TB as regular system
     memory, otherwise the percpu allocator runs out of vmalloc space.

   - Add stack protector support for 32 and 64-bit, with a per-task
     canary.

   - Add support for PTRACE_SYSEMU and PTRACE_SYSEMU_SINGLESTEP (see the
     sketch after this list).

   - Support recognising "big cores" on Power9, where two SMT4 cores are
     presented to us as a single SMT8 core.

   - A large series to clean up some of our ioremap handling and PTE
     flags.

   - Add a driver for the PAPR SCM (storage class memory) interface,
     allowing guests to operate on SCM devices (acked by Dan).

   - Changes to our ftrace code to handle very large kernels, where we
     need to use a trampoline to get to ftrace_caller().

  And many other smaller enhancements and cleanups.
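
  A minimal user-space sketch of what the new ptrace requests allow; the
  child body, the fixed interception count, and the omitted error handling
  are illustrative assumptions, not from the series. PTRACE_SYSEMU resumes
  the tracee until its next syscall entry without the kernel executing the
  syscall, leaving the tracer to emulate it:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {                     /* tracee */
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            raise(SIGSTOP);
            write(1, "hello\n", 6);         /* intercepted, never executed */
            _exit(0);
        }

        waitpid(pid, NULL, 0);              /* tracee stopped itself */

        for (int i = 0; i < 3; i++) {
            int status;

            /* Resume to the next syscall entry; the syscall is skipped */
            if (ptrace(PTRACE_SYSEMU, pid, NULL, NULL) == -1)
                break;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                break;
            /* A real tracer would read/patch the regs to emulate it */
            printf("intercepted syscall entry %d\n", i);
        }

        kill(pid, SIGKILL);
        return 0;
    }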

  Thanks to: Alan Modra, Alistair Popple, Aneesh Kumar K.V, Anton
  Blanchard, Aravinda Prasad, Bartlomiej Zolnierkiewicz, Benjamin
  Herrenschmidt, Breno Leitao, Cédric Le Goater, Christophe Leroy,
  Christophe Lombard, Dan Carpenter, Daniel Axtens, Finn Thain, Gautham
  R. Shenoy, Gustavo Romero, Haren Myneni, Hari Bathini, Jia Hongtao,
  Joel Stanley, John Allen, Laurent Dufour, Madhavan Srinivasan, Mahesh
  Salgaonkar, Mark Hairgrove, Masahiro Yamada, Michael Bringmann,
  Michael Neuling, Michal Suchanek, Murilo Opsfelder Araujo, Nathan
  Fontenot, Naveen N. Rao, Nicholas Piggin, Nick Desaulniers, Oliver
  O'Halloran, Paul Mackerras, Petr Vorel, Rashmica Gupta, Reza Arbab,
  Rob Herring, Sam Bobroff, Samuel Mendoza-Jonas, Scott Wood, Stan
  Johnson, Stephen Rothwell, Stewart Smith, Suraj Jitindar Singh, Tyrel
  Datwyler, Vaibhav Jain, Vasant Hegde, YueHaibing, zhong jiang"

* tag 'powerpc-4.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (221 commits)
  Revert "selftests/powerpc: Fix out-of-tree build errors"
  powerpc/msi: Fix compile error on mpc83xx
  powerpc: Fix stack protector crashes on CPU hotplug
  powerpc/traps: restore recoverability of machine_check interrupts
  powerpc/64/module: REL32 relocation range check
  powerpc/64s/radix: Fix radix__flush_tlb_collapsed_pmd double flushing pmd
  selftests/powerpc: Add a test of wild bctr
  powerpc/mm: Fix page table dump to work on Radix
  powerpc/mm/radix: Display if mappings are exec or not
  powerpc/mm/radix: Simplify split mapping logic
  powerpc/mm/radix: Remove the retry in the split mapping logic
  powerpc/mm/radix: Fix small page at boundary when splitting
  powerpc/mm/radix: Fix overuse of small pages in splitting logic
  powerpc/mm/radix: Fix off-by-one in split mapping logic
  powerpc/ftrace: Handle large kernel configs
  powerpc/mm: Fix WARN_ON with THP NUMA migration
  selftests/powerpc: Fix out-of-tree build errors
  powerpc/time: no steal_time when CONFIG_PPC_SPLPAR is not selected
  powerpc/time: Only set CONFIG_ARCH_HAS_SCALED_CPUTIME on PPC64
  powerpc/time: isolate scaled cputime accounting in dedicated functions.
  ...
Linus Torvalds 2018-10-26 14:36:21 -07:00
commit 685f7e4f16
253 changed files with 6333 additions and 3428 deletions


@ -2428,7 +2428,7 @@
seconds. Use this parameter to check at some
other rate. 0 disables periodic checking.
memtest= [KNL,X86,ARM] Enable memtest
memtest= [KNL,X86,ARM,PPC] Enable memtest
Format: <integer>
default : 0 <disable>
Specifies the number of memtest passes to be


@ -37,35 +37,6 @@
static void (*rom_reset)(void);
#ifdef CONFIG_ADB_CUDA
static time64_t cuda_read_time(void)
{
struct adb_request req;
time64_t time;
if (cuda_request(&req, NULL, 2, CUDA_PACKET, CUDA_GET_TIME) < 0)
return 0;
while (!req.complete)
cuda_poll();
time = (u32)((req.reply[3] << 24) | (req.reply[4] << 16) |
(req.reply[5] << 8) | req.reply[6]);
return time - RTC_OFFSET;
}
static void cuda_write_time(time64_t time)
{
struct adb_request req;
u32 data = lower_32_bits(time + RTC_OFFSET);
if (cuda_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME,
(data >> 24) & 0xFF, (data >> 16) & 0xFF,
(data >> 8) & 0xFF, data & 0xFF) < 0)
return;
while (!req.complete)
cuda_poll();
}
static __u8 cuda_read_pram(int offset)
{
struct adb_request req;
@ -91,33 +62,6 @@ static void cuda_write_pram(int offset, __u8 data)
#endif /* CONFIG_ADB_CUDA */
#ifdef CONFIG_ADB_PMU
static time64_t pmu_read_time(void)
{
struct adb_request req;
time64_t time;
if (pmu_request(&req, NULL, 1, PMU_READ_RTC) < 0)
return 0;
pmu_wait_complete(&req);
time = (u32)((req.reply[0] << 24) | (req.reply[1] << 16) |
(req.reply[2] << 8) | req.reply[3]);
return time - RTC_OFFSET;
}
static void pmu_write_time(time64_t time)
{
struct adb_request req;
u32 data = lower_32_bits(time + RTC_OFFSET);
if (pmu_request(&req, NULL, 5, PMU_SET_RTC,
(data >> 24) & 0xFF, (data >> 16) & 0xFF,
(data >> 8) & 0xFF, data & 0xFF) < 0)
return;
pmu_wait_complete(&req);
}
static __u8 pmu_read_pram(int offset)
{
struct adb_request req;
@ -295,13 +239,17 @@ static time64_t via_read_time(void)
* is basically any machine with Mac II-style ADB.
*/
static void via_write_time(time64_t time)
static void via_set_rtc_time(struct rtc_time *tm)
{
union {
__u8 cdata[4];
__u32 idata;
} data;
__u8 temp;
time64_t time;
time = mktime64(tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec);
/* Clear the write protect bit */
@ -641,12 +589,12 @@ int mac_hwclk(int op, struct rtc_time *t)
#ifdef CONFIG_ADB_CUDA
case MAC_ADB_EGRET:
case MAC_ADB_CUDA:
now = cuda_read_time();
now = cuda_get_time();
break;
#endif
#ifdef CONFIG_ADB_PMU
case MAC_ADB_PB2:
now = pmu_read_time();
now = pmu_get_time();
break;
#endif
default:
@ -665,24 +613,21 @@ int mac_hwclk(int op, struct rtc_time *t)
__func__, t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
t->tm_hour, t->tm_min, t->tm_sec);
now = mktime64(t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
t->tm_hour, t->tm_min, t->tm_sec);
switch (macintosh_config->adb_type) {
case MAC_ADB_IOP:
case MAC_ADB_II:
case MAC_ADB_PB1:
via_write_time(now);
via_set_rtc_time(t);
break;
#ifdef CONFIG_ADB_CUDA
case MAC_ADB_EGRET:
case MAC_ADB_CUDA:
cuda_write_time(now);
cuda_set_rtc_time(t);
break;
#endif
#ifdef CONFIG_ADB_PMU
case MAC_ADB_PB2:
pmu_write_time(now);
pmu_set_rtc_time(t);
break;
#endif
default:
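
The time64_t values in the helpers above are Mac RTC readings rebased
between the Mac epoch (1904-01-01) and the Unix epoch; RTC_OFFSET is that
difference in seconds. A minimal standalone sketch of the conversion, with
a made-up RTC reading (RTC_OFFSET itself matches the kernel's define):

    #include <stdio.h>
    #include <time.h>

    /* seconds from 1904-01-01 to 1970-01-01: (66*365 + 17) * 86400 */
    #define RTC_OFFSET 2082844800UL

    int main(void)
    {
        unsigned long mac_rtc = 3600000000UL;   /* hypothetical reading */
        time_t unix_time = (time_t)(mac_rtc - RTC_OFFSET);

        /* Reading mirrors cuda/pmu_read_time(); writing the clock adds
         * RTC_OFFSET back, as in the *_write_time() paths above. */
        printf("%s", ctime(&unix_time));
        return 0;
    }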


@ -0,0 +1,16 @@
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
obj-y += kernel/
obj-y += mm/
obj-y += lib/
obj-y += sysdev/
obj-y += platforms/
obj-y += math-emu/
obj-y += crypto/
obj-y += net/
obj-$(CONFIG_XMON) += xmon/
obj-$(CONFIG_KVM) += kvm/
obj-$(CONFIG_PERF_EVENTS) += perf/
obj-$(CONFIG_KEXEC_FILE) += purgatory/


@ -137,7 +137,7 @@ config PPC
select ARCH_HAS_PMEM_API if PPC64
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_MEMBARRIER_CALLBACKS
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC64
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
@ -180,6 +180,8 @@ config PPC
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_CBPF_JIT if !PPC64
select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
select HAVE_CONTEXT_TRACKING if PPC64
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
@ -188,6 +190,7 @@ config PPC
select HAVE_EBPF_JIT if PPC64
select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
@ -285,12 +288,10 @@ config ARCH_MAY_HAVE_PC_FDC
config PPC_UDBG_16550
bool
default n
config GENERIC_TBSYNC
bool
default y if PPC32 && SMP
default n
config AUDIT_ARCH
bool
@ -309,13 +310,11 @@ config EPAPR_BOOT
bool
help
Used to allow a board to specify it wants an ePAPR compliant wrapper.
default n
config DEFAULT_UIMAGE
bool
help
Used to allow a board to specify it wants a uImage built by default
default n
config ARCH_HIBERNATION_POSSIBLE
bool
@ -329,11 +328,9 @@ config ARCH_SUSPEND_POSSIBLE
config PPC_DCR_NATIVE
bool
default n
config PPC_DCR_MMIO
bool
default n
config PPC_DCR
bool
@ -344,7 +341,6 @@ config PPC_OF_PLATFORM_PCI
bool
depends on PCI
depends on PPC64 # not supported on 32 bits yet
default n
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
depends on PPC32 || PPC_BOOK3S_64
@ -447,14 +443,12 @@ config PPC_TRANSACTIONAL_MEM
depends on SMP
select ALTIVEC
select VSX
default n
---help---
Support user-mode Transactional Memory on POWERPC.
config LD_HEAD_STUB_CATCH
bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
depends on PPC64
default n
help
Very large kernels can cause linker branch stubs to be generated by
code in head_64.S, which moves the head text sections out of their
@ -557,7 +551,6 @@ config RELOCATABLE
config RELOCATABLE_TEST
bool "Test relocatable kernel"
depends on (PPC64 && RELOCATABLE)
default n
help
This runs the relocatable kernel at the address it was initially
loaded at, which tends to be non-zero and therefore test the
@ -769,7 +762,6 @@ config PPC_SUBPAGE_PROT
config PPC_COPRO_BASE
bool
default n
config SCHED_SMT
bool "SMT (Hyperthreading) scheduler support"
@ -892,7 +884,6 @@ config PPC_INDIRECT_PCI
bool
depends on PCI
default y if 40x || 44x
default n
config EISA
bool
@ -989,7 +980,6 @@ source "drivers/pcmcia/Kconfig"
config HAS_RAPIDIO
bool
default n
config RAPIDIO
tristate "RapidIO support"
@ -1012,7 +1002,6 @@ endmenu
config NONSTATIC_KERNEL
bool
default n
menu "Advanced setup"
depends on PPC32


@ -2,7 +2,6 @@
config PPC_DISABLE_WERROR
bool "Don't build arch/powerpc code with -Werror"
default n
help
This option tells the compiler NOT to build the code under
arch/powerpc with the -Werror flag (which means warnings
@ -56,7 +55,6 @@ config PPC_EMULATED_STATS
config CODE_PATCHING_SELFTEST
bool "Run self-tests of the code-patching code"
depends on DEBUG_KERNEL
default n
config JUMP_LABEL_FEATURE_CHECKS
bool "Enable use of jump label for cpu/mmu_has_feature()"
@ -70,7 +68,6 @@ config JUMP_LABEL_FEATURE_CHECKS
config JUMP_LABEL_FEATURE_CHECK_DEBUG
bool "Do extra check on feature fixup calls"
depends on DEBUG_KERNEL && JUMP_LABEL_FEATURE_CHECKS
default n
help
This tries to catch incorrect usage of cpu_has_feature() and
mmu_has_feature() in the code.
@ -80,16 +77,13 @@ config JUMP_LABEL_FEATURE_CHECK_DEBUG
config FTR_FIXUP_SELFTEST
bool "Run self-tests of the feature-fixup code"
depends on DEBUG_KERNEL
default n
config MSI_BITMAP_SELFTEST
bool "Run self-tests of the MSI bitmap code"
depends on DEBUG_KERNEL
default n
config PPC_IRQ_SOFT_MASK_DEBUG
bool "Include extra checks for powerpc irq soft masking"
default n
config XMON
bool "Include xmon kernel debugger"


@ -112,6 +112,13 @@ KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
KBUILD_ARFLAGS += --target=elf$(BITS)-$(GNUTARGET)
endif
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
ifdef CONFIG_PPC64
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
else
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
endif
LDFLAGS_vmlinux-y := -Bstatic
LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
LDFLAGS_vmlinux := $(LDFLAGS_vmlinux-y)
@ -160,8 +167,17 @@ else
CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
endif
ifdef CONFIG_FUNCTION_TRACER
CC_FLAGS_FTRACE := -pg
ifdef CONFIG_MPROFILE_KERNEL
CC_FLAGS_FTRACE := -pg -mprofile-kernel
CC_FLAGS_FTRACE += -mprofile-kernel
endif
# Work around gcc code-gen bugs with -pg / -fno-omit-frame-pointer in gcc <= 4.8
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=44199
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52828
ifneq ($(cc-name),clang)
CC_FLAGS_FTRACE += $(call cc-ifversion, -lt, 0409, -mno-sched-epilog)
endif
endif
CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
@ -229,16 +245,15 @@ ifdef CONFIG_6xx
KBUILD_CFLAGS += -mcpu=powerpc
endif
# Work around a gcc code-gen bug with -fno-omit-frame-pointer.
ifdef CONFIG_FUNCTION_TRACER
KBUILD_CFLAGS += -mno-sched-epilog
endif
cpu-as-$(CONFIG_4xx) += -Wa,-m405
cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec)
cpu-as-$(CONFIG_E200) += -Wa,-me200
cpu-as-$(CONFIG_E500) += -Wa,-me500
cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4
# When using '-many -mpower4' gas will first try and find a matching power4
# mnemonic and failing that it will allow any valid mnemonic that GAS knows
# about. GCC will pass -many to GAS when assembling, clang does not.
cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4 -Wa,-many
cpu-as-$(CONFIG_PPC_E500MC) += $(call as-option,-Wa$(comma)-me500mc)
KBUILD_AFLAGS += $(cpu-as-y)
@ -258,18 +273,8 @@ head-$(CONFIG_PPC_FPU) += arch/powerpc/kernel/fpu.o
head-$(CONFIG_ALTIVEC) += arch/powerpc/kernel/vector.o
head-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += arch/powerpc/kernel/prom_init.o
core-y += arch/powerpc/kernel/ \
arch/powerpc/mm/ \
arch/powerpc/lib/ \
arch/powerpc/sysdev/ \
arch/powerpc/platforms/ \
arch/powerpc/math-emu/ \
arch/powerpc/crypto/ \
arch/powerpc/net/
core-$(CONFIG_XMON) += arch/powerpc/xmon/
core-$(CONFIG_KVM) += arch/powerpc/kvm/
core-$(CONFIG_PERF_EVENTS) += arch/powerpc/perf/
core-$(CONFIG_KEXEC_FILE) += arch/powerpc/purgatory/
# See arch/powerpc/Kbuild for content of core part of the kernel
core-y += arch/powerpc/
drivers-$(CONFIG_OPROFILE) += arch/powerpc/oprofile/
@ -397,40 +402,20 @@ archclean:
archprepare: checkbin
# Use the file '.tmp_gas_check' for binutils tests, as gas won't output
# to stdout and these checks are run even on install targets.
TOUT := .tmp_gas_check
ifdef CONFIG_STACKPROTECTOR
prepare: stack_protector_prepare
# Check gcc and binutils versions:
# - gcc-3.4 and binutils-2.14 are a fatal combination
# - Require gcc 4.0 or above on 64-bit
# - gcc-4.2.0 has issues compiling modules on 64-bit
stack_protector_prepare: prepare0
ifdef CONFIG_PPC64
$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
else
$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
endif
endif
# Check toolchain versions:
# - gcc-4.6 is the minimum kernel-wide version so nothing required.
checkbin:
@if test "$(cc-name)" != "clang" \
&& test "$(cc-version)" = "0304" ; then \
if ! /bin/echo mftb 5 | $(AS) -v -mppc -many -o $(TOUT) >/dev/null 2>&1 ; then \
echo -n '*** ${VERSION}.${PATCHLEVEL} kernels no longer build '; \
echo 'correctly with gcc-3.4 and your version of binutils.'; \
echo '*** Please upgrade your binutils or downgrade your gcc'; \
false; \
fi ; \
fi
@if test "$(cc-name)" != "clang" \
&& test "$(cc-version)" -lt "0400" \
&& test "x${CONFIG_PPC64}" = "xy" ; then \
echo -n "Sorry, GCC v4.0 or above is required to build " ; \
echo "the 64-bit powerpc kernel." ; \
false ; \
fi
@if test "$(cc-name)" != "clang" \
&& test "$(cc-fullversion)" = "040200" \
&& test "x${CONFIG_MODULES}${CONFIG_PPC64}" = "xyy" ; then \
echo -n '*** GCC-4.2.0 cannot compile the 64-bit powerpc ' ; \
echo 'kernel with modules enabled.' ; \
echo -n '*** Please use a different GCC version or ' ; \
echo 'disable kernel modules' ; \
false ; \
fi
@if test "x${CONFIG_CPU_LITTLE_ENDIAN}" = "xy" \
&& $(LD) --version | head -1 | grep ' 2\.24$$' >/dev/null ; then \
echo -n '*** binutils 2.24 miscompiles weak symbols ' ; \
@ -438,7 +423,3 @@ checkbin:
echo -n '*** Please use a different binutils version.' ; \
false ; \
fi
CLEAN_FILES += $(TOUT)
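
The stack-protector flags above make GCC read the canary via r13 (the
paca on 64-bit) or r2 (current on 32-bit) at the PACA_CANARY/TASK_CANARY
offset pulled from asm-offsets.h, rather than from a global
__stack_chk_guard. What the canary catches can be seen with a plain
user-space build; a deliberately broken sketch (generic C, not
powerpc-specific):

    /* Build: gcc -fstack-protector-strong demo.c && ./a.out
     * smash()'s epilogue finds the clobbered canary and calls
     * __stack_chk_fail(), aborting with "stack smashing detected". */
    #include <string.h>

    static void smash(const char *src)
    {
        char buf[8];

        strcpy(buf, src);   /* deliberately overflows buf */
    }

    int main(void)
    {
        smash("this string is much longer than eight bytes");
        return 0;
    }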


@ -44,4 +44,5 @@ fdt_sw.c
fdt_wip.c
libfdt.h
libfdt_internal.h
autoconf.h


@ -32,8 +32,8 @@ else
endif
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -Os -msoft-float -pipe \
-fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
-fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
-pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
-D$(compress-y)
ifdef CONFIG_PPC64_BOOT_WRAPPER
@ -197,9 +197,14 @@ $(obj)/empty.c:
$(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds : $(obj)/%: $(srctree)/$(src)/%.S
$(Q)cp $< $@
$(obj)/serial.c: $(obj)/autoconf.h
$(obj)/autoconf.h: $(obj)/%: $(objtree)/include/generated/%
$(Q)cp $< $@
clean-files := $(zlib-) $(zlibheader-) $(zliblinuxheader-) \
$(zlib-decomp-) $(libfdt) $(libfdtheader) \
empty.c zImage.coff.lds zImage.ps3.lds zImage.lds
autoconf.h empty.c zImage.coff.lds zImage.ps3.lds zImage.lds
quiet_cmd_bootcc = BOOTCC $@
cmd_bootcc = $(BOOTCC) -Wp,-MD,$(depfile) $(BOOTCFLAGS) -c -o $@ $<


@ -47,8 +47,10 @@ p_end: .long _end
p_pstack: .long _platform_stack_top
#endif
.weak _zimage_start
.globl _zimage_start
/* Clang appears to require the .weak directive to be after the symbol
* is defined. See https://bugs.llvm.org/show_bug.cgi?id=38921 */
.weak _zimage_start
_zimage_start:
.globl _zimage_start_lib
_zimage_start_lib:


@ -13,8 +13,6 @@
#include <libfdt.h>
#include "../include/asm/opal-api.h"
#ifdef CONFIG_PPC64_BOOT_WRAPPER
/* Global OPAL struct used by opal-call.S */
struct opal {
u64 base;
@ -101,9 +99,3 @@ int opal_console_init(void *devp, struct serial_console_data *scdp)
return 0;
}
#else
int opal_console_init(void *devp, struct serial_console_data *scdp)
{
return -1;
}
#endif /* __powerpc64__ */


@ -18,6 +18,7 @@
#include "stdio.h"
#include "io.h"
#include "ops.h"
#include "autoconf.h"
static int serial_open(void)
{


@ -262,3 +262,4 @@ CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
# CONFIG_CRYPTO_HW is not set
CONFIG_PRINTK_TIME=y


@ -112,3 +112,4 @@ CONFIG_PPC_EARLY_DEBUG=y
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_PCBC=m
# CONFIG_CRYPTO_HW is not set
CONFIG_PRINTK_TIME=y


@ -44,6 +44,9 @@ CONFIG_PPC_MEMTRACE=y
# CONFIG_PPC_PSERIES is not set
# CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_HZ_100=y
CONFIG_BINFMT_MISC=m
@ -350,3 +353,4 @@ CONFIG_VIRTUALIZATION=y
CONFIG_KVM_BOOK3S_64=m
CONFIG_KVM_BOOK3S_64_HV=m
CONFIG_VHOST_NET=m
CONFIG_PRINTK_TIME=y


@ -40,6 +40,9 @@ CONFIG_PS3_LPM=m
CONFIG_PPC_IBM_CELL_BLADE=y
CONFIG_RTAS_FLASH=m
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_PMAC64=y
CONFIG_HZ_100=y
CONFIG_BINFMT_MISC=m
@ -365,3 +368,4 @@ CONFIG_VIRTUALIZATION=y
CONFIG_KVM_BOOK3S_64=m
CONFIG_KVM_BOOK3S_64_HV=m
CONFIG_VHOST_NET=m
CONFIG_PRINTK_TIME=y


@ -171,3 +171,4 @@ CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_LZO=m
CONFIG_PRINTK_TIME=y


@ -325,3 +325,4 @@ CONFIG_VIRTUALIZATION=y
CONFIG_KVM_BOOK3S_64=m
CONFIG_KVM_BOOK3S_64_HV=m
CONFIG_VHOST_NET=m
CONFIG_PRINTK_TIME=y


@ -3,20 +3,17 @@ CONFIG_ALTIVEC=y
CONFIG_VSX=y
CONFIG_NR_CPUS=2048
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_KERNEL_XZ=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_CPU_ISOLATION is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=20
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_RD_GZIP is not set
# CONFIG_RD_BZIP2 is not set
@ -24,8 +21,14 @@ CONFIG_BLK_DEV_INITRD=y
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
# CONFIG_SGETMASK_SYSCALL is not set
# CONFIG_SYSFS_SYSCALL is not set
# CONFIG_SHMEM is not set
# CONFIG_AIO is not set
CONFIG_PERF_EVENTS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB_FREELIST_HARDENED=y
CONFIG_JUMP_LABEL=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_MODULES=y
@ -35,7 +38,9 @@ CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_PPC_VAS is not set
# CONFIG_PPC_PSERIES is not set
# CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
CONFIG_HZ_100=y
@ -48,8 +53,9 @@ CONFIG_NUMA=y
CONFIG_PPC_64K_PAGES=y
CONFIG_SCHED_SMT=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="console=tty0 console=hvc0 powersave=off"
CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
# CONFIG_SECCOMP is not set
# CONFIG_PPC_MEM_KEYS is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@ -60,7 +66,6 @@ CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DNS_RESOLVER=y
# CONFIG_WIRELESS is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
@ -73,8 +78,10 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_NVME=m
CONFIG_EEPROM_AT24=y
CONFIG_NVME_MULTIPATH=y
CONFIG_EEPROM_AT24=m
# CONFIG_CXL is not set
# CONFIG_OCXL is not set
CONFIG_BLK_DEV_SD=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
@ -85,7 +92,6 @@ CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_BE2ISCSI=m
CONFIG_SCSI_AACRAID=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
@ -102,7 +108,7 @@ CONFIG_SCSI_VIRTIO=m
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_ALUA=m
CONFIG_ATA=y
CONFIG_SATA_AHCI=y
CONFIG_SATA_AHCI=m
# CONFIG_ATA_SFF is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
@ -119,25 +125,72 @@ CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
# CONFIG_NET_VENDOR_ALACRITECH is not set
CONFIG_ACENIC=m
CONFIG_ACENIC_OMIT_TIGON_I=y
CONFIG_TIGON3=y
# CONFIG_NET_VENDOR_AMAZON is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_AQUANTIA is not set
# CONFIG_NET_VENDOR_ARC is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
CONFIG_TIGON3=m
CONFIG_BNX2X=m
CONFIG_CHELSIO_T1=y
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_CAVIUM is not set
CONFIG_CHELSIO_T1=m
# CONFIG_NET_VENDOR_CISCO is not set
# CONFIG_NET_VENDOR_CORTINA is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_BE2NET=m
CONFIG_S2IO=m
CONFIG_E100=m
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_HP is not set
# CONFIG_NET_VENDOR_HUAWEI is not set
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_S2IO=m
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_MLX4_EN=m
# CONFIG_MLX4_CORE_GEN2 is not set
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
# CONFIG_NET_VENDOR_MICREL is not set
CONFIG_MYRI10GE=m
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_NI is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_NET_PACKET_ENGINE is not set
CONFIG_QLGE=m
CONFIG_NETXEN_NIC=m
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
# CONFIG_NET_VENDOR_SAMSUNG is not set
# CONFIG_NET_VENDOR_SEEQ is not set
CONFIG_SFC=m
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_SOCIONEXT is not set
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
# CONFIG_NET_VENDOR_SYNOPSYS is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_NET_VENDOR_XILINX is not set
CONFIG_PHYLIB=y
# CONFIG_USB_NET_DRIVERS is not set
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
@ -149,39 +202,51 @@ CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_IPMI_HANDLER=y
CONFIG_IPMI_DEVICE_INTERFACE=y
CONFIG_IPMI_POWERNV=y
CONFIG_IPMI_WATCHDOG=y
CONFIG_HW_RANDOM=y
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS_I2C_NUVOTON=y
CONFIG_I2C=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=y
# CONFIG_I2C_HELPER_AUTO is not set
CONFIG_DRM=y
CONFIG_DRM_RADEON=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_OPAL=m
CONFIG_PPS=y
CONFIG_SENSORS_IBMPOWERNV=m
CONFIG_DRM=m
CONFIG_DRM_AST=m
CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_OF=y
CONFIG_FB_MATROX=y
CONFIG_FB_MATROX_MILLENIUM=y
CONFIG_FB_MATROX_MYSTIQUE=y
CONFIG_FB_MATROX_G=y
# CONFIG_LCD_CLASS_DEVICE is not set
# CONFIG_BACKLIGHT_GENERIC is not set
# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_HID_GENERIC=m
CONFIG_HID_A4TECH=y
CONFIG_HID_BELKIN=y
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
CONFIG_HID_CYPRESS=y
CONFIG_HID_EZKEY=y
CONFIG_HID_ITE=y
CONFIG_HID_KENSINGTON=y
CONFIG_HID_LOGITECH=y
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_USB_HIDDEV=y
CONFIG_USB=y
CONFIG_USB_MON=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB=m
CONFIG_USB_XHCI_HCD=m
CONFIG_USB_EHCI_HCD=m
# CONFIG_USB_EHCI_HCD_PPC_OF is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_STORAGE=m
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_OPAL=m
CONFIG_RTC_DRV_GENERIC=m
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI=m
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
@ -195,10 +260,9 @@ CONFIG_UDF_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
@ -207,26 +271,24 @@ CONFIG_NLS_UTF8=y
CONFIG_CRC16=y
CONFIG_CRC_ITU_T=y
CONFIG_LIBCRC32C=y
# CONFIG_XZ_DEC_X86 is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_WQ_WATCHDOG=y
CONFIG_SCHEDSTATS=y
# CONFIG_SCHED_DEBUG is not set
# CONFIG_FTRACE is not set
# CONFIG_RUNTIME_TESTING_MENU is not set
CONFIG_XMON=y
CONFIG_XMON_DEFAULT=y
CONFIG_SECURITY=y
CONFIG_IMA=y
CONFIG_EVM=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_CRYPTO_ECHAINIV is not set
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_CMAC=y
CONFIG_CRYPTO_MD4=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_HW is not set


@ -15,8 +15,10 @@ struct cpu_accounting_data {
/* Accumulated cputime values to flush on ticks*/
unsigned long utime;
unsigned long stime;
#ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
unsigned long utime_scaled;
unsigned long stime_scaled;
#endif
unsigned long gtime;
unsigned long hardirq_time;
unsigned long softirq_time;
@ -25,8 +27,10 @@ struct cpu_accounting_data {
/* Internal counters */
unsigned long starttime; /* TB value snapshot */
unsigned long starttime_user; /* TB value on exit to usermode */
#ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
unsigned long startspurr; /* SPURR value snapshot */
unsigned long utime_sspurr; /* ->user_time when ->startspurr set */
#endif
};
#endif


@ -63,7 +63,6 @@ void program_check_exception(struct pt_regs *regs);
void alignment_exception(struct pt_regs *regs);
void slb_miss_bad_addr(struct pt_regs *regs);
void StackOverflow(struct pt_regs *regs);
void nonrecoverable_exception(struct pt_regs *regs);
void kernel_fp_unavailable_exception(struct pt_regs *regs);
void altivec_unavailable_exception(struct pt_regs *regs);
void vsx_unavailable_exception(struct pt_regs *regs);
@ -78,6 +77,8 @@ void kernel_bad_stack(struct pt_regs *regs);
void system_reset_exception(struct pt_regs *regs);
void machine_check_exception(struct pt_regs *regs);
void emulation_assist_interrupt(struct pt_regs *regs);
long do_slb_fault(struct pt_regs *regs, unsigned long ea);
void do_bad_slb_fault(struct pt_regs *regs, unsigned long ea, long err);
/* signals, syscalls and interrupts */
long sys_swapcontext(struct ucontext __user *old_ctx,


@ -8,7 +8,97 @@
#include <asm/book3s/32/hash.h>
/* And here we include common definitions */
#include <asm/pte-common.h>
#define _PAGE_KERNEL_RO 0
#define _PAGE_KERNEL_ROX 0
#define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW)
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW)
#define _PAGE_HPTEFLAGS _PAGE_HASHPTE
#ifndef __ASSEMBLY__
static inline bool pte_user(pte_t pte)
{
return pte_val(pte) & _PAGE_USER;
}
#endif /* __ASSEMBLY__ */
/*
* Location of the PFN in the PTE. Most 32-bit platforms use the same
* as _PAGE_SHIFT here (ie, naturally aligned).
* Platforms that differ just pre-define the value, so we don't override it here.
*/
#define PTE_RPN_SHIFT (PAGE_SHIFT)
/*
* The mask covered by the RPN must be a ULL on 32-bit platforms with
* 64-bit PTEs.
*/
#ifdef CONFIG_PTE_64BIT
#define PTE_RPN_MASK (~((1ULL << PTE_RPN_SHIFT) - 1))
#else
#define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1))
#endif
/*
* _PAGE_CHG_MASK masks of bits that are to be preserved across
* pgprot changes.
*/
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HASHPTE | _PAGE_DIRTY | \
_PAGE_ACCESSED | _PAGE_SPECIAL)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
/*
* Permission masks used to generate the __P and __S table.
*
* Note:__pgprot is defined in arch/powerpc/include/asm/page.h
*
* Write permissions imply read permissions for now.
*/
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER)
/* Permission masks used for kernel mappings */
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | _PAGE_NO_CACHE)
#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
_PAGE_NO_CACHE | _PAGE_GUARDED)
#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
/*
* Protection used for kernel text. We want the debuggers to be able to
* set breakpoints anywhere, so don't write protect the kernel text
* on platforms where such control is possible.
*/
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
#endif
/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
/* Advertise special mapping type for AGP */
#define PAGE_AGP (PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
#define PTE_INDEX_SIZE PTE_SHIFT
#define PMD_INDEX_SIZE 0
@ -219,7 +309,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
pte_update(ptep, (_PAGE_RW | _PAGE_HWWRITE), _PAGE_RO);
pte_update(ptep, _PAGE_RW, 0);
}
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
@ -234,10 +324,9 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
int psize)
{
unsigned long set = pte_val(entry) &
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
unsigned long clr = ~pte_val(entry) & _PAGE_RO;
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW);
pte_update(ptep, clr, set);
pte_update(ptep, 0, set);
flush_tlb_page(vma, address);
}
@ -292,7 +381,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 3 })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 3 })
int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte) { return !!(pte_val(pte) & _PAGE_RW);}
@ -301,13 +390,28 @@ static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY);
static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); }
static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
static inline bool pte_exec(pte_t pte) { return true; }
static inline int pte_present(pte_t pte)
{
return pte_val(pte) & _PAGE_PRESENT;
}
static inline bool pte_hw_valid(pte_t pte)
{
return pte_val(pte) & _PAGE_PRESENT;
}
static inline bool pte_hashpte(pte_t pte)
{
return !!(pte_val(pte) & _PAGE_HASHPTE);
}
static inline bool pte_ci(pte_t pte)
{
return !!(pte_val(pte) & _PAGE_NO_CACHE);
}
/*
* We only find page table entry in the last level
* Hence no need for other accessors
@ -315,17 +419,14 @@ static inline int pte_present(pte_t pte)
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
unsigned long pteval = pte_val(pte);
/*
* A read-only access is controlled by _PAGE_USER bit.
* We have _PAGE_READ set for WRITE and EXECUTE
*/
unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
return false;
if (write)
need_pte_bits |= _PAGE_WRITE;
if ((pteval & need_pte_bits) != need_pte_bits)
if (write && !pte_write(pte))
return false;
return true;
@ -354,6 +455,11 @@ static inline pte_t pte_wrprotect(pte_t pte)
return __pte(pte_val(pte) & ~_PAGE_RW);
}
static inline pte_t pte_exprotect(pte_t pte)
{
return pte;
}
static inline pte_t pte_mkclean(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_DIRTY);
@ -364,6 +470,16 @@ static inline pte_t pte_mkold(pte_t pte)
return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}
static inline pte_t pte_mkexec(pte_t pte)
{
return pte;
}
static inline pte_t pte_mkpte(pte_t pte)
{
return pte;
}
static inline pte_t pte_mkwrite(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_RW);
@ -389,6 +505,16 @@ static inline pte_t pte_mkhuge(pte_t pte)
return pte;
}
static inline pte_t pte_mkprivileged(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_USER);
}
static inline pte_t pte_mkuser(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_USER);
}
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));


@ -66,7 +66,7 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
* if it is not a pte and have hugepd shift mask
* set, then it is a hugepd directory pointer
*/
if (!(hpdval & _PAGE_PTE) &&
if (!(hpdval & _PAGE_PTE) && (hpdval & _PAGE_PRESENT) &&
((hpdval & HUGEPD_SHIFT_MASK) != 0))
return true;
return false;


@ -18,6 +18,11 @@
#include <asm/book3s/64/hash-4k.h>
#endif
/* Bits to set in a PMD/PUD/PGD entry valid bit*/
#define HASH_PMD_VAL_BITS (0x8000000000000000UL)
#define HASH_PUD_VAL_BITS (0x8000000000000000UL)
#define HASH_PGD_VAL_BITS (0x8000000000000000UL)
/*
* Size of EA range mapped by our pagetables.
*/
@ -196,8 +201,7 @@ static inline void hpte_do_hugepage_flush(struct mm_struct *mm,
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
extern int hash__map_kernel_page(unsigned long ea, unsigned long pa,
unsigned long flags);
int hash__map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot);
extern int __meminit hash__vmemmap_create_mapping(unsigned long start,
unsigned long page_size,
unsigned long phys);


@ -39,4 +39,7 @@ static inline bool gigantic_page_supported(void)
}
#endif
/* hugepd entry valid bit */
#define HUGEPD_VAL_BITS (0x8000000000000000UL)
#endif


@ -30,7 +30,7 @@
* SLB
*/
#define SLB_NUM_BOLTED 3
#define SLB_NUM_BOLTED 2
#define SLB_CACHE_ENTRIES 8
#define SLB_MIN_SIZE 32
@ -499,6 +499,8 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
extern void pseries_add_gpage(u64 addr, u64 page_size, unsigned long number_of_pages);
extern void demote_segment_4k(struct mm_struct *mm, unsigned long addr);
extern void hash__setup_new_exec(void);
#ifdef CONFIG_PPC_PSERIES
void hpte_init_pseries(void);
#else
@ -507,11 +509,18 @@ static inline void hpte_init_pseries(void) { }
extern void hpte_init_native(void);
struct slb_entry {
u64 esid;
u64 vsid;
};
extern void slb_initialize(void);
extern void slb_flush_and_rebolt(void);
void slb_flush_and_restore_bolted(void);
void slb_flush_all_realmode(void);
void __slb_restore_bolted_realmode(void);
void slb_restore_bolted_realmode(void);
void slb_save_contents(struct slb_entry *slb_ptr);
void slb_dump_contents(struct slb_entry *slb_ptr);
extern void slb_vmalloc_update(void);
extern void slb_set_size(u16 size);
@ -524,13 +533,9 @@ extern void slb_set_size(u16 size);
* from mmu context id and effective segment id of the address.
*
* For user processes max context id is limited to MAX_USER_CONTEXT.
* For kernel space, we use context ids 1-4 to map addresses as below:
* NOTE: each context only support 64TB now.
* 0x00001 - [ 0xc000000000000000 - 0xc0003fffffffffff ]
* 0x00002 - [ 0xd000000000000000 - 0xd0003fffffffffff ]
* 0x00003 - [ 0xe000000000000000 - 0xe0003fffffffffff ]
* 0x00004 - [ 0xf000000000000000 - 0xf0003fffffffffff ]
* more details in get_user_context
*
* For kernel space get_kernel_context
*
* The proto-VSIDs are then scrambled into real VSIDs with the
* multiplicative hash:
@ -570,6 +575,21 @@ extern void slb_set_size(u16 size);
#define ESID_BITS_MASK ((1 << ESID_BITS) - 1)
#define ESID_BITS_1T_MASK ((1 << ESID_BITS_1T) - 1)
/*
* Now certain configs support MAX_PHYSMEM of more than 512TB. Hence we will
* need more than one context for the linear mapping of the kernel.
* For vmalloc and memmap, we use just one context with 512TB. With a 64 byte
* struct page size, we need only 32TB of memmap for 2PB (51 bits, MAX_PHYSMEM_BITS).
*/
#if (MAX_PHYSMEM_BITS > MAX_EA_BITS_PER_CONTEXT)
#define MAX_KERNEL_CTX_CNT (1UL << (MAX_PHYSMEM_BITS - MAX_EA_BITS_PER_CONTEXT))
#else
#define MAX_KERNEL_CTX_CNT 1
#endif
#define MAX_VMALLOC_CTX_CNT 1
#define MAX_MEMMAP_CTX_CNT 1
/*
* 256MB segment
* The proto-VSID space has 2^(CONTEXT_BITS + ESID_BITS) - 1 segments
@ -580,12 +600,13 @@ extern void slb_set_size(u16 size);
* We also need to avoid the last segment of the last context, because that
* would give a protovsid of 0x1fffffffff. That will result in a VSID 0
* because of the modulo operation in vsid scramble.
*
* We add one extra context to MIN_USER_CONTEXT so that we can map the kernel
* context easily. The +1 is to cover the unused 0xe region.
*/
#define MAX_USER_CONTEXT ((ASM_CONST(1) << CONTEXT_BITS) - 2)
#define MIN_USER_CONTEXT (5)
/* Would be nice to use KERNEL_REGION_ID here */
#define KERNEL_REGION_CONTEXT_OFFSET (0xc - 1)
#define MIN_USER_CONTEXT (MAX_KERNEL_CTX_CNT + MAX_VMALLOC_CTX_CNT + \
MAX_MEMMAP_CTX_CNT + 2)
/*
* For platforms that support only a 65-bit VA we limit the context bits
@ -745,6 +766,39 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
return vsid_scramble(protovsid, VSID_MULTIPLIER_1T, vsid_bits);
}
/*
* For kernel space, we use the context ids below. The range is 512TB per
* context.
*
* 0x00001 - [ 0xc000000000000000 - 0xc001ffffffffffff]
* 0x00002 - [ 0xc002000000000000 - 0xc003ffffffffffff]
* 0x00003 - [ 0xc004000000000000 - 0xc005ffffffffffff]
* 0x00004 - [ 0xc006000000000000 - 0xc007ffffffffffff]
* 0x00005 - [ 0xd000000000000000 - 0xd001ffffffffffff ]
* 0x00006 - Not used - Can map 0xe000000000000000 range.
* 0x00007 - [ 0xf000000000000000 - 0xf001ffffffffffff ]
*
* So we can compute the context from the region (top nibble) by
* subtracting 11, or 0xc - 1.
*/
static inline unsigned long get_kernel_context(unsigned long ea)
{
unsigned long region_id = REGION_ID(ea);
unsigned long ctx;
/*
* For linear mapping we do support multiple context
*/
if (region_id == KERNEL_REGION_ID) {
/*
* We already verified ea to be not beyond the addr limit.
*/
ctx = 1 + ((ea & ~REGION_MASK) >> MAX_EA_BITS_PER_CONTEXT);
} else
ctx = (region_id - 0xc) + MAX_KERNEL_CTX_CNT;
return ctx;
}
/*
* This is only valid for addresses >= PAGE_OFFSET
*/
@ -755,20 +809,7 @@ static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
if (!is_kernel_addr(ea))
return 0;
/*
* For kernel space, we use context ids 1-4 to map the address space as
* below:
*
* 0x00001 - [ 0xc000000000000000 - 0xc0003fffffffffff ]
* 0x00002 - [ 0xd000000000000000 - 0xd0003fffffffffff ]
* 0x00003 - [ 0xe000000000000000 - 0xe0003fffffffffff ]
* 0x00004 - [ 0xf000000000000000 - 0xf0003fffffffffff ]
*
* So we can compute the context from the region (top nibble) by
* subtracting 11, or 0xc - 1.
*/
context = (ea >> 60) - KERNEL_REGION_CONTEXT_OFFSET;
context = get_kernel_context(ea);
return get_vsid(context, ea, ssize);
}
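
A worked instance of the context limits above; MAX_EA_BITS_PER_CONTEXT = 49
(512TB per context) and MAX_PHYSMEM_BITS = 51 (2PB) are assumed values for
illustration:

    #include <stdio.h>

    #define MAX_EA_BITS_PER_CONTEXT 49  /* 512TB per context (assumed) */
    #define MAX_PHYSMEM_BITS        51  /* 2PB of physical memory (assumed) */
    #define MAX_KERNEL_CTX_CNT \
        (1UL << (MAX_PHYSMEM_BITS - MAX_EA_BITS_PER_CONTEXT))
    #define MAX_VMALLOC_CTX_CNT     1
    #define MAX_MEMMAP_CTX_CNT      1

    int main(void)
    {
        /* 2PB of linear map at 512TB per context needs 4 contexts */
        printf("linear map contexts: %lu\n", MAX_KERNEL_CTX_CNT);
        /* 4 + 1 + 1 + 2 = 8: the first context id user processes get */
        printf("MIN_USER_CONTEXT:    %lu\n",
               MAX_KERNEL_CTX_CNT + MAX_VMALLOC_CTX_CNT +
               MAX_MEMMAP_CTX_CNT + 2);
        return 0;
    }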


@ -208,7 +208,7 @@ extern void radix_init_pseries(void);
static inline void radix_init_pseries(void) { };
#endif
static inline int get_ea_context(mm_context_t *ctx, unsigned long ea)
static inline int get_user_context(mm_context_t *ctx, unsigned long ea)
{
int index = ea >> MAX_EA_BITS_PER_CONTEXT;
@ -223,7 +223,7 @@ static inline int get_ea_context(mm_context_t *ctx, unsigned long ea)
static inline unsigned long get_user_vsid(mm_context_t *ctx,
unsigned long ea, int ssize)
{
unsigned long context = get_ea_context(ctx, ea);
unsigned long context = get_user_context(ctx, ea);
return get_vsid(context, ea, ssize);
}


@ -10,6 +10,9 @@
*
* Defined in such a way that we can optimize away code block at build time
* if CONFIG_HUGETLB_PAGE=n.
*
* Returns true for pmd migration entries, THP, devmap and hugetlb,
* but is compile-time dependent on CONFIG_HUGETLB_PAGE.
*/
static inline int pmd_huge(pmd_t pmd)
{


@ -14,10 +14,6 @@
*/
#define _PAGE_BIT_SWAP_TYPE 0
#define _PAGE_NA 0
#define _PAGE_RO 0
#define _PAGE_USER 0
#define _PAGE_EXEC 0x00001 /* execute permission */
#define _PAGE_WRITE 0x00002 /* write access allowed */
#define _PAGE_READ 0x00004 /* read access allowed */
@ -122,10 +118,6 @@
#define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_READ)
#define _PAGE_KERNEL_RWX (_PAGE_PRIVILEGED | _PAGE_DIRTY | \
_PAGE_RW | _PAGE_EXEC)
/*
* No page size encoding in the linux PTE
*/
#define _PAGE_PSIZE 0
/*
* _PAGE_CHG_MASK masks of bits that are to be preserved across
* pgprot changes
@ -136,20 +128,13 @@
#define H_PTE_PKEY (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
/*
* Mask of bits returned by pte_pgprot()
*/
#define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
_PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
_PAGE_SOFT_DIRTY | H_PTE_PKEY)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#define _PAGE_BASE (_PAGE_BASE_NC)
/* Permission masks used to generate the __P and __S table,
@ -159,8 +144,6 @@
* Write permissions imply read permissions for now (we could make write-only
* pages on BookE but we don't bother for now). Execute permission control is
* possible on platforms that define _PAGE_EXEC
*
* Note due to the way vm flags are laid out, the bits are XWR
*/
#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_PRIVILEGED)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_RW)
@ -170,24 +153,6 @@
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_READ)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
#define __P011 PAGE_COPY
#define __P100 PAGE_READONLY_X
#define __P101 PAGE_READONLY_X
#define __P110 PAGE_COPY_X
#define __P111 PAGE_COPY_X
#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_READONLY_X
#define __S101 PAGE_READONLY_X
#define __S110 PAGE_SHARED_X
#define __S111 PAGE_SHARED_X
/* Permission masks used for kernel mappings */
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
@ -519,7 +484,11 @@ static inline int pte_special(pte_t pte)
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_SPECIAL));
}
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
static inline bool pte_exec(pte_t pte)
{
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_EXEC));
}
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline bool pte_soft_dirty(pte_t pte)
@ -529,12 +498,12 @@ static inline bool pte_soft_dirty(pte_t pte)
static inline pte_t pte_mksoft_dirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SOFT_DIRTY));
}
static inline pte_t pte_clear_soft_dirty(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_SOFT_DIRTY));
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
@ -555,7 +524,7 @@ static inline pte_t pte_mk_savedwrite(pte_t pte)
*/
VM_BUG_ON((pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT | _PAGE_RWX | _PAGE_PRIVILEGED)) !=
cpu_to_be64(_PAGE_PRESENT | _PAGE_PRIVILEGED));
return __pte(pte_val(pte) & ~_PAGE_PRIVILEGED);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_PRIVILEGED));
}
#define pte_clear_savedwrite pte_clear_savedwrite
@ -565,14 +534,14 @@ static inline pte_t pte_clear_savedwrite(pte_t pte)
* Used by KSM subsystem to make a protnone pte readonly.
*/
VM_BUG_ON(!pte_protnone(pte));
return __pte(pte_val(pte) | _PAGE_PRIVILEGED);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PRIVILEGED));
}
#else
#define pte_clear_savedwrite pte_clear_savedwrite
static inline pte_t pte_clear_savedwrite(pte_t pte)
{
VM_WARN_ON(1);
return __pte(pte_val(pte) & ~_PAGE_WRITE);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_WRITE));
}
#endif /* CONFIG_NUMA_BALANCING */
@ -587,6 +556,11 @@ static inline int pte_present(pte_t pte)
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID));
}
static inline bool pte_hw_valid(pte_t pte)
{
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT));
}
#ifdef CONFIG_PPC_MEM_KEYS
extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
#else
@ -596,25 +570,22 @@ static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
}
#endif /* CONFIG_PPC_MEM_KEYS */
static inline bool pte_user(pte_t pte)
{
return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
}
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
unsigned long pteval = pte_val(pte);
/* Also check for pte_user */
unsigned long clear_pte_bits = _PAGE_PRIVILEGED;
/*
* _PAGE_READ is needed for any access and will be
* cleared for PROT_NONE
*/
unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_READ;
if (write)
need_pte_bits |= _PAGE_WRITE;
if ((pteval & need_pte_bits) != need_pte_bits)
if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
return false;
if ((pteval & clear_pte_bits) == clear_pte_bits)
if (write && !pte_write(pte))
return false;
return arch_pte_access_permitted(pte_val(pte), write, 0);
@ -643,17 +614,32 @@ static inline pte_t pte_wrprotect(pte_t pte)
{
if (unlikely(pte_savedwrite(pte)))
return pte_clear_savedwrite(pte);
return __pte(pte_val(pte) & ~_PAGE_WRITE);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_WRITE));
}
static inline pte_t pte_exprotect(pte_t pte)
{
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_EXEC));
}
static inline pte_t pte_mkclean(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_DIRTY);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_DIRTY));
}
static inline pte_t pte_mkold(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_ACCESSED));
}
static inline pte_t pte_mkexec(pte_t pte)
{
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_EXEC));
}
static inline pte_t pte_mkpte(pte_t pte)
{
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PTE));
}
static inline pte_t pte_mkwrite(pte_t pte)
@ -661,22 +647,22 @@ static inline pte_t pte_mkwrite(pte_t pte)
/*
* write implies read, hence set both
*/
return __pte(pte_val(pte) | _PAGE_RW);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_RW));
}
static inline pte_t pte_mkdirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_DIRTY | _PAGE_SOFT_DIRTY));
}
static inline pte_t pte_mkyoung(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_ACCESSED);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_ACCESSED));
}
static inline pte_t pte_mkspecial(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SPECIAL);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SPECIAL));
}
static inline pte_t pte_mkhuge(pte_t pte)
@ -686,7 +672,17 @@ static inline pte_t pte_mkhuge(pte_t pte)
static inline pte_t pte_mkdevmap(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SPECIAL|_PAGE_DEVMAP);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SPECIAL | _PAGE_DEVMAP));
}
static inline pte_t pte_mkprivileged(pte_t pte)
{
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PRIVILEGED));
}
static inline pte_t pte_mkuser(pte_t pte)
{
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_PRIVILEGED));
}
/*
@ -705,12 +701,8 @@ static inline int pte_devmap(pte_t pte)
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
/* FIXME!! check whether this need to be a conditional */
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
static inline bool pte_user(pte_t pte)
{
return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
return __pte_raw((pte_raw(pte) & cpu_to_be64(_PAGE_CHG_MASK)) |
cpu_to_be64(pgprot_val(newprot)));
}
/* Encode and de-code a swap entry */
@ -741,6 +733,8 @@ static inline bool pte_user(pte_t pte)
*/
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
#define __swp_entry_to_pte(x) __pte((x).val | _PAGE_PTE)
#define __pmd_to_swp_entry(pmd) (__pte_to_swp_entry(pmd_pte(pmd)))
#define __swp_entry_to_pmd(x) (pte_pmd(__swp_entry_to_pte(x)))
#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SWP_SOFT_DIRTY (1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
@ -751,7 +745,7 @@ static inline bool pte_user(pte_t pte)
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SWP_SOFT_DIRTY));
}
static inline bool pte_swp_soft_dirty(pte_t pte)
@ -761,7 +755,7 @@ static inline bool pte_swp_soft_dirty(pte_t pte)
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
return __pte_raw(pte_raw(pte) & cpu_to_be64(~_PAGE_SWP_SOFT_DIRTY));
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
@ -850,10 +844,10 @@ static inline pgprot_t pgprot_writecombine(pgprot_t prot)
*/
static inline bool pte_ci(pte_t pte)
{
unsigned long pte_v = pte_val(pte);
__be64 pte_v = pte_raw(pte);
if (((pte_v & _PAGE_CACHE_CTL) == _PAGE_TOLERANT) ||
((pte_v & _PAGE_CACHE_CTL) == _PAGE_NON_IDEMPOTENT))
if (((pte_v & cpu_to_be64(_PAGE_CACHE_CTL)) == cpu_to_be64(_PAGE_TOLERANT)) ||
((pte_v & cpu_to_be64(_PAGE_CACHE_CTL)) == cpu_to_be64(_PAGE_NON_IDEMPOTENT)))
return true;
return false;
}
@ -875,8 +869,16 @@ static inline int pmd_none(pmd_t pmd)
static inline int pmd_present(pmd_t pmd)
{
/*
* A pmd is considered present if _PAGE_PRESENT is set.
* We also need to consider a pmd present when it is marked
* invalid during a split. Hence we look for _PAGE_INVALID
* if we find _PAGE_PRESENT cleared.
*/
if (pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID))
return true;
return !pmd_none(pmd);
return false;
}
static inline int pmd_bad(pmd_t pmd)
@ -903,7 +905,7 @@ static inline int pud_none(pud_t pud)
static inline int pud_present(pud_t pud)
{
return !pud_none(pud);
return (pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
}
extern struct page *pud_page(pud_t pud);
@@ -950,7 +952,7 @@ static inline int pgd_none(pgd_t pgd)
static inline int pgd_present(pgd_t pgd)
{
return !pgd_none(pgd);
return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
}
static inline pte_t pgd_pte(pgd_t pgd)
@@ -1020,17 +1022,16 @@ extern struct page *pgd_page(pgd_t pgd);
#define pgd_ERROR(e) \
pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
static inline int map_kernel_page(unsigned long ea, unsigned long pa,
unsigned long flags)
static inline int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
{
if (radix_enabled()) {
#if defined(CONFIG_PPC_RADIX_MMU) && defined(DEBUG_VM)
unsigned long page_size = 1 << mmu_psize_defs[mmu_io_psize].shift;
WARN((page_size != PAGE_SIZE), "I/O page size != PAGE_SIZE");
#endif
return radix__map_kernel_page(ea, pa, __pgprot(flags), PAGE_SIZE);
return radix__map_kernel_page(ea, pa, prot, PAGE_SIZE);
}
return hash__map_kernel_page(ea, pa, flags);
return hash__map_kernel_page(ea, pa, prot);
}
static inline int __meminit vmemmap_create_mapping(unsigned long start,
@@ -1082,6 +1083,12 @@ static inline pte_t *pmdp_ptep(pmd_t *pmd)
#define pmd_soft_dirty(pmd) pte_soft_dirty(pmd_pte(pmd))
#define pmd_mksoft_dirty(pmd) pte_pmd(pte_mksoft_dirty(pmd_pte(pmd)))
#define pmd_clear_soft_dirty(pmd) pte_pmd(pte_clear_soft_dirty(pmd_pte(pmd)))
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
#define pmd_swp_mksoft_dirty(pmd) pte_pmd(pte_swp_mksoft_dirty(pmd_pte(pmd)))
#define pmd_swp_soft_dirty(pmd) pte_swp_soft_dirty(pmd_pte(pmd))
#define pmd_swp_clear_soft_dirty(pmd) pte_pmd(pte_swp_clear_soft_dirty(pmd_pte(pmd)))
#endif
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
#ifdef CONFIG_NUMA_BALANCING
@@ -1127,6 +1134,10 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
}
/*
* Returns true for pmd migration entries, THP, devmap and hugetlb,
* but is compile-time dependent on the THP config.
*/
static inline int pmd_large(pmd_t pmd)
{
return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
@@ -1161,8 +1172,22 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
pmd_hugepage_update(mm, addr, pmdp, 0, _PAGE_PRIVILEGED);
}
/*
* Only returns true for a THP; false for a pmd migration entry.
* We also need to return true when we come across a pmd that is
* in the middle of a THP split. While splitting a THP, we mark the
* pmd invalid (pmdp_invalidate()) before we set it to the pte page
* address, and a pmd_trans_huge() check against a pmd entry during
* that time should return true.
* We should not call this on a hugetlb entry; check for a HugeTLB
* entry using vma->vm_flags instead.
* The page table walk rules are explained in Documentation/vm/transhuge.rst.
*/
static inline int pmd_trans_huge(pmd_t pmd)
{
if (!pmd_present(pmd))
return false;
if (radix_enabled())
return radix__pmd_trans_huge(pmd);
return hash__pmd_trans_huge(pmd);
}


@@ -23,11 +23,13 @@
extern int threads_per_core;
extern int threads_per_subcore;
extern int threads_shift;
extern bool has_big_cores;
extern cpumask_t threads_core_mask;
#else
#define threads_per_core 1
#define threads_per_subcore 1
#define threads_shift 0
#define has_big_cores 0
#define threads_core_mask (*get_cpu_mask(0))
#endif


@@ -61,7 +61,6 @@ static inline void arch_vtime_task_switch(struct task_struct *prev)
struct cpu_accounting_data *acct0 = get_accounting(prev);
acct->starttime = acct0->starttime;
acct->startspurr = acct0->startspurr;
}
#endif


@@ -99,4 +99,9 @@ void __init walk_drmem_lmbs_early(unsigned long node,
void (*func)(struct drmem_lmb *, const __be32 **));
#endif
static inline void invalidate_lmb_associativity_index(struct drmem_lmb *lmb)
{
lmb->aa_index = 0xffffffff;
}
#endif /* _ASM_POWERPC_LMB_H */


@@ -43,7 +43,6 @@ struct pci_dn;
#define EEH_VALID_PE_ZERO 0x10 /* PE#0 is valid */
#define EEH_ENABLE_IO_FOR_LOG 0x20 /* Enable IO for log */
#define EEH_EARLY_DUMP_LOG 0x40 /* Dump log immediately */
#define EEH_POSTPONED_PROBE 0x80 /* Powernv may postpone device probe */
/*
* Delay for PE reset, all in ms
@@ -99,13 +98,13 @@ struct eeh_pe {
atomic_t pass_dev_cnt; /* Count of passed through devs */
struct eeh_pe *parent; /* Parent PE */
void *data; /* PE auxiliary data */
struct list_head child_list; /* Link PE to the child list */
struct list_head edevs; /* Link list of EEH devices */
struct list_head child; /* Child PEs */
struct list_head child_list; /* List of PEs below this PE */
struct list_head child; /* Memb. child_list/eeh_phb_pe */
struct list_head edevs; /* List of eeh_dev in this PE */
};
#define eeh_pe_for_each_dev(pe, edev, tmp) \
list_for_each_entry_safe(edev, tmp, &pe->edevs, list)
list_for_each_entry_safe(edev, tmp, &pe->edevs, entry)
#define eeh_for_each_pe(root, pe) \
for (pe = root; pe; pe = eeh_pe_next(pe, root))
@@ -142,13 +141,12 @@ struct eeh_dev {
int aer_cap; /* Saved AER capability */
int af_cap; /* Saved AF capability */
struct eeh_pe *pe; /* Associated PE */
struct list_head list; /* Form link list in the PE */
struct list_head rmv_list; /* Record the removed edevs */
struct list_head entry; /* Membership in eeh_pe.edevs */
struct list_head rmv_entry; /* Membership in rmv_list */
struct pci_dn *pdn; /* Associated PCI device node */
struct pci_dev *pdev; /* Associated PCI device */
bool in_error; /* Error flag for edev */
struct pci_dev *physfn; /* Associated SRIOV PF */
struct pci_bus *bus; /* PCI bus for partial hotplug */
};
static inline struct pci_dn *eeh_dev_to_pdn(struct eeh_dev *edev)
@@ -207,9 +205,8 @@ struct eeh_ops {
void* (*probe)(struct pci_dn *pdn, void *data);
int (*set_option)(struct eeh_pe *pe, int option);
int (*get_pe_addr)(struct eeh_pe *pe);
int (*get_state)(struct eeh_pe *pe, int *state);
int (*get_state)(struct eeh_pe *pe, int *delay);
int (*reset)(struct eeh_pe *pe, int option);
int (*wait_state)(struct eeh_pe *pe, int max_wait);
int (*get_log)(struct eeh_pe *pe, int severity, char *drv_log, unsigned long len);
int (*configure_bridge)(struct eeh_pe *pe);
int (*err_inject)(struct eeh_pe *pe, int type, int func,
@@ -243,11 +240,7 @@ static inline bool eeh_has_flag(int flag)
static inline bool eeh_enabled(void)
{
if (eeh_has_flag(EEH_FORCE_DISABLED) ||
!eeh_has_flag(EEH_ENABLED))
return false;
return true;
return eeh_has_flag(EEH_ENABLED) && !eeh_has_flag(EEH_FORCE_DISABLED);
}
static inline void eeh_serialize_lock(unsigned long *flags)
@@ -270,6 +263,7 @@ typedef void *(*eeh_edev_traverse_func)(struct eeh_dev *edev, void *flag);
typedef void *(*eeh_pe_traverse_func)(struct eeh_pe *pe, void *flag);
void eeh_set_pe_aux_size(int size);
int eeh_phb_pe_create(struct pci_controller *phb);
int eeh_wait_state(struct eeh_pe *pe, int max_wait);
struct eeh_pe *eeh_phb_pe_get(struct pci_controller *phb);
struct eeh_pe *eeh_pe_next(struct eeh_pe *pe, struct eeh_pe *root);
struct eeh_pe *eeh_pe_get(struct pci_controller *phb,


@@ -0,0 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#ifndef _ASM_ERROR_INJECTION_H
#define _ASM_ERROR_INJECTION_H
#include <linux/compiler.h>
#include <linux/linkage.h>
#include <asm/ptrace.h>
#include <asm-generic/error-injection.h>
void override_function_with_return(struct pt_regs *regs);
#endif /* _ASM_ERROR_INJECTION_H */
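
This new header wires powerpc into the generic function-error-injection framework. A hedged usage sketch follows (example_setup() is a made-up function for illustration; ALLOW_ERROR_INJECTION() comes from <linux/error-injection.h>): once a function is marked, tooling such as the bpf_override_return() helper can use override_function_with_return() to force an early return with an injected errno.

#include <linux/errno.h>
#include <linux/error-injection.h>

/* Hypothetical function, for illustration only. */
static int example_setup(void)
{
	return 0;
}

/*
 * Marking the function records it in the error-injection whitelist;
 * an injected ERRNO-class failure then makes it return that errno
 * without executing the body.
 */
ALLOW_ERROR_INJECTION(example_setup, ERRNO);
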


@@ -60,14 +60,6 @@
*/
#define MAX_MCE_DEPTH 4
/*
* EX_LR is only used in EXSLB and where it does not overlap with EX_DAR
* EX_CCR similarly with DSISR, but being 4 byte registers there is a hole
* in the save area so it's not necessary to overlap them. Could be used
* for future savings though if another 4 byte register was to be saved.
*/
#define EX_LR EX_DAR
/*
* EX_R3 is only used by the bad_stack handler. bad_stack reloads and
* saves DAR from SPRN_DAR, and EX_DAR is not used. So EX_R3 can overlap
@@ -236,11 +228,10 @@
* PPR save/restore macros used in exceptions_64s.S
* Used for P7 or later processors
*/
#define SAVE_PPR(area, ra, rb) \
#define SAVE_PPR(area, ra) \
BEGIN_FTR_SECTION_NESTED(940) \
ld ra,PACACURRENT(r13); \
ld rb,area+EX_PPR(r13); /* Read PPR from paca */ \
std rb,TASKTHREADPPR(ra); \
ld ra,area+EX_PPR(r13); /* Read PPR from paca */ \
std ra,_PPR(r1); \
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,940)
#define RESTORE_PPR_PACA(area, ra) \
@@ -508,7 +499,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
3: EXCEPTION_PROLOG_COMMON_1(); \
beq 4f; /* if from kernel mode */ \
ACCOUNT_CPU_USER_ENTRY(r13, r9, r10); \
SAVE_PPR(area, r9, r10); \
SAVE_PPR(area, r9); \
4: EXCEPTION_PROLOG_COMMON_2(area) \
EXCEPTION_PROLOG_COMMON_3(n) \
ACCOUNT_STOLEN_TIME


@@ -52,6 +52,8 @@
#define FW_FEATURE_PRRN ASM_CONST(0x0000000200000000)
#define FW_FEATURE_DRMEM_V2 ASM_CONST(0x0000000400000000)
#define FW_FEATURE_DRC_INFO ASM_CONST(0x0000000800000000)
#define FW_FEATURE_BLOCK_REMOVE ASM_CONST(0x0000001000000000)
#define FW_FEATURE_PAPR_SCM ASM_CONST(0x0000002000000000)
#ifndef __ASSEMBLY__
@@ -69,7 +71,8 @@ enum {
FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
FW_FEATURE_HPT_RESIZE | FW_FEATURE_DRMEM_V2 |
FW_FEATURE_DRC_INFO,
FW_FEATURE_DRC_INFO | FW_FEATURE_BLOCK_REMOVE |
FW_FEATURE_PAPR_SCM,
FW_FEATURE_PSERIES_ALWAYS = 0,
FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
FW_FEATURE_POWERNV_ALWAYS = 0,


@@ -72,7 +72,7 @@ enum fixed_addresses {
static inline void __set_fixmap(enum fixed_addresses idx,
phys_addr_t phys, pgprot_t flags)
{
map_kernel_page(fix_to_virt(idx), phys, pgprot_val(flags));
map_kernel_page(fix_to_virt(idx), phys, flags);
}
#endif /* !__ASSEMBLY__ */


@@ -278,6 +278,7 @@
#define H_COP 0x304
#define H_GET_MPP_X 0x314
#define H_SET_MODE 0x31C
#define H_BLOCK_REMOVE 0x328
#define H_CLEAR_HPT 0x358
#define H_REQUEST_VMC 0x360
#define H_RESIZE_HPT_PREPARE 0x36C
@@ -295,7 +296,15 @@
#define H_INT_ESB 0x3C8
#define H_INT_SYNC 0x3CC
#define H_INT_RESET 0x3D0
#define MAX_HCALL_OPCODE H_INT_RESET
#define H_SCM_READ_METADATA 0x3E4
#define H_SCM_WRITE_METADATA 0x3E8
#define H_SCM_BIND_MEM 0x3EC
#define H_SCM_UNBIND_MEM 0x3F0
#define H_SCM_QUERY_BLOCK_MEM_BINDING 0x3F4
#define H_SCM_QUERY_LOGICAL_MEM_BINDING 0x3F8
#define H_SCM_MEM_QUERY 0x3FC
#define H_SCM_BLOCK_CLEAR 0x400
#define MAX_HCALL_OPCODE H_SCM_BLOCK_CLEAR
/* H_VIOCTL functions */
#define H_GET_VIOA_DUMP_SIZE 0x01


@@ -3,6 +3,9 @@
#ifdef __KERNEL__
#define ARCH_HAS_IOREMAP_WC
#ifdef CONFIG_PPC32
#define ARCH_HAS_IOREMAP_WT
#endif
/*
* This program is free software; you can redistribute it and/or
@@ -108,25 +111,6 @@ extern bool isa_io_special;
#define IO_SET_SYNC_FLAG()
#endif
/* gcc 4.0 and older doesn't have 'Z' constraint */
#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ == 0)
#define DEF_MMIO_IN_X(name, size, insn) \
static inline u##size name(const volatile u##size __iomem *addr) \
{ \
u##size ret; \
__asm__ __volatile__("sync;"#insn" %0,0,%1;twi 0,%0,0;isync" \
: "=r" (ret) : "r" (addr), "m" (*addr) : "memory"); \
return ret; \
}
#define DEF_MMIO_OUT_X(name, size, insn) \
static inline void name(volatile u##size __iomem *addr, u##size val) \
{ \
__asm__ __volatile__("sync;"#insn" %1,0,%2" \
: "=m" (*addr) : "r" (val), "r" (addr) : "memory"); \
IO_SET_SYNC_FLAG(); \
}
#else /* newer gcc */
#define DEF_MMIO_IN_X(name, size, insn) \
static inline u##size name(const volatile u##size __iomem *addr) \
{ \
@@ -143,7 +127,6 @@ static inline void name(volatile u##size __iomem *addr, u##size val) \
: "=Z" (*addr) : "r" (val) : "memory"); \
IO_SET_SYNC_FLAG(); \
}
#endif
#define DEF_MMIO_IN_D(name, size, insn) \
static inline u##size name(const volatile u##size __iomem *addr) \
@@ -746,6 +729,10 @@ static inline void iosync(void)
*
* * ioremap_wc enables write combining
*
* * ioremap_wt enables write through
*
* * ioremap_coherent maps coherent cached memory
*
* * iounmap undoes such a mapping and can be hooked
*
* * __ioremap_at (and the pending __iounmap_at) are low level functions to
@@ -767,6 +754,8 @@ extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
extern void __iomem *ioremap_prot(phys_addr_t address, unsigned long size,
unsigned long flags);
extern void __iomem *ioremap_wc(phys_addr_t address, unsigned long size);
void __iomem *ioremap_wt(phys_addr_t address, unsigned long size);
void __iomem *ioremap_coherent(phys_addr_t address, unsigned long size);
#define ioremap_nocache(addr, size) ioremap((addr), (size))
#define ioremap_uc(addr, size) ioremap((addr), (size))
#define ioremap_cache(addr, size) \
@@ -777,12 +766,12 @@ extern void iounmap(volatile void __iomem *addr);
extern void __iomem *__ioremap(phys_addr_t, unsigned long size,
unsigned long flags);
extern void __iomem *__ioremap_caller(phys_addr_t, unsigned long size,
unsigned long flags, void *caller);
pgprot_t prot, void *caller);
extern void __iounmap(volatile void __iomem *addr);
extern void __iomem * __ioremap_at(phys_addr_t pa, void *ea,
unsigned long size, unsigned long flags);
unsigned long size, pgprot_t prot);
extern void __iounmap_at(void *ea, unsigned long size);
/*


@@ -26,9 +26,12 @@
#define BREAK_INSTR_SIZE 4
#define BUFMAX ((NUMREGBYTES * 2) + 512)
#define OUTBUFMAX ((NUMREGBYTES * 2) + 512)
#define BREAK_INSTR 0x7d821008 /* twge r2, r2 */
static inline void arch_kgdb_breakpoint(void)
{
asm(".long 0x7d821008"); /* twge r2, r2 */
asm(stringify_in_c(.long BREAK_INSTR));
}
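
The point of this change is that the opcode is now defined once and stringified into the inline asm, so the BREAK_INSTR constant kgdb compares against and the instruction actually planted can no longer drift apart. Roughly how the expansion works (a sketch; stringify_in_c() is defined in asm/asm-const.h):

#define __stringify_1(x...)	#x
#define __stringify(x...)	__stringify_1(x)
#define stringify_in_c(...)	__stringify(__VA_ARGS__) " "

#define BREAK_INSTR		0x7d821008	/* twge r2, r2 */

/* asm(stringify_in_c(.long BREAK_INSTR)) becomes asm(".long 0x7d821008 ") */
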
#define CACHE_FLUSH_IS_SAFE 1
#define DBG_MAX_REG_NUM 70


@@ -35,7 +35,7 @@ struct machdep_calls {
char *name;
#ifdef CONFIG_PPC64
void __iomem * (*ioremap)(phys_addr_t addr, unsigned long size,
unsigned long flags, void *caller);
pgprot_t prot, void *caller);
void (*iounmap)(volatile void __iomem *token);
#ifdef CONFIG_PM
@@ -108,6 +108,7 @@ struct machdep_calls {
/* Early exception handlers called in realmode */
int (*hmi_exception_early)(struct pt_regs *regs);
long (*machine_check_early)(struct pt_regs *regs);
/* Called during machine check exception to retrieve fixup address. */
bool (*mce_check_early_recovery)(struct pt_regs *regs);


@@ -210,4 +210,7 @@ extern void release_mce_event(void);
extern void machine_check_queue_event(void);
extern void machine_check_print_event_info(struct machine_check_event *evt,
bool user_mode);
#ifdef CONFIG_PPC_BOOK3S_64
void flush_and_reload_slb(void);
#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* __ASM_PPC64_MCE_H__ */


@@ -309,6 +309,21 @@ static inline u16 get_mm_addr_key(struct mm_struct *mm, unsigned long address)
*/
#define MMU_PAGE_COUNT 16
/*
* If we store section details in page->flags, we can't increase MAX_PHYSMEM_BITS:
* if we increase SECTIONS_WIDTH we no longer store node details in page->flags,
* and page_to_nid then does a page->section->node lookup.
* Hence we only increase the limit for VMEMMAP. We further depend on
* SPARSEMEM_EXTREME to reduce the memory requirements of the large
* number of sections.
* 51 bits is the max physical real address on POWER9.
*/
#if defined(CONFIG_SPARSEMEM_VMEMMAP) && defined(CONFIG_SPARSEMEM_EXTREME) && \
defined (CONFIG_PPC_64K_PAGES)
#define MAX_PHYSMEM_BITS 51
#else
#define MAX_PHYSMEM_BITS 46
#endif
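
A quick sanity check of what the two limits mean in bytes (plain arithmetic, userspace C):

#include <stdio.h>

int main(void)
{
	/* 2^51 bytes = 2 PiB with VMEMMAP + SPARSEMEM_EXTREME + 64K pages... */
	printf("%llu PiB\n", (1ULL << 51) >> 50);	/* prints 2 */
	/* ...versus 2^46 bytes = 64 TiB otherwise. */
	printf("%llu TiB\n", (1ULL << 46) >> 40);	/* prints 64 */
	return 0;
}
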
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/mmu.h>
#else /* CONFIG_PPC_BOOK3S_64 */


@@ -82,7 +82,7 @@ static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
{
int context_id;
context_id = get_ea_context(&mm->context, ea);
context_id = get_user_context(&mm->context, ea);
if (!context_id)
return true;
return false;


@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
#define MPIC_REGSET_TSI108 MPIC_REGSET(1) /* Tsi108/109 PIC */
/* Get the version of primary MPIC */
#ifdef CONFIG_MPIC
extern u32 fsl_mpic_primary_get_version(void);
#else
static inline u32 fsl_mpic_primary_get_version(void)
{
return 0;
}
#endif
/* Allocate the controller structure and set up the linux irq descs
* for the range of interrupts passed in. No HW initialization is


@@ -128,14 +128,65 @@ extern int icache_44x_need_flush;
#include <asm/nohash/32/pte-8xx.h>
#endif
/* And here we include common definitions */
#include <asm/pte-common.h>
/*
* Location of the PFN in the PTE. Most 32-bit platforms use the same
* as _PAGE_SHIFT here (ie, naturally aligned).
* Platforms that don't simply pre-define the value, so we don't override it here.
*/
#ifndef PTE_RPN_SHIFT
#define PTE_RPN_SHIFT (PAGE_SHIFT)
#endif
/*
* The mask covered by the RPN must be a ULL on 32-bit platforms with
* 64-bit PTEs.
*/
#if defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
#define PTE_RPN_MASK (~((1ULL << PTE_RPN_SHIFT) - 1))
#else
#define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1))
#endif
/*
* _PAGE_CHG_MASK masks of bits that are to be preserved across
* pgprot changes.
*/
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
#ifndef __ASSEMBLY__
#define pte_clear(mm, addr, ptep) \
do { pte_update(ptep, ~0, 0); } while (0)
#ifndef pte_mkwrite
static inline pte_t pte_mkwrite(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_RW);
}
#endif
static inline pte_t pte_mkdirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DIRTY);
}
static inline pte_t pte_mkyoung(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_ACCESSED);
}
#ifndef pte_wrprotect
static inline pte_t pte_wrprotect(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_RW);
}
#endif
static inline pte_t pte_mkexec(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_EXEC);
}
#define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_bad(pmd) (pmd_val(pmd) & _PMD_BAD)
#define pmd_present(pmd) (pmd_val(pmd) & _PMD_PRESENT_MASK)
@@ -244,7 +295,10 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
pte_update(ptep, (_PAGE_RW | _PAGE_HWWRITE), _PAGE_RO);
unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
unsigned long set = pte_val(pte_wrprotect(__pte(0)));
pte_update(ptep, clr, set);
}
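
The idiom above deserves a note: rather than hard-coding which bits write-protection touches, it applies the (possibly platform-overridden) pte_wrprotect() helper to an all-ones and an all-zeroes PTE to discover which bits the helper clears and which it sets. A worked derivation under two assumed helper definitions:

/*
 * 8xx-style helper, pte_wrprotect(pte) = __pte(pte_val(pte) | _PAGE_RO),
 * with _PAGE_RO = 0x0600:
 *     clr = ~pte_val(pte_wrprotect(__pte(~0))) = ~(~0)      = 0
 *     set =  pte_val(pte_wrprotect(__pte(0)))  = 0 | 0x0600 = 0x0600
 *
 * Generic helper, pte_wrprotect(pte) = __pte(pte_val(pte) & ~_PAGE_RW):
 *     clr = ~pte_val(pte_wrprotect(__pte(~0))) = ~(~0 & ~_PAGE_RW) = _PAGE_RW
 *     set =  pte_val(pte_wrprotect(__pte(0)))  = 0
 *
 * Either way pte_update(ptep, clr, set) applies exactly the helper's
 * effect, so this one definition serves every sub-platform.
 */
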
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
@@ -258,9 +312,10 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long address,
int psize)
{
unsigned long set = pte_val(entry) &
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
unsigned long clr = ~pte_val(entry) & (_PAGE_RO | _PAGE_NA);
pte_t pte_set = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(0)))));
pte_t pte_clr = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(~0)))));
unsigned long set = pte_val(entry) & pte_val(pte_set);
unsigned long clr = ~pte_val(entry) & ~pte_val(pte_clr);
pte_update(ptep, clr, set);
@@ -323,7 +378,7 @@ static inline int pte_young(pte_t pte)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 3 })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 3 })
int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
#endif /* !__ASSEMBLY__ */


@@ -50,13 +50,56 @@
#define _PAGE_EXEC 0x200 /* hardware: EX permission */
#define _PAGE_ACCESSED 0x400 /* software: R: page referenced */
/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE 0
/* cache-related flags that don't exist on 40x */
#define _PAGE_COHERENT 0
#define _PAGE_KERNEL_RO 0
#define _PAGE_KERNEL_ROX _PAGE_EXEC
#define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE | _PAGE_EXEC)
#define _PMD_PRESENT 0x400 /* PMD points to page of PTEs */
#define _PMD_PRESENT_MASK _PMD_PRESENT
#define _PMD_BAD 0x802
#define _PMD_SIZE_4M 0x0c0
#define _PMD_SIZE_16M 0x0e0
#define _PMD_USER 0
#define _PTE_NONE_MASK 0
/* Until my rework is finished, 40x still needs atomic PTE updates */
#define PTE_ATOMIC_UPDATES 1
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#define _PAGE_BASE (_PAGE_BASE_NC)
/* Permission masks used to generate the __P and __S table */
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#ifndef __ASSEMBLY__
static inline pte_t pte_wrprotect(pte_t pte)
{
return __pte(pte_val(pte) & ~(_PAGE_RW | _PAGE_HWWRITE));
}
#define pte_wrprotect pte_wrprotect
static inline pte_t pte_mkclean(pte_t pte)
{
return __pte(pte_val(pte) & ~(_PAGE_DIRTY | _PAGE_HWWRITE));
}
#define pte_mkclean pte_mkclean
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_32_PTE_40x_H */


@@ -85,14 +85,44 @@
#define _PAGE_NO_CACHE 0x00000400 /* H: I bit */
#define _PAGE_WRITETHRU 0x00000800 /* H: W bit */
/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE 0
#define _PAGE_KERNEL_RO 0
#define _PAGE_KERNEL_ROX _PAGE_EXEC
#define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW)
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
/* TODO: Add large page lowmem mapping support */
#define _PMD_PRESENT 0
#define _PMD_PRESENT_MASK (PAGE_MASK)
#define _PMD_BAD (~PAGE_MASK)
#define _PMD_USER 0
/* ERPN in a PTE never gets cleared, ignore it */
#define _PTE_NONE_MASK 0xffffffff00000000ULL
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#if defined(CONFIG_SMP)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_BASE_NC)
#endif
/* Permission masks used to generate the __P and __S table */
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_32_PTE_44x_H */


@@ -29,10 +29,10 @@
*/
/* Definitions for 8xx embedded chips. */
#define _PAGE_PRESENT 0x0001 /* Page is valid */
#define _PAGE_NO_CACHE 0x0002 /* I: cache inhibit */
#define _PAGE_PRIVILEGED 0x0004 /* No ASID (context) compare */
#define _PAGE_HUGE 0x0008 /* SPS: Small Page Size (1 if 16k, 512k or 8M)*/
#define _PAGE_PRESENT 0x0001 /* V: Page is valid */
#define _PAGE_NO_CACHE 0x0002 /* CI: cache inhibit */
#define _PAGE_SH 0x0004 /* SH: No ASID (context) compare */
#define _PAGE_SPS 0x0008 /* SPS: Small Page Size (1 if 16k, 512k or 8M)*/
#define _PAGE_DIRTY 0x0100 /* C: page changed */
/* These 4 software bits must be masked out when the L2 entry is loaded
@@ -46,18 +46,95 @@
#define _PAGE_NA 0x0200 /* Supervisor NA, User no access */
#define _PAGE_RO 0x0600 /* Supervisor RO, User no access */
/* cache-related flags that don't exist on 8xx */
#define _PAGE_COHERENT 0
#define _PAGE_WRITETHRU 0
#define _PAGE_KERNEL_RO (_PAGE_SH | _PAGE_RO)
#define _PAGE_KERNEL_ROX (_PAGE_SH | _PAGE_RO | _PAGE_EXEC)
#define _PAGE_KERNEL_RW (_PAGE_SH | _PAGE_DIRTY)
#define _PAGE_KERNEL_RWX (_PAGE_SH | _PAGE_DIRTY | _PAGE_EXEC)
#define _PMD_PRESENT 0x0001
#define _PMD_PRESENT_MASK _PMD_PRESENT
#define _PMD_BAD 0x0fd0
#define _PMD_PAGE_MASK 0x000c
#define _PMD_PAGE_8M 0x000c
#define _PMD_PAGE_512K 0x0004
#define _PMD_USER 0x0020 /* APG 1 */
#define _PTE_NONE_MASK 0
/* Until my rework is finished, 8xx still needs atomic PTE updates */
#define PTE_ATOMIC_UPDATES 1
#ifdef CONFIG_PPC_16K_PAGES
#define _PAGE_PSIZE _PAGE_HUGE
#define _PAGE_PSIZE _PAGE_SPS
#else
#define _PAGE_PSIZE 0
#endif
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#define _PAGE_BASE (_PAGE_BASE_NC)
/* Permission masks used to generate the __P and __S table */
#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_NA)
#define PAGE_SHARED __pgprot(_PAGE_BASE)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_RO)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_RO | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_RO)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_RO | _PAGE_EXEC)
#ifndef __ASSEMBLY__
static inline pte_t pte_wrprotect(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_RO);
}
#define pte_wrprotect pte_wrprotect
static inline int pte_write(pte_t pte)
{
return !(pte_val(pte) & _PAGE_RO);
}
#define pte_write pte_write
static inline pte_t pte_mkwrite(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_RO);
}
#define pte_mkwrite pte_mkwrite
static inline bool pte_user(pte_t pte)
{
return !(pte_val(pte) & _PAGE_SH);
}
#define pte_user pte_user
static inline pte_t pte_mkprivileged(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SH);
}
#define pte_mkprivileged pte_mkprivileged
static inline pte_t pte_mkuser(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_SH);
}
#define pte_mkuser pte_mkuser
static inline pte_t pte_mkhuge(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SPS);
}
#define pte_mkhuge pte_mkhuge
#endif
#endif /* __KERNEL__ */


@@ -31,11 +31,44 @@
#define _PAGE_WRITETHRU 0x00400 /* H: W bit */
#define _PAGE_SPECIAL 0x00800 /* S: Special page */
#define _PAGE_KERNEL_RO 0
#define _PAGE_KERNEL_ROX _PAGE_EXEC
#define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW)
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE 0
#define _PMD_PRESENT 0
#define _PMD_PRESENT_MASK (PAGE_MASK)
#define _PMD_BAD (~PAGE_MASK)
#define _PMD_USER 0
#define _PTE_NONE_MASK 0
#define PTE_WIMGE_SHIFT (6)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_BASE_NC)
#endif
/* Permission masks used to generate the __P and __S table */
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H */


@@ -89,11 +89,47 @@
* Include the PTE bits definitions
*/
#include <asm/nohash/pte-book3e.h>
#include <asm/pte-common.h>
#define _PAGE_SAO 0
#define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1))
/*
* _PAGE_CHG_MASK masks of bits that are to be preserved across
* pgprot changes.
*/
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
#define H_PAGE_4K_PFN 0
#ifndef __ASSEMBLY__
/* pte_clear moved to later in this file */
static inline pte_t pte_mkwrite(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_RW);
}
static inline pte_t pte_mkdirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DIRTY);
}
static inline pte_t pte_mkyoung(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_ACCESSED);
}
static inline pte_t pte_wrprotect(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_RW);
}
static inline pte_t pte_mkexec(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_EXEC);
}
#define PMD_BAD_BITS (PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS (PMD_TABLE_SIZE-1)
@@ -327,8 +363,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) })
#define __swp_entry_to_pte(x) __pte((x).val)
extern int map_kernel_page(unsigned long ea, unsigned long pa,
unsigned long flags);
int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot);
extern int __meminit vmemmap_create_mapping(unsigned long start,
unsigned long page_size,
unsigned long phys);


@@ -8,18 +8,50 @@
#include <asm/nohash/32/pgtable.h>
#endif
/* Permission masks used for kernel mappings */
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | _PAGE_NO_CACHE)
#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
_PAGE_NO_CACHE | _PAGE_GUARDED)
#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
/*
* Protection used for kernel text. We want the debuggers to be able to
* set breakpoints anywhere, so don't write protect the kernel text
* on platforms where such control is possible.
*/
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
#endif
/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
/* Advertise special mapping type for AGP */
#define PAGE_AGP (PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
#ifndef __ASSEMBLY__
/* Generic accessors to PTE bits */
#ifndef pte_write
static inline int pte_write(pte_t pte)
{
return (pte_val(pte) & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO;
return pte_val(pte) & _PAGE_RW;
}
#endif
static inline int pte_read(pte_t pte) { return 1; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
static inline bool pte_hashpte(pte_t pte) { return false; }
static inline bool pte_ci(pte_t pte) { return pte_val(pte) & _PAGE_NO_CACHE; }
static inline bool pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXEC; }
#ifdef CONFIG_NUMA_BALANCING
/*
@@ -29,8 +61,7 @@ static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PA
*/
static inline int pte_protnone(pte_t pte)
{
return (pte_val(pte) &
(_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
return pte_present(pte) && !pte_user(pte);
}
static inline int pmd_protnone(pmd_t pmd)
@@ -44,6 +75,23 @@ static inline int pte_present(pte_t pte)
return pte_val(pte) & _PAGE_PRESENT;
}
static inline bool pte_hw_valid(pte_t pte)
{
return pte_val(pte) & _PAGE_PRESENT;
}
/*
* Don't just check for any non-zero bits in _PAGE_USER, since for book3e
* and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
* _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
*/
#ifndef pte_user
static inline bool pte_user(pte_t pte)
{
return (pte_val(pte) & _PAGE_USER) == _PAGE_USER;
}
#endif
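
To see why the compare is against the full mask, here is a worked case with the book3e bit values (quoted from memory, so treat the numbers as illustrative):

/*
 * _PAGE_BAP_SR = 0x04 (supervisor read), _PAGE_BAP_UR = 0x08 (user read)
 * _PAGE_USER   = _PAGE_BAP_UR | _PAGE_BAP_SR = 0x0c
 *
 * A kernel mapping such as PAGE_KERNEL_X carries _PAGE_BAP_SR, so:
 *     pte_val(pte) & _PAGE_USER                  = 0x04  (non-zero!)
 *     (pte_val(pte) & _PAGE_USER) == _PAGE_USER  -> false (correct)
 *
 * i.e. an "any bit set" test would misreport kernel pages as user.
 */
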
/*
* We only find page table entry in the last level
* Hence no need for other accessors
@@ -77,42 +125,26 @@ static inline unsigned long pte_pfn(pte_t pte) {
return pte_val(pte) >> PTE_RPN_SHIFT; }
/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
static inline pte_t pte_exprotect(pte_t pte)
{
pte_basic_t ptev;
ptev = pte_val(pte) & ~(_PAGE_RW | _PAGE_HWWRITE);
ptev |= _PAGE_RO;
return __pte(ptev);
return __pte(pte_val(pte) & ~_PAGE_EXEC);
}
#ifndef pte_mkclean
static inline pte_t pte_mkclean(pte_t pte)
{
return __pte(pte_val(pte) & ~(_PAGE_DIRTY | _PAGE_HWWRITE));
return __pte(pte_val(pte) & ~_PAGE_DIRTY);
}
#endif
static inline pte_t pte_mkold(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}
static inline pte_t pte_mkwrite(pte_t pte)
static inline pte_t pte_mkpte(pte_t pte)
{
pte_basic_t ptev;
ptev = pte_val(pte) & ~_PAGE_RO;
ptev |= _PAGE_RW;
return __pte(ptev);
}
static inline pte_t pte_mkdirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DIRTY);
}
static inline pte_t pte_mkyoung(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_ACCESSED);
return pte;
}
static inline pte_t pte_mkspecial(pte_t pte)
@@ -120,10 +152,26 @@ static inline pte_t pte_mkspecial(pte_t pte)
return __pte(pte_val(pte) | _PAGE_SPECIAL);
}
#ifndef pte_mkhuge
static inline pte_t pte_mkhuge(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_HUGE);
return __pte(pte_val(pte));
}
#endif
#ifndef pte_mkprivileged
static inline pte_t pte_mkprivileged(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_USER);
}
#endif
#ifndef pte_mkuser
static inline pte_t pte_mkuser(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_USER);
}
#endif
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
@@ -197,6 +245,8 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long addre
#if _PAGE_WRITETHRU != 0
#define pgprot_cached_wthru(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT | _PAGE_WRITETHRU))
#else
#define pgprot_cached_wthru(prot) pgprot_noncached(prot)
#endif
#define pgprot_cached_noncoherent(prot) \


@@ -77,7 +77,48 @@
#define _PMD_PRESENT 0
#define _PMD_PRESENT_MASK (PAGE_MASK)
#define _PMD_BAD (~PAGE_MASK)
#define _PMD_USER 0
#else
#define _PTE_NONE_MASK 0
#endif
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#if defined(CONFIG_SMP)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_BASE_NC)
#endif
/* Permission masks used to generate the __P and __S table */
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#ifndef __ASSEMBLY__
static inline pte_t pte_mkprivileged(pte_t pte)
{
return __pte((pte_val(pte) & ~_PAGE_USER) | _PAGE_PRIVILEGED);
}
#define pte_mkprivileged pte_mkprivileged
static inline pte_t pte_mkuser(pte_t pte)
{
return __pte((pte_val(pte) & ~_PAGE_PRIVILEGED) | _PAGE_USER);
}
#define pte_mkuser pte_mkuser
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_PTE_BOOK3E_H */


@@ -1050,6 +1050,7 @@ enum OpalSysCooling {
enum {
OPAL_REBOOT_NORMAL = 0,
OPAL_REBOOT_PLATFORM_ERROR = 1,
OPAL_REBOOT_FULL_IPL = 2,
};
/* Argument to OPAL_PCI_TCE_KILL */

View File

@@ -113,7 +113,13 @@ struct paca_struct {
* on the linear mapping */
/* SLB related definitions */
u16 vmalloc_sllp;
u16 slb_cache_ptr;
u8 slb_cache_ptr;
u8 stab_rr; /* stab/slb round-robin counter */
#ifdef CONFIG_DEBUG_VM
u8 in_kernel_slb_handler;
#endif
u32 slb_used_bitmap; /* Bitmaps for first 32 SLB entries. */
u32 slb_kern_bitmap;
u32 slb_cache[SLB_CACHE_ENTRIES];
#endif /* CONFIG_PPC_BOOK3S_64 */
@@ -160,7 +166,6 @@
*/
struct task_struct *__current; /* Pointer to current */
u64 kstack; /* Saved Kernel stack addr */
u64 stab_rr; /* stab/slb round-robin counter */
u64 saved_r1; /* r1 save for RTAS calls or PM or EE=0 */
u64 saved_msr; /* MSR saved here by enter_rtas */
u16 trap_save; /* Used when bad stack is encountered */
@@ -250,6 +255,15 @@ struct paca_struct {
#ifdef CONFIG_PPC_PSERIES
u8 *mce_data_buf; /* buffer to hold per cpu rtas errlog */
#endif /* CONFIG_PPC_PSERIES */
#ifdef CONFIG_PPC_BOOK3S_64
/* Capture SLB related old contents in MCE handler. */
struct slb_entry *mce_faulty_slbs;
u16 slb_save_cache_ptr;
#endif /* CONFIG_PPC_BOOK3S_64 */
#ifdef CONFIG_STACKPROTECTOR
unsigned long canary;
#endif
} ____cacheline_aligned;
extern void copy_mm_to_paca(struct mm_struct *mm);


@@ -20,6 +20,25 @@ struct mm_struct;
#include <asm/nohash/pgtable.h>
#endif /* !CONFIG_PPC_BOOK3S */
/* Note due to the way vm flags are laid out, the bits are XWR */
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
#define __P011 PAGE_COPY
#define __P100 PAGE_READONLY_X
#define __P101 PAGE_READONLY_X
#define __P110 PAGE_COPY_X
#define __P111 PAGE_COPY_X
#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_READONLY_X
#define __S101 PAGE_READONLY_X
#define __S110 PAGE_SHARED_X
#define __S111 PAGE_SHARED_X
#ifndef __ASSEMBLY__
#include <asm/tlbflush.h>
@@ -27,6 +46,16 @@ struct mm_struct;
/* Keep these as a macros to avoid include dependency mess */
#define pte_page(x) pfn_to_page(pte_pfn(x))
#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
/*
* Select all bits except the pfn
*/
static inline pgprot_t pte_pgprot(pte_t pte)
{
unsigned long pte_flags;
pte_flags = pte_val(pte) & ~PTE_RPN_MASK;
return __pgprot(pte_flags);
}
/*
* ZERO_PAGE is a global shared page that is always zero: used


@@ -58,6 +58,7 @@ void eeh_save_bars(struct eeh_dev *edev);
int rtas_write_config(struct pci_dn *, int where, int size, u32 val);
int rtas_read_config(struct pci_dn *, int where, int size, u32 *val);
void eeh_pe_state_mark(struct eeh_pe *pe, int state);
void eeh_pe_mark_isolated(struct eeh_pe *pe);
void eeh_pe_state_clear(struct eeh_pe *pe, int state);
void eeh_pe_state_mark_with_cfg(struct eeh_pe *pe, int state);
void eeh_pe_dev_mode_mark(struct eeh_pe *pe, int mode);


@@ -32,9 +32,9 @@
/* Default SMT priority is set to 3. Use bits 11-13 to save the priority. */
#define PPR_PRIORITY 3
#ifdef __ASSEMBLY__
#define INIT_PPR (PPR_PRIORITY << 50)
#define DEFAULT_PPR (PPR_PRIORITY << 50)
#else
#define INIT_PPR ((u64)PPR_PRIORITY << 50)
#define DEFAULT_PPR ((u64)PPR_PRIORITY << 50)
#endif /* __ASSEMBLY__ */
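
A quick check of the encoding: the PPR holds the priority in bits 11-13 of the 64-bit register in IBM (MSB 0) numbering, which is why the value is shifted left by 50:

/*
 * DEFAULT_PPR = (u64)3 << 50 = 0x000c000000000000
 *
 * Bits 50 and 51 (LSB 0 numbering) are IBM bits 13 and 12, so the
 * three-bit field at IBM bits 11:13 reads 0b011 = 3, the default
 * priority from the comment above.
 */
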
#endif /* CONFIG_PPC64 */
@@ -273,6 +273,7 @@ struct thread_struct {
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
struct arch_hw_breakpoint hw_brk; /* info on the hardware breakpoint */
unsigned long trap_nr; /* last trap # on this thread */
u8 load_slb; /* Ages out SLB preload cache entries */
u8 load_fp;
#ifdef CONFIG_ALTIVEC
u8 load_vec;
@@ -341,7 +342,6 @@ struct thread_struct {
* onwards.
*/
int dscr_inherit;
unsigned long ppr; /* used to save/restore SMT priority */
unsigned long tidr;
#endif
#ifdef CONFIG_PPC_BOOK3S_64
@@ -389,7 +389,6 @@ struct thread_struct {
.regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
.addr_limit = KERNEL_DS, \
.fpexc_mode = 0, \
.ppr = INIT_PPR, \
.fscr = FSCR_TAR | FSCR_EBB \
}
#endif


@@ -1,219 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Included from asm/pgtable-*.h only ! */
/*
* Some bits are only used on some cpu families... Make sure that all
* the undefined gets a sensible default
*/
#ifndef _PAGE_HASHPTE
#define _PAGE_HASHPTE 0
#endif
#ifndef _PAGE_HWWRITE
#define _PAGE_HWWRITE 0
#endif
#ifndef _PAGE_EXEC
#define _PAGE_EXEC 0
#endif
#ifndef _PAGE_ENDIAN
#define _PAGE_ENDIAN 0
#endif
#ifndef _PAGE_COHERENT
#define _PAGE_COHERENT 0
#endif
#ifndef _PAGE_WRITETHRU
#define _PAGE_WRITETHRU 0
#endif
#ifndef _PAGE_4K_PFN
#define _PAGE_4K_PFN 0
#endif
#ifndef _PAGE_SAO
#define _PAGE_SAO 0
#endif
#ifndef _PAGE_PSIZE
#define _PAGE_PSIZE 0
#endif
/* _PAGE_RO and _PAGE_RW shall not be defined at the same time */
#ifndef _PAGE_RO
#define _PAGE_RO 0
#else
#define _PAGE_RW 0
#endif
#ifndef _PAGE_PTE
#define _PAGE_PTE 0
#endif
/* At least one of _PAGE_PRIVILEGED or _PAGE_USER must be defined */
#ifndef _PAGE_PRIVILEGED
#define _PAGE_PRIVILEGED 0
#else
#ifndef _PAGE_USER
#define _PAGE_USER 0
#endif
#endif
#ifndef _PAGE_NA
#define _PAGE_NA 0
#endif
#ifndef _PAGE_HUGE
#define _PAGE_HUGE 0
#endif
#ifndef _PMD_PRESENT_MASK
#define _PMD_PRESENT_MASK _PMD_PRESENT
#endif
#ifndef _PMD_USER
#define _PMD_USER 0
#endif
#ifndef _PAGE_KERNEL_RO
#define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_RO)
#endif
#ifndef _PAGE_KERNEL_ROX
#define _PAGE_KERNEL_ROX (_PAGE_PRIVILEGED | _PAGE_RO | _PAGE_EXEC)
#endif
#ifndef _PAGE_KERNEL_RW
#define _PAGE_KERNEL_RW (_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE)
#endif
#ifndef _PAGE_KERNEL_RWX
#define _PAGE_KERNEL_RWX (_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | \
_PAGE_HWWRITE | _PAGE_EXEC)
#endif
#ifndef _PAGE_HPTEFLAGS
#define _PAGE_HPTEFLAGS _PAGE_HASHPTE
#endif
#ifndef _PTE_NONE_MASK
#define _PTE_NONE_MASK _PAGE_HPTEFLAGS
#endif
#ifndef __ASSEMBLY__
/*
* Don't just check for any non zero bits in __PAGE_USER, since for book3e
* and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
* _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
*/
static inline bool pte_user(pte_t pte)
{
return (pte_val(pte) & (_PAGE_USER | _PAGE_PRIVILEGED)) == _PAGE_USER;
}
#endif /* __ASSEMBLY__ */
/* Location of the PFN in the PTE. Most 32-bit platforms use the same
* as _PAGE_SHIFT here (ie, naturally aligned).
* Platform who don't just pre-define the value so we don't override it here
*/
#ifndef PTE_RPN_SHIFT
#define PTE_RPN_SHIFT (PAGE_SHIFT)
#endif
/* The mask covered by the RPN must be a ULL on 32-bit platforms with
* 64-bit PTEs
*/
#if defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
#define PTE_RPN_MASK (~((1ULL<<PTE_RPN_SHIFT)-1))
#else
#define PTE_RPN_MASK (~((1UL<<PTE_RPN_SHIFT)-1))
#endif
/* _PAGE_CHG_MASK masks of bits that are to be preserved across
* pgprot changes
*/
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
_PAGE_ACCESSED | _PAGE_SPECIAL)
/* Mask of bits returned by pte_pgprot() */
#define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
_PAGE_WRITETHRU | _PAGE_ENDIAN | _PAGE_4K_PFN | \
_PAGE_USER | _PAGE_ACCESSED | _PAGE_RO | _PAGE_NA | \
_PAGE_PRIVILEGED | \
_PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | _PAGE_EXEC)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU) || \
defined(CONFIG_PPC_E500MC)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_BASE_NC)
#endif
/* Permission masks used to generate the __P and __S table,
*
* Note:__pgprot is defined in arch/powerpc/include/asm/page.h
*
* Write permissions imply read permissions for now (we could make write-only
* pages on BookE but we don't bother for now). Execute permission control is
* possible on platforms that define _PAGE_EXEC
*
* Note due to the way vm flags are laid out, the bits are XWR
*/
#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_NA)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
_PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RO)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RO | \
_PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RO)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RO | \
_PAGE_EXEC)
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
#define __P011 PAGE_COPY
#define __P100 PAGE_READONLY_X
#define __P101 PAGE_READONLY_X
#define __P110 PAGE_COPY_X
#define __P111 PAGE_COPY_X
#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_READONLY_X
#define __S101 PAGE_READONLY_X
#define __S110 PAGE_SHARED_X
#define __S111 PAGE_SHARED_X
/* Permission masks used for kernel mappings */
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
_PAGE_NO_CACHE)
#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
_PAGE_NO_CACHE | _PAGE_GUARDED)
#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
/* Protection used for kernel text. We want the debuggers to be able to
* set breakpoints anywhere, so don't write protect the kernel text
* on platforms where such control is possible.
*/
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
#endif
/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
/* Advertise special mapping type for AGP */
#define PAGE_AGP (PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
#ifndef _PAGE_READ
/* if not defined, we should not find _PAGE_WRITE too */
#define _PAGE_READ 0
#define _PAGE_WRITE _PAGE_RW
#endif
#ifndef H_PAGE_4K_PFN
#define H_PAGE_4K_PFN 0
#endif


@@ -26,6 +26,37 @@
#include <uapi/asm/ptrace.h>
#include <asm/asm-const.h>
#ifndef __ASSEMBLY__
struct pt_regs
{
union {
struct user_pt_regs user_regs;
struct {
unsigned long gpr[32];
unsigned long nip;
unsigned long msr;
unsigned long orig_gpr3;
unsigned long ctr;
unsigned long link;
unsigned long xer;
unsigned long ccr;
#ifdef CONFIG_PPC64
unsigned long softe;
#else
unsigned long mq;
#endif
unsigned long trap;
unsigned long dar;
unsigned long dsisr;
unsigned long result;
};
};
#ifdef CONFIG_PPC64
unsigned long ppr;
#endif
};
#endif
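
Because the named fields sit in a union with user_pt_regs, existing kernel code keeps dereferencing regs->gpr[i] and friends unchanged while the user-visible ABI gets its own fixed type, and kernel-only state such as ppr can live past the shared part. A reduced sketch of the aliasing (illustrative types, only a couple of fields):

#include <stdio.h>

struct sketch_user_pt_regs {
	unsigned long gpr[32];
	unsigned long nip;
};

struct sketch_pt_regs {
	union {
		struct sketch_user_pt_regs user_regs;
		struct {
			unsigned long gpr[32];
			unsigned long nip;
		};
	};
	unsigned long ppr;	/* kernel-only, beyond the ABI part */
};

int main(void)
{
	struct sketch_pt_regs regs = { 0 };

	regs.gpr[3] = 42;
	/* Same storage under two names: prints 42. */
	printf("%lu\n", regs.user_regs.gpr[3]);
	return 0;
}
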
#ifdef __powerpc64__
@@ -102,6 +133,11 @@ static inline long regs_return_value(struct pt_regs *regs)
return -regs->gpr[3];
}
static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
{
regs->gpr[3] = rc;
}
#ifdef __powerpc64__
#define user_mode(regs) ((((regs)->msr) >> MSR_PR_LG) & 0x1)
#else


@@ -118,11 +118,16 @@
#define MSR_TS_S __MASK(MSR_TS_S_LG) /* Transaction Suspended */
#define MSR_TS_T __MASK(MSR_TS_T_LG) /* Transaction Transactional */
#define MSR_TS_MASK (MSR_TS_T | MSR_TS_S) /* Transaction State bits */
#define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */
#define MSR_TM_RESV(x) (((x) & MSR_TS_MASK) == MSR_TS_MASK) /* Reserved */
#define MSR_TM_TRANSACTIONAL(x) (((x) & MSR_TS_MASK) == MSR_TS_T)
#define MSR_TM_SUSPENDED(x) (((x) & MSR_TS_MASK) == MSR_TS_S)
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
#define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */
#else
#define MSR_TM_ACTIVE(x) 0
#endif
#if defined(CONFIG_PPC_BOOK3S_64)
#define MSR_64BIT MSR_SF


@@ -125,6 +125,7 @@ struct rtas_suspend_me_data {
#define RTAS_TYPE_INFO 0xE2
#define RTAS_TYPE_DEALLOC 0xE3
#define RTAS_TYPE_DUMP 0xE4
#define RTAS_TYPE_HOTPLUG 0xE5
/* I don't add PowerMGM events right now, this is a different topic */
#define RTAS_TYPE_PMGM_POWER_SW_ON 0x60
#define RTAS_TYPE_PMGM_POWER_SW_OFF 0x61
@@ -185,11 +186,23 @@ static inline uint8_t rtas_error_disposition(const struct rtas_error_log *elog)
return (elog->byte1 & 0x18) >> 3;
}
static inline
void rtas_set_disposition_recovered(struct rtas_error_log *elog)
{
elog->byte1 &= ~0x18;
elog->byte1 |= (RTAS_DISP_FULLY_RECOVERED << 3);
}
static inline uint8_t rtas_error_extended(const struct rtas_error_log *elog)
{
return (elog->byte1 & 0x04) >> 2;
}
static inline uint8_t rtas_error_initiator(const struct rtas_error_log *elog)
{
return (elog->byte2 & 0xf0) >> 4;
}
#define rtas_error_type(x) ((x)->byte3)
static inline
@@ -275,6 +288,7 @@ inline uint32_t rtas_ext_event_company_id(struct rtas_ext_event_log_v6 *ext_log)
#define PSERIES_ELOG_SECT_ID_CALL_HOME (('C' << 8) | 'H')
#define PSERIES_ELOG_SECT_ID_USER_DEF (('U' << 8) | 'D')
#define PSERIES_ELOG_SECT_ID_HOTPLUG (('H' << 8) | 'P')
#define PSERIES_ELOG_SECT_ID_MCE (('M' << 8) | 'C')
/* Vendor specific Platform Event Log Format, Version 6, section header */
struct pseries_errorlog {
@@ -316,6 +330,7 @@ struct pseries_hp_errorlog {
#define PSERIES_HP_ELOG_RESOURCE_MEM 2
#define PSERIES_HP_ELOG_RESOURCE_SLOT 3
#define PSERIES_HP_ELOG_RESOURCE_PHB 4
#define PSERIES_HP_ELOG_RESOURCE_PMEM 6
#define PSERIES_HP_ELOG_ACTION_ADD 1
#define PSERIES_HP_ELOG_ACTION_REMOVE 2


@@ -32,6 +32,7 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
void slice_init_new_context_exec(struct mm_struct *mm);
void slice_setup_new_exec(void);
#endif /* __ASSEMBLY__ */


@@ -100,6 +100,7 @@ static inline void set_hard_smp_processor_id(int cpu, int phys)
DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_map);
DECLARE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
DECLARE_PER_CPU(cpumask_var_t, cpu_core_map);
DECLARE_PER_CPU(cpumask_var_t, cpu_smallcore_map);
static inline struct cpumask *cpu_sibling_mask(int cpu)
{
@@ -116,6 +117,11 @@ static inline struct cpumask *cpu_l2_cache_mask(int cpu)
return per_cpu(cpu_l2_cache_map, cpu);
}
static inline struct cpumask *cpu_smallcore_mask(int cpu)
{
return per_cpu(cpu_smallcore_map, cpu);
}
extern int cpu_to_core_id(int cpu);
/* Since OpenPIC has only 4 IPIs, we use slightly different message numbers.
@@ -166,6 +172,11 @@ static inline const struct cpumask *cpu_sibling_mask(int cpu)
return cpumask_of(cpu);
}
static inline const struct cpumask *cpu_smallcore_mask(int cpu)
{
return cpumask_of(cpu);
}
#endif /* CONFIG_SMP */
#ifdef CONFIG_PPC64


@@ -9,17 +9,6 @@
* MAX_PHYSMEM_BITS 2^N: how much memory we can have in that space
*/
#define SECTION_SIZE_BITS 24
/*
* If we store section details in page->flags we can't increase the MAX_PHYSMEM_BITS
* if we increase SECTIONS_WIDTH we will not store node details in page->flags and
* page_to_nid does a page->section->node lookup
* Hence only increase for VMEMMAP.
*/
#ifdef CONFIG_SPARSEMEM_VMEMMAP
#define MAX_PHYSMEM_BITS 47
#else
#define MAX_PHYSMEM_BITS 46
#endif
#endif /* CONFIG_SPARSEMEM */


@@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* GCC stack protector support.
*
*/
#ifndef _ASM_STACKPROTECTOR_H
#define _ASM_STACKPROTECTOR_H
#include <linux/random.h>
#include <linux/version.h>
#include <asm/reg.h>
#include <asm/current.h>
#include <asm/paca.h>
/*
* Initialize the stackprotector canary value.
*
* NOTE: this must only be called from functions that never return,
* and it must always be inlined.
*/
static __always_inline void boot_init_stack_canary(void)
{
unsigned long canary;
/* Try to get a semi random initial value. */
canary = get_random_canary();
canary ^= mftb();
canary ^= LINUX_VERSION_CODE;
canary &= CANARY_MASK;
current->stack_canary = canary;
#ifdef CONFIG_PPC64
get_paca()->canary = canary;
#endif
}
#endif /* _ASM_STACKPROTECTOR_H */
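
For context, what the compiler then does with this value, sketched as pseudo-C (this is what -fstack-protector-strong emits around functions with on-stack buffers, not kernel code; on powerpc the canary is fetched via the per-task location rather than a global):

/*
 * unsigned long sketch_func(void)
 * {
 *	unsigned long canary = current->stack_canary;  // prologue load
 *	char buf[64];
 *	// ... body that might overflow buf ...
 *	if (canary != current->stack_canary)           // epilogue check
 *		__stack_chk_fail();                    // fatal on mismatch
 *	return 0;
 * }
 */
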


@@ -29,6 +29,7 @@
#include <asm/page.h>
#include <asm/accounting.h>
#define SLB_PRELOAD_NR 16U
/*
* low level task data.
*/
@@ -44,6 +45,10 @@ struct thread_info {
#if defined(CONFIG_VIRT_CPU_ACCOUNTING_NATIVE) && defined(CONFIG_PPC32)
struct cpu_accounting_data accounting;
#endif
unsigned char slb_preload_nr;
unsigned char slb_preload_tail;
u32 slb_preload_esid[SLB_PRELOAD_NR];
/* low level flags - has atomic operations done on it */
unsigned long flags ____cacheline_aligned_in_smp;
};
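
The three new fields implement a small FIFO of recently used ESIDs that the SLB code replays on context switch. A hedged sketch of how such a ring fills (the slb_preload_* names are the real fields above; the helper itself is illustrative, not the kernel's exact code):

/* Illustrative insert into the preload ring; the oldest entry is
 * dropped once SLB_PRELOAD_NR entries are cached. */
static void sketch_preload_add(struct thread_info *ti, u32 esid)
{
	unsigned char idx =
		(ti->slb_preload_tail + ti->slb_preload_nr) % SLB_PRELOAD_NR;

	ti->slb_preload_esid[idx] = esid;
	if (ti->slb_preload_nr < SLB_PRELOAD_NR)
		ti->slb_preload_nr++;
	else	/* overwrote the oldest slot, advance the tail */
		ti->slb_preload_tail =
			(ti->slb_preload_tail + 1) % SLB_PRELOAD_NR;
}
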
@@ -72,6 +77,12 @@ static inline struct thread_info *current_thread_info(void)
}
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
#ifdef CONFIG_PPC_BOOK3S_64
void arch_setup_new_exec(void);
#define arch_setup_new_exec arch_setup_new_exec
#endif
#endif /* __ASSEMBLY__ */
/*
@@ -81,7 +92,7 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src
#define TIF_SIGPENDING 1 /* signal pending */
#define TIF_NEED_RESCHED 2 /* rescheduling necessary */
#define TIF_FSCHECK 3 /* Check FS is USER_DS on return */
#define TIF_32BIT 4 /* 32 bit binary */
#define TIF_SYSCALL_EMU 4 /* syscall emulation active */
#define TIF_RESTORE_TM 5 /* need to restore TM FP/VEC/VSX */
#define TIF_PATCH_PENDING 6 /* pending live patching update */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
@@ -100,6 +111,7 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src
#define TIF_ELF2ABI 18 /* function descriptors must die! */
#endif
#define TIF_POLLING_NRFLAG 19 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_32BIT 20 /* 32 bit binary */
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
@@ -120,9 +132,10 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src
#define _TIF_EMULATE_STACK_STORE (1<<TIF_EMULATE_STACK_STORE)
#define _TIF_NOHZ (1<<TIF_NOHZ)
#define _TIF_FSCHECK (1<<TIF_FSCHECK)
#define _TIF_SYSCALL_EMU (1<<TIF_SYSCALL_EMU)
#define _TIF_SYSCALL_DOTRACE (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT | \
_TIF_NOHZ)
_TIF_NOHZ | _TIF_SYSCALL_EMU)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_RESUME | _TIF_UPROBE | \


@@ -201,6 +201,21 @@ TRACE_EVENT(tlbie,
__entry->r)
);
TRACE_EVENT(tlbia,
TP_PROTO(unsigned long id),
TP_ARGS(id),
TP_STRUCT__entry(
__field(unsigned long, id)
),
TP_fast_assign(
__entry->id = id;
),
TP_printk("ctx.id=0x%lx", __entry->id)
);
#endif /* _TRACE_POWERPC_H */
#undef TRACE_INCLUDE_PATH


@@ -260,7 +260,7 @@ do { \
({ \
long __gu_err; \
__long_type(*(ptr)) __gu_val; \
const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__chk_user_ptr(ptr); \
if (!is_kernel_addr((unsigned long)__gu_addr)) \
might_fault(); \
@@ -274,7 +274,7 @@ do { \
({ \
long __gu_err = -EFAULT; \
__long_type(*(ptr)) __gu_val = 0; \
const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \
might_fault(); \
if (access_ok(VERIFY_READ, __gu_addr, (size))) { \
barrier_nospec(); \
@@ -288,7 +288,7 @@ do { \
({ \
long __gu_err; \
__long_type(*(ptr)) __gu_val; \
const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__chk_user_ptr(ptr); \
barrier_nospec(); \
__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \


@@ -31,7 +31,7 @@
* to write an integer number of pages.
*/
struct user {
struct pt_regs regs; /* entire machine state */
struct user_pt_regs regs; /* entire machine state */
size_t u_tsize; /* text size (pages) */
size_t u_dsize; /* data size (pages) */
size_t u_ssize; /* stack size (pages) */


@@ -29,7 +29,12 @@
#ifndef __ASSEMBLY__
struct pt_regs {
#ifdef __KERNEL__
struct user_pt_regs
#else
struct pt_regs
#endif
{
unsigned long gpr[32];
unsigned long nip;
unsigned long msr;
@@ -160,6 +165,10 @@ struct pt_regs {
#define PTRACE_GETVSRREGS 0x1b
#define PTRACE_SETVSRREGS 0x1c
/* Syscall emulation defines */
#define PTRACE_SYSEMU 0x1d
#define PTRACE_SYSEMU_SINGLESTEP 0x1e
/*
* Get or set a debug register. The first 16 are DABR registers and the
* second 16 are IABR registers.


@@ -22,7 +22,11 @@ struct sigcontext {
#endif
unsigned long handler;
unsigned long oldmask;
struct pt_regs __user *regs;
#ifdef __KERNEL__
struct user_pt_regs __user *regs;
#else
struct pt_regs *regs;
#endif
#ifdef __powerpc64__
elf_gregset_t gp_regs;
elf_fpregset_t fp_regs;


@@ -5,7 +5,8 @@
CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
# Disable clang warning for using setjmp without setjmp.h header
CFLAGS_crash.o += $(call cc-disable-warning, builtin-requires-header)
ifdef CONFIG_PPC64
CFLAGS_prom_init.o += $(NO_MINIMAL_TOC)
@@ -20,12 +21,14 @@ CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom_init.o += $(call cc-option, -fno-stack-protector)
ifdef CONFIG_FUNCTION_TRACER
# Do not trace early boot code
CFLAGS_REMOVE_cputable.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_prom_init.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_btext.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_prom.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_cputable.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_prom_init.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_btext.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_prom.o = $(CC_FLAGS_FTRACE)
endif
obj-y := cputable.o ptrace.o syscalls.o \


@@ -79,11 +79,16 @@ int main(void)
{
OFFSET(THREAD, task_struct, thread);
OFFSET(MM, task_struct, mm);
#ifdef CONFIG_STACKPROTECTOR
OFFSET(TASK_CANARY, task_struct, stack_canary);
#ifdef CONFIG_PPC64
OFFSET(PACA_CANARY, paca_struct, canary);
#endif
#endif
OFFSET(MMCONTEXTID, mm_struct, context.id);
#ifdef CONFIG_PPC64
DEFINE(SIGSEGV, SIGSEGV);
DEFINE(NMI_MASK, NMI_MASK);
OFFSET(TASKTHREADPPR, task_struct, thread.ppr);
#else
OFFSET(THREAD_INFO, task_struct, stack);
DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
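The TASK_CANARY and PACA_CANARY offsets exist so the context-switch asm (see the _switch hunk later in this diff) can copy the per-task canary into the paca, where the compiler-emitted checks expect to find it on 64-bit. What those checks do is easiest to see in a userspace analogue; illustration only, built with gcc -fstack-protector-strong:

#include <stdio.h>
#include <string.h>

static void smash(const char *src)
{
	char buf[8];

	strcpy(buf, src);	/* overruns buf and the canary above it */
	printf("%s\n", buf);
}	/* epilogue check sees a changed canary -> __stack_chk_fail() aborts */

int main(void)
{
	smash("much longer than eight bytes");
	return 0;	/* never reached */
}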
@@ -173,7 +178,6 @@ int main(void)
OFFSET(PACAKSAVE, paca_struct, kstack);
OFFSET(PACACURRENT, paca_struct, __current);
OFFSET(PACASAVEDMSR, paca_struct, saved_msr);
OFFSET(PACASTABRR, paca_struct, stab_rr);
OFFSET(PACAR1, paca_struct, saved_r1);
OFFSET(PACATOC, paca_struct, kernel_toc);
OFFSET(PACAKBASE, paca_struct, kernelbase);
@@ -212,6 +216,7 @@ int main(void)
#ifdef CONFIG_PPC_BOOK3S_64
OFFSET(PACASLBCACHE, paca_struct, slb_cache);
OFFSET(PACASLBCACHEPTR, paca_struct, slb_cache_ptr);
OFFSET(PACASTABRR, paca_struct, stab_rr);
OFFSET(PACAVMALLOCSLLP, paca_struct, vmalloc_sllp);
#ifdef CONFIG_PPC_MM_SLICES
OFFSET(MMUPSIZESLLP, mmu_psize_def, sllp);
@@ -274,11 +279,6 @@ int main(void)
/* Interrupt register frame */
DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
#ifdef CONFIG_PPC64
/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
#endif /* CONFIG_PPC64 */
STACK_PT_REGS_OFFSET(GPR0, gpr[0]);
STACK_PT_REGS_OFFSET(GPR1, gpr[1]);
STACK_PT_REGS_OFFSET(GPR2, gpr[2]);
@@ -322,10 +322,7 @@ int main(void)
STACK_PT_REGS_OFFSET(_ESR, dsisr);
#else /* CONFIG_PPC64 */
STACK_PT_REGS_OFFSET(SOFTE, softe);
/* These _only_ to be used with {PROM,RTAS}_FRAME_SIZE!!! */
DEFINE(_SRR0, STACK_FRAME_OVERHEAD+sizeof(struct pt_regs));
DEFINE(_SRR1, STACK_FRAME_OVERHEAD+sizeof(struct pt_regs)+8);
STACK_PT_REGS_OFFSET(_PPR, ppr);
#endif /* CONFIG_PPC64 */
#if defined(CONFIG_PPC32)


@@ -163,7 +163,7 @@ void btext_map(void)
offset = ((unsigned long) dispDeviceBase) - base;
size = dispDeviceRowBytes * dispDeviceRect[3] + offset
+ dispDeviceRect[0];
vbase = __ioremap(base, size, pgprot_val(pgprot_noncached_wc(__pgprot(0))));
vbase = ioremap_wc(base, size);
if (!vbase)
return;
logicalDisplayBase = vbase + offset;
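This is one of several hunks replacing open-coded __ioremap(..., pgprot_val(...)) calls with an intent-revealing helper; here the framebuffer becomes a write-combining mapping via ioremap_wc(). A minimal sketch of the same pattern in a driver, with placeholder address and length:

#include <linux/errno.h>
#include <linux/io.h>

#define FB_PHYS	0xc0000000UL		/* placeholder physical base */
#define FB_LEN	(1024 * 768 * 4)	/* placeholder size */

static void __iomem *fb;

static int fb_map(void)
{
	fb = ioremap_wc(FB_PHYS, FB_LEN);	/* write-combining MMIO map */
	if (!fb)
		return -ENOMEM;
	memset_io(fb, 0, FB_LEN);		/* e.g. clear the screen */
	return 0;
}

static void fb_unmap(void)
{
	iounmap(fb);
}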


@@ -20,6 +20,8 @@
#include <linux/percpu.h>
#include <linux/slab.h>
#include <asm/prom.h>
#include <asm/cputhreads.h>
#include <asm/smp.h>
#include "cacheinfo.h"
@@ -627,17 +629,48 @@ static ssize_t level_show(struct kobject *k, struct kobj_attribute *attr, char *
static struct kobj_attribute cache_level_attr =
__ATTR(level, 0444, level_show, NULL);
static unsigned int index_dir_to_cpu(struct cache_index_dir *index)
{
struct kobject *index_dir_kobj = &index->kobj;
struct kobject *cache_dir_kobj = index_dir_kobj->parent;
struct kobject *cpu_dev_kobj = cache_dir_kobj->parent;
struct device *dev = kobj_to_dev(cpu_dev_kobj);
return dev->id;
}
/*
* On big-core systems, each core has two groups of CPUs, each of which
* has its own L1-cache. The thread-siblings that share an L1-cache with
* @cpu can be obtained via cpu_smallcore_mask().
*/
static const struct cpumask *get_big_core_shared_cpu_map(int cpu, struct cache *cache)
{
if (cache->level == 1)
return cpu_smallcore_mask(cpu);
return &cache->shared_cpu_map;
}
static ssize_t shared_cpu_map_show(struct kobject *k, struct kobj_attribute *attr, char *buf)
{
struct cache_index_dir *index;
struct cache *cache;
int ret;
const struct cpumask *mask;
int ret, cpu;
index = kobj_to_cache_index_dir(k);
cache = index->cache;
if (has_big_cores) {
cpu = index_dir_to_cpu(index);
mask = get_big_core_shared_cpu_map(cpu, cache);
} else {
mask = &cache->shared_cpu_map;
}
ret = scnprintf(buf, PAGE_SIZE - 1, "%*pb\n",
cpumask_pr_args(&cache->shared_cpu_map));
cpumask_pr_args(mask));
buf[ret++] = '\n';
buf[ret] = '\0';
return ret;


@@ -110,7 +110,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
vaddr = __va(paddr);
csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
} else {
vaddr = __ioremap(paddr, PAGE_SIZE, 0);
vaddr = ioremap_cache(paddr, PAGE_SIZE);
csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
iounmap(vaddr);
}


@@ -169,6 +169,11 @@ static size_t eeh_dump_dev_log(struct eeh_dev *edev, char *buf, size_t len)
int n = 0, l = 0;
char buffer[128];
if (!pdn) {
pr_warn("EEH: Note: No error log for absent device.\n");
return 0;
}
n += scnprintf(buf+n, len-n, "%04x:%02x:%02x.%01x\n",
pdn->phb->global_number, pdn->busno,
PCI_SLOT(pdn->devfn), PCI_FUNC(pdn->devfn));
@@ -399,7 +404,7 @@ static int eeh_phb_check_failure(struct eeh_pe *pe)
}
/* Isolate the PHB and send event */
eeh_pe_state_mark(phb_pe, EEH_PE_ISOLATED);
eeh_pe_mark_isolated(phb_pe);
eeh_serialize_unlock(flags);
pr_err("EEH: PHB#%x failure detected, location: %s\n",
@@ -558,7 +563,7 @@ int eeh_dev_check_failure(struct eeh_dev *edev)
* with other functions on this device, and functions under
* bridges.
*/
eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
eeh_pe_mark_isolated(pe);
eeh_serialize_unlock(flags);
/* Most EEH events are due to device driver bugs. Having
@@ -676,7 +681,7 @@ int eeh_pci_enable(struct eeh_pe *pe, int function)
/* Check if the request is finished successfully */
if (active_flag) {
rc = eeh_ops->wait_state(pe, PCI_BUS_RESET_WAIT_MSEC);
rc = eeh_wait_state(pe, PCI_BUS_RESET_WAIT_MSEC);
if (rc < 0)
return rc;
@@ -825,7 +830,8 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state stat
eeh_pe_state_clear(pe, EEH_PE_ISOLATED);
break;
case pcie_hot_reset:
eeh_pe_state_mark_with_cfg(pe, EEH_PE_ISOLATED);
eeh_pe_mark_isolated(pe);
eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED);
eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE);
eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev);
if (!(pe->type & EEH_PE_VF))
@@ -833,7 +839,8 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state stat
eeh_ops->reset(pe, EEH_RESET_HOT);
break;
case pcie_warm_reset:
eeh_pe_state_mark_with_cfg(pe, EEH_PE_ISOLATED);
eeh_pe_mark_isolated(pe);
eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED);
eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE);
eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev);
if (!(pe->type & EEH_PE_VF))
@@ -913,16 +920,15 @@ int eeh_pe_reset_full(struct eeh_pe *pe)
break;
/* Wait until the PE is in a functioning state */
state = eeh_ops->wait_state(pe, PCI_BUS_RESET_WAIT_MSEC);
if (eeh_state_active(state))
break;
state = eeh_wait_state(pe, PCI_BUS_RESET_WAIT_MSEC);
if (state < 0) {
pr_warn("%s: Unrecoverable slot failure on PHB#%x-PE#%x",
__func__, pe->phb->global_number, pe->addr);
ret = -ENOTRECOVERABLE;
break;
}
if (eeh_state_active(state))
break;
/* Set error in case this is our last attempt */
ret = -EIO;
@@ -1036,6 +1042,11 @@ void eeh_probe_devices(void)
pdn = hose->pci_data;
traverse_pci_dn(pdn, eeh_ops->probe, NULL);
}
if (eeh_enabled())
pr_info("EEH: PCI Enhanced I/O Error Handling Enabled\n");
else
pr_info("EEH: No capable adapters found\n");
}
/**
@@ -1079,18 +1090,7 @@ static int eeh_init(void)
eeh_dev_phb_init_dynamic(hose);
/* Initialize EEH event */
ret = eeh_event_init();
if (ret)
return ret;
eeh_probe_devices();
if (eeh_enabled())
pr_info("EEH: PCI Enhanced I/O Error Handling Enabled\n");
else if (!eeh_has_flag(EEH_POSTPONED_PROBE))
pr_info("EEH: No capable adapters found\n");
return ret;
return eeh_event_init();
}
core_initcall_sync(eeh_init);


@@ -60,8 +60,6 @@ struct eeh_dev *eeh_dev_init(struct pci_dn *pdn)
/* Associate EEH device with OF node */
pdn->edev = edev;
edev->pdn = pdn;
INIT_LIST_HEAD(&edev->list);
INIT_LIST_HEAD(&edev->rmv_list);
return edev;
}


@@ -35,8 +35,8 @@
#include <asm/rtas.h>
struct eeh_rmv_data {
struct list_head edev_list;
int removed;
struct list_head removed_vf_list;
int removed_dev_count;
};
static int eeh_result_priority(enum pci_ers_result result)
@@ -281,6 +281,10 @@ static void eeh_pe_report_edev(struct eeh_dev *edev, eeh_report_fn fn,
struct pci_driver *driver;
enum pci_ers_result new_result;
if (!edev->pdev) {
eeh_edev_info(edev, "no device");
return;
}
device_lock(&edev->pdev->dev);
if (eeh_edev_actionable(edev)) {
driver = eeh_pcid_get(edev->pdev);
@@ -400,7 +404,7 @@ static void *eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
* EEH device is created.
*/
if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED)) {
if (list_is_last(&edev->list, &edev->pe->edevs))
if (list_is_last(&edev->entry, &edev->pe->edevs))
eeh_pe_restore_bars(edev->pe);
return NULL;
@@ -465,10 +469,9 @@ static enum pci_ers_result eeh_report_failure(struct eeh_dev *edev,
return rc;
}
static void *eeh_add_virt_device(void *data, void *userdata)
static void *eeh_add_virt_device(struct eeh_dev *edev)
{
struct pci_driver *driver;
struct eeh_dev *edev = (struct eeh_dev *)data;
struct pci_dev *dev = eeh_dev_to_pci_dev(edev);
struct pci_dn *pdn = eeh_dev_to_pdn(edev);
@@ -499,7 +502,6 @@ static void *eeh_rmv_device(struct eeh_dev *edev, void *userdata)
struct pci_driver *driver;
struct pci_dev *dev = eeh_dev_to_pci_dev(edev);
struct eeh_rmv_data *rmv_data = (struct eeh_rmv_data *)userdata;
int *removed = rmv_data ? &rmv_data->removed : NULL;
/*
* Actually, we should remove the PCI bridges as well.
@@ -521,7 +523,7 @@ static void *eeh_rmv_device(struct eeh_dev *edev, void *userdata)
if (eeh_dev_removed(edev))
return NULL;
if (removed) {
if (rmv_data) {
if (eeh_pe_passed(edev->pe))
return NULL;
driver = eeh_pcid_get(dev);
@@ -539,10 +541,9 @@ static void *eeh_rmv_device(struct eeh_dev *edev, void *userdata)
/* Remove it from PCI subsystem */
pr_debug("EEH: Removing %s without EEH sensitive driver\n",
pci_name(dev));
edev->bus = dev->bus;
edev->mode |= EEH_DEV_DISCONNECTED;
if (removed)
(*removed)++;
if (rmv_data)
rmv_data->removed_dev_count++;
if (edev->physfn) {
#ifdef CONFIG_PCI_IOV
@@ -558,7 +559,7 @@ static void *eeh_rmv_device(struct eeh_dev *edev, void *userdata)
pdn->pe_number = IODA_INVALID_PE;
#endif
if (rmv_data)
list_add(&edev->rmv_list, &rmv_data->edev_list);
list_add(&edev->rmv_entry, &rmv_data->removed_vf_list);
} else {
pci_lock_rescan_remove();
pci_stop_and_remove_bus_device(dev);
@@ -727,7 +728,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
* the device up before the scripts have taken it down,
* potentially weird things happen.
*/
if (!driver_eeh_aware || rmv_data->removed) {
if (!driver_eeh_aware || rmv_data->removed_dev_count) {
pr_info("EEH: Sleep 5s ahead of %s hotplug\n",
(driver_eeh_aware ? "partial" : "complete"));
ssleep(5);
@@ -737,10 +738,10 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
* PE. We should disconnect it so the binding can be
* rebuilt when adding PCI devices.
*/
edev = list_first_entry(&pe->edevs, struct eeh_dev, list);
edev = list_first_entry(&pe->edevs, struct eeh_dev, entry);
eeh_pe_traverse(pe, eeh_pe_detach_dev, NULL);
if (pe->type & EEH_PE_VF) {
eeh_add_virt_device(edev, NULL);
eeh_add_virt_device(edev);
} else {
if (!driver_eeh_aware)
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
@@ -789,7 +790,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
struct eeh_pe *tmp_pe;
int rc = 0;
enum pci_ers_result result = PCI_ERS_RESULT_NONE;
struct eeh_rmv_data rmv_data = {LIST_HEAD_INIT(rmv_data.edev_list), 0};
struct eeh_rmv_data rmv_data =
{LIST_HEAD_INIT(rmv_data.removed_vf_list), 0};
bus = eeh_pe_bus_get(pe);
if (!bus) {
@@ -806,10 +808,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
pr_err("EEH: PHB#%x-PE#%x has failed %d times in the last hour and has been permanently disabled.\n",
pe->phb->global_number, pe->addr,
pe->freeze_count);
goto hard_fail;
result = PCI_ERS_RESULT_DISCONNECT;
}
pr_warn("EEH: This PCI device has failed %d times in the last hour and will be permanently disabled after %d failures.\n",
pe->freeze_count, eeh_max_freezes);
/* Walk the various device drivers attached to this slot through
* a reset sequence, giving each an opportunity to do what it needs
@@ -821,31 +821,39 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
* the error. Override the result if necessary to have partially
* hotplug for this case.
*/
pr_info("EEH: Notify device drivers to shutdown\n");
eeh_set_channel_state(pe, pci_channel_io_frozen);
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(IO frozen)", pe, eeh_report_error,
&result);
if ((pe->type & EEH_PE_PHB) &&
result != PCI_ERS_RESULT_NONE &&
result != PCI_ERS_RESULT_NEED_RESET)
result = PCI_ERS_RESULT_NEED_RESET;
if (result != PCI_ERS_RESULT_DISCONNECT) {
pr_warn("EEH: This PCI device has failed %d times in the last hour and will be permanently disabled after %d failures.\n",
pe->freeze_count, eeh_max_freezes);
pr_info("EEH: Notify device drivers to shutdown\n");
eeh_set_channel_state(pe, pci_channel_io_frozen);
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(IO frozen)", pe,
eeh_report_error, &result);
if ((pe->type & EEH_PE_PHB) &&
result != PCI_ERS_RESULT_NONE &&
result != PCI_ERS_RESULT_NEED_RESET)
result = PCI_ERS_RESULT_NEED_RESET;
}
/* Get the current PCI slot state. This can take a long time,
* sometimes over 300 seconds for certain systems.
*/
rc = eeh_ops->wait_state(pe, MAX_WAIT_FOR_RECOVERY*1000);
if (rc < 0 || rc == EEH_STATE_NOT_SUPPORT) {
pr_warn("EEH: Permanent failure\n");
goto hard_fail;
if (result != PCI_ERS_RESULT_DISCONNECT) {
rc = eeh_wait_state(pe, MAX_WAIT_FOR_RECOVERY*1000);
if (rc < 0 || rc == EEH_STATE_NOT_SUPPORT) {
pr_warn("EEH: Permanent failure\n");
result = PCI_ERS_RESULT_DISCONNECT;
}
}
/* Since rtas may enable MMIO when posting the error log,
* don't post the error log until after all dev drivers
* have been informed.
*/
pr_info("EEH: Collect temporary log\n");
eeh_slot_error_detail(pe, EEH_LOG_TEMP);
if (result != PCI_ERS_RESULT_DISCONNECT) {
pr_info("EEH: Collect temporary log\n");
eeh_slot_error_detail(pe, EEH_LOG_TEMP);
}
/* If all device drivers were EEH-unaware, then shut
* down all of the device drivers, and hope they
@@ -857,7 +865,7 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
if (rc) {
pr_warn("%s: Unable to reset, err=%d\n",
__func__, rc);
goto hard_fail;
result = PCI_ERS_RESULT_DISCONNECT;
}
}
@@ -866,9 +874,9 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
pr_info("EEH: Enable I/O for affected devices\n");
rc = eeh_pci_enable(pe, EEH_OPT_THAW_MMIO);
if (rc < 0)
goto hard_fail;
if (rc) {
if (rc < 0) {
result = PCI_ERS_RESULT_DISCONNECT;
} else if (rc) {
result = PCI_ERS_RESULT_NEED_RESET;
} else {
pr_info("EEH: Notify device drivers to resume I/O\n");
@@ -882,9 +890,9 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
pr_info("EEH: Enabled DMA for affected devices\n");
rc = eeh_pci_enable(pe, EEH_OPT_THAW_DMA);
if (rc < 0)
goto hard_fail;
if (rc) {
if (rc < 0) {
result = PCI_ERS_RESULT_DISCONNECT;
} else if (rc) {
result = PCI_ERS_RESULT_NEED_RESET;
} else {
/*
@@ -897,12 +905,6 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
}
}
/* If any device has a hard failure, then shut off everything. */
if (result == PCI_ERS_RESULT_DISCONNECT) {
pr_warn("EEH: Device driver gave up\n");
goto hard_fail;
}
/* If any device called out for a reset, then reset the slot */
if (result == PCI_ERS_RESULT_NEED_RESET) {
pr_info("EEH: Reset without hotplug activity\n");
@@ -910,88 +912,81 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
if (rc) {
pr_warn("%s: Cannot reset, err=%d\n",
__func__, rc);
goto hard_fail;
result = PCI_ERS_RESULT_DISCONNECT;
} else {
result = PCI_ERS_RESULT_NONE;
eeh_set_channel_state(pe, pci_channel_io_normal);
eeh_set_irq_state(pe, true);
eeh_pe_report("slot_reset", pe, eeh_report_reset,
&result);
}
}
if ((result == PCI_ERS_RESULT_RECOVERED) ||
(result == PCI_ERS_RESULT_NONE)) {
/*
* For those hot removed VFs, we should add back them after PF
* get recovered properly.
*/
list_for_each_entry_safe(edev, tmp, &rmv_data.removed_vf_list,
rmv_entry) {
eeh_add_virt_device(edev);
list_del(&edev->rmv_entry);
}
pr_info("EEH: Notify device drivers "
"the completion of reset\n");
result = PCI_ERS_RESULT_NONE;
/* Tell all device drivers that they can resume operations */
pr_info("EEH: Notify device driver to resume\n");
eeh_set_channel_state(pe, pci_channel_io_normal);
eeh_set_irq_state(pe, true);
eeh_pe_report("slot_reset", pe, eeh_report_reset, &result);
}
eeh_pe_report("resume", pe, eeh_report_resume, NULL);
eeh_for_each_pe(pe, tmp_pe) {
eeh_pe_for_each_dev(tmp_pe, edev, tmp) {
edev->mode &= ~EEH_DEV_NO_HANDLER;
edev->in_error = false;
}
}
/* All devices should claim they have recovered by now. */
if ((result != PCI_ERS_RESULT_RECOVERED) &&
(result != PCI_ERS_RESULT_NONE)) {
pr_warn("EEH: Not recovered\n");
goto hard_fail;
}
pr_info("EEH: Recovery successful.\n");
} else {
/*
* About 90% of all real-life EEH failures in the field
* are due to poorly seated PCI cards. Only 10% or so are
* due to actual, failed cards.
*/
pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
"Please try reseating or replacing it\n",
pe->phb->global_number, pe->addr);
/*
* For those hot removed VFs, we should add back them after PF get
* recovered properly.
*/
list_for_each_entry_safe(edev, tmp, &rmv_data.edev_list, rmv_list) {
eeh_add_virt_device(edev, NULL);
list_del(&edev->rmv_list);
}
eeh_slot_error_detail(pe, EEH_LOG_PERM);
/* Tell all device drivers that they can resume operations */
pr_info("EEH: Notify device driver to resume\n");
eeh_set_channel_state(pe, pci_channel_io_normal);
eeh_set_irq_state(pe, true);
eeh_pe_report("resume", pe, eeh_report_resume, NULL);
eeh_for_each_pe(pe, tmp_pe) {
eeh_pe_for_each_dev(tmp_pe, edev, tmp) {
edev->mode &= ~EEH_DEV_NO_HANDLER;
edev->in_error = false;
/* Notify all devices that they're about to go down. */
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(permanent failure)", pe,
eeh_report_failure, NULL);
/* Mark the PE to be removed permanently */
eeh_pe_state_mark(pe, EEH_PE_REMOVED);
/*
* Shut down the device drivers for good. We mark
* all removed devices correctly to avoid accessing
* their PCI config any more.
*/
if (pe->type & EEH_PE_VF) {
eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
} else {
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
pci_lock_rescan_remove();
pci_hp_remove_devices(bus);
pci_unlock_rescan_remove();
/* The passed PE should no longer be used */
return;
}
}
pr_info("EEH: Recovery successful.\n");
goto final;
hard_fail:
/*
* About 90% of all real-life EEH failures in the field
* are due to poorly seated PCI cards. Only 10% or so are
* due to actual, failed cards.
*/
pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
"Please try reseating or replacing it\n",
pe->phb->global_number, pe->addr);
eeh_slot_error_detail(pe, EEH_LOG_PERM);
/* Notify all devices that they're about to go down. */
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(permanent failure)", pe,
eeh_report_failure, NULL);
/* Mark the PE to be removed permanently */
eeh_pe_state_mark(pe, EEH_PE_REMOVED);
/*
* Shut down the device drivers for good. We mark
* all removed devices correctly to avoid accessing
* their PCI config any more.
*/
if (pe->type & EEH_PE_VF) {
eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
} else {
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
pci_lock_rescan_remove();
pci_hp_remove_devices(bus);
pci_unlock_rescan_remove();
/* The passed PE should no longer be used */
return;
}
final:
eeh_pe_state_clear(pe, EEH_PE_RECOVERING);
}
@@ -1026,7 +1021,7 @@ void eeh_handle_special_event(void)
phb_pe = eeh_phb_pe_get(hose);
if (!phb_pe) continue;
eeh_pe_state_mark(phb_pe, EEH_PE_ISOLATED);
eeh_pe_mark_isolated(phb_pe);
}
eeh_serialize_unlock(flags);
@@ -1041,11 +1036,9 @@ void eeh_handle_special_event(void)
/* Purge all events of the PHB */
eeh_remove_event(pe, true);
if (rc == EEH_NEXT_ERR_DEAD_PHB)
eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
else
eeh_pe_state_mark(pe,
EEH_PE_ISOLATED | EEH_PE_RECOVERING);
if (rc != EEH_NEXT_ERR_DEAD_PHB)
eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
eeh_pe_mark_isolated(pe);
eeh_serialize_unlock(flags);


@@ -75,7 +75,6 @@ static struct eeh_pe *eeh_pe_alloc(struct pci_controller *phb, int type)
pe->type = type;
pe->phb = phb;
INIT_LIST_HEAD(&pe->child_list);
INIT_LIST_HEAD(&pe->child);
INIT_LIST_HEAD(&pe->edevs);
pe->data = (void *)pe + ALIGN(sizeof(struct eeh_pe),
@@ -109,6 +108,57 @@ int eeh_phb_pe_create(struct pci_controller *phb)
return 0;
}
/**
* eeh_wait_state - Wait for PE state
* @pe: EEH PE
* @max_wait: maximal wait period in milliseconds
*
* Wait for the state of the associated PE. Retrieving the state
* might take some time.
*/
int eeh_wait_state(struct eeh_pe *pe, int max_wait)
{
int ret;
int mwait;
/*
* According to PAPR, the state of PE might be temporarily
* unavailable. Under the circumstance, we have to wait
* for indicated time determined by firmware. The maximal
* wait time is 5 minutes, which is acquired from the original
* EEH implementation, which also defined the minimal wait
* time as 1 second.
*/
#define EEH_STATE_MIN_WAIT_TIME (1000)
#define EEH_STATE_MAX_WAIT_TIME (300 * 1000)
while (1) {
ret = eeh_ops->get_state(pe, &mwait);
if (ret != EEH_STATE_UNAVAILABLE)
return ret;
if (max_wait <= 0) {
pr_warn("%s: Timeout when getting PE's state (%d)\n",
__func__, max_wait);
return EEH_STATE_NOT_SUPPORT;
}
if (mwait < EEH_STATE_MIN_WAIT_TIME) {
pr_warn("%s: Firmware returned bad wait value %d\n",
__func__, mwait);
mwait = EEH_STATE_MIN_WAIT_TIME;
} else if (mwait > EEH_STATE_MAX_WAIT_TIME) {
pr_warn("%s: Firmware returned too long wait value %d\n",
__func__, mwait);
mwait = EEH_STATE_MAX_WAIT_TIME;
}
msleep(min(mwait, max_wait));
max_wait -= mwait;
}
}
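Call sites elsewhere in this diff (eeh_pe_reset_full(), eeh_handle_normal_event()) consume the helper as below; a condensed sketch of that pattern, not a new API:

/* Condensed from the eeh_pe_reset_full() hunk earlier in this diff. */
static int wait_for_pe(struct eeh_pe *pe)
{
	int state = eeh_wait_state(pe, PCI_BUS_RESET_WAIT_MSEC);

	if (state < 0)			/* firmware never gave a definite state */
		return -ENOTRECOVERABLE;
	if (eeh_state_active(state))
		return 0;		/* PE is functional again */
	return -EIO;			/* definite but inactive: caller retries */
}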
/**
* eeh_phb_pe_get - Retrieve PHB PE based on the given PHB
* @phb: PCI controller
@@ -360,7 +410,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
edev->pe = pe;
/* Put the edev to PE */
list_add_tail(&edev->list, &pe->edevs);
list_add_tail(&edev->entry, &pe->edevs);
pr_debug("EEH: Add %04x:%02x:%02x.%01x to Bus PE#%x\n",
pdn->phb->global_number,
pdn->busno,
@@ -369,7 +419,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
pe->addr);
return 0;
} else if (pe && (pe->type & EEH_PE_INVALID)) {
list_add_tail(&edev->list, &pe->edevs);
list_add_tail(&edev->entry, &pe->edevs);
edev->pe = pe;
/*
* We're running to here because of PCI hotplug caused by
@@ -379,7 +429,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
while (parent) {
if (!(parent->type & EEH_PE_INVALID))
break;
parent->type &= ~(EEH_PE_INVALID | EEH_PE_KEEP);
parent->type &= ~EEH_PE_INVALID;
parent = parent->parent;
}
@@ -429,7 +479,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
* link the EEH device accordingly.
*/
list_add_tail(&pe->child, &parent->child_list);
list_add_tail(&edev->list, &pe->edevs);
list_add_tail(&edev->entry, &pe->edevs);
edev->pe = pe;
pr_debug("EEH: Add %04x:%02x:%02x.%01x to "
"Device PE#%x, Parent PE#%x\n",
@@ -457,7 +507,8 @@ int eeh_rmv_from_parent_pe(struct eeh_dev *edev)
int cnt;
struct pci_dn *pdn = eeh_dev_to_pdn(edev);
if (!edev->pe) {
pe = eeh_dev_to_pe(edev);
if (!pe) {
pr_debug("%s: No PE found for device %04x:%02x:%02x.%01x\n",
__func__, pdn->phb->global_number,
pdn->busno,
@@ -467,9 +518,8 @@ int eeh_rmv_from_parent_pe(struct eeh_dev *edev)
}
/* Remove the EEH device */
pe = eeh_dev_to_pe(edev);
edev->pe = NULL;
list_del(&edev->list);
list_del(&edev->entry);
/*
* Check if the parent PE includes any EEH devices.
@@ -540,44 +590,6 @@ void eeh_pe_update_time_stamp(struct eeh_pe *pe)
}
}
/**
* __eeh_pe_state_mark - Mark the state for the PE
* @data: EEH PE
* @flag: state
*
* The function is used to mark the indicated state for the given
* PE. Also, the associated PCI devices will be put into IO frozen
* state as well.
*/
static void *__eeh_pe_state_mark(struct eeh_pe *pe, void *flag)
{
int state = *((int *)flag);
struct eeh_dev *edev, *tmp;
struct pci_dev *pdev;
/* Keep the state of permanently removed PE intact */
if (pe->state & EEH_PE_REMOVED)
return NULL;
pe->state |= state;
/* Offline PCI devices if applicable */
if (!(state & EEH_PE_ISOLATED))
return NULL;
eeh_pe_for_each_dev(pe, edev, tmp) {
pdev = eeh_dev_to_pci_dev(edev);
if (pdev)
pdev->error_state = pci_channel_io_frozen;
}
/* Block PCI config access if required */
if (pe->state & EEH_PE_CFG_RESTRICTED)
pe->state |= EEH_PE_CFG_BLOCKED;
return NULL;
}
/**
* eeh_pe_state_mark - Mark specified state for PE and its associated device
* @pe: EEH PE
@@ -586,12 +598,44 @@ static void *__eeh_pe_state_mark(struct eeh_pe *pe, void *flag)
* is used to mark appropriate state for the affected PEs and the
* associated devices.
*/
void eeh_pe_state_mark(struct eeh_pe *pe, int state)
void eeh_pe_state_mark(struct eeh_pe *root, int state)
{
eeh_pe_traverse(pe, __eeh_pe_state_mark, &state);
struct eeh_pe *pe;
eeh_for_each_pe(root, pe)
if (!(pe->state & EEH_PE_REMOVED))
pe->state |= state;
}
EXPORT_SYMBOL_GPL(eeh_pe_state_mark);
/**
* eeh_pe_mark_isolated
* @pe: EEH PE
*
* Record that a PE has been isolated by marking the PE and it's children as
* EEH_PE_ISOLATED (and EEH_PE_CFG_BLOCKED, if required) and their PCI devices
* as pci_channel_io_frozen.
*/
void eeh_pe_mark_isolated(struct eeh_pe *root)
{
struct eeh_pe *pe;
struct eeh_dev *edev;
struct pci_dev *pdev;
eeh_pe_state_mark(root, EEH_PE_ISOLATED);
eeh_for_each_pe(root, pe) {
list_for_each_entry(edev, &pe->edevs, entry) {
pdev = eeh_dev_to_pci_dev(edev);
if (pdev)
pdev->error_state = pci_channel_io_frozen;
}
/* Block PCI config access if required */
if (pe->state & EEH_PE_CFG_RESTRICTED)
pe->state |= EEH_PE_CFG_BLOCKED;
}
}
EXPORT_SYMBOL_GPL(eeh_pe_mark_isolated);
static void *__eeh_pe_dev_mode_mark(struct eeh_dev *edev, void *flag)
{
int mode = *((int *)flag);
@@ -671,28 +715,6 @@ void eeh_pe_state_clear(struct eeh_pe *pe, int state)
eeh_pe_traverse(pe, __eeh_pe_state_clear, &state);
}
/**
* eeh_pe_state_mark_with_cfg - Mark PE state with unblocked config space
* @pe: PE
* @state: PE state to be set
*
* Set specified flag to PE and its child PEs. The PCI config space
* of some PEs is blocked automatically when EEH_PE_ISOLATED is set,
* which isn't needed in some situations. The function allows to set
* the specified flag to indicated PEs without blocking their PCI
* config space.
*/
void eeh_pe_state_mark_with_cfg(struct eeh_pe *pe, int state)
{
eeh_pe_traverse(pe, __eeh_pe_state_mark, &state);
if (!(state & EEH_PE_ISOLATED))
return;
/* Clear EEH_PE_CFG_BLOCKED, which might be set just now */
state = EEH_PE_CFG_BLOCKED;
eeh_pe_traverse(pe, __eeh_pe_state_clear, &state);
}
/*
* Some PCI bridges (e.g. PLX bridges) have primary/secondary
* buses assigned explicitly by firmware, and we probably have
@@ -945,7 +967,7 @@ struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
return pe->bus;
/* Retrieve the parent PCI bus of first (top) PCI device */
edev = list_first_entry_or_null(&pe->edevs, struct eeh_dev, list);
edev = list_first_entry_or_null(&pe->edevs, struct eeh_dev, entry);
pdev = eeh_dev_to_pci_dev(edev);
if (pdev)
return pdev->bus;


@@ -794,7 +794,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_601)
lis r10,MSR_KERNEL@h
ori r10,r10,MSR_KERNEL@l
bl transfer_to_handler_full
.long nonrecoverable_exception
.long unrecoverable_exception
.long ret_from_except
#endif
@@ -1297,7 +1297,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_601)
rlwinm r3,r3,0,0,30
stw r3,_TRAP(r1)
4: addi r3,r1,STACK_FRAME_OVERHEAD
bl nonrecoverable_exception
bl unrecoverable_exception
/* shouldn't return */
b 4b


@@ -171,7 +171,7 @@ system_call: /* label this so stack traces look sane */
* based on caller's run-mode / personality.
*/
ld r11,SYS_CALL_TABLE@toc(2)
andi. r10,r10,_TIF_32BIT
andis. r10,r10,_TIF_32BIT@h
beq 15f
addi r11,r11,8 /* use 32-bit syscall entries */
clrldi r3,r3,32
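The andi./andis. change is needed because andi. ANDs against a 16-bit immediate, covering bits 0-15 only; once _TIF_32BIT moves above bit 15 (to make room for the new PTRACE_SYSEMU flag), the test must use the @h high half. A small C model of the two instructions; the bit position (20) is an assumption for illustration, not taken from the hunk:

#include <assert.h>
#include <stdint.h>

#define TIF_32BIT_BIT	20			/* assumed flag position */
#define _TIF_32BIT	(1u << TIF_32BIT_BIT)

/* andi. rD,rS,imm: AND with a zero-extended 16-bit immediate */
static uint32_t andi_dot(uint32_t rs, uint16_t imm)
{
	return rs & imm;
}

/* andis. rD,rS,imm: AND with the immediate shifted left 16 bits */
static uint32_t andis_dot(uint32_t rs, uint16_t imm)
{
	return rs & ((uint32_t)imm << 16);
}

int main(void)
{
	uint32_t flags = _TIF_32BIT;

	assert(andi_dot(flags, _TIF_32BIT & 0xffff) == 0); /* low half misses */
	assert(andis_dot(flags, _TIF_32BIT >> 16) != 0);   /* @h finds bit 20 */
	return 0;
}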
@@ -386,10 +386,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
4: /* Anything else left to do? */
BEGIN_FTR_SECTION
lis r3,INIT_PPR@highest /* Set thread.ppr = 3 */
ld r10,PACACURRENT(r13)
lis r3,DEFAULT_PPR@highest /* Set default PPR */
sldi r3,r3,32 /* bits 11-13 are used for ppr */
std r3,TASKTHREADPPR(r10)
std r3,_PPR(r1)
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP)
@@ -624,6 +623,10 @@ _GLOBAL(_switch)
addi r6,r4,-THREAD /* Convert THREAD to 'current' */
std r6,PACACURRENT(r13) /* Set new 'current' */
#if defined(CONFIG_STACKPROTECTOR)
ld r6, TASK_CANARY(r6)
std r6, PACA_CANARY(r13)
#endif
ld r8,KSP(r4) /* new stack pointer */
#ifdef CONFIG_PPC_BOOK3S_64
@@ -672,7 +675,9 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
isync
slbie r6
BEGIN_FTR_SECTION
slbie r6 /* Workaround POWER5 < DD2.1 issue */
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
slbmte r7,r0
isync
2:
@@ -936,12 +941,6 @@ fast_exception_return:
andi. r0,r3,MSR_RI
beq- .Lunrecov_restore
/* Load PPR from thread struct before we clear MSR:RI */
BEGIN_FTR_SECTION
ld r2,PACACURRENT(r13)
ld r2,TASKTHREADPPR(r2)
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
/*
* Clear RI before restoring r13. If we are returning to
* userspace and we take an exception after restoring r13,
@@ -962,7 +961,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
andi. r0,r3,MSR_PR
beq 1f
BEGIN_FTR_SECTION
mtspr SPRN_PPR,r2 /* Restore PPR */
/* Restore PPR */
ld r2,_PPR(r1)
mtspr SPRN_PPR,r2
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
ACCOUNT_CPU_USER_EXIT(r13, r2, r4)
REST_GPR(13, r1)
@@ -1118,7 +1119,7 @@ _ASM_NOKPROBE_SYMBOL(fast_exception_return);
_GLOBAL(enter_rtas)
mflr r0
std r0,16(r1)
stdu r1,-RTAS_FRAME_SIZE(r1) /* Save SP and create stack space. */
stdu r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */
/* Because RTAS is running in 32b mode, it clobbers the high order half
* of all registers that it saves. We therefore save those registers
@@ -1250,7 +1251,7 @@ rtas_restore_regs:
ld r8,_DSISR(r1)
mtdsisr r8
addi r1,r1,RTAS_FRAME_SIZE /* Unstack our frame */
addi r1,r1,SWITCH_FRAME_SIZE /* Unstack our frame */
ld r0,16(r1) /* get return address */
mtlr r0
@@ -1261,7 +1262,7 @@ rtas_restore_regs:
_GLOBAL(enter_prom)
mflr r0
std r0,16(r1)
stdu r1,-PROM_FRAME_SIZE(r1) /* Save SP and create stack space */
stdu r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space */
/* Because PROM is running in 32b mode, it clobbers the high order half
* of all registers that it saves. We therefore save those registers
@@ -1318,8 +1319,8 @@ _GLOBAL(enter_prom)
REST_10GPRS(22, r1)
ld r4,_CCR(r1)
mtcr r4
addi r1,r1,PROM_FRAME_SIZE
addi r1,r1,SWITCH_FRAME_SIZE
ld r0,16(r1)
mtlr r0
blr


@@ -244,14 +244,13 @@ EXC_REAL_BEGIN(machine_check, 0x200, 0x100)
SET_SCRATCH0(r13) /* save r13 */
EXCEPTION_PROLOG_0(PACA_EXMC)
BEGIN_FTR_SECTION
b machine_check_powernv_early
b machine_check_common_early
FTR_SECTION_ELSE
b machine_check_pSeries_0
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
EXC_REAL_END(machine_check, 0x200, 0x100)
EXC_VIRT_NONE(0x4200, 0x100)
TRAMP_REAL_BEGIN(machine_check_powernv_early)
BEGIN_FTR_SECTION
TRAMP_REAL_BEGIN(machine_check_common_early)
EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
/*
* Register contents:
@@ -305,7 +304,9 @@ BEGIN_FTR_SECTION
/* Save r9 through r13 from EXMC save area to stack frame. */
EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
mfmsr r11 /* get MSR value */
BEGIN_FTR_SECTION
ori r11,r11,MSR_ME /* turn on ME bit */
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
ori r11,r11,MSR_RI /* turn on RI bit */
LOAD_HANDLER(r12, machine_check_handle_early)
1: mtspr SPRN_SRR0,r12
@@ -324,13 +325,15 @@ BEGIN_FTR_SECTION
andc r11,r11,r10 /* Turn off MSR_ME */
b 1b
b . /* prevent speculative execution */
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
TRAMP_REAL_BEGIN(machine_check_pSeries)
.globl machine_check_fwnmi
machine_check_fwnmi:
SET_SCRATCH0(r13) /* save r13 */
EXCEPTION_PROLOG_0(PACA_EXMC)
BEGIN_FTR_SECTION
b machine_check_common_early
END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
machine_check_pSeries_0:
EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
/*
@@ -440,6 +443,9 @@ EXC_COMMON_BEGIN(machine_check_handle_early)
bl machine_check_early
std r3,RESULT(r1) /* Save result */
ld r12,_MSR(r1)
BEGIN_FTR_SECTION
b 4f
END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
#ifdef CONFIG_PPC_P7_NAP
/*
@@ -463,11 +469,12 @@ EXC_COMMON_BEGIN(machine_check_handle_early)
*/
rldicl. r11,r12,4,63 /* See if MC hit while in HV mode. */
beq 5f
andi. r11,r12,MSR_PR /* See if coming from user. */
4: andi. r11,r12,MSR_PR /* See if coming from user. */
bne 9f /* continue in V mode if we are. */
5:
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
BEGIN_FTR_SECTION
/*
* We are coming from kernel context. Check if we are coming from
* guest. if yes, then we can continue. We will fall through
@@ -476,6 +483,7 @@ EXC_COMMON_BEGIN(machine_check_handle_early)
lbz r11,HSTATE_IN_GUEST(r13)
cmpwi r11,0 /* Check if coming from guest */
bne 9f /* continue if we are. */
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
#endif
/*
* At this point we are not sure about what context we come from.
@@ -510,6 +518,7 @@ EXC_COMMON_BEGIN(machine_check_handle_early)
cmpdi r3,0 /* see if we handled MCE successfully */
beq 1b /* if !handled then panic */
BEGIN_FTR_SECTION
/*
* Return from MC interrupt.
* Queue up the MCE event so that we can log it later, while
@@ -518,10 +527,24 @@ EXC_COMMON_BEGIN(machine_check_handle_early)
bl machine_check_queue_event
MACHINE_CHECK_HANDLER_WINDUP
RFI_TO_USER_OR_KERNEL
FTR_SECTION_ELSE
/*
* pSeries: Return from MC interrupt. Before that, stay on the emergency
* stack and call machine_check_exception to log the MCE event.
*/
LOAD_HANDLER(r10,mce_return)
mtspr SPRN_SRR0,r10
ld r10,PACAKMSR(r13)
mtspr SPRN_SRR1,r10
RFI_TO_KERNEL
b .
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
9:
/* Deliver the machine check to host kernel in V mode. */
MACHINE_CHECK_HANDLER_WINDUP
b machine_check_pSeries
SET_SCRATCH0(r13) /* save r13 */
EXCEPTION_PROLOG_0(PACA_EXMC)
b machine_check_pSeries_0
EXC_COMMON_BEGIN(unrecover_mce)
/* Invoke machine_check_exception to print MCE event and panic. */
@@ -535,6 +558,13 @@ EXC_COMMON_BEGIN(unrecover_mce)
bl unrecoverable_exception
b 1b
EXC_COMMON_BEGIN(mce_return)
/* Invoke machine_check_exception to print MCE event and return. */
addi r3,r1,STACK_FRAME_OVERHEAD
bl machine_check_exception
MACHINE_CHECK_HANDLER_WINDUP
RFI_TO_KERNEL
b .
EXC_REAL(data_access, 0x300, 0x80)
EXC_VIRT(data_access, 0x4300, 0x80, 0x300)
@@ -566,28 +596,36 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80)
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
mr r12,r3 /* save r3 */
mfspr r3,SPRN_DAR
mfspr r11,SPRN_SRR1
crset 4*cr6+eq
BRANCH_TO_COMMON(r10, slb_miss_common)
EXCEPTION_PROLOG(PACA_EXSLB, data_access_slb_common, EXC_STD, KVMTEST_PR, 0x380);
EXC_REAL_END(data_access_slb, 0x380, 0x80)
EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
mr r12,r3 /* save r3 */
mfspr r3,SPRN_DAR
mfspr r11,SPRN_SRR1
crset 4*cr6+eq
BRANCH_TO_COMMON(r10, slb_miss_common)
EXCEPTION_RELON_PROLOG(PACA_EXSLB, data_access_slb_common, EXC_STD, NOTEST, 0x380);
EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
EXC_COMMON_BEGIN(data_access_slb_common)
mfspr r10,SPRN_DAR
std r10,PACA_EXSLB+EX_DAR(r13)
EXCEPTION_PROLOG_COMMON(0x380, PACA_EXSLB)
ld r4,PACA_EXSLB+EX_DAR(r13)
std r4,_DAR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_slb_fault
cmpdi r3,0
bne- 1f
b fast_exception_return
1: /* Error case */
std r3,RESULT(r1)
bl save_nvgprs
RECONCILE_IRQ_STATE(r10, r11)
ld r4,_DAR(r1)
ld r5,RESULT(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b ret_from_except
EXC_REAL(instruction_access, 0x400, 0x80)
EXC_VIRT(instruction_access, 0x4400, 0x80, 0x400)
@@ -610,160 +648,34 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80)
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
mr r12,r3 /* save r3 */
mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
mfspr r11,SPRN_SRR1
crclr 4*cr6+eq
BRANCH_TO_COMMON(r10, slb_miss_common)
EXCEPTION_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, KVMTEST_PR, 0x480);
EXC_REAL_END(instruction_access_slb, 0x480, 0x80)
EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
SET_SCRATCH0(r13)
EXCEPTION_PROLOG_0(PACA_EXSLB)
EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
mr r12,r3 /* save r3 */
mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
mfspr r11,SPRN_SRR1
crclr 4*cr6+eq
BRANCH_TO_COMMON(r10, slb_miss_common)
EXCEPTION_RELON_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, NOTEST, 0x480);
EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
TRAMP_KVM(PACA_EXSLB, 0x480)
/*
* This handler is used by the 0x380 and 0x480 SLB miss interrupts, as well as
* the virtual mode 0x4380 and 0x4480 interrupts if AIL is enabled.
*/
EXC_COMMON_BEGIN(slb_miss_common)
/*
* r13 points to the PACA, r9 contains the saved CR,
* r12 contains the saved r3,
* r11 contains the saved SRR1, SRR0 is still ready for return
* r3 has the faulting address
* r9 - r13 are saved in paca->exslb.
* cr6.eq is set for a D-SLB miss, clear for an I-SLB miss
* We assume we aren't going to take any exceptions during this
* procedure.
*/
mflr r10
stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
andi. r9,r11,MSR_PR // Check for exception from userspace
cmpdi cr4,r9,MSR_PR // And save the result in CR4 for later
/*
* Test MSR_RI before calling slb_allocate_realmode, because the
* MSR in r11 gets clobbered. However we still want to allocate
* SLB in case MSR_RI=0, to minimise the risk of getting stuck in
* recursive SLB faults. So use cr5 for this, which is preserved.
*/
andi. r11,r11,MSR_RI /* check for unrecoverable exception */
cmpdi cr5,r11,MSR_RI
crset 4*cr0+eq
#ifdef CONFIG_PPC_BOOK3S_64
BEGIN_MMU_FTR_SECTION
bl slb_allocate
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX)
#endif
ld r10,PACA_EXSLB+EX_LR(r13)
lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
mtlr r10
/*
* Large address, check whether we have to allocate new contexts.
*/
beq- 8f
bne- cr5,2f /* if unrecoverable exception, oops */
/* All done -- return from exception. */
bne cr4,1f /* returning to kernel */
mtcrf 0x80,r9
mtcrf 0x08,r9 /* MSR[PR] indication is in cr4 */
mtcrf 0x04,r9 /* MSR[RI] indication is in cr5 */
mtcrf 0x02,r9 /* I/D indication is in cr6 */
mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
RESTORE_CTR(r9, PACA_EXSLB)
RESTORE_PPR_PACA(PACA_EXSLB, r9)
mr r3,r12
ld r9,PACA_EXSLB+EX_R9(r13)
ld r10,PACA_EXSLB+EX_R10(r13)
ld r11,PACA_EXSLB+EX_R11(r13)
ld r12,PACA_EXSLB+EX_R12(r13)
ld r13,PACA_EXSLB+EX_R13(r13)
RFI_TO_USER
b . /* prevent speculative execution */
1:
mtcrf 0x80,r9
mtcrf 0x08,r9 /* MSR[PR] indication is in cr4 */
mtcrf 0x04,r9 /* MSR[RI] indication is in cr5 */
mtcrf 0x02,r9 /* I/D indication is in cr6 */
mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
RESTORE_CTR(r9, PACA_EXSLB)
RESTORE_PPR_PACA(PACA_EXSLB, r9)
mr r3,r12
ld r9,PACA_EXSLB+EX_R9(r13)
ld r10,PACA_EXSLB+EX_R10(r13)
ld r11,PACA_EXSLB+EX_R11(r13)
ld r12,PACA_EXSLB+EX_R12(r13)
ld r13,PACA_EXSLB+EX_R13(r13)
RFI_TO_KERNEL
b . /* prevent speculative execution */
2: std r3,PACA_EXSLB+EX_DAR(r13)
mr r3,r12
mfspr r11,SPRN_SRR0
mfspr r12,SPRN_SRR1
LOAD_HANDLER(r10,unrecov_slb)
mtspr SPRN_SRR0,r10
ld r10,PACAKMSR(r13)
mtspr SPRN_SRR1,r10
RFI_TO_KERNEL
b .
8: std r3,PACA_EXSLB+EX_DAR(r13)
mr r3,r12
mfspr r11,SPRN_SRR0
mfspr r12,SPRN_SRR1
LOAD_HANDLER(r10, large_addr_slb)
mtspr SPRN_SRR0,r10
ld r10,PACAKMSR(r13)
mtspr SPRN_SRR1,r10
RFI_TO_KERNEL
b .
EXC_COMMON_BEGIN(unrecov_slb)
EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
RECONCILE_IRQ_STATE(r10, r11)
EXC_COMMON_BEGIN(instruction_access_slb_common)
EXCEPTION_PROLOG_COMMON(0x480, PACA_EXSLB)
ld r4,_NIP(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_slb_fault
cmpdi r3,0
bne- 1f
b fast_exception_return
1: /* Error case */
std r3,RESULT(r1)
bl save_nvgprs
1: addi r3,r1,STACK_FRAME_OVERHEAD
bl unrecoverable_exception
b 1b
EXC_COMMON_BEGIN(large_addr_slb)
EXCEPTION_PROLOG_COMMON(0x380, PACA_EXSLB)
RECONCILE_IRQ_STATE(r10, r11)
ld r3, PACA_EXSLB+EX_DAR(r13)
std r3, _DAR(r1)
beq cr6, 2f
li r10, 0x481 /* fix trap number for I-SLB miss */
std r10, _TRAP(r1)
2: bl save_nvgprs
addi r3, r1, STACK_FRAME_OVERHEAD
bl slb_miss_large_addr
ld r4,_NIP(r1)
ld r5,RESULT(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b ret_from_except
EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100)
.globl hardware_interrupt_hv;
hardware_interrupt_hv:


@@ -1444,8 +1444,8 @@ static ssize_t fadump_register_store(struct kobject *kobj,
break;
case 1:
if (fw_dump.dump_registered == 1) {
ret = -EEXIST;
goto unlock_out;
/* Un-register Firmware-assisted dump */
fadump_unregister_dump(&fdm);
}
/* Register Firmware-assisted dump */
ret = register_fadump();


@@ -642,7 +642,7 @@ DTLBMissIMMR:
mtspr SPRN_MD_TWC, r10
mfspr r10, SPRN_IMMR /* Get current IMMR */
rlwinm r10, r10, 0, 0xfff80000 /* Get 512 kbytes boundary */
ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
_PAGE_PRESENT | _PAGE_NO_CACHE
mtspr SPRN_MD_RPN, r10 /* Update TLB entry */
@@ -660,7 +660,7 @@ DTLBMissLinear:
li r11, MD_PS8MEG | MD_SVALID | M_APG2
mtspr SPRN_MD_TWC, r11
rlwinm r10, r10, 0, 0x0f800000 /* 8xx supports max 256Mb RAM */
ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
ori r10, r10, 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
_PAGE_PRESENT
mtspr SPRN_MD_RPN, r10 /* Update TLB entry */
@@ -679,7 +679,7 @@ ITLBMissLinear:
li r11, MI_PS8MEG | MI_SVALID | M_APG2
mtspr SPRN_MI_TWC, r11
rlwinm r10, r10, 0, 0x0f800000 /* 8xx supports max 256Mb RAM */
ori r10, r10, 0xf0 | MI_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
ori r10, r10, 0xf0 | MI_SPS16K | _PAGE_SH | _PAGE_DIRTY | \
_PAGE_PRESENT
mtspr SPRN_MI_RPN, r10 /* Update TLB entry */


@@ -153,10 +153,10 @@ static const struct ppc_pci_io iowa_pci_io = {
#ifdef CONFIG_PPC_INDIRECT_MMIO
static void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size,
unsigned long flags, void *caller)
pgprot_t prot, void *caller)
{
struct iowa_bus *bus;
void __iomem *res = __ioremap_caller(addr, size, flags, caller);
void __iomem *res = __ioremap_caller(addr, size, prot, caller);
int busno;
bus = iowa_pci_find(0, (unsigned long)addr);


@@ -785,9 +785,9 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
vaddr = page_address(page) + offset;
uaddr = (unsigned long)vaddr;
npages = iommu_num_pages(uaddr, size, IOMMU_PAGE_SIZE(tbl));
if (tbl) {
npages = iommu_num_pages(uaddr, size, IOMMU_PAGE_SIZE(tbl));
align = 0;
if (tbl->it_page_shift < PAGE_SHIFT && size >= PAGE_SIZE &&
((unsigned long)vaddr & ~PAGE_MASK) == 0)


@@ -110,14 +110,14 @@ static void pci_process_ISA_OF_ranges(struct device_node *isa_node,
size = 0x10000;
__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
size, pgprot_val(pgprot_noncached(__pgprot(0))));
size, pgprot_noncached(PAGE_KERNEL));
return;
inval_range:
printk(KERN_ERR "no ISA IO ranges or unexpected isa range, "
"mapping 64k\n");
__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
0x10000, pgprot_val(pgprot_noncached(__pgprot(0))));
0x10000, pgprot_noncached(PAGE_KERNEL));
}
@@ -253,7 +253,7 @@ void __init isa_bridge_init_non_pci(struct device_node *np)
*/
isa_io_base = ISA_IO_BASE;
__ioremap_at(pbase, (void *)ISA_IO_BASE,
size, pgprot_val(pgprot_noncached(__pgprot(0))));
size, pgprot_noncached(PAGE_KERNEL));
pr_debug("ISA: Non-PCI bridge is %pOF\n", np);
}


@@ -24,6 +24,7 @@
#include <asm/processor.h>
#include <asm/machdep.h>
#include <asm/debug.h>
#include <asm/code-patching.h>
#include <linux/slab.h>
/*
@@ -144,7 +145,7 @@ static int kgdb_handle_breakpoint(struct pt_regs *regs)
if (kgdb_handle_exception(1, SIGTRAP, 0, regs) != 0)
return 0;
if (*(u32 *) (regs->nip) == *(u32 *) (&arch_kgdb_ops.gdb_bpt_instr))
if (*(u32 *)regs->nip == BREAK_INSTR)
regs->nip += BREAK_INSTR_SIZE;
return 1;
@@ -441,16 +442,42 @@ int kgdb_arch_handle_exception(int vector, int signo, int err_code,
return -1;
}
int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
{
int err;
unsigned int instr;
unsigned int *addr = (unsigned int *)bpt->bpt_addr;
err = probe_kernel_address(addr, instr);
if (err)
return err;
err = patch_instruction(addr, BREAK_INSTR);
if (err)
return -EFAULT;
*(unsigned int *)bpt->saved_instr = instr;
return 0;
}
int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
{
int err;
unsigned int instr = *(unsigned int *)bpt->saved_instr;
unsigned int *addr = (unsigned int *)bpt->bpt_addr;
err = patch_instruction(addr, instr);
if (err)
return -EFAULT;
return 0;
}
/*
* Global data
*/
struct kgdb_arch arch_kgdb_ops = {
#ifdef __LITTLE_ENDIAN__
.gdb_bpt_instr = {0x08, 0x10, 0x82, 0x7d},
#else
.gdb_bpt_instr = {0x7d, 0x82, 0x10, 0x08},
#endif
};
struct kgdb_arch arch_kgdb_ops;
static int kgdb_not_implemented(struct pt_regs *regs)
{


@@ -488,10 +488,11 @@ long machine_check_early(struct pt_regs *regs)
{
long handled = 0;
__this_cpu_inc(irq_stat.mce_exceptions);
if (cur_cpu_spec && cur_cpu_spec->machine_check_early)
handled = cur_cpu_spec->machine_check_early(regs);
/*
* See if platform is capable of handling machine check.
*/
if (ppc_md.machine_check_early)
handled = ppc_md.machine_check_early(regs);
return handled;
}


@@ -60,7 +60,7 @@ static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
/* flush SLBs and reload */
#ifdef CONFIG_PPC_BOOK3S_64
static void flush_and_reload_slb(void)
void flush_and_reload_slb(void)
{
/* Invalidate all SLBs */
slb_flush_all_realmode();
@@ -89,6 +89,13 @@ static void flush_and_reload_slb(void)
static void flush_erat(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
flush_and_reload_slb();
return;
}
#endif
/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
}


@@ -74,6 +74,14 @@ int module_finalize(const Elf_Ehdr *hdr,
(void *)sect->sh_addr + sect->sh_size);
#endif /* CONFIG_PPC64 */
#ifdef PPC64_ELF_ABI_v1
sect = find_section(hdr, sechdrs, ".opd");
if (sect != NULL) {
me->arch.start_opd = sect->sh_addr;
me->arch.end_opd = sect->sh_addr + sect->sh_size;
}
#endif /* PPC64_ELF_ABI_v1 */
#ifdef CONFIG_PPC_BARRIER_NOSPEC
sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
if (sect != NULL)


@@ -360,11 +360,6 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
else if (strcmp(secstrings+sechdrs[i].sh_name,"__versions")==0)
dedotify_versions((void *)hdr + sechdrs[i].sh_offset,
sechdrs[i].sh_size);
else if (!strcmp(secstrings + sechdrs[i].sh_name, ".opd")) {
me->arch.start_opd = sechdrs[i].sh_addr;
me->arch.end_opd = sechdrs[i].sh_addr +
sechdrs[i].sh_size;
}
/* We don't handle .init for the moment: rename to _init */
while ((p = strstr(secstrings + sechdrs[i].sh_name, ".init")))
@@ -685,7 +680,14 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
case R_PPC64_REL32:
/* 32 bits relative (used by relative exception tables) */
*(u32 *)location = value - (unsigned long)location;
/* Convert value to relative */
value -= (unsigned long)location;
if (value + 0x80000000 > 0xffffffff) {
pr_err("%s: REL32 %li out of range!\n",
me->name, (long int)value);
return -ENOEXEC;
}
*(u32 *)location = value;
break;
case R_PPC64_TOCSAVE:
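The new REL32 bounds check uses the standard unsigned-wraparound idiom: value + 0x80000000 stays within 0xffffffff exactly when value fits a signed 32-bit displacement. A worked demonstration:

#include <assert.h>
#include <stdint.h>

/* Mirrors the range check in the hunk above. */
static int fits_s32(uint64_t value)
{
	return value + 0x80000000ULL <= 0xffffffffULL;
}

int main(void)
{
	assert(fits_s32(0x7fffffffULL));		/* INT32_MAX fits */
	assert(fits_s32((uint64_t)-0x80000000LL));	/* INT32_MIN fits */
	assert(!fits_s32(0x80000000ULL));		/* one past: rejected */
	assert(!fits_s32((uint64_t)-0x80000001LL));	/* one below: rejected */
	return 0;
}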


@@ -17,7 +17,6 @@
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/syscalls.h>
#include <asm/processor.h>
#include <asm/io.h>


@@ -159,7 +159,7 @@ static int pcibios_map_phb_io_space(struct pci_controller *hose)
/* Establish the mapping */
if (__ioremap_at(phys_page, area->addr, size_page,
pgprot_val(pgprot_noncached(__pgprot(0)))) == NULL)
pgprot_noncached(PAGE_KERNEL)) == NULL)
return -ENOMEM;
/* Fixup hose IO resource */


@@ -43,6 +43,7 @@
#include <linux/uaccess.h>
#include <linux/elf-randomize.h>
#include <linux/pkeys.h>
#include <linux/seq_buf.h>
#include <asm/pgtable.h>
#include <asm/io.h>
@@ -65,6 +66,7 @@
#include <asm/livepatch.h>
#include <asm/cpu_has_feature.h>
#include <asm/asm-prototypes.h>
#include <asm/stacktrace.h>
#include <linux/kprobes.h>
#include <linux/kdebug.h>
@@ -102,24 +104,18 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
}
}
static inline bool msr_tm_active(unsigned long msr)
{
return MSR_TM_ACTIVE(msr);
}
static bool tm_active_with_fp(struct task_struct *tsk)
{
return msr_tm_active(tsk->thread.regs->msr) &&
return MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
(tsk->thread.ckpt_regs.msr & MSR_FP);
}
static bool tm_active_with_altivec(struct task_struct *tsk)
{
return msr_tm_active(tsk->thread.regs->msr) &&
return MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
(tsk->thread.ckpt_regs.msr & MSR_VEC);
}
#else
static inline bool msr_tm_active(unsigned long msr) { return false; }
static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
static inline bool tm_active_with_fp(struct task_struct *tsk) { return false; }
static inline bool tm_active_with_altivec(struct task_struct *tsk) { return false; }
@@ -247,7 +243,8 @@ void enable_kernel_fp(void)
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
if (!MSR_TM_ACTIVE(cpumsr) &&
MSR_TM_ACTIVE(current->thread.regs->msr))
return;
__giveup_fpu(current);
}
@@ -311,7 +308,8 @@ void enable_kernel_altivec(void)
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
if (!MSR_TM_ACTIVE(cpumsr) &&
MSR_TM_ACTIVE(current->thread.regs->msr))
return;
__giveup_altivec(current);
}
@@ -397,7 +395,8 @@ void enable_kernel_vsx(void)
* giveup as this would save to the 'live' structure not the
* checkpointed structure.
*/
if(!msr_tm_active(cpumsr) && msr_tm_active(current->thread.regs->msr))
if (!MSR_TM_ACTIVE(cpumsr) &&
MSR_TM_ACTIVE(current->thread.regs->msr))
return;
__giveup_vsx(current);
}
@@ -530,7 +529,7 @@ void restore_math(struct pt_regs *regs)
{
unsigned long msr;
if (!msr_tm_active(regs->msr) &&
if (!MSR_TM_ACTIVE(regs->msr) &&
!current->thread.load_fp && !loadvec(current->thread))
return;
@@ -1252,17 +1251,16 @@ struct task_struct *__switch_to(struct task_struct *prev,
return last;
}
static int instructions_to_print = 16;
#define NR_INSN_TO_PRINT 16
static void show_instructions(struct pt_regs *regs)
{
int i;
unsigned long pc = regs->nip - (instructions_to_print * 3 / 4 *
sizeof(int));
unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
printk("Instruction dump:");
for (i = 0; i < instructions_to_print; i++) {
for (i = 0; i < NR_INSN_TO_PRINT; i++) {
int instr;
if (!(i % 8))
@@ -1277,7 +1275,7 @@ static void show_instructions(struct pt_regs *regs)
#endif
if (!__kernel_text_address(pc) ||
probe_kernel_address((unsigned int __user *)pc, instr)) {
probe_kernel_address((const void *)pc, instr)) {
pr_cont("XXXXXXXX ");
} else {
if (regs->nip == pc)
@@ -1295,43 +1293,43 @@ static void show_instructions(struct pt_regs *regs)
void show_user_instructions(struct pt_regs *regs)
{
unsigned long pc;
int i;
int n = NR_INSN_TO_PRINT;
struct seq_buf s;
char buf[96]; /* enough for 8 times 9 + 2 chars */
pc = regs->nip - (instructions_to_print * 3 / 4 * sizeof(int));
pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
/*
* Make sure the NIP points at userspace, not kernel text/data or
* elsewhere.
*/
if (!__access_ok(pc, instructions_to_print * sizeof(int), USER_DS)) {
if (!__access_ok(pc, NR_INSN_TO_PRINT * sizeof(int), USER_DS)) {
pr_info("%s[%d]: Bad NIP, not dumping instructions.\n",
current->comm, current->pid);
return;
}
pr_info("%s[%d]: code: ", current->comm, current->pid);
seq_buf_init(&s, buf, sizeof(buf));
for (i = 0; i < instructions_to_print; i++) {
int instr;
while (n) {
int i;
if (!(i % 8) && (i > 0)) {
pr_cont("\n");
pr_info("%s[%d]: code: ", current->comm, current->pid);
seq_buf_clear(&s);
for (i = 0; i < 8 && n; i++, n--, pc += sizeof(int)) {
int instr;
if (probe_kernel_address((const void *)pc, instr)) {
seq_buf_printf(&s, "XXXXXXXX ");
continue;
}
seq_buf_printf(&s, regs->nip == pc ? "<%08x> " : "%08x ", instr);
}
if (probe_kernel_address((unsigned int __user *)pc, instr)) {
pr_cont("XXXXXXXX ");
} else {
if (regs->nip == pc)
pr_cont("<%08x> ", instr);
else
pr_cont("%08x ", instr);
}
pc += sizeof(int);
if (!seq_buf_has_overflowed(&s))
pr_info("%s[%d]: code: %s\n", current->comm,
current->pid, s.buffer);
}
pr_cont("\n");
}
struct regbit {
@@ -1485,6 +1483,15 @@ void flush_thread(void)
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
}
#ifdef CONFIG_PPC_BOOK3S_64
void arch_setup_new_exec(void)
{
if (radix_enabled())
return;
hash__setup_new_exec();
}
#endif
int set_thread_uses_vas(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
@@ -1705,7 +1712,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
p->thread.dscr = mfspr(SPRN_DSCR);
}
if (cpu_has_feature(CPU_FTR_HAS_PPR))
p->thread.ppr = INIT_PPR;
childregs->ppr = DEFAULT_PPR;
p->thread.tidr = 0;
#endif
@@ -1713,6 +1720,8 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
return 0;
}
void preload_new_slb_context(unsigned long start, unsigned long sp);
/*
* Set up a thread for executing a new program
*/
@@ -1720,6 +1729,10 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
{
#ifdef CONFIG_PPC64
unsigned long load_addr = regs->gpr[2]; /* saved by ELF_PLAT_INIT */
#ifdef CONFIG_PPC_BOOK3S_64
preload_new_slb_context(start, sp);
#endif
#endif
/*
@@ -1810,6 +1823,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
#ifdef CONFIG_VSX
current->thread.used_vsr = 0;
#endif
current->thread.load_slb = 0;
current->thread.load_fp = 0;
memset(&current->thread.fp_state, 0, sizeof(current->thread.fp_state));
current->thread.fp_save_area = NULL;


@@ -43,11 +43,13 @@
#include <asm/btext.h>
#include <asm/sections.h>
#include <asm/machdep.h>
#include <asm/opal.h>
#include <asm/asm-prototypes.h>
#include <linux/linux_logo.h>
/* All of prom_init bss lives here */
#define __prombss __section(.bss.prominit)
/*
* Eventually bump that one up
*/
@@ -87,7 +89,7 @@
#define OF_WORKAROUNDS 0
#else
#define OF_WORKAROUNDS of_workarounds
int of_workarounds;
static int of_workarounds __prombss;
#endif
#define OF_WA_CLAIM 1 /* do phys/virt claim separately, then map */
@@ -148,29 +150,31 @@ extern void copy_and_flush(unsigned long dest, unsigned long src,
unsigned long size, unsigned long offset);
/* prom structure */
static struct prom_t __initdata prom;
static struct prom_t __prombss prom;
static unsigned long prom_entry __initdata;
static unsigned long __prombss prom_entry;
#define PROM_SCRATCH_SIZE 256
static char __initdata of_stdout_device[256];
static char __initdata prom_scratch[PROM_SCRATCH_SIZE];
static char __prombss of_stdout_device[256];
static char __prombss prom_scratch[PROM_SCRATCH_SIZE];
static unsigned long __initdata dt_header_start;
static unsigned long __initdata dt_struct_start, dt_struct_end;
static unsigned long __initdata dt_string_start, dt_string_end;
static unsigned long __prombss dt_header_start;
static unsigned long __prombss dt_struct_start, dt_struct_end;
static unsigned long __prombss dt_string_start, dt_string_end;
static unsigned long __initdata prom_initrd_start, prom_initrd_end;
static unsigned long __prombss prom_initrd_start, prom_initrd_end;
#ifdef CONFIG_PPC64
static int __initdata prom_iommu_force_on;
static int __initdata prom_iommu_off;
static unsigned long __initdata prom_tce_alloc_start;
static unsigned long __initdata prom_tce_alloc_end;
static int __prombss prom_iommu_force_on;
static int __prombss prom_iommu_off;
static unsigned long __prombss prom_tce_alloc_start;
static unsigned long __prombss prom_tce_alloc_end;
#endif
static bool prom_radix_disable __initdata = !IS_ENABLED(CONFIG_PPC_RADIX_MMU_DEFAULT);
#ifdef CONFIG_PPC_PSERIES
static bool __prombss prom_radix_disable;
#endif
struct platform_support {
bool hash_mmu;
@@ -188,26 +192,25 @@ struct platform_support {
#define PLATFORM_LPAR 0x0001
#define PLATFORM_POWERMAC 0x0400
#define PLATFORM_GENERIC 0x0500
#define PLATFORM_OPAL 0x0600
static int __initdata of_platform;
static int __prombss of_platform;
static char __initdata prom_cmd_line[COMMAND_LINE_SIZE];
static char __prombss prom_cmd_line[COMMAND_LINE_SIZE];
static unsigned long __initdata prom_memory_limit;
static unsigned long __prombss prom_memory_limit;
static unsigned long __initdata alloc_top;
static unsigned long __initdata alloc_top_high;
static unsigned long __initdata alloc_bottom;
static unsigned long __initdata rmo_top;
static unsigned long __initdata ram_top;
static unsigned long __prombss alloc_top;
static unsigned long __prombss alloc_top_high;
static unsigned long __prombss alloc_bottom;
static unsigned long __prombss rmo_top;
static unsigned long __prombss ram_top;
static struct mem_map_entry __initdata mem_reserve_map[MEM_RESERVE_MAP_SIZE];
static int __initdata mem_reserve_cnt;
static struct mem_map_entry __prombss mem_reserve_map[MEM_RESERVE_MAP_SIZE];
static int __prombss mem_reserve_cnt;
static cell_t __initdata regbuf[1024];
static cell_t __prombss regbuf[1024];
static bool rtas_has_query_cpu_stopped;
static bool __prombss rtas_has_query_cpu_stopped;
/*
@ -522,8 +525,8 @@ static void add_string(char **str, const char *q)
static char *tohex(unsigned int x)
{
static char digits[] = "0123456789abcdef";
static char result[9];
static const char digits[] __initconst = "0123456789abcdef";
static char result[9] __prombss;
int i;
result[8] = 0;
@ -664,6 +667,8 @@ static void __init early_cmdline_parse(void)
#endif
}
#ifdef CONFIG_PPC_PSERIES
prom_radix_disable = !IS_ENABLED(CONFIG_PPC_RADIX_MMU_DEFAULT);
opt = strstr(prom_cmd_line, "disable_radix");
if (opt) {
opt += 13;
@ -679,9 +684,10 @@ static void __init early_cmdline_parse(void)
}
if (prom_radix_disable)
prom_debug("Radix disabled from cmdline\n");
#endif /* CONFIG_PPC_PSERIES */
}
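The "opt += 13" above just skips the length of "disable_radix" before looking for an optional "=value". A standalone sketch of the same strstr-based parsing, with prom_strtobool approximated by simple string comparisons (an assumption for the example):

#include <stdbool.h>
#include <string.h>

/* Parse "disable_radix" or "disable_radix=<bool>" out of a command line. */
static bool parse_disable_radix(const char *cmdline)
{
	const char *opt = strstr(cmdline, "disable_radix");
	bool disable = false;

	if (opt) {
		opt += strlen("disable_radix");	/* the 13 skipped above */
		if (*opt == '=')		/* explicit value given */
			disable = strncmp(opt + 1, "false", 5) != 0 &&
				  strncmp(opt + 1, "0", 1) != 0;
		else
			disable = true;		/* bare flag means "disable" */
	}
	return disable;
}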
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
#ifdef CONFIG_PPC_PSERIES
/*
* The architecture vector has an array of PVR mask/value pairs,
* followed by # option vectors - 1, followed by the option vectors.
@ -782,7 +788,7 @@ struct ibm_arch_vec {
struct option_vector6 vec6;
} __packed;
struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
static const struct ibm_arch_vec ibm_architecture_vec_template __initconst = {
.pvrs = {
{
.mask = cpu_to_be32(0xfffe0000), /* POWER5/POWER5+ */
@ -920,9 +926,11 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
},
};
static struct ibm_arch_vec __prombss ibm_architecture_vec ____cacheline_aligned;
/* Old method - ELF header with PT_NOTE sections only works on BE */
#ifdef __BIG_ENDIAN__
static struct fake_elf {
static const struct fake_elf {
Elf32_Ehdr elfhdr;
Elf32_Phdr phdr[2];
struct chrpnote {
@ -955,7 +963,7 @@ static struct fake_elf {
u32 ignore_me;
} rpadesc;
} rpanote;
} fake_elf = {
} fake_elf __initconst = {
.elfhdr = {
.e_ident = { 0x7f, 'E', 'L', 'F',
ELFCLASS32, ELFDATA2MSB, EV_CURRENT },
@ -1129,14 +1137,21 @@ static void __init prom_check_platform_support(void)
};
int prop_len = prom_getproplen(prom.chosen,
"ibm,arch-vec-5-platform-support");
/* First copy the architecture vec template */
ibm_architecture_vec = ibm_architecture_vec_template;
if (prop_len > 1) {
int i;
u8 vec[prop_len];
u8 vec[8];
prom_debug("Found ibm,arch-vec-5-platform-support, len: %d\n",
prop_len);
if (prop_len > sizeof(vec))
prom_printf("WARNING: ibm,arch-vec-5-platform-support longer than expected (len: %d)\n",
prop_len);
prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support",
&vec, sizeof(vec));
for (i = 0; i < prop_len; i += 2) {
for (i = 0; i < sizeof(vec); i += 2) {
prom_debug("%d: index = 0x%x val = 0x%x\n", i / 2
, vec[i]
, vec[i + 1]);
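The new fixed-size vec[8] replaces a variable-length array on the stack; the property is read into the bounded buffer and anything longer is merely reported. The general shape of that pattern, as a hedged standalone sketch (memcpy stands in for prom_getprop, which copies at most the length it is given):

#include <stdio.h>
#include <string.h>

/* Read a property of prop_len bytes into a fixed buffer, truncating. */
static void read_platform_support(const unsigned char *prop, int prop_len)
{
	unsigned char vec[8];
	int len = prop_len < (int)sizeof(vec) ? prop_len : (int)sizeof(vec);

	if (prop_len > (int)sizeof(vec))
		fprintf(stderr, "WARNING: property longer than expected (%d)\n",
			prop_len);
	memcpy(vec, prop, len);			/* bounded firmware copy */
	for (int i = 0; i + 1 < len; i += 2)	/* (index, value) pairs */
		printf("%d: index = 0x%x val = 0x%x\n",
		       i / 2, vec[i], vec[i + 1]);
}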
@ -1225,7 +1240,7 @@ static void __init prom_send_capabilities(void)
}
#endif /* __BIG_ENDIAN__ */
}
#endif /* #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
#endif /* CONFIG_PPC_PSERIES */
/*
* Memory allocation strategy... our layout is normally:
@ -1562,88 +1577,6 @@ static void __init prom_close_stdin(void)
}
}
#ifdef CONFIG_PPC_POWERNV
#ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
static u64 __initdata prom_opal_base;
static u64 __initdata prom_opal_entry;
#endif
/*
* Allocate room for and instantiate OPAL
*/
static void __init prom_instantiate_opal(void)
{
phandle opal_node;
ihandle opal_inst;
u64 base, entry;
u64 size = 0, align = 0x10000;
__be64 val64;
u32 rets[2];
prom_debug("prom_instantiate_opal: start...\n");
opal_node = call_prom("finddevice", 1, 1, ADDR("/ibm,opal"));
prom_debug("opal_node: %x\n", opal_node);
if (!PHANDLE_VALID(opal_node))
return;
val64 = 0;
prom_getprop(opal_node, "opal-runtime-size", &val64, sizeof(val64));
size = be64_to_cpu(val64);
if (size == 0)
return;
val64 = 0;
prom_getprop(opal_node, "opal-runtime-alignment", &val64,sizeof(val64));
align = be64_to_cpu(val64);
base = alloc_down(size, align, 0);
if (base == 0) {
prom_printf("OPAL allocation failed !\n");
return;
}
opal_inst = call_prom("open", 1, 1, ADDR("/ibm,opal"));
if (!IHANDLE_VALID(opal_inst)) {
prom_printf("opening opal package failed (%x)\n", opal_inst);
return;
}
prom_printf("instantiating opal at 0x%llx...", base);
if (call_prom_ret("call-method", 4, 3, rets,
ADDR("load-opal-runtime"),
opal_inst,
base >> 32, base & 0xffffffff) != 0
|| (rets[0] == 0 && rets[1] == 0)) {
prom_printf(" failed\n");
return;
}
entry = (((u64)rets[0]) << 32) | rets[1];
prom_printf(" done\n");
reserve_mem(base, size);
prom_debug("opal base = 0x%llx\n", base);
prom_debug("opal align = 0x%llx\n", align);
prom_debug("opal entry = 0x%llx\n", entry);
prom_debug("opal size = 0x%llx\n", size);
prom_setprop(opal_node, "/ibm,opal", "opal-base-address",
&base, sizeof(base));
prom_setprop(opal_node, "/ibm,opal", "opal-entry-address",
&entry, sizeof(entry));
#ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
prom_opal_base = base;
prom_opal_entry = entry;
#endif
prom_debug("prom_instantiate_opal: end...\n");
}
#endif /* CONFIG_PPC_POWERNV */
/*
* Allocate room for and instantiate RTAS
*/
@ -2150,10 +2083,6 @@ static int __init prom_find_machine_type(void)
}
}
#ifdef CONFIG_PPC64
/* Try to detect OPAL */
if (PHANDLE_VALID(call_prom("finddevice", 1, 1, ADDR("/ibm,opal"))))
return PLATFORM_OPAL;
/* Try to figure out if it's an IBM pSeries or any other
* PAPR compliant platform. We assume it is if :
* - /device_type is "chrp" (please, do NOT use that for future
@ -2202,7 +2131,7 @@ static void __init prom_check_displays(void)
ihandle ih;
int i;
static unsigned char default_colors[] = {
static const unsigned char default_colors[] __initconst = {
0x00, 0x00, 0x00,
0x00, 0x00, 0xaa,
0x00, 0xaa, 0x00,
@ -2398,7 +2327,7 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
char *namep, *prev_name, *sstart, *p, *ep, *lp, *path;
unsigned long soff;
unsigned char *valp;
static char pname[MAX_PROPERTY_NAME];
static char pname[MAX_PROPERTY_NAME] __prombss;
int l, room, has_phandle = 0;
dt_push_token(OF_DT_BEGIN_NODE, mem_start, mem_end);
@ -2481,14 +2410,11 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
has_phandle = 1;
}
/* Add a "linux,phandle" property if no "phandle" property already
* existed (can happen with OPAL)
*/
/* Add a "phandle" property if none already exist */
if (!has_phandle) {
soff = dt_find_string("linux,phandle");
soff = dt_find_string("phandle");
if (soff == 0)
prom_printf("WARNING: Can't find string index for"
" <linux-phandle> node %s\n", path);
prom_printf("WARNING: Can't find string index for <phandle> node %s\n", path);
else {
dt_push_token(OF_DT_PROP, mem_start, mem_end);
dt_push_token(4, mem_start, mem_end);
@ -2548,9 +2474,9 @@ static void __init flatten_device_tree(void)
dt_string_start = mem_start;
mem_start += 4; /* hole */
/* Add "linux,phandle" in there, we'll need it */
/* Add "phandle" in there, we'll need it */
namep = make_room(&mem_start, &mem_end, 16, 1);
strcpy(namep, "linux,phandle");
strcpy(namep, "phandle");
mem_start = (unsigned long)namep + strlen(namep) + 1;
/* Build string array */
@ -3172,7 +3098,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
*/
early_cmdline_parse();
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
#ifdef CONFIG_PPC_PSERIES
/*
* On pSeries, inform the firmware about our capabilities
*/
@ -3216,15 +3142,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
* On non-powermacs, try to instantiate RTAS. PowerMacs don't
* have a usable RTAS implementation.
*/
if (of_platform != PLATFORM_POWERMAC &&
of_platform != PLATFORM_OPAL)
if (of_platform != PLATFORM_POWERMAC)
prom_instantiate_rtas();
#ifdef CONFIG_PPC_POWERNV
if (of_platform == PLATFORM_OPAL)
prom_instantiate_opal();
#endif /* CONFIG_PPC_POWERNV */
#ifdef CONFIG_PPC64
/* instantiate sml */
prom_instantiate_sml();
@ -3237,8 +3157,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
*
* (This must be done after instantiating RTAS)
*/
if (of_platform != PLATFORM_POWERMAC &&
of_platform != PLATFORM_OPAL)
if (of_platform != PLATFORM_POWERMAC)
prom_hold_cpus();
/*
@ -3282,11 +3201,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
/*
* in case stdin is USB and still active on IBM machines...
* Unfortunately quiesce crashes on some powermacs if we have
* closed stdin already (in particular the powerbook 101). It
* appears that the OPAL version of OFW doesn't like it either.
* closed stdin already (in particular the powerbook 101).
*/
if (of_platform != PLATFORM_POWERMAC &&
of_platform != PLATFORM_OPAL)
if (of_platform != PLATFORM_POWERMAC)
prom_close_stdin();
/*
@ -3304,10 +3221,8 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
hdr = dt_header_start;
/* Don't print anything after quiesce under OPAL, it crashes OFW */
if (of_platform != PLATFORM_OPAL) {
prom_printf("Booting Linux via __start() @ 0x%lx ...\n", kbase);
prom_debug("->dt_header_start=0x%lx\n", hdr);
}
prom_printf("Booting Linux via __start() @ 0x%lx ...\n", kbase);
prom_debug("->dt_header_start=0x%lx\n", hdr);
#ifdef CONFIG_PPC32
reloc_got2(-offset);
@ -3315,13 +3230,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
unreloc_toc();
#endif
#ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
/* OPAL early debug gets the OPAL base & entry in r8 and r9 */
__start(hdr, kbase, 0, 0, 0,
prom_opal_base, prom_opal_entry);
#else
__start(hdr, kbase, 0, 0, 0, 0, 0);
#endif
return 0;
}


@ -28,6 +28,18 @@ OBJ="$2"
ERROR=0
function check_section()
{
file=$1
section=$2
size=$(objdump -h -j $section $file 2>/dev/null | awk "\$2 == \"$section\" {print \$3}")
size=${size:-0}
if [ $size -ne 0 ]; then
ERROR=1
echo "Error: Section $section not empty in prom_init.c" >&2
fi
}
for UNDEF in $($NM -u $OBJ | awk '{print $2}')
do
# On 64-bit nm gives us the function descriptors, which have
@ -66,4 +78,8 @@ do
fi
done
check_section $OBJ .data
check_section $OBJ .bss
check_section $OBJ .init.data
exit $ERROR
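An illustration of what the new section checks catch: any variable in prom_init.c that silently lands in .data, .bss or .init.data instead of the tagged sections now fails the build.

/* In prom_init.c, under these checks: */
static int bad_flag;		/* implicit .bss -> check_section errors out */
static int good_flag __prombss;	/* .bss.prominit -> passes */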


@ -297,7 +297,7 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
}
#endif
if (regno < (sizeof(struct pt_regs) / sizeof(unsigned long))) {
if (regno < (sizeof(struct user_pt_regs) / sizeof(unsigned long))) {
*data = ((unsigned long *)task->thread.regs)[regno];
return 0;
}
@ -360,10 +360,10 @@ static int gpr_get(struct task_struct *target, const struct user_regset *regset,
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&target->thread.regs->orig_gpr3,
offsetof(struct pt_regs, orig_gpr3),
sizeof(struct pt_regs));
sizeof(struct user_pt_regs));
if (!ret)
ret = user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
sizeof(struct pt_regs), -1);
sizeof(struct user_pt_regs), -1);
return ret;
}
@ -853,10 +853,10 @@ static int tm_cgpr_get(struct task_struct *target,
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&target->thread.ckpt_regs.orig_gpr3,
offsetof(struct pt_regs, orig_gpr3),
sizeof(struct pt_regs));
sizeof(struct user_pt_regs));
if (!ret)
ret = user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
sizeof(struct pt_regs), -1);
sizeof(struct user_pt_regs), -1);
return ret;
}
@ -1609,7 +1609,7 @@ static int ppr_get(struct task_struct *target,
void *kbuf, void __user *ubuf)
{
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&target->thread.ppr, 0, sizeof(u64));
&target->thread.regs->ppr, 0, sizeof(u64));
}
static int ppr_set(struct task_struct *target,
@ -1618,7 +1618,7 @@ static int ppr_set(struct task_struct *target,
const void *kbuf, const void __user *ubuf)
{
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&target->thread.ppr, 0, sizeof(u64));
&target->thread.regs->ppr, 0, sizeof(u64));
}
static int dscr_get(struct task_struct *target,
@ -2508,6 +2508,7 @@ void ptrace_disable(struct task_struct *child)
{
/* make sure the single step bit is not set. */
user_disable_single_step(child);
clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
}
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
@ -3130,7 +3131,7 @@ long arch_ptrace(struct task_struct *child, long request,
case PTRACE_GETREGS: /* Get all pt_regs from the child. */
return copy_regset_to_user(child, &user_ppc_native_view,
REGSET_GPR,
0, sizeof(struct pt_regs),
0, sizeof(struct user_pt_regs),
datavp);
#ifdef CONFIG_PPC64
@ -3139,7 +3140,7 @@ long arch_ptrace(struct task_struct *child, long request,
case PTRACE_SETREGS: /* Set all gp regs in the child. */
return copy_regset_from_user(child, &user_ppc_native_view,
REGSET_GPR,
0, sizeof(struct pt_regs),
0, sizeof(struct user_pt_regs),
datavp);
case PTRACE_GETFPREGS: /* Get the child FPU state (FPR0...31 + FPSCR) */
@ -3264,6 +3265,16 @@ long do_syscall_trace_enter(struct pt_regs *regs)
{
user_exit();
if (test_thread_flag(TIF_SYSCALL_EMU)) {
ptrace_report_syscall(regs);
/*
 * Returning -1 will skip the syscall execution. We also want
 * to avoid clobbering any registers, so we don't jump to the
 * skip label.
 */
return -1;
}
/*
* The tracer may decide to abort the syscall, if so tracehook
* will return !0. Note that the tracer may also just change
@ -3324,3 +3335,42 @@ void do_syscall_trace_leave(struct pt_regs *regs)
user_enter();
}
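For reference, a minimal userspace tracer using the new request; this is a hedged sketch, not code from the series. The fallback value 31 is x86's constant, shown only for illustration; powerpc's uapi headers define their own value, so prefer the definition from <sys/ptrace.h>.

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef PTRACE_SYSEMU
#define PTRACE_SYSEMU 31	/* x86's value; illustrative fallback only */
#endif

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {				/* child: become a tracee */
		ptrace(PTRACE_TRACEME, 0, 0, 0);
		raise(SIGSTOP);			/* wait for the tracer */
		getpid();			/* this entry will be emulated */
		_exit(0);
	}

	waitpid(pid, NULL, 0);			/* child is stopped */
	ptrace(PTRACE_SYSEMU, pid, 0, 0);	/* run to next syscall entry */
	waitpid(pid, NULL, 0);
	/* The syscall is skipped by the kernel (do_syscall_trace_enter
	 * returned -1); a real tracer would now read the registers,
	 * emulate the call and write the result back. */
	ptrace(PTRACE_CONT, pid, 0, 0);
	waitpid(pid, NULL, 0);
	return 0;
}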
void __init pt_regs_check(void)
{
BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
offsetof(struct user_pt_regs, gpr));
BUILD_BUG_ON(offsetof(struct pt_regs, nip) !=
offsetof(struct user_pt_regs, nip));
BUILD_BUG_ON(offsetof(struct pt_regs, msr) !=
offsetof(struct user_pt_regs, msr));
BUILD_BUG_ON(offsetof(struct pt_regs, orig_gpr3) !=
offsetof(struct user_pt_regs, orig_gpr3));
BUILD_BUG_ON(offsetof(struct pt_regs, ctr) !=
offsetof(struct user_pt_regs, ctr));
BUILD_BUG_ON(offsetof(struct pt_regs, link) !=
offsetof(struct user_pt_regs, link));
BUILD_BUG_ON(offsetof(struct pt_regs, xer) !=
offsetof(struct user_pt_regs, xer));
BUILD_BUG_ON(offsetof(struct pt_regs, ccr) !=
offsetof(struct user_pt_regs, ccr));
#ifdef __powerpc64__
BUILD_BUG_ON(offsetof(struct pt_regs, softe) !=
offsetof(struct user_pt_regs, softe));
#else
BUILD_BUG_ON(offsetof(struct pt_regs, mq) !=
offsetof(struct user_pt_regs, mq));
#endif
BUILD_BUG_ON(offsetof(struct pt_regs, trap) !=
offsetof(struct user_pt_regs, trap));
BUILD_BUG_ON(offsetof(struct pt_regs, dar) !=
offsetof(struct user_pt_regs, dar));
BUILD_BUG_ON(offsetof(struct pt_regs, dsisr) !=
offsetof(struct user_pt_regs, dsisr));
BUILD_BUG_ON(offsetof(struct pt_regs, result) !=
offsetof(struct user_pt_regs, result));
BUILD_BUG_ON(sizeof(struct user_pt_regs) > sizeof(struct pt_regs));
}
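What these assertions protect, sketched below: after the split, the user-visible register layout must remain an exact prefix of the kernel's, so kernel-only state (like the relocated ppr used by ppr_get/ppr_set above) can only live past the shared fields. Field lists are abbreviated; the real structs have more members.

#include <stddef.h>

struct user_pt_regs_sketch {
	unsigned long gpr[32];
	unsigned long nip;
	unsigned long msr;
	/* ... remaining user-visible fields ... */
};

struct pt_regs_sketch {
	union {
		struct user_pt_regs_sketch user_regs;
		struct {
			unsigned long gpr[32];
			unsigned long nip;
			unsigned long msr;
			/* ... same fields, directly accessible ... */
		};
	};
	unsigned long ppr;	/* kernel-only state lives past the union */
};

_Static_assert(offsetof(struct pt_regs_sketch, nip) ==
	       offsetof(struct user_pt_regs_sketch, nip),
	       "user layout must stay a prefix of the kernel layout");
_Static_assert(sizeof(struct user_pt_regs_sketch) <=
	       sizeof(struct pt_regs_sketch),
	       "the user struct can never outgrow the kernel one");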


@ -981,7 +981,15 @@ int rtas_ibm_suspend_me(u64 handle)
goto out;
}
stop_topology_update();
cpu_hotplug_disable();
/* Check if we raced with a CPU offline operation */
if (unlikely(!cpumask_equal(cpu_present_mask, cpu_online_mask))) {
pr_err("%s: Raced against a concurrent CPU-Offline\n",
__func__);
atomic_set(&data.error, -EBUSY);
goto out_hotplug_enable;
}
/* Call function on all CPUs. One of us will make the
* rtas call
@ -994,7 +1002,8 @@ int rtas_ibm_suspend_me(u64 handle)
if (atomic_read(&data.error) != 0)
printk(KERN_ERR "Error doing global join\n");
start_topology_update();
out_hotplug_enable:
cpu_hotplug_enable();
/* Take down CPUs not online prior to suspend */
cpuret = rtas_offline_cpus_mask(offline_mask);
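The shape of the fix, as a userspace analogue: take the lock that freezes the state, then re-validate the precondition before committing, and fail busy if another actor won the race. Pthreads stand in for cpu_hotplug_disable/enable; the names are invented for the example.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static bool all_workers_online = true;	/* mutated by other threads */

static int do_suspend(void)
{
	pthread_mutex_lock(&state_lock);	/* like cpu_hotplug_disable() */
	if (!all_workers_online) {		/* re-check under the lock */
		pthread_mutex_unlock(&state_lock);
		fprintf(stderr, "raced against a concurrent offline\n");
		return -1;			/* like -EBUSY */
	}
	/* ... rendezvous all workers and perform the suspend ... */
	pthread_mutex_unlock(&state_lock);	/* like cpu_hotplug_enable() */
	return 0;
}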

Some files were not shown because too many files have changed in this diff.