License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 08:07:57 -06:00
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_H
#define _ASM_POWERPC_NOHASH_64_PGTABLE_H
/*
 * This file contains the functions and defines necessary to modify and use
 * the ppc64 non-hashed page table.
 */

#include <asm/nohash/64/pgtable-4k.h>
#include <asm/barrier.h>
#include <asm/asm-const.h>

#ifdef CONFIG_PPC_64K_PAGES
#error "Page size not supported"
#endif

#define FIRST_USER_ADDRESS	0UL

/*
 * Size of EA range mapped by our pagetables.
 */
#define PGTABLE_EADDR_SIZE	(PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
				 PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
[POWERPC] Rewrite IO allocation & mapping on powerpc64
This rewrites pretty much from scratch the handling of MMIO and PIO
space allocations on powerpc64. The main goals are:
- Get rid of imalloc and use more common code where possible
- Simplify the current mess so that PIO space is allocated and
mapped in a single place for PCI bridges
- Handle allocation constraints of PIO for all bridges including
hot plugged ones within the 2GB space reserved for IO ports,
so that devices on hotplugged busses will now work with drivers
that assume IO ports fit in an int.
- Cleanup and separate tracking of the ISA space in the reserved
low 64K of IO space. No ISA -> Nothing mapped there.
I booted a cell blade with IDE on PIO and MMIO and a dual G5 so
far, that's it :-)
With this patch, all allocations are done using the code in
mm/vmalloc.c, though we use the low level __get_vm_area with
explicit start/stop constraints in order to manage separate
areas for vmalloc/vmap, ioremap, and PCI IOs.
This greatly simplifies a lot of things, as you can see in the
diffstat of that patch :-)
A new pair of functions pcibios_map/unmap_io_space() now replace
all of the previous code that used to manipulate PCI IOs space.
The allocation is done at mapping time, which is now called from
scan_phb's, just before the devices are probed (instead of after,
which is by itself a bug fix). The only other caller is the PCI
hotplug code for hot adding PCI-PCI bridges (slots).
imalloc is gone, as is the "sub-allocation" thing, but I do believe
that hotplug should still work in the sense that the space allocation
is always done by the PHB, but if you unmap a child bus of this PHB
(which seems to be possible), then the code should properly tear
down all the HPTE mappings for that area of the PHB allocated IO space.
I now always reserve the first 64K of IO space for the bridge with
the ISA bus on it. I have moved the code for tracking ISA in a separate
file which should also make it smarter if we ever are capable of
hot unplugging or re-plugging an ISA bridge.
This should have a side effect on platforms like powermac where VGA IOs
will no longer work. This is done on purpose though as they would have
worked semi-randomly before. The idea at this point is to isolate drivers
that might need to access those and fix them by providing a proper
function to obtain an offset to the legacy IOs of a given bus.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-03 23:15:36 -06:00
#define PGTABLE_RANGE		(ASM_CONST(1) << PGTABLE_EADDR_SIZE)

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_CACHE_INDEX	(PMD_INDEX_SIZE + 1)
#else
#define PMD_CACHE_INDEX	PMD_INDEX_SIZE
#endif
#define PUD_CACHE_INDEX PUD_INDEX_SIZE

/*
 * Define the address range of the kernel non-linear virtual area
 */
#define KERN_VIRT_START ASM_CONST(0x8000000000000000)
#define KERN_VIRT_SIZE	ASM_CONST(0x0000100000000000)

/*
 * The vmalloc space starts at the beginning of that region, and
 * occupies a quarter of it on Book3E
 * (we keep a quarter for the virtual memmap)
 */
#define VMALLOC_START	KERN_VIRT_START
#define VMALLOC_SIZE	(KERN_VIRT_SIZE >> 2)
#define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)

/*
 * The second half of the kernel virtual space is used for IO mappings,
 * it's itself carved into the PIO region (ISA and PHB IO space) and
 * the ioremap space
 *
 *  ISA_IO_BASE = KERN_IO_START, 64K reserved area
 *  PHB_IO_BASE = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
 * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
 */
#define KERN_IO_START	(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
#define FULL_IO_SIZE	0x80000000ul
#define  ISA_IO_BASE	(KERN_IO_START)
#define  ISA_IO_END	(KERN_IO_START + 0x10000ul)
#define  PHB_IO_BASE	(ISA_IO_END)
#define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE	(PHB_IO_END)
#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)

/*
 * Region IDs
 */
#define REGION_SHIFT		60UL
#define REGION_MASK		(0xfUL << REGION_SHIFT)
#define REGION_ID(ea)		(((unsigned long)(ea)) >> REGION_SHIFT)

#define VMALLOC_REGION_ID	(REGION_ID(VMALLOC_START))
#define KERNEL_REGION_ID	(REGION_ID(PAGE_OFFSET))
#define VMEMMAP_REGION_ID	(0xfUL)	/* Server only */
#define USER_REGION_ID		(0UL)

/*
 * Defines the address of the vmemmap area, in its own region
 * after the vmalloc space on Book3E
 */
#define VMEMMAP_BASE		VMALLOC_END
#define VMEMMAP_END		KERN_IO_START
[POWERPC] vmemmap fixes to use smaller pages
This changes vmemmap to use a different region (region 0xf) of the
address space, and to configure the page size of that region
dynamically at boot.
The problem with the current approach of always using 16M pages is that
it's not well suited to machines that have small amounts of memory such
as small partitions on pseries, or PS3's.
In fact, on the PS3, failure to allocate the 16M page backing vmemmap
tends to prevent hotplugging the HV's "additional" memory, thus limiting
the available memory even more, from my experience down to something
like 80M total, which makes it really not very usable.
The logic used by my patch to choose the vmemmap page size is:
- If 16M pages are available and there's 1G or more RAM at boot,
use that size.
- Else if 64K pages are available, use that
- Else use 4K pages
I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages)
and it seems to work fine.
Note that I intend to change the way we organize the kernel regions &
SLBs so the actual region will change from 0xf back to something else at
one point, as I simplify the SLB miss handler, but that will be for a
later patch.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-29 23:41:48 -06:00
|
|
|
#define vmemmap ((struct page *)VMEMMAP_BASE)
|
|
|
|
|
2007-10-16 02:24:17 -06:00
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
/*
|
2009-03-10 11:53:29 -06:00
|
|
|
* Include the PTE bits definitions
|
2007-04-30 00:30:56 -06:00
|
|
|
*/
|
2015-11-30 20:36:38 -07:00
|
|
|
#include <asm/nohash/pte-book3e.h>
|
2018-10-09 07:52:10 -06:00
|
|
|
|
|
|
|
#define _PAGE_SAO 0
|
|
|
|
|
|
|
|
#define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1))
|
|
|
|
|
|
|
|
/*
|
|
|
|
* _PAGE_CHG_MASK masks of bits that are to be preserved across
|
|
|
|
* pgprot changes.
|
|
|
|
*/
|
|
|
|
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
|
|
|
|
|
|
|
|
#define H_PAGE_4K_PFN 0
|
2009-03-10 11:53:29 -06:00
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
#ifndef __ASSEMBLY__
|
|
|
|
/* pte_clear moved to later in this file */
|
|
|
|
|
2018-10-09 07:51:50 -06:00
|
|
|
static inline pte_t pte_mkwrite(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pte(pte_val(pte) | _PAGE_RW);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pte_t pte_mkdirty(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pte(pte_val(pte) | _PAGE_DIRTY);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pte_t pte_mkyoung(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pte(pte_val(pte) | _PAGE_ACCESSED);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pte_t pte_wrprotect(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pte(pte_val(pte) & ~_PAGE_RW);
|
|
|
|
}
|
|
|
|
|
2018-10-09 07:51:52 -06:00
|
|
|
static inline pte_t pte_mkexec(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pte(pte_val(pte) | _PAGE_EXEC);
|
|
|
|
}
|
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
#define PMD_BAD_BITS (PTE_TABLE_SIZE-1)
|
|
|
|
#define PUD_BAD_BITS (PMD_TABLE_SIZE-1)
|
|
|
|
|
2015-11-30 20:36:35 -07:00
|
|
|
static inline void pmd_set(pmd_t *pmdp, unsigned long val)
|
|
|
|
{
|
|
|
|
*pmdp = __pmd(val);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void pmd_clear(pmd_t *pmdp)
|
|
|
|
{
|
|
|
|
*pmdp = __pmd(0);
|
|
|
|
}
|
|
|
|
|
2015-11-30 20:36:53 -07:00
|
|
|
static inline pte_t pmd_pte(pmd_t pmd)
|
|
|
|
{
|
|
|
|
return __pte(pmd_val(pmd));
|
|
|
|
}
|
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
#define pmd_none(pmd) (!pmd_val(pmd))
|
|
|
|
#define pmd_bad(pmd) (!is_kernel_addr(pmd_val(pmd)) \
|
|
|
|
|| (pmd_val(pmd) & PMD_BAD_BITS))
|
2014-11-05 09:27:39 -07:00
|
|
|
#define pmd_present(pmd) (!pmd_none(pmd))
|
2007-04-30 00:30:56 -06:00
|
|
|
#define pmd_page_vaddr(pmd) (pmd_val(pmd) & ~PMD_MASKED_BITS)
|
2013-06-20 03:00:15 -06:00
|
|
|
extern struct page *pmd_page(pmd_t pmd);
|
2007-04-30 00:30:56 -06:00
|
|
|
|
2015-11-30 20:36:35 -07:00
|
|
|
static inline void pud_set(pud_t *pudp, unsigned long val)
|
|
|
|
{
|
|
|
|
*pudp = __pud(val);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void pud_clear(pud_t *pudp)
|
|
|
|
{
|
|
|
|
*pudp = __pud(0);
|
|
|
|
}
|
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
#define pud_none(pud) (!pud_val(pud))
|
|
|
|
#define pud_bad(pud) (!is_kernel_addr(pud_val(pud)) \
|
|
|
|
|| (pud_val(pud) & PUD_BAD_BITS))
|
|
|
|
#define pud_present(pud) (pud_val(pud) != 0)
|
|
|
|
#define pud_page_vaddr(pud) (pud_val(pud) & ~PUD_MASKED_BITS)
|
|
|
|
|
2014-11-05 09:27:39 -07:00
|
|
|
extern struct page *pud_page(pud_t pud);
|
|
|
|
|
|
|
|
static inline pte_t pud_pte(pud_t pud)
|
|
|
|
{
|
|
|
|
return __pte(pud_val(pud));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pud_t pte_pud(pte_t pte)
|
|
|
|
{
|
|
|
|
return __pud(pte_val(pte));
|
|
|
|
}
|
|
|
|
#define pud_write(pud) pte_write(pud_pte(pud))
|
|
|
|
#define pgd_write(pgd) pte_write(pgd_pte(pgd))
|
2007-04-30 00:30:56 -06:00
|
|
|
|
2015-11-30 20:36:35 -07:00
|
|
|
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
|
|
|
|
{
|
|
|
|
*pgdp = __pgd(val);
|
|
|
|
}
|
|
|
|
|
2007-04-30 00:30:56 -06:00
|
|
|
/*
|
|
|
|
* Find an entry in a page-table-directory. We combine the address region
|
|
|
|
* (the high order N bits) and the pgd portion of the address.
|
|
|
|
*/
|
2013-04-28 03:37:28 -06:00
|
|
|
#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
|
2007-04-30 00:30:56 -06:00
|
|
|
|
|
|
|
#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
|
|
|
|
|
|
|
|
#define pmd_offset(pudp,addr) \
|
|
|
|
(((pmd_t *) pud_page_vaddr(*(pudp))) + (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)))
|
|
|
|
|
|
|
|
#define pte_offset_kernel(dir,addr) \
|
|
|
|
(((pte_t *) pmd_page_vaddr(*(dir))) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))

#define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
#define pte_unmap(pte)			do { } while(0)

/* to find an entry in a kernel page-table-directory */
/* This now only contains the vmalloc pages */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)

/* Atomic PTE updates */
static inline unsigned long pte_update(struct mm_struct *mm,
				       unsigned long addr,
				       pte_t *ptep, unsigned long clr,
				       unsigned long set,
				       int huge)
{
#ifdef PTE_ATOMIC_UPDATES
	unsigned long old, tmp;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%3		# pte_update\n\
	andc	%1,%0,%4 \n\
	or	%1,%1,%6\n\
	stdcx.	%1,0,%3 \n\
	bne-	1b"
	: "=&r" (old), "=&r" (tmp), "=m" (*ptep)
	: "r" (ptep), "r" (clr), "m" (*ptep), "r" (set)
	: "cc" );
#else
	unsigned long old = pte_val(*ptep);
	*ptep = __pte((old & ~clr) | set);
#endif
	/* huge pages use the old page table lock */
	if (!huge)
		assert_pte_locked(mm, addr);

	return old;
}

static inline int pte_young(pte_t pte)
{
	return pte_val(pte) & _PAGE_ACCESSED;
}

static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pte_t *ptep)
{
	unsigned long old;

	if (!pte_young(*ptep))
		return 0;
	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
	return (old & _PAGE_ACCESSED) != 0;
}

#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define ptep_test_and_clear_young(__vma, __addr, __ptep)		   \
({									   \
	int __r;							   \
	__r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
	__r;								   \
})

#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
}

static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 1);
}

#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
#define ptep_clear_flush_young(__vma, __address, __ptep)		\
({									\
	int __young = __ptep_test_and_clear_young((__vma)->vm_mm, __address, \
						  __ptep);		\
	__young;							\
})

#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pte_t *ptep)
{
	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
	return __pte(old);
}

static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
			     pte_t * ptep)
{
	pte_update(mm, addr, ptep, ~0UL, 0, 0);
}


/* Set the dirty and/or accessed bits atomically in a linux PTE */
static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
					   pte_t *ptep, pte_t entry,
					   unsigned long address,
					   int psize)
{
	unsigned long bits = pte_val(entry) &
		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);

#ifdef PTE_ATOMIC_UPDATES
	unsigned long old, tmp;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%4\n\
		or	%0,%3,%0\n\
		stdcx.	%0,0,%4\n\
		bne-	1b"
	:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
	:"r" (bits), "r" (ptep), "m" (*ptep)
	:"cc");
#else
	unsigned long old = pte_val(*ptep);
	*ptep = __pte(old | bits);
#endif

	flush_tlb_page(vma, address);
}

#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B)	((pte_val(A) ^ pte_val(B)) == 0)

#define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
	pr_err("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() do { \
	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
	} while (0)
/*
 * We don't need to handle RADIX_TREE_EXCEPTIONAL_SHIFT on ptes.
 */
#define SWP_TYPE_BITS 5
#define __swp_type(x)		(((x).val >> _PAGE_BIT_SWAP_TYPE) \
				& ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> PTE_RPN_SHIFT)
#define __swp_entry(type, offset)	((swp_entry_t) { \
					((type) << _PAGE_BIT_SWAP_TYPE) \
					| ((offset) << PTE_RPN_SHIFT) })

#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
#define __swp_entry_to_pte(x)		__pte((x).val)

int map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot);
extern int __meminit vmemmap_create_mapping(unsigned long start,
					    unsigned long page_size,
					    unsigned long phys);
extern void vmemmap_remove_mapping(unsigned long start,
				   unsigned long page_size);

#endif /* __ASSEMBLY__ */

#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */