
docs/vm: rename documentation files to .rst

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Mike Rapoport 2018-03-21 21:22:47 +02:00 committed by Jonathan Corbet
parent 3406bb5c64
commit ad56b738c5
63 changed files with 87 additions and 87 deletions

@@ -90,4 +90,4 @@ Date: December 2009
Contact: Lee Schermerhorn <lee.schermerhorn@hp.com>
Description:
The node's huge page size control/query attributes.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst

@@ -12,4 +12,4 @@ Description:
free_hugepages
surplus_hugepages
resv_hugepages
See Documentation/vm/hugetlbpage.txt for details.
See Documentation/vm/hugetlbpage.rst for details.

@@ -40,7 +40,7 @@ Description: Kernel Samepage Merging daemon sysfs interface
sleep_millisecs: how many milliseconds ksm should sleep between
scans.
See Documentation/vm/ksm.txt for more information.
See Documentation/vm/ksm.rst for more information.
What: /sys/kernel/mm/ksm/merge_across_nodes
Date: January 2013

@@ -37,7 +37,7 @@ Description:
The alloc_calls file is read-only and lists the kernel code
locations from which allocations for this cache were performed.
The alloc_calls file only contains information if debugging is
enabled for that cache (see Documentation/vm/slub.txt).
enabled for that cache (see Documentation/vm/slub.rst).
What: /sys/kernel/slab/cache/alloc_fastpath
Date: February 2008
@@ -219,7 +219,7 @@ Contact: Pekka Enberg <penberg@cs.helsinki.fi>,
Description:
The free_calls file is read-only and lists the locations of
object frees if slab debugging is enabled (see
Documentation/vm/slub.txt).
Documentation/vm/slub.rst).
What: /sys/kernel/slab/cache/free_fastpath
Date: February 2008

@@ -3887,7 +3887,7 @@
cache (risks via metadata attacks are mostly
unchanged). Debug options disable merging on their
own.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slab_max_order= [MM, SLAB]
Determines the maximum allowed order for slabs.
@@ -3901,7 +3901,7 @@
slub_debug can create guard zones around objects and
may poison objects when not in use. Also tracks the
last alloc / free. For more information see
Documentation/vm/slub.txt.
Documentation/vm/slub.rst.
slub_memcg_sysfs= [MM, SLUB]
Determines whether to enable sysfs directories for
@@ -3915,7 +3915,7 @@
Determines the maximum allowed order for slabs.
A high setting may cause OOMs due to memory
fragmentation. For more information see
Documentation/vm/slub.txt.
Documentation/vm/slub.rst.
slub_min_objects= [MM, SLUB]
The minimum number of objects per slab. SLUB will
@@ -3924,12 +3924,12 @@
the number of objects indicated. The higher the number
of objects the smaller the overhead of tracking slabs
and the less frequently locks need to be acquired.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slub_min_order= [MM, SLUB]
Determines the minimum page order for slabs. Must be
lower than slub_max_order.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slub_nomerge [MM, SLUB]
Same with slab_nomerge. This is supported for legacy.
@@ -4285,7 +4285,7 @@
Format: [always|madvise|never]
Can be used to control the default behavior of the system
with respect to transparent hugepages.
See Documentation/vm/transhuge.txt for more details.
See Documentation/vm/transhuge.rst for more details.
tsc= Disable clocksource stability checks for TSC.
Format: <string>

@@ -120,7 +120,7 @@ A typical out of bounds access report looks like this::
The header of the report describes what kind of bug happened and what kind of
access caused it. It's followed by the description of the accessed slub object
(see 'SLUB Debug output' section in Documentation/vm/slub.txt for details) and
(see 'SLUB Debug output' section in Documentation/vm/slub.rst for details) and
the description of the accessed memory page.
In the last section the report shows memory state around the accessed address.

@@ -515,7 +515,7 @@ guarantees:
The /proc/PID/clear_refs is used to reset the PG_Referenced and ACCESSED/YOUNG
bits on both physical and virtual pages associated with a process, and the
soft-dirty bit on pte (see Documentation/vm/soft-dirty.txt for details).
soft-dirty bit on pte (see Documentation/vm/soft-dirty.rst for details).
To clear the bits for all the pages associated with the process
> echo 1 > /proc/PID/clear_refs
@@ -536,7 +536,7 @@ Any other value written to /proc/PID/clear_refs will have no effect.
The /proc/pid/pagemap gives the PFN, which can be used to find the pageflags
using /proc/kpageflags and number of times a page is mapped using
/proc/kpagecount. For detailed explanation, see Documentation/vm/pagemap.txt.
/proc/kpagecount. For detailed explanation, see Documentation/vm/pagemap.rst.
The /proc/pid/numa_maps is an extension based on maps, showing the memory
locality and binding policy, as well as the memory usage (in pages) of

@@ -105,7 +105,7 @@ policy for the file will revert to "default" policy.
NUMA memory allocation policies have optional flags that can be used in
conjunction with their modes. These optional flags can be specified
when tmpfs is mounted by appending them to the mode before the NodeList.
See Documentation/vm/numa_memory_policy.txt for a list of all available
See Documentation/vm/numa_memory_policy.rst for a list of all available
memory allocation policy mode flags and their effect on memory policy.
=static is equivalent to MPOL_F_STATIC_NODES

@@ -516,7 +516,7 @@ nr_hugepages
Change the minimum size of the hugepage pool.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst
==============================================================
@@ -525,7 +525,7 @@ nr_overcommit_hugepages
Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst
==============================================================
@@ -668,7 +668,7 @@ and don't use much of it.
The default value is 0.
See Documentation/vm/overcommit-accounting and
See Documentation/vm/overcommit-accounting.rst and
mm/mmap.c::__vm_enough_memory() for more information.
==============================================================

@@ -1,62 +1,62 @@
00-INDEX
- this file.
active_mm.txt
active_mm.rst
- An explanation from Linus about tsk->active_mm vs tsk->mm.
balance
balance.rst
- various information on memory balancing.
cleancache.txt
cleancache.rst
- Intro to cleancache and page-granularity victim cache.
frontswap.txt
frontswap.rst
- Outline frontswap, part of the transcendent memory frontend.
highmem.txt
highmem.rst
- Outline of highmem and common issues.
hmm.txt
hmm.rst
- Documentation of heterogeneous memory management
hugetlbpage.txt
hugetlbpage.rst
- a brief summary of hugetlbpage support in the Linux kernel.
hugetlbfs_reserv.txt
hugetlbfs_reserv.rst
- A brief overview of hugetlbfs reservation design/implementation.
hwpoison.txt
hwpoison.rst
- explains what hwpoison is
idle_page_tracking.txt
idle_page_tracking.rst
- description of the idle page tracking feature.
ksm.txt
ksm.rst
- how to use the Kernel Samepage Merging feature.
mmu_notifier.txt
mmu_notifier.rst
- a note about clearing pte/pmd and mmu notifications
numa
numa.rst
- information about NUMA specific code in the Linux vm.
numa_memory_policy.txt
numa_memory_policy.rst
- documentation of concepts and APIs of the 2.6 memory policy support.
overcommit-accounting
overcommit-accounting.rst
- description of the Linux kernel's overcommit handling modes.
page_frags
page_frags.rst
- description of page fragments allocator
page_migration
page_migration.rst
- description of page migration in NUMA systems.
pagemap.txt
pagemap.rst
- pagemap, from the userspace perspective
page_owner.txt
page_owner.rst
- tracking about who allocated each page
remap_file_pages.txt
remap_file_pages.rst
- a note about remap_file_pages() system call
slub.txt
slub.rst
- a short users guide for SLUB.
soft-dirty.txt
soft-dirty.rst
- short explanation for soft-dirty PTEs
split_page_table_lock
split_page_table_lock.rst
- Separate per-table lock to improve scalability of the old page_table_lock.
swap_numa.txt
swap_numa.rst
- automatic binding of swap device to numa node
transhuge.txt
transhuge.rst
- Transparent Hugepage Support, alternative way of using hugepages.
unevictable-lru.txt
unevictable-lru.rst
- Unevictable LRU infrastructure
userfaultfd.txt
userfaultfd.rst
- description of userfaultfd system call
z3fold.txt
- outline of z3fold allocator for storing compressed pages
zsmalloc.txt
zsmalloc.rst
- outline of zsmalloc allocator for storing compressed pages
zswap.txt
zswap.rst
- Intro to compressed cache for swap pages

@@ -217,7 +217,7 @@ When adjusting the persistent hugepage count via ``nr_hugepages_mempolicy``, any
memory policy mode--bind, preferred, local or interleave--may be used. The
resulting effect on persistent huge page allocation is as follows:
#. Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
#. Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.rst],
persistent huge pages will be distributed across the node or nodes
specified in the mempolicy as if "interleave" had been specified.
However, if a node in the policy does not contain sufficient contiguous

@@ -155,7 +155,7 @@ Testing
value). This allows stress testing of many kinds of
pages. The page_flags are the same as in /proc/kpageflags. The
flag bits are defined in include/linux/kernel-page-flags.h and
documented in Documentation/vm/pagemap.txt
documented in Documentation/vm/pagemap.rst
* Architecture specific MCE injector

@@ -65,7 +65,7 @@ workload one should:
are not reclaimable, he or she can filter them out using
``/proc/kpageflags``.
See Documentation/vm/pagemap.txt for more information about
See Documentation/vm/pagemap.rst for more information about
``/proc/pid/pagemap``, ``/proc/kpageflags``, and ``/proc/kpagecgroup``.
.. _impl_details:

@@ -110,7 +110,7 @@ to improve NUMA locality using various CPU affinity command line interfaces,
such as taskset(1) and numactl(1), and program interfaces such as
sched_setaffinity(2). Further, one can modify the kernel's default local
allocation behavior using Linux NUMA memory policy.
[see Documentation/vm/numa_memory_policy.txt.]
[see Documentation/vm/numa_memory_policy.rst.]
System administrators can restrict the CPUs and nodes' memories that a non-
privileged user can specify in the scheduling or NUMA commands and functions

@@ -18,7 +18,7 @@ There are four components to pagemap:
* Bits 0-54 page frame number (PFN) if present
* Bits 0-4 swap type if swapped
* Bits 5-54 swap offset if swapped
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.txt)
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.rst)
* Bit 56 page exclusively mapped (since 4.2)
* Bits 57-60 zero
* Bit 61 page is file-page or shared-anon (since 3.5)
@@ -97,7 +97,7 @@ Short descriptions to the page flags:
A compound page with order N consists of 2^N physically contiguous pages.
A compound page with order 2 takes the form of "HTTT", where H denotes its
head page and T denotes its tail page(s). The major consumers of compound
pages are hugeTLB pages (Documentation/vm/hugetlbpage.txt), the SLUB etc.
pages are hugeTLB pages (Documentation/vm/hugetlbpage.rst), the SLUB etc.
memory allocators and various device drivers. However in this interface,
only huge/giga pages are made visible to end users.
16 - COMPOUND_TAIL
@@ -118,7 +118,7 @@ Short descriptions to the page flags:
zero page for pfn_zero or huge_zero page
25 - IDLE
page has not been accessed since it was marked idle (see
Documentation/vm/idle_page_tracking.txt). Note that this flag may be
Documentation/vm/idle_page_tracking.rst). Note that this flag may be
stale in case the page was accessed via a PTE. To make sure the flag
is up-to-date one has to read ``/sys/kernel/mm/page_idle/bitmap`` first.

@@ -15406,7 +15406,7 @@ L: linux-mm@kvack.org
S: Maintained
F: mm/zsmalloc.c
F: include/linux/zsmalloc.h
F: Documentation/vm/zsmalloc.txt
F: Documentation/vm/zsmalloc.rst
ZSWAP COMPRESSED SWAP CACHING
M: Seth Jennings <sjenning@redhat.com>

@@ -584,7 +584,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
source "mm/Kconfig"

@@ -397,7 +397,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
config ARCH_FLATMEM_ENABLE
def_bool y

@@ -2551,7 +2551,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
config ARCH_SPARSEMEM_ENABLE
bool

@@ -880,7 +880,7 @@ config PPC_MEM_KEYS
page-based protections, but without requiring modification of the
page tables when an application changes protection domains.
For details, see Documentation/vm/protection-keys.txt
For details, see Documentation/vm/protection-keys.rst
If unsure, say y.

@@ -196,7 +196,7 @@ config HUGETLBFS
help
hugetlbfs is a filesystem backing for HugeTLB pages, based on
ramfs. For architectures that support it, say Y here and read
<file:Documentation/vm/hugetlbpage.txt> for details.
<file:Documentation/vm/hugetlbpage.rst> for details.
If unsure, say N.

@@ -618,7 +618,7 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
* downgrading page table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
if (pmdp) {
#ifdef CONFIG_FS_DAX_PMD

@@ -956,7 +956,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
/*
* The soft-dirty tracker uses #PF-s to catch writes
* to pages, so write-protect the pte as well. See the
* Documentation/vm/soft-dirty.txt for full description
* Documentation/vm/soft-dirty.rst for full description
* of how soft-dirty works.
*/
pte_t ptent = *pte;
@@ -1436,7 +1436,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
* Bits 0-54 page frame number (PFN) if present
* Bits 0-4 swap type if swapped
* Bits 5-54 swap offset if swapped
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.txt)
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.rst)
* Bit 56 page exclusively mapped
* Bits 57-60 zero
* Bit 61 page is file-page or shared-anon

@@ -16,7 +16,7 @@
/*
* Heterogeneous Memory Management (HMM)
*
* See Documentation/vm/hmm.txt for reasons and overview of what HMM is and it
* See Documentation/vm/hmm.rst for reasons and overview of what HMM is and it
* is for. Here we focus on the HMM API description, with some explanation of
* the underlying implementation.
*

@@ -45,7 +45,7 @@ struct vmem_altmap {
* must be treated as an opaque object, rather than a "normal" struct page.
*
* A more complete discussion of unaddressable memory may be found in
* include/linux/hmm.h and Documentation/vm/hmm.txt.
* include/linux/hmm.h and Documentation/vm/hmm.rst.
*
* MEMORY_DEVICE_PUBLIC:
* Device memory that is cache coherent from device and CPU point of view. This
@@ -67,7 +67,7 @@ enum memory_type {
* page_free()
*
* Additional notes about MEMORY_DEVICE_PRIVATE may be found in
* include/linux/hmm.h and Documentation/vm/hmm.txt. There is also a brief
* include/linux/hmm.h and Documentation/vm/hmm.rst. There is also a brief
* explanation in include/linux/memory_hotplug.h.
*
* The page_fault() callback must migrate page back, from device memory to

@@ -174,7 +174,7 @@ struct mmu_notifier_ops {
* invalidate_range_start()/end() notifiers, as
invalidate_range() already catches the points in time when an
* external TLB range needs to be flushed. For more in depth
* discussion on this see Documentation/vm/mmu_notifier.txt
* discussion on this see Documentation/vm/mmu_notifier.rst
*
* Note that this function might be called with just a sub-range
* of what was passed to invalidate_range_start()/end(), if

@@ -28,7 +28,7 @@ extern struct mm_struct *mm_alloc(void);
*
* Use mmdrop() to release the reference acquired by mmgrab().
*
* See also <Documentation/vm/active_mm.txt> for an in-depth explanation
* See also <Documentation/vm/active_mm.rst> for an in-depth explanation
* of &mm_struct.mm_count vs &mm_struct.mm_users.
*/
static inline void mmgrab(struct mm_struct *mm)
@@ -51,7 +51,7 @@ extern void mmdrop(struct mm_struct *mm);
*
* Use mmput() to release the reference acquired by mmget().
*
* See also <Documentation/vm/active_mm.txt> for an in-depth explanation
* See also <Documentation/vm/active_mm.rst> for an in-depth explanation
* of &mm_struct.mm_count vs &mm_struct.mm_users.
*/
static inline void mmget(struct mm_struct *mm)

@@ -53,7 +53,7 @@ static inline int current_is_kswapd(void)
/*
* Unaddressable device memory support. See include/linux/hmm.h and
* Documentation/vm/hmm.txt. Short description is we need struct pages for
* Documentation/vm/hmm.rst. Short description is we need struct pages for
* device memory that is unaddressable (inaccessible) by CPU, so that we can
* migrate part of a process memory to device memory.
*

@@ -312,7 +312,7 @@ config KSM
the many instances by a single page with that content, so
saving memory until one or another app needs to modify the content.
Recommended for use with KVM, or with other duplicative applications.
See Documentation/vm/ksm.txt for more information: KSM is inactive
See Documentation/vm/ksm.rst for more information: KSM is inactive
until a program has madvised that an area is MADV_MERGEABLE, and
root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
@@ -537,7 +537,7 @@ config MEM_SOFT_DIRTY
into a page just as regular dirty bit, but unlike the latter
it can be cleared by hand.
See Documentation/vm/soft-dirty.txt for more details.
See Documentation/vm/soft-dirty.rst for more details.
config ZSWAP
bool "Compressed cache for swap pages (EXPERIMENTAL)"
@@ -664,7 +664,7 @@ config IDLE_PAGE_TRACKING
be useful to tune memory cgroup limits and/or for job placement
within a compute cluster.
See Documentation/vm/idle_page_tracking.txt for more details.
See Documentation/vm/idle_page_tracking.rst for more details.
# arch_add_memory() comprehends device memory
config ARCH_HAS_ZONE_DEVICE

@@ -3,7 +3,7 @@
*
* This code provides the generic "frontend" layer to call a matching
* "backend" driver implementation of cleancache. See
* Documentation/vm/cleancache.txt for more information.
* Documentation/vm/cleancache.rst for more information.
*
* Copyright (C) 2009-2010 Oracle Corp. All rights reserved.
* Author: Dan Magenheimer

@@ -3,7 +3,7 @@
*
* This code provides the generic "frontend" layer to call a matching
* "backend" driver implementation of frontswap. See
* Documentation/vm/frontswap.txt for more information.
* Documentation/vm/frontswap.rst for more information.
*
* Copyright (C) 2009-2012 Oracle Corp. All rights reserved.
* Author: Dan Magenheimer

@@ -37,7 +37,7 @@
#if defined(CONFIG_DEVICE_PRIVATE) || defined(CONFIG_DEVICE_PUBLIC)
/*
* Device private memory see HMM (Documentation/vm/hmm.txt) or hmm.h
* Device private memory see HMM (Documentation/vm/hmm.rst) or hmm.h
*/
DEFINE_STATIC_KEY_FALSE(device_private_key);
EXPORT_SYMBOL(device_private_key);

@@ -1185,7 +1185,7 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
* mmu_notifier_invalidate_range_end() happens which can lead to a
* device seeing memory write in different order than CPU.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
@@ -2037,7 +2037,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
* replacing a zero pmd write protected page with a zero pte write
* protected page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
pmdp_huge_clear_flush(vma, haddr, pmd);

@@ -3289,7 +3289,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
* table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
huge_ptep_set_wrprotect(src, addr, src_pte);
}
@@ -4355,7 +4355,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
* No need to call mmu_notifier_invalidate_range() we are downgrading
* page table protection not changing it to point to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
i_mmap_unlock_write(vma->vm_file->f_mapping);
mmu_notifier_invalidate_range_end(mm, start, end);

@@ -1049,7 +1049,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
* No need to notify as we are downgrading page table to read
* only not changing it to point to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
entry = ptep_clear_flush(vma, pvmw.address, pvmw.pte);
/*
@@ -1138,7 +1138,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
* No need to notify as we are replacing a read only page with another
* read only page with the same content.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
ptep_clear_flush(vma, addr, ptep);
set_pte_at_notify(mm, addr, ptep, newpte);

@@ -2769,7 +2769,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
unsigned long ret = -EINVAL;
struct file *file;
pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.\n",
pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.rst.\n",
current->comm, current->pid);
if (prot)

@@ -942,7 +942,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
* downgrading page table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
if (ret)
(*cleaned)++;
@@ -1587,7 +1587,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
* point at new page while a device still is using this
* page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
dec_mm_counter(mm, mm_counter_file(page));
}
@@ -1597,7 +1597,7 @@ discard:
* done above for all cases requiring it to happen under page
* table lock before mmu_notifier_invalidate_range_end()
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
page_remove_rmap(subpage, PageHuge(page));
put_page(page);

@@ -609,7 +609,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
* succeed and -ENOMEM implies there is not.
*
* We currently support three overcommit policies, which are set via the
* vm.overcommit_memory sysctl. See Documentation/vm/overcommit-accounting
* vm.overcommit_memory sysctl. See Documentation/vm/overcommit-accounting.rst
*
* Strict overcommit modes added 2002 Feb 26 by Alan Cox.
* Additional code 2002 Jul 20 by Robert Love.