Merge branch 'mm-rst' into docs-next

Mike Rapoport says:

  These patches convert files in Documentation/vm to ReST format, add an
  initial index and link it to the top level documentation.

  There are no content changes in the documentation, except for a few spelling
  fixes. The relatively large diffstat stems from the indentation and
  paragraph wrapping changes.

  I've tried to keep the formatting as consistent as possible, but I may have
  missed some places that needed markup and added some markup where it was not
  necessary.

[jc: significant conflicts in vm/hmm.rst]
commit 24844fd339
Author: Jonathan Corbet
Date:   2018-04-16 14:25:08 -06:00
72 changed files with 2589 additions and 2190 deletions


@ -90,4 +90,4 @@ Date: December 2009
Contact: Lee Schermerhorn <lee.schermerhorn@hp.com>
Description:
The node's huge page size control/query attributes.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst


@ -12,4 +12,4 @@ Description:
free_hugepages
surplus_hugepages
resv_hugepages
See Documentation/vm/hugetlbpage.txt for details.
See Documentation/vm/hugetlbpage.rst for details.


@ -40,7 +40,7 @@ Description: Kernel Samepage Merging daemon sysfs interface
sleep_millisecs: how many milliseconds ksm should sleep between
scans.
See Documentation/vm/ksm.txt for more information.
See Documentation/vm/ksm.rst for more information.
What: /sys/kernel/mm/ksm/merge_across_nodes
Date: January 2013


@ -37,7 +37,7 @@ Description:
The alloc_calls file is read-only and lists the kernel code
locations from which allocations for this cache were performed.
The alloc_calls file only contains information if debugging is
enabled for that cache (see Documentation/vm/slub.txt).
enabled for that cache (see Documentation/vm/slub.rst).
What: /sys/kernel/slab/cache/alloc_fastpath
Date: February 2008
@ -219,7 +219,7 @@ Contact: Pekka Enberg <penberg@cs.helsinki.fi>,
Description:
The free_calls file is read-only and lists the locations of
object frees if slab debugging is enabled (see
Documentation/vm/slub.txt).
Documentation/vm/slub.rst).
What: /sys/kernel/slab/cache/free_fastpath
Date: February 2008


@ -3915,7 +3915,7 @@
cache (risks via metadata attacks are mostly
unchanged). Debug options disable merging on their
own.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slab_max_order= [MM, SLAB]
Determines the maximum allowed order for slabs.
@ -3929,7 +3929,7 @@
slub_debug can create guard zones around objects and
may poison objects when not in use. Also tracks the
last alloc / free. For more information see
Documentation/vm/slub.txt.
Documentation/vm/slub.rst.
slub_memcg_sysfs= [MM, SLUB]
Determines whether to enable sysfs directories for
@ -3943,7 +3943,7 @@
Determines the maximum allowed order for slabs.
A high setting may cause OOMs due to memory
fragmentation. For more information see
Documentation/vm/slub.txt.
Documentation/vm/slub.rst.
slub_min_objects= [MM, SLUB]
The minimum number of objects per slab. SLUB will
@ -3952,12 +3952,12 @@
the number of objects indicated. The higher the number
of objects the smaller the overhead of tracking slabs
and the less frequently locks need to be acquired.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slub_min_order= [MM, SLUB]
Determines the minimum page order for slabs. Must be
lower than slub_max_order.
For more information see Documentation/vm/slub.txt.
For more information see Documentation/vm/slub.rst.
slub_nomerge [MM, SLUB]
Same with slab_nomerge. This is supported for legacy.
@ -4313,7 +4313,7 @@
Format: [always|madvise|never]
Can be used to control the default behavior of the system
with respect to transparent hugepages.
See Documentation/vm/transhuge.txt for more details.
See Documentation/vm/transhuge.rst for more details.
tsc= Disable clocksource stability checks for TSC.
Format: <string>


@ -120,7 +120,7 @@ A typical out of bounds access report looks like this::
The header of the report describes what kind of bug happened and what kind of
access caused it. It's followed by the description of the accessed slub object
(see 'SLUB Debug output' section in Documentation/vm/slub.txt for details) and
(see 'SLUB Debug output' section in Documentation/vm/slub.rst for details) and
the description of the accessed memory page.
In the last section the report shows memory state around the accessed address.


@ -515,7 +515,7 @@ guarantees:
The /proc/PID/clear_refs is used to reset the PG_Referenced and ACCESSED/YOUNG
bits on both physical and virtual pages associated with a process, and the
soft-dirty bit on pte (see Documentation/vm/soft-dirty.txt for details).
soft-dirty bit on pte (see Documentation/vm/soft-dirty.rst for details).
To clear the bits for all the pages associated with the process
> echo 1 > /proc/PID/clear_refs
@ -536,7 +536,7 @@ Any other value written to /proc/PID/clear_refs will have no effect.
The /proc/pid/pagemap gives the PFN, which can be used to find the pageflags
using /proc/kpageflags and number of times a page is mapped using
/proc/kpagecount. For detailed explanation, see Documentation/vm/pagemap.txt.
/proc/kpagecount. For detailed explanation, see Documentation/vm/pagemap.rst.
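
The pagemap interface described above can be exercised from user space with a
few lines of C. Below is a minimal illustrative sketch (not part of this patch
set, error handling omitted); it assumes the documented 64-bit entry layout
(bit 63 = present, bits 0-54 = PFN) and note that, since Linux 4.0, the PFN
field reads back as zero without CAP_SYS_ADMIN::

    /* Illustrative only: print the PFN backing one virtual address. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        char *buf = malloc(psize);
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        buf[0] = 1;     /* touch the page so it is present */
        pread(fd, &entry, sizeof(entry),
              ((uintptr_t)buf / psize) * sizeof(entry));
        if (entry & (1ULL << 63))       /* present bit */
            printf("PFN: 0x%llx\n",
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        close(fd);
        return 0;
    }
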
The /proc/pid/numa_maps is an extension based on maps, showing the memory
locality and binding policy, as well as the memory usage (in pages) of


@ -105,7 +105,7 @@ policy for the file will revert to "default" policy.
NUMA memory allocation policies have optional flags that can be used in
conjunction with their modes. These optional flags can be specified
when tmpfs is mounted by appending them to the mode before the NodeList.
See Documentation/vm/numa_memory_policy.txt for a list of all available
See Documentation/vm/numa_memory_policy.rst for a list of all available
memory allocation policy mode flags and their effect on memory policy.
=static is equivalent to MPOL_F_STATIC_NODES
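
For illustration only (not part of this patch set), a tmpfs instance using the
flag syntax described above could be mounted from C roughly as follows; the
mount point and node list are arbitrary examples and the call requires the
appropriate privileges::

    /* Illustrative sketch: mount tmpfs with a static NUMA bind policy. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* The "=static" flag is appended to the mode, before the NodeList. */
        if (mount("tmpfs", "/mnt/numa-tmp", "tmpfs", 0,
                  "mpol=bind=static:0-3") != 0)
            perror("mount");
        return 0;
    }
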


@ -45,7 +45,7 @@ the kernel interface as seen by application developers.
.. toctree::
:maxdepth: 2
userspace-api/index
userspace-api/index
Introduction to kernel development
@ -89,6 +89,7 @@ needed).
sound/index
crypto/index
filesystems/index
vm/index
Architecture-specific documentation
-----------------------------------


@ -515,7 +515,7 @@ nr_hugepages
Change the minimum size of the hugepage pool.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst
==============================================================
@ -524,7 +524,7 @@ nr_overcommit_hugepages
Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.
See Documentation/vm/hugetlbpage.txt
See Documentation/vm/hugetlbpage.rst
==============================================================
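
For context (not part of this patch set), the effect of sizing the pool through
nr_hugepages is easiest to see with a small program that asks for
hugetlb-backed memory; a minimal sketch, assuming the pool has been populated
(e.g. ``echo 20 > /proc/sys/vm/nr_hugepages``) and that LENGTH is a multiple of
the huge page size::

    /* Illustrative only: back an anonymous mapping with pool huge pages. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    #define LENGTH (16UL * 1024 * 1024)

    int main(void)
    {
        void *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");     /* pool empty or not configured */
            return 1;
        }
        munmap(p, LENGTH);
        return 0;
    }
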
@ -667,7 +667,7 @@ and don't use much of it.
The default value is 0.
See Documentation/vm/overcommit-accounting and
See Documentation/vm/overcommit-accounting.rst and
mm/mmap.c::__vm_enough_memory() for more information.
==============================================================


@ -1,62 +1,62 @@
00-INDEX
- this file.
active_mm.txt
active_mm.rst
- An explanation from Linus about tsk->active_mm vs tsk->mm.
balance
balance.rst
- various information on memory balancing.
cleancache.txt
cleancache.rst
- Intro to cleancache and page-granularity victim cache.
frontswap.txt
frontswap.rst
- Outline frontswap, part of the transcendent memory frontend.
highmem.txt
highmem.rst
- Outline of highmem and common issues.
hmm.txt
hmm.rst
- Documentation of heterogeneous memory management
hugetlbpage.txt
hugetlbpage.rst
- a brief summary of hugetlbpage support in the Linux kernel.
hugetlbfs_reserv.txt
hugetlbfs_reserv.rst
- A brief overview of hugetlbfs reservation design/implementation.
hwpoison.txt
hwpoison.rst
- explains what hwpoison is
idle_page_tracking.txt
idle_page_tracking.rst
- description of the idle page tracking feature.
ksm.txt
ksm.rst
- how to use the Kernel Samepage Merging feature.
mmu_notifier.txt
mmu_notifier.rst
- a note about clearing pte/pmd and mmu notifications
numa
numa.rst
- information about NUMA specific code in the Linux vm.
numa_memory_policy.txt
numa_memory_policy.rst
- documentation of concepts and APIs of the 2.6 memory policy support.
overcommit-accounting
overcommit-accounting.rst
- description of the Linux kernels overcommit handling modes.
page_frags
page_frags.rst
- description of page fragments allocator
page_migration
page_migration.rst
- description of page migration in NUMA systems.
pagemap.txt
pagemap.rst
- pagemap, from the userspace perspective
page_owner.txt
page_owner.rst
- tracking about who allocated each page
remap_file_pages.txt
remap_file_pages.rst
- a note about remap_file_pages() system call
slub.txt
slub.rst
- a short users guide for SLUB.
soft-dirty.txt
soft-dirty.rst
- short explanation for soft-dirty PTEs
split_page_table_lock
split_page_table_lock.rst
- Separate per-table lock to improve scalability of the old page_table_lock.
swap_numa.txt
swap_numa.rst
- automatic binding of swap device to numa node
transhuge.txt
transhuge.rst
- Transparent Hugepage Support, alternative way of using hugepages.
unevictable-lru.txt
unevictable-lru.rst
- Unevictable LRU infrastructure
userfaultfd.txt
userfaultfd.rst
- description of userfaultfd system call
z3fold.txt
- outline of z3fold allocator for storing compressed pages
zsmalloc.txt
zsmalloc.rst
- outline of zsmalloc allocator for storing compressed pages
zswap.txt
zswap.rst
- Intro to compressed cache for swap pages


@ -0,0 +1,91 @@
.. _active_mm:
=========
Active MM
=========
::
List: linux-kernel
Subject: Re: active_mm
From: Linus Torvalds <torvalds () transmeta ! com>
Date: 1999-07-30 21:36:24
Cc'd to linux-kernel, because I don't write explanations all that often,
and when I do I feel better about more people reading them.
On Fri, 30 Jul 1999, David Mosberger wrote:
>
> Is there a brief description someplace on how "mm" vs. "active_mm" in
> the task_struct are supposed to be used? (My apologies if this was
> discussed on the mailing lists---I just returned from vacation and
> wasn't able to follow linux-kernel for a while).
Basically, the new setup is:
- we have "real address spaces" and "anonymous address spaces". The
difference is that an anonymous address space doesn't care about the
user-level page tables at all, so when we do a context switch into an
anonymous address space we just leave the previous address space
active.
The obvious use for a "anonymous address space" is any thread that
doesn't need any user mappings - all kernel threads basically fall into
this category, but even "real" threads can temporarily say that for
some amount of time they are not going to be interested in user space,
and that the scheduler might as well try to avoid wasting time on
switching the VM state around. Currently only the old-style bdflush
sync does that.
- "tsk->mm" points to the "real address space". For an anonymous process,
tsk->mm will be NULL, for the logical reason that an anonymous process
really doesn't _have_ a real address space at all.
- however, we obviously need to keep track of which address space we
"stole" for such an anonymous user. For that, we have "tsk->active_mm",
which shows what the currently active address space is.
The rule is that for a process with a real address space (ie tsk->mm is
non-NULL) the active_mm obviously always has to be the same as the real
one.
For a anonymous process, tsk->mm == NULL, and tsk->active_mm is the
"borrowed" mm while the anonymous process is running. When the
anonymous process gets scheduled away, the borrowed address space is
returned and cleared.
To support all that, the "struct mm_struct" now has two counters: a
"mm_users" counter that is how many "real address space users" there are,
and a "mm_count" counter that is the number of "lazy" users (ie anonymous
users) plus one if there are any real users.
Usually there is at least one real user, but it could be that the real
user exited on another CPU while a lazy user was still active, so you do
actually get cases where you have a address space that is _only_ used by
lazy users. That is often a short-lived state, because once that thread
gets scheduled away in favour of a real thread, the "zombie" mm gets
released because "mm_users" becomes zero.
Also, a new rule is that _nobody_ ever has "init_mm" as a real MM any
more. "init_mm" should be considered just a "lazy context when no other
context is available", and in fact it is mainly used just at bootup when
no real VM has yet been created. So code that used to check
if (current->mm == &init_mm)
should generally just do
if (!current->mm)
instead (which makes more sense anyway - the test is basically one of "do
we have a user context", and is generally done by the page fault handler
and things like that).
Anyway, I put a pre-patch-2.3.13-1 on ftp.kernel.org just a moment ago,
because it slightly changes the interfaces to accommodate the alpha (who
would have thought it, but the alpha actually ends up having one of the
ugliest context switch codes - unlike the other architectures where the MM
and register state is separate, the alpha PALcode joins the two, and you
need to switch both together).
(From http://marc.info/?l=linux-kernel&m=93337278602211&w=2)
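
To connect the explanation above to present-day code (this sketch is not part
of the patch and is heavily simplified), the lazy borrow happens at
context-switch time along roughly these lines; the real logic lives in
context_switch() in kernel/sched/core.c::

    /*
     * Simplified sketch of the lazy-mm handoff described above: a task
     * with no mm of its own (a kernel thread) borrows the previous
     * task's active address space instead of switching page tables.
     */
    if (!next->mm) {                            /* anonymous address space */
        next->active_mm = prev->active_mm;
        mmgrab(prev->active_mm);                /* take a lazy mm_count ref */
        enter_lazy_tlb(prev->active_mm, next);
    } else {                                    /* real address space */
        switch_mm_irqs_off(prev->active_mm, next->mm, next);
    }
    if (!prev->mm) {
        prev->active_mm = NULL;                 /* lazy reference is dropped */
                                                /* later via mmdrop()        */
    }
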


@ -1,83 +0,0 @@
List: linux-kernel
Subject: Re: active_mm
From: Linus Torvalds <torvalds () transmeta ! com>
Date: 1999-07-30 21:36:24
Cc'd to linux-kernel, because I don't write explanations all that often,
and when I do I feel better about more people reading them.
On Fri, 30 Jul 1999, David Mosberger wrote:
>
> Is there a brief description someplace on how "mm" vs. "active_mm" in
> the task_struct are supposed to be used? (My apologies if this was
> discussed on the mailing lists---I just returned from vacation and
> wasn't able to follow linux-kernel for a while).
Basically, the new setup is:
- we have "real address spaces" and "anonymous address spaces". The
difference is that an anonymous address space doesn't care about the
user-level page tables at all, so when we do a context switch into an
anonymous address space we just leave the previous address space
active.
The obvious use for a "anonymous address space" is any thread that
doesn't need any user mappings - all kernel threads basically fall into
this category, but even "real" threads can temporarily say that for
some amount of time they are not going to be interested in user space,
and that the scheduler might as well try to avoid wasting time on
switching the VM state around. Currently only the old-style bdflush
sync does that.
- "tsk->mm" points to the "real address space". For an anonymous process,
tsk->mm will be NULL, for the logical reason that an anonymous process
really doesn't _have_ a real address space at all.
- however, we obviously need to keep track of which address space we
"stole" for such an anonymous user. For that, we have "tsk->active_mm",
which shows what the currently active address space is.
The rule is that for a process with a real address space (ie tsk->mm is
non-NULL) the active_mm obviously always has to be the same as the real
one.
For a anonymous process, tsk->mm == NULL, and tsk->active_mm is the
"borrowed" mm while the anonymous process is running. When the
anonymous process gets scheduled away, the borrowed address space is
returned and cleared.
To support all that, the "struct mm_struct" now has two counters: a
"mm_users" counter that is how many "real address space users" there are,
and a "mm_count" counter that is the number of "lazy" users (ie anonymous
users) plus one if there are any real users.
Usually there is at least one real user, but it could be that the real
user exited on another CPU while a lazy user was still active, so you do
actually get cases where you have a address space that is _only_ used by
lazy users. That is often a short-lived state, because once that thread
gets scheduled away in favour of a real thread, the "zombie" mm gets
released because "mm_users" becomes zero.
Also, a new rule is that _nobody_ ever has "init_mm" as a real MM any
more. "init_mm" should be considered just a "lazy context when no other
context is available", and in fact it is mainly used just at bootup when
no real VM has yet been created. So code that used to check
if (current->mm == &init_mm)
should generally just do
if (!current->mm)
instead (which makes more sense anyway - the test is basically one of "do
we have a user context", and is generally done by the page fault handler
and things like that).
Anyway, I put a pre-patch-2.3.13-1 on ftp.kernel.org just a moment ago,
because it slightly changes the interfaces to accommodate the alpha (who
would have thought it, but the alpha actually ends up having one of the
ugliest context switch codes - unlike the other architectures where the MM
and register state is separate, the alpha PALcode joins the two, and you
need to switch both together).
(From http://marc.info/?l=linux-kernel&m=93337278602211&w=2)


@ -1,3 +1,9 @@
.. _balance:
================
Memory Balancing
================
Started Jan 2000 by Kanoj Sarcar <kanoj@sgi.com>
Memory balancing is needed for !__GFP_ATOMIC and !__GFP_KSWAPD_RECLAIM as
@ -62,11 +68,11 @@ for non-sleepable allocations. Second, the HIGHMEM zone is also balanced,
so as to give a fighting chance for replace_with_highmem() to get a
HIGHMEM page, as well as to ensure that HIGHMEM allocations do not
fall back into regular zone. This also makes sure that HIGHMEM pages
are not leaked (for example, in situations where a HIGHMEM page is in
are not leaked (for example, in situations where a HIGHMEM page is in
the swapcache but is not being used by anyone)
kswapd also needs to know about the zones it should balance. kswapd is
primarily needed in a situation where balancing can not be done,
primarily needed in a situation where balancing can not be done,
probably because all allocation requests are coming from intr context
and all process contexts are sleeping. For 2.3, kswapd does not really
need to balance the highmem zone, since intr context does not request
@ -89,7 +95,8 @@ pages is below watermark[WMARK_LOW]; in which case zone_wake_kswapd is also set.
(Good) Ideas that I have heard:
1. Dynamic experience should influence balancing: number of failed requests
for a zone can be tracked and fed into the balancing scheme (jalvo@mbay.net)
for a zone can be tracked and fed into the balancing scheme (jalvo@mbay.net)
2. Implement a replace_with_highmem()-like replace_with_regular() to preserve
dma pages. (lkd@tantalophile.demon.co.uk)
dma pages. (lkd@tantalophile.demon.co.uk)


@ -1,4 +1,11 @@
MOTIVATION
.. _cleancache:
==========
Cleancache
==========
Motivation
==========
Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
@ -21,9 +28,10 @@ Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.
FAQs are included below.
:ref:`FAQs <faq>` are included below.
IMPLEMENTATION OVERVIEW
Implementation Overview
=======================
A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
@ -80,22 +88,33 @@ different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate. Callers must
lock the page to ensure serial behavior.
CLEANCACHE PERFORMANCE METRICS
Cleancache Performance Metrics
==============================
If properly configured, monitoring of cleancache is done via debugfs in
the /sys/kernel/debug/cleancache directory. The effectiveness of cleancache
the `/sys/kernel/debug/cleancache` directory. The effectiveness of cleancache
can be measured (across all filesystems) with:
succ_gets - number of gets that were successful
failed_gets - number of gets that failed
puts - number of puts attempted (all "succeed")
invalidates - number of invalidates attempted
``succ_gets``
number of gets that were successful
``failed_gets``
number of gets that failed
``puts``
number of puts attempted (all "succeed")
``invalidates``
number of invalidates attempted
A backend implementation may provide additional metrics.
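
For reference (not part of this patch set), these counters are ordinary debugfs
files and can be read like any other; a small sketch that reports the get hit
rate, assuming debugfs is mounted at /sys/kernel/debug::

    /* Illustrative only: compute the cleancache "get" hit rate. */
    #include <stdio.h>

    static long read_counter(const char *name)
    {
        char path[128];
        long val = 0;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/kernel/debug/cleancache/%s", name);
        f = fopen(path, "r");
        if (f) {
            fscanf(f, "%ld", &val);
            fclose(f);
        }
        return val;
    }

    int main(void)
    {
        long hits = read_counter("succ_gets");
        long misses = read_counter("failed_gets");

        if (hits + misses)
            printf("hit rate: %.1f%%\n", 100.0 * hits / (hits + misses));
        return 0;
    }
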
FAQ
.. _faq:
1) Where's the value? (Andrew Morton)
FAQ
===
* Where's the value? (Andrew Morton)
Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
@ -137,8 +156,8 @@ device that stores pages of data in a compressed state. And
the proposed "RAMster" driver shares RAM across multiple physical
systems.
2) Why does cleancache have its sticky fingers so deep inside the
filesystems and VFS? (Andrew Morton and Christoph Hellwig)
* Why does cleancache have its sticky fingers so deep inside the
filesystems and VFS? (Andrew Morton and Christoph Hellwig)
The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
@ -168,9 +187,9 @@ filesystems in the future.
The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).
3) Why not make cleancache asynchronous and batched so it can
more easily interface with real devices with DMA instead
of copying each individual page? (Minchan Kim)
* Why not make cleancache asynchronous and batched so it can more
easily interface with real devices with DMA instead of copying each
individual page? (Minchan Kim)
The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
@ -182,8 +201,8 @@ are avoided. While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.
4) Why is non-shared cleancache "exclusive"? And where is the
page "invalidated" after a "get"? (Minchan Kim)
* Why is non-shared cleancache "exclusive"? And where is the
page "invalidated" after a "get"? (Minchan Kim)
The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls. If you want inclusive,
@ -193,7 +212,7 @@ be easily extended to add a "get_no_invalidate" call.
The invalidate is done by the cleancache backend implementation.
5) What's the performance impact?
* What's the performance impact?
Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
@ -206,7 +225,7 @@ single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.
6) How do I add cleancache support for filesystem X? (Boaz Harrash)
* How do I add cleancache support for filesystem X? (Boaz Harrash)
Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
@ -217,26 +236,26 @@ not enable the optional cleancache.
Some points for a filesystem to consider:
- The FS should be block-device-based (e.g. a ram-based FS such
as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
file removal or truncation operations either go through VFS or
add hooks to do the equivalent cleancache "invalidate" operations
- To ensure coherency/correctness, either inode numbers must
be unique across the lifetime of the on-disk file OR the
FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
go through the do_mpag_readpage routine or the FS should add
hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGESIZE. This
is not an architectural restriction, but no backends currently
support anything different.
- A clustered FS should invoke the "shared_init_fs" cleancache
hook to get best performance for some backends.
- The FS should be block-device-based (e.g. a ram-based FS such
as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
file removal or truncation operations either go through VFS or
add hooks to do the equivalent cleancache "invalidate" operations
- To ensure coherency/correctness, either inode numbers must
be unique across the lifetime of the on-disk file OR the
FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
go through the do_mpag_readpage routine or the FS should add
hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGESIZE. This
is not an architectural restriction, but no backends currently
support anything different.
- A clustered FS should invoke the "shared_init_fs" cleancache
hook to get best performance for some backends.
7) Why not use the KVA of the inode as the key? (Christoph Hellwig)
* Why not use the KVA of the inode as the key? (Christoph Hellwig)
If cleancache would use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated. But, this
@ -251,7 +270,7 @@ of cleancache would be lost because the cache of pages in cleanache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.
8) Why is a global variable required?
* Why is a global variable required?
The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks. The alternative is a function call to check a static
@ -262,14 +281,14 @@ global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.
9) Does cleancache work with KVM?
* Does cleancache work with KVM?
The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM. This remains to be tested,
especially in an overcommitted system.
10) Does cleancache work in userspace? It sounds useful for
memory hungry caches like web browsers. (Jamie Lokier)
* Does cleancache work in userspace? It sounds useful for
memory hungry caches like web browsers. (Jamie Lokier)
No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Documentation/vm/conf.py (new file, 10 lines)

@ -0,0 +1,10 @@
# -*- coding: utf-8; mode: python -*-
project = "Linux Memory Management Documentation"
tags.add("subproject")
latex_documents = [
('index', 'memory-management.tex', project,
'The kernel development community', 'manual'),
]


@ -1,13 +1,20 @@
.. _frontswap:
=========
Frontswap
=========
Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
(Note, frontswap -- and cleancache (merged at 3.0) -- are the "frontends"
(Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
and the only necessary changes to the core kernel for transcendent memory;
all other supporting code -- the "backends" -- is implemented as drivers.
See the LWN.net article "Transcendent memory in a nutshell" for a detailed
overview of frontswap and related kernel parts:
https://lwn.net/Articles/454795/ )
See the LWN.net article `Transcendent memory in a nutshell`_
for a detailed overview of frontswap and related kernel parts)
.. _Transcendent memory in a nutshell: https://lwn.net/Articles/454795/
Frontswap is so named because it can be thought of as the opposite of
a "backing" store for a swap device. The storage is assumed to be
@ -50,19 +57,27 @@ or the store fails AND the page is invalidated. This ensures stale data may
never be obtained from frontswap.
If properly configured, monitoring of frontswap is done via debugfs in
the /sys/kernel/debug/frontswap directory. The effectiveness of
the `/sys/kernel/debug/frontswap` directory. The effectiveness of
frontswap can be measured (across all swap devices) with:
failed_stores - how many store attempts have failed
loads - how many loads were attempted (all should succeed)
succ_stores - how many store attempts have succeeded
invalidates - how many invalidates were attempted
``failed_stores``
how many store attempts have failed
``loads``
how many loads were attempted (all should succeed)
``succ_stores``
how many store attempts have succeeded
``invalidates``
how many invalidates were attempted
A backend implementation may provide additional metrics.
FAQ
===
1) Where's the value?
* Where's the value?
When a workload starts swapping, performance falls through the floor.
Frontswap significantly increases performance in many such workloads by
@ -117,8 +132,8 @@ A KVM implementation is underway and has been RFC'ed to lkml. And,
using frontswap, investigation is also underway on the use of NVM as
a memory extension technology.
2) Sure there may be performance advantages in some situations, but
what's the space/time overhead of frontswap?
* Sure there may be performance advantages in some situations, but
what's the space/time overhead of frontswap?
If CONFIG_FRONTSWAP is disabled, every frontswap hook compiles into
nothingness and the only overhead is a few extra bytes per swapon'ed
@ -148,8 +163,8 @@ pressure that can potentially outweigh the other advantages. A
backend, such as zcache, must implement policies to carefully (but
dynamically) manage memory limits to ensure this doesn't happen.
3) OK, how about a quick overview of what this frontswap patch does
in terms that a kernel hacker can grok?
* OK, how about a quick overview of what this frontswap patch does
in terms that a kernel hacker can grok?
Let's assume that a frontswap "backend" has registered during
kernel initialization; this registration indicates that this
@ -188,9 +203,9 @@ and (potentially) a swap device write are replaced by a "frontswap backend
store" and (possibly) a "frontswap backend loads", which are presumably much
faster.
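
For orientation (not part of this patch set), the registration mentioned above
goes through frontswap_register_ops() with an operations table roughly like the
skeleton below; the callback signatures are approximate for this kernel
generation and should be checked against include/linux/frontswap.h, with zswap
(mm/zswap.c) as the in-tree reference backend::

    /* Illustrative backend skeleton that refuses every store, so all
     * pages keep going to the real swap device. */
    #include <linux/frontswap.h>
    #include <linux/module.h>

    static void example_init(unsigned type) { }
    static int example_store(unsigned type, pgoff_t offset, struct page *page)
    {
        return -1;      /* reject: page is written to the swap device */
    }
    static int example_load(unsigned type, pgoff_t offset, struct page *page)
    {
        return -1;      /* nothing stored, nothing to load */
    }
    static void example_invalidate_page(unsigned type, pgoff_t offset) { }
    static void example_invalidate_area(unsigned type) { }

    static struct frontswap_ops example_ops = {
        .init            = example_init,
        .store           = example_store,
        .load            = example_load,
        .invalidate_page = example_invalidate_page,
        .invalidate_area = example_invalidate_area,
    };

    static int __init example_backend_init(void)
    {
        frontswap_register_ops(&example_ops);
        return 0;
    }
    module_init(example_backend_init);
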
4) Can't frontswap be configured as a "special" swap device that is
just higher priority than any real swap device (e.g. like zswap,
or maybe swap-over-nbd/NFS)?
* Can't frontswap be configured as a "special" swap device that is
just higher priority than any real swap device (e.g. like zswap,
or maybe swap-over-nbd/NFS)?
No. First, the existing swap subsystem doesn't allow for any kind of
swap hierarchy. Perhaps it could be rewritten to accommodate a hierarchy,
@ -240,9 +255,9 @@ installation, frontswap is useless. Swapless portable devices
can still use frontswap but a backend for such devices must configure
some kind of "ghost" swap device and ensure that it is never used.
5) Why this weird definition about "duplicate stores"? If a page
has been previously successfully stored, can't it always be
successfully overwritten?
* Why this weird definition about "duplicate stores"? If a page
has been previously successfully stored, can't it always be
successfully overwritten?
Nearly always it can, but no, sometimes it cannot. Consider an example
where data is compressed and the original 4K page has been compressed
@ -254,7 +269,7 @@ the old data and ensure that it is no longer accessible. Since the
swap subsystem then writes the new data to the read swap device,
this is the correct course of action to ensure coherency.
6) What is frontswap_shrink for?
* What is frontswap_shrink for?
When the (non-frontswap) swap subsystem swaps out a page to a real
swap device, that page is only taking up low-value pre-allocated disk
@ -267,7 +282,7 @@ to "repatriate" pages sent to a remote machine back to the local machine;
this is driven using the frontswap_shrink mechanism when memory pressure
subsides.
7) Why does the frontswap patch create the new include file swapfile.h?
* Why does the frontswap patch create the new include file swapfile.h?
The frontswap code depends on some swap-subsystem-internal data
structures that have, over the years, moved back and forth between


@ -1,25 +1,14 @@
.. _highmem:
====================
HIGH MEMORY HANDLING
====================
====================
High Memory Handling
====================
By: Peter Zijlstra <a.p.zijlstra@chello.nl>
Contents:
.. contents:: :local:
(*) What is high memory?
(*) Temporary virtual mappings.
(*) Using kmap_atomic.
(*) Cost of temporary mappings.
(*) i386 PAE.
====================
WHAT IS HIGH MEMORY?
What Is High Memory?
====================
High memory (highmem) is used when the size of physical memory approaches or
@ -38,7 +27,7 @@ kernel entry/exit. This means the available virtual memory space (4GiB on
i386) has to be divided between user and kernel space.
The traditional split for architectures using this approach is 3:1, 3GiB for
userspace and the top 1GiB for kernel space:
userspace and the top 1GiB for kernel space::
+--------+ 0xffffffff
| Kernel |
@ -58,40 +47,38 @@ and user maps. Some hardware (like some ARMs), however, have limited virtual
space when they use mm context tags.
==========================
TEMPORARY VIRTUAL MAPPINGS
Temporary Virtual Mappings
==========================
The kernel contains several ways of creating temporary mappings:
(*) vmap(). This can be used to make a long duration mapping of multiple
physical pages into a contiguous virtual space. It needs global
synchronization to unmap.
* vmap(). This can be used to make a long duration mapping of multiple
physical pages into a contiguous virtual space. It needs global
synchronization to unmap.
(*) kmap(). This permits a short duration mapping of a single page. It needs
global synchronization, but is amortized somewhat. It is also prone to
deadlocks when using in a nested fashion, and so it is not recommended for
new code.
* kmap(). This permits a short duration mapping of a single page. It needs
global synchronization, but is amortized somewhat. It is also prone to
deadlocks when using in a nested fashion, and so it is not recommended for
new code.
(*) kmap_atomic(). This permits a very short duration mapping of a single
page. Since the mapping is restricted to the CPU that issued it, it
performs well, but the issuing task is therefore required to stay on that
CPU until it has finished, lest some other task displace its mappings.
* kmap_atomic(). This permits a very short duration mapping of a single
page. Since the mapping is restricted to the CPU that issued it, it
performs well, but the issuing task is therefore required to stay on that
CPU until it has finished, lest some other task displace its mappings.
kmap_atomic() may also be used by interrupt contexts, since it does not
sleep and the caller may not sleep until after kunmap_atomic() is called.
kmap_atomic() may also be used by interrupt contexts, since it does not
sleep and the caller may not sleep until after kunmap_atomic() is called.
It may be assumed that k[un]map_atomic() won't fail.
It may be assumed that k[un]map_atomic() won't fail.
=================
USING KMAP_ATOMIC
Using kmap_atomic
=================
When and where to use kmap_atomic() is straightforward. It is used when code
wants to access the contents of a page that might be allocated from high memory
(see __GFP_HIGHMEM), for example a page in the pagecache. The API has two
functions, and they can be used in a manner similar to the following:
functions, and they can be used in a manner similar to the following::
/* Find the page of interest. */
struct page *page = find_get_page(mapping, offset);
@ -109,7 +96,7 @@ Note that the kunmap_atomic() call takes the result of the kmap_atomic() call
not the argument.
If you need to map two pages because you want to copy from one page to
another you need to keep the kmap_atomic calls strictly nested, like:
another you need to keep the kmap_atomic calls strictly nested, like::
vaddr1 = kmap_atomic(page1);
vaddr2 = kmap_atomic(page2);
@ -120,8 +107,7 @@ another you need to keep the kmap_atomic calls strictly nested, like:
kunmap_atomic(vaddr1);
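
Since the hunk above truncates the snippet, the complete nested sequence looks
like the sketch below (illustrative, not part of the patch); for this
particular job the kernel also provides copy_highpage()::

    /* Copy one potentially-highmem page to another with strictly nested
     * kmap_atomic()/kunmap_atomic() calls, as described above. */
    static void copy_page_contents(struct page *dst, struct page *src)
    {
        void *vdst = kmap_atomic(dst);
        void *vsrc = kmap_atomic(src);

        memcpy(vdst, vsrc, PAGE_SIZE);

        kunmap_atomic(vsrc);    /* unmap in reverse (nested) order */
        kunmap_atomic(vdst);
    }
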
==========================
COST OF TEMPORARY MAPPINGS
Cost of Temporary Mappings
==========================
The cost of creating temporary mappings can be quite high. The arch has to
@ -136,25 +122,24 @@ If CONFIG_MMU is not set, then there can be no temporary mappings and no
highmem. In such a case, the arithmetic approach will also be used.
========
i386 PAE
========
The i386 arch, under some circumstances, will permit you to stick up to 64GiB
of RAM into your 32-bit machine. This has a number of consequences:
(*) Linux needs a page-frame structure for each page in the system and the
pageframes need to live in the permanent mapping, which means:
* Linux needs a page-frame structure for each page in the system and the
pageframes need to live in the permanent mapping, which means:
(*) you can have 896M/sizeof(struct page) page-frames at most; with struct
page being 32-bytes that would end up being something in the order of 112G
worth of pages; the kernel, however, needs to store more than just
page-frames in that memory...
* you can have 896M/sizeof(struct page) page-frames at most; with struct
page being 32-bytes that would end up being something in the order of 112G
worth of pages; the kernel, however, needs to store more than just
page-frames in that memory...
(*) PAE makes your page tables larger - which slows the system down as more
data has to be accessed to traverse in TLB fills and the like. One
advantage is that PAE has more PTE bits and can provide advanced features
like NX and PAT.
* PAE makes your page tables larger - which slows the system down as more
data has to be accessed to traverse in TLB fills and the like. One
advantage is that PAE has more PTE bits and can provide advanced features
like NX and PAT.
The general recommendation is that you don't use more than 8GiB on a 32-bit
machine - although more might work for you and your workload, you're pretty


@ -1,4 +1,8 @@
.. _hmm:
=====================================
Heterogeneous Memory Management (HMM)
=====================================
Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into regular kernel path, with the cornerstone
@ -6,10 +10,10 @@ of this being specialized struct page for such memory (see sections 5 to 7 of
this document).
HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
allowing a device to transparently access program address coherently with the
CPU meaning that any valid pointer on the CPU is also a valid pointer for the
device. This is becoming mandatory to simplify the use of advanced hetero-
geneous computing where GPU, DSP, or FPGA are used to perform various
allowing a device to transparently access program address coherently with
the CPU meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPU, DSP, or FPGA are used to perform various
computations on behalf of a process.
This document is divided as follows: in the first section I expose the problems
@ -21,19 +25,10 @@ fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows lever-
aging the device DMA engine.
.. contents:: :local:
1) Problems of using a device specific memory allocator:
2) I/O bus, device memory characteristics
3) Shared address space and migration
4) Address space mirroring implementation and API
5) Represent and manage device memory from core kernel point of view
6) Migration to and from device memory
7) Memory cgroup (memcg) and rss accounting
-------------------------------------------------------------------------------
1) Problems of using a device specific memory allocator:
Problems of using a device specific memory allocator
====================================================
Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
@ -77,9 +72,8 @@ are only do-able with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.
-------------------------------------------------------------------------------
2) I/O bus, device memory characteristics
I/O bus, device memory characteristics
======================================
I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
@ -109,9 +103,8 @@ access any memory but we must also permit any memory to be migrated to device
memory while device is using it (blocking CPU access while it happens).
-------------------------------------------------------------------------------
3) Shared address space and migration
Shared address space and migration
==================================
HMM intends to provide two main features. First one is to share the address
space by duplicating the CPU page table in the device page table so the same
@ -148,23 +141,23 @@ ages device memory by migrating the part of the data set that is actively being
used by the device.
-------------------------------------------------------------------------------
4) Address space mirroring implementation and API
Address space mirroring implementation and API
==============================================
Address space mirroring's main objective is to allow duplication of a range of
CPU page table into a device page table; HMM helps keep both synchronized. A
device driver that wants to mirror a process address space must start with the
registration of an hmm_mirror struct:
registration of an hmm_mirror struct::
int hmm_mirror_register(struct hmm_mirror *mirror,
struct mm_struct *mm);
int hmm_mirror_register_locked(struct hmm_mirror *mirror,
struct mm_struct *mm);
The locked variant is to be used when the driver is already holding mmap_sem
of the mm in write mode. The mirror struct has a set of callbacks that are used
to propagate CPU page tables:
to propagate CPU page tables::
struct hmm_mirror_ops {
/* sync_cpu_device_pagetables() - synchronize page tables
@ -193,10 +186,10 @@ The device driver must perform the update action to the range (mark range
read only, or fully unmap, ...). The device must be done with the update before
the driver callback returns.
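
As an illustration of the registration step (not part of the patch; the
callback signature is approximated from this kernel generation's API and
``struct example_dev`` is a made-up driver context), a driver might wire this
up as follows::

    static void example_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                                   enum hmm_update_type update,
                                                   unsigned long start,
                                                   unsigned long end)
    {
        /* Invalidate the device page table for [start, end) and wait for
         * the device to finish with it before returning, as required. */
    }

    static const struct hmm_mirror_ops example_mirror_ops = {
        .sync_cpu_device_pagetables = example_sync_cpu_device_pagetables,
    };

    static int example_mirror_process(struct example_dev *edev,
                                      struct mm_struct *mm)
    {
        edev->mirror.ops = &example_mirror_ops;
        return hmm_mirror_register(&edev->mirror, mm);
    }
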
When the device driver wants to populate a range of virtual addresses, it can
use either:
int hmm_vma_get_pfns(struct vm_area_struct *vma,
use either::
int hmm_vma_get_pfns(struct vm_area_struct *vma,
struct hmm_range *range,
unsigned long start,
unsigned long end,
@ -221,7 +214,7 @@ provides a set of flags to help the driver identify special CPU page table
entries.
Locking with the update() callback is the most important aspect the driver must
respect in order to keep things properly synchronized. The usage pattern is:
respect in order to keep things properly synchronized. The usage pattern is::
int driver_populate_range(...)
{
@ -262,9 +255,8 @@ report commands as executed is serialized (there is no point in doing this
concurrently).
-------------------------------------------------------------------------------
5) Represent and manage device memory from core kernel point of view
Represent and manage device memory from core kernel point of view
=================================================================
Several different designs were tried to support device memory. First one used
a device specific data structure to keep information about migrated memory and
@ -280,14 +272,14 @@ unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.
HMM provides a set of helpers to register and hotplug device memory as a new
region needing a struct page. This is offered through a very simple API:
region needing a struct page. This is offered through a very simple API::
struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
struct device *device,
unsigned long size);
void hmm_devmem_remove(struct hmm_devmem *devmem);
The hmm_devmem_ops is where most of the important things are:
The hmm_devmem_ops is where most of the important things are::
struct hmm_devmem_ops {
void (*free)(struct hmm_devmem *devmem, struct page *page);
@ -306,13 +298,12 @@ which it cannot do. This second callback must trigger a migration back to
system memory.
-------------------------------------------------------------------------------
6) Migration to and from device memory
Migration to and from device memory
===================================
Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copy from and to device memory. For this we need a new
migration helper:
migration helper::
int migrate_vma(const struct migrate_vma_ops *ops,
struct vm_area_struct *vma,
@ -331,7 +322,7 @@ migration might be for a range of addresses the device is actively accessing.
The migrate_vma_ops struct defines two callbacks. First one (alloc_and_copy())
controls destination memory allocation and copy operation. Second one is there
to allow the device driver to perform cleanup operations after migration.
to allow the device driver to perform cleanup operations after migration::
struct migrate_vma_ops {
void (*alloc_and_copy)(struct vm_area_struct *vma,
@ -365,9 +356,8 @@ bandwidth but this is considered as a rare event and a price that we are
willing to pay to keep all the code simpler.
-------------------------------------------------------------------------------
7) Memory cgroup (memcg) and rss accounting
Memory cgroup (memcg) and rss accounting
========================================
For now device memory is accounted as any regular page in rss counters (either
anonymous if device page is used for anonymous, file if device page is used for


@ -1,6 +1,13 @@
Hugetlbfs Reservation Overview
------------------------------
Huge pages as described at 'Documentation/vm/hugetlbpage.txt' are typically
.. _hugetlbfs_reserve:
=====================
Hugetlbfs Reservation
=====================
Overview
========
Huge pages as described at :ref:`hugetlbpage` are typically
preallocated for application use. These huge pages are instantiated in a
task's address space at page fault time if the VMA indicates huge pages are
to be used. If no huge page exists at page fault time, the task is sent
@ -17,47 +24,55 @@ describe how huge page reserve processing is done in the v4.10 kernel.
Audience
--------
========
This description is primarily targeted at kernel developers who are modifying
hugetlbfs code.
The Data Structures
-------------------
===================
resv_huge_pages
This is a global (per-hstate) count of reserved huge pages. Reserved
huge pages are only available to the task which reserved them.
Therefore, the number of huge pages generally available is computed
as (free_huge_pages - resv_huge_pages).
as (``free_huge_pages - resv_huge_pages``).
Reserve Map
A reserve map is described by the structure:
struct resv_map {
struct kref refs;
spinlock_t lock;
struct list_head regions;
long adds_in_progress;
struct list_head region_cache;
long region_cache_count;
};
A reserve map is described by the structure::
struct resv_map {
struct kref refs;
spinlock_t lock;
struct list_head regions;
long adds_in_progress;
struct list_head region_cache;
long region_cache_count;
};
There is one reserve map for each huge page mapping in the system.
The regions list within the resv_map describes the regions within
the mapping. A region is described as:
struct file_region {
struct list_head link;
long from;
long to;
};
the mapping. A region is described as::
struct file_region {
struct list_head link;
long from;
long to;
};
The 'from' and 'to' fields of the file region structure are huge page
indices into the mapping. Depending on the type of mapping, a
region in the resv_map may indicate reservations exist for the
range, or reservations do not exist.
Flags for MAP_PRIVATE Reservations
These are stored in the bottom bits of the reservation map pointer.
#define HPAGE_RESV_OWNER (1UL << 0) Indicates this task is the
owner of the reservations associated with the mapping.
#define HPAGE_RESV_UNMAPPED (1UL << 1) Indicates task originally
mapping this range (and creating reserves) has unmapped a
page from this task (the child) due to a failed COW.
``#define HPAGE_RESV_OWNER (1UL << 0)``
Indicates this task is the owner of the reservations
associated with the mapping.
``#define HPAGE_RESV_UNMAPPED (1UL << 1)``
Indicates task originally mapping this range (and creating
reserves) has unmapped a page from this task (the child)
due to a failed COW.
Page Flags
The PagePrivate page flag is used to indicate that a huge page
reservation must be restored when the huge page is freed. More
@ -65,12 +80,14 @@ Page Flags
Reservation Map Location (Private or Shared)
--------------------------------------------
============================================
A huge page mapping or segment is either private or shared. If private,
it is typically only available to a single address space (task). If shared,
it can be mapped into multiple address spaces (tasks). The location and
semantics of the reservation map is significantly different for two types
of mappings. Location differences are:
- For private mappings, the reservation map hangs off the VMA structure.
Specifically, vma->vm_private_data. This reserve map is created at the
time the mapping (mmap(MAP_PRIVATE)) is created.
@ -82,15 +99,15 @@ of mappings. Location differences are:
Creating Reservations
---------------------
=====================
Reservations are created when a huge page backed shared memory segment is
created (shmget(SHM_HUGETLB)) or a mapping is created via mmap(MAP_HUGETLB).
These operations result in a call to the routine hugetlb_reserve_pages()
These operations result in a call to the routine hugetlb_reserve_pages()::
int hugetlb_reserve_pages(struct inode *inode,
long from, long to,
struct vm_area_struct *vma,
vm_flags_t vm_flags)
int hugetlb_reserve_pages(struct inode *inode,
long from, long to,
struct vm_area_struct *vma,
vm_flags_t vm_flags)
The first thing hugetlb_reserve_pages() does is check for the NORESERVE
flag was specified in either the shmget() or mmap() call. If NORESERVE
@ -105,6 +122,7 @@ the 'from' and 'to' arguments have been adjusted by this offset.
One of the big differences between PRIVATE and SHARED mappings is the way
in which reservations are represented in the reservation map.
- For shared mappings, an entry in the reservation map indicates a reservation
exists or did exist for the corresponding page. As reservations are
consumed, the reservation map is not modified.
@ -121,12 +139,13 @@ to indicate this VMA owns the reservations.
The reservation map is consulted to determine how many huge page reservations
are needed for the current mapping/segment. For private mappings, this is
always the value (to - from). However, for shared mappings it is possible that some reservations may already exist within the range (to - from). See the
section "Reservation Map Modifications" for details on how this is accomplished.
section :ref:`Reservation Map Modifications <resv_map_modifications>`
for details on how this is accomplished.
The mapping may be associated with a subpool. If so, the subpool is consulted
to ensure there is sufficient space for the mapping. It is possible that the
subpool has set aside reservations that can be used for the mapping. See the
section "Subpool Reservations" for more details.
section :ref:`Subpool Reservations <sub_pool_resv>` for more details.
After consulting the reservation map and subpool, the number of needed new
reservations is known. The routine hugetlb_acct_memory() is called to check
@ -135,9 +154,11 @@ calls into routines that potentially allocate and adjust surplus page counts.
However, within those routines the code is simply checking to ensure there
are enough free huge pages to accommodate the reservation. If there are,
the global reservation count resv_huge_pages is adjusted something like the
following.
following::
if (resv_needed <= (resv_huge_pages - free_huge_pages))
resv_huge_pages += resv_needed;
Note that the global lock hugetlb_lock is held when checking and adjusting
these counters.
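
From user space, the effect of these counters is visible through the per-size
sysfs files touched earlier in this series (free_hugepages, resv_hugepages); a
small illustrative sketch (not part of the patch), assuming the default 2 MB
huge page size::

    /* Illustrative only: huge pages generally available to new requests,
     * i.e. free_huge_pages - resv_huge_pages, for the 2 MB hstate. */
    #include <stdio.h>

    int main(void)
    {
        const char *base = "/sys/kernel/mm/hugepages/hugepages-2048kB/";
        long free_hp = 0, resv_hp = 0;
        char path[160];
        FILE *f;

        snprintf(path, sizeof(path), "%sfree_hugepages", base);
        if ((f = fopen(path, "r"))) { fscanf(f, "%ld", &free_hp); fclose(f); }

        snprintf(path, sizeof(path), "%sresv_hugepages", base);
        if ((f = fopen(path, "r"))) { fscanf(f, "%ld", &resv_hp); fclose(f); }

        printf("generally available huge pages: %ld\n", free_hp - resv_hp);
        return 0;
    }
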
@ -152,14 +173,18 @@ If hugetlb_reserve_pages() was successful, the global reservation count and
reservation map associated with the mapping will be modified as required to
ensure reservations exist for the range 'from' - 'to'.
.. _consume_resv:
Consuming Reservations/Allocating a Huge Page
---------------------------------------------
=============================================
Reservations are consumed when huge pages associated with the reservations
are allocated and instantiated in the corresponding mapping. The allocation
is performed within the routine alloc_huge_page().
struct page *alloc_huge_page(struct vm_area_struct *vma,
unsigned long addr, int avoid_reserve)
is performed within the routine alloc_huge_page()::
struct page *alloc_huge_page(struct vm_area_struct *vma,
unsigned long addr, int avoid_reserve)
alloc_huge_page is passed a VMA pointer and a virtual address, so it can
consult the reservation map to determine if a reservation exists. In addition,
alloc_huge_page takes the argument avoid_reserve which indicates reserves
@ -170,8 +195,9 @@ page are being allocated.
The helper routine vma_needs_reservation() is called to determine if a
reservation exists for the address within the mapping(vma). See the section
"Reservation Map Helper Routines" for detailed information on what this
routine does. The value returned from vma_needs_reservation() is generally
:ref:`Reservation Map Helper Routines <resv_map_helpers>` for detailed
information on what this routine does.
The value returned from vma_needs_reservation() is generally
0 or 1. 0 if a reservation exists for the address, 1 if no reservation exists.
If a reservation does not exist, and there is a subpool associated with the
mapping the subpool is consulted to determine if it contains reservations.
@ -180,21 +206,25 @@ However, in every case the avoid_reserve argument overrides the use of
a reservation for the allocation. After determining whether a reservation
exists and can be used for the allocation, the routine dequeue_huge_page_vma()
is called. This routine takes two arguments related to reservations:
- avoid_reserve, this is the same value/argument passed to alloc_huge_page()
- chg, even though this argument is of type long only the values 0 or 1 are
passed to dequeue_huge_page_vma. If the value is 0, it indicates a
reservation exists (see the section "Memory Policy and Reservations" for
possible issues). If the value is 1, it indicates a reservation does not
exist and the page must be taken from the global free pool if possible.
The free lists associated with the memory policy of the VMA are searched for
a free page. If a page is found, the value free_huge_pages is decremented
when the page is removed from the free list. If there was a reservation
associated with the page, the following adjustments are made:
associated with the page, the following adjustments are made::
SetPagePrivate(page); /* Indicates allocating this page consumed
* a reservation, and if an error is
* encountered such that the page must be
* freed, the reservation will be restored. */
resv_huge_pages--; /* Decrement the global reservation count */
Note, if no huge page can be found that satisfies the VMA's memory policy
an attempt will be made to allocate one using the buddy allocator. This
brings up the issue of surplus huge pages and overcommit which is beyond
@ -222,12 +252,14 @@ mapping. In such cases, the reservation count and subpool free page count
will be off by one. This rare condition can be identified by comparing the
return value from vma_needs_reservation and vma_commit_reservation. If such
a race is detected, the subpool and global reserve counts are adjusted to
compensate. See the section "Reservation Map Helper Routines" for more
compensate. See the section
:ref:`Reservation Map Helper Routines <resv_map_helpers>` for more
information on these routines.
Instantiate Huge Pages
----------------------
======================
After huge page allocation, the page is typically added to the page tables
of the allocating task. Before this, pages in a shared mapping are added
to the page cache and pages in private mappings are added to an anonymous
@ -237,7 +269,8 @@ to the global reservation count (resv_huge_pages).
Freeing Huge Pages
------------------
==================
Huge page freeing is performed by the routine free_huge_page(). This routine
is the destructor for hugetlbfs compound pages. As a result, it is only
passed a pointer to the page struct. When a huge page is freed, reservation
@ -247,7 +280,8 @@ on an error path where a global reserve count must be restored.
The page->private field points to any subpool associated with the page.
If the PagePrivate flag is set, it indicates the global reserve count should
be adjusted (see the section "Consuming Reservations/Allocating a Huge Page"
be adjusted (see the section
:ref:`Consuming Reservations/Allocating a Huge Page <consume_resv>`
for information on how these are set).
The routine first calls hugepage_subpool_put_pages() for the page. If this
@ -259,9 +293,11 @@ Therefore, the global resv_huge_pages counter is incremented in this case.
If the PagePrivate flag was set in the page, the global resv_huge_pages counter
will always be incremented.
.. _sub_pool_resv:
Subpool Reservations
--------------------
====================
There is a struct hstate associated with each huge page size. The hstate
tracks all huge pages of the specified size. A subpool represents a subset
of pages within a hstate that is associated with a mounted hugetlbfs
@ -295,7 +331,8 @@ the global pools.
COW and Reservations
--------------------
====================
Since shared mappings all point to and use the same underlying pages, the
biggest reservation concern for COW is private mappings. In this case,
two tasks can be pointing at the same previously allocated page. One task
@ -326,30 +363,36 @@ faults on a non-present page. But, the original owner of the
mapping/reservation will behave as expected.
.. _resv_map_modifications:
Reservation Map Modifications
-----------------------------
=============================
The following low level routines are used to make modifications to a
reservation map. Typically, these routines are not called directly. Rather,
a reservation map helper routine is called which calls one of these low level
routines. These low level routines are fairly well documented in the source
code (mm/hugetlb.c). These routines are:
long region_chg(struct resv_map *resv, long f, long t);
long region_add(struct resv_map *resv, long f, long t);
void region_abort(struct resv_map *resv, long f, long t);
long region_count(struct resv_map *resv, long f, long t);
code (mm/hugetlb.c). These routines are::
long region_chg(struct resv_map *resv, long f, long t);
long region_add(struct resv_map *resv, long f, long t);
void region_abort(struct resv_map *resv, long f, long t);
long region_count(struct resv_map *resv, long f, long t);
Operations on the reservation map typically involve two operations:
1) region_chg() is called to examine the reserve map and determine how
many pages in the specified range [f, t) are NOT currently represented.
The calling code performs global checks and allocations to determine if
there are enough huge pages for the operation to succeed.
2a) If the operation can succeed, region_add() is called to actually modify
the reservation map for the same range [f, t) previously passed to
region_chg().
2b) If the operation can not succeed, region_abort is called for the same range
[f, t) to abort the operation.
2)
a) If the operation can succeed, region_add() is called to actually modify
the reservation map for the same range [f, t) previously passed to
region_chg().
b) If the operation cannot succeed, region_abort() is called for the same
range [f, t) to abort the operation.
Note that this is a two step process where region_add() and region_abort()
are guaranteed to succeed after a prior call to region_chg() for the same
@ -371,6 +414,7 @@ and make the appropriate adjustments.
The routine region_del() is called to remove regions from a reservation map.
It is typically called in the following situations:
- When a file in the hugetlbfs filesystem is being removed, the inode will
be released and the reservation map freed. Before freeing the reservation
map, all the individual file_region structures must be freed. In this case
@ -384,6 +428,7 @@ It is typically called in the following situations:
removed, region_del() is called to remove the corresponding entry from the
reservation map. In this case, region_del is passed the range
[page_idx, page_idx + 1).
In every case, region_del() will return the number of pages removed from the
reservation map. In VERY rare cases, region_del() can fail. This can only
happen in the hole punch case where it has to split an existing file_region
@ -403,9 +448,11 @@ outstanding (outstanding = (end - start) - region_count(resv, start, end)).
Since the mapping is going away, the subpool and global reservation counts
are decremented by the number of outstanding reservations.
.. _resv_map_helpers:
Reservation Map Helper Routines
-------------------------------
===============================
Several helper routines exist to query and modify the reservation maps.
These routines are only concerned with reservations for a specific huge
page, so they just pass in an address instead of a range. In addition,
@ -414,32 +461,40 @@ or shared) and the location of the reservation map (inode or VMA) can be
determined. These routines simply call the underlying routines described
in the section "Reservation Map Modifications". However, they do take into
account the 'opposite' meaning of reservation map entries for private and
shared mappings and hide this detail from the caller.
shared mappings and hide this detail from the caller::
long vma_needs_reservation(struct hstate *h,
struct vm_area_struct *vma,
unsigned long addr)
long vma_needs_reservation(struct hstate *h,
struct vm_area_struct *vma, unsigned long addr)
This routine calls region_chg() for the specified page. If no reservation
exists, 1 is returned. If a reservation exists, 0 is returned.
exists, 1 is returned. If a reservation exists, 0 is returned::
long vma_commit_reservation(struct hstate *h,
struct vm_area_struct *vma,
unsigned long addr)
long vma_commit_reservation(struct hstate *h,
struct vm_area_struct *vma, unsigned long addr)
This calls region_add() for the specified page. As in the case of region_chg
and region_add, this routine is to be called after a previous call to
vma_needs_reservation. It will add a reservation entry for the page. It
returns 1 if the reservation was added and 0 if not. The return value should
be compared with the return value of the previous call to
vma_needs_reservation. An unexpected difference indicates the reservation
map was modified between calls.
map was modified between calls::
void vma_end_reservation(struct hstate *h,
struct vm_area_struct *vma,
unsigned long addr)
void vma_end_reservation(struct hstate *h,
struct vm_area_struct *vma, unsigned long addr)
This calls region_abort() for the specified page. As in the case of region_chg
and region_abort, this routine is to be called after a previous call to
vma_needs_reservation. It will abort/end the in progress reservation add
operation.
operation::
long vma_add_reservation(struct hstate *h,
struct vm_area_struct *vma,
unsigned long addr)
long vma_add_reservation(struct hstate *h,
struct vm_area_struct *vma, unsigned long addr)
This is a special wrapper routine to help facilitate reservation cleanup
on error paths. It is only called from the routine restore_reserve_on_error().
This routine is used in conjunction with vma_needs_reservation in an attempt
@ -453,8 +508,10 @@ be done on error paths.
Reservation Cleanup in Error Paths
----------------------------------
As mentioned in the section "Reservation Map Helper Routines", reservation
==================================
As mentioned in the section
:ref:`Reservation Map Helper Routines <resv_map_helpers>`, reservation
map modifications are performed in two steps. First vma_needs_reservation
is called before a page is allocated. If the allocation is successful,
then vma_commit_reservation is called. If not, vma_end_reservation is called.
@ -494,13 +551,14 @@ so that a reservation will not be leaked when the huge page is freed.
Reservations and Memory Policy
------------------------------
==============================
Per-node huge page lists existed in struct hstate when git was first used
to manage Linux code. The concept of reservations was added some time later.
When reservations were added, no attempt was made to take memory policy
into account. While cpusets are not exactly the same as memory policy, this
comment in hugetlb_acct_memory sums up the interaction between reservations
and cpusets/memory policy.
and cpusets/memory policy::
/*
* When cpuset is configured, it breaks the strict hugetlb page
* reservation as the accounting is done on a global variable. Such

View file

@ -1,3 +1,11 @@
.. _hugetlbpage:
=============
HugeTLB Pages
=============
Overview
========
The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of multiple page size support
@ -18,53 +26,59 @@ First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
automatically when CONFIG_HUGETLBFS is selected) configuration
options.
The /proc/meminfo file provides information about the total number of
The ``/proc/meminfo`` file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool. It also displays
default huge page size and information about the number of free, reserved
and surplus huge pages in the pool of huge pages of default size.
The huge page size is needed for generating the proper alignment and
size of the arguments to system calls that map huge page regions.
The output of "cat /proc/meminfo" will include lines like:
The output of ``cat /proc/meminfo`` will include lines like::
.....
HugePages_Total: uuu
HugePages_Free: vvv
HugePages_Rsvd: www
HugePages_Surp: xxx
Hugepagesize: yyy kB
Hugetlb: zzz kB
HugePages_Total: uuu
HugePages_Free: vvv
HugePages_Rsvd: www
HugePages_Surp: xxx
Hugepagesize: yyy kB
Hugetlb: zzz kB
where:
HugePages_Total is the size of the pool of huge pages.
HugePages_Free is the number of huge pages in the pool that are not yet
allocated.
HugePages_Rsvd is short for "reserved," and is the number of huge pages for
which a commitment to allocate from the pool has been made,
but no allocation has yet been made. Reserved huge pages
guarantee that an application will be able to allocate a
huge page from the pool of huge pages at fault time.
HugePages_Surp is short for "surplus," and is the number of huge pages in
the pool above the value in /proc/sys/vm/nr_hugepages. The
maximum number of surplus huge pages is controlled by
/proc/sys/vm/nr_overcommit_hugepages.
Hugepagesize is the default hugepage size (in Kb).
Hugetlb is the total amount of memory (in kB), consumed by huge
pages of all sizes.
If huge pages of different sizes are in use, this number
will exceed HugePages_Total * Hugepagesize. To get more
detailed information, please, refer to
/sys/kernel/mm/hugepages (described below).
HugePages_Total
is the size of the pool of huge pages.
HugePages_Free
is the number of huge pages in the pool that are not yet
allocated.
HugePages_Rsvd
is short for "reserved," and is the number of huge pages for
which a commitment to allocate from the pool has been made,
but no allocation has yet been made. Reserved huge pages
guarantee that an application will be able to allocate a
huge page from the pool of huge pages at fault time.
HugePages_Surp
is short for "surplus," and is the number of huge pages in
the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
maximum number of surplus huge pages is controlled by
``/proc/sys/vm/nr_overcommit_hugepages``.
Hugepagesize
is the default hugepage size (in kB).
Hugetlb
is the total amount of memory (in kB), consumed by huge
pages of all sizes.
If huge pages of different sizes are in use, this number
will exceed HugePages_Total \* Hugepagesize. To get more
detailed information, please refer to
``/sys/kernel/mm/hugepages`` (described below).
/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.
``/proc/filesystems`` should also show a filesystem of type "hugetlbfs"
configured in the kernel.
/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
``/proc/sys/vm/nr_hugepages`` indicates the current number of "persistent" huge
pages in the kernel's huge page pool. "Persistent" huge pages will be
returned to the huge page pool when freed by a task. A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of 'nr_hugepages'.
by increasing or decreasing the value of ``nr_hugepages``.
Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes. Huge pages cannot be swapped out under
@ -86,10 +100,10 @@ with a huge page size selection parameter "hugepagesz=<size>". <size> must
be specified in bytes with optional scale suffix [kKmMgG]. The default huge
page size may be selected with the "default_hugepagesz=<size>" boot parameter.
When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages:
default sized persistent huge pages::
echo 20 > /proc/sys/vm/nr_hugepages
@ -98,7 +112,7 @@ huge page pool to 20, allocating or freeing huge pages, as required.
On a NUMA platform, the kernel will attempt to distribute the huge page pool
over all the set of allowed nodes specified by the NUMA memory policy of the
task that modifies nr_hugepages. The default for the allowed nodes--when the
task that modifies ``nr_hugepages``. The default for the allowed nodes--when the
task has default memory policy--is all on-line nodes with memory. Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages. See the discussion
@ -117,51 +131,52 @@ init files. This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physical contiguous pages
is still very high. Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo. To check the per node
distribution of huge pages in a NUMA system, use:
distribution of huge pages in a NUMA system, use::
cat /sys/devices/system/node/node*/meminfo | fgrep Huge
/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
``/proc/sys/vm/nr_overcommit_hugepages`` specifies how large the pool of
huge pages can grow, if more huge pages than ``/proc/sys/vm/nr_hugepages`` are
requested by applications. Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.
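For example, a minimal sketch (the value 20 is only an illustration) of
allowing up to 20 surplus huge pages of the default size, and later disabling
overcommit again, would be::

	echo 20 > /proc/sys/vm/nr_overcommit_hugepages
	# ... once overcommit is no longer wanted ...
	echo 0 > /proc/sys/vm/nr_overcommit_hugepages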
When increasing the huge page pool size via nr_hugepages, any existing surplus
pages will first be promoted to persistent huge pages. Then, additional
When increasing the huge page pool size via ``nr_hugepages``, any existing
surplus pages will first be promoted to persistent huge pages. Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.
The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the nr_hugepages sysctl to a
the default huge page size by setting the ``nr_hugepages`` sysctl to a
smaller value. The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying nr_hugepages.
across all nodes in the memory policy of the task modifying ``nr_hugepages``.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.
Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
Caveat: Shrinking the persistent huge page pool via ``nr_hugepages`` such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages. This will occur even if
the number of surplus pages would exceed the overcommit value. As long as
this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
this condition holds--that is, until ``nr_hugepages+nr_overcommit_hugepages`` is
increased sufficiently, or the surplus huge pages go out of use and are freed--
no more surplus huge pages will be allowed to be allocated.
With support for multiple huge page pools at run-time available, much of
the huge page userspace interface in /proc/sys/vm has been duplicated in sysfs.
The /proc interfaces discussed above have been retained for backwards
compatibility. The root huge page control directory in sysfs is:
the huge page userspace interface in ``/proc/sys/vm`` has been duplicated in
sysfs.
The ``/proc`` interfaces discussed above have been retained for backwards
compatibility. The root huge page control directory in sysfs is::
/sys/kernel/mm/hugepages
For each huge page size supported by the running kernel, a subdirectory
will exist, of the form:
will exist, of the form::
hugepages-${size}kB
Inside each of these directories, the same set of files will exist:
Inside each of these directories, the same set of files will exist::
nr_hugepages
nr_hugepages_mempolicy
@ -176,33 +191,33 @@ which function as described above for the default huge page-sized case.
Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================
Whether huge pages are allocated and freed via the /proc interface or
the /sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
sysctl or attribute. When the nr_hugepages attribute is used, mempolicy
Whether huge pages are allocated and freed via the ``/proc`` interface or
the ``/sysfs`` interface using the ``nr_hugepages_mempolicy`` attribute, the
NUMA nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the ``nr_hugepages_mempolicy``
sysctl or attribute. When the ``nr_hugepages`` attribute is used, mempolicy
is ignored.
The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the nr_hugepages example above, is:
huge page pool, using the ``nr_hugepages`` example above, is::
numactl --interleave <node-list> echo 20 \
>/proc/sys/vm/nr_hugepages_mempolicy
or, more succinctly:
or, more succinctly::
numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy
This will allocate or free abs(20 - nr_hugepages) to or from the nodes
This will allocate or free ``abs(20 - nr_hugepages)`` to or from the nodes
specified in <node-list>, depending on whether number of persistent huge pages
is initially less than or greater than 20, respectively. No huge pages will be
allocated or freed on any node not included in the specified <node-list>.
When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
When adjusting the persistent hugepage count via ``nr_hugepages_mempolicy``, any
memory policy mode--bind, preferred, local or interleave--may be used. The
resulting effect on persistent huge page allocation is as follows:
1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
#. Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.rst],
persistent huge pages will be distributed across the node or nodes
specified in the mempolicy as if "interleave" had been specified.
However, if a node in the policy does not contain sufficient contiguous
@ -212,7 +227,7 @@ resulting effect on persistent huge page allocation is as follows:
possibly, allocation of persistent huge pages on nodes not allowed by
the task's memory policy.
2) One or more nodes may be specified with the bind or interleave policy.
#. One or more nodes may be specified with the bind or interleave policy.
If more than one node is specified with the preferred policy, only the
lowest numeric id will be used. Local policy will select the node where
the task is running at the time the nodes_allowed mask is constructed.
@ -222,20 +237,20 @@ resulting effect on persistent huge page allocation is as follows:
indeterminate. Thus, local policy is not very useful for this purpose.
Any of the other mempolicy modes may be used to specify a single node.
3) The nodes allowed mask will be derived from any non-default task mempolicy,
#. The nodes allowed mask will be derived from any non-default task mempolicy,
whether this policy was set explicitly by the task itself or one of its
ancestors, such as numactl. This means that if the task is invoked from a
shell with non-default policy, that policy will be used. One can specify a
node list of "all" with numactl --interleave or --membind [-m] to achieve
interleaving over all nodes in the system or cpuset.
4) Any task mempolicy specified--e.g., using numactl--will be constrained by
#. Any task mempolicy specified--e.g., using numactl--will be constrained by
the resource limits of any cpuset in which the task runs. Thus, there will
be no way for a task with non-default policy running in a cpuset with a
subset of the system nodes to allocate huge pages outside the cpuset
without first moving to a cpuset that contains all of the desired nodes.
5) Boot-time huge page allocation attempts to distribute the requested number
#. Boot-time huge page allocation attempts to distribute the requested number
of huge pages over all on-line nodes with memory.
Per Node Hugepages Attributes
@ -243,22 +258,22 @@ Per Node Hugepages Attributes
A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device of each
NUMA node with memory in:
NUMA node with memory in::
/sys/devices/system/node/node[0-9]*/hugepages/
Under this directory, the subdirectory for each supported huge page size
contains the following attribute files:
contains the following attribute files::
nr_hugepages
free_hugepages
surplus_hugepages
The free_' and surplus_' attribute files are read-only. They return the number
The ``free_`` and ``surplus_`` attribute files are read-only. They return the number
of free and surplus [overcommitted] huge pages, respectively, on the parent
node.
The nr_hugepages attribute returns the total number of huge pages on the
The ``nr_hugepages`` attribute returns the total number of huge pages on the
specified node. When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.
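As an illustrative sketch (the node number, page-size directory and count
below are arbitrary and assume a system with 2MB huge pages and a node1),
persistent huge pages can be added to a single node and then verified with::

	echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
	cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages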
@ -273,37 +288,51 @@ Using Huge Pages
If the user applications are going to request huge pages using mmap system
call, then it is required that system administrator mount a file system of
type hugetlbfs:
type hugetlbfs::
mount -t hugetlbfs \
-o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
min_size=<value>,nr_inodes=<value> none /mnt/huge
This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge. Any files created on /mnt/huge uses huge pages. The uid and gid
options sets the owner and group of the root of the file system. By default
the uid and gid of the current process are taken. The mode option sets the
mode of root of file system to value & 01777. This value is given in octal.
By default the value 0755 is picked. If the platform supports multiple huge
page sizes, the pagesize option can be used to specify the huge page size and
associated pool. pagesize is specified in bytes. If pagesize is not specified
the platform's default huge page size and associated pool will be used. The
size option sets the maximum value of memory (huge pages) allowed for that
filesystem (/mnt/huge). The size option can be specified in bytes, or as a
percentage of the specified huge page pool (nr_hugepages). The size is
rounded down to HPAGE_SIZE boundary. The min_size option sets the minimum
value of memory (huge pages) allowed for the filesystem. min_size can be
specified in the same way as size, either bytes or a percentage of the
huge page pool. At mount time, the number of huge pages specified by
min_size are reserved for use by the filesystem. If there are not enough
free huge pages available, the mount will fail. As huge pages are allocated
to the filesystem and freed, the reserve count is adjusted so that the sum
of allocated and reserved huge pages is always at least min_size. The option
nr_inodes sets the maximum number of inodes that /mnt/huge can use. If the
size, min_size or nr_inodes option is not provided on command line then
no limits are set. For pagesize, size, min_size and nr_inodes options, you
can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo. For example, size=2K
has the same meaning as size=2048.
``/mnt/huge``. Any files created on ``/mnt/huge`` use huge pages.
The ``uid`` and ``gid`` options set the owner and group of the root of the
file system. By default the ``uid`` and ``gid`` of the current process
are taken.
The ``mode`` option sets the mode of the root of the file system to value & 01777.
This value is given in octal. By default the value 0755 is picked.
If the platform supports multiple huge page sizes, the ``pagesize`` option can
be used to specify the huge page size and associated pool. ``pagesize``
is specified in bytes. If ``pagesize`` is not specified the platform's
default huge page size and associated pool will be used.
The ``size`` option sets the maximum value of memory (huge pages) allowed
for that filesystem (``/mnt/huge``). The ``size`` option can be specified
in bytes, or as a percentage of the specified huge page pool (``nr_hugepages``).
The size is rounded down to HPAGE_SIZE boundary.
The ``min_size`` option sets the minimum value of memory (huge pages) allowed
for the filesystem. ``min_size`` can be specified in the same way as ``size``,
either bytes or a percentage of the huge page pool.
At mount time, the number of huge pages specified by ``min_size`` are reserved
for use by the filesystem.
If there are not enough free huge pages available, the mount will fail.
As huge pages are allocated to the filesystem and freed, the reserve count
is adjusted so that the sum of allocated and reserved huge pages is always
at least ``min_size``.
The option ``nr_inodes`` sets the maximum number of inodes that ``/mnt/huge``
can use.
If the ``size``, ``min_size`` or ``nr_inodes`` option is not provided on
the command line, then no limits are set.
For ``pagesize``, ``size``, ``min_size`` and ``nr_inodes`` options, you can
use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
For example, size=2K has the same meaning as size=2048.
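As a concrete sketch (the mount point, page size and limits here are only
examples), a hugetlbfs instance using 2MB pages, limited to half of the huge
page pool and with 16MB reserved at mount time, could be created with::

	mkdir -p /mnt/huge
	mount -t hugetlbfs -o pagesize=2M,size=50%,min_size=16M,nr_inodes=64 \
	      none /mnt/huge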
While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.
@ -313,12 +342,12 @@ used to change the file attributes on hugetlbfs.
Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see map_hugetlb
below.
MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see
:ref:`map_hugetlb <map_hugetlb>` below.
Users who wish to use hugetlb memory via a shared memory segment should be a
member of a supplementary group, and the system admin needs to configure that gid
into /proc/sys/vm/hugetlb_shm_group. It is possible for same or different
into ``/proc/sys/vm/hugetlb_shm_group``. It is possible for the same or different
applications to use any combination of mmaps and shm* calls, though the mount of
the filesystem will be required for using mmap calls without MAP_HUGETLB.
@ -332,15 +361,21 @@ a hugetlb page and the length is smaller than the hugepage size.
Examples
========
1) map_hugetlb: see tools/testing/selftests/vm/map_hugetlb.c
.. _map_hugetlb:
2) hugepage-shm: see tools/testing/selftests/vm/hugepage-shm.c
``map_hugetlb``
see tools/testing/selftests/vm/map_hugetlb.c
3) hugepage-mmap: see tools/testing/selftests/vm/hugepage-mmap.c
``hugepage-shm``
see tools/testing/selftests/vm/hugepage-shm.c
4) The libhugetlbfs (https://github.com/libhugetlbfs/libhugetlbfs) library
provides a wide range of userspace tools to help with huge page usability,
environment setup, and control.
``hugepage-mmap``
see tools/testing/selftests/vm/hugepage-mmap.c
The `libhugetlbfs`_ library provides a wide range of userspace tools
to help with huge page usability, environment setup, and control.
.. _libhugetlbfs: https://github.com/libhugetlbfs/libhugetlbfs
Kernel development regression testing
=====================================

View file

@ -1,7 +1,14 @@
.. hwpoison:
========
hwpoison
========
What is hwpoison?
=================
Upcoming Intel CPUs have support for recovering from some memory errors
(``MCA recovery''). This requires the OS to declare a page "poisoned",
(``MCA recovery``). This requires the OS to declare a page "poisoned",
kill the processes associated with it and avoid using it in the future.
This patchkit implements the necessary infrastructure in the VM.
@ -46,9 +53,10 @@ address. This in theory allows other applications to handle
memory failures too. The expectation is that nearly all applications
won't do that, but some very specialized ones might.
---
Failure recovery modes
======================
There are two (actually three) modi memory failure recovery can be in:
There are two (actually three) modes memory failure recovery can be in:
vm.memory_failure_recovery sysctl set to zero:
All memory failures cause a panic. Do not attempt recovery.
@ -67,9 +75,8 @@ late kill
This is best for memory error unaware applications and is the default
Note some pages are always handled as late kill.
---
User control:
User control
============
vm.memory_failure_recovery
See sysctl.txt
@ -79,11 +86,19 @@ vm.memory_failure_early_kill
PR_MCE_KILL
Set early/late kill mode/revert to system default
arg1: PR_MCE_KILL_CLEAR: Revert to system default
arg1: PR_MCE_KILL_SET: arg2 defines thread specific mode
PR_MCE_KILL_EARLY: Early kill
PR_MCE_KILL_LATE: Late kill
PR_MCE_KILL_DEFAULT: Use system global default
arg1: PR_MCE_KILL_CLEAR:
Revert to system default
arg1: PR_MCE_KILL_SET:
arg2 defines thread specific mode
PR_MCE_KILL_EARLY:
Early kill
PR_MCE_KILL_LATE:
Late kill
PR_MCE_KILL_DEFAULT
Use system global default
Note that if you want to have a dedicated thread which handles
the SIGBUS(BUS_MCEERR_AO) on behalf of the process, you should
call prctl(PR_MCE_KILL_EARLY) on the designated thread. Otherwise,
@ -92,77 +107,64 @@ PR_MCE_KILL
PR_MCE_KILL_GET
return current mode
Testing
=======
---
* madvise(MADV_HWPOISON, ....) (as root) - Poison a page in the
process for testing
Testing:
* hwpoison-inject module through debugfs ``/sys/kernel/debug/hwpoison/``
madvise(MADV_HWPOISON, ....)
(as root)
Poison a page in the process for testing
corrupt-pfn
Inject hwpoison fault at PFN echoed into this file. This does
some early filtering to avoid corrupted unintended pages in test suites.
unpoison-pfn
Software-unpoison page at PFN echoed into this file. This way
a page can be reused again. This only works for Linux
injected failures, not for real memory failures.
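A minimal sketch of driving these two files (the PFN used here is made up; on
a real system it must be chosen with care)::

	# inject a software poison at the given PFN, then undo it
	echo 12345 > /sys/kernel/debug/hwpoison/corrupt-pfn
	echo 12345 > /sys/kernel/debug/hwpoison/unpoison-pfn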
hwpoison-inject module through debugfs
Note these injection interfaces are not stable and might change between
kernel versions
/sys/kernel/debug/hwpoison/
corrupt-filter-dev-major, corrupt-filter-dev-minor
Only handle memory failures to pages associated with the file
system defined by block device major/minor. -1U is the
wildcard value. This should be only used for testing with
artificial injection.
corrupt-pfn
corrupt-filter-memcg
Limit injection to pages owned by memgroup. Specified by inode
number of the memcg.
Inject hwpoison fault at PFN echoed into this file. This does
some early filtering to avoid corrupted unintended pages in test suites.
Example::
unpoison-pfn
mkdir /sys/fs/cgroup/mem/hwpoison
Software-unpoison page at PFN echoed into this file. This
way a page can be reused again.
This only works for Linux injected failures, not for real
memory failures.
usemem -m 100 -s 1000 &
echo `jobs -p` > /sys/fs/cgroup/mem/hwpoison/tasks
Note these injection interfaces are not stable and might change between
kernel versions
memcg_ino=$(ls -id /sys/fs/cgroup/mem/hwpoison | cut -f1 -d' ')
echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg
corrupt-filter-dev-major
corrupt-filter-dev-minor
page-types -p `pidof init` --hwpoison # shall do nothing
page-types -p `pidof usemem` --hwpoison # poison its pages
Only handle memory failures to pages associated with the file system defined
by block device major/minor. -1U is the wildcard value.
This should be only used for testing with artificial injection.
corrupt-filter-flags-mask, corrupt-filter-flags-value
When specified, only poison pages if ((page_flags & mask) ==
value). This allows stress testing of many kinds of
pages. The page_flags are the same as in /proc/kpageflags. The
flag bits are defined in include/linux/kernel-page-flags.h and
documented in Documentation/vm/pagemap.rst
corrupt-filter-memcg
* Architecture specific MCE injector
Limit injection to pages owned by memgroup. Specified by inode number
of the memcg.
x86 has mce-inject, mce-test
Example:
mkdir /sys/fs/cgroup/mem/hwpoison
Some portable hwpoison test programs in mce-test, see below.
usemem -m 100 -s 1000 &
echo `jobs -p` > /sys/fs/cgroup/mem/hwpoison/tasks
memcg_ino=$(ls -id /sys/fs/cgroup/mem/hwpoison | cut -f1 -d' ')
echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg
page-types -p `pidof init` --hwpoison # shall do nothing
page-types -p `pidof usemem` --hwpoison # poison its pages
corrupt-filter-flags-mask
corrupt-filter-flags-value
When specified, only poison pages if ((page_flags & mask) == value).
This allows stress testing of many kinds of pages. The page_flags
are the same as in /proc/kpageflags. The flag bits are defined in
include/linux/kernel-page-flags.h and documented in
Documentation/vm/pagemap.txt
Architecture specific MCE injector
x86 has mce-inject, mce-test
Some portable hwpoison test programs in mce-test, see blow.
---
References:
References
==========
http://halobates.de/mce-lc09-2.pdf
Overview presentation from LinuxCon 09
@ -174,14 +176,11 @@ git://git.kernel.org/pub/scm/utils/cpu/mce/mce-inject.git
x86 specific injector
---
Limitations:
Limitations
===========
- Not all page types are supported and never will be. Most kernel internal
objects cannot be recovered, only LRU pages for now.
objects cannot be recovered, only LRU pages for now.
- Right now hugepage support is missing.
---
Andi Kleen, Oct 2009

View file

@ -1,4 +1,11 @@
MOTIVATION
.. _idle_page_tracking:
==================
Idle Page Tracking
==================
Motivation
==========
The idle page tracking feature allows tracking which memory pages are being
accessed by a workload and which are idle. This information can be useful for
@ -8,10 +15,14 @@ or deciding where to place the workload within a compute cluster.
It is enabled by CONFIG_IDLE_PAGE_TRACKING=y.
USER API
.. _user_api:
The idle page tracking API is located at /sys/kernel/mm/page_idle. Currently,
it consists of the only read-write file, /sys/kernel/mm/page_idle/bitmap.
User API
========
The idle page tracking API is located at ``/sys/kernel/mm/page_idle``.
Currently, it consists of a single read-write file,
``/sys/kernel/mm/page_idle/bitmap``.
The file implements a bitmap where each bit corresponds to a memory page. The
bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
@ -19,8 +30,9 @@ mapped to bit #i%64 of array element #i/64, byte order is native. When a bit is
set, the corresponding page is idle.
A page is considered idle if it has not been accessed since it was marked idle
(for more details on what "accessed" actually means see the IMPLEMENTATION
DETAILS section). To mark a page idle one has to set the bit corresponding to
(for more details on what "accessed" actually means see the :ref:`Implementation
Details <impl_details>` section).
To mark a page idle one has to set the bit corresponding to
the page by writing to the file. A value written to the file is OR-ed with the
current bitmap value.
@ -30,9 +42,9 @@ page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
and hence such pages are never reported idle.
For huge pages the idle flag is set only on the head page, so one has to read
/proc/kpageflags in order to correctly count idle huge pages.
``/proc/kpageflags`` in order to correctly count idle huge pages.
Reading from or writing to /sys/kernel/mm/page_idle/bitmap will return
Reading from or writing to ``/sys/kernel/mm/page_idle/bitmap`` will return
-EINVAL if you are not starting the read/write on an 8-byte boundary, or
if the size of the read/write is not a multiple of 8 bytes. Writing to
this file beyond max PFN will return -ENXIO.
@ -41,21 +53,25 @@ That said, in order to estimate the amount of pages that are not used by a
workload one should:
1. Mark all the workload's pages as idle by setting corresponding bits in
/sys/kernel/mm/page_idle/bitmap. The pages can be found by reading
/proc/pid/pagemap if the workload is represented by a process, or by
filtering out alien pages using /proc/kpagecgroup in case the workload is
placed in a memory cgroup.
``/sys/kernel/mm/page_idle/bitmap``. The pages can be found by reading
``/proc/pid/pagemap`` if the workload is represented by a process, or by
filtering out alien pages using ``/proc/kpagecgroup`` in case the workload
is placed in a memory cgroup.
2. Wait until the workload accesses its working set.
3. Read /sys/kernel/mm/page_idle/bitmap and count the number of bits set. If
one wants to ignore certain types of pages, e.g. mlocked pages since they
are not reclaimable, he or she can filter them out using /proc/kpageflags.
3. Read ``/sys/kernel/mm/page_idle/bitmap`` and count the number of bits set.
If one wants to ignore certain types of pages, e.g. mlocked pages since they
are not reclaimable, he or she can filter them out using
``/proc/kpageflags``.
See Documentation/vm/pagemap.txt for more information about /proc/pid/pagemap,
/proc/kpageflags, and /proc/kpagecgroup.
See Documentation/vm/pagemap.rst for more information about
``/proc/pid/pagemap``, ``/proc/kpageflags``, and ``/proc/kpagecgroup``.
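As a rough sketch covering only the first 64 PFNs (and assuming a shell whose
printf understands ``\xNN`` escapes), the bitmap can be exercised like this::

	# mark PFNs 0..63 idle: write one 8-byte word with all bits set at offset 0
	printf '\xff\xff\xff\xff\xff\xff\xff\xff' > /sys/kernel/mm/page_idle/bitmap

	# ... let the workload run ...

	# read the same word back; bits that are still set are pages left idle
	dd if=/sys/kernel/mm/page_idle/bitmap bs=8 count=1 2>/dev/null | od -An -tx8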
IMPLEMENTATION DETAILS
.. _impl_details:
Implementation Details
======================
The kernel internally keeps track of accesses to user memory pages in order to
reclaim unreferenced pages first on memory shortage conditions. A page is
@ -77,7 +93,8 @@ When a dirty page is written to swap or disk as a result of memory reclaim or
exceeding the dirty memory limit, it is not marked referenced.
The idle memory tracking feature adds a new page flag, the Idle flag. This flag
is set manually, by writing to /sys/kernel/mm/page_idle/bitmap (see the USER API
is set manually, by writing to ``/sys/kernel/mm/page_idle/bitmap`` (see the
:ref:`User API <user_api>`
section), and cleared automatically whenever a page is referenced as defined
above.

View file

@ -0,0 +1,56 @@
=====================================
Linux Memory Management Documentation
=====================================
This is a collection of documents about Linux memory management (mm) subsystem.
User guides for MM features
===========================
The following documents provide guides for controlling and tuning
various features of the Linux memory management subsystem.
.. toctree::
:maxdepth: 1
hugetlbpage
idle_page_tracking
ksm
numa_memory_policy
pagemap
transhuge
soft-dirty
swap_numa
userfaultfd
zswap
Kernel developers MM documentation
==================================
The documents below describe MM internals at different levels of
detail, ranging from notes and mailing list responses to elaborate
descriptions of data structures and algorithms.
.. toctree::
:maxdepth: 1
active_mm
balance
cleancache
frontswap
highmem
hmm
hwpoison
hugetlbfs_reserv
mmu_notifier
numa
overcommit-accounting
page_migration
page_frags
page_owner
remap_file_pages
slub
split_page_table_lock
unevictable-lru
z3fold
zsmalloc

183
Documentation/vm/ksm.rst Normal file
View file

@ -0,0 +1,183 @@
.. _ksm:
=======================
Kernel Samepage Merging
=======================
KSM is a memory-saving de-duplication feature, enabled by CONFIG_KSM=y,
added to the Linux kernel in 2.6.32. See ``mm/ksm.c`` for its implementation,
and http://lwn.net/Articles/306704/ and http://lwn.net/Articles/330589/
The KSM daemon ksmd periodically scans those areas of user memory which
have been registered with it, looking for pages of identical content which
can be replaced by a single write-protected page (which is automatically
copied if a process later wants to update its content).
KSM was originally developed for use with KVM (where it was known as
Kernel Shared Memory), to fit more virtual machines into physical memory,
by sharing the data common between them. But it can be useful to any
application which generates many instances of the same data.
KSM only merges anonymous (private) pages, never pagecache (file) pages.
KSM's merged pages were originally locked into kernel memory, but can now
be swapped out just like other user pages (but sharing is broken when they
are swapped back in: ksmd must rediscover their identity and merge again).
KSM only operates on those areas of address space which an application
has advised to be likely candidates for merging, by using the madvise(2)
system call: int madvise(addr, length, MADV_MERGEABLE).
The app may call int madvise(addr, length, MADV_UNMERGEABLE) to cancel
that advice and restore unshared pages: whereupon KSM unmerges whatever
it merged in that range. Note: this unmerging call may suddenly require
more memory than is available - possibly failing with EAGAIN, but more
probably arousing the Out-Of-Memory killer.
If KSM is not configured into the running kernel, madvise MADV_MERGEABLE
and MADV_UNMERGEABLE simply fail with EINVAL. If the running kernel was
built with CONFIG_KSM=y, those calls will normally succeed: even if the
KSM daemon is not currently running, MADV_MERGEABLE still registers
the range for whenever the KSM daemon is started; even if the range
cannot contain any pages which KSM could actually merge; even if
MADV_UNMERGEABLE is applied to a range which was never MADV_MERGEABLE.
If a region of memory must be split into at least one new MADV_MERGEABLE
or MADV_UNMERGEABLE region, the madvise may return ENOMEM if the process
will exceed vm.max_map_count (see Documentation/sysctl/vm.txt).
Like other madvise calls, they are intended for use on mapped areas of
the user address space: they will report ENOMEM if the specified range
includes unmapped gaps (though working on the intervening mapped areas),
and might fail with EAGAIN if not enough memory for internal structures.
Applications should be considerate in their use of MADV_MERGEABLE,
restricting its use to areas likely to benefit. KSM's scans may use a lot
of processing power: some installations will disable KSM for that reason.
The KSM daemon is controlled by sysfs files in ``/sys/kernel/mm/ksm/``,
readable by all but writable only by root:
pages_to_scan
how many present pages to scan before ksmd goes to sleep
e.g. ``echo 100 > /sys/kernel/mm/ksm/pages_to_scan`` Default: 100
(chosen for demonstration purposes)
sleep_millisecs
how many milliseconds ksmd should sleep before next scan
e.g. ``echo 20 > /sys/kernel/mm/ksm/sleep_millisecs`` Default: 20
(chosen for demonstration purposes)
merge_across_nodes
specifies if pages from different numa nodes can be merged.
When set to 0, ksm merges only pages which physically reside
in the memory area of same NUMA node. That brings lower
latency to access of shared pages. Systems with more nodes, at
significant NUMA distances, are likely to benefit from the
lower latency of setting 0. Smaller systems, which need to
minimize memory usage, are likely to benefit from the greater
sharing of setting 1 (default). You may wish to compare how
your system performs under each setting, before deciding on
which to use. merge_across_nodes setting can be changed only
when there are no ksm shared pages in system: set run 2 to
unmerge pages first, then to 1 after changing
merge_across_nodes, to remerge according to the new setting.
Default: 1 (merging across nodes as in earlier releases)
run
set 0 to stop ksmd from running but keep merged pages,
set 1 to run ksmd e.g. ``echo 1 > /sys/kernel/mm/ksm/run``,
set 2 to stop ksmd and unmerge all pages currently merged, but
leave mergeable areas registered for next run Default: 0 (must
be changed to 1 to activate KSM, except if CONFIG_SYSFS is
disabled)
use_zero_pages
specifies whether empty pages (i.e. allocated pages that only
contain zeroes) should be treated specially. When set to 1,
empty pages are merged with the kernel zero page(s) instead of
with each other as it would happen normally. This can improve
the performance on architectures with coloured zero pages,
depending on the workload. Care should be taken when enabling
this setting, as it can potentially degrade the performance of
KSM for some workloads, for example if the checksums of pages
candidate for merging match the checksum of an empty
page. This setting can be changed at any time, it is only
effective for pages merged after the change. Default: 0
(normal KSM behaviour as in earlier releases)
max_page_sharing
Maximum sharing allowed for each KSM page. This enforces a
deduplication limit to avoid the virtual memory rmap lists to
grow too large. The minimum value is 2 as a newly created KSM
page will have at least two sharers. The rmap walk has O(N)
complexity where N is the number of rmap_items (i.e. virtual
mappings) that are sharing the page, which is in turn capped
by max_page_sharing. So this effectively spreads the linear
O(N) computational complexity from rmap walk context over
different KSM pages. The ksmd walk over the stable_node
"chains" is also O(N), but N is the number of stable_node
"dups", not the number of rmap_items, so it has not a
significant impact on ksmd performance. In practice the best
stable_node "dup" candidate will be kept and found at the head
of the "dups" list. The higher this value the faster KSM will
merge the memory (because there will be fewer stable_node dups
queued into the stable_node chain->hlist to check for pruning)
and the higher the deduplication factor will be, but the
slower the worst case rmap walk could be for any given KSM
page. Slowing down the rmap_walk means there will be higher
latency for certain virtual memory operations happening during
swapping, compaction, NUMA balancing and page migration, in
turn decreasing responsiveness for the caller of those virtual
memory operations. The scheduler latency of other tasks not
involved with the VM operations doing the rmap walk is not
affected by this parameter as the rmap walks are always
schedule friendly themselves.
stable_node_chains_prune_millisecs
How frequently to walk the whole list of stable_node "dups"
linked in the stable_node "chains" in order to prune stale
stable_nodes. Smaller millisecs values will free up the KSM
metadata with lower latency, but they will make ksmd use more
CPU during the scan. This only applies to the stable_node
chains, so it is a noop if not a single KSM page has hit the
max_page_sharing yet (there would be no stable_node chains in
such case).
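As an illustrative sketch (the values are only examples), ksmd can be started
with a custom scan rate, and later stopped with all currently merged pages
unmerged, as follows::

	echo 200 > /sys/kernel/mm/ksm/pages_to_scan
	echo 50  > /sys/kernel/mm/ksm/sleep_millisecs
	echo 1   > /sys/kernel/mm/ksm/run     # start ksmd

	# ... later ...
	echo 2   > /sys/kernel/mm/ksm/run     # stop ksmd and unmerge all pages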
The effectiveness of KSM and MADV_MERGEABLE is shown in ``/sys/kernel/mm/ksm/``:
pages_shared
how many shared pages are being used
pages_sharing
how many more sites are sharing them i.e. how much saved
pages_unshared
how many pages unique but repeatedly checked for merging
pages_volatile
how many pages changing too fast to be placed in a tree
full_scans
how many times all mergeable areas have been scanned
stable_node_chains
number of stable node chains allocated, this is effectively
the number of KSM pages that hit the max_page_sharing limit
stable_node_dups
number of stable node dups queued into the stable_node chains
A high ratio of pages_sharing to pages_shared indicates good sharing, but
a high ratio of pages_unshared to pages_sharing indicates wasted effort.
pages_volatile embraces several different kinds of activity, but a high
proportion there would also indicate poor use of madvise MADV_MERGEABLE.
The maximum possible page_sharing/page_shared ratio is limited by the
max_page_sharing tunable. To increase the ratio max_page_sharing must
be increased accordingly.
The stable_node_dups/stable_node_chains ratio is also affected by the
max_page_sharing tunable, and a high ratio may indicate fragmentation
in the stable_node dups, which could be solved by introducing
fragmentation algorithms in ksmd which would refile rmap_items from
one stable_node dup to another stable_node dup, in order to free up
stable_node "dups" with few rmap_items in them, but that may increase
the ksmd CPU usage and possibly slow down the read-only computations on
the KSM pages of the applications.
Izik Eidus,
Hugh Dickins, 17 Nov 2009

View file

@ -1,178 +0,0 @@
How to use the Kernel Samepage Merging feature
----------------------------------------------
KSM is a memory-saving de-duplication feature, enabled by CONFIG_KSM=y,
added to the Linux kernel in 2.6.32. See mm/ksm.c for its implementation,
and http://lwn.net/Articles/306704/ and http://lwn.net/Articles/330589/
The KSM daemon ksmd periodically scans those areas of user memory which
have been registered with it, looking for pages of identical content which
can be replaced by a single write-protected page (which is automatically
copied if a process later wants to update its content).
KSM was originally developed for use with KVM (where it was known as
Kernel Shared Memory), to fit more virtual machines into physical memory,
by sharing the data common between them. But it can be useful to any
application which generates many instances of the same data.
KSM only merges anonymous (private) pages, never pagecache (file) pages.
KSM's merged pages were originally locked into kernel memory, but can now
be swapped out just like other user pages (but sharing is broken when they
are swapped back in: ksmd must rediscover their identity and merge again).
KSM only operates on those areas of address space which an application
has advised to be likely candidates for merging, by using the madvise(2)
system call: int madvise(addr, length, MADV_MERGEABLE).
The app may call int madvise(addr, length, MADV_UNMERGEABLE) to cancel
that advice and restore unshared pages: whereupon KSM unmerges whatever
it merged in that range. Note: this unmerging call may suddenly require
more memory than is available - possibly failing with EAGAIN, but more
probably arousing the Out-Of-Memory killer.
If KSM is not configured into the running kernel, madvise MADV_MERGEABLE
and MADV_UNMERGEABLE simply fail with EINVAL. If the running kernel was
built with CONFIG_KSM=y, those calls will normally succeed: even if the
the KSM daemon is not currently running, MADV_MERGEABLE still registers
the range for whenever the KSM daemon is started; even if the range
cannot contain any pages which KSM could actually merge; even if
MADV_UNMERGEABLE is applied to a range which was never MADV_MERGEABLE.
If a region of memory must be split into at least one new MADV_MERGEABLE
or MADV_UNMERGEABLE region, the madvise may return ENOMEM if the process
will exceed vm.max_map_count (see Documentation/sysctl/vm.txt).
Like other madvise calls, they are intended for use on mapped areas of
the user address space: they will report ENOMEM if the specified range
includes unmapped gaps (though working on the intervening mapped areas),
and might fail with EAGAIN if not enough memory for internal structures.
Applications should be considerate in their use of MADV_MERGEABLE,
restricting its use to areas likely to benefit. KSM's scans may use a lot
of processing power: some installations will disable KSM for that reason.
The KSM daemon is controlled by sysfs files in /sys/kernel/mm/ksm/,
readable by all but writable only by root:
pages_to_scan - how many present pages to scan before ksmd goes to sleep
e.g. "echo 100 > /sys/kernel/mm/ksm/pages_to_scan"
Default: 100 (chosen for demonstration purposes)
sleep_millisecs - how many milliseconds ksmd should sleep before next scan
e.g. "echo 20 > /sys/kernel/mm/ksm/sleep_millisecs"
Default: 20 (chosen for demonstration purposes)
merge_across_nodes - specifies if pages from different numa nodes can be merged.
When set to 0, ksm merges only pages which physically
reside in the memory area of same NUMA node. That brings
lower latency to access of shared pages. Systems with more
nodes, at significant NUMA distances, are likely to benefit
from the lower latency of setting 0. Smaller systems, which
need to minimize memory usage, are likely to benefit from
the greater sharing of setting 1 (default). You may wish to
compare how your system performs under each setting, before
deciding on which to use. merge_across_nodes setting can be
changed only when there are no ksm shared pages in system:
set run 2 to unmerge pages first, then to 1 after changing
merge_across_nodes, to remerge according to the new setting.
Default: 1 (merging across nodes as in earlier releases)
run - set 0 to stop ksmd from running but keep merged pages,
set 1 to run ksmd e.g. "echo 1 > /sys/kernel/mm/ksm/run",
set 2 to stop ksmd and unmerge all pages currently merged,
but leave mergeable areas registered for next run
Default: 0 (must be changed to 1 to activate KSM,
except if CONFIG_SYSFS is disabled)
use_zero_pages - specifies whether empty pages (i.e. allocated pages
that only contain zeroes) should be treated specially.
When set to 1, empty pages are merged with the kernel
zero page(s) instead of with each other as it would
happen normally. This can improve the performance on
architectures with coloured zero pages, depending on
the workload. Care should be taken when enabling this
setting, as it can potentially degrade the performance
of KSM for some workloads, for example if the checksums
of pages candidate for merging match the checksum of
an empty page. This setting can be changed at any time,
it is only effective for pages merged after the change.
Default: 0 (normal KSM behaviour as in earlier releases)
max_page_sharing - Maximum sharing allowed for each KSM page. This
enforces a deduplication limit to avoid the virtual
memory rmap lists to grow too large. The minimum
value is 2 as a newly created KSM page will have at
least two sharers. The rmap walk has O(N)
complexity where N is the number of rmap_items
(i.e. virtual mappings) that are sharing the page,
which is in turn capped by max_page_sharing. So
this effectively spread the the linear O(N)
computational complexity from rmap walk context
over different KSM pages. The ksmd walk over the
stable_node "chains" is also O(N), but N is the
number of stable_node "dups", not the number of
rmap_items, so it has not a significant impact on
ksmd performance. In practice the best stable_node
"dup" candidate will be kept and found at the head
of the "dups" list. The higher this value the
faster KSM will merge the memory (because there
will be fewer stable_node dups queued into the
stable_node chain->hlist to check for pruning) and
the higher the deduplication factor will be, but
the slowest the worst case rmap walk could be for
any given KSM page. Slowing down the rmap_walk
means there will be higher latency for certain
virtual memory operations happening during
swapping, compaction, NUMA balancing and page
migration, in turn decreasing responsiveness for
the caller of those virtual memory operations. The
scheduler latency of other tasks not involved with
the VM operations doing the rmap walk is not
affected by this parameter as the rmap walks are
always schedule friendly themselves.
stable_node_chains_prune_millisecs - How frequently to walk the whole
list of stable_node "dups" linked in the
stable_node "chains" in order to prune stale
stable_nodes. Smaller milllisecs values will free
up the KSM metadata with lower latency, but they
will make ksmd use more CPU during the scan. This
only applies to the stable_node chains so it's a
noop if not even a single KSM page has hit
max_page_sharing yet (there would be no stable_node
chains in that case).
The effectiveness of KSM and MADV_MERGEABLE is shown in /sys/kernel/mm/ksm/:
pages_shared - how many shared pages are being used
pages_sharing - how many more sites are sharing them i.e. how much saved
pages_unshared - how many pages unique but repeatedly checked for merging
pages_volatile - how many pages changing too fast to be placed in a tree
full_scans - how many times all mergeable areas have been scanned
stable_node_chains - number of stable node chains allocated, this is
effectively the number of KSM pages that hit the
max_page_sharing limit
stable_node_dups - number of stable node dups queued into the
stable_node chains
A high ratio of pages_sharing to pages_shared indicates good sharing, but
a high ratio of pages_unshared to pages_sharing indicates wasted effort.
pages_volatile embraces several different kinds of activity, but a high
proportion there would also indicate poor use of madvise MADV_MERGEABLE.
The maximum possible pages_sharing/pages_shared ratio is limited by the
max_page_sharing tunable. To increase the ratio, max_page_sharing must
be increased accordingly.
The stable_node_dups/stable_node_chains ratio is also affected by the
max_page_sharing tunable, and a high ratio may indicate fragmentation
in the stable_node dups, which could be solved by introducing
fragmentation algorithms in ksmd which would refile rmap_items from
one stable_node dup to another stable_node dup, in order to free up
stable_node "dups" with few rmap_items in them, but that may increase
the ksmd CPU usage and possibly slow down the read-only computations on
the KSM pages of the applications.
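For reference, an application opts a range of anonymous memory into KSM
with the madvise() call mentioned above. A minimal user-space sketch
(the region size is illustrative and error handling is reduced to
perror())::

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 16 * 4096;     /* some page-aligned length */
        /* KSM only scans private anonymous memory. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Ask ksmd (if running) to consider this range for merging. */
        if (madvise(p, len, MADV_MERGEABLE) != 0)
            perror("madvise(MADV_MERGEABLE)");
        return 0;
    }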
Izik Eidus,
Hugh Dickins, 17 Nov 2009

View file

@ -0,0 +1,99 @@
.. _mmu_notifier:
When do you need to notify inside page table lock?
====================================================
When clearing a pte/pmd we are given a choice to notify the event through
(the notify variants of \*_clear_flush call mmu_notifier_invalidate_range())
under the page table lock. But that notification is not necessary in all
cases.
For a secondary TLB (non-CPU TLB) like an IOMMU TLB or a device TLB (when
the device uses something like ATS/PASID to get the IOMMU to walk the CPU
page table to access a process virtual address space), there are only 2
cases when you need to notify those secondary TLBs while holding the page
table lock when clearing a pte/pmd:
A) the page backing the address is freed before mmu_notifier_invalidate_range_end()
B) a page table entry is updated to point to a new page (COW, write fault
on zero page, __replace_page(), ...)
Case A is obvious: you do not want to take the risk of the device writing to
a page that might now be used by some completely different task.
Case B is more subtle. For correctness it requires the following sequence to
happen:
- take page table lock
- clear page table entry and notify ([pmd/pte]p_huge_clear_flush_notify())
- set page table entry to point to new page
If clearing the page table entry is not followed by a notify before setting
the new pte/pmd value, then you can break the memory model (like C11 or
C++11) for the device.
Consider the following scenario (the device uses a feature similar to
ATS/PASID):
Two addresses addrA and addrB such that \|addrA - addrB\| >= PAGE_SIZE; we
assume they are write protected for COW (the other cases of B apply too).
::
[Time N] --------------------------------------------------------------------
CPU-thread-0 {try to write to addrA}
CPU-thread-1 {try to write to addrB}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {read addrA and populate device TLB}
DEV-thread-2 {read addrB and populate device TLB}
[Time N+1] ------------------------------------------------------------------
CPU-thread-0 {COW_step0: {mmu_notifier_invalidate_range_start(addrA)}}
CPU-thread-1 {COW_step0: {mmu_notifier_invalidate_range_start(addrB)}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+2] ------------------------------------------------------------------
CPU-thread-0 {COW_step1: {update page table to point to new page for addrA}}
CPU-thread-1 {COW_step1: {update page table to point to new page for addrB}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+3] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {preempted}
CPU-thread-2 {write to addrA which is a write to new page}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+3] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {preempted}
CPU-thread-2 {}
CPU-thread-3 {write to addrB which is a write to new page}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+4] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {COW_step3: {mmu_notifier_invalidate_range_end(addrB)}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+5] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {read addrA from old page}
DEV-thread-2 {read addrB from new page}
So here, because at time N+2 the cleared page table entry was not paired
with a notification to invalidate the secondary TLB, the device sees the new
value for addrB before seeing the new value for addrA. This breaks total
memory ordering for the device.
When changing a pte to write protect or to point to a new write protected
page with the same content (KSM) it is fine to delay the
mmu_notifier_invalidate_range call to mmu_notifier_invalidate_range_end()
outside the page table lock. This is true even if the thread doing the page
table update is preempted right after releasing the page table lock but
before calling mmu_notifier_invalidate_range_end().

View file

@ -1,6 +1,10 @@
.. _numa:
Started Nov 1999 by Kanoj Sarcar <kanoj@sgi.com>
=============
What is NUMA?
=============
This question can be answered from a couple of perspectives: the
hardware view and the Linux software view.
@ -106,7 +110,7 @@ to improve NUMA locality using various CPU affinity command line interfaces,
such as taskset(1) and numactl(1), and program interfaces such as
sched_setaffinity(2). Further, one can modify the kernel's default local
allocation behavior using Linux NUMA memory policy.
[see Documentation/vm/numa_memory_policy.txt.]
[see Documentation/vm/numa_memory_policy.rst.]
System administrators can restrict the CPUs and nodes' memories that a non-
privileged user can specify in the scheduling or NUMA commands and functions

View file

@ -0,0 +1,485 @@
.. _numa_memory_policy:
===================
Linux Memory Policy
===================
What is Linux Memory Policy?
============================
In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system. Linux has
supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
The current memory policy support was added to Linux 2.6 around May 2004. This
document attempts to describe the concepts and APIs of the 2.6 memory policy
support.
Memory policies should not be confused with cpusets
(``Documentation/cgroup-v1/cpusets.txt``)
which is an administrative mechanism for restricting the nodes from which
memory may be allocated by a set of processes. Memory policies are a
programming interface that a NUMA-aware application can take advantage of. When
both cpusets and policies are applied to a task, the restrictions of the cpuset
takes priority. See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>`
below for more details.
Memory Policy Concepts
======================
Scope of Memory Policies
------------------------
The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:
System Default Policy
this policy is "hard coded" into the kernel. It is the policy
that governs all page allocations that aren't controlled by
one of the more specific policy scopes discussed below. When
the system is "up and running", the system default policy will
use "local allocation" described below. However, during boot
up, the system default policy will be set to interleave
allocations across all nodes with "sufficient" memory, so as
not to overload the initial boot node with boot-time
allocations.
Task/Process Policy
this is an optional, per-task policy. When defined for a specific task,
this policy controls all page allocations made by or on behalf of the task
that aren't controlled by a more specific scope. If a task does not define
a task policy, then all page allocations that would have been controlled by
the task policy "fall back" to the System Default Policy.
The task policy applies to the entire address space of a task. Thus,
it is inheritable, and indeed is inherited, across both fork()
[clone() w/o the CLONE_VM flag] and exec*(). This allows a parent task
to establish the task policy for a child task exec()'d from an
executable image that has no awareness of memory policy. See the
MEMORY POLICY APIS section, below, for an overview of the system call
that a task may use to set/change its task/process policy.
In a multi-threaded task, task policies apply only to the thread
[Linux kernel task] that installs the policy and any threads
subsequently created by that thread. Any sibling threads existing
at the time a new task policy is installed retain their current
policy.
A task policy applies only to pages allocated after the policy is
installed. Any pages already faulted in by the task when the task
changes its task policy remain where they were allocated based on
the policy at the time they were allocated.
.. _vma_policy:
VMA Policy
A "VMA" or "Virtual Memory Area" refers to a range of a task's
virtual address space. A task may define a specific policy for a range
of its virtual address space. See the MEMORY POLICIES APIS section,
below, for an overview of the mbind() system call used to set a VMA
policy.
A VMA policy will govern the allocation of pages that back
this region of the address space. Any regions of the task's
address space that don't have an explicit VMA policy will fall
back to the task policy, which may itself fall back to the
System Default Policy.
VMA policies have a few complicating details:
* VMA policy applies ONLY to anonymous pages. These include
pages allocated for anonymous segments, such as the task
stack and heap, and any regions of the address space
mmap()ed with the MAP_ANONYMOUS flag. If a VMA policy is
applied to a file mapping, it will be ignored if the mapping
used the MAP_SHARED flag. If the file mapping used the
MAP_PRIVATE flag, the VMA policy will only be applied when
an anonymous page is allocated on an attempt to write to the
mapping-- i.e., at Copy-On-Write.
* VMA policies are shared between all tasks that share a
virtual address space--a.k.a. threads--independent of when
the policy is installed; and they are inherited across
fork(). However, because VMA policies refer to a specific
region of a task's address space, and because the address
space is discarded and recreated on exec*(), VMA policies
are NOT inheritable across exec(). Thus, only NUMA-aware
applications may use VMA policies.
* A task may install a new VMA policy on a sub-range of a
previously mmap()ed region. When this happens, Linux splits
the existing virtual memory area into 2 or 3 VMAs, each with
its own policy.
* By default, VMA policy applies only to pages allocated after
the policy is installed. Any pages already faulted into the
VMA range remain where they were allocated based on the
policy at the time they were allocated. However, since
2.6.16, Linux supports page migration via the mbind() system
call, so that page contents can be moved to match a newly
installed policy.
Shared Policy
Conceptually, shared policies apply to "memory objects" mapped
shared into one or more tasks' distinct address spaces. An
application installs shared policies the same way as VMA
policies--using the mbind() system call specifying a range of
virtual addresses that map the shared object. However, unlike
VMA policies, which can be considered to be an attribute of a
range of a task's address space, shared policies apply
directly to the shared object. Thus, all tasks that attach to
the object share the policy, and all pages allocated for the
shared object, by any task, will obey the shared policy.
As of 2.6.22, only shared memory segments, created by shmget() or
mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy. When shared
policy support was added to Linux, the associated data structures were
added to hugetlbfs shmem segments. At the time, hugetlbfs did not
support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
shmem segments were never "hooked up" to the shared policy support.
Although hugetlbfs segments now support lazy allocation, their support
for shared policy has not been completed.
As mentioned above regarding :ref:`VMA policies <vma_policy>`,
allocations of page cache pages for regular files mmap()ed
with MAP_SHARED ignore any VMA policy installed on the virtual
address range backed by the shared file mapping. Rather,
shared page cache pages, including pages backing private
mappings that have not yet been written by the task, follow
task policy, if any, else System Default Policy.
The shared policy infrastructure supports different policies on subset
ranges of the shared object. However, Linux still splits the VMA of
the task that installs the policy for each range of distinct policy.
Thus, different tasks that attach to a shared memory segment can have
different VMA configurations mapping that one shared object. This
can be seen by examining the /proc/<pid>/numa_maps of tasks sharing
a shared memory region, when one task has installed shared policy on
one or more ranges of the region.
Components of Memory Policies
-----------------------------
A Linux memory policy consists of a "mode", optional mode flags, and
an optional set of nodes. The mode determines the behavior of the
policy, the optional mode flags determine the behavior of the mode,
and the optional set of nodes can be viewed as the arguments to the
policy behavior.
Internally, memory policies are implemented by a reference counted
structure, struct mempolicy. Details of this structure will be
discussed in context, below, as required to explain the behavior.
Linux memory policy supports the following 4 behavioral modes:
Default Mode--MPOL_DEFAULT
This mode is only used in the memory policy APIs. Internally,
MPOL_DEFAULT is converted to the NULL memory policy in all
policy scopes. Any existing non-default policy will simply be
removed when MPOL_DEFAULT is specified. As a result,
MPOL_DEFAULT means "fall back to the next most specific policy
scope."
For example, a NULL or default task policy will fall back to the
system default policy. A NULL or default vma policy will fall
back to the task policy.
When specified in one of the memory policy APIs, the Default mode
does not use the optional set of nodes.
It is an error for the set of nodes specified for this policy to
be non-empty.
MPOL_BIND
This mode specifies that memory must come from the set of
nodes specified by the policy. Memory will be allocated from
the node in the set with sufficient free memory that is
closest to the node where the allocation takes place.
MPOL_PREFERRED
This mode specifies that the allocation should be attempted
from the single node specified in the policy. If that
allocation fails, the kernel will search other nodes, in order
of increasing distance from the preferred node based on
information provided by the platform firmware.
Internally, the Preferred policy uses a single node--the
preferred_node member of struct mempolicy. When the internal
mode flag MPOL_F_LOCAL is set, the preferred_node is ignored
and the policy is interpreted as local allocation. "Local"
allocation policy can be viewed as a Preferred policy that
starts at the node containing the cpu where the allocation
takes place.
It is possible for the user to specify that local allocation
is always preferred by passing an empty nodemask with this
mode. If an empty nodemask is passed, the policy cannot use
the MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES flags
described below.
MPOL_INTERLEAVED
This mode specifies that page allocations be interleaved, on a
page granularity, across the nodes specified in the policy.
This mode also behaves slightly differently, based on the
context where it is used:
For allocation of anonymous pages and shared memory pages,
Interleave mode indexes the set of nodes specified by the
policy using the page offset of the faulting address into the
segment [VMA] containing the address modulo the number of
nodes specified by the policy. It then attempts to allocate a
page, starting at the selected node, as if the node had been
specified by a Preferred policy or had been selected by a
local allocation. That is, allocation will follow the per
node zonelist.
For allocation of page cache pages, Interleave mode indexes
the set of nodes specified by the policy using a node counter
maintained per task. This counter wraps around to the lowest
specified node after it reaches the highest specified node.
This will tend to spread the pages out over the nodes
specified by the policy based on the order in which they are
allocated, rather than based on any page offset into an
address range or file. During system boot up, the temporary
interleaved system default policy works in this mode.
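The anonymous/shared-memory indexing described above can be pictured with a
small user-space sketch (a hypothetical illustration only; the kernel's own
helpers differ in detail, and the 4 KiB page size is an assumption)::

    #define PAGE_SHIFT 12   /* illustration only; assumes 4 KiB pages */

    /* Which of the policy's nodes (0 .. nnodes-1) backs a faulting address. */
    static unsigned int interleave_index(unsigned long addr,
                                         unsigned long vma_start,
                                         unsigned long vma_pgoff,
                                         unsigned int nnodes)
    {
        unsigned long pgoff = vma_pgoff + ((addr - vma_start) >> PAGE_SHIFT);

        return pgoff % nnodes;  /* index into the policy's node set */
    }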
Linux memory policy supports the following optional mode flags:
MPOL_F_STATIC_NODES
This flag specifies that the nodemask passed by
the user should not be remapped if the task or VMA's set of allowed
nodes changes after the memory policy has been defined.
Without this flag, anytime a mempolicy is rebound because of a
change in the set of allowed nodes, the node (Preferred) or
nodemask (Bind, Interleave) is remapped to the new set of
allowed nodes. This may result in nodes being used that were
previously undesired.
With this flag, if the user-specified nodes overlap with the
nodes allowed by the task's cpuset, then the memory policy is
applied to their intersection. If the two sets of nodes do not
overlap, the Default policy is used.
For example, consider a task that is attached to a cpuset with
mems 1-3 that sets an Interleave policy over the same set. If
the cpuset's mems change to 3-5, the Interleave will now occur
over nodes 3, 4, and 5. With this flag, however, since only node
3 is allowed from the user's nodemask, the "interleave" only
occurs over that node. If no nodes from the user's nodemask are
now allowed, the Default behavior is used.
MPOL_F_STATIC_NODES cannot be combined with the
MPOL_F_RELATIVE_NODES flag. It also cannot be used for
MPOL_PREFERRED policies that were created with an empty nodemask
(local allocation).
MPOL_F_RELATIVE_NODES
This flag specifies that the nodemask passed
by the user will be mapped relative to the set of the task or VMA's
set of allowed nodes. The kernel stores the user-passed nodemask,
and if the allowed nodes changes, then that original nodemask will
be remapped relative to the new set of allowed nodes.
Without this flag (and without MPOL_F_STATIC_NODES), anytime a
mempolicy is rebound because of a change in the set of allowed
nodes, the node (Preferred) or nodemask (Bind, Interleave) is
remapped to the new set of allowed nodes. That remap may not
preserve the relative nature of the user's passed nodemask to its
set of allowed nodes upon successive rebinds: a nodemask of
1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
allowed nodes is restored to its original state.
With this flag, the remap is done so that the node numbers from
the user's passed nodemask are relative to the set of allowed
nodes. In other words, if nodes 0, 2, and 4 are set in the user's
nodemask, the policy will be effected over the first (and in the
Bind or Interleave case, the third and fifth) nodes in the set of
allowed nodes. The nodemask passed by the user represents nodes
relative to task or VMA's set of allowed nodes.
If the user's nodemask includes nodes that are outside the range
of the new set of allowed nodes (for example, node 5 is set in
the user's nodemask when the set of allowed nodes is only 0-3),
then the remap wraps around to the beginning of the nodemask and,
if not already set, sets the node in the mempolicy nodemask.
For example, consider a task that is attached to a cpuset with
mems 2-5 that sets an Interleave policy over the same set with
MPOL_F_RELATIVE_NODES. If the cpuset's mems change to 3-7, the
interleave now occurs over nodes 3,5-7. If the cpuset's mems
then change to 0,2-3,5, then the interleave occurs over nodes
0,2-3,5.
Thanks to the consistent remapping, applications preparing
nodemasks to specify memory policies using this flag should
disregard their current, actual cpuset imposed memory placement
and prepare the nodemask as if they were always located on
memory nodes 0 to N-1, where N is the number of memory nodes the
policy is intended to manage. Let the kernel then remap to the
set of memory nodes allowed by the task's cpuset, as that may
change over time.
MPOL_F_RELATIVE_NODES cannot be combined with the
MPOL_F_STATIC_NODES flag. It also cannot be used for
MPOL_PREFERRED policies that were created with an empty nodemask
(local allocation).
Memory Policy Reference Counting
================================
To resolve use/free races, struct mempolicy contains an atomic reference
count field. Internal interfaces, mpol_get()/mpol_put() increment and
decrement this reference count, respectively. mpol_put() will only free
the structure back to the mempolicy kmem cache when the reference count
goes to zero.
When a new memory policy is allocated, its reference count is initialized
to '1', representing the reference held by the task that is installing the
new policy. When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
on completion of the policy installation.
During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes. "Usage" here means one of the following:
1) querying of the policy, either by the task itself [using the get_mempolicy()
API discussed below] or by another task using the /proc/<pid>/numa_maps
interface.
2) examination of the policy to determine the policy mode and associated node
or node lists, if any, for page allocation. This is considered a "hot
path". Note that for MPOL_BIND, the "usage" extends across the entire
allocation process, which may sleep during page reclamation, because the
BIND policy nodemask is used, by reference, to filter ineligible nodes.
We can avoid taking an extra reference during the usages listed above as
follows:
1) we never need to get/free the system default policy as this is never
changed nor freed, once the system is up and running.
2) for querying the policy, we do not need to take an extra reference on the
target task's task policy nor vma policies because we always acquire the
task's mm's mmap_sem for read during the query. The set_mempolicy() and
mbind() APIs [see below] always acquire the mmap_sem for write when
installing or replacing task or vma policies. Thus, there is no possibility
of a task or thread freeing a policy while another task or thread is
querying it.
3) Page allocation usage of task or vma policy occurs in the fault path where
we hold the mmap_sem for read. Again, because replacing the task or vma
policy requires that the mmap_sem be held for write, the policy can't be
freed out from under us while we're using it for page allocation.
4) Shared policies require special consideration. One task can replace a
shared memory policy while another task, with a distinct mmap_sem, is
querying or allocating a page based on the policy. To resolve this
potential race, the shared policy infrastructure adds an extra reference
to the shared policy during lookup while holding a spin lock on the shared
policy management structure. This requires that we drop this extra
reference when we're finished "using" the policy. We must drop the
extra reference on shared policies in the same query/allocation paths
used for non-shared policies. For this reason, shared policies are marked
as such, and the extra reference is dropped "conditionally"--i.e., only
for shared policies.
Because of this extra reference counting, and because we must lookup
shared policies in a tree structure under spinlock, shared policies are
more expensive to use in the page allocation path. This is especially
true for shared policies on shared memory regions shared by tasks running
on different NUMA nodes. This extra overhead can be avoided by always
falling back to task or system default policy for shared memory regions,
or by prefaulting the entire shared memory region into memory and locking
it down. However, this might not be appropriate for all applications.
Memory Policy APIs
==================
Linux supports 3 system calls for controlling memory policy. These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.
.. note::
the headers that define these APIs and the parameter data types for
user space applications reside in a package that is not part of the
Linux kernel. The kernel system call interfaces, with the 'sys\_'
prefix, are defined in <linux/syscalls.h>; the mode and flag
definitions are defined in <linux/mempolicy.h>.
Set [Task] Memory Policy::
long set_mempolicy(int mode, const unsigned long *nmask,
unsigned long maxnode);
Sets the calling task's "task/process memory policy" to the mode
specified by the 'mode' argument and the set of nodes defined by
'nmask'. 'nmask' points to a bit mask of node ids containing at least
'maxnode' ids. Optional mode flags may be passed by combining the
'mode' argument with the flag (for example: MPOL_INTERLEAVE |
MPOL_F_STATIC_NODES).
See the set_mempolicy(2) man page for more details.
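For illustration only, a minimal user-space sketch using the set_mempolicy()
wrapper shipped with the (non-kernel) libnuma package that provides
<numaif.h> (link with -lnuma); the node numbers are arbitrary and assume a
system with at least two memory nodes::

    #include <numaif.h>     /* set_mempolicy(), MPOL_* */
    #include <stdio.h>

    int main(void)
    {
        /* Interleave this task's future allocations across nodes 0 and 1. */
        unsigned long nodemask = (1UL << 0) | (1UL << 1);

        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          sizeof(nodemask) * 8) != 0) {
            perror("set_mempolicy");
            return 1;
        }
        return 0;
    }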
Get [Task] Memory Policy or Related Information::
long get_mempolicy(int *mode,
const unsigned long *nmask, unsigned long maxnode,
void *addr, int flags);
Queries the "task/process memory policy" of the calling task, or the
policy or location of a specified virtual address, depending on the
'flags' argument.
See the get_mempolicy(2) man page for more details.
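A matching query sketch, again via libnuma's <numaif.h>; only the policy
mode is requested here, so the nodemask buffer is left NULL::

    #include <numaif.h>     /* get_mempolicy(), MPOL_* */
    #include <stdio.h>

    int main(void)
    {
        int mode;

        /* Query this task's policy mode; no nodemask is requested. */
        if (get_mempolicy(&mode, NULL, 0, NULL, 0) != 0) {
            perror("get_mempolicy");
            return 1;
        }
        printf("task policy mode: %d (MPOL_DEFAULT is %d)\n",
               mode, MPOL_DEFAULT);
        return 0;
    }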
Install VMA/Shared Policy for a Range of Task's Address Space::
long mbind(void *start, unsigned long len, int mode,
const unsigned long *nmask, unsigned long maxnode,
unsigned flags);
mbind() installs the policy specified by (mode, nmask, maxnodes) as a
VMA policy for the range of the calling task's address space specified
by the 'start' and 'len' arguments. Additional actions may be
requested via the 'flags' argument.
See the mbind(2) man page for more details.
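Similarly, a sketch of installing a VMA policy on an anonymous mapping with
mbind(), once more assuming libnuma's <numaif.h>; the region size and node
number are illustrative::

    #include <numaif.h>     /* mbind(), MPOL_* */
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 4UL << 20;                 /* 4 MiB */
        unsigned long nodemask = 1UL << 0;      /* node 0 only */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Pages faulted in for [p, p+len) must come from node 0. */
        if (mbind(p, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) != 0) {
            perror("mbind");
            return 1;
        }
        return 0;
    }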
Memory Policy Command Line Interface
====================================
Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:
+ set the task policy for a specified program via set_mempolicy(2), fork(2) and
exec(2)
+ set the shared policy for a shared memory segment via mbind(2)
The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers. Some distributions
package the headers and compile-time libraries in a separate development
package.
.. _mem_pol_and_cpusets:
Memory Policies and cpusets
===========================
Memory policies work within cpusets as described above. For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints. If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset and
MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used. If the
result is the empty set, the policy is considered invalid and cannot be
installed. If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
onto and folded into the task's set of allowed nodes as previously described.
The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags, and
any of the tasks installs a shared policy on the region: only nodes whose
memories are allowed in both cpusets may be used in the policies. Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information and requires that one know in what cpusets other tasks might
be attached to the shared region. Furthermore, if the cpusets' allowed
memory sets are disjoint, "local" allocation is the only valid policy.

View file

@ -0,0 +1,87 @@
.. _overcommit_accounting:
=====================
Overcommit Accounting
=====================
The Linux kernel supports the following overcommit handling modes
0
Heuristic overcommit handling. Obvious overcommits of address
space are refused. Used for a typical system. It ensures a
seriously wild allocation fails while allowing overcommit to
reduce swap usage. root is allowed to allocate slightly more
memory in this mode. This is the default.
1
Always overcommit. Appropriate for some scientific
applications. Classic example is code using sparse arrays and
just relying on the virtual memory consisting almost entirely
of zero pages.
2
Don't overcommit. The total address space commit for the
system is not permitted to exceed swap + a configurable amount
(default is 50%) of physical RAM. Depending on the amount you
use, in most situations this means a process will not be
killed while accessing pages but will receive errors on memory
allocation as appropriate.
Useful for applications that want to guarantee their memory
allocations will be available in the future without having to
initialize every page.
The overcommit policy is set via the sysctl ``vm.overcommit_memory``.
The overcommit amount can be set via ``vm.overcommit_ratio`` (percentage)
or ``vm.overcommit_kbytes`` (absolute value).
The current overcommit limit and amount committed are viewable in
``/proc/meminfo`` as CommitLimit and Committed_AS respectively.
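As a rough worked example, assume mode 2, the default ``vm.overcommit_ratio``
of 50%, and hypothetical sizes of 8 GiB RAM and 2 GiB swap (the real
calculation also honours ``vm.overcommit_kbytes`` when set and excludes
hugetlb pages)::

	CommitLimit = swap + overcommit_ratio% of RAM
	            = 2 GiB + 50% of 8 GiB
	            = 6 GiB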
Gotchas
=======
The C language stack growth does an implicit mremap. If you want absolute
guarantees and run close to the edge you MUST mmap your stack for the
largest size you think you will need. For typical stack usage this does
not matter much, but it's a corner case if you really, really care.
In mode 2 the MAP_NORESERVE flag is ignored.
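As one hedged illustration of that advice for thread stacks, the whole stack
can be mapped up front at an assumed worst-case size and handed to pthreads
(the size below is purely hypothetical)::

	#include <pthread.h>
	#include <sys/mman.h>
	#include <stdio.h>

	#define STACK_SIZE (8 * 1024 * 1024)	/* assumed worst case */

	static void *worker(void *arg)
	{
		return arg;
	}

	int main(void)
	{
		/* Map (and thus account) the whole stack immediately. */
		void *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
				   MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK,
				   -1, 0);
		if (stack == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		pthread_attr_t attr;
		pthread_attr_init(&attr);
		pthread_attr_setstack(&attr, stack, STACK_SIZE);

		pthread_t t;
		pthread_create(&t, &attr, worker, NULL);
		pthread_join(t, NULL);
		return 0;
	}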
How It Works
============
The overcommit is based on the following rules
For a file backed map
| SHARED or READ-only - 0 cost (the file is the map not swap)
| PRIVATE WRITABLE - size of mapping per instance
For an anonymous or ``/dev/zero`` map
| SHARED - size of mapping
| PRIVATE READ-only - 0 cost (but of little use)
| PRIVATE WRITABLE - size of mapping per instance
Additional accounting
| Pages made writable copies by mmap
| shmfs memory drawn from the same pool
Status
======
* We account mmap memory mappings
* We account mprotect changes in commit
* We account mremap changes in size
* We account brk
* We account munmap
* We report the commit status in /proc
* Account and check on fork
* Review stack handling/building on exec
* SHMfs accounting
* Implement actual limit enforcement
To Do
=====
* Account ptrace pages (this is hard)

View file

@ -1,5 +1,8 @@
.. _page_frags:
==============
Page fragments
--------------
==============
A page fragment is an arbitrary-length arbitrary-offset area of memory
which resides within a 0 or higher order compound page. Multiple

View file

@ -1,5 +1,8 @@
.. _page_migration:
==============
Page migration
--------------
==============
Page migration allows the moving of the physical location of pages between
nodes in a numa system while the process is running. This means that the
@ -20,7 +23,7 @@ Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required. Get it from
ftp://oss.sgi.com/www/projects/libnuma/download/). numactl provides libnuma
which provides an interface similar to other numa functionality for page
migration. cat /proc/<pid>/numa_maps allows an easy review of where the
migration. cat ``/proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.
@ -56,8 +59,8 @@ description for those trying to use migrate_pages() from the kernel
(for userspace usage see the Andi Kleen's numactl package mentioned above)
and then a low level description of how the low level details work.
A. In kernel use of migrate_pages()
-----------------------------------
In kernel use of migrate_pages()
================================
1. Remove pages from the LRU.
@ -78,8 +81,8 @@ A. In kernel use of migrate_pages()
the new page for each page that is considered for
moving.
B. How migrate_pages() works
----------------------------
How migrate_pages() works
=========================
migrate_pages() does several passes over its list of pages. A page is moved
if all references to a page are removable at the time. The page has
@ -142,8 +145,8 @@ Steps:
20. The new page is moved to the LRU and can be scanned by the swapper
etc again.
C. Non-LRU page migration
-------------------------
Non-LRU page migration
======================
Although migration originally aimed at reducing the latency of memory
accesses for NUMA, compaction, which wants to create high-order pages, is
also a main customer.
@ -164,89 +167,91 @@ migration path.
If a driver wants to make its own pages movable, it should define three
functions which are function pointers of struct address_space_operations
(a minimal sketch combining them appears after this list).
1. bool (*isolate_page) (struct page *page, isolate_mode_t mode);
1. ``bool (*isolate_page) (struct page *page, isolate_mode_t mode);``
What VM expects on isolate_page function of driver is to return *true*
if driver isolates page successfully. On returing true, VM marks the page
as PG_isolated so concurrent isolation in several CPUs skip the page
for isolation. If a driver cannot isolate the page, it should return *false*.
What VM expects from the driver's isolate_page function is to return *true*
if the driver isolates the page successfully. On returning true, VM marks
the page as PG_isolated so that concurrent isolation on several CPUs skips
the page. If a driver cannot isolate the page, it should return *false*.
Once page is successfully isolated, VM uses page.lru fields so driver
shouldn't expect to preserve values in that fields.
Once the page is successfully isolated, VM uses the page.lru fields, so the
driver shouldn't expect to preserve the values in those fields.
2. int (*migratepage) (struct address_space *mapping,
struct page *newpage, struct page *oldpage, enum migrate_mode);
2. ``int (*migratepage) (struct address_space *mapping,``
| ``struct page *newpage, struct page *oldpage, enum migrate_mode);``
After isolation, VM calls migratepage of driver with isolated page.
The function of migratepage is to move content of the old page to new page
and set up fields of struct page newpage. Keep in mind that you should
indicate to the VM the oldpage is no longer movable via __ClearPageMovable()
under page_lock if you migrated the oldpage successfully and returns
MIGRATEPAGE_SUCCESS. If driver cannot migrate the page at the moment, driver
can return -EAGAIN. On -EAGAIN, VM will retry page migration in a short time
because VM interprets -EAGAIN as "temporal migration failure". On returning
any error except -EAGAIN, VM will give up the page migration without retrying
in this time.
After isolation, VM calls the driver's migratepage with the isolated page.
The role of migratepage is to move the content of the old page to the new
page and to set up the fields of struct page newpage. Keep in mind that you
should indicate to the VM that the oldpage is no longer movable via
__ClearPageMovable() under page_lock if you migrated the oldpage
successfully and return MIGRATEPAGE_SUCCESS. If the driver cannot migrate
the page at the moment, it can return -EAGAIN. On -EAGAIN, VM will retry
page migration in a short time because VM interprets -EAGAIN as "temporary
migration failure". On returning any error other than -EAGAIN, VM will give
up the page migration without retrying this time.
Driver shouldn't touch page.lru field VM using in the functions.
The driver shouldn't touch the page.lru field that VM uses in these
functions.
3. void (*putback_page)(struct page *);
3. ``void (*putback_page)(struct page *);``
If migration fails on isolated page, VM should return the isolated page
to the driver so VM calls driver's putback_page with migration failed page.
In this function, driver should put the isolated page back to the own data
structure.
If migration fails on the isolated page, VM should return the isolated page
to the driver, so VM calls the driver's putback_page with the page whose
migration failed. In this function, the driver should put the isolated page
back into its own data structure.
4. non-lru movable page flags
There are two page flags for supporting non-lru movable page.
There are two page flags for supporting non-lru movable page.
* PG_movable
* PG_movable
Driver should use the below function to make page movable under page_lock.
Driver should use the below function to make page movable under page_lock::
void __SetPageMovable(struct page *page, struct address_space *mapping)
It needs argument of address_space for registering migration family functions
which will be called by VM. Exactly speaking, PG_movable is not a real flag of
struct page. Rather than, VM reuses page->mapping's lower bits to represent it.
It needs the address_space argument for registering the migration
family functions which will be called by VM. Exactly speaking,
PG_movable is not a real flag of struct page. Rather, VM reuses the
lower bits of page->mapping to represent it.
::
#define PAGE_MAPPING_MOVABLE 0x2
page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;
so driver shouldn't access page->mapping directly. Instead, driver should
use page_mapping which mask off the low two bits of page->mapping under
page lock so it can get right struct address_space.
so the driver shouldn't access page->mapping directly. Instead, the driver
should use page_mapping(), which masks off the low two bits of page->mapping
under the page lock so it can get the right struct address_space.
For testing of non-lru movable page, VM supports __PageMovable function.
However, it doesn't guarantee to identify non-lru movable page because
page->mapping field is unified with other variables in struct page.
As well, if driver releases the page after isolation by VM, page->mapping
doesn't have stable value although it has PAGE_MAPPING_MOVABLE
(Look at __ClearPageMovable). But __PageMovable is cheap to catch whether
page is LRU or non-lru movable once the page has been isolated. Because
LRU pages never can have PAGE_MAPPING_MOVABLE in page->mapping. It is also
good for just peeking to test non-lru movable pages before more expensive
checking with lock_page in pfn scanning to select victim.
For testing of non-lru movable pages, VM supports the __PageMovable()
function. However, it doesn't guarantee identifying non-lru movable pages
because the page->mapping field is unified with other variables in struct
page. Also, if the driver releases the page after isolation by VM,
page->mapping doesn't have a stable value even though it has
PAGE_MAPPING_MOVABLE (look at __ClearPageMovable). But __PageMovable() is
cheap for checking whether the page is LRU or non-lru movable once the page
has been isolated, because LRU pages can never have PAGE_MAPPING_MOVABLE in
page->mapping. It is also good for just peeking to test for non-lru movable
pages before the more expensive check with lock_page() during pfn scanning
to select a victim.
For guaranteeing non-lru movable page, VM provides PageMovable function.
Unlike __PageMovable, PageMovable functions validates page->mapping and
mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden
destroying of page->mapping.
For guaranteeing a non-lru movable page, VM provides the PageMovable()
function. Unlike __PageMovable(), PageMovable() validates page->mapping and
mapping->a_ops->isolate_page under lock_page(). The lock_page() prevents
sudden destruction of page->mapping.
Driver using __SetPageMovable should clear the flag via __ClearMovablePage
under page_lock before the releasing the page.
A driver using __SetPageMovable() should clear the flag via
__ClearPageMovable() under page_lock() before releasing the page.
* PG_isolated
* PG_isolated
To prevent concurrent isolation among several CPUs, VM marks isolated page
as PG_isolated under lock_page. So if a CPU encounters PG_isolated non-lru
movable page, it can skip it. Driver doesn't need to manipulate the flag
because VM will set/clear it automatically. Keep in mind that if driver
sees PG_isolated page, it means the page have been isolated by VM so it
shouldn't touch page.lru field.
PG_isolated is alias with PG_reclaim flag so driver shouldn't use the flag
for own purpose.
To prevent concurrent isolation among several CPUs, VM marks an isolated
page as PG_isolated under lock_page(). So if a CPU encounters a PG_isolated
non-lru movable page, it can skip it. The driver doesn't need to manipulate
the flag because VM will set/clear it automatically. Keep in mind that if a
driver sees a PG_isolated page, it means the page has been isolated by VM,
so it shouldn't touch the page.lru field.
PG_isolated is an alias of the PG_reclaim flag, so a driver shouldn't use
the flag for its own purpose.
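As mentioned above, here is a minimal, non-authoritative sketch of how a
driver might wire these callbacks together. The ``mydrv_*`` helpers are
hypothetical and locking/error handling is elided::

	static bool mydrv_isolate_page(struct page *page, isolate_mode_t mode)
	{
		/* Detach the page from the driver's own lists; true on success. */
		return mydrv_detach(page);
	}

	static int mydrv_migratepage(struct address_space *mapping,
				     struct page *newpage, struct page *oldpage,
				     enum migrate_mode mode)
	{
		if (!mydrv_copy(newpage, oldpage))
			return -EAGAIN;		/* temporary failure, VM retries */

		__ClearPageMovable(oldpage);	/* oldpage is no longer movable */
		return MIGRATEPAGE_SUCCESS;
	}

	static void mydrv_putback_page(struct page *page)
	{
		/* Migration failed: take the isolated page back. */
		mydrv_attach(page);
	}

	static const struct address_space_operations mydrv_aops = {
		.isolate_page	= mydrv_isolate_page,
		.migratepage	= mydrv_migratepage,
		.putback_page	= mydrv_putback_page,
	};

	/* When a page becomes movable, under page_lock:
	 *	__SetPageMovable(page, mydrv_mapping);
	 */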
Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

View file

@ -1,7 +1,11 @@
page owner: Tracking about who allocated each page
-----------------------------------------------------------
.. _page_owner:
* Introduction
==================================================
page owner: Tracking about who allocated each page
==================================================
Introduction
============
page owner is for tracking who allocated each page.
It can be used to debug memory leaks or to find a memory hogger.
@ -34,13 +38,15 @@ not affect to allocation performance, especially if the static keys jump
label patching functionality is available. Following is the kernel's code
size change due to this facility.
- Without page owner
text data bss dec hex filename
40662 1493 644 42799 a72f mm/page_alloc.o
- Without page owner::
- With page owner
text data bss dec hex filename
40892 1493 644 43029 a815 mm/page_alloc.o
40662 1493 644 42799 a72f mm/page_alloc.o
- With page owner::
text data bss dec hex filename
40892 1493 644 43029 a815 mm/page_alloc.o
1427 24 8 1459 5b3 mm/page_ext.o
2722 50 0 2772 ad4 mm/page_owner.o
@ -62,21 +68,23 @@ are catched and marked, although they are mostly allocated from struct
page extension feature. Anyway, after that, no page is left in
un-tracking state.
* Usage
Usage
=====
1) Build user-space helper::
1) Build user-space helper
cd tools/vm
make page_owner_sort
2) Enable page owner
Add "page_owner=on" to boot cmdline.
2) Enable page owner: add "page_owner=on" to boot cmdline.
3) Do the job what you want to debug
4) Analyze information from page owner
4) Analyze information from page owner::
cat /sys/kernel/debug/page_owner > page_owner_full.txt
grep -v ^PFN page_owner_full.txt > page_owner.txt
./page_owner_sort page_owner.txt sorted_page_owner.txt
See the result about who allocated each page
in the sorted_page_owner.txt.
See the result about who allocated each page
in the ``sorted_page_owner.txt``.

View file

@ -1,13 +1,16 @@
pagemap, from the userspace perspective
---------------------------------------
.. _pagemap:
======================================
pagemap from the Userspace Perspective
======================================
pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
userspace programs to examine the page tables and related information by
reading files in /proc.
reading files in ``/proc``.
There are four components to pagemap:
* /proc/pid/pagemap. This file lets a userspace process find out which
* ``/proc/pid/pagemap``. This file lets a userspace process find out which
physical frame each virtual page is mapped to. It contains one 64-bit
value for each virtual page, containing the following data (from
fs/proc/task_mmu.c, above pagemap_read):
@ -15,7 +18,7 @@ There are four components to pagemap:
* Bits 0-54 page frame number (PFN) if present
* Bits 0-4 swap type if swapped
* Bits 5-54 swap offset if swapped
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.txt)
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.rst)
* Bit 56 page exclusively mapped (since 4.2)
* Bits 57-60 zero
* Bit 61 page is file-page or shared-anon (since 3.5)
@ -37,24 +40,24 @@ There are four components to pagemap:
determine which areas of memory are actually mapped and llseek to
skip over unmapped regions.
* /proc/kpagecount. This file contains a 64-bit count of the number of
* ``/proc/kpagecount``. This file contains a 64-bit count of the number of
times each page is mapped, indexed by PFN.
* /proc/kpageflags. This file contains a 64-bit set of flags for each
* ``/proc/kpageflags``. This file contains a 64-bit set of flags for each
page, indexed by PFN.
The flags are (from fs/proc/page.c, above kpageflags_read):
The flags are (from ``fs/proc/page.c``, above kpageflags_read):
0. LOCKED
1. ERROR
2. REFERENCED
3. UPTODATE
4. DIRTY
5. LRU
6. ACTIVE
7. SLAB
8. WRITEBACK
9. RECLAIM
0. LOCKED
1. ERROR
2. REFERENCED
3. UPTODATE
4. DIRTY
5. LRU
6. ACTIVE
7. SLAB
8. WRITEBACK
9. RECLAIM
10. BUDDY
11. MMAP
12. ANON
@ -72,98 +75,108 @@ There are four components to pagemap:
24. ZERO_PAGE
25. IDLE
* /proc/kpagecgroup. This file contains a 64-bit inode number of the
* ``/proc/kpagecgroup``. This file contains a 64-bit inode number of the
memory cgroup each page is charged to, indexed by PFN. Only available when
CONFIG_MEMCG is set.
Short descriptions to the page flags:
=====================================
0. LOCKED
page is being locked for exclusive access, eg. by undergoing read/write IO
7. SLAB
page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator
When compound page is used, SLUB/SLQB will only set this flag on the head
page; SLOB will not flag it at all.
10. BUDDY
0 - LOCKED
page is being locked for exclusive access, eg. by undergoing read/write IO
7 - SLAB
page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator
When compound page is used, SLUB/SLQB will only set this flag on the head
page; SLOB will not flag it at all.
10 - BUDDY
a free memory block managed by the buddy system allocator
The buddy system organizes free memory in blocks of various orders.
An order N block has 2^N physically contiguous pages, with the BUDDY flag
set for and _only_ for the first page.
15. COMPOUND_HEAD
16. COMPOUND_TAIL
15 - COMPOUND_HEAD
A compound page with order N consists of 2^N physically contiguous pages.
A compound page with order 2 takes the form of "HTTT", where H denotes its
head page and T denotes its tail page(s). The major consumers of compound
pages are hugeTLB pages (Documentation/vm/hugetlbpage.txt), the SLUB etc.
pages are hugeTLB pages (Documentation/vm/hugetlbpage.rst), the SLUB etc.
memory allocators and various device drivers. However in this interface,
only huge/giga pages are made visible to end users.
17. HUGE
16 - COMPOUND_TAIL
A compound page tail (see description above).
17 - HUGE
this is an integral part of a HugeTLB page
19. HWPOISON
19 - HWPOISON
hardware detected memory corruption on this page: don't touch the data!
20. NOPAGE
20 - NOPAGE
no page frame exists at the requested address
21. KSM
21 - KSM
identical memory pages dynamically shared between one or more processes
22. THP
22 - THP
contiguous pages which construct transparent hugepages
23. BALLOON
23 - BALLOON
balloon compaction page
24. ZERO_PAGE
24 - ZERO_PAGE
zero page for pfn_zero or huge_zero page
25. IDLE
25 - IDLE
page has not been accessed since it was marked idle (see
Documentation/vm/idle_page_tracking.txt). Note that this flag may be
Documentation/vm/idle_page_tracking.rst). Note that this flag may be
stale in case the page was accessed via a PTE. To make sure the flag
is up-to-date one has to read /sys/kernel/mm/page_idle/bitmap first.
is up-to-date one has to read ``/sys/kernel/mm/page_idle/bitmap`` first.
[IO related page flags]
1. ERROR IO error occurred
3. UPTODATE page has up-to-date data
ie. for file backed page: (in-memory data revision >= on-disk one)
4. DIRTY page has been written to, hence contains new data
ie. for file backed page: (in-memory data revision > on-disk one)
8. WRITEBACK page is being synced to disk
IO related page flags
---------------------
[LRU related page flags]
5. LRU page is in one of the LRU lists
6. ACTIVE page is in the active LRU list
18. UNEVICTABLE page is in the unevictable (non-)LRU list
It is somehow pinned and not a candidate for LRU page reclaims,
eg. ramfs pages, shmctl(SHM_LOCK) and mlock() memory segments
2. REFERENCED page has been referenced since last LRU list enqueue/requeue
9. RECLAIM page will be reclaimed soon after its pageout IO completed
11. MMAP a memory mapped page
12. ANON a memory mapped page that is not part of a file
13. SWAPCACHE page is mapped to swap space, ie. has an associated swap entry
14. SWAPBACKED page is backed by swap/RAM
1 - ERROR
IO error occurred
3 - UPTODATE
page has up-to-date data
ie. for file backed page: (in-memory data revision >= on-disk one)
4 - DIRTY
page has been written to, hence contains new data
ie. for file backed page: (in-memory data revision > on-disk one)
8 - WRITEBACK
page is being synced to disk
LRU related page flags
----------------------
5 - LRU
page is in one of the LRU lists
6 - ACTIVE
page is in the active LRU list
18 - UNEVICTABLE
page is in the unevictable (non-)LRU list. It is somehow pinned and
not a candidate for LRU page reclaims, e.g. ramfs pages,
shmctl(SHM_LOCK) and mlock() memory segments
2 - REFERENCED
page has been referenced since last LRU list enqueue/requeue
9 - RECLAIM
page will be reclaimed soon after its pageout IO completed
11 - MMAP
a memory mapped page
12 - ANON
a memory mapped page that is not part of a file
13 - SWAPCACHE
page is mapped to swap space, ie. has an associated swap entry
14 - SWAPBACKED
page is backed by swap/RAM
The page-types tool in the tools/vm directory can be used to query the
above flags.
Using pagemap to do something useful:
Using pagemap to do something useful
====================================
The general procedure for using pagemap to find out about a process' memory
usage goes like this:
1. Read /proc/pid/maps to determine which parts of the memory space are
1. Read ``/proc/pid/maps`` to determine which parts of the memory space are
mapped to what.
2. Select the maps you are interested in -- all of them, or a particular
library, or the stack or the heap, etc.
3. Open /proc/pid/pagemap and seek to the pages you would like to examine.
3. Open ``/proc/pid/pagemap`` and seek to the pages you would like to examine.
4. Read a u64 for each page from pagemap.
5. Open /proc/kpagecount and/or /proc/kpageflags. For each PFN you just
read, seek to that entry in the file, and read the data you want.
5. Open ``/proc/kpagecount`` and/or ``/proc/kpageflags``. For each PFN you
just read, seek to that entry in the file, and read the data you want.
For example, to find the "unique set size" (USS), which is the amount of
memory that a process is using that is not shared with any other process,
@ -171,7 +184,8 @@ you can go through every map in the process, find the PFNs, look those up
in kpagecount, and tally up the number of pages that are only referenced
once.
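For illustration, a small userspace sketch of steps 1-5 for a single address
follows (on recent kernels unprivileged readers see the PFN field zeroed, so
run it with enough privilege to get a real PFN)::

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		static int something;
		long pagesize = sysconf(_SC_PAGESIZE);

		something = 1;			/* make sure the page is present */

		int pm = open("/proc/self/pagemap", O_RDONLY);
		uint64_t entry;
		/* One 64-bit entry per virtual page. */
		pread(pm, &entry, sizeof(entry),
		      ((uintptr_t)&something / pagesize) * sizeof(entry));
		close(pm);

		if (entry & (1ULL << 63)) {		/* bit 63: page present */
			uint64_t pfn = entry & ((1ULL << 55) - 1);  /* bits 0-54 */
			uint64_t count = 0;
			int kc = open("/proc/kpagecount", O_RDONLY);
			pread(kc, &count, sizeof(count), pfn * sizeof(count));
			close(kc);
			printf("pfn %#llx mapped %llu time(s)\n",
			       (unsigned long long)pfn,
			       (unsigned long long)count);
		}
		return 0;
	}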
Other notes:
Other notes
===========
Reading from any of the files will return -EINVAL if you are not starting
the read on an 8-byte boundary (e.g., if you sought an odd number of bytes

View file

@ -1,3 +1,9 @@
.. _remap_file_pages:
==============================
remap_file_pages() system call
==============================
The remap_file_pages() system call is used to create a nonlinear mapping,
that is, a mapping in which the pages of the file are mapped into a
nonsequential order in memory. The advantage of using remap_file_pages()

361
Documentation/vm/slub.rst Normal file
View file

@ -0,0 +1,361 @@
.. _slub:
==========================
Short users guide for SLUB
==========================
The basic philosophy of SLUB is very different from SLAB. SLAB
requires rebuilding the kernel to activate debug options for all
slab caches. SLUB always includes full debugging but it is off by default.
SLUB can enable debugging only for selected slabs in order to avoid
an impact on overall system performance which may make a bug more
difficult to find.
In order to switch debugging on one can add an option ``slub_debug``
to the kernel command line. That will enable full debugging for
all slabs.
Typically one would then use the ``slabinfo`` command to get statistical
data and perform operations on the slabs. By default ``slabinfo`` only lists
slabs that have data in them. See "slabinfo -h" for more options when
running the command. ``slabinfo`` can be compiled with
::
gcc -o slabinfo tools/vm/slabinfo.c
Some of the modes of operation of ``slabinfo`` require that slub debugging
be enabled on the command line. For example, no tracking information will be
available without debugging turned on, and validation can only partially
be performed if debugging was not switched on.
Some more sophisticated uses of slub_debug:
-------------------------------------------
Parameters may be given to ``slub_debug``. If none is specified then full
debugging is enabled. Format:
slub_debug=<Debug-Options>
Enable options for all slabs
slub_debug=<Debug-Options>,<slab name>
Enable options only for select slabs
Possible debug options are::
F Sanity checks on (enables SLAB_DEBUG_CONSISTENCY_CHECKS
Sorry SLAB legacy issues)
Z Red zoning
P Poisoning (object and padding)
U User tracking (free and alloc)
T Trace (please only use on single slabs)
A Toggle failslab filter mark for the cache
O Switch debugging off for caches that would have
caused higher minimum slab orders
- Switch all debugging off (useful if the kernel is
configured with CONFIG_SLUB_DEBUG_ON)
For example, in order to boot just with sanity checks and red zoning one would specify::
slub_debug=FZ
Trying to find an issue in the dentry cache? Try::
slub_debug=,dentry
to only enable debugging on the dentry cache.
Red zoning and tracking may realign the slab. We can just apply sanity checks
to the dentry cache with::
slub_debug=F,dentry
Debugging options may require the minimum possible slab order to increase as
a result of storing the metadata (for example, caches with PAGE_SIZE object
sizes). This has a higher likelihood of resulting in slab allocation errors
in low memory situations or if there's high fragmentation of memory. To
switch off debugging for such caches by default, use::
slub_debug=O
In case you forgot to enable debugging on the kernel command line: It is
possible to enable debugging manually when the kernel is up. Look at the
contents of::
/sys/kernel/slab/<slab name>/
Look at the writable files. Writing 1 to them will enable the
corresponding debug option. All options can be set on a slab that does
not contain objects. If the slab already contains objects then sanity checks
and tracing may only be enabled. The other options may cause the realignment
of objects.
Careful with tracing: It may spew out lots of information and never stop if
used on the wrong slab.
Slab merging
============
If no debug options are specified then SLUB may merge similar slabs together
in order to reduce overhead and increase cache hotness of objects.
``slabinfo -a`` displays which slabs were merged together.
Slab validation
===============
SLUB can validate all objects if the kernel was booted with slub_debug. In
order to do so you must have the ``slabinfo`` tool. Then you can do
::
slabinfo -v
which will test all objects. Output will be generated to the syslog.
This also works in a more limited way if boot was without slab debug.
In that case ``slabinfo -v`` simply tests all reachable objects. Usually
these are in the cpu slabs and the partial slabs. Full slabs are not
tracked by SLUB in a non debug situation.
Getting more performance
========================
To some degree SLUB's performance is limited by the need to take the
list_lock once in a while to deal with partial slabs. That overhead is
governed by the order of the allocation for each slab. The allocations
can be influenced by kernel parameters:
.. slub_min_objects=x (default 4)
.. slub_min_order=x (default 0)
.. slub_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
``slub_min_objects``
allows one to specify how many objects must at least fit into one
slab in order for the allocation order to be acceptable. In
general slub will be able to perform this number of
allocations on a slab without consulting centralized resources
(list_lock) where contention may occur.
``slub_min_order``
specifies a minimum order of slabs. It has a similar effect to
``slub_min_objects``.
``slub_max_order``
specifies the order at which ``slub_min_objects`` should no
longer be checked. This is useful to avoid SLUB trying to
generate super large order pages to fit ``slub_min_objects``
of a slab cache with large object sizes into one high order
page. Setting the command line parameter
``debug_guardpage_minorder=N`` (N > 0) forces
``slub_max_order`` to 0, which causes slabs to be allocated at
the minimum possible order.
SLUB Debug output
=================
Here is a sample of slub debug output::
====================================================================
BUG kmalloc-8: Redzone overwritten
--------------------------------------------------------------------
INFO: 0xc90f6d28-0xc90f6d2b. First byte 0x00 instead of 0xcc
INFO: Slab 0xc528c530 flags=0x400000c3 inuse=61 fp=0xc90f6d58
INFO: Object 0xc90f6d20 @offset=3360 fp=0xc90f6d58
INFO: Allocated in get_modalias+0x61/0xf5 age=53 cpu=1 pid=554
Bytes b4 0xc90f6d10: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
Object 0xc90f6d20: 31 30 31 39 2e 30 30 35 1019.005
Redzone 0xc90f6d28: 00 cc cc cc .
Padding 0xc90f6d50: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
[<c010523d>] dump_trace+0x63/0x1eb
[<c01053df>] show_trace_log_lvl+0x1a/0x2f
[<c010601d>] show_trace+0x12/0x14
[<c0106035>] dump_stack+0x16/0x18
[<c017e0fa>] object_err+0x143/0x14b
[<c017e2cc>] check_object+0x66/0x234
[<c017eb43>] __slab_free+0x239/0x384
[<c017f446>] kfree+0xa6/0xc6
[<c02e2335>] get_modalias+0xb9/0xf5
[<c02e23b7>] dmi_dev_uevent+0x27/0x3c
[<c027866a>] dev_uevent+0x1ad/0x1da
[<c0205024>] kobject_uevent_env+0x20a/0x45b
[<c020527f>] kobject_uevent+0xa/0xf
[<c02779f1>] store_uevent+0x4f/0x58
[<c027758e>] dev_attr_store+0x29/0x2f
[<c01bec4f>] sysfs_write_file+0x16e/0x19c
[<c0183ba7>] vfs_write+0xd1/0x15a
[<c01841d7>] sys_write+0x3d/0x72
[<c0104112>] sysenter_past_esp+0x5f/0x99
[<b7f7b410>] 0xb7f7b410
=======================
FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
If SLUB encounters a corrupted object (full detection requires the kernel
to be booted with slub_debug) then the following output will be dumped
into the syslog:
1. Description of the problem encountered
This will be a message in the system log starting with::
===============================================
BUG <slab cache affected>: <What went wrong>
-----------------------------------------------
INFO: <corruption start>-<corruption_end> <more info>
INFO: Slab <address> <slab information>
INFO: Object <address> <object information>
INFO: Allocated in <kernel function> age=<jiffies since alloc> cpu=<allocated by
cpu> pid=<pid of the process>
INFO: Freed in <kernel function> age=<jiffies since free> cpu=<freed by cpu>
pid=<pid of the process>
(Object allocation / free information is only available if SLAB_STORE_USER is
set for the slab. slub_debug sets that option)
2. The object contents if an object was involved.
Various types of lines can follow the BUG SLUB line:
Bytes b4 <address> : <bytes>
Shows a few bytes before the object where the problem was detected.
Can be useful if the corruption does not stop with the start of the
object.
Object <address> : <bytes>
The bytes of the object. If the object is inactive then the bytes
typically contain poison values. Any non-poison value shows a
corruption by a write after free.
Redzone <address> : <bytes>
The Redzone following the object. The Redzone is used to detect
writes after the object. All bytes should always have the same
value. If there is any deviation then it is due to a write after
the object boundary.
(Redzone information is only available if SLAB_RED_ZONE is set.
slub_debug sets that option)
Padding <address> : <bytes>
Unused data to fill up the space in order to get the next object
properly aligned. In the debug case we make sure that there are
at least 4 bytes of padding. This allows the detection of writes
before the object.
3. A stackdump
The stackdump describes the location where the error was detected. The cause
of the corruption may more likely be found by looking at the function that
allocated or freed the object.
4. Report on how the problem was dealt with in order to ensure the continued
operation of the system.
These are messages in the system log beginning with::
FIX <slab cache affected>: <corrective action taken>
In the above sample SLUB found that the Redzone of an active object has
been overwritten. Here a string of 8 characters was written into a slab that
has a length of 8 characters. However, an 8 character string needs a
terminating 0. That zero has overwritten the first byte of the Redzone field.
After reporting the details of the issue encountered the FIX SLUB message
tells us that SLUB has restored the Redzone to its proper value and then
system operations continue.
Emergency operations
====================
Minimal debugging (sanity checks alone) can be enabled by booting with::
slub_debug=F
This will generally be enough to enable the resiliency features of SLUB,
which will keep the system running even if a bad kernel component keeps
corrupting objects. This may be important for production systems.
Performance will be impacted by the sanity checks and there will be a
continual stream of error messages to the syslog but no additional memory
will be used (unlike full debugging).
No guarantees. The kernel component still needs to be fixed. Performance
may be optimized further by locating the slab that experiences corruption
and enabling debugging only for that cache, i.e.::
slub_debug=F,dentry
If the corruption occurs by writing after the end of the object then it
may be advisable to enable a Redzone to avoid corrupting the beginning
of other objects::
slub_debug=FZ,dentry
Extended slabinfo mode and plotting
===================================
The ``slabinfo`` tool has a special 'extended' ('-X') mode that includes:
- Slabcache Totals
- Slabs sorted by size (up to -N <num> slabs, default 1)
- Slabs sorted by loss (up to -N <num> slabs, default 1)
Additionally, in this mode ``slabinfo`` does not dynamically scale
sizes (G/M/K) and reports everything in bytes (this functionality is
also available to other slabinfo modes via '-B' option) which makes
reporting more precise and accurate. Moreover, in some sense the `-X'
mode also simplifies the analysis of slabs' behaviour, because its
output can be plotted using the ``slabinfo-gnuplot.sh`` script. So it
pushes the analysis from looking through the numbers (tons of numbers)
to something easier -- visual analysis.
To generate plots:
a) collect slabinfo extended records, for example::
while [ 1 ]; do slabinfo -X >> FOO_STATS; sleep 1; done
b) pass stats file(-s) to ``slabinfo-gnuplot.sh`` script::
slabinfo-gnuplot.sh FOO_STATS [FOO_STATS2 .. FOO_STATSN]
The ``slabinfo-gnuplot.sh`` script will pre-process the collected records
and generate 3 png files (and 3 pre-processing cache files) per STATS
file:
- Slabcache Totals: FOO_STATS-totals.png
- Slabs sorted by size: FOO_STATS-slabs-by-size.png
- Slabs sorted by loss: FOO_STATS-slabs-by-loss.png
Another use case where ``slabinfo-gnuplot.sh`` can be useful is when you
need to compare slabs' behaviour "prior to" and "after" some code
modification. To help you out there, ``slabinfo-gnuplot.sh`` script
can 'merge' the `Slabcache Totals` sections from different
measurements. To visually compare N plots:
a) Collect as many STATS1, STATS2, .. STATSN files as you need::
while [ 1 ]; do slabinfo -X >> STATS<X>; sleep 1; done
b) Pre-process those STATS files::
slabinfo-gnuplot.sh STATS1 STATS2 .. STATSN
c) Execute ``slabinfo-gnuplot.sh`` in '-t' mode, passing all of the
generated pre-processed \*-totals::
slabinfo-gnuplot.sh -t STATS1-totals STATS2-totals .. STATSN-totals
This will produce a single plot (png file).
Plots, expectedly, can be large so some fluctuations or small spikes
can go unnoticed. To deal with that, ``slabinfo-gnuplot.sh`` has two
options to 'zoom-in'/'zoom-out':
a) ``-s %d,%d`` -- overwrites the default image width and height
b) ``-r %d,%d`` -- specifies a range of samples to use (for example,
in ``slabinfo -X >> FOO_STATS; sleep 1;`` case, using a ``-r
40,60`` range will plot only samples collected between 40th and
60th seconds).
Christoph Lameter, May 30, 2007
Sergey Senozhatsky, October 23, 2015

View file

@ -1,342 +0,0 @@
Short users guide for SLUB
--------------------------
The basic philosophy of SLUB is very different from SLAB. SLAB
requires rebuilding the kernel to activate debug options for all
slab caches. SLUB always includes full debugging but it is off by default.
SLUB can enable debugging only for selected slabs in order to avoid
an impact on overall system performance which may make a bug more
difficult to find.
In order to switch debugging on one can add an option "slub_debug"
to the kernel command line. That will enable full debugging for
all slabs.
Typically one would then use the "slabinfo" command to get statistical
data and perform operation on the slabs. By default slabinfo only lists
slabs that have data in them. See "slabinfo -h" for more options when
running the command. slabinfo can be compiled with
gcc -o slabinfo tools/vm/slabinfo.c
Some of the modes of operation of slabinfo require that slub debugging
be enabled on the command line. F.e. no tracking information will be
available without debugging on and validation can only partially
be performed if debugging was not switched on.
Some more sophisticated uses of slub_debug:
-------------------------------------------
Parameters may be given to slub_debug. If none is specified then full
debugging is enabled. Format:
slub_debug=<Debug-Options> Enable options for all slabs
slub_debug=<Debug-Options>,<slab name>
Enable options only for select slabs
Possible debug options are
F Sanity checks on (enables SLAB_DEBUG_CONSISTENCY_CHECKS
Sorry SLAB legacy issues)
Z Red zoning
P Poisoning (object and padding)
U User tracking (free and alloc)
T Trace (please only use on single slabs)
A Toggle failslab filter mark for the cache
O Switch debugging off for caches that would have
caused higher minimum slab orders
- Switch all debugging off (useful if the kernel is
configured with CONFIG_SLUB_DEBUG_ON)
F.e. in order to boot just with sanity checks and red zoning one would specify:
slub_debug=FZ
Trying to find an issue in the dentry cache? Try
slub_debug=,dentry
to only enable debugging on the dentry cache.
Red zoning and tracking may realign the slab. We can just apply sanity checks
to the dentry cache with
slub_debug=F,dentry
Debugging options may require the minimum possible slab order to increase as
a result of storing the metadata (for example, caches with PAGE_SIZE object
sizes). This has a higher liklihood of resulting in slab allocation errors
in low memory situations or if there's high fragmentation of memory. To
switch off debugging for such caches by default, use
slub_debug=O
In case you forgot to enable debugging on the kernel command line: It is
possible to enable debugging manually when the kernel is up. Look at the
contents of:
/sys/kernel/slab/<slab name>/
Look at the writable files. Writing 1 to them will enable the
corresponding debug option. All options can be set on a slab that does
not contain objects. If the slab already contains objects then sanity checks
and tracing may only be enabled. The other options may cause the realignment
of objects.
Careful with tracing: It may spew out lots of information and never stop if
used on the wrong slab.
Slab merging
------------
If no debug options are specified then SLUB may merge similar slabs together
in order to reduce overhead and increase cache hotness of objects.
slabinfo -a displays which slabs were merged together.
Slab validation
---------------
SLUB can validate all object if the kernel was booted with slub_debug. In
order to do so you must have the slabinfo tool. Then you can do
slabinfo -v
which will test all objects. Output will be generated to the syslog.
This also works in a more limited way if boot was without slab debug.
In that case slabinfo -v simply tests all reachable objects. Usually
these are in the cpu slabs and the partial slabs. Full slabs are not
tracked by SLUB in a non debug situation.
Getting more performance
------------------------
To some degree SLUB's performance is limited by the need to take the
list_lock once in a while to deal with partial slabs. That overhead is
governed by the order of the allocation for each slab. The allocations
can be influenced by kernel parameters:
slub_min_objects=x (default 4)
slub_min_order=x (default 0)
slub_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
slub_min_objects allows to specify how many objects must at least fit
into one slab in order for the allocation order to be acceptable.
In general slub will be able to perform this number of allocations
on a slab without consulting centralized resources (list_lock) where
contention may occur.
slub_min_order specifies a minim order of slabs. A similar effect like
slub_min_objects.
slub_max_order specified the order at which slub_min_objects should no
longer be checked. This is useful to avoid SLUB trying to generate
super large order pages to fit slub_min_objects of a slab cache with
large object sizes into one high order page. Setting command line
parameter debug_guardpage_minorder=N (N > 0), forces setting
slub_max_order to 0, what cause minimum possible order of slabs
allocation.
SLUB Debug output
-----------------
Here is a sample of slub debug output:
====================================================================
BUG kmalloc-8: Redzone overwritten
--------------------------------------------------------------------
INFO: 0xc90f6d28-0xc90f6d2b. First byte 0x00 instead of 0xcc
INFO: Slab 0xc528c530 flags=0x400000c3 inuse=61 fp=0xc90f6d58
INFO: Object 0xc90f6d20 @offset=3360 fp=0xc90f6d58
INFO: Allocated in get_modalias+0x61/0xf5 age=53 cpu=1 pid=554
Bytes b4 0xc90f6d10: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
Object 0xc90f6d20: 31 30 31 39 2e 30 30 35 1019.005
Redzone 0xc90f6d28: 00 cc cc cc .
Padding 0xc90f6d50: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
[<c010523d>] dump_trace+0x63/0x1eb
[<c01053df>] show_trace_log_lvl+0x1a/0x2f
[<c010601d>] show_trace+0x12/0x14
[<c0106035>] dump_stack+0x16/0x18
[<c017e0fa>] object_err+0x143/0x14b
[<c017e2cc>] check_object+0x66/0x234
[<c017eb43>] __slab_free+0x239/0x384
[<c017f446>] kfree+0xa6/0xc6
[<c02e2335>] get_modalias+0xb9/0xf5
[<c02e23b7>] dmi_dev_uevent+0x27/0x3c
[<c027866a>] dev_uevent+0x1ad/0x1da
[<c0205024>] kobject_uevent_env+0x20a/0x45b
[<c020527f>] kobject_uevent+0xa/0xf
[<c02779f1>] store_uevent+0x4f/0x58
[<c027758e>] dev_attr_store+0x29/0x2f
[<c01bec4f>] sysfs_write_file+0x16e/0x19c
[<c0183ba7>] vfs_write+0xd1/0x15a
[<c01841d7>] sys_write+0x3d/0x72
[<c0104112>] sysenter_past_esp+0x5f/0x99
[<b7f7b410>] 0xb7f7b410
=======================
FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
If SLUB encounters a corrupted object (full detection requires the kernel
to be booted with slub_debug) then the following output will be dumped
into the syslog:
1. Description of the problem encountered
This will be a message in the system log starting with
===============================================
BUG <slab cache affected>: <What went wrong>
-----------------------------------------------
INFO: <corruption start>-<corruption_end> <more info>
INFO: Slab <address> <slab information>
INFO: Object <address> <object information>
INFO: Allocated in <kernel function> age=<jiffies since alloc> cpu=<allocated by
cpu> pid=<pid of the process>
INFO: Freed in <kernel function> age=<jiffies since free> cpu=<freed by cpu>
pid=<pid of the process>
(Object allocation / free information is only available if SLAB_STORE_USER is
set for the slab. slub_debug sets that option)
2. The object contents if an object was involved.
Various types of lines can follow the BUG SLUB line:
Bytes b4 <address> : <bytes>
Shows a few bytes before the object where the problem was detected.
Can be useful if the corruption does not stop with the start of the
object.
Object <address> : <bytes>
The bytes of the object. If the object is inactive then the bytes
typically contain poison values. Any non-poison value shows a
corruption by a write after free.
Redzone <address> : <bytes>
The Redzone following the object. The Redzone is used to detect
writes after the object. All bytes should always have the same
value. If there is any deviation then it is due to a write after
the object boundary.
(Redzone information is only available if SLAB_RED_ZONE is set.
slub_debug sets that option)
Padding <address> : <bytes>
Unused data to fill up the space in order to get the next object
properly aligned. In the debug case we make sure that there are
at least 4 bytes of padding. This allows the detection of writes
before the object.
3. A stackdump
The stackdump describes the location where the error was detected. The cause
of the corruption is may be more likely found by looking at the function that
allocated or freed the object.
4. Report on how the problem was dealt with in order to ensure the continued
operation of the system.
These are messages in the system log beginning with
FIX <slab cache affected>: <corrective action taken>
In the above sample SLUB found that the Redzone of an active object has
been overwritten. Here a string of 8 characters was written into a slab that
has the length of 8 characters. However, a 8 character string needs a
terminating 0. That zero has overwritten the first byte of the Redzone field.
After reporting the details of the issue encountered the FIX SLUB message
tells us that SLUB has restored the Redzone to its proper value and then
system operations continue.
Emergency operations:
---------------------
Minimal debugging (sanity checks alone) can be enabled by booting with
slub_debug=F
This will be generally be enough to enable the resiliency features of slub
which will keep the system running even if a bad kernel component will
keep corrupting objects. This may be important for production systems.
Performance will be impacted by the sanity checks and there will be a
continual stream of error messages to the syslog but no additional memory
will be used (unlike full debugging).
No guarantees. The kernel component still needs to be fixed. Performance
may be optimized further by locating the slab that experiences corruption
and enabling debugging only for that cache
I.e.
slub_debug=F,dentry
If the corruption occurs by writing after the end of the object then it
may be advisable to enable a Redzone to avoid corrupting the beginning
of other objects.
slub_debug=FZ,dentry
Extended slabinfo mode and plotting
-----------------------------------
The slabinfo tool has a special 'extended' ('-X') mode that includes:
- Slabcache Totals
- Slabs sorted by size (up to -N <num> slabs, default 1)
- Slabs sorted by loss (up to -N <num> slabs, default 1)
Additionally, in this mode slabinfo does not dynamically scale sizes (G/M/K)
and reports everything in bytes (this functionality is also available to
other slabinfo modes via '-B' option) which makes reporting more precise and
accurate. Moreover, in some sense the `-X' mode also simplifies the analysis
of slabs' behaviour, because its output can be plotted using the
slabinfo-gnuplot.sh script. So it pushes the analysis from looking through
the numbers (tons of numbers) to something easier -- visual analysis.
To generate plots:
a) collect slabinfo extended records, for example:
while [ 1 ]; do slabinfo -X >> FOO_STATS; sleep 1; done
b) pass stats file(-s) to slabinfo-gnuplot.sh script:
slabinfo-gnuplot.sh FOO_STATS [FOO_STATS2 .. FOO_STATSN]
The slabinfo-gnuplot.sh script will pre-processes the collected records
and generates 3 png files (and 3 pre-processing cache files) per STATS
file:
- Slabcache Totals: FOO_STATS-totals.png
- Slabs sorted by size: FOO_STATS-slabs-by-size.png
- Slabs sorted by loss: FOO_STATS-slabs-by-loss.png
Another use case, when slabinfo-gnuplot can be useful, is when you need
to compare slabs' behaviour "prior to" and "after" some code modification.
To help you out there, slabinfo-gnuplot.sh script can 'merge' the
`Slabcache Totals` sections from different measurements. To visually
compare N plots:
a) Collect as many STATS1, STATS2, .. STATSN files as you need
while [ 1 ]; do slabinfo -X >> STATS<X>; sleep 1; done
b) Pre-process those STATS files
slabinfo-gnuplot.sh STATS1 STATS2 .. STATSN
c) Execute slabinfo-gnuplot.sh in '-t' mode, passing all of the
generated pre-processed *-totals
slabinfo-gnuplot.sh -t STATS1-totals STATS2-totals .. STATSN-totals
This will produce a single plot (png file).
Plots, expectedly, can be large so some fluctuations or small spikes
can go unnoticed. To deal with that, `slabinfo-gnuplot.sh' has two
options to 'zoom-in'/'zoom-out':
a) -s %d,%d overwrites the default image width and heigh
b) -r %d,%d specifies a range of samples to use (for example,
in `slabinfo -X >> FOO_STATS; sleep 1;' case, using
a "-r 40,60" range will plot only samples collected
between 40th and 60th seconds).
Christoph Lameter, May 30, 2007
Sergey Senozhatsky, October 23, 2015

View file

@ -1,34 +1,38 @@
SOFT-DIRTY PTEs
.. _soft_dirty:
The soft-dirty is a bit on a PTE which helps to track which pages a task
===============
Soft-Dirty PTEs
===============
The soft-dirty is a bit on a PTE which helps to track which pages a task
writes to. In order to do this tracking one should
1. Clear soft-dirty bits from the task's PTEs.
This is done by writing "4" into the /proc/PID/clear_refs file of the
This is done by writing "4" into the ``/proc/PID/clear_refs`` file of the
task in question.
2. Wait some time.
3. Read soft-dirty bits from the PTEs.
This is done by reading from the /proc/PID/pagemap. The bit 55 of the
This is done by reading from the ``/proc/PID/pagemap``. The bit 55 of the
64-bit qword is the soft-dirty one. If set, the respective PTE was
written to since step 1.
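A small userspace sketch of these three steps, checking a single page in the
current task (illustrative only)::

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		static char buf[4096];
		long pagesize = sysconf(_SC_PAGESIZE);

		/* 1. Clear soft-dirty bits for this task. */
		int cr = open("/proc/self/clear_refs", O_WRONLY);
		write(cr, "4", 1);
		close(cr);

		/* 2. Do some work that may write to memory. */
		memset(buf, 0x5a, sizeof(buf));

		/* 3. Read bit 55 of the pagemap entry for buf's page. */
		uint64_t entry;
		int pm = open("/proc/self/pagemap", O_RDONLY);
		pread(pm, &entry, sizeof(entry),
		      ((uintptr_t)buf / pagesize) * sizeof(entry));
		close(pm);

		printf("page is %ssoft-dirty\n",
		       (entry & (1ULL << 55)) ? "" : "not ");
		return 0;
	}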
Internally, to do this tracking, the writable bit is cleared from PTEs
Internally, to do this tracking, the writable bit is cleared from PTEs
when the soft-dirty bit is cleared. So, after this, when the task tries to
modify a page at some virtual address the #PF occurs and the kernel sets
the soft-dirty bit on the respective PTE.
Note, that although all the task's address space is marked as r/o after the
Note, that although all the task's address space is marked as r/o after the
soft-dirty bits clear, the #PF-s that occur after that are processed fast.
This is so since the pages are still mapped to physical memory, and thus
all the kernel does is find this fact out and put both the writable and
soft-dirty bits on the PTE.
While in most cases tracking memory changes by #PF-s is more than enough
While in most cases tracking memory changes by #PF-s is more than enough,
there is still a scenario when we can lose soft dirty bits -- a task
unmaps a previously mapped memory region and then maps a new one at exactly
the same place. When unmap is called, the kernel internally clears PTE values
@ -36,7 +40,7 @@ including soft dirty bits. To notify user space application about such
memory region renewal the kernel always marks new memory regions (and
expanded regions) as soft dirty.
This feature is actively used by the checkpoint-restore project. You
This feature is actively used by the checkpoint-restore project. You
can find more details about it on http://criu.org

View file

@ -1,3 +1,6 @@
.. _split_page_table_lock:
=====================
Split page table lock
=====================
@ -11,6 +14,7 @@ access to the table. At the moment we use split lock for PTE and PMD
tables. Access to higher level tables protected by mm->page_table_lock.
There are helpers to lock/unlock a table and other accessor functions:
- pte_offset_map_lock()
maps pte and takes PTE table lock, returns pointer to the taken
lock;
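The usual calling pattern is a sketch along these lines (not taken from any
particular caller)::

	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	/* ... examine or update *pte while the PTE table lock is held ... */
	pte_unmap_unlock(pte, ptl);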
@ -34,12 +38,13 @@ Split page table lock for PMD tables is enabled, if it's enabled for PTE
tables and the architecture supports it (see below).
Hugetlb and split page table lock
---------------------------------
=================================
Hugetlb can support several page sizes. We use split lock only for PMD
level, but not for PUD.
Hugetlb-specific helpers:
- huge_pte_lock()
takes pmd split lock for PMD_SIZE page, mm->page_table_lock
otherwise;
@ -47,7 +52,7 @@ Hugetlb-specific helpers:
returns pointer to table lock;
Support of split page table lock by an architecture
---------------------------------------------------
===================================================
There's no need for special enabling of the PTE split page table lock:
everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
@ -73,7 +78,7 @@ NOTE: pgtable_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must
be handled properly.
page->ptl
---------
=========
page->ptl is used to access split page table lock, where 'page' is struct
page of page containing the table. It shares storage with page->private
@ -81,6 +86,7 @@ page of page containing the table. It shares storage with page->private
To avoid increasing size of struct page and have best performance, we use a
trick:
- if spinlock_t fits into long, we use page->ptl as spinlock, so we
can avoid indirect access and save a cache line.
- if size of spinlock_t is bigger than size of long, we use page->ptl as

View file

@ -1,5 +1,8 @@
.. _swap_numa:
===========================================
Automatically bind swap device to numa node
-------------------------------------------
===========================================
If the system has more than one swap device and swap device has the node
information, we can make use of this information to decide which swap
@ -7,15 +10,16 @@ device to use in get_swap_pages() to get better performance.
How to use this feature
-----------------------
=======================
Each swap device has a priority and that decides the order in which it is
used. To make use of automatic binding, there is no need to manipulate
priority settings
for swap devices. e.g. on a 2 node machine, assume 2 swap devices swapA and
swapB, with swapA attached to node 0 and swapB attached to node 1, are going
to be swapped on. Simply swapping them on by doing:
# swapon /dev/swapA
# swapon /dev/swapB
to be swapped on. Simply swapping them on by doing::
# swapon /dev/swapA
# swapon /dev/swapB
Then node 0 will use the two swap devices in the order of swapA then swapB and
node 1 will use the two swap devices in the order of swapB then swapA. Note
@ -24,32 +28,39 @@ that the order of them being swapped on doesn't matter.
A more complex example on a 4 node machine. Assume 6 swap devices are going to
be swapped on: swapA and swapB are attached to node 0, swapC is attached to
node 1, swapD and swapE are attached to node 2 and swapF is attached to node3.
The way to swap them on is the same as above:
# swapon /dev/swapA
# swapon /dev/swapB
# swapon /dev/swapC
# swapon /dev/swapD
# swapon /dev/swapE
# swapon /dev/swapF
The way to swap them on is the same as above::
# swapon /dev/swapA
# swapon /dev/swapB
# swapon /dev/swapC
# swapon /dev/swapD
# swapon /dev/swapE
# swapon /dev/swapF
Then node 0 will use them in the order of::
swapA/swapB -> swapC -> swapD -> swapE -> swapF
Then node 0 will use them in the order of:
swapA/swapB -> swapC -> swapD -> swapE -> swapF
swapA and swapB will be used in a round robin mode before any other swap device.
node 1 will use them in the order of:
swapC -> swapA -> swapB -> swapD -> swapE -> swapF
node 1 will use them in the order of::
swapC -> swapA -> swapB -> swapD -> swapE -> swapF
node 2 will use them in the order of::
swapD/swapE -> swapA -> swapB -> swapC -> swapF
node 2 will use them in the order of:
swapD/swapE -> swapA -> swapB -> swapC -> swapF
Similarly, swapD and swapE will be used in a round robin mode before any
other swap devices.
node 3 will use them in the order of:
swapF -> swapA -> swapB -> swapC -> swapD -> swapE
node 3 will use them in the order of::
swapF -> swapA -> swapB -> swapC -> swapD -> swapE
Implementation details
----------------------
======================
The current code uses a priority based list, swap_avail_list, to decide
which swap device to use and if multiple swap devices share the same

View file

@ -1,6 +1,11 @@
= Transparent Hugepage Support =
.. _transhuge:
== Objective ==
============================
Transparent Hugepage Support
============================
Objective
=========
Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
@ -33,7 +38,8 @@ are using hugepages but a significant speedup already happens if only
one of the two is using hugepages just because the TLB
miss is going to run faster.
== Design ==
Design
======
- "graceful fallback": mm components which don't have transparent hugepage
knowledge fall back to breaking huge pmd mapping into table of ptes and,
@ -88,16 +94,17 @@ Applications that gets a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
== sysfs ==
sysfs
=====
Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved with one of:
system wide. This can be achieved with one of::
echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
It's also possible to limit defrag efforts in the VM to generate
anonymous hugepages in case they're not immediately free to madvise
@ -108,44 +115,53 @@ use hugepages later instead of regular pages. This isn't always
guaranteed, but it may be more likely in case the allocation is for a
MADV_HUGEPAGE region.
echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag
::
"always" means that an application requesting THP will stall on allocation
failure and directly reclaim pages and compact memory in an effort to
allocate a THP immediately. This may be desirable for virtual machines
that benefit heavily from THP use and are willing to delay the VM start
to utilise them.
echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag
"defer" means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future. It's the responsibility of khugepaged
to then install the THP pages later.
always
means that an application requesting THP will stall on
allocation failure and directly reclaim pages and compact
memory in an effort to allocate a THP immediately. This may be
desirable for virtual machines that benefit heavily from THP
use and are willing to delay the VM start to utilise them.
"defer+madvise" will enter direct reclaim and compaction like "always", but
only for regions that have used madvise(MADV_HUGEPAGE); all other regions
will wake kswapd in the background to reclaim pages and wake kcompactd to
compact memory so that THP is available in the near future.
defer
means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that
THP is available in the near future. It's the responsibility
of khugepaged to then install the THP pages later.
"madvise" will enter direct reclaim like "always" but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default behaviour.
defer+madvise
will enter direct reclaim and compaction like ``always``, but
only for regions that have used madvise(MADV_HUGEPAGE); all
other regions will wake kswapd in the background to reclaim
pages and wake kcompactd to compact memory so that THP is
available in the near future.
"never" should be self-explanatory.
madvise
will enter direct reclaim like ``always`` but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default
behaviour.
never
should be self-explanatory.
By default kernel tries to use huge zero page on read page fault to
anonymous mapping. It's possible to disable huge zero page by writing 0
or enable it back by writing 1:
or enable it back by writing 1::
echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
Some userspace (such as a test program, or an optimized memory allocation
library) may want to know the size (in bytes) of a transparent hugepage:
library) may want to know the size (in bytes) of a transparent hugepage::
cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
@ -155,84 +171,86 @@ khugepaged runs usually at low frequency so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:
defrag in khugepaged by writing 1::
echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
You can also control how many pages khugepaged should scan at each
pass:
pass::
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):
can set this to 0 to run khugepaged at 100% utilization of one core)::
/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
and how many milliseconds to wait in khugepaged if there's an hugepage
allocation failure to throttle the next allocation attempt.
allocation failure to throttle the next allocation attempt::
/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
The khugepaged progress can be seen in the number of pages collapsed:
The khugepaged progress can be seen in the number of pages collapsed::
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
for each pass:
for each pass::
/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
max_ptes_none specifies how many extra small pages (that are
``max_ptes_none`` specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page.
of small pages into one large page::
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
A higher value means that additional memory may be used by programs.
A lower value means less THP performance gain. The value of
max_ptes_none has a negligible effect on CPU time, so it can be
ignored.
max_ptes_swap specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page.
``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap
A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.
== Boot parameter ==
Boot parameter
==============
You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.
Support by passing the parameter ``transparent_hugepage=always`` or
``transparent_hugepage=madvise`` or ``transparent_hugepage=never``
to the kernel command line.
== Hugepages in tmpfs/shmem ==
Hugepages in tmpfs/shmem
========================
You can control hugepage allocation policy in tmpfs with mount option
"huge=". It can have following values:
``huge=``. It can have following values:
- "always":
always
Attempt to allocate huge pages every time we need a new page;
- "never":
never
Do not allocate huge pages;
- "within_size":
within_size
Only allocate huge page if it will be fully within i_size.
Also respect fadvise()/madvise() hints;
- "advise:
advise
Only allocate huge pages if requested with fadvise()/madvise();
The default policy is "never".
The default policy is ``never``.
"mount -o remount,huge= /mountpoint" works fine after mount: remounting
huge=never will not attempt to break up huge pages at all, just stop more
``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
``huge=never`` will not attempt to break up huge pages at all, just stop more
from being allocated.
There's also sysfs knob to control hugepage allocation policy for internal
@ -243,110 +261,130 @@ MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.
In addition to policies listed above, shmem_enabled allows two further
values:
- "deny":
deny
For use in emergencies, to force the huge option off from
all mounts;
- "force":
force
Force the huge option on for all - very useful for testing;
== Need of application restart ==
Need of application restart
===========================
The transparent_hugepage/enabled values and tmpfs mount option only affect
future behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to the
regions registered in khugepaged.
== Monitoring usage ==
Monitoring usage
================
The number of anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in /proc/meminfo.
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using anonymous transparent huge pages,
it is necessary to read /proc/PID/smaps and count the AnonHugePages fields
it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages fields
for each mapping.
The number of file transparent huge pages mapped to userspace is available
by reading ShmemPmdMapped and ShmemHugePages fields in /proc/meminfo.
by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
To identify what applications are mapping file transparent huge pages, it
is necessary to read /proc/PID/smaps and count the FileHugeMapped fields
is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
for each mapping.
Note that reading the smaps file is expensive and reading it
frequently will incur overhead.
There are a number of counters in /proc/vmstat that may be used to
There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use.
thp_fault_alloc is incremented every time a huge page is successfully
thp_fault_alloc
is incremented every time a huge page is successfully
allocated to handle a page fault. This applies to both the
first time a page is faulted and for COW faults.
thp_collapse_alloc is incremented by khugepaged when it has found
thp_collapse_alloc
is incremented by khugepaged when it has found
a range of pages to collapse into one huge page and has
successfully allocated a new huge page to store the data.
thp_fault_fallback is incremented if a page fault fails to allocate
thp_fault_fallback
is incremented if a page fault fails to allocate
a huge page and instead falls back to using small pages.
thp_collapse_alloc_failed is incremented if khugepaged found a range
thp_collapse_alloc_failed
is incremented if khugepaged found a range
of pages that should be collapsed into one huge page but failed
the allocation.
thp_file_alloc is incremented every time a file huge page is successfully
thp_file_alloc
is incremented every time a file huge page is successfully
allocated.
thp_file_mapped is incremented every time a file huge page is mapped into
thp_file_mapped
is incremented every time a file huge page is mapped into
user address space.
thp_split_page is incremented every time a huge page is split into base
thp_split_page
is incremented every time a huge page is split into base
pages. This can happen for a variety of reasons but a common
reason is that a huge page is old and is being reclaimed.
This action implies splitting all PMDs the page is mapped with.
thp_split_page_failed is incremented if kernel fails to split huge
thp_split_page_failed
is incremented if kernel fails to split huge
page. This can happen if the page was pinned by somebody.
thp_deferred_split_page is incremented when a huge page is put onto split
thp_deferred_split_page
is incremented when a huge page is put onto split
queue. This happens when a huge page is partially unmapped and
splitting it would free up some memory. Pages on split queue are
going to be split under memory pressure.
thp_split_pmd is incremented every time a PMD is split into a table of PTEs.
thp_split_pmd
is incremented every time a PMD is split into a table of PTEs.
This can happen, for instance, when an application calls mprotect() or
munmap() on part of huge page. It doesn't split huge page, only
page table entry.
thp_zero_page_alloc is incremented every time a huge zero page is
thp_zero_page_alloc
is incremented every time a huge zero page is
successfully allocated. It includes allocations which were
dropped due to a race with another allocation. Note, it doesn't count
every map of the huge zero page, only its allocation.
thp_zero_page_alloc_failed is incremented if kernel fails to allocate
thp_zero_page_alloc_failed
is incremented if kernel fails to allocate
huge zero page and falls back to using small pages.
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
huge page for use. There are some counters in ``/proc/vmstat`` to help
monitor this overhead.
compact_stall is incremented every time a process stalls to run
compact_stall
is incremented every time a process stalls to run
memory compaction so that a huge page is free for use.
compact_success is incremented if the system compacted memory and
compact_success
is incremented if the system compacted memory and
freed a huge page for use.
compact_fail is incremented if the system tries to compact memory
compact_fail
is incremented if the system tries to compact memory
but failed.
compact_pages_moved is incremented each time a page is moved. If
compact_pages_moved
is incremented each time a page is moved. If
this value is increasing rapidly, it implies that the system
is copying a lot of data to satisfy the huge page allocation.
It is possible that the cost of copying exceeds any savings
from reduced TLB misses.
compact_pagemigrate_failed is incremented when the underlying mechanism
compact_pagemigrate_failed
is incremented when the underlying mechanism
for moving a page failed.
compact_blocks_moved is incremented each time memory compaction examines
compact_blocks_moved
is incremented each time memory compaction examines
a huge page aligned range of pages.
It is possible to establish how long the stalls were using the function
@ -354,7 +392,8 @@ tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.
== get_user_pages and follow_page ==
get_user_pages and follow_page
==============================
get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
@ -367,10 +406,11 @@ for the head page and not the tail page), it should be updated to jump
to check head page instead. Taking reference on any head/tail page would
prevent page from being split by anyone.
NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.
.. note::
these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.
In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as parameter to
@ -383,13 +423,15 @@ hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
== Optimizing the applications ==
Optimizing the applications
===========================
To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
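A minimal userspace sketch (illustrative only; the helper name and error
handling are not part of any kernel interface) could look like::

	#define _GNU_SOURCE
	#include <stdlib.h>
	#include <sys/mman.h>

	/* Illustrative helper: allocate 'len' bytes aligned to 2M and hint
	 * the kernel to back the region with transparent hugepages.
	 * MADV_HUGEPAGE is a hint, not a guarantee. */
	static void *alloc_thp_hinted(size_t len)
	{
		void *buf = NULL;
		const size_t align = 2UL * 1024 * 1024; /* assumes a 2M PMD size */

		if (posix_memalign(&buf, align, len))
			return NULL;
		madvise(buf, len, MADV_HUGEPAGE);
		return buf;
	}

On a real system the PMD size is better read from
/sys/kernel/mm/transparent_hugepage/hpage_pmd_size than hard-coded.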
== Hugetlbfs ==
Hugetlbfs
=========
You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
@ -397,7 +439,8 @@ hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.
== Graceful fallback ==
Graceful fallback
=================
Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
@ -415,20 +458,21 @@ it tries to swapout the hugepage for example. split_huge_page() can fail
if the page is pinned and you must handle this correctly.
Example to make mremap.c transparent hugepage aware with a one liner
change:
change::
diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
return NULL;
diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
return NULL;
pmd = pmd_offset(pud, addr);
+ split_huge_pmd(vma, pmd, addr);
if (pmd_none_or_clear_bad(pmd))
return NULL;
pmd = pmd_offset(pud, addr);
+ split_huge_pmd(vma, pmd, addr);
if (pmd_none_or_clear_bad(pmd))
return NULL;
== Locking in hugepage aware code ==
Locking in hugepage aware code
==============================
We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.
@ -448,7 +492,8 @@ should just drop the page table lock and fallback to the old code as
before. Otherwise you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.
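As a hedged sketch of that pattern (the function name is illustrative and
the exact interfaces vary across kernel versions)::

	#include <linux/mm.h>
	#include <linux/huge_mm.h>

	/* Illustrative sketch: walk one pmd in a hugepage-aware way; handle
	 * it natively when it is a stable huge pmd, otherwise fall back to
	 * the pte-level code. */
	static int example_walk_pmd(struct vm_area_struct *vma, pmd_t *pmd,
				    unsigned long addr)
	{
		spinlock_t *ptl;

		ptl = pmd_trans_huge_lock(pmd, vma);
		if (ptl) {
			/* The huge pmd is stable under the page table lock:
			 * process the whole mapping here, then drop the lock. */
			spin_unlock(ptl);
			return 1;
		}

		/* Not huge (or already split): use the regular pte path. */
		return 0;
	}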
== Refcounts and transparent huge pages ==
Refcounts and transparent huge pages
====================================
Refcounting on THP is mostly consistent with refcounting on other compound
pages:
@ -510,7 +555,8 @@ clear where reference should go after split: it will stay on head page.
Note that split_huge_pmd() doesn't have any limitation on refcounting:
pmd can be split at any point and never fails.
== Partial unmap and deferred_split_huge_page() ==
Partial unmap and deferred_split_huge_page()
============================================
Unmapping part of THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of THP is not in use

View file

@ -1,37 +1,13 @@
==============================
UNEVICTABLE LRU INFRASTRUCTURE
==============================
.. _unevictable_lru:
========
CONTENTS
========
==============================
Unevictable LRU Infrastructure
==============================
(*) The Unevictable LRU
- The unevictable page list.
- Memory control group interaction.
- Marking address spaces unevictable.
- Detecting Unevictable Pages.
- vmscan's handling of unevictable pages.
(*) mlock()'d pages.
- History.
- Basic management.
- mlock()/mlockall() system call handling.
- Filtering special vmas.
- munlock()/munlockall() system call handling.
- Migrating mlocked pages.
- Compacting mlocked pages.
- mmap(MAP_LOCKED) system call handling.
- munmap()/exit()/exec() system call handling.
- try_to_unmap().
- try_to_munlock() reverse map scan.
- Page reclaim in shrink_*_list().
.. contents:: :local:
============
INTRODUCTION
Introduction
============
This document describes the Linux memory manager's "Unevictable LRU"
@ -46,8 +22,8 @@ details - the "what does it do?" - by reading the code. One hopes that the
descriptions below add value by providing the answer to "why does it do that?".
===================
THE UNEVICTABLE LRU
The Unevictable LRU
===================
The Unevictable LRU facility adds an additional LRU list to track unevictable
@ -66,17 +42,17 @@ completely unresponsive.
The unevictable list addresses the following classes of unevictable pages:
(*) Those owned by ramfs.
* Those owned by ramfs.
(*) Those mapped into SHM_LOCK'd shared memory regions.
* Those mapped into SHM_LOCK'd shared memory regions.
(*) Those mapped into VM_LOCKED [mlock()ed] VMAs.
* Those mapped into VM_LOCKED [mlock()ed] VMAs.
The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.
THE UNEVICTABLE PAGE LIST
The Unevictable Page List
-------------------------
The Unevictable LRU infrastructure consists of an additional, per-zone, LRU list
@ -118,7 +94,7 @@ the unevictable list when one task has the page isolated from the LRU and other
tasks are changing the "evictability" state of the page.
MEMORY CONTROL GROUP INTERACTION
Memory Control Group Interaction
--------------------------------
The unevictable LRU facility interacts with the memory control group [aka
@ -144,7 +120,9 @@ effects:
the control group to thrash or to OOM-kill tasks.
MARKING ADDRESS SPACES UNEVICTABLE
.. _mark_addr_space_unevict:
Marking Address Spaces Unevictable
----------------------------------
For facilities such as ramfs none of the pages attached to the address space
@ -152,15 +130,15 @@ may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:
(*) void mapping_set_unevictable(struct address_space *mapping);
* ``void mapping_set_unevictable(struct address_space *mapping);``
Mark the address space as being completely unevictable.
(*) void mapping_clear_unevictable(struct address_space *mapping);
* ``void mapping_clear_unevictable(struct address_space *mapping);``
Mark the address space as being evictable.
(*) int mapping_unevictable(struct address_space *mapping);
* ``int mapping_unevictable(struct address_space *mapping);``
Query the address space, and return true if it is completely
unevictable.
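For illustration only (the wrapper function names below are hypothetical,
only the mapping_*_unevictable() calls are real), a filesystem could use
these helpers like::

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/* Illustrative: pin all page cache of an inode, as ramfs
	 * effectively does for its inodes. */
	static void example_pin_mapping(struct inode *inode)
	{
		mapping_set_unevictable(inode->i_mapping);
	}

	static void example_unpin_mapping(struct inode *inode)
	{
		mapping_clear_unevictable(inode->i_mapping);
	}

	static bool example_mapping_is_pinned(struct inode *inode)
	{
		/* mapping_unevictable() queries the current state */
		return mapping_unevictable(inode->i_mapping);
	}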
@ -177,12 +155,13 @@ These are currently used in two places in the kernel:
ensure they're in memory.
DETECTING UNEVICTABLE PAGES
Detecting Unevictable Pages
---------------------------
The function page_evictable() in vmscan.c determines whether a page is
evictable or not using the query function outlined above [see section "Marking
address spaces unevictable"] to check the AS_UNEVICTABLE flag.
evictable or not using the query function outlined above [see section
:ref:`Marking address spaces unevictable <mark_addr_space_unevict>`]
to check the AS_UNEVICTABLE flag.
For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (eg: SHM_LOCK) can be lazy, and need not populate
@ -202,7 +181,7 @@ flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into a VM_LOCKED vma, or found in a vma being VM_LOCKED.
VMSCAN'S HANDLING OF UNEVICTABLE PAGES
Vmscan's Handling of Unevictable Pages
--------------------------------------
If unevictable pages are culled in the fault path, or moved to the unevictable
extra evictability checks should not occur in the majority of calls to
putback_lru_page().
=============
MLOCKED PAGES
MLOCKED Pages
=============
The unevictable page list is also useful for mlock(), in addition to ramfs and
@ -242,7 +220,7 @@ SYSV SHM. Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.
HISTORY
History
-------
The "Unevictable mlocked Pages" infrastructure is based on work originally
@ -263,7 +241,7 @@ replaced by walking the reverse map to determine whether any VM_LOCKED VMAs
mapped the page. More on this below.
BASIC MANAGEMENT
Basic Management
----------------
mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
@ -304,10 +282,10 @@ mlocked pages become unlocked and rescued from the unevictable list when:
(4) before a page is COW'd in a VM_LOCKED VMA.
mlock()/mlockall() SYSTEM CALL HANDLING
mlock()/mlockall() System Call Handling
---------------------------------------
Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
Both [do\_]mlock() and [do\_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call. In the case of mlockall(),
this is the entire active address space of the task. Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory. A call to mlock()
@ -351,7 +329,7 @@ mlock_vma_page() is unable to isolate the page from the LRU, vmscan will handle
it later if and when it attempts to reclaim the page.
FILTERING SPECIAL VMAS
Filtering Special VMAs
----------------------
mlock_fixup() filters several classes of "special" VMAs:
@ -379,8 +357,9 @@ VM_LOCKED flag. Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit. Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".
.. _munlock_munlockall_handling:
munlock()/munlockall() SYSTEM CALL HANDLING
munlock()/munlockall() System Call Handling
-------------------------------------------
The munlock() and munlockall() system calls are handled by the same functions -
@ -426,7 +405,7 @@ This is fine, because we'll catch it later if and when vmscan tries to reclaim
the page. This should be relatively rare.
MIGRATING MLOCKED PAGES
Migrating MLOCKED Pages
-----------------------
A page that is being migrated has been isolated from the LRU lists and is held
@ -451,7 +430,7 @@ list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.
COMPACTING MLOCKED PAGES
Compacting MLOCKED Pages
------------------------
The unevictable LRU can be scanned for compactable regions and the default
@ -461,7 +440,7 @@ unevictable LRU is enabled, the work of compaction is mostly handled by
the page migration code and the same work flow as described in MIGRATING
MLOCKED PAGES will apply.
MLOCKING TRANSPARENT HUGE PAGES
MLOCKING Transparent Huge Pages
-------------------------------
A transparent huge page is represented by a single entry on an LRU list.
@ -483,7 +462,7 @@ to unevictable LRU and the rest can be reclaimed.
See also comment in follow_trans_huge_pmd().
mmap(MAP_LOCKED) SYSTEM CALL HANDLING
mmap(MAP_LOCKED) System Call Handling
-------------------------------------
In addition to the mlock()/mlockall() system calls, an application can request
@ -514,7 +493,7 @@ memory range accounted as locked_vm, as the protections could be changed later
and pages allocated into that region.
munmap()/exit()/exec() SYSTEM CALL HANDLING
munmap()/exit()/exec() System Call Handling
-------------------------------------------
When unmapping an mlocked region of memory, whether by an explicit call to
@ -568,16 +547,18 @@ munlock or munmap system calls, mm teardown (munlock_vma_pages_all), reclaim,
holepunching, and truncation of file pages and their anonymous COWed pages.
try_to_munlock() REVERSE MAP SCAN
try_to_munlock() Reverse Map Scan
---------------------------------
[!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
page_referenced() reverse map walker.
.. warning::
[!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
page_referenced() reverse map walker.
When munlock_vma_page() [see section "munlock()/munlockall() System Call
Handling" above] tries to munlock a page, it needs to determine whether or not
the page is mapped by any VM_LOCKED VMA without actually attempting to unmap
all PTEs from the page. For this purpose, the unevictable/mlock infrastructure
When munlock_vma_page() [see section :ref:`munlock()/munlockall() System Call
Handling <munlock_munlockall_handling>` above] tries to munlock a
page, it needs to determine whether or not the page is mapped by any
VM_LOCKED VMA without actually attempting to unmap all PTEs from the
page. For this purpose, the unevictable/mlock infrastructure
introduced a variant of try_to_unmap() called try_to_munlock().
try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
@ -595,7 +576,7 @@ large region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.
PAGE RECLAIM IN shrink_*_list()
Page Reclaim in shrink_*_list()
-------------------------------
shrink_active_list() culls any obviously unevictable pages - i.e.

View file

@ -1,6 +1,11 @@
= Userfaultfd =
.. _userfaultfd:
== Objective ==
===========
Userfaultfd
===========
Objective
=========
Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
@ -9,7 +14,8 @@ memory page faults, something otherwise only the kernel code could do.
For example userfaults allows a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.
== Design ==
Design
======
Userfaults are delivered and resolved through the userfaultfd syscall.
@ -41,7 +47,8 @@ different processes without them being aware about what is going on
themselves on the same region the manager is already tracking, which
is a corner case that would currently return -EBUSY).
== API ==
API
===
When first opened the userfaultfd must be enabled invoking the
UFFDIO_API ioctl specifying a uffdio_api.api value set to UFFD_API (or
@ -101,7 +108,8 @@ UFFDIO_COPY. They're atomic as in guaranteeing that nothing can see an
half copied page since it'll keep userfaulting until the copy has
finished.
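As an illustrative userspace sketch of the UFFDIO_API handshake described
above (the helper name is made up and error handling is minimal)::

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <linux/userfaultfd.h>

	/* Illustrative: open a userfaultfd and perform the mandatory
	 * UFFDIO_API handshake before any other uffd ioctl. */
	static int example_uffd_open(void)
	{
		struct uffdio_api api = { .api = UFFD_API };
		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

		if (uffd < 0)
			return -1;
		if (ioctl(uffd, UFFDIO_API, &api) < 0) {
			close(uffd);
			return -1;
		}
		return uffd;
	}

After this handshake the manager would typically register memory ranges
with UFFDIO_REGISTER and resolve faults with UFFDIO_COPY or UFFDIO_ZEROPAGE.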
== QEMU/KVM ==
QEMU/KVM
========
QEMU/KVM is using the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
@ -163,7 +171,8 @@ sending the same page twice (in case the userfault is read by the
postcopy thread just before UFFDIO_COPY|ZEROPAGE runs in the migration
thread).
== Non-cooperative userfaultfd ==
Non-cooperative userfaultfd
===========================
When the userfaultfd is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
@ -172,27 +181,30 @@ the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting appropriate
bits in uffdio_api.features passed to UFFDIO_API ioctl:
UFFD_FEATURE_EVENT_FORK - enable userfaultfd hooks for fork(). When
this feature is enabled, the userfaultfd context of the parent process
is duplicated into the newly created process. The manager receives
UFFD_EVENT_FORK with file descriptor of the new userfaultfd context in
the uffd_msg.fork.
UFFD_FEATURE_EVENT_FORK
enable userfaultfd hooks for fork(). When this feature is
enabled, the userfaultfd context of the parent process is
duplicated into the newly created process. The manager
receives UFFD_EVENT_FORK with file descriptor of the new
userfaultfd context in the uffd_msg.fork.
UFFD_FEATURE_EVENT_REMAP - enable notifications about mremap()
calls. When the non-cooperative process moves a virtual memory area to
a different location, the manager will receive UFFD_EVENT_REMAP. The
uffd_msg.remap will contain the old and new addresses of the area and
its original length.
UFFD_FEATURE_EVENT_REMAP
enable notifications about mremap() calls. When the
non-cooperative process moves a virtual memory area to a
different location, the manager will receive
UFFD_EVENT_REMAP. The uffd_msg.remap will contain the old and
new addresses of the area and its original length.
UFFD_FEATURE_EVENT_REMOVE - enable notifications about
madvise(MADV_REMOVE) and madvise(MADV_DONTNEED) calls. The event
UFFD_EVENT_REMOVE will be generated upon these calls to madvise. The
uffd_msg.remove will contain start and end addresses of the removed
area.
UFFD_FEATURE_EVENT_REMOVE
enable notifications about madvise(MADV_REMOVE) and
madvise(MADV_DONTNEED) calls. The event UFFD_EVENT_REMOVE will
be generated upon these calls to madvise. The uffd_msg.remove
will contain start and end addresses of the removed area.
UFFD_FEATURE_EVENT_UNMAP - enable notifications about memory
unmapping. The manager will get UFFD_EVENT_UNMAP with uffd_msg.remove
containing start and end addresses of the unmapped area.
UFFD_FEATURE_EVENT_UNMAP
enable notifications about memory unmapping. The manager will
get UFFD_EVENT_UNMAP with uffd_msg.remove containing start and
end addresses of the unmapped area.
Although the UFFD_FEATURE_EVENT_REMOVE and UFFD_FEATURE_EVENT_UNMAP
are pretty similar, they quite differ in the action expected from the

View file

@ -1,5 +1,8 @@
.. _z3fold:
======
z3fold
------
======
z3fold is a special purpose allocator for storing compressed pages.
It is designed to store up to three compressed pages per physical page.
@ -7,6 +10,7 @@ It is a zbud derivative which allows for higher compression
ratio keeping the simplicity and determinism of its predecessor.
The main differences between z3fold and zbud are:
* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
* z3fold can hold up to 3 compressed pages in its page
* z3fold doesn't export any API itself and is thus intended to be used

View file

@ -1,5 +1,8 @@
.. _zsmalloc:
========
zsmalloc
--------
========
This allocator is designed for use with zram. Thus, the allocator is
supposed to work well under low memory conditions. In particular, it
@ -31,40 +34,49 @@ be mapped using zs_map_object() to get a usable pointer and subsequently
unmapped using zs_unmap_object().
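A hedged sketch of that allocate/map/unmap cycle (the function below is
illustrative and exact prototypes can differ between kernel versions)::

	#include <linux/zsmalloc.h>
	#include <linux/gfp.h>
	#include <linux/errno.h>
	#include <linux/string.h>

	/* Illustrative: store 'len' bytes from 'src' in a zsmalloc pool
	 * and release everything again. */
	static int example_zs_roundtrip(const void *src, size_t len)
	{
		struct zs_pool *pool = zs_create_pool("example");
		unsigned long handle;
		void *dst;

		if (!pool)
			return -ENOMEM;

		/* zs_malloc() hands back an opaque handle, not a pointer */
		handle = zs_malloc(pool, len, GFP_KERNEL);
		if (!handle) {
			zs_destroy_pool(pool);
			return -ENOMEM;
		}

		/* the handle must be mapped before the object can be touched */
		dst = zs_map_object(pool, handle, ZS_MM_WO);
		memcpy(dst, src, len);
		zs_unmap_object(pool, handle);

		zs_free(pool, handle);
		zs_destroy_pool(pool);
		return 0;
	}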
stat
----
====
With CONFIG_ZSMALLOC_STAT, we can see zsmalloc internal information via
/sys/kernel/debug/zsmalloc/<user name>. Here is a sample of stat output:
``/sys/kernel/debug/zsmalloc/<user name>``. Here is a sample of stat output::
# cat /sys/kernel/debug/zsmalloc/zram0/classes
# cat /sys/kernel/debug/zsmalloc/zram0/classes
class size almost_full almost_empty obj_allocated obj_used pages_used pages_per_zspage
..
..
...
...
9 176 0 1 186 129 8 4
10 192 1 0 2880 2872 135 3
11 208 0 1 819 795 42 2
12 224 0 1 219 159 12 4
..
..
...
...
class: index
size: object size zspage stores
almost_empty: the number of ZS_ALMOST_EMPTY zspages(see below)
almost_full: the number of ZS_ALMOST_FULL zspages(see below)
obj_allocated: the number of objects allocated
obj_used: the number of objects allocated to the user
pages_used: the number of pages allocated for the class
pages_per_zspage: the number of 0-order pages to make a zspage
class
index
size
object size zspage stores
almost_empty
the number of ZS_ALMOST_EMPTY zspages (see below)
almost_full
the number of ZS_ALMOST_FULL zspages (see below)
obj_allocated
the number of objects allocated
obj_used
the number of objects allocated to the user
pages_used
the number of pages allocated for the class
pages_per_zspage
the number of 0-order pages to make a zspage
We assign a zspage to ZS_ALMOST_EMPTY fullness group when:
n <= N / f, where
n = number of allocated objects
N = total number of objects zspage can store
f = fullness_threshold_frac(ie, 4 at the moment)
We assign a zspage to ZS_ALMOST_EMPTY fullness group when n <= N / f, where
* n = number of allocated objects
* N = total number of objects zspage can store
* f = fullness_threshold_frac(ie, 4 at the moment)
Similarly, we assign zspage to:
ZS_ALMOST_FULL when n > N / f
ZS_EMPTY when n == 0
ZS_FULL when n == N
* ZS_ALMOST_FULL when n > N / f
* ZS_EMPTY when n == 0
* ZS_FULL when n == N

View file

@ -1,4 +1,11 @@
Overview:
.. _zswap:
=====
zswap
=====
Overview
========
Zswap is a lightweight compressed cache for swap pages. It takes pages that are
in the process of being swapped out and attempts to compress them into a
@ -7,32 +14,34 @@ for potentially reduced swap I/O.  This trade-off can also result in a
significant performance improvement if reads from the compressed cache are
faster than reads from a swap device.
NOTE: Zswap is a new feature as of v3.11 and interacts heavily with memory
reclaim. This interaction has not been fully explored on the large set of
potential configurations and workloads that exist. For this reason, zswap
is a work in progress and should be considered experimental.
.. note::
Zswap is a new feature as of v3.11 and interacts heavily with memory
reclaim. This interaction has not been fully explored on the large set of
potential configurations and workloads that exist. For this reason, zswap
is a work in progress and should be considered experimental.
Some potential benefits:
Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
    performance impact of swapping.
performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
    dramatically reduce their swap I/O pressure, avoiding heavy handed I/O
throttling by the hypervisor. This allows more work to get done with less
impact to the guest workload and guests sharing the I/O subsystem
dramatically reduce their swap I/O pressure, avoiding heavy handed I/O
throttling by the hypervisor. This allows more work to get done with less
impact to the guest workload and guests sharing the I/O subsystem
* Users with SSDs as swap devices can extend the life of the device by
    drastically reducing life-shortening writes.
drastically reducing life-shortening writes.
Zswap evicts pages from compressed cache on an LRU basis to the backing swap
device when the compressed pool reaches its size limit. This requirement had
been identified in prior community discussions.
Zswap is disabled by default but can be enabled at boot time by setting
the "enabled" attribute to 1 at boot time. ie: zswap.enabled=1. Zswap
the ``enabled`` attribute to 1 at boot time, i.e. ``zswap.enabled=1``. Zswap
can also be enabled and disabled at runtime using the sysfs interface.
An example command to enable zswap at runtime, assuming sysfs is mounted
at /sys, is:
at ``/sys``, is::
echo 1 > /sys/module/zswap/parameters/enabled
echo 1 > /sys/module/zswap/parameters/enabled
When zswap is disabled at runtime it will stop storing pages that are
being swapped out. However, it will _not_ immediately write out or fault
@ -43,7 +52,8 @@ pages out of the compressed pool, a swapoff on the swap device(s) will
fault back into memory all swapped out pages, including those in the
compressed pool.
Design:
Design
======
Zswap receives pages for compression through the Frontswap API and is able to
evict pages from its own compressed pool on an LRU basis and write them back to
@ -53,12 +63,12 @@ Zswap makes use of zpool for managing the compressed memory pool. Each
allocation in zpool is not directly accessible by address. Rather, a handle is
returned by the allocation routine and that handle must be mapped before being
accessed. The compressed memory pool grows on demand and shrinks as compressed
pages are freed. The pool is not preallocated. By default, a zpool of type
zbud is created, but it can be selected at boot time by setting the "zpool"
attribute, e.g. zswap.zpool=zbud. It can also be changed at runtime using the
sysfs "zpool" attribute, e.g.
pages are freed. The pool is not preallocated. By default, a zpool
of type zbud is created, but it can be selected at boot time by
setting the ``zpool`` attribute, e.g. ``zswap.zpool=zbud``. It can
also be changed at runtime using the sysfs ``zpool`` attribute, e.g.::
echo zbud > /sys/module/zswap/parameters/zpool
echo zbud > /sys/module/zswap/parameters/zpool
The zbud type zpool allocates exactly 1 page to store 2 compressed pages, which
means the compression ratio will always be 2:1 or worse (because of half-full
@ -83,14 +93,16 @@ via frontswap, to free the compressed entry.
Zswap seeks to be simple in its policies. Sysfs attributes allow for one user
controlled policy:
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.
pool can occupy.
The default compressor is lzo, but it can be selected at boot time by setting
the “compressor” attribute, e.g. zswap.compressor=lzo. It can also be changed
at runtime using the sysfs "compressor" attribute, e.g.
The default compressor is lzo, but it can be selected at boot time by
setting the ``compressor`` attribute, e.g. ``zswap.compressor=lzo``.
It can also be changed at runtime using the sysfs "compressor"
attribute, e.g.::
echo lzo > /sys/module/zswap/parameters/compressor
echo lzo > /sys/module/zswap/parameters/compressor
When the zpool and/or compressor parameter is changed at runtime, any existing
compressed pages are not modified; they are left in their own zpool. When a
@ -106,11 +118,12 @@ compressed length of the page is set to zero and the pattern or same-filled
value is stored.
Same-value filled pages identification feature is enabled by default and can be
disabled at boot time by setting the "same_filled_pages_enabled" attribute to 0,
e.g. zswap.same_filled_pages_enabled=0. It can also be enabled and disabled at
runtime using the sysfs "same_filled_pages_enabled" attribute, e.g.
disabled at boot time by setting the ``same_filled_pages_enabled`` attribute
to 0, e.g. ``zswap.same_filled_pages_enabled=0``. It can also be enabled and
disabled at runtime using the sysfs ``same_filled_pages_enabled``
attribute, e.g.::
echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled
echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled
When zswap same-filled page identification is disabled at runtime, it will stop
checking for the same-value filled pages during store operation. However, the

View file

@ -15621,7 +15621,7 @@ L: linux-mm@kvack.org
S: Maintained
F: mm/zsmalloc.c
F: include/linux/zsmalloc.h
F: Documentation/vm/zsmalloc.txt
F: Documentation/vm/zsmalloc.rst
ZSWAP COMPRESSED SWAP CACHING
M: Seth Jennings <sjenning@redhat.com>

View file

@ -585,7 +585,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
source "mm/Kconfig"

View file

@ -397,7 +397,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
config ARCH_FLATMEM_ENABLE
def_bool y

View file

@ -2556,7 +2556,7 @@ config ARCH_DISCONTIGMEM_ENABLE
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
See <file:Documentation/vm/numa.rst> for more.
config ARCH_SPARSEMEM_ENABLE
bool

View file

@ -883,7 +883,7 @@ config PPC_MEM_KEYS
page-based protections, but without requiring modification of the
page tables when an application changes protection domains.
For details, see Documentation/vm/protection-keys.txt
For details, see Documentation/vm/protection-keys.rst
If unsure, say y.

View file

@ -196,7 +196,7 @@ config HUGETLBFS
help
hugetlbfs is a filesystem backing for HugeTLB pages, based on
ramfs. For architectures that support it, say Y here and read
<file:Documentation/vm/hugetlbpage.txt> for details.
<file:Documentation/vm/hugetlbpage.rst> for details.
If unsure, say N.

View file

@ -677,7 +677,7 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
* downgrading page table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
if (pmdp) {
#ifdef CONFIG_FS_DAX_PMD

View file

@ -937,7 +937,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
/*
* The soft-dirty tracker uses #PF-s to catch writes
* to pages, so write-protect the pte as well. See the
* Documentation/vm/soft-dirty.txt for full description
* Documentation/vm/soft-dirty.rst for full description
* of how soft-dirty works.
*/
pte_t ptent = *pte;
@ -1417,7 +1417,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
* Bits 0-54 page frame number (PFN) if present
* Bits 0-4 swap type if swapped
* Bits 5-54 swap offset if swapped
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.txt)
* Bit 55 pte is soft-dirty (see Documentation/vm/soft-dirty.rst)
* Bit 56 page exclusively mapped
* Bits 57-60 zero
* Bit 61 page is file-page or shared-anon

View file

@ -16,7 +16,7 @@
/*
* Heterogeneous Memory Management (HMM)
*
* See Documentation/vm/hmm.txt for reasons and overview of what HMM is and it
* See Documentation/vm/hmm.rst for reasons and overview of what HMM is and it
* is for. Here we focus on the HMM API description, with some explanation of
* the underlying implementation.
*

View file

@ -45,7 +45,7 @@ struct vmem_altmap {
* must be treated as an opaque object, rather than a "normal" struct page.
*
* A more complete discussion of unaddressable memory may be found in
* include/linux/hmm.h and Documentation/vm/hmm.txt.
* include/linux/hmm.h and Documentation/vm/hmm.rst.
*
* MEMORY_DEVICE_PUBLIC:
* Device memory that is cache coherent from device and CPU point of view. This
@ -67,7 +67,7 @@ enum memory_type {
* page_free()
*
* Additional notes about MEMORY_DEVICE_PRIVATE may be found in
* include/linux/hmm.h and Documentation/vm/hmm.txt. There is also a brief
* include/linux/hmm.h and Documentation/vm/hmm.rst. There is also a brief
* explanation in include/linux/memory_hotplug.h.
*
* The page_fault() callback must migrate page back, from device memory to

View file

@ -174,7 +174,7 @@ struct mmu_notifier_ops {
* invalidate_range_start()/end() notifiers, as
* invalidate_range() alread catches the points in time when an
* external TLB range needs to be flushed. For more in depth
* discussion on this see Documentation/vm/mmu_notifier.txt
* discussion on this see Documentation/vm/mmu_notifier.rst
*
* Note that this function might be called with just a sub-range
* of what was passed to invalidate_range_start()/end(), if

View file

@ -28,7 +28,7 @@ extern struct mm_struct *mm_alloc(void);
*
* Use mmdrop() to release the reference acquired by mmgrab().
*
* See also <Documentation/vm/active_mm.txt> for an in-depth explanation
* See also <Documentation/vm/active_mm.rst> for an in-depth explanation
* of &mm_struct.mm_count vs &mm_struct.mm_users.
*/
static inline void mmgrab(struct mm_struct *mm)
@ -62,7 +62,7 @@ static inline void mmdrop(struct mm_struct *mm)
*
* Use mmput() to release the reference acquired by mmget().
*
* See also <Documentation/vm/active_mm.txt> for an in-depth explanation
* See also <Documentation/vm/active_mm.rst> for an in-depth explanation
* of &mm_struct.mm_count vs &mm_struct.mm_users.
*/
static inline void mmget(struct mm_struct *mm)

View file

@ -53,7 +53,7 @@ static inline int current_is_kswapd(void)
/*
* Unaddressable device memory support. See include/linux/hmm.h and
* Documentation/vm/hmm.txt. Short description is we need struct pages for
* Documentation/vm/hmm.rst. Short description is we need struct pages for
* device memory that is unaddressable (inaccessible) by CPU, so that we can
* migrate part of a process memory to device memory.
*

View file

@ -305,7 +305,7 @@ config KSM
the many instances by a single page with that content, so
saving memory until one or another app needs to modify the content.
Recommended for use with KVM, or with other duplicative applications.
See Documentation/vm/ksm.txt for more information: KSM is inactive
See Documentation/vm/ksm.rst for more information: KSM is inactive
until a program has madvised that an area is MADV_MERGEABLE, and
root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
@ -530,7 +530,7 @@ config MEM_SOFT_DIRTY
into a page just as regular dirty bit, but unlike the latter
it can be cleared by hands.
See Documentation/vm/soft-dirty.txt for more details.
See Documentation/vm/soft-dirty.rst for more details.
config ZSWAP
bool "Compressed cache for swap pages (EXPERIMENTAL)"
@ -656,7 +656,7 @@ config IDLE_PAGE_TRACKING
be useful to tune memory cgroup limits and/or for job placement
within a compute cluster.
See Documentation/vm/idle_page_tracking.txt for more details.
See Documentation/vm/idle_page_tracking.rst for more details.
# arch_add_memory() comprehends device memory
config ARCH_HAS_ZONE_DEVICE

View file

@ -3,7 +3,7 @@
*
* This code provides the generic "frontend" layer to call a matching
* "backend" driver implementation of cleancache. See
* Documentation/vm/cleancache.txt for more information.
* Documentation/vm/cleancache.rst for more information.
*
* Copyright (C) 2009-2010 Oracle Corp. All rights reserved.
* Author: Dan Magenheimer

View file

@ -3,7 +3,7 @@
*
* This code provides the generic "frontend" layer to call a matching
* "backend" driver implementation of frontswap. See
* Documentation/vm/frontswap.txt for more information.
* Documentation/vm/frontswap.rst for more information.
*
* Copyright (C) 2009-2012 Oracle Corp. All rights reserved.
* Author: Dan Magenheimer

View file

@ -37,7 +37,7 @@
#if defined(CONFIG_DEVICE_PRIVATE) || defined(CONFIG_DEVICE_PUBLIC)
/*
* Device private memory see HMM (Documentation/vm/hmm.txt) or hmm.h
* Device private memory see HMM (Documentation/vm/hmm.rst) or hmm.h
*/
DEFINE_STATIC_KEY_FALSE(device_private_key);
EXPORT_SYMBOL(device_private_key);

View file

@ -1185,7 +1185,7 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
* mmu_notifier_invalidate_range_end() happens which can lead to a
* device seeing memory write in different order than CPU.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
@ -2037,7 +2037,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
* replacing a zero pmd write protected page with a zero pte write
* protected page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
pmdp_huge_clear_flush(vma, haddr, pmd);

View file

@ -3291,7 +3291,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
* table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
huge_ptep_set_wrprotect(src, addr, src_pte);
}
@ -4357,7 +4357,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
* No need to call mmu_notifier_invalidate_range() we are downgrading
* page table protection not changing it to point to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
i_mmap_unlock_write(vma->vm_file->f_mapping);
mmu_notifier_invalidate_range_end(mm, start, end);

View file

@ -1049,7 +1049,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
* No need to notify as we are downgrading page table to read
* only not changing it to point to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
entry = ptep_clear_flush(vma, pvmw.address, pvmw.pte);
/*
@ -1145,7 +1145,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
* No need to notify as we are replacing a read only page with another
* read only page with the same content.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
ptep_clear_flush(vma, addr, ptep);
set_pte_at_notify(mm, addr, ptep, newpte);

View file

@ -2787,7 +2787,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
unsigned long ret = -EINVAL;
struct file *file;
pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.\n",
pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.rst.\n",
current->comm, current->pid);
if (prot)

View file

@ -942,7 +942,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
* downgrading page table protection not changing it to point
* to a new page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
if (ret)
(*cleaned)++;
@ -1602,7 +1602,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
* point at new page while a device still is using this
* page.
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
dec_mm_counter(mm, mm_counter_file(page));
}
@ -1612,7 +1612,7 @@ discard:
* done above for all cases requiring it to happen under page
* table lock before mmu_notifier_invalidate_range_end()
*
* See Documentation/vm/mmu_notifier.txt
* See Documentation/vm/mmu_notifier.rst
*/
page_remove_rmap(subpage, PageHuge(page));
put_page(page);

View file

@ -621,7 +621,7 @@ EXPORT_SYMBOL_GPL(vm_memory_committed);
* succeed and -ENOMEM implies there is not.
*
* We currently support three overcommit policies, which are set via the
* vm.overcommit_memory sysctl. See Documentation/vm/overcommit-accounting
* vm.overcommit_memory sysctl. See Documentation/vm/overcommit-accounting.rst
*
* Strict overcommit modes added 2002 Feb 26 by Alan Cox.
* Additional code 2002 Jul 20 by Robert Love.