Merge branch 'akpm' (patches from Andrew)

Merge misc updates from Andrew Morton:

 - a few random little subsystems

 - almost all of the MM patches which are staged ahead of linux-next
   material. I'll trickle the post-linux-next work in as the changes
   it depends on get merged.

Subsystems affected by this patch series: kthread, kbuild, ide, ntfs,
ocfs2, arch, and mm (slab-generic, slab, slub, dax, debug, pagecache,
gup, swap, shmem, memcg, pagemap, mremap, hmm, vmalloc, documentation,
kasan, pagealloc, memory-failure, hugetlb, vmscan, z3fold, compaction,
oom-kill, migration, cma, page-poison, userfaultfd, zswap, zsmalloc,
uaccess, zram, and cleanups).

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (200 commits)
  mm: cleanup kstrto*() usage
  mm: fix fall-through warnings for Clang
  mm: slub: convert sysfs sprintf family to sysfs_emit/sysfs_emit_at
  mm: shmem: convert shmem_enabled_show to use sysfs_emit_at
  mm:backing-dev: use sysfs_emit in macro defining functions
  mm: huge_memory: convert remaining use of sprintf to sysfs_emit and neatening
  mm: use sysfs_emit for struct kobject * uses
  mm: fix kernel-doc markups
  zram: break the strict dependency from lzo
  zram: add stat to gather incompressible pages since zram set up
  zram: support page writeback
  mm/process_vm_access: remove redundant initialization of iov_r
  mm/zsmalloc.c: rework the list_add code in insert_zspage()
  mm/zswap: move to use crypto_acomp API for hardware acceleration
  mm/zswap: fix passing zero to 'PTR_ERR' warning
  mm/zswap: make struct kernel_param_ops definitions const
  userfaultfd/selftests: hint the test runner on required privilege
  userfaultfd/selftests: fix retval check for userfaultfd_open()
  userfaultfd/selftests: always dump something in modes
  userfaultfd: selftests: make __{s,u}64 format specifiers portable
  ...
Linus Torvalds 2020-12-15 12:53:37 -08:00
commit ac73e3dc8a
216 changed files with 4330 additions and 2881 deletions


@ -266,6 +266,7 @@ line of text and contains the following stats separated by whitespace:
No memory is allocated for such pages.
pages_compacted the number of pages freed during compaction
huge_pages the number of incompressible pages
huge_pages_since the number of incompressible pages since zram set up
================ =============================================================
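With this change mm_stat grows to nine columns, huge_pages_since being the last. A quick way to watch it (device name and counts are illustrative)::

    $ cat /sys/block/zram0/mm_stat
    4194304 74 12288 0 12288 0 0 1 1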
File /sys/block/zram<id>/bd_stat
@ -334,6 +335,11 @@ Admin can request writeback of those idle pages at right timing via::
With the command, zram writes back idle pages from memory to the storage.
If an admin wants to write a specific page in the zram device to the
backing device, they can write a page index into the interface::
echo "page_index=1251" > /sys/block/zramX/writeback
If there is a lot of write I/O to the flash device, flash wearout can
become a problem, so the admin needs to design a write limitation to
guarantee storage health for the entire product life.
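One way to act on that advice is a sketch using the writeback limit knobs
available in recent kernels (device name and budget are illustrative; the
limit is counted in 4K pages)::

    echo 1 > /sys/block/zram0/writeback_limit_enable
    echo 2048 > /sys/block/zram0/writeback_limit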


@ -219,13 +219,11 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
This is an easy way to test page migration, too.
9.5 mkdir/rmdir
---------------
9.5 nested cgroups
------------------
When using hierarchy, mkdir/rmdir test should be done.
Use tests like the following::
Use tests like the following for testing nested cgroups::
echo 1 >/opt/cgroup/01/memory/use_hierarchy
mkdir /opt/cgroup/01/child_a
mkdir /opt/cgroup/01/child_b


@ -77,6 +77,8 @@ Brief summary of control files.
memory.soft_limit_in_bytes set/show soft limit of memory usage
memory.stat show various statistics
memory.use_hierarchy set/show hierarchical account enabled
This knob is deprecated and shouldn't be
used.
memory.force_empty trigger forced page reclaim
memory.pressure_level set memory pressure notifications
memory.swappiness set/show swappiness parameter of vmscan
@ -495,16 +497,13 @@ cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (because we charge against pages, not
against tasks.)
We move the stats to root (if use_hierarchy==0) or parent (if
use_hierarchy==1), and no change on the charge except uncharging
We move the stats to parent, and no change on the charge except uncharging
from the child.
Charges recorded in swap information are not updated at removal of a
cgroup. The recorded information is discarded and a cgroup which uses
swap (swapcache) will be charged as the new owner of it.
About use_hierarchy, see Section 6.
5. Misc. interfaces
===================
@ -527,8 +526,6 @@ About use_hierarchy, see Section 6.
write will still return success. In this case, it is expected that
memory.kmem.usage_in_bytes == memory.usage_in_bytes.
About use_hierarchy, see Section 6.
5.2 stat file
-------------
@ -675,32 +672,21 @@ hierarchy::
d e
In the diagram above, with hierarchical accounting enabled, all memory
usage of e, is accounted to its ancestors up until the root (i.e, c and root),
that has memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.
usage of e, is accounted to its ancestors up until the root (i.e, c and root).
If one of the ancestors goes over its limit, the reclaim algorithm reclaims
from the tasks in the ancestor and the children of the ancestor.
6.1 Enabling hierarchical accounting and reclaim
------------------------------------------------
6.1 Hierarchical accounting and reclaim
---------------------------------------
A memory cgroup by default disables the hierarchy feature. Support
can be enabled by writing 1 to memory.use_hierarchy file of the root cgroup::
Hierarchical accounting is enabled by default. Disabling the hierarchical
accounting is deprecated. An attempt to do it will result in a failure
and a warning printed to dmesg.
For compatibility reasons writing 1 to memory.use_hierarchy will always pass::
# echo 1 > memory.use_hierarchy
The feature can be disabled by::
# echo 0 > memory.use_hierarchy
NOTE1:
Enabling/disabling will fail if either the cgroup already has other
cgroups created below it, or if the parent cgroup has use_hierarchy
enabled.
NOTE2:
When panic_on_oom is set to "2", the whole system will panic in
case of an OOM event in any cgroup.
7. Soft limits
==============


@ -1274,6 +1274,9 @@ PAGE_SIZE multiple when read back.
kernel_stack
Amount of memory allocated to kernel stacks.
pagetables
Amount of memory allocated for page tables.
percpu(npn)
Amount of memory used for storing per-cpu kernel
data structures.
@ -1300,6 +1303,14 @@ PAGE_SIZE multiple when read back.
Amount of memory used in anonymous mappings backed by
transparent hugepages
file_thp
Amount of cached filesystem data backed by transparent
hugepages
shmem_thp
Amount of shm, tmpfs, shared anonymous mmap()s backed by
transparent hugepages
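As a usage sketch, the new entries appear alongside the existing ones in
memory.stat (cgroup path and values are illustrative)::

    $ grep -E '^(pagetables|file_thp|shmem_thp) ' /sys/fs/cgroup/mygroup/memory.stat
    pagetables 372736
    file_thp 0
    shmem_thp 2097152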
inactive_anon, active_anon, inactive_file, active_file, unevictable
Amount of memory, swap-backed and filesystem-backed,
on the internal memory management lists used by the


@ -401,21 +401,6 @@ compact_fail
is incremented if the system tries to compact memory
but failed.
compact_pages_moved
is incremented each time a page is moved. If
this value is increasing rapidly, it implies that the system
is copying a lot of data to satisfy the huge page allocation.
It is possible that the cost of copying exceeds any savings
from reduced TLB misses.
compact_pagemigrate_failed
is incremented when the underlying mechanism
for moving a page failed.
compact_blocks_moved
is incremented each time memory compaction examines
a huge page aligned range of pages.
It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were


@ -873,12 +873,17 @@ file-backed pages is less than the high watermark in a zone.
unprivileged_userfaultfd
========================
This flag controls whether unprivileged users can use the userfaultfd
system calls. Set this to 1 to allow unprivileged users to use the
userfaultfd system calls, or set this to 0 to restrict userfaultfd to only
privileged users (with SYS_CAP_PTRACE capability).
This flag controls the mode in which unprivileged users can use the
userfaultfd system calls. Set this to 0 to restrict unprivileged users
to handle page faults in user mode only. In this case, users without
SYS_CAP_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
succeed. Prohibiting use of userfaultfd for handling faults from kernel
mode may make certain vulnerabilities more difficult to exploit.
The default value is 1.
Set this to 1 to allow unprivileged users to use the userfaultfd system
calls without any restrictions.
The default value is 0.
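A minimal userspace sketch of the restricted mode (it assumes a uapi header
new enough to define UFFD_USER_MODE_ONLY; the fallback define matches the
value introduced by this series)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    #ifndef UFFD_USER_MODE_ONLY
    #define UFFD_USER_MODE_ONLY 1   /* fallback for older uapi headers */
    #endif

    int main(void)
    {
    	/*
    	 * With the sysctl at 0 and no CAP_SYS_PTRACE, dropping
    	 * UFFD_USER_MODE_ONLY from these flags makes the call fail.
    	 */
    	int fd = syscall(__NR_userfaultfd,
    			 O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
    	if (fd < 0) {
    		perror("userfaultfd");
    		return 1;
    	}
    	close(fd);
    	return 0;
    }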
user_reserve_kbytes


@ -147,6 +147,10 @@ The address of a chunk allocated with `kmalloc` is aligned to at least
ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
alignment is also guaranteed to be at least the respective size.
Chunks allocated with kmalloc() can be resized with krealloc(). Similarly
to kmalloc_array(): a helper for resizing arrays is provided in the form of
krealloc_array().
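A hedged kernel-side sketch of the new helper (the struct name is
illustrative; like kmalloc_array(), the count * size multiplication is
overflow-checked, and on failure NULL is returned while the old buffer
stays valid)::

    struct item *items, *resized;

    items = kmalloc_array(8, sizeof(*items), GFP_KERNEL);
    if (!items)
    	return -ENOMEM;

    /* ... later, grow the array ... */
    resized = krealloc_array(items, 16, sizeof(*items), GFP_KERNEL);
    if (!resized) {
    	kfree(items);	/* old allocation is still valid on failure */
    	return -ENOMEM;
    }
    items = resized;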
For large allocations you can use vmalloc() and vzalloc(), or directly
request pages from the page allocator. The memory allocated by `vmalloc`
and related functions is not physically contiguous.


@ -221,12 +221,12 @@ Unit testing
============
This file::
tools/testing/selftests/vm/gup_benchmark.c
tools/testing/selftests/vm/gup_test.c
has the following new calls to exercise the new pin*() wrapper functions:
* PIN_FAST_BENCHMARK (./gup_benchmark -a)
* PIN_BENCHMARK (./gup_benchmark -b)
* PIN_FAST_BENCHMARK (./gup_test -a)
* PIN_BASIC_TEST (./gup_test -b)
You can monitor how many total dma-pinned pages have been acquired and released
since the system was booted, via two new /proc/vmstat entries: ::
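Those two entries are nr_foll_pin_acquired and nr_foll_pin_released; a
quick check (counts are illustrative, and the two values converge once all
pins have been released)::

    $ grep foll_pin /proc/vmstat
    nr_foll_pin_acquired 99
    nr_foll_pin_released 99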


@ -190,8 +190,9 @@ function calls GCC directly inserts the code to check the shadow memory.
This option significantly enlarges kernel but it gives x1.1-x2 performance
boost over outline instrumented kernel.
Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
and the second to last.
Generic KASAN also reports the last 2 call stacks to creation of work that
potentially has access to an object. Call stacks for the following are shown:
call_rcu() and workqueue queuing.
Software tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~


@ -4,7 +4,7 @@
Tmpfs
=====
Tmpfs is a file system which keeps all files in virtual memory.
Tmpfs is a file system which keeps all of its files in virtual memory.
Everything in tmpfs is temporary in the sense that no files will be
@ -35,7 +35,7 @@ tmpfs has the following uses:
memory.
This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not
set, the user visible part of tmpfs is not build. But the internal
set, the user visible part of tmpfs is not built. But the internal
mechanisms are always present.
2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
@ -50,7 +50,7 @@ tmpfs has the following uses:
This mount is _not_ needed for SYSV shared memory. The internal
mount is used for that. (In the 2.3 kernel versions it was
necessary to mount the predecessor of tmpfs (shm fs) to use SYSV
shared memory)
shared memory.)
3) Some people (including me) find it very convenient to mount it
e.g. on /tmp and /var/tmp and have a big swap partition. And now
@ -83,7 +83,7 @@ If nr_blocks=0 (or size=0), blocks will not be limited in that instance;
if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.
that instance in a system with many CPUs making intensive use of it.
tmpfs has a mount option to set the NUMA memory allocation policy for


@ -51,8 +51,7 @@ call :c:func:`free_area_init` function. Yet, the mappings array is not
usable until the call to :c:func:`memblock_free_all` that hands all the
memory to the page allocator.
If an architecture enables `CONFIG_ARCH_HAS_HOLES_MEMORYMODEL` option,
it may free parts of the `mem_map` array that do not cover the
An architecture may free parts of the `mem_map` array that do not cover the
actual physical pages. In such case, the architecture specific
:c:func:`pfn_valid` implementation should take the holes in the
`mem_map` into account.


@ -41,17 +41,17 @@ size change due to this facility.
- Without page owner::
text data bss dec hex filename
40662 1493 644 42799 a72f mm/page_alloc.o
48392 2333 644 51369 c8a9 mm/page_alloc.o
- With page owner::
text data bss dec hex filename
40892 1493 644 43029 a815 mm/page_alloc.o
1427 24 8 1459 5b3 mm/page_ext.o
2722 50 0 2772 ad4 mm/page_owner.o
48800 2445 644 51889 cab1 mm/page_alloc.o
6574 108 29 6711 1a37 mm/page_owner.o
1025 8 8 1041 411 mm/page_ext.o
Although, roughly, 4 KB code is added in total, page_alloc.o increase by
230 bytes and only half of it is in hotpath. Building the kernel with
Although, roughly, 8 KB code is added in total, page_alloc.o increase by
520 bytes and less than half of it is in hotpath. Building the kernel with
page owner and turning it on if needed would be a great option to debug
kernel memory problems.


@ -261,7 +261,7 @@ config ARCH_HAS_SET_DIRECT_MAP
#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segement alias for a DMA allocation, or
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
@ -314,14 +314,14 @@ config ARCH_32BIT_OFF_T
config HAVE_ASM_MODVERSIONS
bool
help
This symbol should be selected by an architecure if it provides
This symbol should be selected by an architecture if it provides
<asm/asm-prototypes.h> to support the module versioning for symbols
exported from assembly code.
config HAVE_REGS_AND_STACK_ACCESS_API
bool
help
This symbol should be selected by an architecure if it supports
This symbol should be selected by an architecture if it supports
the API needed to access registers and stack entries from pt_regs,
declared in asm/ptrace.h
For example the kprobes-based event tracer needs this API.
@ -336,7 +336,7 @@ config HAVE_RSEQ
config HAVE_FUNCTION_ARG_ACCESS_API
bool
help
This symbol should be selected by an architecure if it supports
This symbol should be selected by an architecture if it supports
the API needed to access function arguments from pt_regs,
declared in asm/ptrace.h
@ -665,6 +665,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
Archs need to ensure they use a high enough resolution clock to
support irq time accounting and then call enable_sched_clock_irqtime().
config HAVE_MOVE_PUD
bool
help
Architectures that select this are able to move page tables at the
PUD level. If there are only 3 page table levels, the move effectively
happens at the PGD level.
config HAVE_MOVE_PMD
bool
help
@ -1054,6 +1061,12 @@ config ARCH_WANT_LD_ORPHAN_WARN
by the linker, since the locations of such sections can change between linker
versions.
config HAVE_ARCH_PFN_VALID
bool
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
bool
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"


@ -40,6 +40,7 @@ config ALPHA
select CPU_NO_EFFICIENT_FFS if !ALPHA_EV67
select MMU_GATHER_NO_RANGE
select SET_FS
select SPARSEMEM_EXTREME if SPARSEMEM
help
The Alpha is a 64-bit general-purpose processor designed and
marketed by the Digital Equipment Corporation of blessed memory,
@ -551,12 +552,19 @@ config NR_CPUS
config ARCH_DISCONTIGMEM_ENABLE
bool "Discontiguous Memory Support"
depends on BROKEN
help
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa.rst> for more.
config ARCH_SPARSEMEM_ENABLE
bool "Sparse Memory Support"
help
Say Y to support efficient handling of discontiguous physical memory,
for systems that have huge holes in the physical address space.
config NUMA
bool "NUMA Support (EXPERIMENTAL)"
depends on DISCONTIGMEM && BROKEN


@ -6,6 +6,8 @@
#ifndef _ASM_MMZONE_H_
#define _ASM_MMZONE_H_
#ifdef CONFIG_DISCONTIGMEM
#include <asm/smp.h>
/*
@ -45,8 +47,6 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
}
#endif
#ifdef CONFIG_DISCONTIGMEM
/*
* Following are macros that each numa implementation must define.
*/
@ -68,11 +68,6 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
/* XXX: FIXME -- nyc */
#define kern_addr_valid(kaddr) (0)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> 32))
#define pte_pfn(pte) (pte_val(pte) >> 32)
#define mk_pte(page, pgprot) \
({ \
pte_t pte; \
@ -95,16 +90,11 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
__xx; \
})
#define page_to_pa(page) \
(page_to_pfn(page) << PAGE_SHIFT)
#define pfn_to_nid(pfn) pa_to_nid(((u64)(pfn) << PAGE_SHIFT))
#define pfn_valid(pfn) \
(((pfn) - node_start_pfn(pfn_to_nid(pfn))) < \
node_spanned_pages(pfn_to_nid(pfn))) \
#define virt_addr_valid(kaddr) pfn_valid((__pa(kaddr) >> PAGE_SHIFT))
#endif /* CONFIG_DISCONTIGMEM */
#endif /* _ASM_MMZONE_H_ */


@ -83,12 +83,13 @@ typedef struct page *pgtable_t;
#define __pa(x) ((unsigned long) (x) - PAGE_OFFSET)
#define __va(x) ((void *)((unsigned long) (x) + PAGE_OFFSET))
#ifndef CONFIG_DISCONTIGMEM
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_addr_valid(kaddr) pfn_valid((__pa(kaddr) >> PAGE_SHIFT))
#ifdef CONFIG_FLATMEM
#define pfn_valid(pfn) ((pfn) < max_mapnr)
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#endif /* CONFIG_DISCONTIGMEM */
#endif /* CONFIG_FLATMEM */
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>


@ -203,10 +203,10 @@ extern unsigned long __zero_page(void);
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*/
#ifndef CONFIG_DISCONTIGMEM
#define page_to_pa(page) (((page) - mem_map) << PAGE_SHIFT)
#define page_to_pa(page) (page_to_pfn(page) << PAGE_SHIFT)
#define pte_pfn(pte) (pte_val(pte) >> 32)
#ifndef CONFIG_DISCONTIGMEM
#define pte_page(pte) pfn_to_page(pte_pfn(pte))
#define mk_pte(page, pgprot) \
({ \
@ -236,10 +236,8 @@ pmd_page_vaddr(pmd_t pmd)
return ((pmd_val(pmd) & _PFN_MASK) >> (32-PAGE_SHIFT)) + PAGE_OFFSET;
}
#ifndef CONFIG_DISCONTIGMEM
#define pmd_page(pmd) (mem_map + ((pmd_val(pmd) & _PFN_MASK) >> 32))
#define pud_page(pud) (mem_map + ((pud_val(pud) & _PFN_MASK) >> 32))
#endif
#define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> 32))
#define pud_page(pud) (pfn_to_page(pud_val(pud) >> 32))
extern inline unsigned long pud_page_vaddr(pud_t pgd)
{ return PAGE_OFFSET + ((pud_val(pgd) & _PFN_MASK) >> (32-PAGE_SHIFT)); }


@ -0,0 +1,18 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_ALPHA_SPARSEMEM_H
#define _ASM_ALPHA_SPARSEMEM_H
#ifdef CONFIG_SPARSEMEM
#define SECTION_SIZE_BITS 27
/*
* According to "Alpha Architecture Reference Manual" physical
* addresses are at most 48 bits.
* https://download.majix.org/dec/alpha_arch_ref.pdf
*/
#define MAX_PHYSMEM_BITS 48
#endif /* CONFIG_SPARSEMEM */
#endif /* _ASM_ALPHA_SPARSEMEM_H */


@ -648,6 +648,7 @@ setup_arch(char **cmdline_p)
/* Find our memory. */
setup_memory(kernel_end);
memblock_set_bottom_up(true);
sparse_init();
/* First guess at cpu cache sizes. Do this before init_arch. */
determine_cpu_caches(cpu->type);


@ -67,6 +67,7 @@ config GENERIC_CSUM
config ARCH_DISCONTIGMEM_ENABLE
def_bool n
depends on BROKEN
config ARCH_FLATMEM_ENABLE
def_bool y
@ -506,7 +507,7 @@ config LINUX_RAM_BASE
config HIGHMEM
bool "High Memory Support"
select ARCH_DISCONTIGMEM_ENABLE
select HAVE_ARCH_PFN_VALID
select KMAP_LOCAL
help
With ARC 2G:2G address split, only upper 2G is directly addressable by


@ -82,11 +82,25 @@ typedef pte_t * pgtable_t;
*/
#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
#define ARCH_PFN_OFFSET virt_to_pfn(CONFIG_LINUX_RAM_BASE)
/*
* When HIGHMEM is enabled we have holes in the memory map so we need
* pfn_valid() that takes into account the actual extents of the physical
* memory
*/
#ifdef CONFIG_HIGHMEM
#ifdef CONFIG_FLATMEM
extern unsigned long arch_pfn_offset;
#define ARCH_PFN_OFFSET arch_pfn_offset
extern int pfn_valid(unsigned long pfn);
#define pfn_valid pfn_valid
#else /* CONFIG_HIGHMEM */
#define ARCH_PFN_OFFSET virt_to_pfn(CONFIG_LINUX_RAM_BASE)
#define pfn_valid(pfn) (((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
#endif
#endif /* CONFIG_HIGHMEM */
/*
* __pa, __va, virt_to_page (ALERT: deprecated, don't use them)


@ -28,6 +28,8 @@ static unsigned long low_mem_sz;
static unsigned long min_high_pfn, max_high_pfn;
static phys_addr_t high_mem_start;
static phys_addr_t high_mem_sz;
unsigned long arch_pfn_offset;
EXPORT_SYMBOL(arch_pfn_offset);
#endif
#ifdef CONFIG_DISCONTIGMEM
@ -98,16 +100,11 @@ void __init setup_arch_memory(void)
init_mm.brk = (unsigned long)_end;
/* first page of system - kernel .vector starts here */
min_low_pfn = ARCH_PFN_OFFSET;
min_low_pfn = virt_to_pfn(CONFIG_LINUX_RAM_BASE);
/* Last usable page of low mem */
max_low_pfn = max_pfn = PFN_DOWN(low_mem_start + low_mem_sz);
#ifdef CONFIG_FLATMEM
/* pfn_valid() uses this */
max_mapnr = max_low_pfn - min_low_pfn;
#endif
/*------------- bootmem allocator setup -----------------------*/
/*
@ -153,7 +150,9 @@ void __init setup_arch_memory(void)
* DISCONTIGMEM in turns requires multiple nodes. node 0 above is
* populated with normal memory zone while node 1 only has highmem
*/
#ifdef CONFIG_DISCONTIGMEM
node_set_online(1);
#endif
min_high_pfn = PFN_DOWN(high_mem_start);
max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
@ -161,8 +160,15 @@ void __init setup_arch_memory(void)
max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn;
high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
arch_pfn_offset = min(min_low_pfn, min_high_pfn);
kmap_init();
#endif
#else /* CONFIG_HIGHMEM */
/* pfn_valid() uses this when FLATMEM=y and HIGHMEM=n */
max_mapnr = max_low_pfn - min_low_pfn;
#endif /* CONFIG_HIGHMEM */
free_area_init(max_zone_pfn);
}
@ -190,3 +196,12 @@ void __init mem_init(void)
highmem_init();
mem_init_print_info(NULL);
}
#ifdef CONFIG_HIGHMEM
int pfn_valid(unsigned long pfn)
{
return (pfn >= min_high_pfn && pfn <= max_high_pfn) ||
(pfn >= min_low_pfn && pfn <= max_low_pfn);
}
EXPORT_SYMBOL(pfn_valid);
#endif


@ -25,7 +25,7 @@ config ARM
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK if HAVE_ARCH_PFN_VALID || KEXEC
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_NO_SG_CHAIN if !ARM_HAS_SG_CHAIN
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
@ -69,6 +69,7 @@ config ARM
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_PFN_VALID
select HAVE_ARCH_SECCOMP
select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
@ -520,7 +521,6 @@ config ARCH_S3C24XX
config ARCH_OMAP1
bool "TI OMAP1"
depends on MMU
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_OMAP
select CLKDEV_LOOKUP
select CLKSRC_MMIO
@ -1480,9 +1480,6 @@ config OABI_COMPAT
UNPREDICTABLE (in fact it can be predicted that it won't work
at all). If in doubt say N.
config ARCH_HAS_HOLES_MEMORYMODEL
bool
config ARCH_SELECT_MEMORY_MODEL
bool
@ -1493,9 +1490,6 @@ config ARCH_SPARSEMEM_ENABLE
bool
select SPARSEMEM_STATIC if SPARSEMEM
config HAVE_ARCH_PFN_VALID
def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
config HIGHMEM
bool "High Memory Support"
depends on MMU


@ -50,15 +50,6 @@ static const struct vm_special_mapping vdso_data_mapping = {
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
unsigned long vdso_size;
/* without VVAR page */
vdso_size = (vdso_total_pages - 1) << PAGE_SHIFT;
if (vdso_size != new_size)
return -EINVAL;
current->mm->context.vdso = new_vma->vm_start;
return 0;


@ -211,7 +211,6 @@ config ARCH_BRCMSTB
select BCM7038_L1_IRQ
select BRCMSTB_L2_IRQ
select BCM7120_L2_IRQ
select ARCH_HAS_HOLES_MEMORYMODEL
select ZONE_DMA if ARM_LPAE
select SOC_BRCMSTB
select SOC_BUS


@ -5,7 +5,6 @@ menuconfig ARCH_DAVINCI
depends on ARCH_MULTI_V5
select DAVINCI_TIMER
select ZONE_DMA
select ARCH_HAS_HOLES_MEMORYMODEL
select PM_GENERIC_DOMAINS if PM
select PM_GENERIC_DOMAINS_OF if PM && OF
select REGMAP_MMIO


@ -8,7 +8,6 @@
menuconfig ARCH_EXYNOS
bool "Samsung Exynos"
depends on ARCH_MULTI_V7
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_SUPPORTS_BIG_ENDIAN
select ARM_AMBA
select ARM_GIC


@ -2,7 +2,6 @@
config ARCH_HIGHBANK
bool "Calxeda ECX-1000/2000 (Highbank/Midway)"
depends on ARCH_MULTI_V7
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_SUPPORTS_BIG_ENDIAN
select ARM_AMBA
select ARM_ERRATA_764369 if SMP


@ -93,7 +93,6 @@ config SOC_DRA7XX
config ARCH_OMAP2PLUS
bool
select ARCH_HAS_BANDGAP
select ARCH_HAS_HOLES_MEMORYMODEL
select ARCH_HAS_RESET_CONTROLLER
select ARCH_OMAP
select CLKSRC_MMIO


@ -8,7 +8,6 @@
config ARCH_S5PV210
bool "Samsung S5PV210/S5PC110"
depends on ARCH_MULTI_V7
select ARCH_HAS_HOLES_MEMORYMODEL
select ARM_VIC
select CLKSRC_SAMSUNG_PWM
select COMMON_CLK_SAMSUNG


@ -3,7 +3,6 @@ config ARCH_TANGO
bool "Sigma Designs Tango4 (SMP87xx)"
depends on ARCH_MULTI_V7
# Cortex-A9 MPCore r3p0, PL310 r3p2
select ARCH_HAS_HOLES_MEMORYMODEL
select ARM_ERRATA_754322
select ARM_ERRATA_764369 if SMP
select ARM_ERRATA_775420


@ -267,83 +267,6 @@ static inline void poison_init_mem(void *s, size_t count)
*p++ = 0xe7fddef0;
}
static inline void __init
free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
struct page *start_pg, *end_pg;
phys_addr_t pg, pgend;
/*
* Convert start_pfn/end_pfn to a struct page pointer.
*/
start_pg = pfn_to_page(start_pfn - 1) + 1;
end_pg = pfn_to_page(end_pfn - 1) + 1;
/*
* Convert to physical addresses, and
* round start upwards and end downwards.
*/
pg = PAGE_ALIGN(__pa(start_pg));
pgend = __pa(end_pg) & PAGE_MASK;
/*
* If there are free pages between these,
* free the section of the memmap array.
*/
if (pg < pgend)
memblock_free_early(pg, pgend - pg);
}
/*
* The mem_map array can get very big. Free the unused area of the memory map.
*/
static void __init free_unused_memmap(void)
{
unsigned long start, end, prev_end = 0;
int i;
/*
* This relies on each bank being in address order.
* The banks are sorted previously in bootmem_init().
*/
for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
#ifdef CONFIG_SPARSEMEM
/*
* Take care not to free memmap entries that don't exist
* due to SPARSEMEM sections which aren't present.
*/
start = min(start,
ALIGN(prev_end, PAGES_PER_SECTION));
#else
/*
* Align down here since the VM subsystem insists that the
* memmap entries are valid from the bank start aligned to
* MAX_ORDER_NR_PAGES.
*/
start = round_down(start, MAX_ORDER_NR_PAGES);
#endif
/*
* If we had a previous bank, and there is a space
* between the current bank and the previous, free it.
*/
if (prev_end && prev_end < start)
free_memmap(prev_end, start);
/*
* Align up here since the VM subsystem insists that the
* memmap entries are valid from the bank end aligned to
* MAX_ORDER_NR_PAGES.
*/
prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
}
#ifdef CONFIG_SPARSEMEM
if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
free_memmap(prev_end,
ALIGN(prev_end, PAGES_PER_SECTION));
#endif
}
static void __init free_highpages(void)
{
#ifdef CONFIG_HIGHMEM
@ -385,7 +308,6 @@ void __init mem_init(void)
set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
/* this will put all unused low memory onto the freelists */
free_unused_memmap();
memblock_free_all();
#ifdef CONFIG_SA1111


@ -71,6 +71,7 @@ config ARM64
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
select ARCH_USE_SYM_ANNOTATIONS
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
select ARCH_SUPPORTS_MEMORY_FAILURE
select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
select ARCH_SUPPORTS_ATOMIC_RMW
@ -125,6 +126,7 @@ config ARM64
select HANDLE_DOMAIN_IRQ
select HARDIRQS_SW_RESEND
select HAVE_MOVE_PMD
select HAVE_MOVE_PUD
select HAVE_PCI
select HAVE_ACPI_APEI if (ACPI && EFI)
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
@ -139,6 +141,7 @@ config ARM64
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
select HAVE_ARCH_PFN_VALID
select HAVE_ARCH_PREL32_RELOCATIONS
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_STACKLEAK
@ -1027,9 +1030,6 @@ config HOLES_IN_ZONE
source "kernel/Kconfig.hz"
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y
config ARCH_SPARSEMEM_ENABLE
def_bool y
select SPARSEMEM_VMEMMAP_ENABLE
@ -1043,9 +1043,6 @@ config ARCH_SELECT_MEMORY_MODEL
config ARCH_FLATMEM_ENABLE
def_bool !NUMA
config HAVE_ARCH_PFN_VALID
def_bool y
config HW_PERF_EVENTS
def_bool y
depends on ARM_PMU


@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
bool kernel_page_present(struct page *page);
#include <asm-generic/cacheflush.h>


@ -463,6 +463,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
#define pfn_pud(pfn,prot) __pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
#define set_pmd_at(mm, addr, pmdp, pmd) set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
#define set_pud_at(mm, addr, pudp, pud) set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
#define __p4d_to_phys(p4d) __pte_to_phys(p4d_pte(p4d))
#define __phys_to_p4d_val(phys) __phys_to_pte_val(phys)


@ -78,17 +78,9 @@ static union {
} vdso_data_store __page_aligned_data;
struct vdso_data *vdso_data = vdso_data_store.data;
static int __vdso_remap(enum vdso_abi abi,
const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
unsigned long vdso_size = vdso_info[abi].vdso_code_end -
vdso_info[abi].vdso_code_start;
if (vdso_size != new_size)
return -EINVAL;
current->mm->context.vdso = (void *)new_vma->vm_start;
return 0;
@ -219,17 +211,6 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
return vmf_insert_pfn(vma, vmf->address, pfn);
}
static int vvar_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
if (new_size != VVAR_NR_PAGES * PAGE_SIZE)
return -EINVAL;
return 0;
}
static int __setup_additional_pages(enum vdso_abi abi,
struct mm_struct *mm,
struct linux_binprm *bprm,
@ -280,12 +261,6 @@ up_fail:
/*
* Create and map the vectors page for AArch32 tasks.
*/
static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
}
enum aarch32_map {
AA32_MAP_VECTORS, /* kuser helpers */
AA32_MAP_SIGPAGE,
@ -308,11 +283,10 @@ static struct vm_special_mapping aarch32_vdso_maps[] = {
[AA32_MAP_VVAR] = {
.name = "[vvar]",
.fault = vvar_fault,
.mremap = vvar_mremap,
},
[AA32_MAP_VDSO] = {
.name = "[vdso]",
.mremap = aarch32_vdso_mremap,
.mremap = vdso_mremap,
},
};
@ -453,12 +427,6 @@ out:
}
#endif /* CONFIG_COMPAT */
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
}
enum aarch64_map {
AA64_MAP_VVAR,
AA64_MAP_VDSO,
@ -468,7 +436,6 @@ static struct vm_special_mapping aarch64_vdso_maps[] __ro_after_init = {
[AA64_MAP_VVAR] = {
.name = "[vvar]",
.fault = vvar_fault,
.mremap = vvar_mremap,
},
[AA64_MAP_VDSO] = {
.name = "[vdso]",


@ -444,71 +444,6 @@ void __init bootmem_init(void)
memblock_dump_all();
}
#ifndef CONFIG_SPARSEMEM_VMEMMAP
static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
struct page *start_pg, *end_pg;
unsigned long pg, pgend;
/*
* Convert start_pfn/end_pfn to a struct page pointer.
*/
start_pg = pfn_to_page(start_pfn - 1) + 1;
end_pg = pfn_to_page(end_pfn - 1) + 1;
/*
* Convert to physical addresses, and round start upwards and end
* downwards.
*/
pg = (unsigned long)PAGE_ALIGN(__pa(start_pg));
pgend = (unsigned long)__pa(end_pg) & PAGE_MASK;
/*
* If there are free pages between these, free the section of the
* memmap array.
*/
if (pg < pgend)
memblock_free(pg, pgend - pg);
}
/*
* The mem_map array can get very big. Free the unused area of the memory map.
*/
static void __init free_unused_memmap(void)
{
unsigned long start, end, prev_end = 0;
int i;
for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
#ifdef CONFIG_SPARSEMEM
/*
* Take care not to free memmap entries that don't exist due
* to SPARSEMEM sections which aren't present.
*/
start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
/*
* If we had a previous bank, and there is a space between the
* current bank and the previous, free it.
*/
if (prev_end && prev_end < start)
free_memmap(prev_end, start);
/*
* Align up here since the VM subsystem insists that the
* memmap entries are valid from the bank end aligned to
* MAX_ORDER_NR_PAGES.
*/
prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
}
#ifdef CONFIG_SPARSEMEM
if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
}
#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
/*
* mem_init() marks the free areas in the mem_map and tells us how much memory
* is free. This is done after various parts of the system have claimed their
@ -524,9 +459,6 @@ void __init mem_init(void)
set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
#ifndef CONFIG_SPARSEMEM_VMEMMAP
free_unused_memmap();
#endif
/* this will put all unused low memory onto the freelists */
memblock_free_all();


@ -155,7 +155,7 @@ int set_direct_map_invalid_noflush(struct page *page)
.clear_mask = __pgprot(PTE_VALID),
};
if (!rodata_full)
if (!debug_pagealloc_enabled() && !rodata_full)
return 0;
return apply_to_page_range(&init_mm,
@ -170,7 +170,7 @@ int set_direct_map_default_noflush(struct page *page)
.clear_mask = __pgprot(PTE_RDONLY),
};
if (!rodata_full)
if (!debug_pagealloc_enabled() && !rodata_full)
return 0;
return apply_to_page_range(&init_mm,
@ -178,6 +178,7 @@ int set_direct_map_default_noflush(struct page *page)
PAGE_SIZE, change_page_range, &data);
}
#ifdef CONFIG_DEBUG_PAGEALLOC
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
if (!debug_pagealloc_enabled() && !rodata_full)
@ -185,6 +186,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
set_memory_valid((unsigned long)page_address(page), numpages, enable);
}
#endif /* CONFIG_DEBUG_PAGEALLOC */
/*
* This function is used to determine if a linear map page has been marked as


@ -288,6 +288,7 @@ config ARCH_SELECT_MEMORY_MODEL
config ARCH_DISCONTIGMEM_ENABLE
def_bool y
depends on BROKEN
help
Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
@ -299,12 +300,11 @@ config ARCH_FLATMEM_ENABLE
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on ARCH_DISCONTIGMEM_ENABLE
select SPARSEMEM_VMEMMAP_ENABLE
config ARCH_DISCONTIGMEM_DEFAULT
config ARCH_SPARSEMEM_DEFAULT
def_bool y
depends on ARCH_DISCONTIGMEM_ENABLE
depends on ARCH_SPARSEMEM_ENABLE
config NUMA
bool "NUMA support"
@ -329,7 +329,7 @@ config NODES_SHIFT
# VIRTUAL_MEM_MAP has been retained for historical reasons.
config VIRTUAL_MEM_MAP
bool "Virtual mem map"
depends on !SPARSEMEM
depends on !SPARSEMEM && !FLATMEM
default y
help
Say Y to compile the kernel with support for a virtual mem map.
@ -342,9 +342,6 @@ config HOLES_IN_ZONE
bool
default y if VIRTUAL_MEM_MAP
config HAVE_ARCH_EARLY_PFN_TO_NID
def_bool NUMA && SPARSEMEM
config HAVE_ARCH_NODEDATA_EXTENSION
def_bool y
depends on NUMA


@ -59,10 +59,8 @@ extern int reserve_elfcorehdr(u64 *start, u64 *end);
extern int register_active_ranges(u64 start, u64 len, int nid);
#ifdef CONFIG_VIRTUAL_MEM_MAP
# define LARGE_GAP 0x40000000 /* Use virtual mem map if hole is > than this */
extern unsigned long VMALLOC_END;
extern struct page *vmem_map;
extern int find_largest_hole(u64 start, u64 end, void *arg);
extern int create_mem_map_page_table(u64 start, u64 end, void *arg);
extern int vmemmap_find_next_valid_pfn(int, int);
#else


@ -19,15 +19,12 @@
#include <linux/mm.h>
#include <linux/nmi.h>
#include <linux/swap.h>
#include <linux/sizes.h>
#include <asm/meminit.h>
#include <asm/sections.h>
#include <asm/mca.h>
#ifdef CONFIG_VIRTUAL_MEM_MAP
static unsigned long max_gap;
#endif
/* physical address where the bootmem map is located */
unsigned long bootmap_start;
@ -166,6 +163,32 @@ find_memory (void)
alloc_per_cpu_data();
}
static int __init find_largest_hole(u64 start, u64 end, void *arg)
{
u64 *max_gap = arg;
static u64 last_end = PAGE_OFFSET;
/* NOTE: this algorithm assumes efi memmap table is ordered */
if (*max_gap < (start - last_end))
*max_gap = start - last_end;
last_end = end;
return 0;
}
static void __init verify_gap_absence(void)
{
unsigned long max_gap;
/* Forbid FLATMEM if hole is > than 1G */
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap >= SZ_1G)
panic("Cannot use FLATMEM with %ldMB hole\n"
"Please switch over to SPARSEMEM\n",
(max_gap >> 20));
}
/*
* Set up the page tables.
*/
@ -177,37 +200,12 @@ paging_init (void)
unsigned long max_zone_pfns[MAX_NR_ZONES];
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
#ifdef CONFIG_ZONE_DMA32
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
max_zone_pfns[ZONE_DMA32] = max_dma;
#endif
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
} else {
unsigned long map_size;
verify_gap_absence();
/* allocate virtual_mem_map */
map_size = PAGE_ALIGN(ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) *
sizeof(struct page));
VMALLOC_END -= map_size;
vmem_map = (struct page *) VMALLOC_END;
efi_memmap_walk(create_mem_map_page_table, NULL);
/*
* alloc_node_mem_map makes an adjustment for mem_map
* which isn't compatible with vmem_map.
*/
NODE_DATA(0)->node_mem_map = vmem_map +
find_min_pfn_with_active_regions();
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
free_area_init(max_zone_pfns);
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}


@ -584,6 +584,25 @@ void call_pernode_memory(unsigned long start, unsigned long len, void *arg)
}
}
static void __init virtual_map_init(void)
{
#ifdef CONFIG_VIRTUAL_MEM_MAP
int node;
VMALLOC_END -= PAGE_ALIGN(ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) *
sizeof(struct page));
vmem_map = (struct page *) VMALLOC_END;
efi_memmap_walk(create_mem_map_page_table, NULL);
printk("Virtual mem_map starts at 0x%p\n", vmem_map);
for_each_online_node(node) {
unsigned long pfn_offset = mem_data[node].min_pfn;
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
}
#endif
}
/**
* paging_init - setup page tables
*
@ -593,38 +612,17 @@ void call_pernode_memory(unsigned long start, unsigned long len, void *arg)
void __init paging_init(void)
{
unsigned long max_dma;
unsigned long pfn_offset = 0;
unsigned long max_pfn = 0;
int node;
unsigned long max_zone_pfns[MAX_NR_ZONES];
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
sparse_init();
#ifdef CONFIG_VIRTUAL_MEM_MAP
VMALLOC_END -= PAGE_ALIGN(ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) *
sizeof(struct page));
vmem_map = (struct page *) VMALLOC_END;
efi_memmap_walk(create_mem_map_page_table, NULL);
printk("Virtual mem_map starts at 0x%p\n", vmem_map);
#endif
for_each_online_node(node) {
pfn_offset = mem_data[node].min_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
#endif
if (mem_data[node].max_pfn > max_pfn)
max_pfn = mem_data[node].max_pfn;
}
virtual_map_init();
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
#ifdef CONFIG_ZONE_DMA32
max_zone_pfns[ZONE_DMA32] = max_dma;
#endif
max_zone_pfns[ZONE_NORMAL] = max_pfn;
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
free_area_init(max_zone_pfns);
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));


@ -574,20 +574,6 @@ ia64_pfn_valid (unsigned long pfn)
}
EXPORT_SYMBOL(ia64_pfn_valid);
int __init find_largest_hole(u64 start, u64 end, void *arg)
{
u64 *max_gap = arg;
static u64 last_end = PAGE_OFFSET;
/* NOTE: this algorithm assumes efi memmap table is ordered */
if (*max_gap < (start - last_end))
*max_gap = start - last_end;
last_end = end;
return 0;
}
#endif /* CONFIG_VIRTUAL_MEM_MAP */
int __init register_active_ranges(u64 start, u64 len, int nid)


@ -58,36 +58,6 @@ paddr_to_nid(unsigned long paddr)
EXPORT_SYMBOL(paddr_to_nid);
#if defined(CONFIG_SPARSEMEM) && defined(CONFIG_NUMA)
/*
* Because of holes evaluate on section limits.
* If the section of memory exists, then return the node where the section
* resides. Otherwise return node 0 as the default. This is used by
* SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where
* the section resides.
*/
int __meminit __early_pfn_to_nid(unsigned long pfn,
struct mminit_pfnnid_cache *state)
{
int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;
if (section >= state->last_start && section < state->last_end)
return state->last_nid;
for (i = 0; i < num_node_memblks; i++) {
ssec = node_memblk[i].start_paddr >> PA_SECTION_SHIFT;
esec = (node_memblk[i].start_paddr + node_memblk[i].size +
((1L << PA_SECTION_SHIFT) - 1)) >> PA_SECTION_SHIFT;
if (section >= ssec && section < esec) {
state->last_start = ssec;
state->last_end = esec;
state->last_nid = node_memblk[i].nid;
return node_memblk[i].nid;
}
}
return -1;
}
void numa_clear_node(int cpu)
{
unmap_cpu_from_node(cpu, NUMA_NO_NODE);


@ -20,6 +20,7 @@ choice
config M68KCLASSIC
bool "Classic M68K CPU family support"
select HAVE_ARCH_PFN_VALID
config COLDFIRE
bool "Coldfire CPU family support"
@ -373,16 +374,38 @@ config RMW_INSNS
config SINGLE_MEMORY_CHUNK
bool "Use one physical chunk of memory only" if ADVANCED && !SUN3
depends on MMU
default y if SUN3
select NEED_MULTIPLE_NODES
default y if SUN3 || MMU_COLDFIRE
help
Ignore all but the first contiguous chunk of physical memory for VM
purposes. This will save a few bytes kernel size and may speed up
some operations. Say N if not sure.
some operations.
When this option is set to N, you may want to lower "Maximum zone
order" to save memory that could otherwise be wasted on an unused memory map.
Say N if not sure.
config ARCH_DISCONTIGMEM_ENABLE
depends on BROKEN
def_bool MMU && !SINGLE_MEMORY_CHUNK
config FORCE_MAX_ZONEORDER
int "Maximum zone order" if ADVANCED
depends on !SINGLE_MEMORY_CHUNK
default "11"
help
The kernel memory allocator divides physically contiguous memory
blocks into "zones", where each zone is a power of two number of
pages. This option selects the largest power of two that the kernel
keeps in the memory allocator. If you need to allocate very large
blocks of physically contiguous memory, then you may need to
increase this value.
For systems that have holes in their physical address space this
value also defines the minimal size of the hole that allows
freeing unused memory map.
This config option is actually maximum order plus one. For example,
a value of 11 means that the largest free memory block is 2^10 pages.
config 060_WRITETHROUGH
bool "Use write-through caching for 68060 supervisor accesses"
depends on ADVANCED && M68060
@ -406,7 +429,7 @@ config M68K_L2_CACHE
config NODES_SHIFT
int
default "3"
depends on !SINGLE_MEMORY_CHUNK
depends on DISCONTIGMEM
config CPU_HAS_NO_BITFIELDS
bool


@ -62,8 +62,10 @@ extern unsigned long _ramend;
#include <asm/page_no.h>
#endif
#ifdef CONFIG_DISCONTIGMEM
#define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
#define __pfn_to_phys(pfn) PFN_PHYS(pfn)
#endif
#include <asm-generic/getorder.h>


@ -126,7 +126,7 @@ static inline void *__va(unsigned long x)
extern int m68k_virt_to_node_shift;
#ifdef CONFIG_SINGLE_MEMORY_CHUNK
#ifndef CONFIG_DISCONTIGMEM
#define __virt_to_node(addr) (&pg_data_map[0])
#else
extern struct pglist_data *pg_data_table[];
@ -153,6 +153,7 @@ static inline __attribute_const__ int __virt_to_node_shift(void)
pfn_to_virt(page_to_pfn(page)); \
})
#ifdef CONFIG_DISCONTIGMEM
#define pfn_to_page(pfn) ({ \
unsigned long __pfn = (pfn); \
struct pglist_data *pgdat; \
@ -165,6 +166,10 @@ static inline __attribute_const__ int __virt_to_node_shift(void)
pgdat = &pg_data_map[page_to_nid(__p)]; \
((__p) - pgdat->node_mem_map) + pgdat->node_start_pfn; \
})
#else
#define ARCH_PFN_OFFSET (m68k_memory[0].addr)
#include <asm-generic/memory_model.h>
#endif
#define virt_addr_valid(kaddr) ((void *)(kaddr) >= (void *)PAGE_OFFSET && (void *)(kaddr) < high_memory)
#define pfn_valid(pfn) virt_addr_valid(pfn_to_virt(pfn))


@ -29,12 +29,7 @@ static inline void *phys_to_virt(unsigned long address)
}
/* Permanent address of a page. */
#if defined(CONFIG_MMU) && defined(CONFIG_SINGLE_MEMORY_CHUNK)
#define page_to_phys(page) \
__pa(PAGE_OFFSET + (((page) - pg_data_map[0].node_mem_map) << PAGE_SHIFT))
#else
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#endif
/*
* IO bus memory addresses are 1:1 with the physical address,


@ -42,19 +42,19 @@ EXPORT_SYMBOL(empty_zero_page);
#ifdef CONFIG_MMU
int m68k_virt_to_node_shift;
#ifdef CONFIG_DISCONTIGMEM
pg_data_t pg_data_map[MAX_NUMNODES];
EXPORT_SYMBOL(pg_data_map);
int m68k_virt_to_node_shift;
#ifndef CONFIG_SINGLE_MEMORY_CHUNK
pg_data_t *pg_data_table[65];
EXPORT_SYMBOL(pg_data_table);
#endif
void __init m68k_setup_node(int node)
{
#ifndef CONFIG_SINGLE_MEMORY_CHUNK
#ifdef CONFIG_DISCONTIGMEM
struct m68k_mem_info *info = m68k_memory + node;
int i, end;


@ -263,10 +263,6 @@ int main(int argc, char **argv)
fprintf(out_file, " const struct vm_special_mapping *sm,\n");
fprintf(out_file, " struct vm_area_struct *new_vma)\n");
fprintf(out_file, "{\n");
fprintf(out_file, " unsigned long new_size =\n");
fprintf(out_file, " new_vma->vm_end - new_vma->vm_start;\n");
fprintf(out_file, " if (vdso_image.size != new_size)\n");
fprintf(out_file, " return -EINVAL;\n");
fprintf(out_file, " current->mm->context.vdso =\n");
fprintf(out_file, " (void *)(new_vma->vm_start);\n");
fprintf(out_file, " return 0;\n");


@ -34,8 +34,8 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
cpu_dcache_wb_range((unsigned long)new_pgd,
(unsigned long)new_pgd +
PTRS_PER_PGD * sizeof(pgd_t));
inc_zone_page_state(virt_to_page((unsigned long *)new_pgd),
NR_PAGETABLE);
inc_lruvec_page_state(virt_to_page((unsigned long *)new_pgd),
NR_PAGETABLE);
return new_pgd;
}
@ -59,7 +59,7 @@ void pgd_free(struct mm_struct *mm, pgd_t * pgd)
pte = pmd_page(*pmd);
pmd_clear(pmd);
dec_zone_page_state(virt_to_page((unsigned long *)pgd), NR_PAGETABLE);
dec_lruvec_page_state(virt_to_page((unsigned long *)pgd), NR_PAGETABLE);
pte_free(mm, pte);
mm_dec_nr_ptes(mm);
pmd_free(mm, pmd);


@ -146,6 +146,7 @@ config PPC
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC32 || PPC_BOOK3S_64
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS
@ -356,10 +357,6 @@ config PPC_OF_PLATFORM_PCI
depends on PCI
depends on PPC64 # not supported on 32 bits yet
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
depends on PPC32 || PPC_BOOK3S_64
def_bool y
config ARCH_SUPPORTS_UPROBES
def_bool y


@ -14,6 +14,7 @@ config RISCV
def_bool y
select ARCH_CLOCKSOURCE_INIT
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
select ARCH_HAS_BINFMT_FLAT
select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DEBUG_VIRTUAL if MMU
@ -153,9 +154,6 @@ config ARCH_SELECT_MEMORY_MODEL
config ARCH_WANT_GENERAL_HUGETLB
def_bool y
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y
config SYS_SUPPORTS_HUGETLBFS
depends on MMU
def_bool y


@ -461,8 +461,6 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
#define VMALLOC_START 0
#define VMALLOC_END TASK_SIZE
static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
#endif /* !CONFIG_MMU */
#define kern_addr_valid(addr) (1) /* FIXME */


@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
bool kernel_page_present(struct page *page);
#endif /* __ASSEMBLY__ */


@ -184,6 +184,7 @@ int set_direct_map_default_noflush(struct page *page)
return ret;
}
#ifdef CONFIG_DEBUG_PAGEALLOC
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
if (!debug_pagealloc_enabled())
@ -196,3 +197,33 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
__set_memory((unsigned long)page_address(page), numpages,
__pgprot(0), __pgprot(_PAGE_PRESENT));
}
#endif
bool kernel_page_present(struct page *page)
{
unsigned long addr = (unsigned long)page_address(page);
pgd_t *pgd;
pud_t *pud;
p4d_t *p4d;
pmd_t *pmd;
pte_t *pte;
pgd = pgd_offset_k(addr);
if (!pgd_present(*pgd))
return false;
p4d = p4d_offset(pgd, addr);
if (!p4d_present(*p4d))
return false;
pud = pud_offset(p4d, addr);
if (!pud_present(*pud))
return false;
pmd = pmd_offset(pud, addr);
if (!pmd_present(*pmd))
return false;
pte = pte_offset_kernel(pmd, addr);
return pte_present(*pte);
}


@ -35,9 +35,6 @@ config GENERIC_LOCKBREAK
config PGSTE
def_bool y if KVM
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y
config AUDIT_ARCH
def_bool y
@ -105,6 +102,7 @@ config S390
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF


@ -102,7 +102,7 @@ CONFIG_ZSMALLOC_STAT=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_PERCPU_STATS=y
CONFIG_GUP_BENCHMARK=y
CONFIG_GUP_TEST=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m


@ -95,7 +95,7 @@ CONFIG_ZSMALLOC_STAT=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_PERCPU_STATS=y
CONFIG_GUP_BENCHMARK=y
CONFIG_GUP_TEST=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m


@ -62,17 +62,8 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *vma)
{
unsigned long vdso_pages;
vdso_pages = vdso64_pages;
if ((vdso_pages << PAGE_SHIFT) != vma->vm_end - vma->vm_start)
return -EINVAL;
if (WARN_ON_ONCE(current->mm != vma->vm_mm))
return -EFAULT;
current->mm->context.vdso_base = vma->vm_start;
return 0;
}


@ -88,6 +88,7 @@ config SPARC64
select HAVE_C_RECORDMCOUNT
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_NMI
select HAVE_REGS_AND_STACK_ACCESS_API
select ARCH_USE_QUEUED_RWLOCKS
@ -149,9 +150,6 @@ config GENERIC_ISA_DMA
bool
default y if SPARC32
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y if SPARC64
config PGTABLE_LEVELS
default 4 if 64BIT
default 3


@ -2894,7 +2894,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
if (!page)
return NULL;
if (!pgtable_pte_page_ctor(page)) {
free_unref_page(page);
__free_page(page);
return NULL;
}
return (pte_t *) page_address(page);


@ -92,6 +92,7 @@ config X86
select ARCH_STACKWALK
select ARCH_SUPPORTS_ACPI
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096
select ARCH_USE_BUILTIN_BSWAP
@ -202,6 +203,7 @@ config X86
select HAVE_MIXED_BREAKPOINTS_REGS
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_MOVE_PMD
select HAVE_MOVE_PUD
select HAVE_NMI
select HAVE_OPROFILE
select HAVE_OPTPROBES
@ -333,9 +335,6 @@ config ZONE_DMA32
config AUDIT_ARCH
def_bool y if X86_64
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y
config KASAN_SHADOW_OFFSET
hex
depends on KASAN


@ -89,30 +89,14 @@ static void vdso_fix_landing(const struct vdso_image *image,
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
const struct vdso_image *image = current->mm->context.vdso_image;
if (image->size != new_size)
return -EINVAL;
vdso_fix_landing(image, new_vma);
current->mm->context.vdso = (void __user *)new_vma->vm_start;
return 0;
}
static int vvar_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
const struct vdso_image *image = new_vma->vm_mm->context.vdso_image;
unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
if (new_size != -image->sym_vvar_start)
return -EINVAL;
return 0;
}
#ifdef CONFIG_TIME_NS
static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
@ -252,7 +236,6 @@ static const struct vm_special_mapping vdso_mapping = {
static const struct vm_special_mapping vvar_mapping = {
.name = "[vvar]",
.fault = vvar_fault,
.mremap = vvar_mremap,
};
/*


@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
bool kernel_page_present(struct page *page);
extern int kernel_set_to_readonly;


@ -1458,7 +1458,7 @@ static int pseudo_lock_dev_release(struct inode *inode, struct file *filp)
return 0;
}
static int pseudo_lock_dev_mremap(struct vm_area_struct *area)
static int pseudo_lock_dev_mremap(struct vm_area_struct *area, unsigned long flags)
{
/* Not supported */
return -EINVAL;


@ -93,6 +93,7 @@ static struct mm_struct tboot_mm = {
.pgd = swapper_pg_dir,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
.write_protect_seq = SEQCNT_ZERO(tboot_mm.write_protect_seq),
MMAP_LOCK_INITIALIZER(init_mm)
.page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(init_mm.mmlist),


@ -2194,6 +2194,7 @@ int set_direct_map_default_noflush(struct page *page)
return __set_pages_p(page, 1);
}
#ifdef CONFIG_DEBUG_PAGEALLOC
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
if (PageHighMem(page))
@ -2225,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
arch_flush_lazy_mmu_mode();
}
#endif /* CONFIG_DEBUG_PAGEALLOC */
#ifdef CONFIG_HIBERNATION
bool kernel_page_present(struct page *page)
{
unsigned int level;
@ -2238,7 +2239,6 @@ bool kernel_page_present(struct page *page)
pte = lookup_address((unsigned long)page_address(page), &level);
return (pte_val(*pte) & _PAGE_PRESENT);
}
#endif /* CONFIG_HIBERNATION */
int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
unsigned numpages, unsigned long page_flags)


@ -450,7 +450,7 @@ static ssize_t node_read_meminfo(struct device *dev,
#ifdef CONFIG_SHADOW_CALL_STACK
nid, node_page_state(pgdat, NR_KERNEL_SCS_KB),
#endif
nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)),
nid, K(node_page_state(pgdat, NR_PAGETABLE)),
nid, 0UL,
nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)),
nid, K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),


@ -2,7 +2,7 @@
config ZRAM
tristate "Compressed RAM block device support"
depends on BLOCK && SYSFS && ZSMALLOC && CRYPTO
select CRYPTO_LZO
depends on CRYPTO_LZO || CRYPTO_ZSTD || CRYPTO_LZ4 || CRYPTO_LZ4HC || CRYPTO_842
help
Creates virtual block devices called /dev/zramX (X = 0, 1, ...).
Pages written to these disks are compressed and stored in memory
@ -14,6 +14,46 @@ config ZRAM
See Documentation/admin-guide/blockdev/zram.rst for more information.
choice
prompt "Default zram compressor"
default ZRAM_DEF_COMP_LZORLE
depends on ZRAM
config ZRAM_DEF_COMP_LZORLE
bool "lzo-rle"
depends on CRYPTO_LZO
config ZRAM_DEF_COMP_ZSTD
bool "zstd"
depends on CRYPTO_ZSTD
config ZRAM_DEF_COMP_LZ4
bool "lz4"
depends on CRYPTO_LZ4
config ZRAM_DEF_COMP_LZO
bool "lzo"
depends on CRYPTO_LZO
config ZRAM_DEF_COMP_LZ4HC
bool "lz4hc"
depends on CRYPTO_LZ4HC
config ZRAM_DEF_COMP_842
bool "842"
depends on CRYPTO_842
endchoice
config ZRAM_DEF_COMP
string
default "lzo-rle" if ZRAM_DEF_COMP_LZORLE
default "zstd" if ZRAM_DEF_COMP_ZSTD
default "lz4" if ZRAM_DEF_COMP_LZ4
default "lzo" if ZRAM_DEF_COMP_LZO
default "lz4hc" if ZRAM_DEF_COMP_LZ4HC
default "842" if ZRAM_DEF_COMP_842
config ZRAM_WRITEBACK
bool "Write back incompressible or idle page to backing device"
depends on ZRAM
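
The choice block above only picks which compressor is the default; what the driver actually consumes is the string symbol CONFIG_ZRAM_DEF_COMP (see the zram_drv.c hunk below). A sketch of the net effect, assuming the zstd entry is selected in Kconfig:

/* With CONFIG_ZRAM_DEF_COMP_ZSTD=y, CONFIG_ZRAM_DEF_COMP is the string
 * "zstd", so zram_drv.c effectively compiles as: */
static const char *default_compressor = "zstd";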

View File

@ -15,8 +15,10 @@
#include "zcomp.h"
static const char * const backends[] = {
#if IS_ENABLED(CONFIG_CRYPTO_LZO)
"lzo",
"lzo-rle",
#endif
#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
"lz4",
#endif

View File

@ -42,7 +42,7 @@ static DEFINE_IDR(zram_index_idr);
static DEFINE_MUTEX(zram_index_mutex);
static int zram_major;
static const char *default_compressor = "lzo-rle";
static const char *default_compressor = CONFIG_ZRAM_DEF_COMP;
/* Module params (documentation at end) */
static unsigned int num_devices = 1;
@ -620,15 +620,19 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
return 1;
}
#define PAGE_WB_SIG "page_index="
#define PAGE_WRITEBACK 0
#define HUGE_WRITEBACK 1
#define IDLE_WRITEBACK 2
static ssize_t writeback_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
struct zram *zram = dev_to_zram(dev);
unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
unsigned long index;
unsigned long index = 0;
struct bio bio;
struct bio_vec bio_vec;
struct page *page;
@ -640,8 +644,17 @@ static ssize_t writeback_store(struct device *dev,
mode = IDLE_WRITEBACK;
else if (sysfs_streq(buf, "huge"))
mode = HUGE_WRITEBACK;
else
return -EINVAL;
else {
if (strncmp(buf, PAGE_WB_SIG, sizeof(PAGE_WB_SIG) - 1))
return -EINVAL;
ret = kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index);
if (ret || index >= nr_pages)
return -EINVAL;
nr_pages = 1;
mode = PAGE_WRITEBACK;
}
down_read(&zram->init_lock);
if (!init_done(zram)) {
@ -660,7 +673,7 @@ static ssize_t writeback_store(struct device *dev,
goto release_init_lock;
}
for (index = 0; index < nr_pages; index++) {
while (nr_pages--) {
struct bio_vec bvec;
bvec.bv_page = page;
@ -1071,7 +1084,7 @@ static ssize_t mm_stat_show(struct device *dev,
max_used = atomic_long_read(&zram->stats.max_used_pages);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu\n",
"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",
orig_size << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.compr_data_size),
mem_used << PAGE_SHIFT,
@ -1079,7 +1092,8 @@ static ssize_t mm_stat_show(struct device *dev,
max_used << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.same_pages),
pool_stats.pages_compacted,
(u64)atomic64_read(&zram->stats.huge_pages));
(u64)atomic64_read(&zram->stats.huge_pages),
(u64)atomic64_read(&zram->stats.huge_pages_since));
up_read(&zram->init_lock);
return ret;
@ -1411,6 +1425,7 @@ out:
if (comp_len == PAGE_SIZE) {
zram_set_flag(zram, index, ZRAM_HUGE);
atomic64_inc(&zram->stats.huge_pages);
atomic64_inc(&zram->stats.huge_pages_since);
}
if (flags) {
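
Beyond the existing "idle" and "huge" modes, writeback_store() above now accepts a "page_index=<n>" request that writes back exactly one page. A minimal user-space sketch of the new interface (zram0 and index 100 are hypothetical; the index must be below disksize >> PAGE_SHIFT):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *req = "page_index=100";	/* must match PAGE_WB_SIG */
	int fd = open("/sys/block/zram0/writeback", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0)
		perror("write");	/* EINVAL for malformed or out-of-range index */
	close(fd);
	return 0;
}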

View File

@ -78,6 +78,7 @@ struct zram_stats {
atomic64_t notify_free; /* no. of swap slot free notifications */
atomic64_t same_pages; /* no. of same element filled pages */
atomic64_t huge_pages; /* no. of huge pages */
atomic64_t huge_pages_since; /* no. of huge pages since zram set up */
atomic64_t pages_stored; /* no. of pages currently stored */
atomic_long_t max_used_pages; /* no. of maximum pages stored */
atomic64_t writestall; /* no. of write slow paths */

View File

@ -256,7 +256,7 @@ static vm_fault_t dev_dax_fault(struct vm_fault *vmf)
return dev_dax_huge_fault(vmf, PE_SIZE_PTE);
}
static int dev_dax_split(struct vm_area_struct *vma, unsigned long addr)
static int dev_dax_may_split(struct vm_area_struct *vma, unsigned long addr)
{
struct file *filp = vma->vm_file;
struct dev_dax *dev_dax = filp->private_data;
@ -277,7 +277,7 @@ static unsigned long dev_dax_pagesize(struct vm_area_struct *vma)
static const struct vm_operations_struct dax_vm_ops = {
.fault = dev_dax_fault,
.huge_fault = dev_dax_huge_fault,
.split = dev_dax_split,
.may_split = dev_dax_may_split,
.pagesize = dev_dax_pagesize,
};

View File

@ -61,7 +61,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
return -EINVAL;
}
data = kzalloc(sizeof(*data) + sizeof(struct resource *) * dev_dax->nr_range, GFP_KERNEL);
data = kzalloc(struct_size(data, res, dev_dax->nr_range), GFP_KERNEL);
if (!data)
return -ENOMEM;
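
struct_size() (from include/linux/overflow.h) sizes a structure with a trailing flexible array while checking the arithmetic. A sketch of what the converted call computes, assuming res is the flexible array member of *data:

/* struct_size(data, res, dev_dax->nr_range) is, conceptually: */
size_t bytes = sizeof(*data) + dev_dax->nr_range * sizeof(data->res[0]);
/* ...except that it saturates to SIZE_MAX on overflow, so the
 * kzalloc() above fails cleanly instead of under-allocating. */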

View File

@ -270,8 +270,7 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
fences[i++] = dma_fence_get(a_fences[0]);
if (num_fences > i) {
nfences = krealloc(fences, i * sizeof(*fences),
GFP_KERNEL);
nfences = krealloc_array(fences, i, sizeof(*fences), GFP_KERNEL);
if (!nfences)
goto err;
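
This is one of several conversions in this series from open-coded krealloc(n * size) to krealloc_array(), which rejects an overflowing multiplication instead of silently allocating a short buffer. A sketch of the helper, per its definition in include/linux/slab.h:

static __must_check inline void *
krealloc_array(void *p, size_t new_n, size_t new_size, gfp_t flags)
{
	size_t bytes;

	/* refuse new_n * new_size if the product would wrap */
	if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
		return NULL;

	return krealloc(p, bytes, flags);
}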

View File

@ -207,8 +207,8 @@ static void enumerate_dimms(const struct dmi_header *dh, void *arg)
if (!hw->num_dimms || !(hw->num_dimms % 16)) {
struct dimm_info *new;
new = krealloc(hw->dimms, (hw->num_dimms + 16) * sizeof(struct dimm_info),
GFP_KERNEL);
new = krealloc_array(hw->dimms, hw->num_dimms + 16,
sizeof(struct dimm_info), GFP_KERNEL);
if (!new) {
WARN_ON_ONCE(1);
return;

View File

@ -57,6 +57,7 @@ struct mm_struct efi_mm = {
.mm_rb = RB_ROOT,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
.write_protect_seq = SEQCNT_ZERO(efi_mm.write_protect_seq),
MMAP_LOCK_INITIALIZER(efi_mm)
.page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(efi_mm.mmlist),

View File

@ -964,7 +964,8 @@ drm_atomic_get_connector_state(struct drm_atomic_state *state,
struct __drm_connnectors_state *c;
int alloc = max(index + 1, config->num_connector);
c = krealloc(state->connectors, alloc * sizeof(*state->connectors), GFP_KERNEL);
c = krealloc_array(state->connectors, alloc,
sizeof(*state->connectors), GFP_KERNEL);
if (!c)
return ERR_PTR(-ENOMEM);

View File

@ -2002,7 +2002,7 @@ nr_pages_store(struct device *dev, struct device_attribute *attr,
}
nr_wins++;
rewin = krealloc(win, sizeof(*win) * nr_wins, GFP_KERNEL);
rewin = krealloc_array(win, nr_wins, sizeof(*win), GFP_KERNEL);
if (!rewin) {
kfree(win);
return -ENOMEM;

View File

@ -51,8 +51,6 @@ static void falconide_release_lock(void)
static void falconide_get_lock(irq_handler_t handler, void *data)
{
if (falconide_intr_lock == 0) {
if (in_interrupt() > 0)
panic("Falcon IDE hasn't ST-DMA lock in interrupt");
stdma_lock(handler, data);
falconide_intr_lock = 1;
}

View File

@ -1592,9 +1592,6 @@ EXPORT_SYMBOL_GPL(ide_port_unregister_devices);
static void ide_unregister(ide_hwif_t *hwif)
{
BUG_ON(in_interrupt());
BUG_ON(irqs_disabled());
mutex_lock(&ide_cfg_mtx);
if (hwif->present) {

View File

@ -11,6 +11,7 @@ lkdtm-$(CONFIG_LKDTM) += usercopy.o
lkdtm-$(CONFIG_LKDTM) += stackleak.o
lkdtm-$(CONFIG_LKDTM) += cfi.o
KASAN_SANITIZE_rodata.o := n
KASAN_SANITIZE_stackleak.o := n
KCOV_INSTRUMENT_rodata.o := n

View File

@ -39,7 +39,7 @@ int pinctrl_utils_reserve_map(struct pinctrl_dev *pctldev,
if (old_num >= new_num)
return 0;
new_map = krealloc(*map, sizeof(*new_map) * new_num, GFP_KERNEL);
new_map = krealloc_array(*map, new_num, sizeof(*new_map), GFP_KERNEL);
if (!new_map) {
dev_err(pctldev->dev, "krealloc(map) failed\n");
return -ENOMEM;

View File

@ -198,7 +198,8 @@ static int resize_iovec(struct vringh_kiov *iov, gfp_t gfp)
flag = (iov->max_num & VRINGH_IOV_ALLOCATED);
if (flag)
new = krealloc(iov->iov, new_num * sizeof(struct iovec), gfp);
new = krealloc_array(iov->iov, new_num,
sizeof(struct iovec), gfp);
else {
new = kmalloc_array(new_num, sizeof(struct iovec), gfp);
if (new) {

View File

@ -1114,9 +1114,7 @@ static int virtballoon_validate(struct virtio_device *vdev)
* page reporting as it could potentially change the contents
* of our free pages.
*/
if (!want_init_on_free() &&
(IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
!page_poisoning_enabled()))
if (!want_init_on_free() && !page_poisoning_enabled_static())
__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
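
page_poisoning_enabled_static() is the static-key fast path added by the page-poison rework in this series, replacing the runtime check plus the old CONFIG_PAGE_POISONING_NO_SANITY special case. A sketch of the helper as defined in include/linux/mm.h in this tree:

DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);

/* hot-path variant: a patched-out branch rather than a memory load */
static inline bool page_poisoning_enabled_static(void)
{
	return static_branch_unlikely(&_page_poisoning_enabled);
}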

View File

@ -27,11 +27,6 @@ static int fill_list(unsigned int nr_pages)
if (!res)
return -ENOMEM;
pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
if (!pgmap)
goto err_pgmap;
pgmap->type = MEMORY_DEVICE_GENERIC;
res->name = "Xen scratch";
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
@ -43,6 +38,11 @@ static int fill_list(unsigned int nr_pages)
goto err_resource;
}
pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
if (!pgmap)
goto err_pgmap;
pgmap->type = MEMORY_DEVICE_GENERIC;
pgmap->range = (struct range) {
.start = res->start,
.end = res->end,
@ -92,10 +92,10 @@ static int fill_list(unsigned int nr_pages)
return 0;
err_memremap:
release_resource(res);
err_resource:
kfree(pgmap);
err_pgmap:
release_resource(res);
err_resource:
kfree(res);
return ret;
}

View File

@ -323,13 +323,16 @@ static void aio_free_ring(struct kioctx *ctx)
}
}
static int aio_ring_mremap(struct vm_area_struct *vma)
static int aio_ring_mremap(struct vm_area_struct *vma, unsigned long flags)
{
struct file *file = vma->vm_file;
struct mm_struct *mm = vma->vm_mm;
struct kioctx_table *table;
int i, res = -EINVAL;
if (flags & MREMAP_DONTUNMAP)
return -EINVAL;
spin_lock(&mm->ioctx_lock);
rcu_read_lock();
table = rcu_dereference(mm->ioctx_table);
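
With the extra flags argument, aio can now refuse MREMAP_DONTUNMAP, which would otherwise leave a second mapping of the ring behind. A hedged user-space sketch of the visible effect (ring and len stand for an existing aio ring mapping and its size):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

void demo(void *ring, size_t len)
{
	/* MREMAP_DONTUNMAP requires MREMAP_MAYMOVE; the fifth argument
	 * is ignored without MREMAP_FIXED */
	void *p = mremap(ring, len, len,
			 MREMAP_MAYMOVE | MREMAP_DONTUNMAP, NULL);

	if (p == MAP_FAILED)
		printf("mremap failed: %d (EINVAL expected)\n", errno);
}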

View File

@ -323,7 +323,7 @@ static ssize_t ntfs_prepare_file_for_write(struct kiocb *iocb,
unsigned long flags;
struct file *file = iocb->ki_filp;
struct inode *vi = file_inode(file);
ntfs_inode *base_ni, *ni = NTFS_I(vi);
ntfs_inode *ni = NTFS_I(vi);
ntfs_volume *vol = ni->vol;
ntfs_debug("Entering for i_ino 0x%lx, attribute type 0x%x, pos "
@ -365,9 +365,6 @@ static ssize_t ntfs_prepare_file_for_write(struct kiocb *iocb,
err = -EOPNOTSUPP;
goto out;
}
base_ni = ni;
if (NInoAttr(ni))
base_ni = ni->ext.base_ntfs_ino;
err = file_remove_privs(file);
if (unlikely(err))
goto out;

View File

@ -2347,7 +2347,6 @@ int ntfs_truncate(struct inode *vi)
ATTR_RECORD *a;
const char *te = " Leaving file length out of sync with i_size.";
int err, mp_size, size_change, alloc_change;
u32 attr_len;
ntfs_debug("Entering for inode 0x%lx.", vi->i_ino);
BUG_ON(NInoAttr(ni));
@ -2721,7 +2720,6 @@ do_non_resident_truncate:
* this cannot fail since we are making the attribute smaller thus by
* definition there is enough space to do so.
*/
attr_len = le32_to_cpu(a->length);
err = ntfs_attr_record_resize(m, a, mp_size +
le16_to_cpu(a->data.non_resident.mapping_pairs_offset));
BUG_ON(err);

View File

@ -478,7 +478,7 @@ bool ntfs_check_logfile(struct inode *log_vi, RESTART_PAGE_HEADER **rp)
u8 *kaddr = NULL;
RESTART_PAGE_HEADER *rstr1_ph = NULL;
RESTART_PAGE_HEADER *rstr2_ph = NULL;
int log_page_size, log_page_mask, err;
int log_page_size, err;
bool logfile_is_empty = true;
u8 log_page_bits;
@ -501,7 +501,6 @@ bool ntfs_check_logfile(struct inode *log_vi, RESTART_PAGE_HEADER **rp)
log_page_size = DefaultLogPageSize;
else
log_page_size = PAGE_SIZE;
log_page_mask = log_page_size - 1;
/*
* Use ntfs_ffs() instead of ffs() to enable the compiler to
* optimize log_page_size and log_page_bits into constants.

View File

@ -1198,7 +1198,6 @@ static int o2net_process_message(struct o2net_sock_container *sc,
msglog(hdr, "bad magic\n");
ret = -EINVAL;
goto out;
break;
}
/* find a handler for it */

View File

@ -1088,8 +1088,8 @@ static int ocfs2_check_if_ancestor(struct ocfs2_super *osb,
child_inode_no = parent_inode_no;
if (++i >= MAX_LOOKUP_TIMES) {
mlog(ML_NOTICE, "max lookup times reached, filesystem "
"may have nested directories, "
mlog_ratelimited(ML_NOTICE, "max lookup times reached, "
"filesystem may have nested directories, "
"src inode: %llu, dest inode: %llu.\n",
(unsigned long long)src_inode_no,
(unsigned long long)dest_inode_no);

View File

@ -193,8 +193,6 @@ kclist_add_private(unsigned long pfn, unsigned long nr_pages, void *arg)
return 1;
p = pfn_to_page(pfn);
if (!memmap_valid_within(pfn, p, page_zone(p)))
return 1;
ent = kmalloc(sizeof(*ent), GFP_KERNEL);
if (!ent)

View File

@ -107,7 +107,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
global_node_page_state(NR_KERNEL_SCS_KB));
#endif
show_val_kb(m, "PageTables: ",
global_zone_page_state(NR_PAGETABLE));
global_node_page_state(NR_PAGETABLE));
show_val_kb(m, "NFS_Unstable: ", 0);
show_val_kb(m, "Bounce: ",

View File

@ -28,7 +28,7 @@
#include <linux/security.h>
#include <linux/hugetlb.h>
int sysctl_unprivileged_userfaultfd __read_mostly = 1;
int sysctl_unprivileged_userfaultfd __read_mostly;
static struct kmem_cache *userfaultfd_ctx_cachep __read_mostly;
@ -405,6 +405,13 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
if (ctx->features & UFFD_FEATURE_SIGBUS)
goto out;
if ((vmf->flags & FAULT_FLAG_USER) == 0 &&
ctx->flags & UFFD_USER_MODE_ONLY) {
printk_once(KERN_WARNING "uffd: Set unprivileged_userfaultfd "
"sysctl knob to 1 if kernel faults must be handled "
"without obtaining CAP_SYS_PTRACE capability\n");
goto out;
}
/*
* If it's already released don't get it. This avoids to loop
@ -1959,16 +1966,23 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
struct userfaultfd_ctx *ctx;
int fd;
if (!sysctl_unprivileged_userfaultfd && !capable(CAP_SYS_PTRACE))
if (!sysctl_unprivileged_userfaultfd &&
(flags & UFFD_USER_MODE_ONLY) == 0 &&
!capable(CAP_SYS_PTRACE)) {
printk_once(KERN_WARNING "uffd: Set unprivileged_userfaultfd "
"sysctl knob to 1 if kernel faults must be handled "
"without obtaining CAP_SYS_PTRACE capability\n");
return -EPERM;
}
BUG_ON(!current->mm);
/* Check the UFFD_* constants for consistency. */
BUILD_BUG_ON(UFFD_USER_MODE_ONLY & UFFD_SHARED_FCNTL_FLAGS);
BUILD_BUG_ON(UFFD_CLOEXEC != O_CLOEXEC);
BUILD_BUG_ON(UFFD_NONBLOCK != O_NONBLOCK);
if (flags & ~UFFD_SHARED_FCNTL_FLAGS)
if (flags & ~(UFFD_SHARED_FCNTL_FLAGS | UFFD_USER_MODE_ONLY))
return -EINVAL;
ctx = kmem_cache_alloc(userfaultfd_ctx_cachep, GFP_KERNEL);
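
UFFD_USER_MODE_ONLY lets an unprivileged process open a userfaultfd that can only resolve user-mode faults, which is why the sysctl above can now default to 0. A minimal user-space sketch (the fallback define matches the value this series adds to the uapi header):

#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef UFFD_USER_MODE_ONLY
#define UFFD_USER_MODE_ONLY 1	/* uapi linux/userfaultfd.h */
#endif

int main(void)
{
	/* succeeds without CAP_SYS_PTRACE even when
	 * vm.unprivileged_userfaultfd = 0 */
	long fd = syscall(__NR_userfaultfd,
			  O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);

	if (fd < 0) {
		perror("userfaultfd");
		return 1;
	}
	close(fd);
	return 0;
}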

View File

@ -668,21 +668,6 @@ struct cgroup_subsys {
*/
bool threaded:1;
/*
* If %false, this subsystem is properly hierarchical -
* configuration, resource accounting and restriction on a parent
* cgroup cover those of its children. If %true, hierarchy support
* is broken in some ways - some subsystems ignore hierarchy
* completely while others are only implemented half-way.
*
* It's now disallowed to create nested cgroups if the subsystem is
* broken and cgroup core will emit a warning message on such
* cases. Eventually, all subsystems will be made properly
* hierarchical and this will go away.
*/
bool broken_hierarchy:1;
bool warned_broken_hierarchy:1;
/* the following two fields are initialized automatically during boot */
int id;
const char *name;

View File

@ -98,11 +98,8 @@ extern void reset_isolation_suitable(pg_data_t *pgdat);
extern enum compact_result compaction_suitable(struct zone *zone, int order,
unsigned int alloc_flags, int highest_zoneidx);
extern void defer_compaction(struct zone *zone, int order);
extern bool compaction_deferred(struct zone *zone, int order);
extern void compaction_defer_reset(struct zone *zone, int order,
bool alloc_success);
extern bool compaction_restarting(struct zone *zone, int order);
/* Compaction has made some progress and retrying makes sense */
static inline bool compaction_made_progress(enum compact_result result)
@ -194,15 +191,6 @@ static inline enum compact_result compaction_suitable(struct zone *zone, int ord
return COMPACT_SKIPPED;
}
static inline void defer_compaction(struct zone *zone, int order)
{
}
static inline bool compaction_deferred(struct zone *zone, int order)
{
return true;
}
static inline bool compaction_made_progress(enum compact_result result)
{
return false;

View File

@ -3230,7 +3230,7 @@ static inline bool vma_is_fsdax(struct vm_area_struct *vma)
{
struct inode *inode;
if (!vma->vm_file)
if (!IS_ENABLED(CONFIG_FS_DAX) || !vma->vm_file)
return false;
if (!vma_is_dax(vma))
return false;
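
Since IS_ENABLED() folds to a compile-time constant, the added check lets the compiler discard the rest of the function on !CONFIG_FS_DAX kernels. Conceptually:

/* with CONFIG_FS_DAX=n, IS_ENABLED(CONFIG_FS_DAX) is 0, so: */
if (!0 || !vma->vm_file)	/* always taken */
	return false;		/* vma_is_fsdax() becomes constant false */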

View File

@ -580,8 +580,6 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
extern void __free_pages(struct page *page, unsigned int order);
extern void free_pages(unsigned long addr, unsigned int order);
extern void free_unref_page(struct page *page);
extern void free_unref_page_list(struct list_head *list);
struct page_frag_cache;
extern void __page_frag_cache_drain(struct page *page, unsigned int count);

Some files were not shown because too many files have changed in this diff.