remarkable-linux/mm
Christoph Lameter 7c83912062 vmstat: Use per cpu atomics to avoid interrupt disable / enable
Currently the operations that increment vm counters must disable interrupts
so that their per-cpu counter housekeeping is not corrupted.

So use this_cpu_cmpxchg() to avoid that overhead. Since we can no longer
count on preemption being disabled, some minor races remain:
the fetching of the counter thresholds is racy.
A threshold from another cpu may be applied if we happen to be
rescheduled onto another cpu.  However, the following vmstat operation
will then bring the counter back under the threshold limit.

The operations for __xxx_zone_state are not changed, since the caller
has already taken care of the synchronization needs (and their cycle
count is therefore even lower than the optimized irq-disable version
provided here).

The optimization using this_cpu_cmpxchg() is only used if the arch
supports efficient this_cpu ops (CONFIG_CMPXCHG_LOCAL must be set!)

The use of this_cpu_cmpxchg() reduces the cycle count for the counter
operations by 80% (inc_zone_page_state goes from 170 cycles to 32).

Signed-off-by: Christoph Lameter <cl@linux.com>
2010-12-18 15:54:49 +01:00
backing-dev.c Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6 2010-10-26 17:58:44 -07:00
bootmem.c
bounce.c
compaction.c
debug-pagealloc.c
dmapool.c mm: add a might_sleep_if() to dma_pool_alloc() 2010-10-26 16:52:08 -07:00
fadvise.c
failslab.c
filemap.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
filemap_xip.c
fremap.c
highmem.c mm,x86: fix kmap_atomic_push vs ioremap_32.c 2010-10-27 18:03:05 -07:00
hugetlb.c mm/hugetlb.c: avoid double unlock_page() in hugetlb_fault() 2010-12-02 14:51:14 -08:00
hwpoison-inject.c
init-mm.c
internal.h mm: fix is_mem_section_removable() page_order BUG_ON check 2010-10-26 16:52:11 -07:00
Kconfig Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu 2010-10-22 17:31:36 -07:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c ksm: annotate ksm_thread_mutex is no deadlock source 2010-12-02 14:51:15 -08:00
maccess.c MN10300: Save frame pointer in thread_info struct rather than global var 2010-10-27 17:29:01 +01:00
madvise.c
Makefile
memblock.c memblock: Annotate memblock functions with __init_memblock 2010-10-11 16:00:52 -07:00
memcontrol.c cgroups: make swap accounting default behavior configurable 2010-11-25 06:50:45 +09:00
memory-failure.c mem-hotplug: introduce {un}lock_memory_hotplug() 2010-12-02 14:51:15 -08:00
memory.c use clear_page()/copy_page() in favor of memset()/memcpy() on whole pages 2010-10-26 16:52:13 -07:00
memory_hotplug.c mem-hotplug: introduce {un}lock_memory_hotplug() 2010-12-02 14:51:15 -08:00
mempolicy.c mm/mempolicy.c: add rcu read lock to protect pid structure 2010-12-02 14:51:14 -08:00
mempool.c
migrate.c mm: fix error reporting in move_pages() syscall 2010-10-26 16:52:11 -07:00
mincore.c
mlock.c
mm_init.c
mmap.c install_special_mapping skips security_file_mmap check. 2010-12-15 12:30:36 -08:00
mmu_context.c
mmu_notifier.c
mmzone.c
mprotect.c perf_events: Fix perf_counter_mmap() hook in mprotect() 2010-11-09 10:19:38 -08:00
mremap.c mm: remove pte_*map_nested() 2010-10-26 16:52:08 -07:00
msync.c
nommu.c nommu: yield CPU while disposing VM 2010-11-25 06:50:38 +09:00
oom_kill.c oom: kill all threads sharing oom killed task's mm 2010-10-26 16:52:05 -07:00
page-writeback.c writeback: remove the internal 5% low bound on dirty_ratio 2010-10-26 16:52:08 -07:00
page_alloc.c PM / Hibernate: Fix memory corruption related to swap 2010-12-06 23:52:08 +01:00
page_cgroup.c
page_io.c
page_isolation.c mm: page_isolation: codeclean fix comment and rm unneeded val init 2010-10-26 16:52:11 -07:00
pagewalk.c mm: remove call to find_vma in pagewalk for non-hugetlbfs 2010-11-25 06:50:46 +09:00
percpu-km.c
percpu-vm.c
percpu.c percpu: zero memory more efficiently in mm/percpu.c::pcpu_mem_alloc() 2010-12-07 14:47:23 +01:00
prio_tree.c
quicklist.c
readahead.c
rmap.c rmap: make anon_vma_chain_free() static 2010-10-26 16:52:10 -07:00
shmem.c convert get_sb_nodev() users 2010-10-29 04:16:31 -04:00
slab.c core: Replace __get_cpu_var with __this_cpu_read if not used for an address. 2010-12-17 15:07:19 +01:00
slob.c
slub.c slub: Fix a crash during slabinfo -v 2010-12-04 09:53:49 +02:00
sparse-vmemmap.c
sparse.c
swap.c fuse: use release_pages() 2010-10-27 18:03:17 -07:00
swap_state.c
swapfile.c /proc/swaps: support polling 2010-10-26 16:52:11 -07:00
thrash.c
truncate.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
util.c export __get_user_pages_fast() function 2010-10-24 10:51:24 +02:00
vmalloc.c vmalloc: eagerly clear ptes on vunmap 2010-12-02 14:51:15 -08:00
vmscan.c Call the filesystem back whenever a page is removed from the page cache 2010-12-02 09:55:21 -05:00
vmstat.c vmstat: Use per cpu atomics to avoid interrupt disable / enable 2010-12-18 15:54:49 +01:00