From c52e75935f8ded2bd4a75eb08e914bd96802725b Mon Sep 17 00:00:00 2001
From: Wei Yang <richard.weiyang@gmail.com>
Date: Tue, 5 Mar 2019 15:44:01 -0800
Subject: [PATCH] mm: remove extra drain pages on pcp list

In the current implementation, there are two places that isolate a range
of pages: __offline_pages() and alloc_contig_range().  During this
procedure, the pages on the pcp (per-cpu pages) lists are drained.
Below is a brief call flow:

  __offline_pages()/alloc_contig_range()
      start_isolate_page_range()
          set_migratetype_isolate()
              drain_all_pages()
      drain_all_pages()                 <--- A

This shows that the current logic isolates and drains the pcp list for
each pageblock, and then drains the pcp list again for the whole range
at point A.

start_isolate_page_range is responsible for isolating the given pfn
range.  One part of that job is to make sure that pages sitting on the
allocator's pcp lists are properly isolated as well.  Otherwise they
could be reused and the range wouldn't be completely isolated until the
memory is freed back.  While there is no strict guarantee here, because
pages might get allocated at any time before drain_all_pages is called,
there doesn't seem to be any strong demand for such a guarantee.

In any case, draining is already done at the isolation level and there
is no need to do it again later by start_isolate_page_range callers
(memory hotplug and the CMA allocator currently).  Therefore remove the
pointless draining in the existing callers to make the code clearer and
functionally correct.

[mhocko@suse.com: provide a clearer changelog for the last two paragraphs]
Link: http://lkml.kernel.org/r/20190105233141.2329-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/memory_hotplug.c | 1 -
 mm/page_alloc.c     | 1 -
 2 files changed, 2 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b3d3c64d15df..eb1a28469b66 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1625,7 +1625,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
 
 		cond_resched();
 		lru_add_drain_all();
-		drain_all_pages(zone);
 
 		pfn = scan_movable_pages(pfn, end_pfn);
 		if (pfn) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 034b8b6043a3..8afb6f007f68 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8196,7 +8196,6 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 */
 
 	lru_add_drain_all();
-	drain_all_pages(cc.zone);
 
 	order = 0;
 	outer_start = start;
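
For readers less familiar with the pcp machinery, below is a minimal
user-space sketch -- an illustrative analogy, not kernel code.  The
pcp_count array, the NR_CPUS value, and these simplified versions of
drain_all_pages() and start_isolate_page_range() are hypothetical
stand-ins that only model why a second caller-level drain after
isolation is redundant once isolation has already drained the per-cpu
lists:

	/*
	 * Toy model of pcp draining.  All names and values here are
	 * illustrative stand-ins, not the real kernel implementation.
	 */
	#include <stdio.h>

	#define NR_CPUS 4

	/* Toy per-cpu cache: how many free pages each CPU caches. */
	static int pcp_count[NR_CPUS] = { 3, 1, 4, 2 };

	/* Flush every CPU's cached pages back to the (toy) buddy lists. */
	static int drain_all_pages(void)
	{
		int cpu, drained = 0;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			drained += pcp_count[cpu];
			pcp_count[cpu] = 0;
		}
		return drained;
	}

	/*
	 * Isolation drains the pcp lists itself, as the real
	 * set_migratetype_isolate() does via drain_all_pages().
	 */
	static void start_isolate_page_range(void)
	{
		printf("isolate: drained %d cached pages\n", drain_all_pages());
	}

	int main(void)
	{
		start_isolate_page_range();
		/* The extra caller-level drain this patch removes: no-op. */
		printf("caller:  drained %d cached pages\n", drain_all_pages());
		return 0;
	}

Compiled and run, the second drain reports zero pages, which is the
redundancy the patch eliminates.  As the changelog concedes, new
allocations could refill the lists between the two drains, but no
caller relies on a stricter guarantee than the drain done at the
isolation level.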