From 965c484815f591737fb546628704d4c362317705 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Mon, 14 Dec 2020 19:04:36 -0800
Subject: [PATCH] mm, slub: use kmem_cache_debug_flags() in deactivate_slab()

Commit 9cf7a1118365 ("mm/slub: make add_full() condition more explicit")
replaced an unnecessarily generic kmem_cache_debug(s) check with an
explicit check of SLAB_STORE_USER and #ifdef CONFIG_SLUB_DEBUG. We can
achieve the same specific check with the recently added
kmem_cache_debug_flags() which removes the #ifdef and restores the
no-branch-overhead benefit of the static key check when slub debugging
is not enabled.

Link: https://lkml.kernel.org/r/3ef24214-38c7-1238-8296-88caf7f48ab6@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Abel Wu
Cc: Christopher Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Liu Xiang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/slub.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2098a544c4e2..79afc8a38ebf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2245,8 +2245,7 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-#ifdef CONFIG_SLUB_DEBUG
-	if ((s->flags & SLAB_STORE_USER) && !lock) {
+	if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2255,7 +2254,6 @@ redo:
 			 */
 			spin_lock(&n->list_lock);
 		}
-#endif
 	}
 
 	if (l != m) {
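
For reference (not part of the patch itself): the helper this change switches
to was added to mm/slub.c shortly before this commit. A sketch of its shape at
the time, written from memory, looks roughly like this; the exact guard and
static key names may differ from the tree you are looking at:

	/* Sketch only; see mm/slub.c for the authoritative version. */
	static inline bool kmem_cache_debug_flags(struct kmem_cache *s,
						  slab_flags_t flags)
	{
	#ifdef CONFIG_SLUB_DEBUG
		/*
		 * The static key is patched to a no-op when slub_debug is not
		 * enabled, so callers pay no branch cost in the common case.
		 */
		if (static_branch_unlikely(&slub_debug_enabled))
			return s->flags & flags;
	#endif
		return false;
	}

Because the flag test sits behind static_branch_unlikely(), the call in
deactivate_slab() compiles down to a patched-out jump when debugging is off,
which is the "no-branch-overhead benefit" the changelog refers to.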