
locking/lockdep: Verify whether lock objects are small enough to be used as class keys

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: johannes.berg@intel.com
Cc: tj@kernel.org
Link: https://lkml.kernel.org/r/20190214230058.196511-18-bvanassche@acm.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
hifive-unleashed-5.1
Bart Van Assche 2019-02-14 15:00:52 -08:00 committed by Ingo Molnar
parent b526b2e39a
commit 4bf5086218
1 changed file with 11 additions and 0 deletions


@@ -758,6 +758,17 @@ static bool assign_lock_key(struct lockdep_map *lock)
 {
 	unsigned long can_addr, addr = (unsigned long)lock;
 
+#ifdef __KERNEL__
+	/*
+	 * lockdep_free_key_range() assumes that struct lock_class_key
+	 * objects do not overlap. Since we use the address of lock
+	 * objects as class key for static objects, check whether the
+	 * size of lock_class_key objects does not exceed the size of
+	 * the smallest lock object.
+	 */
+	BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
+#endif
+
 	if (__is_kernel_percpu_address(addr, &can_addr))
 		lock->key = (void *)can_addr;
 	else if (__is_module_percpu_address(addr, &can_addr))