x86 & generic: change to __builtin_prefetch()

gcc 3.2+ supports __builtin_prefetch, so it's possible to use it on all
architectures. Change the generic fallback in linux/prefetch.h to use it
instead of noping it out. gcc should do the right thing when the
architecture doesn't support prefetching.
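
For context, a minimal sketch of how the builtin behaves (not part of the
patch; the function name and the prefetch distance of 16 elements are made
up for illustration). __builtin_prefetch() takes the address plus two
optional constant arguments, rw (0 = read, 1 = write) and locality (0..3);
on targets without prefetch instructions gcc only evaluates the address and
emits no prefetch, which is what makes it safe as the generic fallback:

void sum_with_prefetch(const int *a, int n, long *out)
{
	long s = 0;
	int i;

	for (i = 0; i < n; i++) {
		/* hint that a[i + 16] will be read soon; prefetching past
		 * the end of the array is harmless, the hint never faults */
		__builtin_prefetch(&a[i + 16], 0, 1);
		s += a[i];
	}
	*out = s;
}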

Undefine the x86-64 inline assembler version and use the fallback.
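
Illustrative note, not part of the patch: on x86-64 gcc expands the default
form of the builtin to the same prefetcht0 instruction the removed inline
assembler used, so generic callers of prefetch() keep getting an equivalent
hint through the fallback. A hypothetical one-liner showing the equivalence:

static inline void prefetch_equiv(const void *x)
{
	/* rw = 0 (read) and locality = 3 are the defaults; on x86-64 this
	 * typically compiles to "prefetcht0 (%reg)", matching the old asm */
	__builtin_prefetch(x);
}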

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Andi Kleen 2007-10-19 20:35:04 +02:00 committed by Thomas Gleixner
parent 124d395fd0
commit ab483570a1
2 changed files with 2 additions and 13 deletions

@@ -390,12 +390,6 @@ static inline void sync_core(void)
 	asm volatile("cpuid" : "=a" (tmp) : "0" (1) : "ebx","ecx","edx","memory");
 }
 
-#define ARCH_HAS_PREFETCH
-static inline void prefetch(void *x)
-{
-	asm volatile("prefetcht0 (%0)" :: "r" (x));
-}
-
 #define ARCH_HAS_PREFETCHW 1
 static inline void prefetchw(void *x)
 {

@@ -34,17 +34,12 @@
  */
 
-/*
- * These cannot be do{}while(0) macros. See the mental gymnastics in
- * the loop macro.
- */
-
 #ifndef ARCH_HAS_PREFETCH
-static inline void prefetch(const void *x) {;}
+#define prefetch(x) __builtin_prefetch(x)
 #endif
 
 #ifndef ARCH_HAS_PREFETCHW
-static inline void prefetchw(const void *x) {;}
+#define prefetchw(x) __builtin_prefetch(x,1)
 #endif
 
 #ifndef ARCH_HAS_SPINLOCK_PREFETCH
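
As a usage sketch (not from the patch; the node type and walk() helper are
invented), generic code that calls prefetch() now gets a real
__builtin_prefetch() hint even on architectures that never defined
ARCH_HAS_PREFETCH:

struct node {
	struct node *next;
	int payload;
};

static int walk(const struct node *head)
{
	const struct node *p;
	int sum = 0;

	for (p = head; p; p = p->next) {
		/* read prefetch of the next node; a NULL hint is harmless */
		prefetch(p->next);
		sum += p->payload;
	}
	return sum;
}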