
drm/ttm: swap consecutive allocated pooled pages v4

When we detect consecutive allocation of pages, swap them to avoid
accidentally freeing them as a huge page.
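
As an illustration only (not part of the patch), here is a minimal standalone
C sketch of the same swap idea, using made-up numeric "page addresses" instead
of struct page pointers: whenever the newest entry is the direct successor of
the previous one, the two are swapped so the array never holds an ascending
pair that a later contiguity check could mistake for a huge page.

/* Hypothetical userspace sketch of the swap-on-consecutive-order trick. */
#include <stdio.h>

#define SWAP(a, b) do { long _t = (a); (a) = (b); (b) = _t; } while (0)

int main(void)
{
	/* pretend these are page addresses handed out by a pool */
	long incoming[] = { 100, 101, 205, 206, 300 };
	long pages[5];
	unsigned count = 0, first = 0;

	for (unsigned i = 0; i < 5; i++) {
		long tmp = incoming[i];

		/* same test as the patch: previous entry is tmp's predecessor */
		if (count > first && pages[count - 1] == tmp - 1)
			SWAP(tmp, pages[count - 1]);
		pages[count++] = tmp;
	}

	for (unsigned i = 0; i < count; i++)
		printf("%ld ", pages[i]);	/* prints: 101 100 206 205 300 */
	printf("\n");
	return 0;
}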

v2: use swap
v3: check if it's really the first allocated page
v4: don't touch the loop variable

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Roger He <Hongbo.He@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Christian König 2017-12-04 11:26:14 +01:00 committed by Alex Deucher
parent d4b7648d6d
commit ae937fe196
1 changed file with 9 additions and 2 deletions


@@ -958,8 +958,15 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 		r = ttm_page_pool_get_pages(pool, &plist, flags, cstate,
 					    npages - count, 0);
 
-		list_for_each_entry(p, &plist, lru)
-			pages[count++] = p;
+		first = count;
+		list_for_each_entry(p, &plist, lru) {
+			struct page *tmp = p;
+
+			/* Swap the pages if we detect consecutive order */
+			if (count > first && pages[count - 1] == tmp - 1)
+				swap(tmp, pages[count - 1]);
+			pages[count++] = tmp;
+		}
 
 		if (r) {
 			/* If there is any pages in the list put them back to