
RDMA/umem: Revert broken 'off by one' fix

The previous attempted bug fix overlooked the fact that
ib_umem_odp_map_dma_single_page() was doing a put_page() upon hitting an
error. So there was not really a bug there.

Therefore, this reverts the off-by-one change, but keeps the change to use
release_pages() in the error path.

Fixes: 75a3e6a3c1 ("RDMA/umem: minor bug fix in error handling path")
Suggested-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
John Hubbard 2019-03-05 18:00:22 -08:00 committed by Jason Gunthorpe
parent 75a3e6a3c1
commit 0c507d8f84
1 changed file with 6 additions and 3 deletions


@@ -687,10 +687,13 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		if (ret < 0) {
 			/*
-			 * Release pages, starting at the the first page
-			 * that experienced an error.
+			 * Release pages, remembering that the first page
+			 * to hit an error was already released by
+			 * ib_umem_odp_map_dma_single_page().
 			 */
-			release_pages(&local_page_list[j], npages - j);
+			if (npages - (j + 1) > 0)
+				release_pages(&local_page_list[j+1],
+					      npages - (j + 1));
 			break;
 		}
 	}
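
To make the indexing concrete, here is a minimal userspace C sketch of the pattern this commit restores. The struct page, put_page(), release_pages() and map_one_page() definitions below are stand-ins for the kernel objects and helpers (assumptions for illustration, not the real kernel API): the single-page helper drops its own reference to the failing page, so the caller must release only pages j+1 through npages-1.

#include <stdio.h>

/* Hypothetical stand-ins for the kernel objects and helpers. */
struct page { int refcount; };

static void put_page(struct page *page)
{
	page->refcount--;
}

static void release_pages(struct page **pages, int nr)
{
	for (int i = 0; i < nr; i++)
		put_page(pages[i]);
}

/*
 * Models ib_umem_odp_map_dma_single_page(): on error it drops its own
 * reference to the page before returning, which is why the caller must
 * not release that page again.
 */
static int map_one_page(struct page *page, int fail)
{
	if (fail) {
		put_page(page);
		return -1;
	}
	return 0;
}

int main(void)
{
	struct page pages[4] = { {1}, {1}, {1}, {1} };
	struct page *local_page_list[4] = {
		&pages[0], &pages[1], &pages[2], &pages[3]
	};
	int npages = 4, ret = 0;

	for (int j = 0; j < npages; j++) {
		ret = map_one_page(local_page_list[j], j == 1 /* fail on page 1 */);
		if (ret < 0) {
			/* Page j was already released; drop only j+1..npages-1. */
			if (npages - (j + 1) > 0)
				release_pages(&local_page_list[j + 1],
					      npages - (j + 1));
			break;
		}
	}

	/* Every refcount ends at 0: no leak and no double release. */
	for (int i = 0; i < npages; i++)
		printf("page %d refcount %d\n", i, pages[i].refcount);
	return 0;
}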