Commit Graph

19 Commits (a1e16bc7d5f7ca3599d8a7f061841c93a563665e)

Author SHA1 Message Date
Jason Gunthorpe ebc24096c4 RDMA/umem: Add rdma_umem_for_each_dma_block()
This helper does the same as rdma_for_each_block(), except it works on a
umem. This simplifies most of the call sites.
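
For illustration only (not taken from the patch; the umem field names are those of this kernel era), a typical call site changes roughly like this:

static void program_pages(struct ib_umem *umem, unsigned long pgsz)
{
	struct ib_block_iter biter;
	dma_addr_t addr;

	/* Before: the caller digs the SG list and entry count out of the umem. */
	rdma_for_each_block(umem->sg_head.sgl, &biter, umem->nmap, pgsz) {
		addr = rdma_block_iter_dma_address(&biter);
		/* ... write addr into the device page table ... */
	}

	/* After: the umem-aware helper hides those details. */
	rdma_umem_for_each_dma_block(umem, &biter, pgsz) {
		addr = rdma_block_iter_dma_address(&biter);
		/* ... same body ... */
	}
}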

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-09-09 15:33:17 -03:00
Weihang Li 82d07a4e46 RDMA/hns: Change all page_shift to unsigned
page_shift is used to calculate the page size. It is always non-negative
and should therefore be of an unsigned type.
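
A hedged micro-example of the usage pattern (values illustrative):

static size_t page_size_of(unsigned int page_shift)
{
	/* page_shift only ever feeds a shift like this, so a signed type
	 * buys nothing and just invites sign-compare warnings. */
	return 1UL << page_shift;	/* e.g. 12 -> 4 KiB pages */
}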

Link: https://lore.kernel.org/r/1589982799-28728-7-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-25 14:02:12 -03:00
Xi Wang 9581a356cc RDMA/hns: Rename macro for defining hns hardware page size
Rename PAGE_ADDR_SHIFT to HNS_HW_PAGE_SHIFT to make the code more
readable.

Link: https://lore.kernel.org/r/1588931159-56875-9-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-19 20:54:59 -03:00
Xi Wang 2929c40f08 RDMA/hns: Remove unused MTT functions
The MTT (Memory Translate Table) interface is no longer used to configure
buffer addresses into the BT (Base Address Table) that requires driver
mapping.  Because the MTT is not compatible with the multi-hop addressing
of the hip08, it has been replaced by the MTR (Memory Translate Region)
interface, and all the MTT functions should be removed.

Link: https://lore.kernel.org/r/1588071823-40200-3-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-06 17:26:43 -03:00
Xi Wang cc23267aed RDMA/hns: Optimize hns buffer allocation flow
When the value of nbufs is 1, the buffer is in direct mode, which may cause
confusion. So optimize the current code to make it easier to maintain and
understand.
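
A purely hypothetical sketch of the intent (field and parameter names are not the driver's):

struct example_buf {
	bool is_direct;		/* one contiguous DMA region */
	u32 npages;		/* otherwise: a list of page-sized chunks */
};

static void choose_alloc_mode(struct example_buf *buf, u32 size, u32 max_direct)
{
	/* Make the "direct" case an explicit flag instead of something the
	 * reader must infer from nbufs == 1. */
	buf->is_direct = size <= max_direct;
}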

Link: https://lore.kernel.org/r/1586779091-51410-3-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-04-22 15:59:54 -03:00
Yixian Liu 1ceb0b11a8 RDMA/hns: Fix non-standard error codes
It is better to return a Linux error code than to define a private constant.
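
For illustration (names hypothetical), the pattern is a standard errno where a driver-private status value used to be returned:

static int check_wqe_index(u32 wqe_idx, u32 depth)
{
	if (wqe_idx >= depth)
		return -EINVAL;	/* rather than a private error constant */
	return 0;
}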

Link: https://lore.kernel.org/r/1572952082-6681-9-git-send-email-liweihang@hisilicon.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-11-08 16:37:54 -04:00
Lijun Ou e9816ddf2a RDMA/hns: Cleanup unnecessary exported symbols
This patch removes hns-roce.ko so that all the exported symbols in the
common part can be cleaned up.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-06-25 14:48:44 -03:00
Colin Ian King 7ef7587541 RDMA/hns: fix potential integer overflow on left shift
There is a potential integer overflow when int i is left-shifted, as this
is evaluated using 32-bit arithmetic but is being used in a context that
expects an expression of type dma_addr_t.  Fix this by casting integer i
to dma_addr_t before shifting to avoid the overflow.
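
A hedged reconstruction of the overflow class being fixed (not the driver's exact code):

static dma_addr_t page_dma_addr(dma_addr_t base, int i, unsigned int page_shift)
{
	/* Bug pattern: "i << page_shift" is evaluated as a 32-bit int and can
	 * wrap before it is widened to dma_addr_t.
	 *
	 *	return base + (i << page_shift);
	 */

	/* Fix: widen i first so the shift happens in 64-bit arithmetic. */
	return base + ((dma_addr_t)i << page_shift);
}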

Addresses-Coverity: ("Unintentional integer overflow")
Fixes: 2ac0bc5e72 ("RDMA/hns: Add a group interfaces for optimizing buffers getting flow")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-06-25 10:18:19 -03:00
Lijun Ou 2ac0bc5e72 RDMA/hns: Add a group interfaces for optimizing buffers getting flow
Currently, the code for getting umem and kmem buffers exists in many files;
this patch adds a group of interfaces to simplify the buffer getting flow.
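
A hedged sketch of the idea, with illustrative prototypes (not necessarily the exact ones added here): one pair of helpers collects the DMA addresses of a kernel buffer or a umem, so each consumer no longer open-codes the walk.

/* Prototypes are illustrative only. */
int example_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
			  int buf_cnt, struct hns_roce_buf *buf);
int example_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
			  int buf_cnt, struct ib_umem *umem, int page_shift);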

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2019-06-20 12:56:34 -04:00
Luis Chamberlain 750afb08ca cross-tree: phase out dma_zalloc_coherent()
We already need to zero out memory for dma_alloc_coherent(); as such,
using dma_zalloc_coherent() is superfluous. Phase it out.

This change was generated with the following Coccinelle SmPL patch:

@ replace_dma_zalloc_coherent @
expression dev, size, data, handle, flags;
@@

-dma_zalloc_coherent(dev, size, handle, flags)
+dma_alloc_coherent(dev, size, handle, flags)

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
[hch: re-ran the script on the latest tree]
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-01-08 07:58:37 -05:00
Lijun Ou 5c1f167af1 RDMA/hns: Init SRQ table for hip08
This patch initializes the hem resources for the SRQ table, including
the SRQWQE and SRQWQE index resources.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-12-05 07:59:13 -07:00
YueHaibing 8c61b24585 IB/hns: Use zeroing memory allocator instead of allocator/memset
Use dma_zalloc_coherent() for allocating zeroed memory and
remove the unnecessary memset() calls.
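
Illustrative before/after for this class of change (the zeroing API shown is the one that existed at the time):

static void *alloc_table(struct device *dev, size_t size, dma_addr_t *handle)
{
	/* Before: allocate, then clear by hand.
	 *
	 *	void *buf = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
	 *	if (buf)
	 *		memset(buf, 0, size);
	 *	return buf;
	 */

	/* After: let the allocator hand back zeroed memory. */
	return dma_zalloc_coherent(dev, size, handle, GFP_KERNEL);
}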

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-06-04 10:50:04 -06:00
Wei Hu(Xavier) b1c1583509 RDMA/hns: Get rid of virt_to_page and vmap calls after dma_alloc_coherent
In general dma_alloc_coherent() returns a CPU virtual address and
a DMA address, and we have no guarantee that the virtual address
is either in the linear map or vmalloc. It could be in some other special
place. We have no guarantee that the underlying memory even has
an associated struct page at all.

The current code contains incorrect usage as below:
dma_alloc_coherent + virt_to_page + vmap. This will probably
introduce coherency problems. This patch fixes it by getting rid of
the virt_to_page and vmap calls, at Leon's suggestion. The related
link: https://lkml.org/lkml/2017/11/7/34
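
A hedged reconstruction of the removed anti-pattern (not the driver's exact code):

static void *remap_coherent_buf(struct device *dev, size_t size,
				dma_addr_t *handle)
{
	void *cpu_addr = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
	struct page *pg;

	if (!cpu_addr)
		return NULL;

	/* WRONG: cpu_addr is not guaranteed to be backed by a struct page,
	 * and creating a second mapping with vmap() can break coherency. */
	pg = virt_to_page(cpu_addr);
	return vmap(&pg, 1, VM_MAP, PAGE_KERNEL);
}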

Fixes: 9a44353 ("IB/hns: Add driver files for hns RoCE driver")
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-01 12:21:27 -07:00
Wei Hu(Xavier) 9a8982dc89 RDMA/hns: Support WQE/CQE/PBL page size configurable feature in hip08
This patch adds support for the configurable WQE, CQE and PBL page size
feature, which includes the base address page size and the buffer page size.
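
A hedged sketch of the idea (parameter name illustrative): the hardware page sizes become an extra shift on top of PAGE_SHIFT instead of being fixed to the kernel page size.

static unsigned long hw_page_size(unsigned int extra_pg_shift)
{
	/* extra_pg_shift == 0 means PAGE_SIZE; larger values select bigger
	 * hardware pages for WQE/CQE/PBL buffers and their base-address
	 * tables. */
	return 1UL << (PAGE_SHIFT + extra_pg_shift);
}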

Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-10-25 13:37:07 -04:00
Wei Hu(Xavier) 13ca970e36 RDMA/hns: Modify assignment device variable to support both PCI device and platform device
In order to support the scalability of the hardware versions, the
features irrelevant to the hardware will be located in hns-roce.ko,
and the hardware-relevant operations will be located in hns_roce_hw_v1.ko
or hns_roce_hw_v2.ko depending on the chip series.

The hip08 RoCE engine is a PCI device, while the hip06 RoCE engine is a
platform device. In order to support both platform devices and PCI devices,
we replace &hr_dev->pdev->dev with hr_dev->dev in hns-roce.ko as below:
	Before modification:
		struct device *dev = &hr_dev->pdev->dev;
	After modification:
		struct device *dev = hr_dev->dev;

	The related structure:
	struct hns_roce_dev {
		...
		struct platform_device  *pdev;
		struct pci_dev		*pci_dev;
		struct device		*dev;
		...
	}
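
A hedged sketch (not quoted from the patch) of how the shared pointer can be filled in once at probe time, so the common code never needs to know the bus type:

	/* hip06: platform device probe (illustrative) */
	hr_dev->pdev = pdev;
	hr_dev->dev  = &pdev->dev;

	/* hip08: PCI device probe (illustrative) */
	hr_dev->pci_dev = pci_dev;
	hr_dev->dev     = &pci_dev->dev;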

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-09-27 08:34:55 -04:00
Wei Hu(Xavier) 08805fdbeb RDMA/hns: Split hw v1 driver from hns roce driver
The hardware-relevant definitions and operations are implemented
in the hns_roce_hw_v* files. Depending on the chip, the file is
named hns_roce_hw_v1.c, hns_roce_hw_v2.c, etc.

The general software process flow, common structures and allocation
algorithms are implemented in the other files of the hns roce driver.

In order to support the scalability of the hardware versions, the
common driver features are in hns-roce.ko, and the hardware-relevant
operations are in hns_roce_hw_v1.ko or hns_roce_hw_v2.ko depending on
the chip series.
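
A hedged sketch of the split (struct and op names illustrative): the common module drives the hardware through an ops table that each hw module fills in.

struct example_hw_ops {
	int (*hw_init)(struct hns_roce_dev *hr_dev);
	void (*hw_exit)(struct hns_roce_dev *hr_dev);
	int (*post_send)(struct ib_qp *qp, struct ib_send_wr *wr,
			 struct ib_send_wr **bad_wr);
};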

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-09-27 08:34:55 -04:00
Matan Barak e89bf462b6 IB/hns: Support compile test for hns RoCE driver
Compiling the hns RoCE driver currently requires the ARM architecture.
In order to simplify development of IB/core, support compile testing.
Add the necessary includes for that too.

Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-07-28 14:48:26 -04:00
Wei Hu (Xavier) 5e6ff78a22 IB/hns: Change qpn allocation to round-robin mode.
When using CM to establish connections, a QP number that was just freed
will be rejected by the IB core. To fix this problem, we change the QPN
allocation to round-robin mode: we add a round-robin mode for allocating
resources from a bitmap, use it for QP numbers, and keep the
non-round-robin mode for other resources such as CQ numbers, PD numbers,
etc.
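
A minimal sketch of round-robin id allocation from a bitmap (illustrative, not the driver's exact code): start searching after the last id handed out and wrap around, so a just-freed id is not reused immediately.

static int alloc_id_round_robin(unsigned long *bitmap, u32 max, u32 *last)
{
	u32 id = find_next_zero_bit(bitmap, max, *last + 1);

	if (id >= max)
		id = find_first_zero_bit(bitmap, max);
	if (id >= max)
		return -ENOMEM;

	set_bit(id, bitmap);
	*last = id;
	return id;
}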

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2016-12-03 14:20:42 -05:00
oulijun 9a4435375c IB/hns: Add driver files for hns RoCE driver
These are the various new source code files for the Hisilicon
RoCE driver for the ARM architecture.

Signed-off-by: Wei Hu <xavier.huwei@huawei.com>
Signed-off-by: Nenglong Zhao <zhaonenglong@hisilicon.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2016-08-22 14:02:32 -04:00