
RDMA subsystem updates for 5.5

Mainly a collection of smaller driver updates this cycle.
 
 - Various driver updates and bug fixes for siw, bnxt_re, hns, qedr,
   iw_cxgb4, vmw_pvrdma, mlx5
 
 - Improvements in SRPT from working with iWarp
 
 - SRIOV VF support for bnxt_re
 
 - Skeleton kernel-doc files for drivers/infiniband
 
 - User visible counters for events related to ODP
 
 - Common code for tracking of mmap lifetimes so that drivers can link HW
   object lifetime to a VMA
 
 - ODP bug fixes and rework
 
 - RDMA READ support for efa
 
 - Removal of the very old cxgb3 driver
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEfB7FMLh+8QxL+6i3OG33FX4gmxoFAl3dwFYACgkQOG33FX4g
 mxohHRAAnkUr0SKh1uV7vuhA8sUlejkwAaw2V2Sm3E1y4GiPpfRAak/FcbEu9F8l
 E7Q7VI04DRKTTpTRmREVyKYjlKr5QOA4mSryNMfBZobjkp+t++U1GC9YKL3zh65U
 odyJedtv3zKCLzV9RBwX06C8Vi1PQNQp7RbQ42xH/IyKXJIZPHeCUeKv7PsfTWVo
 vhuXFc9pPwqHDfRVTbj6s1g+OwVuRHc+SWep6eTKLGYvt8CqdN9WEpA0sJrlwPet
 NLTZTrFDpuC12RPc4Lo5c7R5MeAzHgojZCeSFaL2DhJLOx3kfmU30wG+rV94Lvsq
 Y2yt6BwKeLleKzpR5ApkuIVHQt2KECPrJbmVLDFqi+rT7yvzsd2AB+uiCS50+LPG
 h2UpWJdKWtrnSzTqbJQieXd+oT253Dk+ciy7zbdPSGPwz1dc/Cna9TTn26X2SezR
 bmRhHOykrh7LCNrv/7jiSq/zWApGTlR9YZ86x9Li+lJ3kkG+7d2DBEofDU+oKE5F
 p0+1Cer1td5IkN3N8oDpvyXAbCIv7qdWwXB2Cm7dSfUETIpAKnXiBxFCHyvmsuus
 7gGyKZh2490N3SoIL1muu12dIvhPC2Obg9ckCNU3ldw9+uPJkCzrs+n+JtkIW0mi
 i1fMjRQ6FumVT6/KdavYgeCsHmItVqv03SGdFsHF3SGwvWo6y8Y=
 =UV2u
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma updates from Jason Gunthorpe:
 "Again another fairly quiet cycle with few notable core code changes
  and the usual variety of driver bug fixes and small improvements.

   - Various driver updates and bug fixes for siw, bnxt_re, hns, qedr,
     iw_cxgb4, vmw_pvrdma, mlx5

   - Improvements in SRPT from working with iWarp

   - SRIOV VF support for bnxt_re

   - Skeleton kernel-doc files for drivers/infiniband

   - User visible counters for events related to ODP

   - Common code for tracking of mmap lifetimes so that drivers can link
     HW object lifetime to a VMA

   - ODP bug fixes and rework

   - RDMA READ support for efa

   - Removal of the very old cxgb3 driver"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (168 commits)
  RDMA/hns: Delete unnecessary callback functions for cq
  RDMA/hns: Rename the functions used inside creating cq
  RDMA/hns: Redefine the member of hns_roce_cq struct
  RDMA/hns: Redefine interfaces used in creating cq
  RDMA/efa: Expose RDMA read related attributes
  RDMA/efa: Support remote read access in MR registration
  RDMA/efa: Store network attributes in device attributes
  IB/hfi1: remove redundant assignment to variable ret
  RDMA/bnxt_re: Fix missing le16_to_cpu
  RDMA/bnxt_re: Fix stat push into dma buffer on gen p5 devices
  RDMA/bnxt_re: Fix chip number validation Broadcom's Gen P5 series
  RDMA/bnxt_re: Fix Kconfig indentation
  IB/mlx5: Implement callbacks for getting VFs GUID attributes
  IB/ipoib: Add ndo operation for getting VFs GUID attributes
  IB/core: Add interfaces to get VF node and port GUIDs
  net/core: Add support for getting VF GUIDs
  RDMA/qedr: Fix null-pointer dereference when calling rdma_user_mmap_get_offset
  RDMA/cm: Use refcount_t type for refcount variable
  IB/mlx5: Support extended number of strides for Striding RQ
  IB/mlx4: Update HW GID table while adding vlan GID
  ...
Linus Torvalds 2019-11-27 10:17:28 -08:00
commit d768869728
184 changed files with 4243 additions and 12906 deletions

View File

@ -314,25 +314,6 @@ Description:
board_id: (RO) Manufacturing board ID
sysfs interface for Chelsio T3 RDMA Driver (cxgb3)
--------------------------------------------------
What: /sys/class/infiniband/cxgb3_X/hw_rev
What: /sys/class/infiniband/cxgb3_X/hca_type
What: /sys/class/infiniband/cxgb3_X/board_id
Date: Feb, 2007
KernelVersion: v2.6.21
Contact: linux-rdma@vger.kernel.org
Description:
hw_rev: (RO) Hardware revision number
hca_type: (RO) HCA type. Here it is a driver short name.
It should normally match the name in its bus
driver structure (e.g. pci_driver::name).
board_id: (RO) Manufacturing board id
sysfs interface for Mellanox ConnectX HCA IB driver (mlx4)
----------------------------------------------------------

View File

@ -67,6 +67,8 @@ Description: Interface for making ib_srp connect to a new target.
initiator is allowed to queue per SCSI host. The default
value for this parameter is 62. The lowest supported value
is 2.
* max_it_iu_size, a decimal number specifying the maximum
initiator to target information unit length.
What: /sys/class/infiniband_srp/srp-<hca>-<port_number>/ibdev
Date: January 2, 2006

View File

@ -5,24 +5,6 @@ DMA attributes
This document describes the semantics of the DMA attributes that are
defined in linux/dma-mapping.h.
DMA_ATTR_WRITE_BARRIER
----------------------
DMA_ATTR_WRITE_BARRIER is a (write) barrier attribute for DMA. DMA
to a memory region with the DMA_ATTR_WRITE_BARRIER attribute forces
all pending DMA writes to complete, and thus provides a mechanism to
strictly order DMA from a device across all intervening busses and
bridges. This barrier is not specific to a particular type of
interconnect, it applies to the system as a whole, and so its
implementation must account for the idiosyncrasies of the system all
the way from the DMA device to memory.
As an example of a situation where DMA_ATTR_WRITE_BARRIER would be
useful, suppose that a device does a DMA write to indicate that data is
ready and available in memory. The DMA of the "completion indication"
could race with data DMA. Mapping the memory used for completion
indications with DMA_ATTR_WRITE_BARRIER would prevent the race.
DMA_ATTR_WEAK_ORDERING
----------------------

View File

@ -26,6 +26,7 @@ available subsections can be seen below.
device_link
component
message-based
infiniband
sound
frame-buffer
regulator

View File

@ -0,0 +1,127 @@
===========================================
InfiniBand and Remote DMA (RDMA) Interfaces
===========================================
Introduction and Overview
=========================
TBD
InfiniBand core interfaces
==========================
.. kernel-doc:: drivers/infiniband/core/iwpm_util.h
:internal:
.. kernel-doc:: drivers/infiniband/core/cq.c
:export:
.. kernel-doc:: drivers/infiniband/core/cm.c
:export:
.. kernel-doc:: drivers/infiniband/core/rw.c
:export:
.. kernel-doc:: drivers/infiniband/core/device.c
:export:
.. kernel-doc:: drivers/infiniband/core/verbs.c
:export:
.. kernel-doc:: drivers/infiniband/core/packer.c
:export:
.. kernel-doc:: drivers/infiniband/core/sa_query.c
:export:
.. kernel-doc:: drivers/infiniband/core/ud_header.c
:export:
.. kernel-doc:: drivers/infiniband/core/fmr_pool.c
:export:
.. kernel-doc:: drivers/infiniband/core/umem.c
:export:
.. kernel-doc:: drivers/infiniband/core/umem_odp.c
:export:
RDMA Verbs transport library
============================
.. kernel-doc:: drivers/infiniband/sw/rdmavt/mr.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/rc.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/ah.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/vt.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/cq.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/qp.c
:export:
.. kernel-doc:: drivers/infiniband/sw/rdmavt/mcast.c
:export:
Upper Layer Protocols
=====================
iSCSI Extensions for RDMA (iSER)
--------------------------------
.. kernel-doc:: drivers/infiniband/ulp/iser/iscsi_iser.h
:internal:
.. kernel-doc:: drivers/infiniband/ulp/iser/iscsi_iser.c
:functions: iscsi_iser_pdu_alloc iser_initialize_task_headers \
iscsi_iser_task_init iscsi_iser_mtask_xmit iscsi_iser_task_xmit \
iscsi_iser_cleanup_task iscsi_iser_check_protection \
iscsi_iser_conn_create iscsi_iser_conn_bind \
iscsi_iser_conn_start iscsi_iser_conn_stop \
iscsi_iser_session_destroy iscsi_iser_session_create \
iscsi_iser_set_param iscsi_iser_ep_connect iscsi_iser_ep_poll \
iscsi_iser_ep_disconnect
.. kernel-doc:: drivers/infiniband/ulp/iser/iser_initiator.c
:internal:
.. kernel-doc:: drivers/infiniband/ulp/iser/iser_verbs.c
:internal:
Omni-Path (OPA) Virtual NIC support
-----------------------------------
.. kernel-doc:: drivers/infiniband/ulp/opa_vnic/opa_vnic_internal.h
:internal:
.. kernel-doc:: drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h
:internal:
.. kernel-doc:: drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c
:internal:
.. kernel-doc:: drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c
:internal:
InfiniBand SCSI RDMA protocol target support
--------------------------------------------
.. kernel-doc:: drivers/infiniband/ulp/srpt/ib_srpt.h
:internal:
.. kernel-doc:: drivers/infiniband/ulp/srpt/ib_srpt.c
:internal:
iSCSI Extensions for RDMA (iSER) target support
-----------------------------------------------
.. kernel-doc:: drivers/infiniband/ulp/isert/ib_isert.c
:internal:

View File

@ -4471,14 +4471,6 @@ W: http://www.chelsio.com
S: Supported
F: drivers/scsi/cxgbi/cxgb3i
CXGB3 IWARP RNIC DRIVER (IW_CXGB3)
M: Potnuri Bharat Teja <bharat@chelsio.com>
L: linux-rdma@vger.kernel.org
W: http://www.openfabrics.org
S: Supported
F: drivers/infiniband/hw/cxgb3/
F: include/uapi/rdma/cxgb3-abi.h
CXGB4 CRYPTO DRIVER (chcr)
M: Atul Gupta <atul.gupta@chelsio.com>
L: linux-crypto@vger.kernel.org

View File

@ -83,7 +83,6 @@ config INFINIBAND_ADDR_TRANS_CONFIGFS
if INFINIBAND_USER_ACCESS || !INFINIBAND_USER_ACCESS
source "drivers/infiniband/hw/mthca/Kconfig"
source "drivers/infiniband/hw/qib/Kconfig"
source "drivers/infiniband/hw/cxgb3/Kconfig"
source "drivers/infiniband/hw/cxgb4/Kconfig"
source "drivers/infiniband/hw/efa/Kconfig"
source "drivers/infiniband/hw/i40iw/Kconfig"

View File

@ -11,7 +11,7 @@ ib_core-y := packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
device.o fmr_pool.o cache.o netlink.o \
roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
multicast.o mad.o smi.o agent.o mad_rmpp.o \
nldev.o restrack.o counters.o
nldev.o restrack.o counters.o ib_core_uverbs.o
ib_core-$(CONFIG_SECURITY_INFINIBAND) += security.o
ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o

View File

@ -819,22 +819,16 @@ static void cleanup_gid_table_port(struct ib_device *ib_dev, u8 port,
struct ib_gid_table *table)
{
int i;
bool deleted = false;
if (!table)
return;
mutex_lock(&table->lock);
for (i = 0; i < table->sz; ++i) {
if (is_gid_entry_valid(table->data_vec[i])) {
if (is_gid_entry_valid(table->data_vec[i]))
del_gid(ib_dev, port, table, i);
deleted = true;
}
}
mutex_unlock(&table->lock);
if (deleted)
dispatch_gid_change_event(ib_dev, port);
}
void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,

View File

@ -1,36 +1,10 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/*
* Copyright (c) 2004-2007 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
* Copyright (c) 2019, Mellanox Technologies inc. All rights reserved.
*/
#include <linux/completion.h>
@ -246,7 +220,7 @@ struct cm_work {
};
struct cm_timewait_info {
struct cm_work work; /* Must be first. */
struct cm_work work;
struct list_head list;
struct rb_node remote_qp_node;
struct rb_node remote_id_node;
@ -263,7 +237,7 @@ struct cm_id_private {
struct rb_node sidr_id_node;
spinlock_t lock; /* Do not acquire inside cm.lock */
struct completion comp;
atomic_t refcount;
refcount_t refcount;
/* Number of clients sharing this ib_cm_id. Only valid for listeners.
* Protected by the cm.lock spinlock. */
int listen_sharecount;
@ -308,7 +282,7 @@ static void cm_work_handler(struct work_struct *work);
static inline void cm_deref_id(struct cm_id_private *cm_id_priv)
{
if (atomic_dec_and_test(&cm_id_priv->refcount))
if (refcount_dec_and_test(&cm_id_priv->refcount))
complete(&cm_id_priv->comp);
}
@ -365,7 +339,7 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
m->ah = ah;
m->retries = cm_id_priv->max_cm_retries;
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
m->context[0] = cm_id_priv;
*msg = m;
@ -626,7 +600,7 @@ static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id)
cm_id_priv = xa_load(&cm.local_id_table, cm_local_id(local_id));
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
else
cm_id_priv = NULL;
}
@ -883,7 +857,7 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
INIT_LIST_HEAD(&cm_id_priv->prim_list);
INIT_LIST_HEAD(&cm_id_priv->altr_list);
atomic_set(&cm_id_priv->work_count, -1);
atomic_set(&cm_id_priv->refcount, 1);
refcount_set(&cm_id_priv->refcount, 1);
return &cm_id_priv->id;
error:
@ -1230,7 +1204,7 @@ struct ib_cm_id *ib_cm_insert_listen(struct ib_device *device,
spin_unlock_irqrestore(&cm.lock, flags);
return ERR_PTR(-EINVAL);
}
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
++cm_id_priv->listen_sharecount;
spin_unlock_irqrestore(&cm.lock, flags);
@ -1525,14 +1499,6 @@ static int cm_issue_rej(struct cm_port *port,
return ret;
}
static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid,
__be32 local_qpn, __be32 remote_qpn)
{
return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) ||
((local_ca_guid == remote_ca_guid) &&
(be32_to_cpu(local_qpn) > be32_to_cpu(remote_qpn))));
}
static bool cm_req_has_alt_path(struct cm_req_msg *req_msg)
{
return ((req_msg->alt_local_lid) ||
@ -1895,8 +1861,8 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
NULL, 0);
goto out;
}
atomic_inc(&listen_cm_id_priv->refcount);
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&listen_cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
cm_id_priv->id.state = IB_CM_REQ_RCVD;
atomic_inc(&cm_id_priv->work_count);
spin_unlock_irq(&cm.lock);
@ -2052,7 +2018,7 @@ static int cm_req_handler(struct cm_work *work)
return 0;
rejected:
atomic_dec(&cm_id_priv->refcount);
refcount_dec(&cm_id_priv->refcount);
cm_deref_id(listen_cm_id_priv);
free_timeinfo:
kfree(cm_id_priv->timewait_info);
@ -2826,7 +2792,7 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
cm_local_id(timewait_info->work.local_id));
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
else
cm_id_priv = NULL;
}
@ -3434,7 +3400,7 @@ static int cm_timewait_handler(struct cm_work *work)
struct cm_id_private *cm_id_priv;
int ret;
timewait_info = (struct cm_timewait_info *)work;
timewait_info = container_of(work, struct cm_timewait_info, work);
spin_lock_irq(&cm.lock);
list_del(&timewait_info->list);
spin_unlock_irq(&cm.lock);
@ -3596,8 +3562,8 @@ static int cm_sidr_req_handler(struct cm_work *work)
cm_reject_sidr_req(cm_id_priv, IB_SIDR_UNSUPPORTED);
goto out; /* No match. */
}
atomic_inc(&cur_cm_id_priv->refcount);
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cur_cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
spin_unlock_irq(&cm.lock);
cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler;

View File

@ -1,37 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
/*
* Copyright (c) 2004, 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING the madirectory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use source and binary forms, with or
* withmodification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retathe above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHWARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS THE
* SOFTWARE.
* Copyright (c) 2019, Mellanox Technologies inc. All rights reserved.
*/
#if !defined(CM_MSGS_H)
#ifndef CM_MSGS_H
#define CM_MSGS_H
#include <rdma/ib_mad.h>

View File

@ -1,36 +1,9 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/*
* Copyright (c) 2005 Voltaire Inc. All rights reserved.
* Copyright (c) 2002-2005, Network Appliance, Inc. All rights reserved.
* Copyright (c) 1999-2005, Mellanox Technologies, Inc. All rights reserved.
* Copyright (c) 1999-2019, Mellanox Technologies, Inc. All rights reserved.
* Copyright (c) 2005-2006 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/completion.h>
@ -2531,7 +2504,9 @@ EXPORT_SYMBOL(rdma_set_service_type);
* This function should be called before rdma_connect() on active side,
* and on passive side before rdma_accept(). It is applicable to primary
* path only. The timeout will affect the local side of the QP, it is not
* negotiated with remote side and zero disables the timer.
* negotiated with remote side and zero disables the timer. In case it is
* set before rdma_resolve_route, the value will also be used to determine
* PacketLifeTime for RoCE.
*
* Return: 0 for success
*/
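As a usage illustration only (not taken from this patch set), an RDMA CM consumer might wire this up roughly as follows; cm_id, conn_param and the err label come from an assumed surrounding connection setup, and the timeout argument is the IBTA exponent (4.096 usec * 2^timeout):

	/* Hypothetical caller: exponent 18 gives a local ACK timeout of
	 * 4.096 usec * 2^18, roughly 1.07 s. Setting it before
	 * rdma_resolve_route() lets the same value also feed the RoCE
	 * PacketLifeTime described above.
	 */
	ret = rdma_set_ack_timeout(cm_id, 18);
	if (ret)
		goto err;
	ret = rdma_resolve_route(cm_id, 2000 /* ms */);
	if (ret)
		goto err;
	ret = rdma_connect(cm_id, &conn_param);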
@ -2828,22 +2803,65 @@ static int cma_resolve_iw_route(struct rdma_id_private *id_priv)
return 0;
}
static int iboe_tos_to_sl(struct net_device *ndev, int tos)
static int get_vlan_ndev_tc(struct net_device *vlan_ndev, int prio)
{
int prio;
struct net_device *dev;
prio = rt_tos2priority(tos);
dev = is_vlan_dev(ndev) ? vlan_dev_real_dev(ndev) : ndev;
dev = vlan_dev_real_dev(vlan_ndev);
if (dev->num_tc)
return netdev_get_prio_tc_map(dev, prio);
#if IS_ENABLED(CONFIG_VLAN_8021Q)
return (vlan_dev_get_egress_qos_mask(vlan_ndev, prio) &
VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
}
struct iboe_prio_tc_map {
int input_prio;
int output_tc;
bool found;
};
static int get_lower_vlan_dev_tc(struct net_device *dev, void *data)
{
struct iboe_prio_tc_map *map = data;
if (is_vlan_dev(dev))
map->output_tc = get_vlan_ndev_tc(dev, map->input_prio);
else if (dev->num_tc)
map->output_tc = netdev_get_prio_tc_map(dev, map->input_prio);
else
map->output_tc = 0;
/* We are interested only in first level VLAN device, so always
* return 1 to stop iterating over next level devices.
*/
map->found = true;
return 1;
}
static int iboe_tos_to_sl(struct net_device *ndev, int tos)
{
struct iboe_prio_tc_map prio_tc_map = {};
int prio = rt_tos2priority(tos);
/* If VLAN device, get it directly from the VLAN netdev */
if (is_vlan_dev(ndev))
return (vlan_dev_get_egress_qos_mask(ndev, prio) &
VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
#endif
return 0;
return get_vlan_ndev_tc(ndev, prio);
prio_tc_map.input_prio = prio;
rcu_read_lock();
netdev_walk_all_lower_dev_rcu(ndev,
get_lower_vlan_dev_tc,
&prio_tc_map);
rcu_read_unlock();
/* If map is found from lower device, use it; Otherwise
* continue with the current netdevice to get priority to tc map.
*/
if (prio_tc_map.found)
return prio_tc_map.output_tc;
else if (ndev->num_tc)
return netdev_get_prio_tc_map(ndev, prio);
else
return 0;
}
static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
@ -2897,7 +2915,16 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
route->path_rec->rate = iboe_get_rate(ndev);
dev_put(ndev);
route->path_rec->packet_life_time_selector = IB_SA_EQ;
route->path_rec->packet_life_time = CMA_IBOE_PACKET_LIFETIME;
/* In case ACK timeout is set, use this value to calculate
* PacketLifeTime. As per IBTA 12.7.34,
* local ACK timeout = (2 * PacketLifeTime + Local CAs ACK delay).
* Assuming a negligible local ACK delay, we can use
* PacketLifeTime = local ACK timeout/2
* as a reasonable approximation for RoCE networks.
*/
route->path_rec->packet_life_time = id_priv->timeout_set ?
id_priv->timeout - 1 : CMA_IBOE_PACKET_LIFETIME;
if (!route->path_rec->mtu) {
ret = -EINVAL;
goto err2;
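Worked example of the exponent arithmetic above (illustrative values; both the CM Local ACK Timeout and the path-record PacketLifeTime encode time as 4.096 usec * 2^value, so halving the time is a decrement of the exponent):

	/* A user-set ack timeout exponent of 18 gives:
	 *   local ACK timeout = 4.096 us * 2^18 ~= 1.07 s
	 *   PacketLifeTime    = local ACK timeout / 2
	 *                     = 4.096 us * 2^(18 - 1) ~= 0.54 s
	 * which is why the code stores id_priv->timeout - 1.
	 */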

View File

@ -388,4 +388,15 @@ int ib_device_set_netns_put(struct sk_buff *skb,
int rdma_nl_net_init(struct rdma_dev_net *rnet);
void rdma_nl_net_exit(struct rdma_dev_net *rnet);
struct rdma_umap_priv {
struct vm_area_struct *vma;
struct list_head list;
struct rdma_user_mmap_entry *entry;
};
void rdma_umap_priv_init(struct rdma_umap_priv *priv,
struct vm_area_struct *vma,
struct rdma_user_mmap_entry *entry);
#endif /* _CORE_PRIV_H */

View File

@ -149,11 +149,18 @@ static bool auto_mode_match(struct ib_qp *qp, struct rdma_counter *counter,
struct auto_mode_param *param = &counter->mode.param;
bool match = true;
if (!rdma_is_visible_in_pid_ns(&qp->res))
return false;
/* Ensure that counter belongs to the right PID */
if (task_pid_nr(counter->res.task) != task_pid_nr(qp->res.task))
/*
* Ensure that counter belongs to the right PID. This operation can
* race with user space which kills the process and leaves QP and
* counters orphans.
*
* It is not a big deal because the exited task will leave both QP and
* counter in the same bucket of zombie process. Just ensure that the
* process is still alive before proceeding.
*
*/
if (task_pid_nr(counter->res.task) != task_pid_nr(qp->res.task) ||
!task_pid_nr(qp->res.task))
return false;
if (auto_mask & RDMA_COUNTER_MASK_QP_TYPE)
@ -229,9 +236,6 @@ static struct rdma_counter *rdma_get_counter_auto_mode(struct ib_qp *qp,
rt = &dev->res[RDMA_RESTRACK_COUNTER];
xa_lock(&rt->xa);
xa_for_each(&rt->xa, id, res) {
if (!rdma_is_visible_in_pid_ns(res))
continue;
counter = container_of(res, struct rdma_counter, res);
if ((counter->device != qp->device) || (counter->port != port))
goto next;
@ -412,9 +416,6 @@ static struct ib_qp *rdma_counter_get_qp(struct ib_device *dev, u32 qp_num)
if (IS_ERR(res))
return NULL;
if (!rdma_is_visible_in_pid_ns(res))
goto err;
qp = container_of(res, struct ib_qp, res);
if (qp->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW))
goto err;
@ -445,11 +446,6 @@ static struct rdma_counter *rdma_get_counter_by_id(struct ib_device *dev,
if (IS_ERR(res))
return NULL;
if (!rdma_is_visible_in_pid_ns(res)) {
rdma_restrack_put(res);
return NULL;
}
counter = container_of(res, struct rdma_counter, res);
kref_get(&counter->kref);
rdma_restrack_put(res);
@ -463,10 +459,15 @@ static struct rdma_counter *rdma_get_counter_by_id(struct ib_device *dev,
int rdma_counter_bind_qpn(struct ib_device *dev, u8 port,
u32 qp_num, u32 counter_id)
{
struct rdma_port_counter *port_counter;
struct rdma_counter *counter;
struct ib_qp *qp;
int ret;
port_counter = &dev->port_data[port].port_counter;
if (port_counter->mode.mode == RDMA_COUNTER_MODE_AUTO)
return -EINVAL;
qp = rdma_counter_get_qp(dev, qp_num);
if (!qp)
return -ENOENT;
@ -503,6 +504,7 @@ err:
int rdma_counter_bind_qpn_alloc(struct ib_device *dev, u8 port,
u32 qp_num, u32 *counter_id)
{
struct rdma_port_counter *port_counter;
struct rdma_counter *counter;
struct ib_qp *qp;
int ret;
@ -510,9 +512,13 @@ int rdma_counter_bind_qpn_alloc(struct ib_device *dev, u8 port,
if (!rdma_is_port_valid(dev, port))
return -EINVAL;
if (!dev->port_data[port].port_counter.hstats)
port_counter = &dev->port_data[port].port_counter;
if (!port_counter->hstats)
return -EOPNOTSUPP;
if (port_counter->mode.mode == RDMA_COUNTER_MODE_AUTO)
return -EINVAL;
qp = rdma_counter_get_qp(dev, qp_num);
if (!qp)
return -ENOENT;

View File

@ -128,17 +128,14 @@ module_param_named(netns_mode, ib_devices_shared_netns, bool, 0444);
MODULE_PARM_DESC(netns_mode,
"Share device among net namespaces; default=1 (shared)");
/**
* rdma_dev_access_netns() - Return whether a rdma device can be accessed
* rdma_dev_access_netns() - Return whether an rdma device can be accessed
* from a specified net namespace or not.
* @device: Pointer to rdma device which needs to be checked
* @dev: Pointer to rdma device which needs to be checked
* @net: Pointer to net namespace for which access is to be checked
*
* rdma_dev_access_netns() - Return whether a rdma device can be accessed
* from a specified net namespace or not. When
* rdma device is in shared mode, it ignores the
* net namespace. When rdma device is exclusive
* to a net namespace, rdma device net namespace is
* checked against the specified one.
* When the rdma device is in shared mode, it ignores the net namespace.
* When the rdma device is exclusive to a net namespace, rdma device net
* namespace is checked against the specified one.
*/
bool rdma_dev_access_netns(const struct ib_device *dev, const struct net *net)
{
@ -1199,9 +1196,21 @@ static void setup_dma_device(struct ib_device *device)
WARN_ON_ONCE(!parent);
device->dma_device = parent;
}
/* Setup default max segment size for all IB devices */
dma_set_max_seg_size(device->dma_device, SZ_2G);
if (!device->dev.dma_parms) {
if (parent) {
/*
* The caller did not provide DMA parameters, so
* 'parent' probably represents a PCI device. The PCI
* core sets the maximum segment size to 64
* KB. Increase this parameter to 2 GB.
*/
device->dev.dma_parms = parent->dma_parms;
dma_set_max_seg_size(device->dma_device, SZ_2G);
} else {
WARN_ON_ONCE(true);
}
}
}
/*
@ -1317,7 +1326,9 @@ out:
/**
* ib_register_device - Register an IB device with IB core
* @device:Device to register
* @device: Device to register
* @name: unique string device name. This may include a '%' which will
* cause a unique index to be added to the passed device name.
*
* Low-level drivers use ib_register_device() to register their
* devices with the IB core. All registered clients will receive a
@ -1444,7 +1455,7 @@ out:
/**
* ib_unregister_device - Unregister an IB device
* @device: The device to unregister
* @ib_dev: The device to unregister
*
* Unregister an IB device. All clients will receive a remove callback.
*
@ -1466,7 +1477,7 @@ EXPORT_SYMBOL(ib_unregister_device);
/**
* ib_unregister_device_and_put - Unregister a device while holding a 'get'
* device: The device to unregister
* @ib_dev: The device to unregister
*
* This is the same as ib_unregister_device(), except it includes an internal
* ib_device_put() that should match a 'get' obtained by the caller.
@ -1536,7 +1547,7 @@ static void ib_unregister_work(struct work_struct *work)
/**
* ib_unregister_device_queued - Unregister a device using a work queue
* device: The device to unregister
* @ib_dev: The device to unregister
*
* This schedules an asynchronous unregistration using a WQ for the device. A
* driver should use this to avoid holding locks while doing unregistration,
@ -2366,7 +2377,7 @@ int ib_modify_device(struct ib_device *device,
struct ib_device_modify *device_modify)
{
if (!device->ops.modify_device)
return -ENOSYS;
return -EOPNOTSUPP;
return device->ops.modify_device(device, device_modify_mask,
device_modify);
@ -2397,8 +2408,12 @@ int ib_modify_port(struct ib_device *device,
rc = device->ops.modify_port(device, port_num,
port_modify_mask,
port_modify);
else if (rdma_protocol_roce(device, port_num) &&
((port_modify->set_port_cap_mask & ~IB_PORT_CM_SUP) == 0 ||
(port_modify->clr_port_cap_mask & ~IB_PORT_CM_SUP) == 0))
rc = 0;
else
rc = rdma_protocol_roce(device, port_num) ? 0 : -ENOSYS;
rc = -EOPNOTSUPP;
return rc;
}
EXPORT_SYMBOL(ib_modify_port);
@ -2607,6 +2622,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, drain_sq);
SET_DEVICE_OP(dev_ops, enable_driver);
SET_DEVICE_OP(dev_ops, fill_res_entry);
SET_DEVICE_OP(dev_ops, fill_stat_entry);
SET_DEVICE_OP(dev_ops, get_dev_fw_str);
SET_DEVICE_OP(dev_ops, get_dma_mr);
SET_DEVICE_OP(dev_ops, get_hw_stats);
@ -2615,6 +2631,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, get_port_immutable);
SET_DEVICE_OP(dev_ops, get_vector_affinity);
SET_DEVICE_OP(dev_ops, get_vf_config);
SET_DEVICE_OP(dev_ops, get_vf_guid);
SET_DEVICE_OP(dev_ops, get_vf_stats);
SET_DEVICE_OP(dev_ops, init_port);
SET_DEVICE_OP(dev_ops, invalidate_range);
@ -2630,6 +2647,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, map_mr_sg_pi);
SET_DEVICE_OP(dev_ops, map_phys_fmr);
SET_DEVICE_OP(dev_ops, mmap);
SET_DEVICE_OP(dev_ops, mmap_free);
SET_DEVICE_OP(dev_ops, modify_ah);
SET_DEVICE_OP(dev_ops, modify_cq);
SET_DEVICE_OP(dev_ops, modify_device);

View File

@ -0,0 +1,335 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/*
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright 2018-2019 Amazon.com, Inc. or its affiliates. All rights reserved.
* Copyright 2019 Marvell. All rights reserved.
*/
#include <linux/xarray.h>
#include "uverbs.h"
#include "core_priv.h"
/**
* rdma_umap_priv_init() - Initialize the private data of a vma
*
* @priv: The already allocated private data
* @vma: The vm area struct that needs private data
* @entry: entry into the mmap_xa that needs to be linked with
* this vma
*
* Each time we map IO memory into user space this keeps track of the
* mapping. When the device is hot-unplugged we 'zap' the mmaps in user space
* to point to the zero page and allow the hot unplug to proceed.
*
* This is necessary for cases like PCI physical hot unplug as the actual BAR
* memory may vanish after this and access to it from userspace could MCE.
*
* RDMA drivers supporting disassociation must have their user space designed
* to cope in some way with their IO pages going to the zero page.
*
*/
void rdma_umap_priv_init(struct rdma_umap_priv *priv,
struct vm_area_struct *vma,
struct rdma_user_mmap_entry *entry)
{
struct ib_uverbs_file *ufile = vma->vm_file->private_data;
priv->vma = vma;
if (entry) {
kref_get(&entry->ref);
priv->entry = entry;
}
vma->vm_private_data = priv;
/* vm_ops is setup in ib_uverbs_mmap() to avoid module dependencies */
mutex_lock(&ufile->umap_lock);
list_add(&priv->list, &ufile->umaps);
mutex_unlock(&ufile->umap_lock);
}
EXPORT_SYMBOL(rdma_umap_priv_init);
/**
* rdma_user_mmap_io() - Map IO memory into a process
*
* @ucontext: associated user context
* @vma: the vma related to the current mmap call
* @pfn: pfn to map
* @size: size to map
* @prot: pgprot to use in remap call
* @entry: mmap_entry retrieved from rdma_user_mmap_entry_get(), or NULL
* if mmap_entry is not used by the driver
*
* This is to be called by drivers as part of their mmap() functions if they
* wish to send something like PCI-E BAR memory to userspace.
*
* Return -EINVAL on wrong flags or size, -EAGAIN on failure to map. 0 on
* success.
*/
int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
unsigned long pfn, unsigned long size, pgprot_t prot,
struct rdma_user_mmap_entry *entry)
{
struct ib_uverbs_file *ufile = ucontext->ufile;
struct rdma_umap_priv *priv;
if (!(vma->vm_flags & VM_SHARED))
return -EINVAL;
if (vma->vm_end - vma->vm_start != size)
return -EINVAL;
/* Driver is using this wrong, must be called by ib_uverbs_mmap */
if (WARN_ON(!vma->vm_file ||
vma->vm_file->private_data != ufile))
return -EINVAL;
lockdep_assert_held(&ufile->device->disassociate_srcu);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
vma->vm_page_prot = prot;
if (io_remap_pfn_range(vma, vma->vm_start, pfn, size, prot)) {
kfree(priv);
return -EAGAIN;
}
rdma_umap_priv_init(priv, vma, entry);
return 0;
}
EXPORT_SYMBOL(rdma_user_mmap_io);
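For illustration, a minimal sketch (not part of this commit) of a driver ->mmap() handler built on these helpers; my_entry_to_pfn() and the non-cached protection choice are hypothetical driver details:

	static int my_mmap(struct ib_ucontext *ucontext, struct vm_area_struct *vma)
	{
		struct rdma_user_mmap_entry *entry;
		int ret;

		/* Look up the entry inserted for this offset and take a reference */
		entry = rdma_user_mmap_entry_get(ucontext, vma);
		if (!entry)
			return -EINVAL;

		ret = rdma_user_mmap_io(ucontext, vma,
					my_entry_to_pfn(entry), /* hypothetical helper */
					vma->vm_end - vma->vm_start,
					pgprot_noncached(vma->vm_page_prot),
					entry);

		/* On success rdma_user_mmap_io() took its own reference via the
		 * umap priv, so ours can be dropped either way. */
		rdma_user_mmap_entry_put(entry);
		return ret;
	}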
/**
* rdma_user_mmap_entry_get_pgoff() - Get an entry from the mmap_xa
*
* @ucontext: associated user context
* @pgoff: The mmap offset >> PAGE_SHIFT
*
* This function is called when a user tries to mmap with an offset (returned
* by rdma_user_mmap_get_offset()) it initially received from the driver. The
* rdma_user_mmap_entry was created by the function
* rdma_user_mmap_entry_insert(). This function increases the refcnt of the
* entry so that it won't be deleted from the xarray in the meantime.
*
* Return a reference to an entry if it exists or NULL if there is no
* match. rdma_user_mmap_entry_put() must be called to put the reference.
*/
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get_pgoff(struct ib_ucontext *ucontext,
unsigned long pgoff)
{
struct rdma_user_mmap_entry *entry;
if (pgoff > U32_MAX)
return NULL;
xa_lock(&ucontext->mmap_xa);
entry = xa_load(&ucontext->mmap_xa, pgoff);
/*
* If refcount is zero, entry is already being deleted, driver_removed
* indicates that no further mmaps are possible and we are waiting for
* the active VMAs to be closed.
*/
if (!entry || entry->start_pgoff != pgoff || entry->driver_removed ||
!kref_get_unless_zero(&entry->ref))
goto err;
xa_unlock(&ucontext->mmap_xa);
ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#zx] returned\n",
pgoff, entry->npages);
return entry;
err:
xa_unlock(&ucontext->mmap_xa);
return NULL;
}
EXPORT_SYMBOL(rdma_user_mmap_entry_get_pgoff);
/**
* rdma_user_mmap_entry_get() - Get an entry from the mmap_xa
*
* @ucontext: associated user context
* @vma: the vma being mmap'd into
*
* This function is like rdma_user_mmap_entry_get_pgoff() except that it also
* checks that the VMA is correct.
*/
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
struct vm_area_struct *vma)
{
struct rdma_user_mmap_entry *entry;
if (!(vma->vm_flags & VM_SHARED))
return NULL;
entry = rdma_user_mmap_entry_get_pgoff(ucontext, vma->vm_pgoff);
if (!entry)
return NULL;
if (entry->npages * PAGE_SIZE != vma->vm_end - vma->vm_start) {
rdma_user_mmap_entry_put(entry);
return NULL;
}
return entry;
}
EXPORT_SYMBOL(rdma_user_mmap_entry_get);
static void rdma_user_mmap_entry_free(struct kref *kref)
{
struct rdma_user_mmap_entry *entry =
container_of(kref, struct rdma_user_mmap_entry, ref);
struct ib_ucontext *ucontext = entry->ucontext;
unsigned long i;
/*
* Erase all entries occupied by this single entry, this is deferred
* until all VMA are closed so that the mmap offsets remain unique.
*/
xa_lock(&ucontext->mmap_xa);
for (i = 0; i < entry->npages; i++)
__xa_erase(&ucontext->mmap_xa, entry->start_pgoff + i);
xa_unlock(&ucontext->mmap_xa);
ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#zx] removed\n",
entry->start_pgoff, entry->npages);
if (ucontext->device->ops.mmap_free)
ucontext->device->ops.mmap_free(entry);
}
/**
* rdma_user_mmap_entry_put() - Drop reference to the mmap entry
*
* @entry: an entry in the mmap_xa
*
* This function is called when the mapping is closed if it was
* an io mapping or when the driver is done with the entry for
* some other reason.
* Should be called after rdma_user_mmap_entry_get was called
* and entry is no longer needed. This function will erase the
* entry and free it if its refcnt reaches zero.
*/
void rdma_user_mmap_entry_put(struct rdma_user_mmap_entry *entry)
{
kref_put(&entry->ref, rdma_user_mmap_entry_free);
}
EXPORT_SYMBOL(rdma_user_mmap_entry_put);
/**
* rdma_user_mmap_entry_remove() - Drop reference to entry and
* mark it as unmmapable
*
* @entry: the entry to insert into the mmap_xa
*
* Drivers can call this to prevent userspace from creating more mappings for
* entry, however existing mmaps continue to exist and ops->mmap_free() will
* not be called until all user mmaps are destroyed.
*/
void rdma_user_mmap_entry_remove(struct rdma_user_mmap_entry *entry)
{
if (!entry)
return;
entry->driver_removed = true;
kref_put(&entry->ref, rdma_user_mmap_entry_free);
}
EXPORT_SYMBOL(rdma_user_mmap_entry_remove);
/**
* rdma_user_mmap_entry_insert() - Insert an entry to the mmap_xa
*
* @ucontext: associated user context.
* @entry: the entry to insert into the mmap_xa
* @length: length of the address that will be mmapped
*
* This function should be called by drivers that use the rdma_user_mmap
* interface for implementing their mmap syscall. A database of mmap offsets is
* handled in the core and helper functions are provided to insert entries
* into the database and extract entries when the user calls mmap with the
* given offset. The function allocates a unique page offset that should be
* provided to user, the user will use the offset to retrieve information such
* as address to be mapped and how.
*
* Return: 0 on success and -ENOMEM on failure
*/
int rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext,
struct rdma_user_mmap_entry *entry,
size_t length)
{
struct ib_uverbs_file *ufile = ucontext->ufile;
XA_STATE(xas, &ucontext->mmap_xa, 0);
u32 xa_first, xa_last, npages;
int err;
u32 i;
if (!entry)
return -EINVAL;
kref_init(&entry->ref);
entry->ucontext = ucontext;
/*
* We want the whole allocation to be done without interruption from a
* different thread. The allocation requires finding a free range and
* storing. During the xa_insert the lock could be released, possibly
* allowing another thread to choose the same range.
*/
mutex_lock(&ufile->umap_lock);
xa_lock(&ucontext->mmap_xa);
/* We want to find an empty range */
npages = (u32)DIV_ROUND_UP(length, PAGE_SIZE);
entry->npages = npages;
while (true) {
/* First find an empty index */
xas_find_marked(&xas, U32_MAX, XA_FREE_MARK);
if (xas.xa_node == XAS_RESTART)
goto err_unlock;
xa_first = xas.xa_index;
/* Is there enough room to have the range? */
if (check_add_overflow(xa_first, npages, &xa_last))
goto err_unlock;
/*
* Now look for the next present entry. If an entry doesn't
* exist, we found an empty range and can proceed.
*/
xas_next_entry(&xas, xa_last - 1);
if (xas.xa_node == XAS_BOUNDS || xas.xa_index >= xa_last)
break;
}
for (i = xa_first; i < xa_last; i++) {
err = __xa_insert(&ucontext->mmap_xa, i, entry, GFP_KERNEL);
if (err)
goto err_undo;
}
/*
* Internally the kernel uses a page offset, in libc this is a byte
* offset. Drivers should not return pgoff to userspace.
*/
entry->start_pgoff = xa_first;
xa_unlock(&ucontext->mmap_xa);
mutex_unlock(&ufile->umap_lock);
ibdev_dbg(ucontext->device, "mmap: pgoff[%#lx] npages[%#x] inserted\n",
entry->start_pgoff, npages);
return 0;
err_undo:
for (; i > xa_first; i--)
__xa_erase(&ucontext->mmap_xa, i - 1);
err_unlock:
xa_unlock(&ucontext->mmap_xa);
mutex_unlock(&ufile->umap_lock);
return -ENOMEM;
}
EXPORT_SYMBOL(rdma_user_mmap_entry_insert);
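Complementary sketch (assumed driver-side code, not from this series) of allocating an entry and handing the resulting offset back to userspace, e.g. for a doorbell page; struct my_user_mmap_entry and db_phys_addr are hypothetical:

	struct my_user_mmap_entry {
		struct rdma_user_mmap_entry rdma_entry; /* embeds the core entry */
		u64 db_phys_addr;                       /* hypothetical driver data */
	};

	static struct rdma_user_mmap_entry *
	my_mmap_entry_alloc(struct ib_ucontext *ucontext, u64 db_phys_addr,
			    u64 *mmap_offset)
	{
		struct my_user_mmap_entry *entry;

		entry = kzalloc(sizeof(*entry), GFP_KERNEL);
		if (!entry)
			return NULL;

		entry->db_phys_addr = db_phys_addr;
		if (rdma_user_mmap_entry_insert(ucontext, &entry->rdma_entry,
						PAGE_SIZE)) {
			kfree(entry);
			return NULL;
		}

		/* Byte offset userspace later passes as the mmap() offset argument */
		*mmap_offset = rdma_user_mmap_get_offset(&entry->rdma_entry);
		return &entry->rdma_entry;
	}

On teardown the driver would call rdma_user_mmap_entry_remove() and free its wrapper from the ->mmap_free() callback once the last user mapping has gone away.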

View File

@ -210,8 +210,10 @@ int iwpm_mapinfo_available(void);
/**
* iwpm_compare_sockaddr - Compare two sockaddr storage structs
* @a_sockaddr: first sockaddr to compare
* @b_sockaddr: second sockaddr to compare
*
* Returns 0 if they are holding the same ip/tcp address info,
* Return: 0 if they are holding the same ip/tcp address info,
* otherwise returns 1
*/
int iwpm_compare_sockaddr(struct sockaddr_storage *a_sockaddr,
@ -272,6 +274,7 @@ void iwpm_print_sockaddr(struct sockaddr_storage *sockaddr, char *msg);
* iwpm_send_hello - Send hello response to iwpmd
*
* @nl_client: The index of the netlink client
* @iwpm_pid: The pid of the user space port mapper
* @abi_version: The kernel's abi_version
*
* Returns 0 on success or a negative error code

View File

@ -913,9 +913,9 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
/* No GRH for DR SMP */
ret = device->ops.process_mad(device, 0, port_num, &mad_wc, NULL,
(const struct ib_mad_hdr *)smp, mad_size,
(struct ib_mad_hdr *)mad_priv->mad,
&mad_size, &out_mad_pkey_index);
(const struct ib_mad *)smp,
(struct ib_mad *)mad_priv->mad, &mad_size,
&out_mad_pkey_index);
switch (ret)
{
case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY:
@ -1397,25 +1397,6 @@ void ib_free_recv_mad(struct ib_mad_recv_wc *mad_recv_wc)
}
EXPORT_SYMBOL(ib_free_recv_mad);
struct ib_mad_agent *ib_redirect_mad_qp(struct ib_qp *qp,
u8 rmpp_version,
ib_mad_send_handler send_handler,
ib_mad_recv_handler recv_handler,
void *context)
{
return ERR_PTR(-EINVAL); /* XXX: for now */
}
EXPORT_SYMBOL(ib_redirect_mad_qp);
int ib_process_mad_wc(struct ib_mad_agent *mad_agent,
struct ib_wc *wc)
{
dev_err(&mad_agent->device->dev,
"ib_process_mad_wc() not implemented yet\n");
return 0;
}
EXPORT_SYMBOL(ib_process_mad_wc);
static int method_in_use(struct ib_mad_mgmt_method_table **method,
struct ib_mad_reg_req *mad_reg_req)
{
@ -2340,9 +2321,9 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc)
if (port_priv->device->ops.process_mad) {
ret = port_priv->device->ops.process_mad(
port_priv->device, 0, port_priv->port_num, wc,
&recv->grh, (const struct ib_mad_hdr *)recv->mad,
recv->mad_size, (struct ib_mad_hdr *)response->mad,
&mad_size, &resp_mad_pkey_index);
&recv->grh, (const struct ib_mad *)recv->mad,
(struct ib_mad *)response->mad, &mad_size,
&resp_mad_pkey_index);
if (opa)
wc->pkey_index = resp_mad_pkey_index;

View File

@ -42,6 +42,9 @@
#include "cma_priv.h"
#include "restrack.h"
typedef int (*res_fill_func_t)(struct sk_buff*, bool,
struct rdma_restrack_entry*, uint32_t);
/*
* Sort array elements by the netlink attribute name
*/
@ -180,6 +183,19 @@ static int _rdma_nl_put_driver_u64(struct sk_buff *msg, const char *name,
return 0;
}
int rdma_nl_put_driver_string(struct sk_buff *msg, const char *name,
const char *str)
{
if (put_driver_name_print_type(msg, name,
RDMA_NLDEV_PRINT_TYPE_UNSPEC))
return -EMSGSIZE;
if (nla_put_string(msg, RDMA_NLDEV_ATTR_DRIVER_STRING, str))
return -EMSGSIZE;
return 0;
}
EXPORT_SYMBOL(rdma_nl_put_driver_string);
int rdma_nl_put_driver_u32(struct sk_buff *msg, const char *name, u32 value)
{
return _rdma_nl_put_driver_u32(msg, name, RDMA_NLDEV_PRINT_TYPE_UNSPEC,
@ -399,20 +415,34 @@ err:
static int fill_res_name_pid(struct sk_buff *msg,
struct rdma_restrack_entry *res)
{
int err = 0;
/*
* For user resources, user is should read /proc/PID/comm to get the
* name of the task file.
*/
if (rdma_is_kernel_res(res)) {
if (nla_put_string(msg, RDMA_NLDEV_ATTR_RES_KERN_NAME,
res->kern_name))
return -EMSGSIZE;
err = nla_put_string(msg, RDMA_NLDEV_ATTR_RES_KERN_NAME,
res->kern_name);
} else {
if (nla_put_u32(msg, RDMA_NLDEV_ATTR_RES_PID,
task_pid_vnr(res->task)))
return -EMSGSIZE;
pid_t pid;
pid = task_pid_vnr(res->task);
/*
* Task is dead and in zombie state.
* There is no need to print PID anymore.
*/
if (pid)
/*
* This part is racy, task can be killed and PID will
* be zero right here but it is ok, next query won't
* return PID. We don't promise real-time reflection
* of SW objects.
*/
err = nla_put_u32(msg, RDMA_NLDEV_ATTR_RES_PID, pid);
}
return 0;
return err ? -EMSGSIZE : 0;
}
static bool fill_res_entry(struct ib_device *dev, struct sk_buff *msg,
@ -423,6 +453,14 @@ static bool fill_res_entry(struct ib_device *dev, struct sk_buff *msg,
return dev->ops.fill_res_entry(msg, res);
}
static bool fill_stat_entry(struct ib_device *dev, struct sk_buff *msg,
struct rdma_restrack_entry *res)
{
if (!dev->ops.fill_stat_entry)
return false;
return dev->ops.fill_stat_entry(msg, res);
}
static int fill_res_qp_entry(struct sk_buff *msg, bool has_cap_net_admin,
struct rdma_restrack_entry *res, uint32_t port)
{
@ -698,9 +736,6 @@ static int fill_stat_counter_qps(struct sk_buff *msg,
rt = &counter->device->res[RDMA_RESTRACK_QP];
xa_lock(&rt->xa);
xa_for_each(&rt->xa, id, res) {
if (!rdma_is_visible_in_pid_ns(res))
continue;
qp = container_of(res, struct ib_qp, res);
if (qp->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW))
continue;
@ -723,8 +758,8 @@ err:
return ret;
}
static int fill_stat_hwcounter_entry(struct sk_buff *msg,
const char *name, u64 value)
int rdma_nl_stat_hwcounter_entry(struct sk_buff *msg, const char *name,
u64 value)
{
struct nlattr *entry_attr;
@ -746,6 +781,25 @@ err:
nla_nest_cancel(msg, entry_attr);
return -EMSGSIZE;
}
EXPORT_SYMBOL(rdma_nl_stat_hwcounter_entry);
static int fill_stat_mr_entry(struct sk_buff *msg, bool has_cap_net_admin,
struct rdma_restrack_entry *res, uint32_t port)
{
struct ib_mr *mr = container_of(res, struct ib_mr, res);
struct ib_device *dev = mr->pd->device;
if (nla_put_u32(msg, RDMA_NLDEV_ATTR_RES_MRN, res->id))
goto err;
if (fill_stat_entry(dev, msg, res))
goto err;
return 0;
err:
return -EMSGSIZE;
}
static int fill_stat_counter_hwcounters(struct sk_buff *msg,
struct rdma_counter *counter)
@ -759,7 +813,7 @@ static int fill_stat_counter_hwcounters(struct sk_buff *msg,
return -EMSGSIZE;
for (i = 0; i < st->num_counters; i++)
if (fill_stat_hwcounter_entry(msg, st->names[i], st->value[i]))
if (rdma_nl_stat_hwcounter_entry(msg, st->names[i], st->value[i]))
goto err;
nla_nest_end(msg, table_attr);
@ -1117,8 +1171,6 @@ static int nldev_res_get_dumpit(struct sk_buff *skb,
}
struct nldev_fill_res_entry {
int (*fill_res_func)(struct sk_buff *msg, bool has_cap_net_admin,
struct rdma_restrack_entry *res, u32 port);
enum rdma_nldev_attr nldev_attr;
enum rdma_nldev_command nldev_cmd;
u8 flags;
@ -1132,21 +1184,18 @@ enum nldev_res_flags {
static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
[RDMA_RESTRACK_QP] = {
.fill_res_func = fill_res_qp_entry,
.nldev_cmd = RDMA_NLDEV_CMD_RES_QP_GET,
.nldev_attr = RDMA_NLDEV_ATTR_RES_QP,
.entry = RDMA_NLDEV_ATTR_RES_QP_ENTRY,
.id = RDMA_NLDEV_ATTR_RES_LQPN,
},
[RDMA_RESTRACK_CM_ID] = {
.fill_res_func = fill_res_cm_id_entry,
.nldev_cmd = RDMA_NLDEV_CMD_RES_CM_ID_GET,
.nldev_attr = RDMA_NLDEV_ATTR_RES_CM_ID,
.entry = RDMA_NLDEV_ATTR_RES_CM_ID_ENTRY,
.id = RDMA_NLDEV_ATTR_RES_CM_IDN,
},
[RDMA_RESTRACK_CQ] = {
.fill_res_func = fill_res_cq_entry,
.nldev_cmd = RDMA_NLDEV_CMD_RES_CQ_GET,
.nldev_attr = RDMA_NLDEV_ATTR_RES_CQ,
.flags = NLDEV_PER_DEV,
@ -1154,7 +1203,6 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
.id = RDMA_NLDEV_ATTR_RES_CQN,
},
[RDMA_RESTRACK_MR] = {
.fill_res_func = fill_res_mr_entry,
.nldev_cmd = RDMA_NLDEV_CMD_RES_MR_GET,
.nldev_attr = RDMA_NLDEV_ATTR_RES_MR,
.flags = NLDEV_PER_DEV,
@ -1162,7 +1210,6 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
.id = RDMA_NLDEV_ATTR_RES_MRN,
},
[RDMA_RESTRACK_PD] = {
.fill_res_func = fill_res_pd_entry,
.nldev_cmd = RDMA_NLDEV_CMD_RES_PD_GET,
.nldev_attr = RDMA_NLDEV_ATTR_RES_PD,
.flags = NLDEV_PER_DEV,
@ -1170,7 +1217,6 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
.id = RDMA_NLDEV_ATTR_RES_PDN,
},
[RDMA_RESTRACK_COUNTER] = {
.fill_res_func = fill_res_counter_entry,
.nldev_cmd = RDMA_NLDEV_CMD_STAT_GET,
.nldev_attr = RDMA_NLDEV_ATTR_STAT_COUNTER,
.entry = RDMA_NLDEV_ATTR_STAT_COUNTER_ENTRY,
@ -1180,7 +1226,8 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
struct netlink_ext_ack *extack,
enum rdma_restrack_type res_type)
enum rdma_restrack_type res_type,
res_fill_func_t fill_func)
{
const struct nldev_fill_res_entry *fe = &fill_entries[res_type];
struct nlattr *tb[RDMA_NLDEV_ATTR_MAX];
@ -1222,11 +1269,6 @@ static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
goto err;
}
if (!rdma_is_visible_in_pid_ns(res)) {
ret = -ENOENT;
goto err_get;
}
msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!msg) {
ret = -ENOMEM;
@ -1243,7 +1285,9 @@ static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
}
has_cap_net_admin = netlink_capable(skb, CAP_NET_ADMIN);
ret = fe->fill_res_func(msg, has_cap_net_admin, res, port);
ret = fill_func(msg, has_cap_net_admin, res, port);
rdma_restrack_put(res);
if (ret)
goto err_free;
@ -1263,7 +1307,8 @@ err:
static int res_get_common_dumpit(struct sk_buff *skb,
struct netlink_callback *cb,
enum rdma_restrack_type res_type)
enum rdma_restrack_type res_type,
res_fill_func_t fill_func)
{
const struct nldev_fill_res_entry *fe = &fill_entries[res_type];
struct nlattr *tb[RDMA_NLDEV_ATTR_MAX];
@ -1334,9 +1379,6 @@ static int res_get_common_dumpit(struct sk_buff *skb,
* objects.
*/
xa_for_each(&rt->xa, id, res) {
if (!rdma_is_visible_in_pid_ns(res))
continue;
if (idx < start || !rdma_restrack_get(res))
goto next;
@ -1351,7 +1393,8 @@ static int res_get_common_dumpit(struct sk_buff *skb,
goto msg_full;
}
ret = fe->fill_res_func(skb, has_cap_net_admin, res, port);
ret = fill_func(skb, has_cap_net_admin, res, port);
rdma_restrack_put(res);
if (ret) {
@ -1394,17 +1437,19 @@ err_index:
return ret;
}
#define RES_GET_FUNCS(name, type) \
static int nldev_res_get_##name##_dumpit(struct sk_buff *skb, \
#define RES_GET_FUNCS(name, type) \
static int nldev_res_get_##name##_dumpit(struct sk_buff *skb, \
struct netlink_callback *cb) \
{ \
return res_get_common_dumpit(skb, cb, type); \
} \
static int nldev_res_get_##name##_doit(struct sk_buff *skb, \
struct nlmsghdr *nlh, \
{ \
return res_get_common_dumpit(skb, cb, type, \
fill_res_##name##_entry); \
} \
static int nldev_res_get_##name##_doit(struct sk_buff *skb, \
struct nlmsghdr *nlh, \
struct netlink_ext_ack *extack) \
{ \
return res_get_common_doit(skb, nlh, extack, type); \
{ \
return res_get_common_doit(skb, nlh, extack, type, \
fill_res_##name##_entry); \
}
RES_GET_FUNCS(qp, RDMA_RESTRACK_QP);
@ -1880,7 +1925,7 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
for (i = 0; i < num_cnts; i++) {
v = stats->value[i] +
rdma_counter_get_hwstat_value(device, port, i);
if (fill_stat_hwcounter_entry(msg, stats->names[i], v)) {
if (rdma_nl_stat_hwcounter_entry(msg, stats->names[i], v)) {
ret = -EMSGSIZE;
goto err_table;
}
@ -1989,7 +2034,10 @@ static int nldev_stat_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
case RDMA_NLDEV_ATTR_RES_QP:
ret = stat_get_doit_qp(skb, nlh, extack, tb);
break;
case RDMA_NLDEV_ATTR_RES_MR:
ret = res_get_common_doit(skb, nlh, extack, RDMA_RESTRACK_MR,
fill_stat_mr_entry);
break;
default:
ret = -EINVAL;
break;
@ -2013,7 +2061,10 @@ static int nldev_stat_get_dumpit(struct sk_buff *skb,
case RDMA_NLDEV_ATTR_RES_QP:
ret = nldev_res_get_counter_dumpit(skb, cb);
break;
case RDMA_NLDEV_ATTR_RES_MR:
ret = res_get_common_dumpit(skb, cb, RDMA_RESTRACK_MR,
fill_stat_mr_entry);
break;
default:
ret = -EINVAL;
break;

View File

@ -817,6 +817,7 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile,
rdma_restrack_del(&ucontext->res);
ib_dev->ops.dealloc_ucontext(ucontext);
WARN_ON(!xa_empty(&ucontext->mmap_xa));
kfree(ucontext);
ufile->ucontext = NULL;

View File

@ -116,11 +116,8 @@ int rdma_restrack_count(struct ib_device *dev, enum rdma_restrack_type type)
u32 cnt = 0;
xa_lock(&rt->xa);
xas_for_each(&xas, e, U32_MAX) {
if (!rdma_is_visible_in_pid_ns(e))
continue;
xas_for_each(&xas, e, U32_MAX)
cnt++;
}
xa_unlock(&rt->xa);
return cnt;
}
@ -346,18 +343,3 @@ out:
}
}
EXPORT_SYMBOL(rdma_restrack_del);
bool rdma_is_visible_in_pid_ns(struct rdma_restrack_entry *res)
{
/*
* 1. Kern resources should be visible in init
* namespace only
* 2. Present only resources visible in the current
* namespace
*/
if (rdma_is_kernel_res(res))
return task_active_pid_ns(current) == &init_pid_ns;
/* PID 0 means that resource is not found in current namespace */
return task_pid_vnr(res->task);
}

View File

@ -27,5 +27,4 @@ int rdma_restrack_init(struct ib_device *dev);
void rdma_restrack_clean(struct ib_device *dev);
void rdma_restrack_attach_task(struct rdma_restrack_entry *res,
struct task_struct *task);
bool rdma_is_visible_in_pid_ns(struct rdma_restrack_entry *res);
#endif /* _RDMA_CORE_RESTRACK_H_ */

View File

@ -20,14 +20,17 @@ module_param_named(force_mr, rdma_rw_force_mr, bool, 0);
MODULE_PARM_DESC(force_mr, "Force usage of MRs for RDMA READ/WRITE operations");
/*
* Check if the device might use memory registration. This is currently only
* true for iWarp devices. In the future we can hopefully fine tune this based
* on HCA driver input.
* Report whether memory registration should be used. Memory registration must
* be used for iWarp devices because of iWARP-specific limitations. Memory
* registration is also enabled if registering memory might yield better
* performance than using multiple SGE entries, see rdma_rw_io_needs_mr()
*/
static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
{
if (rdma_protocol_iwarp(dev, port_num))
return true;
if (dev->attrs.max_sgl_rd)
return true;
if (unlikely(rdma_rw_force_mr))
return true;
return false;
@ -35,17 +38,19 @@ static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
/*
* Check if the device will use memory registration for this RW operation.
* We currently always use memory registrations for iWarp RDMA READs, and
* have a debug option to force usage of MRs.
*
* XXX: In the future we can hopefully fine tune this based on HCA driver
* input.
* For RDMA READs we must use MRs on iWarp and can optionally use them as an
* optimization otherwise. Additionally we have a debug option to force usage
* of MRs to help testing this code path.
*/
static inline bool rdma_rw_io_needs_mr(struct ib_device *dev, u8 port_num,
enum dma_data_direction dir, int dma_nents)
{
if (rdma_protocol_iwarp(dev, port_num) && dir == DMA_FROM_DEVICE)
return true;
if (dir == DMA_FROM_DEVICE) {
if (rdma_protocol_iwarp(dev, port_num))
return true;
if (dev->attrs.max_sgl_rd && dma_nents > dev->attrs.max_sgl_rd)
return true;
}
if (unlikely(rdma_rw_force_mr))
return true;
return false;

View File

@ -1246,7 +1246,7 @@ static int init_ah_attr_grh_fields(struct ib_device *device, u8 port_num,
* @port_num: Port on the specified device.
* @rec: path record entry to use for ah attributes initialization.
* @ah_attr: address handle attributes to initialization from path record.
* @sgid_attr: SGID attribute to consider during initialization.
* @gid_attr: SGID attribute to consider during initialization.
*
* When ib_init_ah_attr_from_path() returns success,
* (a) for IB link layer it optionally contains a reference to SGID attribute

View File

@ -481,8 +481,8 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr,
if (!dev->ops.process_mad)
return -ENOSYS;
in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL);
out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
in_mad = kzalloc(sizeof(*in_mad), GFP_KERNEL);
out_mad = kzalloc(sizeof(*out_mad), GFP_KERNEL);
if (!in_mad || !out_mad) {
ret = -ENOMEM;
goto out;
@ -497,10 +497,8 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr,
if (attr != IB_PMA_CLASS_PORT_INFO)
in_mad->data[41] = port_num; /* PortSelect field */
if ((dev->ops.process_mad(dev, IB_MAD_IGNORE_MKEY,
port_num, NULL, NULL,
(const struct ib_mad_hdr *)in_mad, mad_size,
(struct ib_mad_hdr *)out_mad, &mad_size,
if ((dev->ops.process_mad(dev, IB_MAD_IGNORE_MKEY, port_num, NULL, NULL,
in_mad, out_mad, &mad_size,
&out_mad_pkey_index) &
(IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) !=
(IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) {
@ -1268,7 +1266,7 @@ static ssize_t node_desc_store(struct device *device,
int ret;
if (!dev->ops.modify_device)
return -EIO;
return -EOPNOTSUPP;
memcpy(desc.node_desc, buf, min_t(int, count, IB_DEVICE_NODE_DESC_MAX));
ret = ib_modify_device(dev, IB_DEVICE_MODIFY_NODE_DESC, &desc);

View File

@ -185,10 +185,9 @@ EXPORT_SYMBOL(ib_umem_find_best_pgsz);
* @addr: userspace virtual address to start at
* @size: length of region to pin
* @access: IB_ACCESS_xxx flags for memory being pinned
* @dmasync: flush in-flight DMA when the memory region is written
*/
struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
size_t size, int access, int dmasync)
size_t size, int access)
{
struct ib_ucontext *context;
struct ib_umem *umem;
@ -199,7 +198,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
struct mm_struct *mm;
unsigned long npages;
int ret;
unsigned long dma_attrs = 0;
struct scatterlist *sg;
unsigned int gup_flags = FOLL_WRITE;
@ -211,9 +209,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
if (!context)
return ERR_PTR(-EIO);
if (dmasync)
dma_attrs |= DMA_ATTR_WRITE_BARRIER;
/*
* If the combination of the addr and size requested for this memory
* region causes an integer overflow, return error.
@ -294,11 +289,10 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
sg_mark_end(sg);
umem->nmap = ib_dma_map_sg_attrs(context->device,
umem->nmap = ib_dma_map_sg(context->device,
umem->sg_head.sgl,
umem->sg_nents,
DMA_BIDIRECTIONAL,
dma_attrs);
DMA_BIDIRECTIONAL);
if (!umem->nmap) {
ret = -ENOMEM;

View File

@ -508,7 +508,6 @@ static int ib_umem_odp_map_dma_single_page(
{
struct ib_device *dev = umem_odp->umem.ibdev;
dma_addr_t dma_addr;
int remove_existing_mapping = 0;
int ret = 0;
/*
@ -534,28 +533,29 @@ static int ib_umem_odp_map_dma_single_page(
} else if (umem_odp->page_list[page_index] == page) {
umem_odp->dma_list[page_index] |= access_mask;
} else {
pr_err("error: got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
umem_odp->page_list[page_index], page);
/* Better remove the mapping now, to prevent any further
* damage. */
remove_existing_mapping = 1;
/*
* This is a race here where we could have done:
*
* CPU0 CPU1
* get_user_pages()
* invalidate()
* page_fault()
* mutex_lock(umem_mutex)
* page from GUP != page in ODP
*
* It should be prevented by the retry test above as reading
* the seq number should be reliable under the
* umem_mutex. Thus something is really not working right if
* things get here.
*/
WARN(true,
"Got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
umem_odp->page_list[page_index], page);
ret = -EAGAIN;
}
out:
put_user_page(page);
if (remove_existing_mapping) {
ib_umem_notifier_start_account(umem_odp);
dev->ops.invalidate_range(
umem_odp,
ib_umem_start(umem_odp) +
(page_index << umem_odp->page_shift),
ib_umem_start(umem_odp) +
((page_index + 1) << umem_odp->page_shift));
ib_umem_notifier_end_account(umem_odp);
ret = -EAGAIN;
}
return ret;
}

View File

@ -252,6 +252,8 @@ static int ib_uverbs_get_context(struct uverbs_attr_bundle *attrs)
ucontext->closing = false;
ucontext->cleanup_retryable = false;
xa_init_flags(&ucontext->mmap_xa, XA_FLAGS_ALLOC);
ret = get_unused_fd_flags(O_CLOEXEC);
if (ret < 0)
goto err_free;

View File

@ -795,6 +795,9 @@ int uverbs_copy_to_struct_or_zero(const struct uverbs_attr_bundle *bundle,
{
const struct uverbs_attr *attr = uverbs_attr_get(bundle, idx);
if (IS_ERR(attr))
return PTR_ERR(attr);
if (size < attr->ptr_attr.len) {
if (clear_user(u64_to_user_ptr(attr->ptr_attr.data) + size,
attr->ptr_attr.len - size))

View File

@ -772,6 +772,8 @@ out_unlock:
return (ret) ? : count;
}
static const struct vm_operations_struct rdma_umap_ops;
static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct ib_uverbs_file *file = filp->private_data;
@ -785,45 +787,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
ret = PTR_ERR(ucontext);
goto out;
}
vma->vm_ops = &rdma_umap_ops;
ret = ucontext->device->ops.mmap(ucontext, vma);
out:
srcu_read_unlock(&file->device->disassociate_srcu, srcu_key);
return ret;
}
/*
* Each time we map IO memory into user space this keeps track of the mapping.
* When the device is hot-unplugged we 'zap' the mmaps in user space to point
* to the zero page and allow the hot unplug to proceed.
*
* This is necessary for cases like PCI physical hot unplug as the actual BAR
* memory may vanish after this and access to it from userspace could MCE.
*
* RDMA drivers supporting disassociation must have their user space designed
* to cope in some way with their IO pages going to the zero page.
*/
struct rdma_umap_priv {
struct vm_area_struct *vma;
struct list_head list;
};
static const struct vm_operations_struct rdma_umap_ops;
static void rdma_umap_priv_init(struct rdma_umap_priv *priv,
struct vm_area_struct *vma)
{
struct ib_uverbs_file *ufile = vma->vm_file->private_data;
priv->vma = vma;
vma->vm_private_data = priv;
vma->vm_ops = &rdma_umap_ops;
mutex_lock(&ufile->umap_lock);
list_add(&priv->list, &ufile->umaps);
mutex_unlock(&ufile->umap_lock);
}
/*
* The VMA has been dup'd, initialize the vm_private_data with a new tracking
* struct
@ -849,7 +819,7 @@ static void rdma_umap_open(struct vm_area_struct *vma)
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
goto out_unlock;
rdma_umap_priv_init(priv, vma);
rdma_umap_priv_init(priv, vma, opriv->entry);
up_read(&ufile->hw_destroy_rwsem);
return;
@ -880,6 +850,9 @@ static void rdma_umap_close(struct vm_area_struct *vma)
* this point.
*/
mutex_lock(&ufile->umap_lock);
if (priv->entry)
rdma_user_mmap_entry_put(priv->entry);
list_del(&priv->list);
mutex_unlock(&ufile->umap_lock);
kfree(priv);
@ -931,44 +904,6 @@ static const struct vm_operations_struct rdma_umap_ops = {
.fault = rdma_umap_fault,
};
/*
* Map IO memory into a process. This is to be called by drivers as part of
* their mmap() functions if they wish to send something like PCI-E BAR memory
* to userspace.
*/
int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
unsigned long pfn, unsigned long size, pgprot_t prot)
{
struct ib_uverbs_file *ufile = ucontext->ufile;
struct rdma_umap_priv *priv;
if (!(vma->vm_flags & VM_SHARED))
return -EINVAL;
if (vma->vm_end - vma->vm_start != size)
return -EINVAL;
/* Driver is using this wrong, must be called by ib_uverbs_mmap */
if (WARN_ON(!vma->vm_file ||
vma->vm_file->private_data != ufile))
return -EINVAL;
lockdep_assert_held(&ufile->device->disassociate_srcu);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
vma->vm_page_prot = prot;
if (io_remap_pfn_range(vma, vma->vm_start, pfn, size, prot)) {
kfree(priv);
return -EAGAIN;
}
rdma_umap_priv_init(priv, vma);
return 0;
}
EXPORT_SYMBOL(rdma_user_mmap_io);
void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
{
struct rdma_umap_priv *priv, *next_priv;
@ -1018,6 +953,11 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
zap_vma_ptes(vma, vma->vm_start,
vma->vm_end - vma->vm_start);
if (priv->entry) {
rdma_user_mmap_entry_put(priv->entry);
priv->entry = NULL;
}
}
mutex_unlock(&ufile->umap_lock);
skip_mm:
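
The rdma_umap_priv/entry plumbing above is consumed by drivers roughly as in the following sketch; rdma_user_mmap_entry_get()/put() and the new entry argument to rdma_user_mmap_io() are the API added in this cycle, while the drv_entry_pfn() helper and the handler itself are invented for illustration.

#include <rdma/ib_verbs.h>

/* Hypothetical driver .mmap handler using the mmap-entry tracking API. */
static int drv_mmap(struct ib_ucontext *ibucontext, struct vm_area_struct *vma)
{
	struct rdma_user_mmap_entry *entry;
	int ret;

	/* Look up the entry the driver inserted earlier for this pgoff;
	 * the lookup takes its own reference. */
	entry = rdma_user_mmap_entry_get(ibucontext, vma);
	if (!entry)
		return -EINVAL;

	/* The core now remembers @entry alongside the VMA, so it can drop it
	 * on munmap() or when the mapping is zapped at disassociation time. */
	ret = rdma_user_mmap_io(ibucontext, vma, drv_entry_pfn(entry),
				vma->vm_end - vma->vm_start,
				pgprot_noncached(vma->vm_page_prot), entry);

	/* Drop the lookup reference; the VMA holds its own while mapped. */
	rdma_user_mmap_entry_put(entry);
	return ret;
}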

View File

@ -244,6 +244,8 @@ EXPORT_SYMBOL(rdma_port_get_link_layer);
/**
* ib_alloc_pd - Allocates an unused protection domain.
* @device: The device on which to allocate the protection domain.
* @flags: protection domain flags
* @caller: caller's build-time module name
*
* A protection domain object provides an association between QPs, shared
* receive queues, address handles, memory regions, and memory windows.
@ -2459,6 +2461,16 @@ int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid,
}
EXPORT_SYMBOL(ib_set_vf_guid);
int ib_get_vf_guid(struct ib_device *device, int vf, u8 port,
struct ifla_vf_guid *node_guid,
struct ifla_vf_guid *port_guid)
{
if (!device->ops.get_vf_guid)
return -EOPNOTSUPP;
return device->ops.get_vf_guid(device, vf, port, node_guid, port_guid);
}
EXPORT_SYMBOL(ib_get_vf_guid);
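
A ULP exposes this through the corresponding netdev callback added in the same series (ndo_get_vf_guid); the sketch below is modelled on the ipoib hookup, with the private-struct field names made up for illustration.

/* Hypothetical netdev callback forwarding to the new core helper. */
static int ulp_get_vf_guid(struct net_device *dev, int vf,
			   struct ifla_vf_guid *node_guid,
			   struct ifla_vf_guid *port_guid)
{
	struct ulp_priv *priv = netdev_priv(dev);	/* illustrative priv */

	return ib_get_vf_guid(priv->ca, vf, priv->port, node_guid, port_guid);
}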
/**
* ib_map_mr_sg_pi() - Map the dma mapped SG lists for PI (protection
* information) and set an appropriate memory region for registration.

View File

@ -1,7 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_INFINIBAND_MTHCA) += mthca/
obj-$(CONFIG_INFINIBAND_QIB) += qib/
obj-$(CONFIG_INFINIBAND_CXGB3) += cxgb3/
obj-$(CONFIG_INFINIBAND_CXGB4) += cxgb4/
obj-$(CONFIG_INFINIBAND_EFA) += efa/
obj-$(CONFIG_INFINIBAND_I40IW) += i40iw/

View File

@ -1,11 +1,11 @@
# SPDX-License-Identifier: GPL-2.0-only
config INFINIBAND_BNXT_RE
tristate "Broadcom Netxtreme HCA support"
depends on 64BIT
depends on ETHERNET && NETDEVICES && PCI && INET && DCB
select NET_VENDOR_BROADCOM
select BNXT
---help---
tristate "Broadcom Netxtreme HCA support"
depends on 64BIT
depends on ETHERNET && NETDEVICES && PCI && INET && DCB
select NET_VENDOR_BROADCOM
select BNXT
---help---
This driver supports Broadcom NetXtreme-E 10/25/40/50 gigabit
RoCE HCAs. To compile this driver as a module, choose M here:
the module will be called bnxt_re.

View File

@ -108,6 +108,7 @@ struct bnxt_re_sqp_entries {
#define BNXT_RE_MAX_MSIX 9
#define BNXT_RE_AEQ_IDX 0
#define BNXT_RE_NQ_IDX 1
#define BNXT_RE_GEN_P5_MAX_VF 64
struct bnxt_re_dev {
struct ib_device ibdev;

View File

@ -191,24 +191,6 @@ int bnxt_re_query_device(struct ib_device *ibdev,
return 0;
}
int bnxt_re_modify_device(struct ib_device *ibdev,
int device_modify_mask,
struct ib_device_modify *device_modify)
{
switch (device_modify_mask) {
case IB_DEVICE_MODIFY_SYS_IMAGE_GUID:
/* Modify the GUID requires the modification of the GID table */
/* GUID should be made as READ-ONLY */
break;
case IB_DEVICE_MODIFY_NODE_DESC:
/* Node Desc should be made as READ-ONLY */
break;
default:
break;
}
return 0;
}
/* Port */
int bnxt_re_query_port(struct ib_device *ibdev, u8 port_num,
struct ib_port_attr *port_attr)
@ -855,7 +837,7 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
bytes += (qplib_qp->sq.max_wqe * psn_sz);
}
bytes = PAGE_ALIGN(bytes);
umem = ib_umem_get(udata, ureq.qpsva, bytes, IB_ACCESS_LOCAL_WRITE, 1);
umem = ib_umem_get(udata, ureq.qpsva, bytes, IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(umem))
return PTR_ERR(umem);
@ -869,7 +851,7 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd,
bytes = (qplib_qp->rq.max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE);
bytes = PAGE_ALIGN(bytes);
umem = ib_umem_get(udata, ureq.qprva, bytes,
IB_ACCESS_LOCAL_WRITE, 1);
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(umem))
goto rqfail;
qp->rumem = umem;
@ -1322,7 +1304,7 @@ static int bnxt_re_init_user_srq(struct bnxt_re_dev *rdev,
bytes = (qplib_srq->max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE);
bytes = PAGE_ALIGN(bytes);
umem = ib_umem_get(udata, ureq.srqva, bytes, IB_ACCESS_LOCAL_WRITE, 1);
umem = ib_umem_get(udata, ureq.srqva, bytes, IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(umem))
return PTR_ERR(umem);
@ -2565,7 +2547,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
cq->umem = ib_umem_get(udata, req.cq_va,
entries * sizeof(struct cq_base),
IB_ACCESS_LOCAL_WRITE, 1);
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(cq->umem)) {
rc = PTR_ERR(cq->umem);
goto fail;
@ -3530,7 +3512,7 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
/* The fixed portion of the rkey is the same as the lkey */
mr->ib_mr.rkey = mr->qplib_mr.rkey;
umem = ib_umem_get(udata, start, length, mr_access_flags, 0);
umem = ib_umem_get(udata, start, length, mr_access_flags);
if (IS_ERR(umem)) {
dev_err(rdev_to_dev(rdev), "Failed to get umem");
rc = -EFAULT;

View File

@ -145,9 +145,6 @@ struct bnxt_re_ucontext {
int bnxt_re_query_device(struct ib_device *ibdev,
struct ib_device_attr *ib_attr,
struct ib_udata *udata);
int bnxt_re_modify_device(struct ib_device *ibdev,
int device_modify_mask,
struct ib_device_modify *device_modify);
int bnxt_re_query_port(struct ib_device *ibdev, u8 port_num,
struct ib_port_attr *port_attr);
int bnxt_re_get_port_immutable(struct ib_device *ibdev, u8 port_num,

View File

@ -119,61 +119,76 @@ static void bnxt_re_get_sriov_func_type(struct bnxt_re_dev *rdev)
* reserved for the function. The driver may choose to allocate fewer
* resources than the firmware maximum.
*/
static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
{
struct bnxt_qplib_dev_attr *attr;
struct bnxt_qplib_ctx *ctx;
int i;
attr = &rdev->dev_attr;
ctx = &rdev->qplib_ctx;
ctx->qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
attr->max_qp);
ctx->mrw_count = BNXT_RE_MAX_MRW_COUNT_256K;
/* Use max_mr from fw since max_mrw does not get set */
ctx->mrw_count = min_t(u32, ctx->mrw_count, attr->max_mr);
ctx->srqc_count = min_t(u32, BNXT_RE_MAX_SRQC_COUNT,
attr->max_srq);
ctx->cq_count = min_t(u32, BNXT_RE_MAX_CQ_COUNT, attr->max_cq);
if (!bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx))
for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
rdev->qplib_ctx.tqm_count[i] =
rdev->dev_attr.tqm_alloc_reqs[i];
}
static void bnxt_re_limit_vf_res(struct bnxt_qplib_ctx *qplib_ctx, u32 num_vf)
{
struct bnxt_qplib_vf_res *vf_res;
u32 mrws = 0;
u32 vf_pct;
u32 nvfs;
vf_res = &qplib_ctx->vf_res;
/*
* Reserve a set of resources for the PF. Divide the remaining
* resources among the VFs
*/
vf_pct = 100 - BNXT_RE_PCT_RSVD_FOR_PF;
nvfs = num_vf;
num_vf = 100 * num_vf;
vf_res->max_qp_per_vf = (qplib_ctx->qpc_count * vf_pct) / num_vf;
vf_res->max_srq_per_vf = (qplib_ctx->srqc_count * vf_pct) / num_vf;
vf_res->max_cq_per_vf = (qplib_ctx->cq_count * vf_pct) / num_vf;
/*
* The driver allows many more MRs than other resources. If the
* firmware does also, then reserve a fixed amount for the PF and
* divide the rest among VFs. VFs may use many MRs for NFS
* mounts, ISER, NVME applications, etc. If the firmware severely
* restricts the number of MRs, then let PF have half and divide
* the rest among VFs, as for the other resource types.
*/
if (qplib_ctx->mrw_count < BNXT_RE_MAX_MRW_COUNT_64K) {
mrws = qplib_ctx->mrw_count * vf_pct;
nvfs = num_vf;
} else {
mrws = qplib_ctx->mrw_count - BNXT_RE_RESVD_MR_FOR_PF;
}
vf_res->max_mrw_per_vf = (mrws / nvfs);
vf_res->max_gid_per_vf = BNXT_RE_MAX_GID_PER_VF;
}
static void bnxt_re_set_resource_limits(struct bnxt_re_dev *rdev)
{
u32 vf_qps = 0, vf_srqs = 0, vf_cqs = 0, vf_mrws = 0, vf_gids = 0;
u32 i;
u32 vf_pct;
u32 num_vfs;
struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
rdev->qplib_ctx.qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
dev_attr->max_qp);
memset(&rdev->qplib_ctx.vf_res, 0, sizeof(struct bnxt_qplib_vf_res));
bnxt_re_limit_pf_res(rdev);
rdev->qplib_ctx.mrw_count = BNXT_RE_MAX_MRW_COUNT_256K;
/* Use max_mr from fw since max_mrw does not get set */
rdev->qplib_ctx.mrw_count = min_t(u32, rdev->qplib_ctx.mrw_count,
dev_attr->max_mr);
rdev->qplib_ctx.srqc_count = min_t(u32, BNXT_RE_MAX_SRQC_COUNT,
dev_attr->max_srq);
rdev->qplib_ctx.cq_count = min_t(u32, BNXT_RE_MAX_CQ_COUNT,
dev_attr->max_cq);
for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
rdev->qplib_ctx.tqm_count[i] =
rdev->dev_attr.tqm_alloc_reqs[i];
if (rdev->num_vfs) {
/*
* Reserve a set of resources for the PF. Divide the remaining
* resources among the VFs
*/
vf_pct = 100 - BNXT_RE_PCT_RSVD_FOR_PF;
num_vfs = 100 * rdev->num_vfs;
vf_qps = (rdev->qplib_ctx.qpc_count * vf_pct) / num_vfs;
vf_srqs = (rdev->qplib_ctx.srqc_count * vf_pct) / num_vfs;
vf_cqs = (rdev->qplib_ctx.cq_count * vf_pct) / num_vfs;
/*
* The driver allows many more MRs than other resources. If the
* firmware does also, then reserve a fixed amount for the PF
* and divide the rest among VFs. VFs may use many MRs for NFS
* mounts, ISER, NVME applications, etc. If the firmware
* severely restricts the number of MRs, then let PF have
* half and divide the rest among VFs, as for the other
* resource types.
*/
if (rdev->qplib_ctx.mrw_count < BNXT_RE_MAX_MRW_COUNT_64K)
vf_mrws = rdev->qplib_ctx.mrw_count * vf_pct / num_vfs;
else
vf_mrws = (rdev->qplib_ctx.mrw_count -
BNXT_RE_RESVD_MR_FOR_PF) / rdev->num_vfs;
vf_gids = BNXT_RE_MAX_GID_PER_VF;
}
rdev->qplib_ctx.vf_res.max_mrw_per_vf = vf_mrws;
rdev->qplib_ctx.vf_res.max_gid_per_vf = vf_gids;
rdev->qplib_ctx.vf_res.max_qp_per_vf = vf_qps;
rdev->qplib_ctx.vf_res.max_srq_per_vf = vf_srqs;
rdev->qplib_ctx.vf_res.max_cq_per_vf = vf_cqs;
num_vfs = bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx) ?
BNXT_RE_GEN_P5_MAX_VF : rdev->num_vfs;
if (num_vfs)
bnxt_re_limit_vf_res(&rdev->qplib_ctx, num_vfs);
}
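
To make the percentage split above concrete, here is a stand-alone worked example of the same integer arithmetic; the 35% PF reservation and the resource totals are illustrative values, not the driver's actual constants.

#include <stdio.h>

/* Mirrors the max_qp_per_vf computation in bnxt_re_limit_vf_res(). */
int main(void)
{
	unsigned int vf_pct = 100 - 35;		/* assume 35% kept for the PF */
	unsigned int nvfs = 4;			/* example VF count */
	unsigned int qpc_count = 65536;		/* example PF-wide QP limit */

	/* The driver multiplies the VF count by 100 so the percentage stays
	 * in integer math: (count * vf_pct) / (100 * nvfs). */
	unsigned int max_qp_per_vf = (qpc_count * vf_pct) / (100 * nvfs);

	printf("%u QPs per VF\n", max_qp_per_vf);	/* prints 10649 */
	return 0;
}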
/* for handling bnxt_en callbacks later */
@ -193,9 +208,11 @@ static void bnxt_re_sriov_config(void *p, int num_vfs)
return;
rdev->num_vfs = num_vfs;
bnxt_re_set_resource_limits(rdev);
bnxt_qplib_set_func_resources(&rdev->qplib_res, &rdev->rcfw,
&rdev->qplib_ctx);
if (!bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx)) {
bnxt_re_set_resource_limits(rdev);
bnxt_qplib_set_func_resources(&rdev->qplib_res, &rdev->rcfw,
&rdev->qplib_ctx);
}
}
static void bnxt_re_shutdown(void *p)
@ -477,6 +494,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
req.update_period_ms = cpu_to_le32(1000);
req.stats_dma_addr = cpu_to_le64(dma_map);
req.stats_dma_length = cpu_to_le16(sizeof(struct ctx_hw_stats_ext));
req.stat_ctx_flags = STAT_CTX_ALLOC_REQ_STAT_CTX_FLAGS_ROCE;
bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
@ -625,7 +643,6 @@ static const struct ib_device_ops bnxt_re_dev_ops = {
.map_mr_sg = bnxt_re_map_mr_sg,
.mmap = bnxt_re_mmap,
.modify_ah = bnxt_re_modify_ah,
.modify_device = bnxt_re_modify_device,
.modify_qp = bnxt_re_modify_qp,
.modify_srq = bnxt_re_modify_srq,
.poll_cq = bnxt_re_poll_cq,
@ -895,10 +912,14 @@ static int bnxt_re_cqn_handler(struct bnxt_qplib_nq *nq,
return 0;
}
#define BNXT_RE_GEN_P5_PF_NQ_DB 0x10000
#define BNXT_RE_GEN_P5_VF_NQ_DB 0x4000
static u32 bnxt_re_get_nqdb_offset(struct bnxt_re_dev *rdev, u16 indx)
{
return bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx) ?
0x10000 : rdev->msix_entries[indx].db_offset;
(rdev->is_virtfn ? BNXT_RE_GEN_P5_VF_NQ_DB :
BNXT_RE_GEN_P5_PF_NQ_DB) :
rdev->msix_entries[indx].db_offset;
}
static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
@ -1270,10 +1291,10 @@ static void bnxt_re_query_hwrm_intf_version(struct bnxt_re_dev *rdev)
return;
}
rdev->qplib_ctx.hwrm_intf_ver =
(u64)resp.hwrm_intf_major << 48 |
(u64)resp.hwrm_intf_minor << 32 |
(u64)resp.hwrm_intf_build << 16 |
resp.hwrm_intf_patch;
(u64)le16_to_cpu(resp.hwrm_intf_major) << 48 |
(u64)le16_to_cpu(resp.hwrm_intf_minor) << 32 |
(u64)le16_to_cpu(resp.hwrm_intf_build) << 16 |
le16_to_cpu(resp.hwrm_intf_patch);
}
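
For reference, a short stand-alone illustration of the 64-bit version packing that the le16_to_cpu() conversions above make endian-safe; the field values are made up.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Example firmware interface fields after le16_to_cpu(). */
	uint16_t major = 1, minor = 10, build = 2, patch = 5;

	uint64_t ver = (uint64_t)major << 48 | (uint64_t)minor << 32 |
		       (uint64_t)build << 16 | patch;

	/* Prints 0x0001000a00020005; without the conversion, a big-endian
	 * host would shift byte-swapped field values into place. */
	printf("hwrm_intf_ver = 0x%016llx\n", (unsigned long long)ver);
	return 0;
}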
static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev)
@ -1408,8 +1429,8 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
rdev->is_virtfn);
if (rc)
goto disable_rcfw;
if (!rdev->is_virtfn)
bnxt_re_set_resource_limits(rdev);
bnxt_re_set_resource_limits(rdev);
rc = bnxt_qplib_alloc_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx, 0,
bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx));

View File

@ -494,8 +494,10 @@ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
* shall setup this area for VF. Skipping the
* HW programming
*/
if (is_virtfn || bnxt_qplib_is_chip_gen_p5(rcfw->res->cctx))
if (is_virtfn)
goto skip_ctx_setup;
if (bnxt_qplib_is_chip_gen_p5(rcfw->res->cctx))
goto config_vf_res;
level = ctx->qpc_tbl.level;
req.qpc_pg_size_qpc_lvl = (level << CMDQ_INITIALIZE_FW_QPC_LVL_SFT) |
@ -540,6 +542,7 @@ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
req.number_of_srq = cpu_to_le32(ctx->srqc_tbl.max_elements);
req.number_of_cq = cpu_to_le32(ctx->cq_tbl.max_elements);
config_vf_res:
req.max_qp_per_vf = cpu_to_le32(ctx->vf_res.max_qp_per_vf);
req.max_mrw_per_vf = cpu_to_le32(ctx->vf_res.max_mrw_per_vf);
req.max_srq_per_vf = cpu_to_le32(ctx->vf_res.max_srq_per_vf);

View File

@ -186,7 +186,9 @@ struct bnxt_qplib_chip_ctx {
u8 chip_metal;
};
#define CHIP_NUM_57500 0x1750
#define CHIP_NUM_57508 0x1750
#define CHIP_NUM_57504 0x1751
#define CHIP_NUM_57502 0x1752
struct bnxt_qplib_res {
struct pci_dev *pdev;
@ -203,7 +205,9 @@ struct bnxt_qplib_res {
static inline bool bnxt_qplib_is_chip_gen_p5(struct bnxt_qplib_chip_ctx *cctx)
{
return (cctx->chip_num == CHIP_NUM_57500);
return (cctx->chip_num == CHIP_NUM_57508 ||
cctx->chip_num == CHIP_NUM_57504 ||
cctx->chip_num == CHIP_NUM_57502);
}
static inline u8 bnxt_qplib_get_hwq_type(struct bnxt_qplib_res *res)

View File

@ -1,19 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config INFINIBAND_CXGB3
tristate "Chelsio RDMA Driver"
depends on CHELSIO_T3
select GENERIC_ALLOCATOR
---help---
This is an iWARP/RDMA driver for the Chelsio T3 1GbE and
10GbE adapters.
For general information about Chelsio and our products, visit
our website at <http://www.chelsio.com>.
For customer support, please visit our customer support page at
<http://www.chelsio.com/support.html>.
Please send feedback to <linux-bugs@chelsio.com>.
To compile this driver as a module, choose M here: the module
will be called iw_cxgb3.

View File

@ -1,7 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
ccflags-y := -I $(srctree)/drivers/net/ethernet/chelsio/cxgb3
obj-$(CONFIG_INFINIBAND_CXGB3) += iw_cxgb3.o
iw_cxgb3-y := iwch_cm.o iwch_ev.o iwch_cq.o iwch_qp.o iwch_mem.o \
iwch_provider.o iwch.o cxio_hal.o cxio_resource.o

File diff suppressed because it is too large

View File

@ -1,204 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CXIO_HAL_H__
#define __CXIO_HAL_H__
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/kfifo.h>
#include "t3_cpl.h"
#include "t3cdev.h"
#include "cxgb3_ctl_defs.h"
#include "cxio_wr.h"
#define T3_CTRL_QP_ID FW_RI_SGEEC_START
#define T3_CTL_QP_TID FW_RI_TID_START
#define T3_CTRL_QP_SIZE_LOG2 8
#define T3_CTRL_CQ_ID 0
#define T3_MAX_NUM_RI (1<<15)
#define T3_MAX_NUM_QP (1<<15)
#define T3_MAX_NUM_CQ (1<<15)
#define T3_MAX_NUM_PD (1<<15)
#define T3_MAX_PBL_SIZE 256
#define T3_MAX_RQ_SIZE 1024
#define T3_MAX_QP_DEPTH (T3_MAX_RQ_SIZE-1)
#define T3_MAX_CQ_DEPTH 65536
#define T3_MAX_NUM_STAG (1<<15)
#define T3_MAX_MR_SIZE 0x100000000ULL
#define T3_PAGESIZE_MASK 0xffff000 /* 4KB-128MB */
#define T3_STAG_UNSET 0xffffffff
#define T3_MAX_DEV_NAME_LEN 32
#define CXIO_FW_MAJ 7
struct cxio_hal_ctrl_qp {
u32 wptr;
u32 rptr;
struct mutex lock; /* for the wtpr, can sleep */
wait_queue_head_t waitq;/* wait for RspQ/CQE msg */
union t3_wr *workq; /* the work request queue */
dma_addr_t dma_addr; /* pci bus address of the workq */
DEFINE_DMA_UNMAP_ADDR(mapping);
void __iomem *doorbell;
};
struct cxio_hal_resource {
struct kfifo tpt_fifo;
spinlock_t tpt_fifo_lock;
struct kfifo qpid_fifo;
spinlock_t qpid_fifo_lock;
struct kfifo cqid_fifo;
spinlock_t cqid_fifo_lock;
struct kfifo pdid_fifo;
spinlock_t pdid_fifo_lock;
};
struct cxio_qpid_list {
struct list_head entry;
u32 qpid;
};
struct cxio_ucontext {
struct list_head qpids;
struct mutex lock;
};
struct cxio_rdev {
char dev_name[T3_MAX_DEV_NAME_LEN];
struct t3cdev *t3cdev_p;
struct rdma_info rnic_info;
struct adap_ports port_info;
struct cxio_hal_resource *rscp;
struct cxio_hal_ctrl_qp ctrl_qp;
void *ulp;
unsigned long qpshift;
u32 qpnr;
u32 qpmask;
struct cxio_ucontext uctx;
struct gen_pool *pbl_pool;
struct gen_pool *rqt_pool;
struct list_head entry;
struct ch_embedded_info fw_info;
u32 flags;
#define CXIO_ERROR_FATAL 1
};
static inline int cxio_fatal_error(struct cxio_rdev *rdev_p)
{
return rdev_p->flags & CXIO_ERROR_FATAL;
}
static inline int cxio_num_stags(struct cxio_rdev *rdev_p)
{
return min((int)T3_MAX_NUM_STAG, (int)((rdev_p->rnic_info.tpt_top - rdev_p->rnic_info.tpt_base) >> 5));
}
typedef void (*cxio_hal_ev_callback_func_t) (struct cxio_rdev * rdev_p,
struct sk_buff * skb);
#define RSPQ_CQID(rsp) (be32_to_cpu(rsp->cq_ptrid) & 0xffff)
#define RSPQ_CQPTR(rsp) ((be32_to_cpu(rsp->cq_ptrid) >> 16) & 0xffff)
#define RSPQ_GENBIT(rsp) ((be32_to_cpu(rsp->flags) >> 16) & 1)
#define RSPQ_OVERFLOW(rsp) ((be32_to_cpu(rsp->flags) >> 17) & 1)
#define RSPQ_AN(rsp) ((be32_to_cpu(rsp->flags) >> 18) & 1)
#define RSPQ_SE(rsp) ((be32_to_cpu(rsp->flags) >> 19) & 1)
#define RSPQ_NOTIFY(rsp) ((be32_to_cpu(rsp->flags) >> 20) & 1)
#define RSPQ_CQBRANCH(rsp) ((be32_to_cpu(rsp->flags) >> 21) & 1)
#define RSPQ_CREDIT_THRESH(rsp) ((be32_to_cpu(rsp->flags) >> 22) & 1)
struct respQ_msg_t {
__be32 flags; /* flit 0 */
__be32 cq_ptrid;
__be64 rsvd; /* flit 1 */
struct t3_cqe cqe; /* flits 2-3 */
};
enum t3_cq_opcode {
CQ_ARM_AN = 0x2,
CQ_ARM_SE = 0x6,
CQ_FORCE_AN = 0x3,
CQ_CREDIT_UPDATE = 0x7
};
int cxio_rdev_open(struct cxio_rdev *rdev);
void cxio_rdev_close(struct cxio_rdev *rdev);
int cxio_hal_cq_op(struct cxio_rdev *rdev, struct t3_cq *cq,
enum t3_cq_opcode op, u32 credit);
int cxio_create_cq(struct cxio_rdev *rdev, struct t3_cq *cq, int kernel);
void cxio_destroy_cq(struct cxio_rdev *rdev, struct t3_cq *cq);
void cxio_release_ucontext(struct cxio_rdev *rdev, struct cxio_ucontext *uctx);
void cxio_init_ucontext(struct cxio_rdev *rdev, struct cxio_ucontext *uctx);
int cxio_create_qp(struct cxio_rdev *rdev, u32 kernel_domain, struct t3_wq *wq,
struct cxio_ucontext *uctx);
int cxio_destroy_qp(struct cxio_rdev *rdev, struct t3_wq *wq,
struct cxio_ucontext *uctx);
int cxio_peek_cq(struct t3_wq *wr, struct t3_cq *cq, int opcode);
int cxio_write_pbl(struct cxio_rdev *rdev_p, __be64 *pbl,
u32 pbl_addr, u32 pbl_size);
int cxio_register_phys_mem(struct cxio_rdev *rdev, u32 * stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
u8 page_size, u32 pbl_size, u32 pbl_addr);
int cxio_reregister_phys_mem(struct cxio_rdev *rdev, u32 * stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
u8 page_size, u32 pbl_size, u32 pbl_addr);
int cxio_dereg_mem(struct cxio_rdev *rdev, u32 stag, u32 pbl_size,
u32 pbl_addr);
int cxio_allocate_window(struct cxio_rdev *rdev, u32 * stag, u32 pdid);
int cxio_allocate_stag(struct cxio_rdev *rdev, u32 *stag, u32 pdid, u32 pbl_size, u32 pbl_addr);
int cxio_deallocate_window(struct cxio_rdev *rdev, u32 stag);
int cxio_rdma_init(struct cxio_rdev *rdev, struct t3_rdma_init_attr *attr);
void cxio_register_ev_cb(cxio_hal_ev_callback_func_t ev_cb);
void cxio_unregister_ev_cb(cxio_hal_ev_callback_func_t ev_cb);
u32 cxio_hal_get_pdid(struct cxio_hal_resource *rscp);
void cxio_hal_put_pdid(struct cxio_hal_resource *rscp, u32 pdid);
int __init cxio_hal_init(void);
void __exit cxio_hal_exit(void);
int cxio_flush_rq(struct t3_wq *wq, struct t3_cq *cq, int count);
int cxio_flush_sq(struct t3_wq *wq, struct t3_cq *cq, int count);
void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count);
void cxio_count_scqes(struct t3_cq *cq, struct t3_wq *wq, int *count);
void cxio_flush_hw_cq(struct t3_cq *cq);
int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
u8 *cqe_flushed, u64 *cookie, u32 *credit);
int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb);
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#endif

View File

@ -1,344 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
/* Crude resource management */
#include <linux/kernel.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include "cxio_resource.h"
#include "cxio_hal.h"
static struct kfifo rhdl_fifo;
static spinlock_t rhdl_fifo_lock;
#define RANDOM_SIZE 16
static int __cxio_init_resource_fifo(struct kfifo *fifo,
spinlock_t *fifo_lock,
u32 nr, u32 skip_low,
u32 skip_high,
int random)
{
u32 i, j, entry = 0, idx;
u32 random_bytes;
u32 rarray[16];
spin_lock_init(fifo_lock);
if (kfifo_alloc(fifo, nr * sizeof(u32), GFP_KERNEL))
return -ENOMEM;
for (i = 0; i < skip_low + skip_high; i++)
kfifo_in(fifo, (unsigned char *) &entry, sizeof(u32));
if (random) {
j = 0;
random_bytes = prandom_u32();
for (i = 0; i < RANDOM_SIZE; i++)
rarray[i] = i + skip_low;
for (i = skip_low + RANDOM_SIZE; i < nr - skip_high; i++) {
if (j >= RANDOM_SIZE) {
j = 0;
random_bytes = prandom_u32();
}
idx = (random_bytes >> (j * 2)) & 0xF;
kfifo_in(fifo,
(unsigned char *) &rarray[idx],
sizeof(u32));
rarray[idx] = i;
j++;
}
for (i = 0; i < RANDOM_SIZE; i++)
kfifo_in(fifo,
(unsigned char *) &rarray[i],
sizeof(u32));
} else
for (i = skip_low; i < nr - skip_high; i++)
kfifo_in(fifo, (unsigned char *) &i, sizeof(u32));
for (i = 0; i < skip_low + skip_high; i++)
if (kfifo_out_locked(fifo, (unsigned char *) &entry,
sizeof(u32), fifo_lock) != sizeof(u32))
break;
return 0;
}
static int cxio_init_resource_fifo(struct kfifo *fifo, spinlock_t * fifo_lock,
u32 nr, u32 skip_low, u32 skip_high)
{
return (__cxio_init_resource_fifo(fifo, fifo_lock, nr, skip_low,
skip_high, 0));
}
static int cxio_init_resource_fifo_random(struct kfifo *fifo,
spinlock_t * fifo_lock,
u32 nr, u32 skip_low, u32 skip_high)
{
return (__cxio_init_resource_fifo(fifo, fifo_lock, nr, skip_low,
skip_high, 1));
}
static int cxio_init_qpid_fifo(struct cxio_rdev *rdev_p)
{
u32 i;
spin_lock_init(&rdev_p->rscp->qpid_fifo_lock);
if (kfifo_alloc(&rdev_p->rscp->qpid_fifo, T3_MAX_NUM_QP * sizeof(u32),
GFP_KERNEL))
return -ENOMEM;
for (i = 16; i < T3_MAX_NUM_QP; i++)
if (!(i & rdev_p->qpmask))
kfifo_in(&rdev_p->rscp->qpid_fifo,
(unsigned char *) &i, sizeof(u32));
return 0;
}
int cxio_hal_init_rhdl_resource(u32 nr_rhdl)
{
return cxio_init_resource_fifo(&rhdl_fifo, &rhdl_fifo_lock, nr_rhdl, 1,
0);
}
void cxio_hal_destroy_rhdl_resource(void)
{
kfifo_free(&rhdl_fifo);
}
/* nr_* must be power of 2 */
int cxio_hal_init_resource(struct cxio_rdev *rdev_p,
u32 nr_tpt, u32 nr_pbl,
u32 nr_rqt, u32 nr_qpid, u32 nr_cqid, u32 nr_pdid)
{
int err = 0;
struct cxio_hal_resource *rscp;
rscp = kmalloc(sizeof(*rscp), GFP_KERNEL);
if (!rscp)
return -ENOMEM;
rdev_p->rscp = rscp;
err = cxio_init_resource_fifo_random(&rscp->tpt_fifo,
&rscp->tpt_fifo_lock,
nr_tpt, 1, 0);
if (err)
goto tpt_err;
err = cxio_init_qpid_fifo(rdev_p);
if (err)
goto qpid_err;
err = cxio_init_resource_fifo(&rscp->cqid_fifo, &rscp->cqid_fifo_lock,
nr_cqid, 1, 0);
if (err)
goto cqid_err;
err = cxio_init_resource_fifo(&rscp->pdid_fifo, &rscp->pdid_fifo_lock,
nr_pdid, 1, 0);
if (err)
goto pdid_err;
return 0;
pdid_err:
kfifo_free(&rscp->cqid_fifo);
cqid_err:
kfifo_free(&rscp->qpid_fifo);
qpid_err:
kfifo_free(&rscp->tpt_fifo);
tpt_err:
return -ENOMEM;
}
/*
* returns 0 if no resource available
*/
static u32 cxio_hal_get_resource(struct kfifo *fifo, spinlock_t * lock)
{
u32 entry;
if (kfifo_out_locked(fifo, (unsigned char *) &entry, sizeof(u32), lock))
return entry;
else
return 0; /* fifo emptry */
}
static void cxio_hal_put_resource(struct kfifo *fifo, spinlock_t * lock,
u32 entry)
{
BUG_ON(
kfifo_in_locked(fifo, (unsigned char *) &entry, sizeof(u32), lock)
== 0);
}
u32 cxio_hal_get_stag(struct cxio_hal_resource *rscp)
{
return cxio_hal_get_resource(&rscp->tpt_fifo, &rscp->tpt_fifo_lock);
}
void cxio_hal_put_stag(struct cxio_hal_resource *rscp, u32 stag)
{
cxio_hal_put_resource(&rscp->tpt_fifo, &rscp->tpt_fifo_lock, stag);
}
u32 cxio_hal_get_qpid(struct cxio_hal_resource *rscp)
{
u32 qpid = cxio_hal_get_resource(&rscp->qpid_fifo,
&rscp->qpid_fifo_lock);
pr_debug("%s qpid 0x%x\n", __func__, qpid);
return qpid;
}
void cxio_hal_put_qpid(struct cxio_hal_resource *rscp, u32 qpid)
{
pr_debug("%s qpid 0x%x\n", __func__, qpid);
cxio_hal_put_resource(&rscp->qpid_fifo, &rscp->qpid_fifo_lock, qpid);
}
u32 cxio_hal_get_cqid(struct cxio_hal_resource *rscp)
{
return cxio_hal_get_resource(&rscp->cqid_fifo, &rscp->cqid_fifo_lock);
}
void cxio_hal_put_cqid(struct cxio_hal_resource *rscp, u32 cqid)
{
cxio_hal_put_resource(&rscp->cqid_fifo, &rscp->cqid_fifo_lock, cqid);
}
u32 cxio_hal_get_pdid(struct cxio_hal_resource *rscp)
{
return cxio_hal_get_resource(&rscp->pdid_fifo, &rscp->pdid_fifo_lock);
}
void cxio_hal_put_pdid(struct cxio_hal_resource *rscp, u32 pdid)
{
cxio_hal_put_resource(&rscp->pdid_fifo, &rscp->pdid_fifo_lock, pdid);
}
void cxio_hal_destroy_resource(struct cxio_hal_resource *rscp)
{
kfifo_free(&rscp->tpt_fifo);
kfifo_free(&rscp->cqid_fifo);
kfifo_free(&rscp->qpid_fifo);
kfifo_free(&rscp->pdid_fifo);
kfree(rscp);
}
/*
* PBL Memory Manager. Uses Linux generic allocator.
*/
#define MIN_PBL_SHIFT 8 /* 256B == min PBL size (32 entries) */
u32 cxio_hal_pblpool_alloc(struct cxio_rdev *rdev_p, int size)
{
unsigned long addr = gen_pool_alloc(rdev_p->pbl_pool, size);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
return (u32)addr;
}
void cxio_hal_pblpool_free(struct cxio_rdev *rdev_p, u32 addr, int size)
{
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size);
gen_pool_free(rdev_p->pbl_pool, (unsigned long)addr, size);
}
int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p)
{
unsigned pbl_start, pbl_chunk;
rdev_p->pbl_pool = gen_pool_create(MIN_PBL_SHIFT, -1);
if (!rdev_p->pbl_pool)
return -ENOMEM;
pbl_start = rdev_p->rnic_info.pbl_base;
pbl_chunk = rdev_p->rnic_info.pbl_top - pbl_start + 1;
while (pbl_start < rdev_p->rnic_info.pbl_top) {
pbl_chunk = min(rdev_p->rnic_info.pbl_top - pbl_start + 1,
pbl_chunk);
if (gen_pool_add(rdev_p->pbl_pool, pbl_start, pbl_chunk, -1)) {
pr_debug("%s failed to add PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
if (pbl_chunk <= 1024 << MIN_PBL_SHIFT) {
pr_warn("%s: Failed to add all PBL chunks (%x/%x)\n",
__func__, pbl_start,
rdev_p->rnic_info.pbl_top - pbl_start);
return 0;
}
pbl_chunk >>= 1;
} else {
pr_debug("%s added PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pbl_start += pbl_chunk;
}
}
return 0;
}
void cxio_hal_pblpool_destroy(struct cxio_rdev *rdev_p)
{
gen_pool_destroy(rdev_p->pbl_pool);
}
/*
* RQT Memory Manager. Uses Linux generic allocator.
*/
#define MIN_RQT_SHIFT 10 /* 1KB == mini RQT size (16 entries) */
#define RQT_CHUNK 2*1024*1024
u32 cxio_hal_rqtpool_alloc(struct cxio_rdev *rdev_p, int size)
{
unsigned long addr = gen_pool_alloc(rdev_p->rqt_pool, size << 6);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6);
return (u32)addr;
}
void cxio_hal_rqtpool_free(struct cxio_rdev *rdev_p, u32 addr, int size)
{
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size << 6);
gen_pool_free(rdev_p->rqt_pool, (unsigned long)addr, size << 6);
}
int cxio_hal_rqtpool_create(struct cxio_rdev *rdev_p)
{
unsigned long i;
rdev_p->rqt_pool = gen_pool_create(MIN_RQT_SHIFT, -1);
if (rdev_p->rqt_pool)
for (i = rdev_p->rnic_info.rqt_base;
i <= rdev_p->rnic_info.rqt_top - RQT_CHUNK + 1;
i += RQT_CHUNK)
gen_pool_add(rdev_p->rqt_pool, i, RQT_CHUNK, -1);
return rdev_p->rqt_pool ? 0 : -ENOMEM;
}
void cxio_hal_rqtpool_destroy(struct cxio_rdev *rdev_p)
{
gen_pool_destroy(rdev_p->rqt_pool);
}

View File

@ -1,69 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CXIO_RESOURCE_H__
#define __CXIO_RESOURCE_H__
#include <linux/kernel.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/genalloc.h>
#include "cxio_hal.h"
extern int cxio_hal_init_rhdl_resource(u32 nr_rhdl);
extern void cxio_hal_destroy_rhdl_resource(void);
extern int cxio_hal_init_resource(struct cxio_rdev *rdev_p,
u32 nr_tpt, u32 nr_pbl,
u32 nr_rqt, u32 nr_qpid, u32 nr_cqid,
u32 nr_pdid);
extern u32 cxio_hal_get_stag(struct cxio_hal_resource *rscp);
extern void cxio_hal_put_stag(struct cxio_hal_resource *rscp, u32 stag);
extern u32 cxio_hal_get_qpid(struct cxio_hal_resource *rscp);
extern void cxio_hal_put_qpid(struct cxio_hal_resource *rscp, u32 qpid);
extern u32 cxio_hal_get_cqid(struct cxio_hal_resource *rscp);
extern void cxio_hal_put_cqid(struct cxio_hal_resource *rscp, u32 cqid);
extern void cxio_hal_destroy_resource(struct cxio_hal_resource *rscp);
#define PBL_OFF(rdev_p, a) ( (a) - (rdev_p)->rnic_info.pbl_base )
extern int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p);
extern void cxio_hal_pblpool_destroy(struct cxio_rdev *rdev_p);
extern u32 cxio_hal_pblpool_alloc(struct cxio_rdev *rdev_p, int size);
extern void cxio_hal_pblpool_free(struct cxio_rdev *rdev_p, u32 addr, int size);
#define RQT_OFF(rdev_p, a) ( (a) - (rdev_p)->rnic_info.rqt_base )
extern int cxio_hal_rqtpool_create(struct cxio_rdev *rdev_p);
extern void cxio_hal_rqtpool_destroy(struct cxio_rdev *rdev_p);
extern u32 cxio_hal_rqtpool_alloc(struct cxio_rdev *rdev_p, int size);
extern void cxio_hal_rqtpool_free(struct cxio_rdev *rdev_p, u32 addr, int size);
#endif

View File

@ -1,802 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CXIO_WR_H__
#define __CXIO_WR_H__
#include <asm/io.h>
#include <linux/pci.h>
#include <linux/timer.h>
#include "firmware_exports.h"
#define T3_MAX_SGE 4
#define T3_MAX_INLINE 64
#define T3_STAG0_PBL_SIZE (2 * T3_MAX_SGE << 3)
#define T3_STAG0_MAX_PBE_LEN (128 * 1024 * 1024)
#define T3_STAG0_PAGE_SHIFT 15
#define Q_EMPTY(rptr,wptr) ((rptr)==(wptr))
#define Q_FULL(rptr,wptr,size_log2) ( (((wptr)-(rptr))>>(size_log2)) && \
((rptr)!=(wptr)) )
#define Q_GENBIT(ptr,size_log2) (!(((ptr)>>size_log2)&0x1))
#define Q_FREECNT(rptr,wptr,size_log2) ((1UL<<size_log2)-((wptr)-(rptr)))
#define Q_COUNT(rptr,wptr) ((wptr)-(rptr))
#define Q_PTR2IDX(ptr,size_log2) (ptr & ((1UL<<size_log2)-1))
static inline void ring_doorbell(void __iomem *doorbell, u32 qpid)
{
writel(((1<<31) | qpid), doorbell);
}
#define SEQ32_GE(x,y) (!( (((u32) (x)) - ((u32) (y))) & 0x80000000 ))
enum t3_wr_flags {
T3_COMPLETION_FLAG = 0x01,
T3_NOTIFY_FLAG = 0x02,
T3_SOLICITED_EVENT_FLAG = 0x04,
T3_READ_FENCE_FLAG = 0x08,
T3_LOCAL_FENCE_FLAG = 0x10
} __packed;
enum t3_wr_opcode {
T3_WR_BP = FW_WROPCODE_RI_BYPASS,
T3_WR_SEND = FW_WROPCODE_RI_SEND,
T3_WR_WRITE = FW_WROPCODE_RI_RDMA_WRITE,
T3_WR_READ = FW_WROPCODE_RI_RDMA_READ,
T3_WR_INV_STAG = FW_WROPCODE_RI_LOCAL_INV,
T3_WR_BIND = FW_WROPCODE_RI_BIND_MW,
T3_WR_RCV = FW_WROPCODE_RI_RECEIVE,
T3_WR_INIT = FW_WROPCODE_RI_RDMA_INIT,
T3_WR_QP_MOD = FW_WROPCODE_RI_MODIFY_QP,
T3_WR_FASTREG = FW_WROPCODE_RI_FASTREGISTER_MR
} __packed;
enum t3_rdma_opcode {
T3_RDMA_WRITE, /* IETF RDMAP v1.0 ... */
T3_READ_REQ,
T3_READ_RESP,
T3_SEND,
T3_SEND_WITH_INV,
T3_SEND_WITH_SE,
T3_SEND_WITH_SE_INV,
T3_TERMINATE,
T3_RDMA_INIT, /* CHELSIO RI specific ... */
T3_BIND_MW,
T3_FAST_REGISTER,
T3_LOCAL_INV,
T3_QP_MOD,
T3_BYPASS,
T3_RDMA_READ_REQ_WITH_INV,
} __packed;
static inline enum t3_rdma_opcode wr2opcode(enum t3_wr_opcode wrop)
{
switch (wrop) {
case T3_WR_BP: return T3_BYPASS;
case T3_WR_SEND: return T3_SEND;
case T3_WR_WRITE: return T3_RDMA_WRITE;
case T3_WR_READ: return T3_READ_REQ;
case T3_WR_INV_STAG: return T3_LOCAL_INV;
case T3_WR_BIND: return T3_BIND_MW;
case T3_WR_INIT: return T3_RDMA_INIT;
case T3_WR_QP_MOD: return T3_QP_MOD;
case T3_WR_FASTREG: return T3_FAST_REGISTER;
default: break;
}
return -1;
}
/* Work request id */
union t3_wrid {
struct {
u32 hi;
u32 low;
} id0;
u64 id1;
};
#define WRID(wrid) (wrid.id1)
#define WRID_GEN(wrid) (wrid.id0.wr_gen)
#define WRID_IDX(wrid) (wrid.id0.wr_idx)
#define WRID_LO(wrid) (wrid.id0.wr_lo)
struct fw_riwrh {
__be32 op_seop_flags;
__be32 gen_tid_len;
};
#define S_FW_RIWR_OP 24
#define M_FW_RIWR_OP 0xff
#define V_FW_RIWR_OP(x) ((x) << S_FW_RIWR_OP)
#define G_FW_RIWR_OP(x) ((((x) >> S_FW_RIWR_OP)) & M_FW_RIWR_OP)
#define S_FW_RIWR_SOPEOP 22
#define M_FW_RIWR_SOPEOP 0x3
#define V_FW_RIWR_SOPEOP(x) ((x) << S_FW_RIWR_SOPEOP)
#define S_FW_RIWR_FLAGS 8
#define M_FW_RIWR_FLAGS 0x3fffff
#define V_FW_RIWR_FLAGS(x) ((x) << S_FW_RIWR_FLAGS)
#define G_FW_RIWR_FLAGS(x) ((((x) >> S_FW_RIWR_FLAGS)) & M_FW_RIWR_FLAGS)
#define S_FW_RIWR_TID 8
#define V_FW_RIWR_TID(x) ((x) << S_FW_RIWR_TID)
#define S_FW_RIWR_LEN 0
#define V_FW_RIWR_LEN(x) ((x) << S_FW_RIWR_LEN)
#define S_FW_RIWR_GEN 31
#define V_FW_RIWR_GEN(x) ((x) << S_FW_RIWR_GEN)
struct t3_sge {
__be32 stag;
__be32 len;
__be64 to;
};
/* If num_sgle is zero, flit 5+ contains immediate data.*/
struct t3_send_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
u8 rdmaop; /* 2 */
u8 reserved[3];
__be32 rem_stag;
__be32 plen; /* 3 */
__be32 num_sgle;
struct t3_sge sgl[T3_MAX_SGE]; /* 4+ */
};
#define T3_MAX_FASTREG_DEPTH 10
#define T3_MAX_FASTREG_FRAG 10
struct t3_fastreg_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
__be32 stag; /* 2 */
__be32 len;
__be32 va_base_hi; /* 3 */
__be32 va_base_lo_fbo;
__be32 page_type_perms; /* 4 */
__be32 reserved1;
__be64 pbl_addrs[0]; /* 5+ */
};
/*
* If a fastreg wr spans multiple wqes, then the 2nd fragment look like this.
*/
struct t3_pbl_frag {
struct fw_riwrh wrh; /* 0 */
__be64 pbl_addrs[14]; /* 1..14 */
};
#define S_FR_PAGE_COUNT 24
#define M_FR_PAGE_COUNT 0xff
#define V_FR_PAGE_COUNT(x) ((x) << S_FR_PAGE_COUNT)
#define G_FR_PAGE_COUNT(x) ((((x) >> S_FR_PAGE_COUNT)) & M_FR_PAGE_COUNT)
#define S_FR_PAGE_SIZE 16
#define M_FR_PAGE_SIZE 0x1f
#define V_FR_PAGE_SIZE(x) ((x) << S_FR_PAGE_SIZE)
#define G_FR_PAGE_SIZE(x) ((((x) >> S_FR_PAGE_SIZE)) & M_FR_PAGE_SIZE)
#define S_FR_TYPE 8
#define M_FR_TYPE 0x1
#define V_FR_TYPE(x) ((x) << S_FR_TYPE)
#define G_FR_TYPE(x) ((((x) >> S_FR_TYPE)) & M_FR_TYPE)
#define S_FR_PERMS 0
#define M_FR_PERMS 0xff
#define V_FR_PERMS(x) ((x) << S_FR_PERMS)
#define G_FR_PERMS(x) ((((x) >> S_FR_PERMS)) & M_FR_PERMS)
struct t3_local_inv_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
__be32 stag; /* 2 */
__be32 reserved;
};
struct t3_rdma_write_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
u8 rdmaop; /* 2 */
u8 reserved[3];
__be32 stag_sink;
__be64 to_sink; /* 3 */
__be32 plen; /* 4 */
__be32 num_sgle;
struct t3_sge sgl[T3_MAX_SGE]; /* 5+ */
};
struct t3_rdma_read_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
u8 rdmaop; /* 2 */
u8 local_inv;
u8 reserved[2];
__be32 rem_stag;
__be64 rem_to; /* 3 */
__be32 local_stag; /* 4 */
__be32 local_len;
__be64 local_to; /* 5 */
};
struct t3_bind_mw_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
u16 reserved; /* 2 */
u8 type;
u8 perms;
__be32 mr_stag;
__be32 mw_stag; /* 3 */
__be32 mw_len;
__be64 mw_va; /* 4 */
__be32 mr_pbl_addr; /* 5 */
u8 reserved2[3];
u8 mr_pagesz;
};
struct t3_receive_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
u8 pagesz[T3_MAX_SGE];
__be32 num_sgle; /* 2 */
struct t3_sge sgl[T3_MAX_SGE]; /* 3+ */
__be32 pbl_addr[T3_MAX_SGE];
};
struct t3_bypass_wr {
struct fw_riwrh wrh;
union t3_wrid wrid; /* 1 */
};
struct t3_modify_qp_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
__be32 flags; /* 2 */
__be32 quiesce; /* 2 */
__be32 max_ird; /* 3 */
__be32 max_ord; /* 3 */
__be64 sge_cmd; /* 4 */
__be64 ctx1; /* 5 */
__be64 ctx0; /* 6 */
};
enum t3_modify_qp_flags {
MODQP_QUIESCE = 0x01,
MODQP_MAX_IRD = 0x02,
MODQP_MAX_ORD = 0x04,
MODQP_WRITE_EC = 0x08,
MODQP_READ_EC = 0x10,
};
enum t3_mpa_attrs {
uP_RI_MPA_RX_MARKER_ENABLE = 0x1,
uP_RI_MPA_TX_MARKER_ENABLE = 0x2,
uP_RI_MPA_CRC_ENABLE = 0x4,
uP_RI_MPA_IETF_ENABLE = 0x8
} __packed;
enum t3_qp_caps {
uP_RI_QP_RDMA_READ_ENABLE = 0x01,
uP_RI_QP_RDMA_WRITE_ENABLE = 0x02,
uP_RI_QP_BIND_ENABLE = 0x04,
uP_RI_QP_FAST_REGISTER_ENABLE = 0x08,
uP_RI_QP_STAG0_ENABLE = 0x10
} __packed;
enum rdma_init_rtr_types {
RTR_READ = 1,
RTR_WRITE = 2,
RTR_SEND = 3,
};
#define S_RTR_TYPE 2
#define M_RTR_TYPE 0x3
#define V_RTR_TYPE(x) ((x) << S_RTR_TYPE)
#define G_RTR_TYPE(x) ((((x) >> S_RTR_TYPE)) & M_RTR_TYPE)
#define S_CHAN 4
#define M_CHAN 0x3
#define V_CHAN(x) ((x) << S_CHAN)
#define G_CHAN(x) ((((x) >> S_CHAN)) & M_CHAN)
struct t3_rdma_init_attr {
u32 tid;
u32 qpid;
u32 pdid;
u32 scqid;
u32 rcqid;
u32 rq_addr;
u32 rq_size;
enum t3_mpa_attrs mpaattrs;
enum t3_qp_caps qpcaps;
u16 tcp_emss;
u32 ord;
u32 ird;
u64 qp_dma_addr;
u32 qp_dma_size;
enum rdma_init_rtr_types rtr_type;
u16 flags;
u16 rqe_count;
u32 irs;
u32 chan;
};
struct t3_rdma_init_wr {
struct fw_riwrh wrh; /* 0 */
union t3_wrid wrid; /* 1 */
__be32 qpid; /* 2 */
__be32 pdid;
__be32 scqid; /* 3 */
__be32 rcqid;
__be32 rq_addr; /* 4 */
__be32 rq_size;
u8 mpaattrs; /* 5 */
u8 qpcaps;
__be16 ulpdu_size;
__be16 flags_rtr_type;
__be16 rqe_count;
__be32 ord; /* 6 */
__be32 ird;
__be64 qp_dma_addr; /* 7 */
__be32 qp_dma_size; /* 8 */
__be32 irs;
};
struct t3_genbit {
u64 flit[15];
__be64 genbit;
};
struct t3_wq_in_err {
u64 flit[13];
u64 err;
};
enum rdma_init_wr_flags {
MPA_INITIATOR = (1<<0),
PRIV_QP = (1<<1),
};
union t3_wr {
struct t3_send_wr send;
struct t3_rdma_write_wr write;
struct t3_rdma_read_wr read;
struct t3_receive_wr recv;
struct t3_fastreg_wr fastreg;
struct t3_pbl_frag pbl_frag;
struct t3_local_inv_wr local_inv;
struct t3_bind_mw_wr bind;
struct t3_bypass_wr bypass;
struct t3_rdma_init_wr init;
struct t3_modify_qp_wr qp_mod;
struct t3_genbit genbit;
struct t3_wq_in_err wq_in_err;
__be64 flit[16];
};
#define T3_SQ_CQE_FLIT 13
#define T3_SQ_COOKIE_FLIT 14
#define T3_RQ_COOKIE_FLIT 13
#define T3_RQ_CQE_FLIT 14
static inline enum t3_wr_opcode fw_riwrh_opcode(struct fw_riwrh *wqe)
{
return G_FW_RIWR_OP(be32_to_cpu(wqe->op_seop_flags));
}
enum t3_wr_hdr_bits {
T3_EOP = 1,
T3_SOP = 2,
T3_SOPEOP = T3_EOP|T3_SOP,
};
static inline void build_fw_riwrh(struct fw_riwrh *wqe, enum t3_wr_opcode op,
enum t3_wr_flags flags, u8 genbit, u32 tid,
u8 len, u8 sopeop)
{
wqe->op_seop_flags = cpu_to_be32(V_FW_RIWR_OP(op) |
V_FW_RIWR_SOPEOP(sopeop) |
V_FW_RIWR_FLAGS(flags));
wmb();
wqe->gen_tid_len = cpu_to_be32(V_FW_RIWR_GEN(genbit) |
V_FW_RIWR_TID(tid) |
V_FW_RIWR_LEN(len));
/* 2nd gen bit... */
((union t3_wr *)wqe)->genbit.genbit = cpu_to_be64(genbit);
}
/*
* T3 ULP2_TX commands
*/
enum t3_utx_mem_op {
T3_UTX_MEM_READ = 2,
T3_UTX_MEM_WRITE = 3
};
/* T3 MC7 RDMA TPT entry format */
enum tpt_mem_type {
TPT_NON_SHARED_MR = 0x0,
TPT_SHARED_MR = 0x1,
TPT_MW = 0x2,
TPT_MW_RELAXED_PROTECTION = 0x3
};
enum tpt_addr_type {
TPT_ZBTO = 0,
TPT_VATO = 1
};
enum tpt_mem_perm {
TPT_MW_BIND = 0x10,
TPT_LOCAL_READ = 0x8,
TPT_LOCAL_WRITE = 0x4,
TPT_REMOTE_READ = 0x2,
TPT_REMOTE_WRITE = 0x1
};
struct tpt_entry {
__be32 valid_stag_pdid;
__be32 flags_pagesize_qpid;
__be32 rsvd_pbl_addr;
__be32 len;
__be32 va_hi;
__be32 va_low_or_fbo;
__be32 rsvd_bind_cnt_or_pstag;
__be32 rsvd_pbl_size;
};
#define S_TPT_VALID 31
#define V_TPT_VALID(x) ((x) << S_TPT_VALID)
#define F_TPT_VALID V_TPT_VALID(1U)
#define S_TPT_STAG_KEY 23
#define M_TPT_STAG_KEY 0xFF
#define V_TPT_STAG_KEY(x) ((x) << S_TPT_STAG_KEY)
#define G_TPT_STAG_KEY(x) (((x) >> S_TPT_STAG_KEY) & M_TPT_STAG_KEY)
#define S_TPT_STAG_STATE 22
#define V_TPT_STAG_STATE(x) ((x) << S_TPT_STAG_STATE)
#define F_TPT_STAG_STATE V_TPT_STAG_STATE(1U)
#define S_TPT_STAG_TYPE 20
#define M_TPT_STAG_TYPE 0x3
#define V_TPT_STAG_TYPE(x) ((x) << S_TPT_STAG_TYPE)
#define G_TPT_STAG_TYPE(x) (((x) >> S_TPT_STAG_TYPE) & M_TPT_STAG_TYPE)
#define S_TPT_PDID 0
#define M_TPT_PDID 0xFFFFF
#define V_TPT_PDID(x) ((x) << S_TPT_PDID)
#define G_TPT_PDID(x) (((x) >> S_TPT_PDID) & M_TPT_PDID)
#define S_TPT_PERM 28
#define M_TPT_PERM 0xF
#define V_TPT_PERM(x) ((x) << S_TPT_PERM)
#define G_TPT_PERM(x) (((x) >> S_TPT_PERM) & M_TPT_PERM)
#define S_TPT_REM_INV_DIS 27
#define V_TPT_REM_INV_DIS(x) ((x) << S_TPT_REM_INV_DIS)
#define F_TPT_REM_INV_DIS V_TPT_REM_INV_DIS(1U)
#define S_TPT_ADDR_TYPE 26
#define V_TPT_ADDR_TYPE(x) ((x) << S_TPT_ADDR_TYPE)
#define F_TPT_ADDR_TYPE V_TPT_ADDR_TYPE(1U)
#define S_TPT_MW_BIND_ENABLE 25
#define V_TPT_MW_BIND_ENABLE(x) ((x) << S_TPT_MW_BIND_ENABLE)
#define F_TPT_MW_BIND_ENABLE V_TPT_MW_BIND_ENABLE(1U)
#define S_TPT_PAGE_SIZE 20
#define M_TPT_PAGE_SIZE 0x1F
#define V_TPT_PAGE_SIZE(x) ((x) << S_TPT_PAGE_SIZE)
#define G_TPT_PAGE_SIZE(x) (((x) >> S_TPT_PAGE_SIZE) & M_TPT_PAGE_SIZE)
#define S_TPT_PBL_ADDR 0
#define M_TPT_PBL_ADDR 0x1FFFFFFF
#define V_TPT_PBL_ADDR(x) ((x) << S_TPT_PBL_ADDR)
#define G_TPT_PBL_ADDR(x) (((x) >> S_TPT_PBL_ADDR) & M_TPT_PBL_ADDR)
#define S_TPT_QPID 0
#define M_TPT_QPID 0xFFFFF
#define V_TPT_QPID(x) ((x) << S_TPT_QPID)
#define G_TPT_QPID(x) (((x) >> S_TPT_QPID) & M_TPT_QPID)
#define S_TPT_PSTAG 0
#define M_TPT_PSTAG 0xFFFFFF
#define V_TPT_PSTAG(x) ((x) << S_TPT_PSTAG)
#define G_TPT_PSTAG(x) (((x) >> S_TPT_PSTAG) & M_TPT_PSTAG)
#define S_TPT_PBL_SIZE 0
#define M_TPT_PBL_SIZE 0xFFFFF
#define V_TPT_PBL_SIZE(x) ((x) << S_TPT_PBL_SIZE)
#define G_TPT_PBL_SIZE(x) (((x) >> S_TPT_PBL_SIZE) & M_TPT_PBL_SIZE)
/*
* CQE defs
*/
struct t3_cqe {
__be32 header;
__be32 len;
union {
struct {
__be32 stag;
__be32 msn;
} rcqe;
struct {
u32 wrid_hi;
u32 wrid_low;
} scqe;
} u;
};
#define S_CQE_OOO 31
#define M_CQE_OOO 0x1
#define G_CQE_OOO(x) ((((x) >> S_CQE_OOO)) & M_CQE_OOO)
#define V_CEQ_OOO(x) ((x)<<S_CQE_OOO)
#define S_CQE_QPID 12
#define M_CQE_QPID 0x7FFFF
#define G_CQE_QPID(x) ((((x) >> S_CQE_QPID)) & M_CQE_QPID)
#define V_CQE_QPID(x) ((x)<<S_CQE_QPID)
#define S_CQE_SWCQE 11
#define M_CQE_SWCQE 0x1
#define G_CQE_SWCQE(x) ((((x) >> S_CQE_SWCQE)) & M_CQE_SWCQE)
#define V_CQE_SWCQE(x) ((x)<<S_CQE_SWCQE)
#define S_CQE_GENBIT 10
#define M_CQE_GENBIT 0x1
#define G_CQE_GENBIT(x) (((x) >> S_CQE_GENBIT) & M_CQE_GENBIT)
#define V_CQE_GENBIT(x) ((x)<<S_CQE_GENBIT)
#define S_CQE_STATUS 5
#define M_CQE_STATUS 0x1F
#define G_CQE_STATUS(x) ((((x) >> S_CQE_STATUS)) & M_CQE_STATUS)
#define V_CQE_STATUS(x) ((x)<<S_CQE_STATUS)
#define S_CQE_TYPE 4
#define M_CQE_TYPE 0x1
#define G_CQE_TYPE(x) ((((x) >> S_CQE_TYPE)) & M_CQE_TYPE)
#define V_CQE_TYPE(x) ((x)<<S_CQE_TYPE)
#define S_CQE_OPCODE 0
#define M_CQE_OPCODE 0xF
#define G_CQE_OPCODE(x) ((((x) >> S_CQE_OPCODE)) & M_CQE_OPCODE)
#define V_CQE_OPCODE(x) ((x)<<S_CQE_OPCODE)
#define SW_CQE(x) (G_CQE_SWCQE(be32_to_cpu((x).header)))
#define CQE_OOO(x) (G_CQE_OOO(be32_to_cpu((x).header)))
#define CQE_QPID(x) (G_CQE_QPID(be32_to_cpu((x).header)))
#define CQE_GENBIT(x) (G_CQE_GENBIT(be32_to_cpu((x).header)))
#define CQE_TYPE(x) (G_CQE_TYPE(be32_to_cpu((x).header)))
#define SQ_TYPE(x) (CQE_TYPE((x)))
#define RQ_TYPE(x) (!CQE_TYPE((x)))
#define CQE_STATUS(x) (G_CQE_STATUS(be32_to_cpu((x).header)))
#define CQE_OPCODE(x) (G_CQE_OPCODE(be32_to_cpu((x).header)))
#define CQE_SEND_OPCODE(x)( \
(G_CQE_OPCODE(be32_to_cpu((x).header)) == T3_SEND) || \
(G_CQE_OPCODE(be32_to_cpu((x).header)) == T3_SEND_WITH_SE) || \
(G_CQE_OPCODE(be32_to_cpu((x).header)) == T3_SEND_WITH_INV) || \
(G_CQE_OPCODE(be32_to_cpu((x).header)) == T3_SEND_WITH_SE_INV))
#define CQE_LEN(x) (be32_to_cpu((x).len))
/* used for RQ completion processing */
#define CQE_WRID_STAG(x) (be32_to_cpu((x).u.rcqe.stag))
#define CQE_WRID_MSN(x) (be32_to_cpu((x).u.rcqe.msn))
/* used for SQ completion processing */
#define CQE_WRID_SQ_WPTR(x) ((x).u.scqe.wrid_hi)
#define CQE_WRID_WPTR(x) ((x).u.scqe.wrid_low)
/* generic accessor macros */
#define CQE_WRID_HI(x) ((x).u.scqe.wrid_hi)
#define CQE_WRID_LOW(x) ((x).u.scqe.wrid_low)
#define TPT_ERR_SUCCESS 0x0
#define TPT_ERR_STAG 0x1 /* STAG invalid: either the */
/* STAG is offlimt, being 0, */
/* or STAG_key mismatch */
#define TPT_ERR_PDID 0x2 /* PDID mismatch */
#define TPT_ERR_QPID 0x3 /* QPID mismatch */
#define TPT_ERR_ACCESS 0x4 /* Invalid access right */
#define TPT_ERR_WRAP 0x5 /* Wrap error */
#define TPT_ERR_BOUND 0x6 /* base and bounds voilation */
#define TPT_ERR_INVALIDATE_SHARED_MR 0x7 /* attempt to invalidate a */
/* shared memory region */
#define TPT_ERR_INVALIDATE_MR_WITH_MW_BOUND 0x8 /* attempt to invalidate a */
/* shared memory region */
#define TPT_ERR_ECC 0x9 /* ECC error detected */
#define TPT_ERR_ECC_PSTAG 0xA /* ECC error detected when */
/* reading PSTAG for a MW */
/* Invalidate */
#define TPT_ERR_PBL_ADDR_BOUND 0xB /* pbl addr out of bounds: */
/* software error */
#define TPT_ERR_SWFLUSH 0xC /* SW FLUSHED */
#define TPT_ERR_CRC 0x10 /* CRC error */
#define TPT_ERR_MARKER 0x11 /* Marker error */
#define TPT_ERR_PDU_LEN_ERR 0x12 /* invalid PDU length */
#define TPT_ERR_OUT_OF_RQE 0x13 /* out of RQE */
#define TPT_ERR_DDP_VERSION 0x14 /* wrong DDP version */
#define TPT_ERR_RDMA_VERSION 0x15 /* wrong RDMA version */
#define TPT_ERR_OPCODE 0x16 /* invalid rdma opcode */
#define TPT_ERR_DDP_QUEUE_NUM 0x17 /* invalid ddp queue number */
#define TPT_ERR_MSN 0x18 /* MSN error */
#define TPT_ERR_TBIT 0x19 /* tag bit not set correctly */
#define TPT_ERR_MO 0x1A /* MO not 0 for TERMINATE */
/* or READ_REQ */
#define TPT_ERR_MSN_GAP 0x1B
#define TPT_ERR_MSN_RANGE 0x1C
#define TPT_ERR_IRD_OVERFLOW 0x1D
#define TPT_ERR_RQE_ADDR_BOUND 0x1E /* RQE addr out of bounds: */
/* software error */
#define TPT_ERR_INTERNAL_ERR 0x1F /* internal error (opcode */
/* mismatch) */
struct t3_swsq {
__u64 wr_id;
struct t3_cqe cqe;
__u32 sq_wptr;
__be32 read_len;
int opcode;
int complete;
int signaled;
};
struct t3_swrq {
__u64 wr_id;
__u32 pbl_addr;
};
/*
* A T3 WQ implements both the SQ and RQ.
*/
struct t3_wq {
union t3_wr *queue; /* DMA accessible memory */
dma_addr_t dma_addr; /* DMA address for HW */
DEFINE_DMA_UNMAP_ADDR(mapping); /* unmap kruft */
u32 error; /* 1 once we go to ERROR */
u32 qpid;
u32 wptr; /* idx to next available WR slot */
u32 size_log2; /* total wq size */
struct t3_swsq *sq; /* SW SQ */
struct t3_swsq *oldest_read; /* tracks oldest pending read */
u32 sq_wptr; /* sq_wptr - sq_rptr == count of */
u32 sq_rptr; /* pending wrs */
u32 sq_size_log2; /* sq size */
struct t3_swrq *rq; /* SW RQ (holds consumer wr_ids */
u32 rq_wptr; /* rq_wptr - rq_rptr == count of */
u32 rq_rptr; /* pending wrs */
struct t3_swrq *rq_oldest_wr; /* oldest wr on the SW RQ */
u32 rq_size_log2; /* rq size */
u32 rq_addr; /* rq adapter address */
void __iomem *doorbell; /* kernel db */
u64 udb; /* user db if any */
struct cxio_rdev *rdev;
};
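/*
 * Illustrative sketch, not part of the original driver: the wptr/rptr pairs
 * above are free-running counters, so unsigned subtraction gives the number
 * of pending WRs even across wrap-around, and the free-slot count follows
 * from the log2 queue sizes.  The helper names here are hypothetical.
 */
static inline u32 t3_wq_sq_pending(const struct t3_wq *wq)
{
	return wq->sq_wptr - wq->sq_rptr;	/* outstanding SQ WRs */
}

static inline u32 t3_wq_rq_free(const struct t3_wq *wq)
{
	return (1U << wq->rq_size_log2) - (wq->rq_wptr - wq->rq_rptr);
}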
struct t3_cq {
u32 cqid;
u32 rptr;
u32 wptr;
u32 size_log2;
dma_addr_t dma_addr;
DEFINE_DMA_UNMAP_ADDR(mapping);
struct t3_cqe *queue;
struct t3_cqe *sw_queue;
u32 sw_rptr;
u32 sw_wptr;
};
#define CQ_VLD_ENTRY(ptr,size_log2,cqe) (Q_GENBIT(ptr,size_log2) == \
CQE_GENBIT(*cqe))
struct t3_cq_status_page {
u32 cq_err;
};
static inline int cxio_cq_in_error(struct t3_cq *cq)
{
return ((struct t3_cq_status_page *)
&cq->queue[1 << cq->size_log2])->cq_err;
}
static inline void cxio_set_cq_in_error(struct t3_cq *cq)
{
((struct t3_cq_status_page *)
&cq->queue[1 << cq->size_log2])->cq_err = 1;
}
static inline void cxio_set_wq_in_error(struct t3_wq *wq)
{
wq->queue->wq_in_err.err |= 1;
}
static inline void cxio_disable_wq_db(struct t3_wq *wq)
{
wq->queue->wq_in_err.err |= 2;
}
static inline void cxio_enable_wq_db(struct t3_wq *wq)
{
wq->queue->wq_in_err.err &= ~2;
}
static inline int cxio_wq_db_enabled(struct t3_wq *wq)
{
return !(wq->queue->wq_in_err.err & 2);
}
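/*
 * The 'err' word manipulated by the four helpers above doubles as a flag
 * word: bit 0 marks the WQ as being in error, bit 1 disables user doorbell
 * rings until the doorbell-drop recovery path re-enables them.
 */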
static inline struct t3_cqe *cxio_next_hw_cqe(struct t3_cq *cq)
{
struct t3_cqe *cqe;
cqe = cq->queue + (Q_PTR2IDX(cq->rptr, cq->size_log2));
if (CQ_VLD_ENTRY(cq->rptr, cq->size_log2, cqe))
return cqe;
return NULL;
}
static inline struct t3_cqe *cxio_next_sw_cqe(struct t3_cq *cq)
{
struct t3_cqe *cqe;
if (!Q_EMPTY(cq->sw_rptr, cq->sw_wptr)) {
cqe = cq->sw_queue + (Q_PTR2IDX(cq->sw_rptr, cq->size_log2));
return cqe;
}
return NULL;
}
static inline struct t3_cqe *cxio_next_cqe(struct t3_cq *cq)
{
struct t3_cqe *cqe;
if (!Q_EMPTY(cq->sw_rptr, cq->sw_wptr)) {
cqe = cq->sw_queue + (Q_PTR2IDX(cq->sw_rptr, cq->size_log2));
return cqe;
}
cqe = cq->queue + (Q_PTR2IDX(cq->rptr, cq->size_log2));
if (CQ_VLD_ENTRY(cq->rptr, cq->size_log2, cqe))
return cqe;
return NULL;
}
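/*
 * Illustrative sketch, not part of the original driver: a minimal consumer
 * built on cxio_next_cqe().  Software CQEs are drained before hardware CQEs,
 * and the matching read pointer is advanced once an entry has been handled;
 * the real poll paths live in the cxio/iwch CQ code, and the names below are
 * hypothetical.
 */
static inline void cxio_poll_cq_example(struct t3_cq *cq,
					void (*handle)(struct t3_cqe *cqe))
{
	struct t3_cqe *cqe;

	while ((cqe = cxio_next_cqe(cq)) != NULL) {
		handle(cqe);
		if (!Q_EMPTY(cq->sw_rptr, cq->sw_wptr))
			cq->sw_rptr++;		/* consumed a software CQE */
		else
			cq->rptr++;		/* consumed a hardware CQE */
	}
}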
#endif


@@ -1,282 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <rdma/ib_verbs.h>
#include "cxgb3_offload.h"
#include "iwch_provider.h"
#include <rdma/cxgb3-abi.h>
#include "iwch.h"
#include "iwch_cm.h"
#define DRV_VERSION "1.1"
MODULE_AUTHOR("Boyd Faulkner, Steve Wise");
MODULE_DESCRIPTION("Chelsio T3 RDMA Driver");
MODULE_LICENSE("Dual BSD/GPL");
static void open_rnic_dev(struct t3cdev *);
static void close_rnic_dev(struct t3cdev *);
static void iwch_event_handler(struct t3cdev *, u32, u32);
struct cxgb3_client t3c_client = {
.name = "iw_cxgb3",
.add = open_rnic_dev,
.remove = close_rnic_dev,
.handlers = t3c_handlers,
.redirect = iwch_ep_redirect,
.event_handler = iwch_event_handler
};
static LIST_HEAD(dev_list);
static DEFINE_MUTEX(dev_mutex);
static void disable_dbs(struct iwch_dev *rnicp)
{
unsigned long index;
struct iwch_qp *qhp;
xa_lock_irq(&rnicp->qps);
xa_for_each(&rnicp->qps, index, qhp)
cxio_disable_wq_db(&qhp->wq);
xa_unlock_irq(&rnicp->qps);
}
static void enable_dbs(struct iwch_dev *rnicp, int ring_db)
{
unsigned long index;
struct iwch_qp *qhp;
xa_lock_irq(&rnicp->qps);
xa_for_each(&rnicp->qps, index, qhp) {
if (ring_db)
ring_doorbell(qhp->rhp->rdev.ctrl_qp.doorbell,
qhp->wq.qpid);
cxio_enable_wq_db(&qhp->wq);
}
xa_unlock_irq(&rnicp->qps);
}
static void iwch_db_drop_task(struct work_struct *work)
{
struct iwch_dev *rnicp = container_of(work, struct iwch_dev,
db_drop_task.work);
enable_dbs(rnicp, 1);
}
static void rnic_init(struct iwch_dev *rnicp)
{
pr_debug("%s iwch_dev %p\n", __func__, rnicp);
xa_init_flags(&rnicp->cqs, XA_FLAGS_LOCK_IRQ);
xa_init_flags(&rnicp->qps, XA_FLAGS_LOCK_IRQ);
xa_init_flags(&rnicp->mrs, XA_FLAGS_LOCK_IRQ);
INIT_DELAYED_WORK(&rnicp->db_drop_task, iwch_db_drop_task);
rnicp->attr.max_qps = T3_MAX_NUM_QP - 32;
rnicp->attr.max_wrs = T3_MAX_QP_DEPTH;
rnicp->attr.max_sge_per_wr = T3_MAX_SGE;
rnicp->attr.max_sge_per_rdma_write_wr = T3_MAX_SGE;
rnicp->attr.max_cqs = T3_MAX_NUM_CQ - 1;
rnicp->attr.max_cqes_per_cq = T3_MAX_CQ_DEPTH;
rnicp->attr.max_mem_regs = cxio_num_stags(&rnicp->rdev);
rnicp->attr.max_phys_buf_entries = T3_MAX_PBL_SIZE;
rnicp->attr.max_pds = T3_MAX_NUM_PD - 1;
rnicp->attr.mem_pgsizes_bitmask = T3_PAGESIZE_MASK;
rnicp->attr.max_mr_size = T3_MAX_MR_SIZE;
rnicp->attr.can_resize_wq = 0;
rnicp->attr.max_rdma_reads_per_qp = 8;
rnicp->attr.max_rdma_read_resources =
rnicp->attr.max_rdma_reads_per_qp * rnicp->attr.max_qps;
rnicp->attr.max_rdma_read_qp_depth = 8; /* IRD */
rnicp->attr.max_rdma_read_depth =
rnicp->attr.max_rdma_read_qp_depth * rnicp->attr.max_qps;
rnicp->attr.rq_overflow_handled = 0;
rnicp->attr.can_modify_ird = 0;
rnicp->attr.can_modify_ord = 0;
rnicp->attr.max_mem_windows = rnicp->attr.max_mem_regs - 1;
rnicp->attr.stag0_value = 1;
rnicp->attr.zbva_support = 1;
rnicp->attr.local_invalidate_fence = 1;
rnicp->attr.cq_overflow_detection = 1;
return;
}
static void open_rnic_dev(struct t3cdev *tdev)
{
struct iwch_dev *rnicp;
pr_debug("%s t3cdev %p\n", __func__, tdev);
pr_info_once("Chelsio T3 RDMA Driver - version %s\n", DRV_VERSION);
rnicp = ib_alloc_device(iwch_dev, ibdev);
if (!rnicp) {
pr_err("Cannot allocate ib device\n");
return;
}
rnicp->rdev.ulp = rnicp;
rnicp->rdev.t3cdev_p = tdev;
mutex_lock(&dev_mutex);
if (cxio_rdev_open(&rnicp->rdev)) {
mutex_unlock(&dev_mutex);
pr_err("Unable to open CXIO rdev\n");
ib_dealloc_device(&rnicp->ibdev);
return;
}
rnic_init(rnicp);
list_add_tail(&rnicp->entry, &dev_list);
mutex_unlock(&dev_mutex);
if (iwch_register_device(rnicp)) {
pr_err("Unable to register device\n");
close_rnic_dev(tdev);
}
pr_info("Initialized device %s\n",
pci_name(rnicp->rdev.rnic_info.pdev));
return;
}
static void close_rnic_dev(struct t3cdev *tdev)
{
struct iwch_dev *dev, *tmp;
pr_debug("%s t3cdev %p\n", __func__, tdev);
mutex_lock(&dev_mutex);
list_for_each_entry_safe(dev, tmp, &dev_list, entry) {
if (dev->rdev.t3cdev_p == tdev) {
dev->rdev.flags = CXIO_ERROR_FATAL;
synchronize_net();
cancel_delayed_work_sync(&dev->db_drop_task);
list_del(&dev->entry);
iwch_unregister_device(dev);
cxio_rdev_close(&dev->rdev);
WARN_ON(!xa_empty(&dev->cqs));
WARN_ON(!xa_empty(&dev->qps));
WARN_ON(!xa_empty(&dev->mrs));
ib_dealloc_device(&dev->ibdev);
break;
}
}
mutex_unlock(&dev_mutex);
}
static void iwch_event_handler(struct t3cdev *tdev, u32 evt, u32 port_id)
{
struct cxio_rdev *rdev = tdev->ulp;
struct iwch_dev *rnicp;
struct ib_event event;
u32 portnum = port_id + 1;
int dispatch = 0;
if (!rdev)
return;
rnicp = rdev_to_iwch_dev(rdev);
switch (evt) {
case OFFLOAD_STATUS_DOWN: {
rdev->flags = CXIO_ERROR_FATAL;
synchronize_net();
event.event = IB_EVENT_DEVICE_FATAL;
dispatch = 1;
break;
}
case OFFLOAD_PORT_DOWN: {
event.event = IB_EVENT_PORT_ERR;
dispatch = 1;
break;
}
case OFFLOAD_PORT_UP: {
event.event = IB_EVENT_PORT_ACTIVE;
dispatch = 1;
break;
}
case OFFLOAD_DB_FULL: {
disable_dbs(rnicp);
break;
}
case OFFLOAD_DB_EMPTY: {
enable_dbs(rnicp, 1);
break;
}
case OFFLOAD_DB_DROP: {
unsigned long delay = 1000;
unsigned short r;
disable_dbs(rnicp);
get_random_bytes(&r, 2);
delay += r & 1023;
/*
* delay is between 1000-2023 usecs.
*/
schedule_delayed_work(&rnicp->db_drop_task,
usecs_to_jiffies(delay));
break;
}
}
if (dispatch) {
event.device = &rnicp->ibdev;
event.element.port_num = portnum;
ib_dispatch_event(&event);
}
return;
}
static int __init iwch_init_module(void)
{
int err;
err = cxio_hal_init();
if (err)
return err;
err = iwch_cm_init();
if (err)
return err;
cxio_register_ev_cb(iwch_ev_dispatch);
cxgb3_register_client(&t3c_client);
return 0;
}
static void __exit iwch_exit_module(void)
{
cxgb3_unregister_client(&t3c_client);
cxio_unregister_ev_cb(iwch_ev_dispatch);
iwch_cm_term();
cxio_hal_exit();
}
module_init(iwch_init_module);
module_exit(iwch_exit_module);


@@ -1,155 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __IWCH_H__
#define __IWCH_H__
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/xarray.h>
#include <linux/workqueue.h>
#include <rdma/ib_verbs.h>
#include "cxio_hal.h"
#include "cxgb3_offload.h"
struct iwch_pd;
struct iwch_cq;
struct iwch_qp;
struct iwch_mr;
struct iwch_rnic_attributes {
u32 max_qps;
u32 max_wrs; /* Max for any SQ/RQ */
u32 max_sge_per_wr;
u32 max_sge_per_rdma_write_wr; /* for RDMA Write WR */
u32 max_cqs;
u32 max_cqes_per_cq;
u32 max_mem_regs;
u32 max_phys_buf_entries; /* for phys buf list */
u32 max_pds;
/*
* The memory page sizes supported by this RNIC.
* Bit position i in bitmap indicates page of
* size (4k)^i. Phys block list mode unsupported.
*/
u32 mem_pgsizes_bitmask;
u64 max_mr_size;
u8 can_resize_wq;
/*
* The maximum number of RDMA Reads that can be outstanding
* per QP with this RNIC as the target.
*/
u32 max_rdma_reads_per_qp;
/*
* The maximum number of resources used for RDMA Reads
* by this RNIC with this RNIC as the target.
*/
u32 max_rdma_read_resources;
/*
* The max depth per QP for initiation of RDMA Read
* by this RNIC.
*/
u32 max_rdma_read_qp_depth;
/*
* The maximum depth for initiation of RDMA Read
* operations by this RNIC on all QPs
*/
u32 max_rdma_read_depth;
u8 rq_overflow_handled;
u32 can_modify_ird;
u32 can_modify_ord;
u32 max_mem_windows;
u32 stag0_value;
u8 zbva_support;
u8 local_invalidate_fence;
u32 cq_overflow_detection;
};
struct iwch_dev {
struct ib_device ibdev;
struct cxio_rdev rdev;
u32 device_cap_flags;
struct iwch_rnic_attributes attr;
struct xarray cqs;
struct xarray qps;
struct xarray mrs;
struct list_head entry;
struct delayed_work db_drop_task;
};
static inline struct iwch_dev *to_iwch_dev(struct ib_device *ibdev)
{
return container_of(ibdev, struct iwch_dev, ibdev);
}
static inline struct iwch_dev *rdev_to_iwch_dev(struct cxio_rdev *rdev)
{
return container_of(rdev, struct iwch_dev, rdev);
}
static inline int t3b_device(const struct iwch_dev *rhp)
{
return rhp->rdev.t3cdev_p->type == T3B;
}
static inline int t3a_device(const struct iwch_dev *rhp)
{
return rhp->rdev.t3cdev_p->type == T3A;
}
static inline struct iwch_cq *get_chp(struct iwch_dev *rhp, u32 cqid)
{
return xa_load(&rhp->cqs, cqid);
}
static inline struct iwch_qp *get_qhp(struct iwch_dev *rhp, u32 qpid)
{
return xa_load(&rhp->qps, qpid);
}
static inline struct iwch_mr *get_mhp(struct iwch_dev *rhp, u32 mmid)
{
return xa_load(&rhp->mrs, mmid);
}
extern struct cxgb3_client t3c_client;
extern cxgb3_cpl_handler_func t3c_handlers[NUM_CPL_CMDS];
extern void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb);
#endif

File diff suppressed because it is too large


@@ -1,233 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _IWCH_CM_H_
#define _IWCH_CM_H_
#include <linux/inet.h>
#include <linux/wait.h>
#include <linux/spinlock.h>
#include <linux/kref.h>
#include <rdma/ib_verbs.h>
#include <rdma/iw_cm.h>
#include "cxgb3_offload.h"
#include "iwch_provider.h"
#define MPA_KEY_REQ "MPA ID Req Frame"
#define MPA_KEY_REP "MPA ID Rep Frame"
#define MPA_MAX_PRIVATE_DATA 256
#define MPA_REV 0 /* XXX - amso1100 uses rev 0 ! */
#define MPA_REJECT 0x20
#define MPA_CRC 0x40
#define MPA_MARKERS 0x80
#define MPA_FLAGS_MASK 0xE0
#define put_ep(ep) { \
pr_debug("put_ep (via %s:%u) ep %p refcnt %d\n", \
__func__, __LINE__, ep, kref_read(&((ep)->kref))); \
WARN_ON(kref_read(&((ep)->kref)) < 1); \
kref_put(&((ep)->kref), __free_ep); \
}
#define get_ep(ep) { \
pr_debug("get_ep (via %s:%u) ep %p, refcnt %d\n", \
__func__, __LINE__, ep, kref_read(&((ep)->kref))); \
kref_get(&((ep)->kref)); \
}
struct mpa_message {
u8 key[16];
u8 flags;
u8 revision;
__be16 private_data_size;
u8 private_data[0];
};
struct terminate_message {
u8 layer_etype;
u8 ecode;
__be16 hdrct_rsvd;
u8 len_hdrs[0];
};
#define TERM_MAX_LENGTH (sizeof(struct terminate_message) + 2 + 18 + 28)
enum iwch_layers_types {
LAYER_RDMAP = 0x00,
LAYER_DDP = 0x10,
LAYER_MPA = 0x20,
RDMAP_LOCAL_CATA = 0x00,
RDMAP_REMOTE_PROT = 0x01,
RDMAP_REMOTE_OP = 0x02,
DDP_LOCAL_CATA = 0x00,
DDP_TAGGED_ERR = 0x01,
DDP_UNTAGGED_ERR = 0x02,
DDP_LLP = 0x03
};
enum iwch_rdma_ecodes {
RDMAP_INV_STAG = 0x00,
RDMAP_BASE_BOUNDS = 0x01,
RDMAP_ACC_VIOL = 0x02,
RDMAP_STAG_NOT_ASSOC = 0x03,
RDMAP_TO_WRAP = 0x04,
RDMAP_INV_VERS = 0x05,
RDMAP_INV_OPCODE = 0x06,
RDMAP_STREAM_CATA = 0x07,
RDMAP_GLOBAL_CATA = 0x08,
RDMAP_CANT_INV_STAG = 0x09,
RDMAP_UNSPECIFIED = 0xff
};
enum iwch_ddp_ecodes {
DDPT_INV_STAG = 0x00,
DDPT_BASE_BOUNDS = 0x01,
DDPT_STAG_NOT_ASSOC = 0x02,
DDPT_TO_WRAP = 0x03,
DDPT_INV_VERS = 0x04,
DDPU_INV_QN = 0x01,
DDPU_INV_MSN_NOBUF = 0x02,
DDPU_INV_MSN_RANGE = 0x03,
DDPU_INV_MO = 0x04,
DDPU_MSG_TOOBIG = 0x05,
DDPU_INV_VERS = 0x06
};
enum iwch_mpa_ecodes {
MPA_CRC_ERR = 0x02,
MPA_MARKER_ERR = 0x03
};
enum iwch_ep_state {
IDLE = 0,
LISTEN,
CONNECTING,
MPA_REQ_WAIT,
MPA_REQ_SENT,
MPA_REQ_RCVD,
MPA_REP_SENT,
FPDU_MODE,
ABORTING,
CLOSING,
MORIBUND,
DEAD,
};
enum iwch_ep_flags {
PEER_ABORT_IN_PROGRESS = 0,
ABORT_REQ_IN_PROGRESS = 1,
RELEASE_RESOURCES = 2,
CLOSE_SENT = 3,
};
struct iwch_ep_common {
struct iw_cm_id *cm_id;
struct iwch_qp *qp;
struct t3cdev *tdev;
enum iwch_ep_state state;
struct kref kref;
spinlock_t lock;
struct sockaddr_in local_addr;
struct sockaddr_in remote_addr;
wait_queue_head_t waitq;
int rpl_done;
int rpl_err;
unsigned long flags;
};
struct iwch_listen_ep {
struct iwch_ep_common com;
unsigned int stid;
int backlog;
};
struct iwch_ep {
struct iwch_ep_common com;
struct iwch_ep *parent_ep;
struct timer_list timer;
unsigned int atid;
u32 hwtid;
u32 snd_seq;
u32 rcv_seq;
struct l2t_entry *l2t;
struct dst_entry *dst;
struct sk_buff *mpa_skb;
struct iwch_mpa_attributes mpa_attr;
unsigned int mpa_pkt_len;
u8 mpa_pkt[sizeof(struct mpa_message) + MPA_MAX_PRIVATE_DATA];
u8 tos;
u16 emss;
u16 plen;
u32 ird;
u32 ord;
};
static inline struct iwch_ep *to_ep(struct iw_cm_id *cm_id)
{
return cm_id->provider_data;
}
static inline struct iwch_listen_ep *to_listen_ep(struct iw_cm_id *cm_id)
{
return cm_id->provider_data;
}
static inline int compute_wscale(int win)
{
int wscale = 0;
while (wscale < 14 && (65535<<wscale) < win)
wscale++;
return wscale;
}
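/*
 * Worked example: for a 256 KiB receive window, compute_wscale(262144)
 * returns 3, since 65535 << 2 = 262140 is still smaller than the window
 * while 65535 << 3 = 524280 covers it.
 */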
/* CM prototypes */
int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param);
int iwch_create_listen(struct iw_cm_id *cm_id, int backlog);
int iwch_destroy_listen(struct iw_cm_id *cm_id);
int iwch_reject_cr(struct iw_cm_id *cm_id, const void *pdata, u8 pdata_len);
int iwch_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param);
int iwch_ep_disconnect(struct iwch_ep *ep, int abrupt, gfp_t gfp);
int iwch_quiesce_tid(struct iwch_ep *ep);
int iwch_resume_tid(struct iwch_ep *ep);
void __free_ep(struct kref *kref);
void iwch_rearp(struct iwch_ep *ep);
int iwch_ep_redirect(void *ctx, struct dst_entry *old, struct dst_entry *new, struct l2t_entry *l2t);
int __init iwch_cm_init(void);
void __exit iwch_cm_term(void);
extern int peer2peer;
#endif /* _IWCH_CM_H_ */


@@ -1,230 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "iwch_provider.h"
#include "iwch.h"
static int __iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
struct iwch_qp *qhp, struct ib_wc *wc)
{
struct t3_wq *wq = qhp ? &qhp->wq : NULL;
struct t3_cqe cqe;
u32 credit = 0;
u8 cqe_flushed;
u64 cookie;
int ret = 1;
ret = cxio_poll_cq(wq, &(chp->cq), &cqe, &cqe_flushed, &cookie,
&credit);
if (t3a_device(chp->rhp) && credit) {
pr_debug("%s updating %d cq credits on id %d\n", __func__,
credit, chp->cq.cqid);
cxio_hal_cq_op(&rhp->rdev, &chp->cq, CQ_CREDIT_UPDATE, credit);
}
if (ret) {
ret = -EAGAIN;
goto out;
}
ret = 1;
wc->wr_id = cookie;
wc->qp = qhp ? &qhp->ibqp : NULL;
wc->vendor_err = CQE_STATUS(cqe);
wc->wc_flags = 0;
pr_debug("%s qpid 0x%x type %d opcode %d status 0x%x wrid hi 0x%x lo 0x%x cookie 0x%llx\n",
__func__,
CQE_QPID(cqe), CQE_TYPE(cqe),
CQE_OPCODE(cqe), CQE_STATUS(cqe), CQE_WRID_HI(cqe),
CQE_WRID_LOW(cqe), (unsigned long long)cookie);
if (CQE_TYPE(cqe) == 0) {
if (!CQE_STATUS(cqe))
wc->byte_len = CQE_LEN(cqe);
else
wc->byte_len = 0;
wc->opcode = IB_WC_RECV;
if (CQE_OPCODE(cqe) == T3_SEND_WITH_INV ||
CQE_OPCODE(cqe) == T3_SEND_WITH_SE_INV) {
wc->ex.invalidate_rkey = CQE_WRID_STAG(cqe);
wc->wc_flags |= IB_WC_WITH_INVALIDATE;
}
} else {
switch (CQE_OPCODE(cqe)) {
case T3_RDMA_WRITE:
wc->opcode = IB_WC_RDMA_WRITE;
break;
case T3_READ_REQ:
wc->opcode = IB_WC_RDMA_READ;
wc->byte_len = CQE_LEN(cqe);
break;
case T3_SEND:
case T3_SEND_WITH_SE:
case T3_SEND_WITH_INV:
case T3_SEND_WITH_SE_INV:
wc->opcode = IB_WC_SEND;
break;
case T3_LOCAL_INV:
wc->opcode = IB_WC_LOCAL_INV;
break;
case T3_FAST_REGISTER:
wc->opcode = IB_WC_REG_MR;
break;
default:
pr_err("Unexpected opcode %d in the CQE received for QPID=0x%0x\n",
CQE_OPCODE(cqe), CQE_QPID(cqe));
ret = -EINVAL;
goto out;
}
}
if (cqe_flushed)
wc->status = IB_WC_WR_FLUSH_ERR;
else {
switch (CQE_STATUS(cqe)) {
case TPT_ERR_SUCCESS:
wc->status = IB_WC_SUCCESS;
break;
case TPT_ERR_STAG:
wc->status = IB_WC_LOC_ACCESS_ERR;
break;
case TPT_ERR_PDID:
wc->status = IB_WC_LOC_PROT_ERR;
break;
case TPT_ERR_QPID:
case TPT_ERR_ACCESS:
wc->status = IB_WC_LOC_ACCESS_ERR;
break;
case TPT_ERR_WRAP:
wc->status = IB_WC_GENERAL_ERR;
break;
case TPT_ERR_BOUND:
wc->status = IB_WC_LOC_LEN_ERR;
break;
case TPT_ERR_INVALIDATE_SHARED_MR:
case TPT_ERR_INVALIDATE_MR_WITH_MW_BOUND:
wc->status = IB_WC_MW_BIND_ERR;
break;
case TPT_ERR_CRC:
case TPT_ERR_MARKER:
case TPT_ERR_PDU_LEN_ERR:
case TPT_ERR_OUT_OF_RQE:
case TPT_ERR_DDP_VERSION:
case TPT_ERR_RDMA_VERSION:
case TPT_ERR_DDP_QUEUE_NUM:
case TPT_ERR_MSN:
case TPT_ERR_TBIT:
case TPT_ERR_MO:
case TPT_ERR_MSN_RANGE:
case TPT_ERR_IRD_OVERFLOW:
case TPT_ERR_OPCODE:
wc->status = IB_WC_FATAL_ERR;
break;
case TPT_ERR_SWFLUSH:
wc->status = IB_WC_WR_FLUSH_ERR;
break;
default:
pr_err("Unexpected cqe_status 0x%x for QPID=0x%0x\n",
CQE_STATUS(cqe), CQE_QPID(cqe));
ret = -EINVAL;
}
}
out:
return ret;
}
/*
* Get one cq entry from cxio and map it to openib.
*
* Returns:
* 0 EMPTY;
* 1 cqe returned
* -EAGAIN caller must try again
* any other -errno fatal error
*/
static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
struct ib_wc *wc)
{
struct iwch_qp *qhp;
struct t3_cqe *rd_cqe;
int ret;
rd_cqe = cxio_next_cqe(&chp->cq);
if (!rd_cqe)
return 0;
qhp = get_qhp(rhp, CQE_QPID(*rd_cqe));
if (qhp) {
spin_lock(&qhp->lock);
ret = __iwch_poll_cq_one(rhp, chp, qhp, wc);
spin_unlock(&qhp->lock);
} else {
ret = __iwch_poll_cq_one(rhp, chp, NULL, wc);
}
return ret;
}
int iwch_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
{
struct iwch_dev *rhp;
struct iwch_cq *chp;
unsigned long flags;
int npolled;
int err = 0;
chp = to_iwch_cq(ibcq);
rhp = chp->rhp;
spin_lock_irqsave(&chp->lock, flags);
for (npolled = 0; npolled < num_entries; ++npolled) {
/*
* Because T3 can post CQEs that are _not_ associated
* with a WR, we might have to poll again after removing
* one of these.
*/
do {
err = iwch_poll_cq_one(rhp, chp, wc + npolled);
} while (err == -EAGAIN);
if (err <= 0)
break;
}
spin_unlock_irqrestore(&chp->lock, flags);
if (err < 0)
return err;
else {
return npolled;
}
}


@@ -1,232 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/gfp.h>
#include <linux/mman.h>
#include <net/sock.h>
#include "iwch_provider.h"
#include "iwch.h"
#include "iwch_cm.h"
#include "cxio_hal.h"
#include "cxio_wr.h"
static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp,
struct respQ_msg_t *rsp_msg,
enum ib_event_type ib_event,
int send_term)
{
struct ib_event event;
struct iwch_qp_attributes attrs;
struct iwch_qp *qhp;
unsigned long flag;
xa_lock(&rnicp->qps);
qhp = xa_load(&rnicp->qps, CQE_QPID(rsp_msg->cqe));
if (!qhp) {
pr_err("%s unaffiliated error 0x%x qpid 0x%x\n",
__func__, CQE_STATUS(rsp_msg->cqe),
CQE_QPID(rsp_msg->cqe));
xa_unlock(&rnicp->qps);
return;
}
if ((qhp->attr.state == IWCH_QP_STATE_ERROR) ||
(qhp->attr.state == IWCH_QP_STATE_TERMINATE)) {
pr_debug("%s AE received after RTS - qp state %d qpid 0x%x status 0x%x\n",
__func__,
qhp->attr.state, qhp->wq.qpid,
CQE_STATUS(rsp_msg->cqe));
xa_unlock(&rnicp->qps);
return;
}
pr_err("%s - AE qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
__func__,
CQE_QPID(rsp_msg->cqe), CQE_OPCODE(rsp_msg->cqe),
CQE_STATUS(rsp_msg->cqe), CQE_TYPE(rsp_msg->cqe),
CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe));
atomic_inc(&qhp->refcnt);
xa_unlock(&rnicp->qps);
if (qhp->attr.state == IWCH_QP_STATE_RTS) {
attrs.next_state = IWCH_QP_STATE_TERMINATE;
iwch_modify_qp(qhp->rhp, qhp, IWCH_QP_ATTR_NEXT_STATE,
&attrs, 1);
if (send_term)
iwch_post_terminate(qhp, rsp_msg);
}
event.event = ib_event;
event.device = chp->ibcq.device;
if (ib_event == IB_EVENT_CQ_ERR)
event.element.cq = &chp->ibcq;
else
event.element.qp = &qhp->ibqp;
if (qhp->ibqp.event_handler)
(*qhp->ibqp.event_handler)(&event, qhp->ibqp.qp_context);
spin_lock_irqsave(&chp->comp_handler_lock, flag);
(*chp->ibcq.comp_handler)(&chp->ibcq, chp->ibcq.cq_context);
spin_unlock_irqrestore(&chp->comp_handler_lock, flag);
if (atomic_dec_and_test(&qhp->refcnt))
wake_up(&qhp->wait);
}
void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb)
{
struct iwch_dev *rnicp;
struct respQ_msg_t *rsp_msg = (struct respQ_msg_t *) skb->data;
struct iwch_cq *chp;
struct iwch_qp *qhp;
u32 cqid = RSPQ_CQID(rsp_msg);
unsigned long flag;
rnicp = (struct iwch_dev *) rdev_p->ulp;
xa_lock(&rnicp->qps);
chp = get_chp(rnicp, cqid);
qhp = xa_load(&rnicp->qps, CQE_QPID(rsp_msg->cqe));
if (!chp || !qhp) {
pr_err("BAD AE cqid 0x%x qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
cqid, CQE_QPID(rsp_msg->cqe),
CQE_OPCODE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe),
CQE_TYPE(rsp_msg->cqe), CQE_WRID_HI(rsp_msg->cqe),
CQE_WRID_LOW(rsp_msg->cqe));
xa_unlock(&rnicp->qps);
goto out;
}
iwch_qp_add_ref(&qhp->ibqp);
atomic_inc(&chp->refcnt);
xa_unlock(&rnicp->qps);
/*
* 1) completion of our sending a TERMINATE.
* 2) incoming TERMINATE message.
*/
if ((CQE_OPCODE(rsp_msg->cqe) == T3_TERMINATE) &&
(CQE_STATUS(rsp_msg->cqe) == 0)) {
if (SQ_TYPE(rsp_msg->cqe)) {
pr_debug("%s QPID 0x%x ep %p disconnecting\n",
__func__, qhp->wq.qpid, qhp->ep);
iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
} else {
pr_debug("%s post REQ_ERR AE QPID 0x%x\n", __func__,
qhp->wq.qpid);
post_qp_event(rnicp, chp, rsp_msg,
IB_EVENT_QP_REQ_ERR, 0);
iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
}
goto done;
}
/* Bad incoming Read request */
if (SQ_TYPE(rsp_msg->cqe) &&
(CQE_OPCODE(rsp_msg->cqe) == T3_READ_RESP)) {
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_REQ_ERR, 1);
goto done;
}
/* Bad incoming write */
if (RQ_TYPE(rsp_msg->cqe) &&
(CQE_OPCODE(rsp_msg->cqe) == T3_RDMA_WRITE)) {
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_REQ_ERR, 1);
goto done;
}
switch (CQE_STATUS(rsp_msg->cqe)) {
/* Completion Events */
case TPT_ERR_SUCCESS:
/*
* Confirm the destination entry if this is a RECV completion.
*/
if (qhp->ep && SQ_TYPE(rsp_msg->cqe))
dst_confirm(qhp->ep->dst);
spin_lock_irqsave(&chp->comp_handler_lock, flag);
(*chp->ibcq.comp_handler)(&chp->ibcq, chp->ibcq.cq_context);
spin_unlock_irqrestore(&chp->comp_handler_lock, flag);
break;
case TPT_ERR_STAG:
case TPT_ERR_PDID:
case TPT_ERR_QPID:
case TPT_ERR_ACCESS:
case TPT_ERR_WRAP:
case TPT_ERR_BOUND:
case TPT_ERR_INVALIDATE_SHARED_MR:
case TPT_ERR_INVALIDATE_MR_WITH_MW_BOUND:
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_ACCESS_ERR, 1);
break;
/* Device Fatal Errors */
case TPT_ERR_ECC:
case TPT_ERR_ECC_PSTAG:
case TPT_ERR_INTERNAL_ERR:
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_DEVICE_FATAL, 1);
break;
/* QP Fatal Errors */
case TPT_ERR_OUT_OF_RQE:
case TPT_ERR_PBL_ADDR_BOUND:
case TPT_ERR_CRC:
case TPT_ERR_MARKER:
case TPT_ERR_PDU_LEN_ERR:
case TPT_ERR_DDP_VERSION:
case TPT_ERR_RDMA_VERSION:
case TPT_ERR_OPCODE:
case TPT_ERR_DDP_QUEUE_NUM:
case TPT_ERR_MSN:
case TPT_ERR_TBIT:
case TPT_ERR_MO:
case TPT_ERR_MSN_GAP:
case TPT_ERR_MSN_RANGE:
case TPT_ERR_RQE_ADDR_BOUND:
case TPT_ERR_IRD_OVERFLOW:
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_FATAL, 1);
break;
default:
pr_err("Unknown T3 status 0x%x QPID 0x%x\n",
CQE_STATUS(rsp_msg->cqe), qhp->wq.qpid);
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_FATAL, 1);
break;
}
done:
if (atomic_dec_and_test(&chp->refcnt))
wake_up(&chp->wait);
iwch_qp_rem_ref(&qhp->ibqp);
out:
dev_kfree_skb_irq(skb);
}


@@ -1,101 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/slab.h>
#include <asm/byteorder.h>
#include <rdma/iw_cm.h>
#include <rdma/ib_verbs.h>
#include "cxio_hal.h"
#include "cxio_resource.h"
#include "iwch.h"
#include "iwch_provider.h"
static int iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag)
{
u32 mmid;
mhp->attr.state = 1;
mhp->attr.stag = stag;
mmid = stag >> 8;
mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
pr_debug("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL);
}
int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
struct iwch_mr *mhp, int shift)
{
u32 stag;
int ret;
if (cxio_register_phys_mem(&rhp->rdev,
&stag, mhp->attr.pdid,
mhp->attr.perms,
mhp->attr.zbva,
mhp->attr.va_fbo,
mhp->attr.len,
shift - 12,
mhp->attr.pbl_size, mhp->attr.pbl_addr))
return -ENOMEM;
ret = iwch_finish_mem_reg(mhp, stag);
if (ret)
cxio_dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
mhp->attr.pbl_addr);
return ret;
}
int iwch_alloc_pbl(struct iwch_mr *mhp, int npages)
{
mhp->attr.pbl_addr = cxio_hal_pblpool_alloc(&mhp->rhp->rdev,
npages << 3);
if (!mhp->attr.pbl_addr)
return -ENOMEM;
mhp->attr.pbl_size = npages;
return 0;
}
void iwch_free_pbl(struct iwch_mr *mhp)
{
cxio_hal_pblpool_free(&mhp->rhp->rdev, mhp->attr.pbl_addr,
mhp->attr.pbl_size << 3);
}
int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset)
{
return cxio_write_pbl(&mhp->rhp->rdev, pages,
mhp->attr.pbl_addr + (offset << 3), npages);
}

File diff suppressed because it is too large


@@ -1,347 +0,0 @@
/*
* Copyright (c) 2006 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __IWCH_PROVIDER_H__
#define __IWCH_PROVIDER_H__
#include <linux/list.h>
#include <linux/spinlock.h>
#include <rdma/ib_verbs.h>
#include <asm/types.h>
#include "t3cdev.h"
#include "iwch.h"
#include "cxio_wr.h"
#include "cxio_hal.h"
struct iwch_pd {
struct ib_pd ibpd;
u32 pdid;
struct iwch_dev *rhp;
};
static inline struct iwch_pd *to_iwch_pd(struct ib_pd *ibpd)
{
return container_of(ibpd, struct iwch_pd, ibpd);
}
struct tpt_attributes {
u32 stag;
u32 state:1;
u32 type:2;
u32 rsvd:1;
enum tpt_mem_perm perms;
u32 remote_invalidate_disable:1;
u32 zbva:1;
u32 mw_bind_enable:1;
u32 page_size:5;
u32 pdid;
u32 qpid;
u32 pbl_addr;
u32 len;
u64 va_fbo;
u32 pbl_size;
};
struct iwch_mr {
struct ib_mr ibmr;
struct ib_umem *umem;
struct iwch_dev *rhp;
u64 kva;
struct tpt_attributes attr;
u64 *pages;
u32 npages;
};
typedef struct iwch_mw iwch_mw_handle;
static inline struct iwch_mr *to_iwch_mr(struct ib_mr *ibmr)
{
return container_of(ibmr, struct iwch_mr, ibmr);
}
struct iwch_mw {
struct ib_mw ibmw;
struct iwch_dev *rhp;
u64 kva;
struct tpt_attributes attr;
};
static inline struct iwch_mw *to_iwch_mw(struct ib_mw *ibmw)
{
return container_of(ibmw, struct iwch_mw, ibmw);
}
struct iwch_cq {
struct ib_cq ibcq;
struct iwch_dev *rhp;
struct t3_cq cq;
spinlock_t lock;
spinlock_t comp_handler_lock;
atomic_t refcnt;
wait_queue_head_t wait;
u32 __user *user_rptr_addr;
};
static inline struct iwch_cq *to_iwch_cq(struct ib_cq *ibcq)
{
return container_of(ibcq, struct iwch_cq, ibcq);
}
enum IWCH_QP_FLAGS {
QP_QUIESCED = 0x01
};
struct iwch_mpa_attributes {
u8 initiator;
u8 recv_marker_enabled;
u8 xmit_marker_enabled; /* iWARP: insert MPA markers on transmitted FPDUs */
u8 crc_enabled;
u8 version; /* 0 or 1 */
};
struct iwch_qp_attributes {
u32 scq;
u32 rcq;
u32 sq_num_entries;
u32 rq_num_entries;
u32 sq_max_sges;
u32 sq_max_sges_rdma_write;
u32 rq_max_sges;
u32 state;
u8 enable_rdma_read;
u8 enable_rdma_write; /* enable inbound Read Resp. */
u8 enable_bind;
u8 enable_mmid0_fastreg; /* Enable STAG0 + Fast-register */
/*
* Next QP state. If the current state is specified, only the
* QP attributes will be modified.
*/
u32 max_ord;
u32 max_ird;
u32 pd; /* IN */
u32 next_state;
char terminate_buffer[52];
u32 terminate_msg_len;
u8 is_terminate_local;
struct iwch_mpa_attributes mpa_attr; /* IN-OUT */
struct iwch_ep *llp_stream_handle;
char *stream_msg_buf; /* Last stream msg. before Idle -> RTS */
u32 stream_msg_buf_len; /* Only on Idle -> RTS */
};
struct iwch_qp {
struct ib_qp ibqp;
struct iwch_dev *rhp;
struct iwch_ep *ep;
struct iwch_qp_attributes attr;
struct t3_wq wq;
spinlock_t lock;
atomic_t refcnt;
wait_queue_head_t wait;
enum IWCH_QP_FLAGS flags;
};
static inline int qp_quiesced(struct iwch_qp *qhp)
{
return qhp->flags & QP_QUIESCED;
}
static inline struct iwch_qp *to_iwch_qp(struct ib_qp *ibqp)
{
return container_of(ibqp, struct iwch_qp, ibqp);
}
void iwch_qp_add_ref(struct ib_qp *qp);
void iwch_qp_rem_ref(struct ib_qp *qp);
struct iwch_ucontext {
struct ib_ucontext ibucontext;
struct cxio_ucontext uctx;
u32 key;
spinlock_t mmap_lock;
struct list_head mmaps;
};
static inline struct iwch_ucontext *to_iwch_ucontext(struct ib_ucontext *c)
{
return container_of(c, struct iwch_ucontext, ibucontext);
}
struct iwch_mm_entry {
struct list_head entry;
u64 addr;
u32 key;
unsigned len;
};
static inline struct iwch_mm_entry *remove_mmap(struct iwch_ucontext *ucontext,
u32 key, unsigned len)
{
struct list_head *pos, *nxt;
struct iwch_mm_entry *mm;
spin_lock(&ucontext->mmap_lock);
list_for_each_safe(pos, nxt, &ucontext->mmaps) {
mm = list_entry(pos, struct iwch_mm_entry, entry);
if (mm->key == key && mm->len == len) {
list_del_init(&mm->entry);
spin_unlock(&ucontext->mmap_lock);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, key,
(unsigned long long)mm->addr, mm->len);
return mm;
}
}
spin_unlock(&ucontext->mmap_lock);
return NULL;
}
static inline void insert_mmap(struct iwch_ucontext *ucontext,
struct iwch_mm_entry *mm)
{
spin_lock(&ucontext->mmap_lock);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, mm->key, (unsigned long long)mm->addr, mm->len);
list_add_tail(&mm->entry, &ucontext->mmaps);
spin_unlock(&ucontext->mmap_lock);
}
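/*
 * Illustrative sketch, not part of the original driver: how a ->mmap()
 * handler typically consumes these entries.  The key/length pair is derived
 * from the VMA (struct vm_area_struct and PAGE_SHIFT are assumed visible,
 * e.g. via linux/mm.h); the real lookup and remapping logic is in
 * iwch_provider.c, and the helper name here is hypothetical.
 */
static inline struct iwch_mm_entry *
iwch_lookup_mmap_example(struct iwch_ucontext *ucontext,
			 struct vm_area_struct *vma)
{
	u32 key = vma->vm_pgoff << PAGE_SHIFT;
	unsigned len = vma->vm_end - vma->vm_start;

	/* NULL means user space asked for an offset we never handed out. */
	return remove_mmap(ucontext, key, len);
}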
enum iwch_qp_attr_mask {
IWCH_QP_ATTR_NEXT_STATE = 1 << 0,
IWCH_QP_ATTR_ENABLE_RDMA_READ = 1 << 7,
IWCH_QP_ATTR_ENABLE_RDMA_WRITE = 1 << 8,
IWCH_QP_ATTR_ENABLE_RDMA_BIND = 1 << 9,
IWCH_QP_ATTR_MAX_ORD = 1 << 11,
IWCH_QP_ATTR_MAX_IRD = 1 << 12,
IWCH_QP_ATTR_LLP_STREAM_HANDLE = 1 << 22,
IWCH_QP_ATTR_STREAM_MSG_BUFFER = 1 << 23,
IWCH_QP_ATTR_MPA_ATTR = 1 << 24,
IWCH_QP_ATTR_QP_CONTEXT_ACTIVATE = 1 << 25,
IWCH_QP_ATTR_VALID_MODIFY = (IWCH_QP_ATTR_ENABLE_RDMA_READ |
IWCH_QP_ATTR_ENABLE_RDMA_WRITE |
IWCH_QP_ATTR_MAX_ORD |
IWCH_QP_ATTR_MAX_IRD |
IWCH_QP_ATTR_LLP_STREAM_HANDLE |
IWCH_QP_ATTR_STREAM_MSG_BUFFER |
IWCH_QP_ATTR_MPA_ATTR |
IWCH_QP_ATTR_QP_CONTEXT_ACTIVATE)
};
int iwch_modify_qp(struct iwch_dev *rhp,
struct iwch_qp *qhp,
enum iwch_qp_attr_mask mask,
struct iwch_qp_attributes *attrs,
int internal);
enum iwch_qp_state {
IWCH_QP_STATE_IDLE,
IWCH_QP_STATE_RTS,
IWCH_QP_STATE_ERROR,
IWCH_QP_STATE_TERMINATE,
IWCH_QP_STATE_CLOSING,
IWCH_QP_STATE_TOT
};
static inline int iwch_convert_state(enum ib_qp_state ib_state)
{
switch (ib_state) {
case IB_QPS_RESET:
case IB_QPS_INIT:
return IWCH_QP_STATE_IDLE;
case IB_QPS_RTS:
return IWCH_QP_STATE_RTS;
case IB_QPS_SQD:
return IWCH_QP_STATE_CLOSING;
case IB_QPS_SQE:
return IWCH_QP_STATE_TERMINATE;
case IB_QPS_ERR:
return IWCH_QP_STATE_ERROR;
default:
return -1;
}
}
static inline u32 iwch_ib_to_tpt_access(int acc)
{
return (acc & IB_ACCESS_REMOTE_WRITE ? TPT_REMOTE_WRITE : 0) |
(acc & IB_ACCESS_REMOTE_READ ? TPT_REMOTE_READ : 0) |
(acc & IB_ACCESS_LOCAL_WRITE ? TPT_LOCAL_WRITE : 0) |
(acc & IB_ACCESS_MW_BIND ? TPT_MW_BIND : 0) |
TPT_LOCAL_READ;
}
static inline u32 iwch_ib_to_tpt_bind_access(int acc)
{
return (acc & IB_ACCESS_REMOTE_WRITE ? TPT_REMOTE_WRITE : 0) |
(acc & IB_ACCESS_REMOTE_READ ? TPT_REMOTE_READ : 0);
}
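/*
 * Worked example: an MR registered with IB_ACCESS_LOCAL_WRITE |
 * IB_ACCESS_REMOTE_READ maps to TPT_LOCAL_WRITE | TPT_REMOTE_READ |
 * TPT_LOCAL_READ, since local read permission is always granted, while
 * the bind-access variant above keeps only the two remote bits.
 */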
enum iwch_mmid_state {
IWCH_STAG_STATE_VALID,
IWCH_STAG_STATE_INVALID
};
enum iwch_qp_query_flags {
IWCH_QP_QUERY_CONTEXT_NONE = 0x0, /* No ctx; Only attrs */
IWCH_QP_QUERY_CONTEXT_GET = 0x1, /* Get ctx + attrs */
IWCH_QP_QUERY_CONTEXT_SUSPEND = 0x2, /* Not Supported */
/*
* Quiesce QP context; Consumer
* will NOT replay outstanding WR
*/
IWCH_QP_QUERY_CONTEXT_QUIESCE = 0x4,
IWCH_QP_QUERY_CONTEXT_REMOVE = 0x8,
IWCH_QP_QUERY_TEST_USERWRITE = 0x32 /* Test special */
};
u16 iwch_rqes_posted(struct iwch_qp *qhp);
int iwch_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr,
const struct ib_send_wr **bad_wr);
int iwch_post_receive(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
const struct ib_recv_wr **bad_wr);
int iwch_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
int iwch_post_terminate(struct iwch_qp *qhp, struct respQ_msg_t *rsp_msg);
int iwch_post_zb_read(struct iwch_ep *ep);
int iwch_register_device(struct iwch_dev *dev);
void iwch_unregister_device(struct iwch_dev *dev);
void stop_read_rep_timer(struct iwch_qp *qhp);
int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
struct iwch_mr *mhp, int shift);
int iwch_alloc_pbl(struct iwch_mr *mhp, int npages);
void iwch_free_pbl(struct iwch_mr *mhp);
int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset);
#define IWCH_NODE_DESC "cxgb3 Chelsio Communications"
#endif

File diff suppressed because it is too large


@@ -1,632 +0,0 @@
/*
* Copyright (c) 2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _TCB_DEFS_H
#define _TCB_DEFS_H
#define W_TCB_T_STATE 0
#define S_TCB_T_STATE 0
#define M_TCB_T_STATE 0xfULL
#define V_TCB_T_STATE(x) ((x) << S_TCB_T_STATE)
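/*
 * Illustrative note: every TCB field in this file is described by four
 * macros.  W_* is the 32-bit word index within the TCB, S_* the starting
 * bit, M_* the field mask and V_* shifts a value into position, so a field
 * is read roughly as
 *
 *	val = (tcb_word[W_TCB_T_STATE] >> S_TCB_T_STATE) & M_TCB_T_STATE;
 *
 * and written by or-ing in V_TCB_T_STATE(val).  Here 'tcb_word' is a
 * hypothetical u32 view of the TCB; the ULL masks appear intended to let
 * fields that straddle a 32-bit word boundary be handled as 64-bit
 * quantities.
 */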
#define W_TCB_TIMER 0
#define S_TCB_TIMER 4
#define M_TCB_TIMER 0x1ULL
#define V_TCB_TIMER(x) ((x) << S_TCB_TIMER)
#define W_TCB_DACK_TIMER 0
#define S_TCB_DACK_TIMER 5
#define M_TCB_DACK_TIMER 0x1ULL
#define V_TCB_DACK_TIMER(x) ((x) << S_TCB_DACK_TIMER)
#define W_TCB_DEL_FLAG 0
#define S_TCB_DEL_FLAG 6
#define M_TCB_DEL_FLAG 0x1ULL
#define V_TCB_DEL_FLAG(x) ((x) << S_TCB_DEL_FLAG)
#define W_TCB_L2T_IX 0
#define S_TCB_L2T_IX 7
#define M_TCB_L2T_IX 0x7ffULL
#define V_TCB_L2T_IX(x) ((x) << S_TCB_L2T_IX)
#define W_TCB_SMAC_SEL 0
#define S_TCB_SMAC_SEL 18
#define M_TCB_SMAC_SEL 0x3ULL
#define V_TCB_SMAC_SEL(x) ((x) << S_TCB_SMAC_SEL)
#define W_TCB_TOS 0
#define S_TCB_TOS 20
#define M_TCB_TOS 0x3fULL
#define V_TCB_TOS(x) ((x) << S_TCB_TOS)
#define W_TCB_MAX_RT 0
#define S_TCB_MAX_RT 26
#define M_TCB_MAX_RT 0xfULL
#define V_TCB_MAX_RT(x) ((x) << S_TCB_MAX_RT)
#define W_TCB_T_RXTSHIFT 0
#define S_TCB_T_RXTSHIFT 30
#define M_TCB_T_RXTSHIFT 0xfULL
#define V_TCB_T_RXTSHIFT(x) ((x) << S_TCB_T_RXTSHIFT)
#define W_TCB_T_DUPACKS 1
#define S_TCB_T_DUPACKS 2
#define M_TCB_T_DUPACKS 0xfULL
#define V_TCB_T_DUPACKS(x) ((x) << S_TCB_T_DUPACKS)
#define W_TCB_T_MAXSEG 1
#define S_TCB_T_MAXSEG 6
#define M_TCB_T_MAXSEG 0xfULL
#define V_TCB_T_MAXSEG(x) ((x) << S_TCB_T_MAXSEG)
#define W_TCB_T_FLAGS1 1
#define S_TCB_T_FLAGS1 10
#define M_TCB_T_FLAGS1 0xffffffffULL
#define V_TCB_T_FLAGS1(x) ((x) << S_TCB_T_FLAGS1)
#define W_TCB_T_MIGRATION 1
#define S_TCB_T_MIGRATION 20
#define M_TCB_T_MIGRATION 0x1ULL
#define V_TCB_T_MIGRATION(x) ((x) << S_TCB_T_MIGRATION)
#define W_TCB_T_FLAGS2 2
#define S_TCB_T_FLAGS2 10
#define M_TCB_T_FLAGS2 0x7fULL
#define V_TCB_T_FLAGS2(x) ((x) << S_TCB_T_FLAGS2)
#define W_TCB_SND_SCALE 2
#define S_TCB_SND_SCALE 17
#define M_TCB_SND_SCALE 0xfULL
#define V_TCB_SND_SCALE(x) ((x) << S_TCB_SND_SCALE)
#define W_TCB_RCV_SCALE 2
#define S_TCB_RCV_SCALE 21
#define M_TCB_RCV_SCALE 0xfULL
#define V_TCB_RCV_SCALE(x) ((x) << S_TCB_RCV_SCALE)
#define W_TCB_SND_UNA_RAW 2
#define S_TCB_SND_UNA_RAW 25
#define M_TCB_SND_UNA_RAW 0x7ffffffULL
#define V_TCB_SND_UNA_RAW(x) ((x) << S_TCB_SND_UNA_RAW)
#define W_TCB_SND_NXT_RAW 3
#define S_TCB_SND_NXT_RAW 20
#define M_TCB_SND_NXT_RAW 0x7ffffffULL
#define V_TCB_SND_NXT_RAW(x) ((x) << S_TCB_SND_NXT_RAW)
#define W_TCB_RCV_NXT 4
#define S_TCB_RCV_NXT 15
#define M_TCB_RCV_NXT 0xffffffffULL
#define V_TCB_RCV_NXT(x) ((x) << S_TCB_RCV_NXT)
#define W_TCB_RCV_ADV 5
#define S_TCB_RCV_ADV 15
#define M_TCB_RCV_ADV 0xffffULL
#define V_TCB_RCV_ADV(x) ((x) << S_TCB_RCV_ADV)
#define W_TCB_SND_MAX_RAW 5
#define S_TCB_SND_MAX_RAW 31
#define M_TCB_SND_MAX_RAW 0x7ffffffULL
#define V_TCB_SND_MAX_RAW(x) ((x) << S_TCB_SND_MAX_RAW)
#define W_TCB_SND_CWND 6
#define S_TCB_SND_CWND 26
#define M_TCB_SND_CWND 0x7ffffffULL
#define V_TCB_SND_CWND(x) ((x) << S_TCB_SND_CWND)
#define W_TCB_SND_SSTHRESH 7
#define S_TCB_SND_SSTHRESH 21
#define M_TCB_SND_SSTHRESH 0x7ffffffULL
#define V_TCB_SND_SSTHRESH(x) ((x) << S_TCB_SND_SSTHRESH)
#define W_TCB_T_RTT_TS_RECENT_AGE 8
#define S_TCB_T_RTT_TS_RECENT_AGE 16
#define M_TCB_T_RTT_TS_RECENT_AGE 0xffffffffULL
#define V_TCB_T_RTT_TS_RECENT_AGE(x) ((x) << S_TCB_T_RTT_TS_RECENT_AGE)
#define W_TCB_T_RTSEQ_RECENT 9
#define S_TCB_T_RTSEQ_RECENT 16
#define M_TCB_T_RTSEQ_RECENT 0xffffffffULL
#define V_TCB_T_RTSEQ_RECENT(x) ((x) << S_TCB_T_RTSEQ_RECENT)
#define W_TCB_T_SRTT 10
#define S_TCB_T_SRTT 16
#define M_TCB_T_SRTT 0xffffULL
#define V_TCB_T_SRTT(x) ((x) << S_TCB_T_SRTT)
#define W_TCB_T_RTTVAR 11
#define S_TCB_T_RTTVAR 0
#define M_TCB_T_RTTVAR 0xffffULL
#define V_TCB_T_RTTVAR(x) ((x) << S_TCB_T_RTTVAR)
#define W_TCB_TS_LAST_ACK_SENT_RAW 11
#define S_TCB_TS_LAST_ACK_SENT_RAW 16
#define M_TCB_TS_LAST_ACK_SENT_RAW 0x7ffffffULL
#define V_TCB_TS_LAST_ACK_SENT_RAW(x) ((x) << S_TCB_TS_LAST_ACK_SENT_RAW)
#define W_TCB_DIP 12
#define S_TCB_DIP 11
#define M_TCB_DIP 0xffffffffULL
#define V_TCB_DIP(x) ((x) << S_TCB_DIP)
#define W_TCB_SIP 13
#define S_TCB_SIP 11
#define M_TCB_SIP 0xffffffffULL
#define V_TCB_SIP(x) ((x) << S_TCB_SIP)
#define W_TCB_DP 14
#define S_TCB_DP 11
#define M_TCB_DP 0xffffULL
#define V_TCB_DP(x) ((x) << S_TCB_DP)
#define W_TCB_SP 14
#define S_TCB_SP 27
#define M_TCB_SP 0xffffULL
#define V_TCB_SP(x) ((x) << S_TCB_SP)
#define W_TCB_TIMESTAMP 15
#define S_TCB_TIMESTAMP 11
#define M_TCB_TIMESTAMP 0xffffffffULL
#define V_TCB_TIMESTAMP(x) ((x) << S_TCB_TIMESTAMP)
#define W_TCB_TIMESTAMP_OFFSET 16
#define S_TCB_TIMESTAMP_OFFSET 11
#define M_TCB_TIMESTAMP_OFFSET 0xfULL
#define V_TCB_TIMESTAMP_OFFSET(x) ((x) << S_TCB_TIMESTAMP_OFFSET)
#define W_TCB_TX_MAX 16
#define S_TCB_TX_MAX 15
#define M_TCB_TX_MAX 0xffffffffULL
#define V_TCB_TX_MAX(x) ((x) << S_TCB_TX_MAX)
#define W_TCB_TX_HDR_PTR_RAW 17
#define S_TCB_TX_HDR_PTR_RAW 15
#define M_TCB_TX_HDR_PTR_RAW 0x1ffffULL
#define V_TCB_TX_HDR_PTR_RAW(x) ((x) << S_TCB_TX_HDR_PTR_RAW)
#define W_TCB_TX_LAST_PTR_RAW 18
#define S_TCB_TX_LAST_PTR_RAW 0
#define M_TCB_TX_LAST_PTR_RAW 0x1ffffULL
#define V_TCB_TX_LAST_PTR_RAW(x) ((x) << S_TCB_TX_LAST_PTR_RAW)
#define W_TCB_TX_COMPACT 18
#define S_TCB_TX_COMPACT 17
#define M_TCB_TX_COMPACT 0x1ULL
#define V_TCB_TX_COMPACT(x) ((x) << S_TCB_TX_COMPACT)
#define W_TCB_RX_COMPACT 18
#define S_TCB_RX_COMPACT 18
#define M_TCB_RX_COMPACT 0x1ULL
#define V_TCB_RX_COMPACT(x) ((x) << S_TCB_RX_COMPACT)
#define W_TCB_RCV_WND 18
#define S_TCB_RCV_WND 19
#define M_TCB_RCV_WND 0x7ffffffULL
#define V_TCB_RCV_WND(x) ((x) << S_TCB_RCV_WND)
#define W_TCB_RX_HDR_OFFSET 19
#define S_TCB_RX_HDR_OFFSET 14
#define M_TCB_RX_HDR_OFFSET 0x7ffffffULL
#define V_TCB_RX_HDR_OFFSET(x) ((x) << S_TCB_RX_HDR_OFFSET)
#define W_TCB_RX_FRAG0_START_IDX_RAW 20
#define S_TCB_RX_FRAG0_START_IDX_RAW 9
#define M_TCB_RX_FRAG0_START_IDX_RAW 0x7ffffffULL
#define V_TCB_RX_FRAG0_START_IDX_RAW(x) ((x) << S_TCB_RX_FRAG0_START_IDX_RAW)
#define W_TCB_RX_FRAG1_START_IDX_OFFSET 21
#define S_TCB_RX_FRAG1_START_IDX_OFFSET 4
#define M_TCB_RX_FRAG1_START_IDX_OFFSET 0x7ffffffULL
#define V_TCB_RX_FRAG1_START_IDX_OFFSET(x) ((x) << S_TCB_RX_FRAG1_START_IDX_OFFSET)
#define W_TCB_RX_FRAG0_LEN 21
#define S_TCB_RX_FRAG0_LEN 31
#define M_TCB_RX_FRAG0_LEN 0x7ffffffULL
#define V_TCB_RX_FRAG0_LEN(x) ((x) << S_TCB_RX_FRAG0_LEN)
#define W_TCB_RX_FRAG1_LEN 22
#define S_TCB_RX_FRAG1_LEN 26
#define M_TCB_RX_FRAG1_LEN 0x7ffffffULL
#define V_TCB_RX_FRAG1_LEN(x) ((x) << S_TCB_RX_FRAG1_LEN)
#define W_TCB_NEWRENO_RECOVER 23
#define S_TCB_NEWRENO_RECOVER 21
#define M_TCB_NEWRENO_RECOVER 0x7ffffffULL
#define V_TCB_NEWRENO_RECOVER(x) ((x) << S_TCB_NEWRENO_RECOVER)
#define W_TCB_PDU_HAVE_LEN 24
#define S_TCB_PDU_HAVE_LEN 16
#define M_TCB_PDU_HAVE_LEN 0x1ULL
#define V_TCB_PDU_HAVE_LEN(x) ((x) << S_TCB_PDU_HAVE_LEN)
#define W_TCB_PDU_LEN 24
#define S_TCB_PDU_LEN 17
#define M_TCB_PDU_LEN 0xffffULL
#define V_TCB_PDU_LEN(x) ((x) << S_TCB_PDU_LEN)
#define W_TCB_RX_QUIESCE 25
#define S_TCB_RX_QUIESCE 1
#define M_TCB_RX_QUIESCE 0x1ULL
#define V_TCB_RX_QUIESCE(x) ((x) << S_TCB_RX_QUIESCE)
#define W_TCB_RX_PTR_RAW 25
#define S_TCB_RX_PTR_RAW 2
#define M_TCB_RX_PTR_RAW 0x1ffffULL
#define V_TCB_RX_PTR_RAW(x) ((x) << S_TCB_RX_PTR_RAW)
#define W_TCB_CPU_NO 25
#define S_TCB_CPU_NO 19
#define M_TCB_CPU_NO 0x7fULL
#define V_TCB_CPU_NO(x) ((x) << S_TCB_CPU_NO)
#define W_TCB_ULP_TYPE 25
#define S_TCB_ULP_TYPE 26
#define M_TCB_ULP_TYPE 0xfULL
#define V_TCB_ULP_TYPE(x) ((x) << S_TCB_ULP_TYPE)
#define W_TCB_RX_FRAG1_PTR_RAW 25
#define S_TCB_RX_FRAG1_PTR_RAW 30
#define M_TCB_RX_FRAG1_PTR_RAW 0x1ffffULL
#define V_TCB_RX_FRAG1_PTR_RAW(x) ((x) << S_TCB_RX_FRAG1_PTR_RAW)
#define W_TCB_RX_FRAG2_START_IDX_OFFSET_RAW 26
#define S_TCB_RX_FRAG2_START_IDX_OFFSET_RAW 15
#define M_TCB_RX_FRAG2_START_IDX_OFFSET_RAW 0x7ffffffULL
#define V_TCB_RX_FRAG2_START_IDX_OFFSET_RAW(x) ((x) << S_TCB_RX_FRAG2_START_IDX_OFFSET_RAW)
#define W_TCB_RX_FRAG2_PTR_RAW 27
#define S_TCB_RX_FRAG2_PTR_RAW 10
#define M_TCB_RX_FRAG2_PTR_RAW 0x1ffffULL
#define V_TCB_RX_FRAG2_PTR_RAW(x) ((x) << S_TCB_RX_FRAG2_PTR_RAW)
#define W_TCB_RX_FRAG2_LEN_RAW 27
#define S_TCB_RX_FRAG2_LEN_RAW 27
#define M_TCB_RX_FRAG2_LEN_RAW 0x7ffffffULL
#define V_TCB_RX_FRAG2_LEN_RAW(x) ((x) << S_TCB_RX_FRAG2_LEN_RAW)
#define W_TCB_RX_FRAG3_PTR_RAW 28
#define S_TCB_RX_FRAG3_PTR_RAW 22
#define M_TCB_RX_FRAG3_PTR_RAW 0x1ffffULL
#define V_TCB_RX_FRAG3_PTR_RAW(x) ((x) << S_TCB_RX_FRAG3_PTR_RAW)
#define W_TCB_RX_FRAG3_LEN_RAW 29
#define S_TCB_RX_FRAG3_LEN_RAW 7
#define M_TCB_RX_FRAG3_LEN_RAW 0x7ffffffULL
#define V_TCB_RX_FRAG3_LEN_RAW(x) ((x) << S_TCB_RX_FRAG3_LEN_RAW)
#define W_TCB_RX_FRAG3_START_IDX_OFFSET_RAW 30
#define S_TCB_RX_FRAG3_START_IDX_OFFSET_RAW 2
#define M_TCB_RX_FRAG3_START_IDX_OFFSET_RAW 0x7ffffffULL
#define V_TCB_RX_FRAG3_START_IDX_OFFSET_RAW(x) ((x) << S_TCB_RX_FRAG3_START_IDX_OFFSET_RAW)
#define W_TCB_PDU_HDR_LEN 30
#define S_TCB_PDU_HDR_LEN 29
#define M_TCB_PDU_HDR_LEN 0xffULL
#define V_TCB_PDU_HDR_LEN(x) ((x) << S_TCB_PDU_HDR_LEN)
#define W_TCB_SLUSH1 31
#define S_TCB_SLUSH1 5
#define M_TCB_SLUSH1 0x7ffffULL
#define V_TCB_SLUSH1(x) ((x) << S_TCB_SLUSH1)
#define W_TCB_ULP_RAW 31
#define S_TCB_ULP_RAW 24
#define M_TCB_ULP_RAW 0xffULL
#define V_TCB_ULP_RAW(x) ((x) << S_TCB_ULP_RAW)
#define W_TCB_DDP_RDMAP_VERSION 25
#define S_TCB_DDP_RDMAP_VERSION 30
#define M_TCB_DDP_RDMAP_VERSION 0x1ULL
#define V_TCB_DDP_RDMAP_VERSION(x) ((x) << S_TCB_DDP_RDMAP_VERSION)
#define W_TCB_MARKER_ENABLE_RX 25
#define S_TCB_MARKER_ENABLE_RX 31
#define M_TCB_MARKER_ENABLE_RX 0x1ULL
#define V_TCB_MARKER_ENABLE_RX(x) ((x) << S_TCB_MARKER_ENABLE_RX)
#define W_TCB_MARKER_ENABLE_TX 26
#define S_TCB_MARKER_ENABLE_TX 0
#define M_TCB_MARKER_ENABLE_TX 0x1ULL
#define V_TCB_MARKER_ENABLE_TX(x) ((x) << S_TCB_MARKER_ENABLE_TX)
#define W_TCB_CRC_ENABLE 26
#define S_TCB_CRC_ENABLE 1
#define M_TCB_CRC_ENABLE 0x1ULL
#define V_TCB_CRC_ENABLE(x) ((x) << S_TCB_CRC_ENABLE)
#define W_TCB_IRS_ULP 26
#define S_TCB_IRS_ULP 2
#define M_TCB_IRS_ULP 0x1ffULL
#define V_TCB_IRS_ULP(x) ((x) << S_TCB_IRS_ULP)
#define W_TCB_ISS_ULP 26
#define S_TCB_ISS_ULP 11
#define M_TCB_ISS_ULP 0x1ffULL
#define V_TCB_ISS_ULP(x) ((x) << S_TCB_ISS_ULP)
#define W_TCB_TX_PDU_LEN 26
#define S_TCB_TX_PDU_LEN 20
#define M_TCB_TX_PDU_LEN 0x3fffULL
#define V_TCB_TX_PDU_LEN(x) ((x) << S_TCB_TX_PDU_LEN)
#define W_TCB_TX_PDU_OUT 27
#define S_TCB_TX_PDU_OUT 2
#define M_TCB_TX_PDU_OUT 0x1ULL
#define V_TCB_TX_PDU_OUT(x) ((x) << S_TCB_TX_PDU_OUT)
#define W_TCB_CQ_IDX_SQ 27
#define S_TCB_CQ_IDX_SQ 3
#define M_TCB_CQ_IDX_SQ 0xffffULL
#define V_TCB_CQ_IDX_SQ(x) ((x) << S_TCB_CQ_IDX_SQ)
#define W_TCB_CQ_IDX_RQ 27
#define S_TCB_CQ_IDX_RQ 19
#define M_TCB_CQ_IDX_RQ 0xffffULL
#define V_TCB_CQ_IDX_RQ(x) ((x) << S_TCB_CQ_IDX_RQ)
#define W_TCB_QP_ID 28
#define S_TCB_QP_ID 3
#define M_TCB_QP_ID 0xffffULL
#define V_TCB_QP_ID(x) ((x) << S_TCB_QP_ID)
#define W_TCB_PD_ID 28
#define S_TCB_PD_ID 19
#define M_TCB_PD_ID 0xffffULL
#define V_TCB_PD_ID(x) ((x) << S_TCB_PD_ID)
#define W_TCB_STAG 29
#define S_TCB_STAG 3
#define M_TCB_STAG 0xffffffffULL
#define V_TCB_STAG(x) ((x) << S_TCB_STAG)
#define W_TCB_RQ_START 30
#define S_TCB_RQ_START 3
#define M_TCB_RQ_START 0x3ffffffULL
#define V_TCB_RQ_START(x) ((x) << S_TCB_RQ_START)
#define W_TCB_RQ_MSN 30
#define S_TCB_RQ_MSN 29
#define M_TCB_RQ_MSN 0x3ffULL
#define V_TCB_RQ_MSN(x) ((x) << S_TCB_RQ_MSN)
#define W_TCB_RQ_MAX_OFFSET 31
#define S_TCB_RQ_MAX_OFFSET 7
#define M_TCB_RQ_MAX_OFFSET 0xfULL
#define V_TCB_RQ_MAX_OFFSET(x) ((x) << S_TCB_RQ_MAX_OFFSET)
#define W_TCB_RQ_WRITE_PTR 31
#define S_TCB_RQ_WRITE_PTR 11
#define M_TCB_RQ_WRITE_PTR 0x3ffULL
#define V_TCB_RQ_WRITE_PTR(x) ((x) << S_TCB_RQ_WRITE_PTR)
#define W_TCB_INB_WRITE_PERM 31
#define S_TCB_INB_WRITE_PERM 21
#define M_TCB_INB_WRITE_PERM 0x1ULL
#define V_TCB_INB_WRITE_PERM(x) ((x) << S_TCB_INB_WRITE_PERM)
#define W_TCB_INB_READ_PERM 31
#define S_TCB_INB_READ_PERM 22
#define M_TCB_INB_READ_PERM 0x1ULL
#define V_TCB_INB_READ_PERM(x) ((x) << S_TCB_INB_READ_PERM)
#define W_TCB_ORD_L_BIT_VLD 31
#define S_TCB_ORD_L_BIT_VLD 23
#define M_TCB_ORD_L_BIT_VLD 0x1ULL
#define V_TCB_ORD_L_BIT_VLD(x) ((x) << S_TCB_ORD_L_BIT_VLD)
#define W_TCB_RDMAP_OPCODE 31
#define S_TCB_RDMAP_OPCODE 24
#define M_TCB_RDMAP_OPCODE 0xfULL
#define V_TCB_RDMAP_OPCODE(x) ((x) << S_TCB_RDMAP_OPCODE)
#define W_TCB_TX_FLUSH 31
#define S_TCB_TX_FLUSH 28
#define M_TCB_TX_FLUSH 0x1ULL
#define V_TCB_TX_FLUSH(x) ((x) << S_TCB_TX_FLUSH)
#define W_TCB_TX_OOS_RXMT 31
#define S_TCB_TX_OOS_RXMT 29
#define M_TCB_TX_OOS_RXMT 0x1ULL
#define V_TCB_TX_OOS_RXMT(x) ((x) << S_TCB_TX_OOS_RXMT)
#define W_TCB_TX_OOS_TXMT 31
#define S_TCB_TX_OOS_TXMT 30
#define M_TCB_TX_OOS_TXMT 0x1ULL
#define V_TCB_TX_OOS_TXMT(x) ((x) << S_TCB_TX_OOS_TXMT)
#define W_TCB_SLUSH_AUX2 31
#define S_TCB_SLUSH_AUX2 31
#define M_TCB_SLUSH_AUX2 0x1ULL
#define V_TCB_SLUSH_AUX2(x) ((x) << S_TCB_SLUSH_AUX2)
#define W_TCB_RX_FRAG1_PTR_RAW2 25
#define S_TCB_RX_FRAG1_PTR_RAW2 30
#define M_TCB_RX_FRAG1_PTR_RAW2 0x1ffffULL
#define V_TCB_RX_FRAG1_PTR_RAW2(x) ((x) << S_TCB_RX_FRAG1_PTR_RAW2)
#define W_TCB_RX_DDP_FLAGS 26
#define S_TCB_RX_DDP_FLAGS 15
#define M_TCB_RX_DDP_FLAGS 0x3ffULL
#define V_TCB_RX_DDP_FLAGS(x) ((x) << S_TCB_RX_DDP_FLAGS)
#define W_TCB_SLUSH_AUX3 26
#define S_TCB_SLUSH_AUX3 31
#define M_TCB_SLUSH_AUX3 0x1ffULL
#define V_TCB_SLUSH_AUX3(x) ((x) << S_TCB_SLUSH_AUX3)
#define W_TCB_RX_DDP_BUF0_OFFSET 27
#define S_TCB_RX_DDP_BUF0_OFFSET 8
#define M_TCB_RX_DDP_BUF0_OFFSET 0x3fffffULL
#define V_TCB_RX_DDP_BUF0_OFFSET(x) ((x) << S_TCB_RX_DDP_BUF0_OFFSET)
#define W_TCB_RX_DDP_BUF0_LEN 27
#define S_TCB_RX_DDP_BUF0_LEN 30
#define M_TCB_RX_DDP_BUF0_LEN 0x3fffffULL
#define V_TCB_RX_DDP_BUF0_LEN(x) ((x) << S_TCB_RX_DDP_BUF0_LEN)
#define W_TCB_RX_DDP_BUF1_OFFSET 28
#define S_TCB_RX_DDP_BUF1_OFFSET 20
#define M_TCB_RX_DDP_BUF1_OFFSET 0x3fffffULL
#define V_TCB_RX_DDP_BUF1_OFFSET(x) ((x) << S_TCB_RX_DDP_BUF1_OFFSET)
#define W_TCB_RX_DDP_BUF1_LEN 29
#define S_TCB_RX_DDP_BUF1_LEN 10
#define M_TCB_RX_DDP_BUF1_LEN 0x3fffffULL
#define V_TCB_RX_DDP_BUF1_LEN(x) ((x) << S_TCB_RX_DDP_BUF1_LEN)
#define W_TCB_RX_DDP_BUF0_TAG 30
#define S_TCB_RX_DDP_BUF0_TAG 0
#define M_TCB_RX_DDP_BUF0_TAG 0xffffffffULL
#define V_TCB_RX_DDP_BUF0_TAG(x) ((x) << S_TCB_RX_DDP_BUF0_TAG)
#define W_TCB_RX_DDP_BUF1_TAG 31
#define S_TCB_RX_DDP_BUF1_TAG 0
#define M_TCB_RX_DDP_BUF1_TAG 0xffffffffULL
#define V_TCB_RX_DDP_BUF1_TAG(x) ((x) << S_TCB_RX_DDP_BUF1_TAG)
#define S_TF_DACK 10
#define V_TF_DACK(x) ((x) << S_TF_DACK)
#define S_TF_NAGLE 11
#define V_TF_NAGLE(x) ((x) << S_TF_NAGLE)
#define S_TF_RECV_SCALE 12
#define V_TF_RECV_SCALE(x) ((x) << S_TF_RECV_SCALE)
#define S_TF_RECV_TSTMP 13
#define V_TF_RECV_TSTMP(x) ((x) << S_TF_RECV_TSTMP)
#define S_TF_RECV_SACK 14
#define V_TF_RECV_SACK(x) ((x) << S_TF_RECV_SACK)
#define S_TF_TURBO 15
#define V_TF_TURBO(x) ((x) << S_TF_TURBO)
#define S_TF_KEEPALIVE 16
#define V_TF_KEEPALIVE(x) ((x) << S_TF_KEEPALIVE)
#define S_TF_TCAM_BYPASS 17
#define V_TF_TCAM_BYPASS(x) ((x) << S_TF_TCAM_BYPASS)
#define S_TF_CORE_FIN 18
#define V_TF_CORE_FIN(x) ((x) << S_TF_CORE_FIN)
#define S_TF_CORE_MORE 19
#define V_TF_CORE_MORE(x) ((x) << S_TF_CORE_MORE)
#define S_TF_MIGRATING 20
#define V_TF_MIGRATING(x) ((x) << S_TF_MIGRATING)
#define S_TF_ACTIVE_OPEN 21
#define V_TF_ACTIVE_OPEN(x) ((x) << S_TF_ACTIVE_OPEN)
#define S_TF_ASK_MODE 22
#define V_TF_ASK_MODE(x) ((x) << S_TF_ASK_MODE)
#define S_TF_NON_OFFLOAD 23
#define V_TF_NON_OFFLOAD(x) ((x) << S_TF_NON_OFFLOAD)
#define S_TF_MOD_SCHD 24
#define V_TF_MOD_SCHD(x) ((x) << S_TF_MOD_SCHD)
#define S_TF_MOD_SCHD_REASON0 25
#define V_TF_MOD_SCHD_REASON0(x) ((x) << S_TF_MOD_SCHD_REASON0)
#define S_TF_MOD_SCHD_REASON1 26
#define V_TF_MOD_SCHD_REASON1(x) ((x) << S_TF_MOD_SCHD_REASON1)
#define S_TF_MOD_SCHD_RX 27
#define V_TF_MOD_SCHD_RX(x) ((x) << S_TF_MOD_SCHD_RX)
#define S_TF_CORE_PUSH 28
#define V_TF_CORE_PUSH(x) ((x) << S_TF_CORE_PUSH)
#define S_TF_RCV_COALESCE_ENABLE 29
#define V_TF_RCV_COALESCE_ENABLE(x) ((x) << S_TF_RCV_COALESCE_ENABLE)
#define S_TF_RCV_COALESCE_PUSH 30
#define V_TF_RCV_COALESCE_PUSH(x) ((x) << S_TF_RCV_COALESCE_PUSH)
#define S_TF_RCV_COALESCE_LAST_PSH 31
#define V_TF_RCV_COALESCE_LAST_PSH(x) ((x) << S_TF_RCV_COALESCE_LAST_PSH)
#define S_TF_RCV_COALESCE_HEARTBEAT 32
#define V_TF_RCV_COALESCE_HEARTBEAT(x) ((x) << S_TF_RCV_COALESCE_HEARTBEAT)
#define S_TF_HALF_CLOSE 33
#define V_TF_HALF_CLOSE(x) ((x) << S_TF_HALF_CLOSE)
#define S_TF_DACK_MSS 34
#define V_TF_DACK_MSS(x) ((x) << S_TF_DACK_MSS)
#define S_TF_CCTRL_SEL0 35
#define V_TF_CCTRL_SEL0(x) ((x) << S_TF_CCTRL_SEL0)
#define S_TF_CCTRL_SEL1 36
#define V_TF_CCTRL_SEL1(x) ((x) << S_TF_CCTRL_SEL1)
#define S_TF_TCP_NEWRENO_FAST_RECOVERY 37
#define V_TF_TCP_NEWRENO_FAST_RECOVERY(x) ((x) << S_TF_TCP_NEWRENO_FAST_RECOVERY)
#define S_TF_TX_PACE_AUTO 38
#define V_TF_TX_PACE_AUTO(x) ((x) << S_TF_TX_PACE_AUTO)
#define S_TF_PEER_FIN_HELD 39
#define V_TF_PEER_FIN_HELD(x) ((x) << S_TF_PEER_FIN_HELD)
#define S_TF_CORE_URG 40
#define V_TF_CORE_URG(x) ((x) << S_TF_CORE_URG)
#define S_TF_RDMA_ERROR 41
#define V_TF_RDMA_ERROR(x) ((x) << S_TF_RDMA_ERROR)
#define S_TF_SSWS_DISABLED 42
#define V_TF_SSWS_DISABLED(x) ((x) << S_TF_SSWS_DISABLED)
#define S_TF_DUPACK_COUNT_ODD 43
#define V_TF_DUPACK_COUNT_ODD(x) ((x) << S_TF_DUPACK_COUNT_ODD)
#define S_TF_TX_CHANNEL 44
#define V_TF_TX_CHANNEL(x) ((x) << S_TF_TX_CHANNEL)
#define S_TF_RX_CHANNEL 45
#define V_TF_RX_CHANNEL(x) ((x) << S_TF_RX_CHANNEL)
#define S_TF_TX_PACE_FIXED 46
#define V_TF_TX_PACE_FIXED(x) ((x) << S_TF_TX_PACE_FIXED)
#define S_TF_RDMA_FLM_ERROR 47
#define V_TF_RDMA_FLM_ERROR(x) ((x) << S_TF_RDMA_FLM_ERROR)
#define S_TF_RX_FLOW_CONTROL_DISABLE 48
#define V_TF_RX_FLOW_CONTROL_DISABLE(x) ((x) << S_TF_RX_FLOW_CONTROL_DISABLE)
#endif /* _TCB_DEFS_H */
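
A brief, hedged sketch of how these register definitions are conventionally combined may help readers scanning the header above. It assumes the usual Chelsio naming scheme — W_* as the 32-bit TCB word index, S_* as the bit offset within that word, M_* as the unshifted field mask, and V_*() as the value shifted into position — and the helper and word-array names below are illustrative only (fields that straddle a 32-bit word would need the wider handling the driver does internally):

/* Assumes <linux/types.h> for u32/u64. Hypothetical helper only. */
static inline void tcb_set_field(u32 *tcb_words, unsigned int word,
				 u64 mask, u64 val)
{
	tcb_words[word] &= ~(u32)mask;	/* clear the old field bits */
	tcb_words[word] |= (u32)val;	/* install the pre-shifted value */
}

/* Example: set RDMAP/DDP version 1 and enable receive markers. */
static void example_tcb_setup(u32 *tcb_words)
{
	tcb_set_field(tcb_words, W_TCB_DDP_RDMAP_VERSION,
		      M_TCB_DDP_RDMAP_VERSION << S_TCB_DDP_RDMAP_VERSION,
		      V_TCB_DDP_RDMAP_VERSION(1ULL));
	tcb_set_field(tcb_words, W_TCB_MARKER_ENABLE_RX,
		      M_TCB_MARKER_ENABLE_RX << S_TCB_MARKER_ENABLE_RX,
		      V_TCB_MARKER_ENABLE_RX(1ULL));
}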

@ -3379,7 +3379,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
if (raddr->sin_addr.s_addr == htonl(INADDR_ANY)) {
err = pick_local_ipaddrs(dev, cm_id);
if (err)
goto fail2;
goto fail3;
}
/* find a route */
@ -3401,7 +3401,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
if (ipv6_addr_type(&raddr6->sin6_addr) == IPV6_ADDR_ANY) {
err = pick_local_ip6addrs(dev, cm_id);
if (err)
goto fail2;
goto fail3;
}
/* find a route */

@ -543,7 +543,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
mhp->rhp = rhp;
mhp->umem = ib_umem_get(udata, start, length, acc, 0);
mhp->umem = ib_umem_get(udata, start, length, acc);
if (IS_ERR(mhp->umem))
goto err_free_skb;

@ -305,7 +305,10 @@ static int c4iw_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
static int c4iw_query_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
int ret = 0;
pr_debug("ibdev %p\n", ibdev);
ret = ib_get_eth_speed(ibdev, port, &props->active_speed,
&props->active_width);
props->port_cap_flags =
IB_PORT_CM_SUP |
@ -315,11 +318,9 @@ static int c4iw_query_port(struct ib_device *ibdev, u8 port,
IB_PORT_VENDOR_CLASS_SUP | IB_PORT_BOOT_MGMT_SUP;
props->gid_tbl_len = 1;
props->pkey_tbl_len = 1;
props->active_width = 2;
props->active_speed = IB_SPEED_DDR;
props->max_msg_sz = -1;
return 0;
return ret;
}
static ssize_t hw_rev_show(struct device *dev,

@ -60,8 +60,6 @@ struct efa_dev {
u64 mem_bar_len;
u64 db_bar_addr;
u64 db_bar_len;
u8 addr[EFA_GID_SIZE];
u32 mtu;
int admin_msix_vector_idx;
struct efa_irq admin_irq;
@ -71,8 +69,6 @@ struct efa_dev {
struct efa_ucontext {
struct ib_ucontext ibucontext;
struct xarray mmap_xa;
u32 mmap_xa_page;
u16 uarn;
};
@ -91,6 +87,7 @@ struct efa_cq {
struct efa_ucontext *ucontext;
dma_addr_t dma_addr;
void *cpu_addr;
struct rdma_user_mmap_entry *mmap_entry;
size_t size;
u16 cq_idx;
};
@ -101,6 +98,13 @@ struct efa_qp {
void *rq_cpu_addr;
size_t rq_size;
enum ib_qp_state state;
/* Used for saving mmap_xa entries */
struct rdma_user_mmap_entry *sq_db_mmap_entry;
struct rdma_user_mmap_entry *llq_desc_mmap_entry;
struct rdma_user_mmap_entry *rq_db_mmap_entry;
struct rdma_user_mmap_entry *rq_mmap_entry;
u32 qp_handle;
u32 max_send_wr;
u32 max_recv_wr;
@ -147,6 +151,7 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata);
void efa_dealloc_ucontext(struct ib_ucontext *ibucontext);
int efa_mmap(struct ib_ucontext *ibucontext,
struct vm_area_struct *vma);
void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
int efa_create_ah(struct ib_ah *ibah,
struct rdma_ah_attr *ah_attr,
u32 flags,

@ -362,9 +362,13 @@ struct efa_admin_reg_mr_cmd {
/*
* permissions
* 0 : local_write_enable - Write permissions: value
* of 1 needed for RQ buffers and for RDMA write
* 7:1 : reserved1 - remote access flags, etc
* 0 : local_write_enable - Local write permissions:
* must be set for RQ buffers and buffers posted for
* RDMA Read requests
* 1 : reserved1 - MBZ
* 2 : remote_read_enable - Remote read permissions:
* must be set to enable RDMA read from the region
* 7:3 : reserved2 - MBZ
*/
u8 permissions;
@ -558,6 +562,16 @@ struct efa_admin_feature_device_attr_desc {
/* Indicates how many bits are used virtual address access */
u8 virt_addr_width;
/*
* 0 : rdma_read - If set, RDMA Read is supported on
* TX queues
* 31:1 : reserved - MBZ
*/
u32 device_caps;
/* Max RDMA transfer size in bytes */
u32 max_rdma_size;
};
struct efa_admin_feature_queue_attr_desc {
@ -604,6 +618,9 @@ struct efa_admin_feature_queue_attr_desc {
/* The maximum size of LLQ in bytes */
u32 max_llq_size;
/* Maximum number of SGEs for a single RDMA read WQE */
u16 max_wr_rdma_sges;
};
struct efa_admin_feature_aenq_desc {
@ -618,6 +635,7 @@ struct efa_admin_feature_network_attr_desc {
/* Raw address data in network byte order */
u8 addr[16];
/* max packet payload size in bytes */
u32 mtu;
};
@ -780,6 +798,8 @@ struct efa_admin_mmio_req_read_less_resp {
#define EFA_ADMIN_REG_MR_CMD_MEM_ADDR_PHY_MODE_EN_SHIFT 7
#define EFA_ADMIN_REG_MR_CMD_MEM_ADDR_PHY_MODE_EN_MASK BIT(7)
#define EFA_ADMIN_REG_MR_CMD_LOCAL_WRITE_ENABLE_MASK BIT(0)
#define EFA_ADMIN_REG_MR_CMD_REMOTE_READ_ENABLE_SHIFT 2
#define EFA_ADMIN_REG_MR_CMD_REMOTE_READ_ENABLE_MASK BIT(2)
/* create_cq_cmd */
#define EFA_ADMIN_CREATE_CQ_CMD_INTERRUPT_MODE_ENABLED_SHIFT 5
@ -791,4 +811,7 @@ struct efa_admin_mmio_req_read_less_resp {
/* get_set_feature_common_desc */
#define EFA_ADMIN_GET_SET_FEATURE_COMMON_DESC_SELECT_MASK GENMASK(1, 0)
/* feature_device_attr_desc */
#define EFA_ADMIN_FEATURE_DEVICE_ATTR_DESC_RDMA_READ_MASK BIT(0)
#endif /* _EFA_ADMIN_CMDS_H_ */
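
The net effect of the admin-command changes above is that the reg_mr permissions byte now carries a remote-read bit alongside local write, and the device advertises RDMA read support through device_caps. A minimal sketch of how a caller might compose and test these bits follows; only the EFA_ADMIN_* masks come from the header, the helper names are illustrative:

#include <linux/types.h>	/* u8, u32, bool */

/* Hypothetical helper: build the reg_mr permissions byte. */
static u8 efa_example_mr_permissions(bool local_write, bool remote_read)
{
	u8 permissions = 0;

	if (local_write)
		permissions |= EFA_ADMIN_REG_MR_CMD_LOCAL_WRITE_ENABLE_MASK;
	if (remote_read)
		permissions |= EFA_ADMIN_REG_MR_CMD_REMOTE_READ_ENABLE_MASK;

	return permissions;
}

/* Hypothetical helper: does the device support RDMA read on TX queues? */
static bool efa_example_rdma_read_supported(u32 device_caps)
{
	return device_caps & EFA_ADMIN_FEATURE_DEVICE_ATTR_DESC_RDMA_READ_MASK;
}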

@ -317,6 +317,7 @@ static struct efa_comp_ctx *__efa_com_submit_admin_cmd(struct efa_com_admin_queu
struct efa_admin_acq_entry *comp,
size_t comp_size_in_bytes)
{
struct efa_admin_aq_entry *aqe;
struct efa_comp_ctx *comp_ctx;
u16 queue_size_mask;
u16 cmd_id;
@ -350,7 +351,9 @@ static struct efa_comp_ctx *__efa_com_submit_admin_cmd(struct efa_com_admin_queu
reinit_completion(&comp_ctx->wait_event);
memcpy(&aq->sq.entries[pi], cmd, cmd_size_in_bytes);
aqe = &aq->sq.entries[pi];
memset(aqe, 0, sizeof(*aqe));
memcpy(aqe, cmd, cmd_size_in_bytes);
aq->sq.pc++;
atomic64_inc(&aq->stats.submitted_cmd);

@ -230,8 +230,7 @@ int efa_com_register_mr(struct efa_com_dev *edev,
mr_cmd.flags |= params->page_shift &
EFA_ADMIN_REG_MR_CMD_PHYS_PAGE_SIZE_SHIFT_MASK;
mr_cmd.iova = params->iova;
mr_cmd.permissions |= params->permissions &
EFA_ADMIN_REG_MR_CMD_LOCAL_WRITE_ENABLE_MASK;
mr_cmd.permissions = params->permissions;
if (params->inline_pbl) {
memcpy(mr_cmd.pbl.inline_pbl_array,
@ -423,28 +422,6 @@ static int efa_com_get_feature(struct efa_com_dev *edev,
return efa_com_get_feature_ex(edev, get_resp, feature_id, 0, 0);
}
int efa_com_get_network_attr(struct efa_com_dev *edev,
struct efa_com_get_network_attr_result *result)
{
struct efa_admin_get_feature_resp resp;
int err;
err = efa_com_get_feature(edev, &resp,
EFA_ADMIN_NETWORK_ATTR);
if (err) {
ibdev_err_ratelimited(edev->efa_dev,
"Failed to get network attributes %d\n",
err);
return err;
}
memcpy(result->addr, resp.u.network_attr.addr,
sizeof(resp.u.network_attr.addr));
result->mtu = resp.u.network_attr.mtu;
return 0;
}
int efa_com_get_device_attr(struct efa_com_dev *edev,
struct efa_com_get_device_attr_result *result)
{
@ -467,6 +444,8 @@ int efa_com_get_device_attr(struct efa_com_dev *edev,
result->phys_addr_width = resp.u.device_attr.phys_addr_width;
result->virt_addr_width = resp.u.device_attr.virt_addr_width;
result->db_bar = resp.u.device_attr.db_bar;
result->max_rdma_size = resp.u.device_attr.max_rdma_size;
result->device_caps = resp.u.device_attr.device_caps;
if (result->admin_api_version < 1) {
ibdev_err_ratelimited(
@ -500,6 +479,19 @@ int efa_com_get_device_attr(struct efa_com_dev *edev,
result->max_ah = resp.u.queue_attr.max_ah;
result->max_llq_size = resp.u.queue_attr.max_llq_size;
result->sub_cqs_per_cq = resp.u.queue_attr.sub_cqs_per_cq;
result->max_wr_rdma_sge = resp.u.queue_attr.max_wr_rdma_sges;
err = efa_com_get_feature(edev, &resp, EFA_ADMIN_NETWORK_ATTR);
if (err) {
ibdev_err_ratelimited(edev->efa_dev,
"Failed to get network attributes %d\n",
err);
return err;
}
memcpy(result->addr, resp.u.network_attr.addr,
sizeof(resp.u.network_attr.addr));
result->mtu = resp.u.network_attr.mtu;
return 0;
}

@ -100,14 +100,11 @@ struct efa_com_destroy_ah_params {
u16 pdn;
};
struct efa_com_get_network_attr_result {
u8 addr[EFA_GID_SIZE];
u32 mtu;
};
struct efa_com_get_device_attr_result {
u8 addr[EFA_GID_SIZE];
u64 page_size_cap;
u64 max_mr_pages;
u32 mtu;
u32 fw_version;
u32 admin_api_version;
u32 device_version;
@ -124,9 +121,12 @@ struct efa_com_get_device_attr_result {
u32 max_pd;
u32 max_ah;
u32 max_llq_size;
u32 max_rdma_size;
u32 device_caps;
u16 sub_cqs_per_cq;
u16 max_sq_sge;
u16 max_rq_sge;
u16 max_wr_rdma_sge;
u8 db_bar;
};
@ -181,12 +181,7 @@ struct efa_com_reg_mr_params {
* address mapping
*/
u8 page_shift;
/*
* permissions
* 0: local_write_enable - Write permissions: value of 1 needed
* for RQ buffers and for RDMA write:1: reserved1 - remote
* access flags, etc
*/
/* see permissions field of struct efa_admin_reg_mr_cmd */
u8 permissions;
u8 inline_pbl;
u8 indirect;
@ -271,8 +266,6 @@ int efa_com_create_ah(struct efa_com_dev *edev,
struct efa_com_create_ah_result *result);
int efa_com_destroy_ah(struct efa_com_dev *edev,
struct efa_com_destroy_ah_params *params);
int efa_com_get_network_attr(struct efa_com_dev *edev,
struct efa_com_get_network_attr_result *result);
int efa_com_get_device_attr(struct efa_com_dev *edev,
struct efa_com_get_device_attr_result *result);
int efa_com_get_hw_hints(struct efa_com_dev *edev,

@ -30,15 +30,6 @@ MODULE_DEVICE_TABLE(pci, efa_pci_tbl);
(BIT(EFA_ADMIN_FATAL_ERROR) | BIT(EFA_ADMIN_WARNING) | \
BIT(EFA_ADMIN_NOTIFICATION) | BIT(EFA_ADMIN_KEEP_ALIVE))
static void efa_update_network_attr(struct efa_dev *dev,
struct efa_com_get_network_attr_result *network_attr)
{
memcpy(dev->addr, network_attr->addr, sizeof(network_attr->addr));
dev->mtu = network_attr->mtu;
dev_dbg(&dev->pdev->dev, "Full address %pI6\n", dev->addr);
}
/* This handler will called for unknown event group or unimplemented handlers */
static void unimplemented_aenq_handler(void *data,
struct efa_admin_aenq_entry *aenq_e)
@ -217,6 +208,7 @@ static const struct ib_device_ops efa_dev_ops = {
.get_link_layer = efa_port_link_layer,
.get_port_immutable = efa_get_port_immutable,
.mmap = efa_mmap,
.mmap_free = efa_mmap_free,
.modify_qp = efa_modify_qp,
.query_device = efa_query_device,
.query_gid = efa_query_gid,
@ -233,7 +225,6 @@ static const struct ib_device_ops efa_dev_ops = {
static int efa_ib_device_add(struct efa_dev *dev)
{
struct efa_com_get_network_attr_result network_attr;
struct efa_com_get_hw_hints_result hw_hints;
struct pci_dev *pdev = dev->pdev;
int err;
@ -249,12 +240,6 @@ static int efa_ib_device_add(struct efa_dev *dev)
if (err)
return err;
err = efa_com_get_network_attr(&dev->edev, &network_attr);
if (err)
goto err_release_doorbell_bar;
efa_update_network_attr(dev, &network_attr);
err = efa_com_get_hw_hints(&dev->edev, &hw_hints);
if (err)
goto err_release_doorbell_bar;

@ -13,10 +13,6 @@
#include "efa.h"
#define EFA_MMAP_FLAG_SHIFT 56
#define EFA_MMAP_PAGE_MASK GENMASK(EFA_MMAP_FLAG_SHIFT - 1, 0)
#define EFA_MMAP_INVALID U64_MAX
enum {
EFA_MMAP_DMA_PAGE = 0,
EFA_MMAP_IO_WC,
@ -27,20 +23,12 @@ enum {
(BIT(EFA_ADMIN_FATAL_ERROR) | BIT(EFA_ADMIN_WARNING) | \
BIT(EFA_ADMIN_NOTIFICATION) | BIT(EFA_ADMIN_KEEP_ALIVE))
struct efa_mmap_entry {
void *obj;
struct efa_user_mmap_entry {
struct rdma_user_mmap_entry rdma_entry;
u64 address;
u64 length;
u32 mmap_page;
u8 mmap_flag;
};
static inline u64 get_mmap_key(const struct efa_mmap_entry *efa)
{
return ((u64)efa->mmap_flag << EFA_MMAP_FLAG_SHIFT) |
((u64)efa->mmap_page << PAGE_SHIFT);
}
#define EFA_DEFINE_STATS(op) \
op(EFA_TX_BYTES, "tx_bytes") \
op(EFA_TX_PKTS, "tx_pkts") \
@ -82,8 +70,6 @@ static const char *const efa_stats_names[] = {
#define EFA_CHUNK_USED_SIZE \
((EFA_PTRS_PER_CHUNK * EFA_CHUNK_PAYLOAD_PTR_SIZE) + EFA_CHUNK_PTR_SIZE)
#define EFA_SUPPORTED_ACCESS_FLAGS IB_ACCESS_LOCAL_WRITE
struct pbl_chunk {
dma_addr_t dma_addr;
u64 *buf;
@ -147,6 +133,17 @@ static inline struct efa_ah *to_eah(struct ib_ah *ibah)
return container_of(ibah, struct efa_ah, ibah);
}
static inline struct efa_user_mmap_entry *
to_emmap(struct rdma_user_mmap_entry *rdma_entry)
{
return container_of(rdma_entry, struct efa_user_mmap_entry, rdma_entry);
}
static inline bool is_rdma_read_cap(struct efa_dev *dev)
{
return dev->dev_attr.device_caps & EFA_ADMIN_FEATURE_DEVICE_ATTR_DESC_RDMA_READ_MASK;
}
#define field_avail(x, fld, sz) (offsetof(typeof(x), fld) + \
FIELD_SIZEOF(typeof(x), fld) <= (sz))
@ -172,106 +169,6 @@ static void *efa_zalloc_mapped(struct efa_dev *dev, dma_addr_t *dma_addr,
return addr;
}
/*
* This is only called when the ucontext is destroyed and there can be no
* concurrent query via mmap or allocate on the xarray, thus we can be sure no
* other thread is using the entry pointer. We also know that all the BAR
* pages have either been zap'd or munmaped at this point. Normal pages are
* refcounted and will be freed at the proper time.
*/
static void mmap_entries_remove_free(struct efa_dev *dev,
struct efa_ucontext *ucontext)
{
struct efa_mmap_entry *entry;
unsigned long mmap_page;
xa_for_each(&ucontext->mmap_xa, mmap_page, entry) {
xa_erase(&ucontext->mmap_xa, mmap_page);
ibdev_dbg(
&dev->ibdev,
"mmap: obj[0x%p] key[%#llx] addr[%#llx] len[%#llx] removed\n",
entry->obj, get_mmap_key(entry), entry->address,
entry->length);
if (entry->mmap_flag == EFA_MMAP_DMA_PAGE)
/* DMA mapping is already gone, now free the pages */
free_pages_exact(phys_to_virt(entry->address),
entry->length);
kfree(entry);
}
}
static struct efa_mmap_entry *mmap_entry_get(struct efa_dev *dev,
struct efa_ucontext *ucontext,
u64 key, u64 len)
{
struct efa_mmap_entry *entry;
u64 mmap_page;
mmap_page = (key & EFA_MMAP_PAGE_MASK) >> PAGE_SHIFT;
if (mmap_page > U32_MAX)
return NULL;
entry = xa_load(&ucontext->mmap_xa, mmap_page);
if (!entry || get_mmap_key(entry) != key || entry->length != len)
return NULL;
ibdev_dbg(&dev->ibdev,
"mmap: obj[0x%p] key[%#llx] addr[%#llx] len[%#llx] removed\n",
entry->obj, key, entry->address, entry->length);
return entry;
}
/*
* Note this locking scheme cannot support removal of entries, except during
* ucontext destruction when the core code guarentees no concurrency.
*/
static u64 mmap_entry_insert(struct efa_dev *dev, struct efa_ucontext *ucontext,
void *obj, u64 address, u64 length, u8 mmap_flag)
{
struct efa_mmap_entry *entry;
u32 next_mmap_page;
int err;
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
return EFA_MMAP_INVALID;
entry->obj = obj;
entry->address = address;
entry->length = length;
entry->mmap_flag = mmap_flag;
xa_lock(&ucontext->mmap_xa);
if (check_add_overflow(ucontext->mmap_xa_page,
(u32)(length >> PAGE_SHIFT),
&next_mmap_page))
goto err_unlock;
entry->mmap_page = ucontext->mmap_xa_page;
ucontext->mmap_xa_page = next_mmap_page;
err = __xa_insert(&ucontext->mmap_xa, entry->mmap_page, entry,
GFP_KERNEL);
if (err)
goto err_unlock;
xa_unlock(&ucontext->mmap_xa);
ibdev_dbg(
&dev->ibdev,
"mmap: obj[0x%p] addr[%#llx], len[%#llx], key[%#llx] inserted\n",
entry->obj, entry->address, entry->length, get_mmap_key(entry));
return get_mmap_key(entry);
err_unlock:
xa_unlock(&ucontext->mmap_xa);
kfree(entry);
return EFA_MMAP_INVALID;
}
int efa_query_device(struct ib_device *ibdev,
struct ib_device_attr *props,
struct ib_udata *udata)
@ -306,12 +203,17 @@ int efa_query_device(struct ib_device *ibdev,
dev_attr->max_rq_depth);
props->max_send_sge = dev_attr->max_sq_sge;
props->max_recv_sge = dev_attr->max_rq_sge;
props->max_sge_rd = dev_attr->max_wr_rdma_sge;
if (udata && udata->outlen) {
resp.max_sq_sge = dev_attr->max_sq_sge;
resp.max_rq_sge = dev_attr->max_rq_sge;
resp.max_sq_wr = dev_attr->max_sq_depth;
resp.max_rq_wr = dev_attr->max_rq_depth;
resp.max_rdma_size = dev_attr->max_rdma_size;
if (is_rdma_read_cap(dev))
resp.device_caps |= EFA_QUERY_DEVICE_CAPS_RDMA_READ;
err = ib_copy_to_udata(udata, &resp,
min(sizeof(resp), udata->outlen));
@ -338,9 +240,9 @@ int efa_query_port(struct ib_device *ibdev, u8 port,
props->pkey_tbl_len = 1;
props->active_speed = IB_SPEED_EDR;
props->active_width = IB_WIDTH_4X;
props->max_mtu = ib_mtu_int_to_enum(dev->mtu);
props->active_mtu = ib_mtu_int_to_enum(dev->mtu);
props->max_msg_sz = dev->mtu;
props->max_mtu = ib_mtu_int_to_enum(dev->dev_attr.mtu);
props->active_mtu = ib_mtu_int_to_enum(dev->dev_attr.mtu);
props->max_msg_sz = dev->dev_attr.mtu;
props->max_vl_num = 1;
return 0;
@ -401,7 +303,7 @@ int efa_query_gid(struct ib_device *ibdev, u8 port, int index,
{
struct efa_dev *dev = to_edev(ibdev);
memcpy(gid->raw, dev->addr, sizeof(dev->addr));
memcpy(gid->raw, dev->dev_attr.addr, sizeof(dev->dev_attr.addr));
return 0;
}
@ -485,8 +387,19 @@ static int efa_destroy_qp_handle(struct efa_dev *dev, u32 qp_handle)
return efa_com_destroy_qp(&dev->edev, &params);
}
static void efa_qp_user_mmap_entries_remove(struct efa_ucontext *uctx,
struct efa_qp *qp)
{
rdma_user_mmap_entry_remove(qp->rq_mmap_entry);
rdma_user_mmap_entry_remove(qp->rq_db_mmap_entry);
rdma_user_mmap_entry_remove(qp->llq_desc_mmap_entry);
rdma_user_mmap_entry_remove(qp->sq_db_mmap_entry);
}
int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
{
struct efa_ucontext *ucontext = rdma_udata_to_drv_context(udata,
struct efa_ucontext, ibucontext);
struct efa_dev *dev = to_edev(ibqp->pd->device);
struct efa_qp *qp = to_eqp(ibqp);
int err;
@ -505,61 +418,101 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
DMA_TO_DEVICE);
}
efa_qp_user_mmap_entries_remove(ucontext, qp);
kfree(qp);
return 0;
}
static struct rdma_user_mmap_entry*
efa_user_mmap_entry_insert(struct ib_ucontext *ucontext,
u64 address, size_t length,
u8 mmap_flag, u64 *offset)
{
struct efa_user_mmap_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL);
int err;
if (!entry)
return NULL;
entry->address = address;
entry->mmap_flag = mmap_flag;
err = rdma_user_mmap_entry_insert(ucontext, &entry->rdma_entry,
length);
if (err) {
kfree(entry);
return NULL;
}
*offset = rdma_user_mmap_get_offset(&entry->rdma_entry);
return &entry->rdma_entry;
}
static int qp_mmap_entries_setup(struct efa_qp *qp,
struct efa_dev *dev,
struct efa_ucontext *ucontext,
struct efa_com_create_qp_params *params,
struct efa_ibv_create_qp_resp *resp)
{
/*
* Once an entry is inserted it might be mmapped, hence cannot be
* cleaned up until dealloc_ucontext.
*/
resp->sq_db_mmap_key =
mmap_entry_insert(dev, ucontext, qp,
dev->db_bar_addr + resp->sq_db_offset,
PAGE_SIZE, EFA_MMAP_IO_NC);
if (resp->sq_db_mmap_key == EFA_MMAP_INVALID)
size_t length;
u64 address;
address = dev->db_bar_addr + resp->sq_db_offset;
qp->sq_db_mmap_entry =
efa_user_mmap_entry_insert(&ucontext->ibucontext,
address,
PAGE_SIZE, EFA_MMAP_IO_NC,
&resp->sq_db_mmap_key);
if (!qp->sq_db_mmap_entry)
return -ENOMEM;
resp->sq_db_offset &= ~PAGE_MASK;
resp->llq_desc_mmap_key =
mmap_entry_insert(dev, ucontext, qp,
dev->mem_bar_addr + resp->llq_desc_offset,
PAGE_ALIGN(params->sq_ring_size_in_bytes +
(resp->llq_desc_offset & ~PAGE_MASK)),
EFA_MMAP_IO_WC);
if (resp->llq_desc_mmap_key == EFA_MMAP_INVALID)
return -ENOMEM;
address = dev->mem_bar_addr + resp->llq_desc_offset;
length = PAGE_ALIGN(params->sq_ring_size_in_bytes +
(resp->llq_desc_offset & ~PAGE_MASK));
qp->llq_desc_mmap_entry =
efa_user_mmap_entry_insert(&ucontext->ibucontext,
address, length,
EFA_MMAP_IO_WC,
&resp->llq_desc_mmap_key);
if (!qp->llq_desc_mmap_entry)
goto err_remove_mmap;
resp->llq_desc_offset &= ~PAGE_MASK;
if (qp->rq_size) {
resp->rq_db_mmap_key =
mmap_entry_insert(dev, ucontext, qp,
dev->db_bar_addr + resp->rq_db_offset,
PAGE_SIZE, EFA_MMAP_IO_NC);
if (resp->rq_db_mmap_key == EFA_MMAP_INVALID)
return -ENOMEM;
address = dev->db_bar_addr + resp->rq_db_offset;
qp->rq_db_mmap_entry =
efa_user_mmap_entry_insert(&ucontext->ibucontext,
address, PAGE_SIZE,
EFA_MMAP_IO_NC,
&resp->rq_db_mmap_key);
if (!qp->rq_db_mmap_entry)
goto err_remove_mmap;
resp->rq_db_offset &= ~PAGE_MASK;
resp->rq_mmap_key =
mmap_entry_insert(dev, ucontext, qp,
virt_to_phys(qp->rq_cpu_addr),
qp->rq_size, EFA_MMAP_DMA_PAGE);
if (resp->rq_mmap_key == EFA_MMAP_INVALID)
return -ENOMEM;
address = virt_to_phys(qp->rq_cpu_addr);
qp->rq_mmap_entry =
efa_user_mmap_entry_insert(&ucontext->ibucontext,
address, qp->rq_size,
EFA_MMAP_DMA_PAGE,
&resp->rq_mmap_key);
if (!qp->rq_mmap_entry)
goto err_remove_mmap;
resp->rq_mmap_size = qp->rq_size;
}
return 0;
err_remove_mmap:
efa_qp_user_mmap_entries_remove(ucontext, qp);
return -ENOMEM;
}
static int efa_qp_validate_cap(struct efa_dev *dev,
@ -634,7 +587,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
struct efa_dev *dev = to_edev(ibpd->device);
struct efa_ibv_create_qp_resp resp = {};
struct efa_ibv_create_qp cmd = {};
bool rq_entry_inserted = false;
struct efa_ucontext *ucontext;
struct efa_qp *qp;
int err;
@ -742,7 +694,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
if (err)
goto err_destroy_qp;
rq_entry_inserted = true;
qp->qp_handle = create_qp_resp.qp_handle;
qp->ibqp.qp_num = create_qp_resp.qp_num;
qp->ibqp.qp_type = init_attr->qp_type;
@ -759,7 +710,7 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
ibdev_dbg(&dev->ibdev,
"Failed to copy udata for qp[%u]\n",
create_qp_resp.qp_num);
goto err_destroy_qp;
goto err_remove_mmap_entries;
}
}
@ -767,13 +718,16 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
return &qp->ibqp;
err_remove_mmap_entries:
efa_qp_user_mmap_entries_remove(ucontext, qp);
err_destroy_qp:
efa_destroy_qp_handle(dev, create_qp_resp.qp_handle);
err_free_mapped:
if (qp->rq_size) {
dma_unmap_single(&dev->pdev->dev, qp->rq_dma_addr, qp->rq_size,
DMA_TO_DEVICE);
if (!rq_entry_inserted)
if (!qp->rq_mmap_entry)
free_pages_exact(qp->rq_cpu_addr, qp->rq_size);
}
err_free_qp:
@ -897,16 +851,18 @@ void efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
efa_destroy_cq_idx(dev, cq->cq_idx);
dma_unmap_single(&dev->pdev->dev, cq->dma_addr, cq->size,
DMA_FROM_DEVICE);
rdma_user_mmap_entry_remove(cq->mmap_entry);
}
static int cq_mmap_entries_setup(struct efa_dev *dev, struct efa_cq *cq,
struct efa_ibv_create_cq_resp *resp)
{
resp->q_mmap_size = cq->size;
resp->q_mmap_key = mmap_entry_insert(dev, cq->ucontext, cq,
virt_to_phys(cq->cpu_addr),
cq->size, EFA_MMAP_DMA_PAGE);
if (resp->q_mmap_key == EFA_MMAP_INVALID)
cq->mmap_entry = efa_user_mmap_entry_insert(&cq->ucontext->ibucontext,
virt_to_phys(cq->cpu_addr),
cq->size, EFA_MMAP_DMA_PAGE,
&resp->q_mmap_key);
if (!cq->mmap_entry)
return -ENOMEM;
return 0;
@ -924,7 +880,6 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct efa_dev *dev = to_edev(ibdev);
struct efa_ibv_create_cq cmd = {};
struct efa_cq *cq = to_ecq(ibcq);
bool cq_entry_inserted = false;
int entries = attr->cqe;
int err;
@ -1013,15 +968,13 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
goto err_destroy_cq;
}
cq_entry_inserted = true;
if (udata->outlen) {
err = ib_copy_to_udata(udata, &resp,
min(sizeof(resp), udata->outlen));
if (err) {
ibdev_dbg(ibdev,
"Failed to copy udata for create_cq\n");
goto err_destroy_cq;
goto err_remove_mmap;
}
}
@ -1030,13 +983,16 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
return 0;
err_remove_mmap:
rdma_user_mmap_entry_remove(cq->mmap_entry);
err_destroy_cq:
efa_destroy_cq_idx(dev, cq->cq_idx);
err_free_mapped:
dma_unmap_single(&dev->pdev->dev, cq->dma_addr, cq->size,
DMA_FROM_DEVICE);
if (!cq_entry_inserted)
if (!cq->mmap_entry)
free_pages_exact(cq->cpu_addr, cq->size);
err_out:
atomic64_inc(&dev->stats.sw_stats.create_cq_err);
return err;
@ -1396,6 +1352,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
struct efa_com_reg_mr_params params = {};
struct efa_com_reg_mr_result result = {};
struct pbl_context pbl;
int supp_access_flags;
unsigned int pg_sz;
struct efa_mr *mr;
int inline_size;
@ -1409,10 +1366,14 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
goto err_out;
}
if (access_flags & ~EFA_SUPPORTED_ACCESS_FLAGS) {
supp_access_flags =
IB_ACCESS_LOCAL_WRITE |
(is_rdma_read_cap(dev) ? IB_ACCESS_REMOTE_READ : 0);
if (access_flags & ~supp_access_flags) {
ibdev_dbg(&dev->ibdev,
"Unsupported access flags[%#x], supported[%#x]\n",
access_flags, EFA_SUPPORTED_ACCESS_FLAGS);
access_flags, supp_access_flags);
err = -EOPNOTSUPP;
goto err_out;
}
@ -1423,7 +1384,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
goto err_out;
}
mr->umem = ib_umem_get(udata, start, length, access_flags, 0);
mr->umem = ib_umem_get(udata, start, length, access_flags);
if (IS_ERR(mr->umem)) {
err = PTR_ERR(mr->umem);
ibdev_dbg(&dev->ibdev,
@ -1434,7 +1395,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
params.pd = to_epd(ibpd)->pdn;
params.iova = virt_addr;
params.mr_length_in_bytes = length;
params.permissions = access_flags & 0x1;
params.permissions = access_flags;
pg_sz = ib_umem_find_best_pgsz(mr->umem,
dev->dev_attr.page_size_cap,
@ -1556,7 +1517,6 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata)
goto err_out;
ucontext->uarn = result.uarn;
xa_init(&ucontext->mmap_xa);
resp.cmds_supp_udata_mask |= EFA_USER_CMDS_SUPP_UDATA_QUERY_DEVICE;
resp.cmds_supp_udata_mask |= EFA_USER_CMDS_SUPP_UDATA_CREATE_AH;
@ -1585,38 +1545,56 @@ void efa_dealloc_ucontext(struct ib_ucontext *ibucontext)
struct efa_ucontext *ucontext = to_eucontext(ibucontext);
struct efa_dev *dev = to_edev(ibucontext->device);
mmap_entries_remove_free(dev, ucontext);
efa_dealloc_uar(dev, ucontext->uarn);
}
static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
struct vm_area_struct *vma, u64 key, u64 length)
void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
{
struct efa_mmap_entry *entry;
unsigned long va;
u64 pfn;
int err;
struct efa_user_mmap_entry *entry = to_emmap(rdma_entry);
entry = mmap_entry_get(dev, ucontext, key, length);
if (!entry) {
ibdev_dbg(&dev->ibdev, "key[%#llx] does not have valid entry\n",
key);
/* DMA mapping is already gone, now free the pages */
if (entry->mmap_flag == EFA_MMAP_DMA_PAGE)
free_pages_exact(phys_to_virt(entry->address),
entry->rdma_entry.npages * PAGE_SIZE);
kfree(entry);
}
static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
struct vm_area_struct *vma)
{
struct rdma_user_mmap_entry *rdma_entry;
struct efa_user_mmap_entry *entry;
unsigned long va;
int err = 0;
u64 pfn;
rdma_entry = rdma_user_mmap_entry_get(&ucontext->ibucontext, vma);
if (!rdma_entry) {
ibdev_dbg(&dev->ibdev,
"pgoff[%#lx] does not have valid entry\n",
vma->vm_pgoff);
return -EINVAL;
}
entry = to_emmap(rdma_entry);
ibdev_dbg(&dev->ibdev,
"Mapping address[%#llx], length[%#llx], mmap_flag[%d]\n",
entry->address, length, entry->mmap_flag);
"Mapping address[%#llx], length[%#zx], mmap_flag[%d]\n",
entry->address, rdma_entry->npages * PAGE_SIZE,
entry->mmap_flag);
pfn = entry->address >> PAGE_SHIFT;
switch (entry->mmap_flag) {
case EFA_MMAP_IO_NC:
err = rdma_user_mmap_io(&ucontext->ibucontext, vma, pfn, length,
pgprot_noncached(vma->vm_page_prot));
err = rdma_user_mmap_io(&ucontext->ibucontext, vma, pfn,
entry->rdma_entry.npages * PAGE_SIZE,
pgprot_noncached(vma->vm_page_prot),
rdma_entry);
break;
case EFA_MMAP_IO_WC:
err = rdma_user_mmap_io(&ucontext->ibucontext, vma, pfn, length,
pgprot_writecombine(vma->vm_page_prot));
err = rdma_user_mmap_io(&ucontext->ibucontext, vma, pfn,
entry->rdma_entry.npages * PAGE_SIZE,
pgprot_writecombine(vma->vm_page_prot),
rdma_entry);
break;
case EFA_MMAP_DMA_PAGE:
for (va = vma->vm_start; va < vma->vm_end;
@ -1633,12 +1611,13 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
if (err) {
ibdev_dbg(
&dev->ibdev,
"Couldn't mmap address[%#llx] length[%#llx] mmap_flag[%d] err[%d]\n",
entry->address, length, entry->mmap_flag, err);
return err;
"Couldn't mmap address[%#llx] length[%#zx] mmap_flag[%d] err[%d]\n",
entry->address, rdma_entry->npages * PAGE_SIZE,
entry->mmap_flag, err);
}
return 0;
rdma_user_mmap_entry_put(rdma_entry);
return err;
}
int efa_mmap(struct ib_ucontext *ibucontext,
@ -1646,26 +1625,13 @@ int efa_mmap(struct ib_ucontext *ibucontext,
{
struct efa_ucontext *ucontext = to_eucontext(ibucontext);
struct efa_dev *dev = to_edev(ibucontext->device);
u64 length = vma->vm_end - vma->vm_start;
u64 key = vma->vm_pgoff << PAGE_SHIFT;
size_t length = vma->vm_end - vma->vm_start;
ibdev_dbg(&dev->ibdev,
"start %#lx, end %#lx, length = %#llx, key = %#llx\n",
vma->vm_start, vma->vm_end, length, key);
"start %#lx, end %#lx, length = %#zx, pgoff = %#lx\n",
vma->vm_start, vma->vm_end, length, vma->vm_pgoff);
if (length % PAGE_SIZE != 0 || !(vma->vm_flags & VM_SHARED)) {
ibdev_dbg(&dev->ibdev,
"length[%#llx] is not page size aligned[%#lx] or VM_SHARED is not set [%#lx]\n",
length, PAGE_SIZE, vma->vm_flags);
return -EINVAL;
}
if (vma->vm_flags & VM_EXEC) {
ibdev_dbg(&dev->ibdev, "Mapping executable pages is not permitted\n");
return -EPERM;
}
return __efa_mmap(dev, ucontext, vma, key, length);
return __efa_mmap(dev, ucontext, vma);
}
static int efa_ah_destroy(struct efa_dev *dev, struct efa_ah *ah)
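
The efa_verbs.c conversion above is an instance of the common mmap-lifetime tracking noted in the pull: per-object rdma_user_mmap_entry records replace the driver-private xarray, and the core takes and drops references as user space maps and unmaps the regions. A condensed sketch of that pattern, using only the signatures visible in the diff (the example function names and the EFA_MMAP_IO_NC choice are placeholders), might look like this:

/* Producer side: register a mapping and hand its offset to user space. */
static int example_expose_region(struct ib_ucontext *uctx,
				 struct efa_user_mmap_entry *entry,
				 u64 address, size_t length, u64 *mmap_offset)
{
	int err;

	entry->address = address;
	entry->mmap_flag = EFA_MMAP_IO_NC;	/* illustrative flag */

	err = rdma_user_mmap_entry_insert(uctx, &entry->rdma_entry, length);
	if (err)
		return err;

	*mmap_offset = rdma_user_mmap_get_offset(&entry->rdma_entry);
	return 0;
}

/* Consumer side: resolve vm_pgoff back to the entry in .mmap(). */
static int example_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
{
	struct rdma_user_mmap_entry *rdma_entry;
	struct efa_user_mmap_entry *entry;
	int err;

	rdma_entry = rdma_user_mmap_entry_get(uctx, vma);
	if (!rdma_entry)
		return -EINVAL;
	entry = to_emmap(rdma_entry);

	err = rdma_user_mmap_io(uctx, vma, entry->address >> PAGE_SHIFT,
				rdma_entry->npages * PAGE_SIZE,
				pgprot_noncached(vma->vm_page_prot),
				rdma_entry);

	rdma_user_mmap_entry_put(rdma_entry);
	return err;
}

/* Teardown: the object's destroy path calls rdma_user_mmap_entry_remove(),
 * and the driver's .mmap_free() hook (efa_mmap_free above) releases the
 * backing memory once the last reference is gone. */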

@ -4915,16 +4915,11 @@ static int hfi1_process_ib_mad(struct ib_device *ibdev, int mad_flags, u8 port,
*/
int hfi1_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad_hdr *in_mad, size_t in_mad_size,
struct ib_mad_hdr *out_mad, size_t *out_mad_size,
u16 *out_mad_pkey_index)
const struct ib_mad *in_mad, struct ib_mad *out_mad,
size_t *out_mad_size, u16 *out_mad_pkey_index)
{
switch (in_mad->base_version) {
switch (in_mad->mad_hdr.base_version) {
case OPA_MGMT_BASE_VERSION:
if (unlikely(in_mad_size != sizeof(struct opa_mad))) {
dev_err(ibdev->dev.parent, "invalid in_mad_size\n");
return IB_MAD_RESULT_FAILURE;
}
return hfi1_process_opa_mad(ibdev, mad_flags, port,
in_wc, in_grh,
(struct opa_mad *)in_mad,
@ -4932,10 +4927,8 @@ int hfi1_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
out_mad_size,
out_mad_pkey_index);
case IB_MGMT_BASE_VERSION:
return hfi1_process_ib_mad(ibdev, mad_flags, port,
in_wc, in_grh,
(const struct ib_mad *)in_mad,
(struct ib_mad *)out_mad);
return hfi1_process_ib_mad(ibdev, mad_flags, port, in_wc,
in_grh, in_mad, out_mad);
default:
break;
}

@ -634,7 +634,7 @@ static void apply_tx_lanes(struct hfi1_pportdata *ppd, u8 field_id,
u32 config_data, const char *message)
{
u8 i;
int ret = HCMD_SUCCESS;
int ret;
for (i = 0; i < 4; i++) {
ret = load_8051_config(ppd->dd, field_id, i, config_data);

@ -330,9 +330,8 @@ void hfi1_sys_guid_chg(struct hfi1_ibport *ibp);
void hfi1_node_desc_chg(struct hfi1_ibport *ibp);
int hfi1_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad_hdr *in_mad, size_t in_mad_size,
struct ib_mad_hdr *out_mad, size_t *out_mad_size,
u16 *out_mad_pkey_index);
const struct ib_mad *in_mad, struct ib_mad *out_mad,
size_t *out_mad_size, u16 *out_mad_pkey_index);
/*
* The PSN_MASK and PSN_SHIFT allow for

@ -1,23 +1,34 @@
# SPDX-License-Identifier: GPL-2.0-only
config INFINIBAND_HNS
bool "HNS RoCE Driver"
tristate "HNS RoCE Driver"
depends on NET_VENDOR_HISILICON
depends on ARM64 || (COMPILE_TEST && 64BIT)
depends on (HNS_DSAF && HNS_ENET) || HNS3
---help---
This is a RoCE/RDMA driver for the Hisilicon RoCE engine. The engine
is used in Hisilicon Hip06 and more further ICT SoC based on
platform device.
To compile HIP06 or HIP08 driver as module, choose M here.
config INFINIBAND_HNS_HIP06
tristate "Hisilicon Hip06 Family RoCE support"
bool "Hisilicon Hip06 Family RoCE support"
depends on INFINIBAND_HNS && HNS && HNS_DSAF && HNS_ENET
depends on INFINIBAND_HNS=m || (HNS_DSAF=y && HNS_ENET=y)
---help---
RoCE driver support for Hisilicon RoCE engine in Hisilicon Hip06 and
Hip07 SoC. These RoCE engines are platform devices.
To compile this driver, choose Y here: if INFINIBAND_HNS is m, this
module will be called hns-roce-hw-v1
config INFINIBAND_HNS_HIP08
tristate "Hisilicon Hip08 Family RoCE support"
bool "Hisilicon Hip08 Family RoCE support"
depends on INFINIBAND_HNS && PCI && HNS3
depends on INFINIBAND_HNS=m || HNS3=y
---help---
RoCE driver support for Hisilicon RoCE engine in Hisilicon Hip08 SoC.
The RoCE engine is a PCI device.
To compile this driver, choose Y here: if INFINIBAND_HNS is m, this
module will be called hns-roce-hw-v2.

@ -9,8 +9,12 @@ hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o hns_roce_restrack.o
ifdef CONFIG_INFINIBAND_HNS_HIP06
hns-roce-hw-v1-objs := hns_roce_hw_v1.o $(hns-roce-objs)
obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
obj-$(CONFIG_INFINIBAND_HNS) += hns-roce-hw-v1.o
endif
ifdef CONFIG_INFINIBAND_HNS_HIP08
hns-roce-hw-v2-objs := hns_roce_hw_v2.o hns_roce_hw_v2_dfx.o $(hns-roce-objs)
obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns-roce-hw-v2.o
obj-$(CONFIG_INFINIBAND_HNS) += hns-roce-hw-v2.o
endif

@ -46,32 +46,32 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr,
const struct ib_gid_attr *gid_attr;
struct device *dev = hr_dev->dev;
struct hns_roce_ah *ah = to_hr_ah(ibah);
u16 vlan_tag = 0xffff;
const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr);
u16 vlan_id = 0xffff;
bool vlan_en = false;
int ret;
gid_attr = ah_attr->grh.sgid_attr;
ret = rdma_read_gid_l2_fields(gid_attr, &vlan_tag, NULL);
ret = rdma_read_gid_l2_fields(gid_attr, &vlan_id, NULL);
if (ret)
return ret;
/* Get mac address */
memcpy(ah->av.mac, ah_attr->roce.dmac, ETH_ALEN);
if (vlan_tag < VLAN_CFI_MASK) {
if (vlan_id < VLAN_N_VID) {
vlan_en = true;
vlan_tag |= (rdma_ah_get_sl(ah_attr) &
vlan_id |= (rdma_ah_get_sl(ah_attr) &
HNS_ROCE_VLAN_SL_BIT_MASK) <<
HNS_ROCE_VLAN_SL_SHIFT;
}
ah->av.port = rdma_ah_get_port_num(ah_attr);
ah->av.gid_index = grh->sgid_index;
ah->av.vlan = vlan_tag;
ah->av.vlan_id = vlan_id;
ah->av.vlan_en = vlan_en;
dev_dbg(dev, "gid_index = 0x%x,vlan = 0x%x\n", ah->av.gid_index,
ah->av.vlan);
dev_dbg(dev, "gid_index = 0x%x,vlan_id = 0x%x\n", ah->av.gid_index,
ah->av.vlan_id);
if (rdma_ah_get_static_rate(ah_attr))
ah->av.stat_rate = IB_RATE_10_GBPS;

@ -55,7 +55,7 @@ int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj)
bitmap->last = 0;
*obj |= bitmap->top;
} else {
ret = -1;
ret = -EINVAL;
}
spin_unlock(&bitmap->lock);
@ -100,7 +100,7 @@ int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
}
*obj |= bitmap->top;
} else {
ret = -1;
ret = -EINVAL;
}
spin_unlock(&bitmap->lock);

@ -115,12 +115,12 @@ enum {
enum {
/* TPT commands */
HNS_ROCE_CMD_SW2HW_MPT = 0xd,
HNS_ROCE_CMD_HW2SW_MPT = 0xf,
HNS_ROCE_CMD_CREATE_MPT = 0xd,
HNS_ROCE_CMD_DESTROY_MPT = 0xf,
/* CQ commands */
HNS_ROCE_CMD_SW2HW_CQ = 0x16,
HNS_ROCE_CMD_HW2SW_CQ = 0x17,
HNS_ROCE_CMD_CREATE_CQC = 0x16,
HNS_ROCE_CMD_DESTROY_CQC = 0x17,
/* QP/EE commands */
HNS_ROCE_CMD_RST2INIT_QP = 0x19,
@ -129,14 +129,14 @@ enum {
HNS_ROCE_CMD_RTS2RTS_QP = 0x1c,
HNS_ROCE_CMD_2ERR_QP = 0x1e,
HNS_ROCE_CMD_RTS2SQD_QP = 0x1f,
HNS_ROCE_CMD_SQD2SQD_QP = 0x38,
HNS_ROCE_CMD_SQD2RTS_QP = 0x20,
HNS_ROCE_CMD_2RST_QP = 0x21,
HNS_ROCE_CMD_QUERY_QP = 0x22,
HNS_ROCE_CMD_SW2HW_SRQ = 0x70,
HNS_ROCE_CMD_SQD2SQD_QP = 0x38,
HNS_ROCE_CMD_CREATE_SRQ = 0x70,
HNS_ROCE_CMD_MODIFY_SRQC = 0x72,
HNS_ROCE_CMD_QUERY_SRQC = 0x73,
HNS_ROCE_CMD_HW2SW_SRQ = 0x74,
HNS_ROCE_CMD_DESTROY_SRQ = 0x74,
};
int hns_roce_cmd_mbox(struct hns_roce_dev *hr_dev, u64 in_param, u64 out_param,

@ -39,51 +39,8 @@
#include <rdma/hns-abi.h>
#include "hns_roce_common.h"
static void hns_roce_ib_cq_comp(struct hns_roce_cq *hr_cq)
{
struct ib_cq *ibcq = &hr_cq->ib_cq;
ibcq->comp_handler(ibcq, ibcq->cq_context);
}
static void hns_roce_ib_cq_event(struct hns_roce_cq *hr_cq,
enum hns_roce_event event_type)
{
struct hns_roce_dev *hr_dev;
struct ib_event event;
struct ib_cq *ibcq;
ibcq = &hr_cq->ib_cq;
hr_dev = to_hr_dev(ibcq->device);
if (event_type != HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID &&
event_type != HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR &&
event_type != HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW) {
dev_err(hr_dev->dev,
"hns_roce_ib: Unexpected event type 0x%x on CQ %06lx\n",
event_type, hr_cq->cqn);
return;
}
if (ibcq->event_handler) {
event.device = ibcq->device;
event.event = IB_EVENT_CQ_ERR;
event.element.cq = ibcq;
ibcq->event_handler(&event, ibcq->cq_context);
}
}
static int hns_roce_sw2hw_cq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long cq_num)
{
return hns_roce_cmd_mbox(dev, mailbox->dma, 0, cq_num, 0,
HNS_ROCE_CMD_SW2HW_CQ, HNS_ROCE_CMD_TIMEOUT_MSECS);
}
static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent,
struct hns_roce_mtt *hr_mtt,
struct hns_roce_cq *hr_cq, int vector)
static int hns_roce_alloc_cqc(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq)
{
struct hns_roce_cmd_mailbox *mailbox;
struct hns_roce_hem_table *mtt_table;
@ -101,35 +58,32 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent,
else
mtt_table = &hr_dev->mr_table.mtt_table;
mtts = hns_roce_table_find(hr_dev, mtt_table,
hr_mtt->first_seg, &dma_handle);
if (!mtts) {
dev_err(dev, "CQ alloc.Failed to find cq buf addr.\n");
return -EINVAL;
}
mtts = hns_roce_table_find(hr_dev, mtt_table, hr_cq->mtt.first_seg,
&dma_handle);
if (vector >= hr_dev->caps.num_comp_vectors) {
dev_err(dev, "CQ alloc.Invalid vector.\n");
if (!mtts) {
dev_err(dev, "Failed to find mtt for CQ buf.\n");
return -EINVAL;
}
hr_cq->vector = vector;
ret = hns_roce_bitmap_alloc(&cq_table->bitmap, &hr_cq->cqn);
if (ret == -1) {
dev_err(dev, "CQ alloc.Failed to alloc index.\n");
return -ENOMEM;
if (ret) {
dev_err(dev, "Num of CQ out of range.\n");
return ret;
}
/* Get CQC memory HEM(Hardware Entry Memory) table */
ret = hns_roce_table_get(hr_dev, &cq_table->table, hr_cq->cqn);
if (ret) {
dev_err(dev, "CQ alloc.Failed to get context mem.\n");
dev_err(dev,
"Get context mem failed(%d) when CQ(0x%lx) alloc.\n",
ret, hr_cq->cqn);
goto err_out;
}
ret = xa_err(xa_store(&cq_table->array, hr_cq->cqn, hr_cq, GFP_KERNEL));
if (ret) {
dev_err(dev, "CQ alloc failed xa_store.\n");
dev_err(dev, "Failed to xa_store CQ.\n");
goto err_put;
}
@ -140,14 +94,16 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent,
goto err_xa;
}
hr_dev->hw->write_cqc(hr_dev, hr_cq, mailbox->buf, mtts, dma_handle,
nent, vector);
hr_dev->hw->write_cqc(hr_dev, hr_cq, mailbox->buf, mtts, dma_handle);
/* Send mailbox to hw */
ret = hns_roce_sw2hw_cq(hr_dev, mailbox, hr_cq->cqn);
ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, hr_cq->cqn, 0,
HNS_ROCE_CMD_CREATE_CQC, HNS_ROCE_CMD_TIMEOUT_MSECS);
hns_roce_free_cmd_mailbox(hr_dev, mailbox);
if (ret) {
dev_err(dev, "CQ alloc.Failed to cmd mailbox.\n");
dev_err(dev,
"Send cmd mailbox failed(%d) when CQ(0x%lx) alloc.\n",
ret, hr_cq->cqn);
goto err_xa;
}
@ -170,24 +126,17 @@ err_out:
return ret;
}
static int hns_roce_hw2sw_cq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long cq_num)
{
return hns_roce_cmd_mbox(dev, 0, mailbox ? mailbox->dma : 0, cq_num,
mailbox ? 0 : 1, HNS_ROCE_CMD_HW2SW_CQ,
HNS_ROCE_CMD_TIMEOUT_MSECS);
}
void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
void hns_roce_free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
{
struct hns_roce_cq_table *cq_table = &hr_dev->cq_table;
struct device *dev = hr_dev->dev;
int ret;
ret = hns_roce_hw2sw_cq(hr_dev, NULL, hr_cq->cqn);
ret = hns_roce_cmd_mbox(hr_dev, 0, 0, hr_cq->cqn, 1,
HNS_ROCE_CMD_DESTROY_CQC,
HNS_ROCE_CMD_TIMEOUT_MSECS);
if (ret)
dev_err(dev, "HW2SW_CQ failed (%d) for CQN %06lx\n", ret,
dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
hr_cq->cqn);
xa_erase(&cq_table->array, hr_cq->cqn);
@ -204,103 +153,91 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn, BITMAP_NO_RR);
}
static int hns_roce_ib_get_cq_umem(struct hns_roce_dev *hr_dev,
struct ib_udata *udata,
struct hns_roce_cq_buf *buf,
struct ib_umem **umem, u64 buf_addr, int cqe)
static int get_cq_umem(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
struct hns_roce_ib_create_cq ucmd,
struct ib_udata *udata)
{
int ret;
u32 page_shift;
struct hns_roce_buf *buf = &hr_cq->buf;
struct hns_roce_mtt *mtt = &hr_cq->mtt;
struct ib_umem **umem = &hr_cq->umem;
u32 npages;
int ret;
*umem = ib_umem_get(udata, buf_addr, cqe * hr_dev->caps.cq_entry_sz,
IB_ACCESS_LOCAL_WRITE, 1);
*umem = ib_umem_get(udata, ucmd.buf_addr, buf->size,
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(*umem))
return PTR_ERR(*umem);
if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE))
buf->hr_mtt.mtt_type = MTT_TYPE_CQE;
mtt->mtt_type = MTT_TYPE_CQE;
else
buf->hr_mtt.mtt_type = MTT_TYPE_WQE;
mtt->mtt_type = MTT_TYPE_WQE;
if (hr_dev->caps.cqe_buf_pg_sz) {
npages = (ib_umem_page_count(*umem) +
(1 << hr_dev->caps.cqe_buf_pg_sz) - 1) /
(1 << hr_dev->caps.cqe_buf_pg_sz);
page_shift = PAGE_SHIFT + hr_dev->caps.cqe_buf_pg_sz;
ret = hns_roce_mtt_init(hr_dev, npages, page_shift,
&buf->hr_mtt);
} else {
ret = hns_roce_mtt_init(hr_dev, ib_umem_page_count(*umem),
PAGE_SHIFT, &buf->hr_mtt);
}
npages = DIV_ROUND_UP(ib_umem_page_count(*umem),
1 << hr_dev->caps.cqe_buf_pg_sz);
ret = hns_roce_mtt_init(hr_dev, npages, buf->page_shift, mtt);
if (ret)
goto err_buf;
ret = hns_roce_ib_umem_write_mtt(hr_dev, &buf->hr_mtt, *umem);
ret = hns_roce_ib_umem_write_mtt(hr_dev, mtt, *umem);
if (ret)
goto err_mtt;
return 0;
err_mtt:
hns_roce_mtt_cleanup(hr_dev, &buf->hr_mtt);
hns_roce_mtt_cleanup(hr_dev, mtt);
err_buf:
ib_umem_release(*umem);
return ret;
}
static int hns_roce_ib_alloc_cq_buf(struct hns_roce_dev *hr_dev,
struct hns_roce_cq_buf *buf, u32 nent)
static int alloc_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
{
struct hns_roce_buf *buf = &hr_cq->buf;
struct hns_roce_mtt *mtt = &hr_cq->mtt;
int ret;
u32 page_shift = PAGE_SHIFT + hr_dev->caps.cqe_buf_pg_sz;
ret = hns_roce_buf_alloc(hr_dev, nent * hr_dev->caps.cq_entry_sz,
(1 << page_shift) * 2, &buf->hr_buf,
page_shift);
ret = hns_roce_buf_alloc(hr_dev, buf->size, (1 << buf->page_shift) * 2,
buf, buf->page_shift);
if (ret)
goto out;
if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE))
buf->hr_mtt.mtt_type = MTT_TYPE_CQE;
mtt->mtt_type = MTT_TYPE_CQE;
else
buf->hr_mtt.mtt_type = MTT_TYPE_WQE;
mtt->mtt_type = MTT_TYPE_WQE;
ret = hns_roce_mtt_init(hr_dev, buf->hr_buf.npages,
buf->hr_buf.page_shift, &buf->hr_mtt);
ret = hns_roce_mtt_init(hr_dev, buf->npages, buf->page_shift, mtt);
if (ret)
goto err_buf;
ret = hns_roce_buf_write_mtt(hr_dev, &buf->hr_mtt, &buf->hr_buf);
ret = hns_roce_buf_write_mtt(hr_dev, mtt, buf);
if (ret)
goto err_mtt;
return 0;
err_mtt:
hns_roce_mtt_cleanup(hr_dev, &buf->hr_mtt);
hns_roce_mtt_cleanup(hr_dev, mtt);
err_buf:
hns_roce_buf_free(hr_dev, nent * hr_dev->caps.cq_entry_sz,
&buf->hr_buf);
hns_roce_buf_free(hr_dev, buf->size, buf);
out:
return ret;
}
static void hns_roce_ib_free_cq_buf(struct hns_roce_dev *hr_dev,
struct hns_roce_cq_buf *buf, int cqe)
static void free_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
{
hns_roce_buf_free(hr_dev, (cqe + 1) * hr_dev->caps.cq_entry_sz,
&buf->hr_buf);
hns_roce_buf_free(hr_dev, hr_cq->buf.size, &hr_cq->buf);
}
static int create_user_cq(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq,
struct ib_udata *udata,
struct hns_roce_ib_create_cq_resp *resp,
int cq_entries)
struct hns_roce_ib_create_cq_resp *resp)
{
struct hns_roce_ib_create_cq ucmd;
struct device *dev = hr_dev->dev;
@ -314,9 +251,7 @@ static int create_user_cq(struct hns_roce_dev *hr_dev,
}
/* Get user space address, write it into mtt table */
ret = hns_roce_ib_get_cq_umem(hr_dev, udata, &hr_cq->hr_buf,
&hr_cq->umem, ucmd.buf_addr,
cq_entries);
ret = get_cq_umem(hr_dev, hr_cq, ucmd, udata);
if (ret) {
dev_err(dev, "Failed to get_cq_umem.\n");
return ret;
@ -337,17 +272,16 @@ static int create_user_cq(struct hns_roce_dev *hr_dev,
return 0;
err_mtt:
hns_roce_mtt_cleanup(hr_dev, &hr_cq->hr_buf.hr_mtt);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->mtt);
ib_umem_release(hr_cq->umem);
return ret;
}
static int create_kernel_cq(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq, int cq_entries)
struct hns_roce_cq *hr_cq)
{
struct device *dev = hr_dev->dev;
struct hns_roce_uar *uar;
int ret;
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB) {
@ -361,15 +295,14 @@ static int create_kernel_cq(struct hns_roce_dev *hr_dev,
}
/* Init mtt table and write buff address to mtt table */
ret = hns_roce_ib_alloc_cq_buf(hr_dev, &hr_cq->hr_buf, cq_entries);
ret = alloc_cq_buf(hr_dev, hr_cq);
if (ret) {
dev_err(dev, "Failed to alloc_cq_buf.\n");
goto err_db;
}
uar = &hr_dev->priv_uar;
hr_cq->cq_db_l = hr_dev->reg_base + hr_dev->odb_offset +
DB_REG_OFFSET * uar->index;
DB_REG_OFFSET * hr_dev->priv_uar.index;
return 0;
@ -392,64 +325,69 @@ static void destroy_user_cq(struct hns_roce_dev *hr_dev,
(udata->outlen >= sizeof(*resp)))
hns_roce_db_unmap_user(context, &hr_cq->db);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->hr_buf.hr_mtt);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->mtt);
ib_umem_release(hr_cq->umem);
}
static void destroy_kernel_cq(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq)
{
hns_roce_mtt_cleanup(hr_dev, &hr_cq->hr_buf.hr_mtt);
hns_roce_ib_free_cq_buf(hr_dev, &hr_cq->hr_buf, hr_cq->ib_cq.cqe);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->mtt);
free_cq_buf(hr_dev, hr_cq);
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB)
hns_roce_free_db(hr_dev, &hr_cq->db);
}
int hns_roce_ib_create_cq(struct ib_cq *ib_cq,
const struct ib_cq_init_attr *attr,
struct ib_udata *udata)
int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
struct ib_udata *udata)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ib_cq->device);
struct device *dev = hr_dev->dev;
struct hns_roce_ib_create_cq_resp resp = {};
struct hns_roce_cq *hr_cq = to_hr_cq(ib_cq);
struct device *dev = hr_dev->dev;
int vector = attr->comp_vector;
int cq_entries = attr->cqe;
u32 cq_entries = attr->cqe;
int ret;
if (cq_entries < 1 || cq_entries > hr_dev->caps.max_cqes) {
dev_err(dev, "Creat CQ failed. entries=%d, max=%d\n",
dev_err(dev, "Create CQ failed. entries=%d, max=%d\n",
cq_entries, hr_dev->caps.max_cqes);
return -EINVAL;
}
if (hr_dev->caps.min_cqes)
cq_entries = max(cq_entries, hr_dev->caps.min_cqes);
if (vector >= hr_dev->caps.num_comp_vectors) {
dev_err(dev, "Create CQ failed, vector=%d, max=%d\n",
vector, hr_dev->caps.num_comp_vectors);
return -EINVAL;
}
cq_entries = roundup_pow_of_two((unsigned int)cq_entries);
hr_cq->ib_cq.cqe = cq_entries - 1;
cq_entries = max(cq_entries, hr_dev->caps.min_cqes);
cq_entries = roundup_pow_of_two(cq_entries);
hr_cq->ib_cq.cqe = cq_entries - 1; /* used as cqe index */
hr_cq->cq_depth = cq_entries;
hr_cq->vector = vector;
hr_cq->buf.size = hr_cq->cq_depth * hr_dev->caps.cq_entry_sz;
hr_cq->buf.page_shift = PAGE_SHIFT + hr_dev->caps.cqe_buf_pg_sz;
spin_lock_init(&hr_cq->lock);
if (udata) {
ret = create_user_cq(hr_dev, hr_cq, udata, &resp, cq_entries);
ret = create_user_cq(hr_dev, hr_cq, udata, &resp);
if (ret) {
dev_err(dev, "Create cq failed in user mode!\n");
goto err_cq;
}
} else {
ret = create_kernel_cq(hr_dev, hr_cq, cq_entries);
ret = create_kernel_cq(hr_dev, hr_cq);
if (ret) {
dev_err(dev, "Create cq failed in kernel mode!\n");
goto err_cq;
}
}
/* Allocate cq index, fill cq_context */
ret = hns_roce_cq_alloc(hr_dev, cq_entries, &hr_cq->hr_buf.hr_mtt,
hr_cq, vector);
ret = hns_roce_alloc_cqc(hr_dev, hr_cq);
if (ret) {
dev_err(dev, "Creat CQ .Failed to cq_alloc.\n");
dev_err(dev, "Alloc CQ failed(%d).\n", ret);
goto err_dbmap;
}
@ -462,11 +400,6 @@ int hns_roce_ib_create_cq(struct ib_cq *ib_cq,
if (!udata && hr_cq->tptr_addr)
*hr_cq->tptr_addr = 0;
/* Get created cq handler and carry out event */
hr_cq->comp = hns_roce_ib_cq_comp;
hr_cq->event = hns_roce_ib_cq_event;
hr_cq->cq_depth = cq_entries;
if (udata) {
resp.cqn = hr_cq->cqn;
ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
@ -477,7 +410,7 @@ int hns_roce_ib_create_cq(struct ib_cq *ib_cq,
return 0;
err_cqc:
hns_roce_free_cq(hr_dev, hr_cq);
hns_roce_free_cqc(hr_dev, hr_cq);
err_dbmap:
if (udata)
@ -489,7 +422,7 @@ err_cq:
return ret;
}
void hns_roce_ib_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
void hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ib_cq->device);
struct hns_roce_cq *hr_cq = to_hr_cq(ib_cq);
@ -499,8 +432,8 @@ void hns_roce_ib_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
return;
}
hns_roce_free_cq(hr_dev, hr_cq);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->hr_buf.hr_mtt);
hns_roce_free_cqc(hr_dev, hr_cq);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->mtt);
ib_umem_release(hr_cq->umem);
if (udata) {
@ -512,7 +445,7 @@ void hns_roce_ib_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
&hr_cq->db);
} else {
/* Free the buff of stored cq */
hns_roce_ib_free_cq_buf(hr_dev, &hr_cq->hr_buf, ib_cq->cqe);
free_cq_buf(hr_dev, hr_cq);
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB)
hns_roce_free_db(hr_dev, &hr_cq->db);
}
@ -520,38 +453,57 @@ void hns_roce_ib_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn)
{
struct device *dev = hr_dev->dev;
struct hns_roce_cq *cq;
struct hns_roce_cq *hr_cq;
struct ib_cq *ibcq;
cq = xa_load(&hr_dev->cq_table.array, cqn & (hr_dev->caps.num_cqs - 1));
if (!cq) {
dev_warn(dev, "Completion event for bogus CQ 0x%08x\n", cqn);
hr_cq = xa_load(&hr_dev->cq_table.array,
cqn & (hr_dev->caps.num_cqs - 1));
if (!hr_cq) {
dev_warn(hr_dev->dev, "Completion event for bogus CQ 0x%06x\n",
cqn);
return;
}
++cq->arm_sn;
cq->comp(cq);
++hr_cq->arm_sn;
ibcq = &hr_cq->ib_cq;
if (ibcq->comp_handler)
ibcq->comp_handler(ibcq, ibcq->cq_context);
}
void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
{
struct hns_roce_cq_table *cq_table = &hr_dev->cq_table;
struct device *dev = hr_dev->dev;
struct hns_roce_cq *cq;
struct hns_roce_cq *hr_cq;
struct ib_event event;
struct ib_cq *ibcq;
cq = xa_load(&cq_table->array, cqn & (hr_dev->caps.num_cqs - 1));
if (cq)
atomic_inc(&cq->refcount);
if (!cq) {
dev_warn(dev, "Async event for bogus CQ %08x\n", cqn);
hr_cq = xa_load(&hr_dev->cq_table.array,
cqn & (hr_dev->caps.num_cqs - 1));
if (!hr_cq) {
dev_warn(dev, "Async event for bogus CQ 0x%06x\n", cqn);
return;
}
cq->event(cq, (enum hns_roce_event)event_type);
if (event_type != HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID &&
event_type != HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR &&
event_type != HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW) {
dev_err(dev, "Unexpected event type 0x%x on CQ 0x%06x\n",
event_type, cqn);
return;
}
if (atomic_dec_and_test(&cq->refcount))
complete(&cq->free);
atomic_inc(&hr_cq->refcount);
ibcq = &hr_cq->ib_cq;
if (ibcq->event_handler) {
event.device = ibcq->device;
event.element.cq = ibcq;
event.event = IB_EVENT_CQ_ERR;
ibcq->event_handler(&event, ibcq->cq_context);
}
if (atomic_dec_and_test(&hr_cq->refcount))
complete(&hr_cq->free);
}
int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev)

@ -31,7 +31,7 @@ int hns_roce_db_map_user(struct hns_roce_ucontext *context,
refcount_set(&page->refcount, 1);
page->user_virt = page_addr;
page->umem = ib_umem_get(udata, page_addr, PAGE_SIZE, 0, 0);
page->umem = ib_umem_get(udata, page_addr, PAGE_SIZE, 0);
if (IS_ERR(page->umem)) {
ret = PTR_ERR(page->umem);
kfree(page);
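The one-line umem change above repeats across most drivers in this pull: ib_umem_get() drops its trailing flag (the dmasync argument in earlier kernels), leaving udata, address, length and access flags. A minimal sketch of the resulting call pattern, assuming the four-argument prototype used throughout this series; the my_drv_* name is hypothetical:

/* Sketch only, not part of the diff. */
#include <rdma/ib_umem.h>

static struct ib_umem *my_drv_pin_user_buf(struct ib_udata *udata, u64 addr,
					    u64 len)
{
	/* old: ib_umem_get(udata, addr, len, IB_ACCESS_LOCAL_WRITE, 0); */
	return ib_umem_get(udata, addr, len, IB_ACCESS_LOCAL_WRITE);
}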


@ -45,7 +45,7 @@
#define HNS_ROCE_MAX_MSG_LEN 0x80000000
#define HNS_ROCE_ALOGN_UP(a, b) ((((a) + (b) - 1) / (b)) * (b))
#define HNS_ROCE_ALIGN_UP(a, b) ((((a) + (b) - 1) / (b)) * (b))
#define HNS_ROCE_IB_MIN_SQ_STRIDE 6
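The rename above only fixes the ALOGN spelling; the round-up arithmetic is unchanged. A quick worked example of the macro:

/* HNS_ROCE_ALIGN_UP(a, b) = ((a + b - 1) / b) * b, so for instance:      */
/*   HNS_ROCE_ALIGN_UP(100, 64)    -> 128                                 */
/*   HNS_ROCE_ALIGN_UP(4096, 4096) -> 4096 (already-aligned values kept)  */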
@ -53,8 +53,6 @@
#define BA_BYTE_LEN 8
#define BITS_PER_BYTE 8
/* Hardware specification only for v1 engine */
#define HNS_ROCE_MIN_CQE_NUM 0x40
#define HNS_ROCE_MIN_WQE_NUM 0x20
@ -426,7 +424,6 @@ struct hns_roce_wq {
u64 *wrid; /* Work request ID */
spinlock_t lock;
int wqe_cnt; /* WQE num */
u32 max_post;
int max_gs;
int offset;
int wqe_shift; /* WQE size */
@ -451,6 +448,7 @@ struct hns_roce_buf {
struct hns_roce_buf_list *page_list;
int nbufs;
u32 npages;
u32 size;
int page_shift;
};
@ -482,22 +480,14 @@ struct hns_roce_db {
int order;
};
struct hns_roce_cq_buf {
struct hns_roce_buf hr_buf;
struct hns_roce_mtt hr_mtt;
};
struct hns_roce_cq {
struct ib_cq ib_cq;
struct hns_roce_cq_buf hr_buf;
struct hns_roce_buf buf;
struct hns_roce_mtt mtt;
struct hns_roce_db db;
u8 db_en;
spinlock_t lock;
struct ib_umem *umem;
void (*comp)(struct hns_roce_cq *cq);
void (*event)(struct hns_roce_cq *cq, enum hns_roce_event event_type);
struct hns_roce_uar *uar;
u32 cq_depth;
u32 cons_index;
u32 *set_ci_db;
@ -521,9 +511,8 @@ struct hns_roce_idx_que {
struct hns_roce_srq {
struct ib_srq ibsrq;
void (*event)(struct hns_roce_srq *srq, enum hns_roce_event event);
unsigned long srqn;
int max;
u32 wqe_cnt;
int max_gs;
int wqe_shift;
void __iomem *db_reg_l;
@ -539,8 +528,8 @@ struct hns_roce_srq {
spinlock_t lock;
int head;
int tail;
u16 wqe_ctr;
struct mutex mutex;
void (*event)(struct hns_roce_srq *srq, enum hns_roce_event event);
};
struct hns_roce_uar_table {
@ -582,7 +571,7 @@ struct hns_roce_av {
u8 tclass;
u8 dgid[HNS_ROCE_GID_SIZE];
u8 mac[ETH_ALEN];
u16 vlan;
u16 vlan_id;
bool vlan_en;
};
@ -695,10 +684,6 @@ struct hns_roce_qp {
struct hns_roce_rinl_buf rq_inl_buf;
};
struct hns_roce_sqp {
struct hns_roce_qp hr_qp;
};
struct hns_roce_ib_iboe {
spinlock_t lock;
struct net_device *netdevs[HNS_ROCE_MAX_PORTS];
@ -821,8 +806,8 @@ struct hns_roce_caps {
int max_qp_init_rdma;
int max_qp_dest_rdma;
int num_cqs;
int max_cqes;
int min_cqes;
u32 max_cqes;
u32 min_cqes;
u32 min_wqes;
int reserved_cqs;
int reserved_srqs;
@ -953,7 +938,7 @@ struct hns_roce_hw {
int (*mw_write_mtpt)(void *mb_buf, struct hns_roce_mw *mw);
void (*write_cqc)(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq, void *mb_buf, u64 *mtts,
dma_addr_t dma_handle, int nent, u32 vector);
dma_addr_t dma_handle);
int (*set_hem)(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_table *table, int obj, int step_idx);
int (*clear_hem)(struct hns_roce_dev *hr_dev,
@ -1092,11 +1077,6 @@ static inline struct hns_roce_srq *to_hr_srq(struct ib_srq *ibsrq)
return container_of(ibsrq, struct hns_roce_srq, ibsrq);
}
static inline struct hns_roce_sqp *hr_to_hr_sqp(struct hns_roce_qp *hr_qp)
{
return container_of(hr_qp, struct hns_roce_sqp, hr_qp);
}
static inline void hns_roce_write64_k(__le32 val[2], void __iomem *dest)
{
__raw_writeq(*(u64 *) val, dest);
@ -1198,9 +1178,9 @@ struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
unsigned int *sg_offset);
int hns_roce_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
int hns_roce_hw2sw_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index);
int hns_roce_hw_destroy_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index);
unsigned long key_to_hw_index(u32 key);
struct ib_mw *hns_roce_alloc_mw(struct ib_pd *pd, enum ib_mw_type,
@ -1257,12 +1237,11 @@ void hns_roce_release_range_qp(struct hns_roce_dev *hr_dev, int base_qpn,
__be32 send_ieth(const struct ib_send_wr *wr);
int to_hr_qp_type(int qp_type);
int hns_roce_ib_create_cq(struct ib_cq *ib_cq,
const struct ib_cq_init_attr *attr,
struct ib_udata *udata);
int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
struct ib_udata *udata);
void hns_roce_ib_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq);
void hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
void hns_roce_free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq);
int hns_roce_db_map_user(struct hns_roce_ucontext *context,
struct ib_udata *udata, unsigned long virt,


@ -732,7 +732,7 @@ static int hns_roce_v1_rsv_lp_qp(struct hns_roce_dev *hr_dev)
if (!cq)
return -ENOMEM;
ret = hns_roce_ib_create_cq(cq, &cq_init_attr, NULL);
ret = hns_roce_create_cq(cq, &cq_init_attr, NULL);
if (ret) {
dev_err(dev, "Create cq for reserved loop qp failed!");
goto alloc_cq_failed;
@ -868,7 +868,7 @@ alloc_pd_failed:
kfree(pd);
alloc_mem_failed:
hns_roce_ib_destroy_cq(cq, NULL);
hns_roce_destroy_cq(cq, NULL);
alloc_cq_failed:
kfree(cq);
return ret;
@ -897,7 +897,7 @@ static void hns_roce_v1_release_lp_qp(struct hns_roce_dev *hr_dev)
i, ret);
}
hns_roce_ib_destroy_cq(&free_mr->mr_free_cq->ib_cq, NULL);
hns_roce_destroy_cq(&free_mr->mr_free_cq->ib_cq, NULL);
kfree(&free_mr->mr_free_cq->ib_cq);
hns_roce_dealloc_pd(&free_mr->mr_free_pd->ibpd, NULL);
kfree(&free_mr->mr_free_pd->ibpd);
@ -1114,9 +1114,10 @@ static int hns_roce_v1_dereg_mr(struct hns_roce_dev *hr_dev,
free_mr = &priv->free_mr;
if (mr->enabled) {
if (hns_roce_hw2sw_mpt(hr_dev, NULL, key_to_hw_index(mr->key)
& (hr_dev->caps.num_mtpts - 1)))
dev_warn(dev, "HW2SW_MPT failed!\n");
if (hns_roce_hw_destroy_mpt(hr_dev, NULL,
key_to_hw_index(mr->key) &
(hr_dev->caps.num_mtpts - 1)))
dev_warn(dev, "DESTROY_MPT failed!\n");
}
mr_work = kzalloc(sizeof(*mr_work), GFP_KERNEL);
@ -1979,8 +1980,7 @@ static int hns_roce_v1_write_mtpt(void *mb_buf, struct hns_roce_mr *mr,
static void *get_cqe(struct hns_roce_cq *hr_cq, int n)
{
return hns_roce_buf_offset(&hr_cq->hr_buf.hr_buf,
n * HNS_ROCE_V1_CQE_ENTRY_SIZE);
return hns_roce_buf_offset(&hr_cq->buf, n * HNS_ROCE_V1_CQE_ENTRY_SIZE);
}
static void *get_sw_cqe(struct hns_roce_cq *hr_cq, int n)
@ -1989,7 +1989,7 @@ static void *get_sw_cqe(struct hns_roce_cq *hr_cq, int n)
/* Get cqe when Owner bit is Conversely with the MSB of cons_idx */
return (roce_get_bit(hr_cqe->cqe_byte_4, CQE_BYTE_4_OWNER_S) ^
!!(n & (hr_cq->ib_cq.cqe + 1))) ? hr_cqe : NULL;
!!(n & hr_cq->cq_depth)) ? hr_cqe : NULL;
}
static struct hns_roce_cqe *next_cqe_sw(struct hns_roce_cq *hr_cq)
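The switch from ib_cq.cqe + 1 to cq_depth in the owner-bit test above (and in its v2 counterpart later in this diff) should be behaviour-neutral: the CQE count is a power of two and ib_cq.cqe is kept as cq_depth - 1, so both expressions isolate the same bit of n. A sketch of the test under that assumption:

/* Sketch only, not part of the diff: true when the CQE at index n is a
 * valid software-visible CQE, assuming cq_depth is a power of two.
 */
static bool sketch_cqe_is_sw(u32 owner_bit, u32 n, u32 cq_depth)
{
	/* the tested bit flips each time the index wraps the ring */
	return (owner_bit ^ !!(n & cq_depth)) != 0;
}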
@ -2072,8 +2072,7 @@ static void hns_roce_v1_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn,
static void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq, void *mb_buf,
u64 *mtts, dma_addr_t dma_handle, int nent,
u32 vector)
u64 *mtts, dma_addr_t dma_handle)
{
struct hns_roce_cq_context *cq_context = NULL;
struct hns_roce_buf_list *tptr_buf;
@ -2108,9 +2107,9 @@ static void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
roce_set_field(cq_context->cqc_byte_12,
CQ_CONTEXT_CQC_BYTE_12_CQ_CQE_SHIFT_M,
CQ_CONTEXT_CQC_BYTE_12_CQ_CQE_SHIFT_S,
ilog2((unsigned int)nent));
ilog2(hr_cq->cq_depth));
roce_set_field(cq_context->cqc_byte_12, CQ_CONTEXT_CQC_BYTE_12_CEQN_M,
CQ_CONTEXT_CQC_BYTE_12_CEQN_S, vector);
CQ_CONTEXT_CQC_BYTE_12_CEQN_S, hr_cq->vector);
cq_context->cur_cqe_ba0_l = cpu_to_le32((u32)(mtts[0]));
@ -3644,10 +3643,7 @@ int hns_roce_v1_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
hns_roce_buf_free(hr_dev, hr_qp->buff_size, &hr_qp->hr_buf);
}
if (hr_qp->ibqp.qp_type == IB_QPT_RC)
kfree(hr_qp);
else
kfree(hr_to_hr_sqp(hr_qp));
kfree(hr_qp);
return 0;
}
@ -3658,10 +3654,9 @@ static void hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
struct device *dev = &hr_dev->pdev->dev;
u32 cqe_cnt_ori;
u32 cqe_cnt_cur;
u32 cq_buf_size;
int wait_time = 0;
hns_roce_free_cq(hr_dev, hr_cq);
hns_roce_free_cqc(hr_dev, hr_cq);
/*
* Before freeing cq buffer, we need to ensure that the outstanding CQE
@ -3686,13 +3681,12 @@ static void hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
wait_time++;
}
hns_roce_mtt_cleanup(hr_dev, &hr_cq->hr_buf.hr_mtt);
hns_roce_mtt_cleanup(hr_dev, &hr_cq->mtt);
ib_umem_release(hr_cq->umem);
if (!udata) {
/* Free the buff of stored cq */
cq_buf_size = (ibcq->cqe + 1) * hr_dev->caps.cq_entry_sz;
hns_roce_buf_free(hr_dev, cq_buf_size, &hr_cq->hr_buf.hr_buf);
hns_roce_buf_free(hr_dev, hr_cq->buf.size, &hr_cq->buf);
}
}


@ -389,7 +389,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
roce_set_field(ud_sq_wqe->byte_36,
V2_UD_SEND_WQE_BYTE_36_VLAN_M,
V2_UD_SEND_WQE_BYTE_36_VLAN_S,
le16_to_cpu(ah->av.vlan));
ah->av.vlan_id);
roce_set_field(ud_sq_wqe->byte_36,
V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_M,
V2_UD_SEND_WQE_BYTE_36_HOPLIMIT_S,
@ -2447,8 +2447,7 @@ static int hns_roce_v2_mw_write_mtpt(void *mb_buf, struct hns_roce_mw *mw)
static void *get_cqe_v2(struct hns_roce_cq *hr_cq, int n)
{
return hns_roce_buf_offset(&hr_cq->hr_buf.hr_buf,
n * HNS_ROCE_V2_CQE_ENTRY_SIZE);
return hns_roce_buf_offset(&hr_cq->buf, n * HNS_ROCE_V2_CQE_ENTRY_SIZE);
}
static void *get_sw_cqe_v2(struct hns_roce_cq *hr_cq, int n)
@ -2457,7 +2456,7 @@ static void *get_sw_cqe_v2(struct hns_roce_cq *hr_cq, int n)
/* Get cqe when Owner bit is Conversely with the MSB of cons_idx */
return (roce_get_bit(cqe->byte_4, V2_CQE_BYTE_4_OWNER_S) ^
!!(n & (hr_cq->ib_cq.cqe + 1))) ? cqe : NULL;
!!(n & hr_cq->cq_depth)) ? cqe : NULL;
}
static struct hns_roce_v2_cqe *next_cqe_sw_v2(struct hns_roce_cq *hr_cq)
@ -2550,8 +2549,7 @@ static void hns_roce_v2_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn,
static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
struct hns_roce_cq *hr_cq, void *mb_buf,
u64 *mtts, dma_addr_t dma_handle, int nent,
u32 vector)
u64 *mtts, dma_addr_t dma_handle)
{
struct hns_roce_v2_cq_context *cq_context;
@ -2563,9 +2561,10 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_ARM_ST_M,
V2_CQC_BYTE_4_ARM_ST_S, REG_NXT_CEQE);
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_SHIFT_M,
V2_CQC_BYTE_4_SHIFT_S, ilog2((unsigned int)nent));
V2_CQC_BYTE_4_SHIFT_S,
ilog2(hr_cq->cq_depth));
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_CEQN_M,
V2_CQC_BYTE_4_CEQN_S, vector);
V2_CQC_BYTE_4_CEQN_S, hr_cq->vector);
roce_set_field(cq_context->byte_8_cqn, V2_CQC_BYTE_8_CQN_M,
V2_CQC_BYTE_8_CQN_S, hr_cq->cqn);
@ -4061,8 +4060,8 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
const struct ib_gid_attr *gid_attr = NULL;
int is_roce_protocol;
u16 vlan_id = 0xffff;
bool is_udp = false;
u16 vlan = 0xffff;
u8 ib_port;
u8 hr_port;
int ret;
@ -4074,7 +4073,7 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
if (is_roce_protocol) {
gid_attr = attr->ah_attr.grh.sgid_attr;
ret = rdma_read_gid_l2_fields(gid_attr, &vlan, NULL);
ret = rdma_read_gid_l2_fields(gid_attr, &vlan_id, NULL);
if (ret)
return ret;
@ -4083,7 +4082,7 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
IB_GID_TYPE_ROCE_UDP_ENCAP);
}
if (vlan < VLAN_CFI_MASK) {
if (vlan_id < VLAN_N_VID) {
roce_set_bit(context->byte_76_srqn_op_en,
V2_QPC_BYTE_76_RQ_VLAN_EN_S, 1);
roce_set_bit(qpc_mask->byte_76_srqn_op_en,
@ -4095,7 +4094,7 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
}
roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_VLAN_ID_M,
V2_QPC_BYTE_24_VLAN_ID_S, vlan);
V2_QPC_BYTE_24_VLAN_ID_S, vlan_id);
roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_VLAN_ID_M,
V2_QPC_BYTE_24_VLAN_ID_S, 0);
@ -4650,16 +4649,14 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
{
struct hns_roce_cq *send_cq, *recv_cq;
struct ib_device *ibdev = &hr_dev->ib_dev;
int ret;
int ret = 0;
if (hr_qp->ibqp.qp_type == IB_QPT_RC && hr_qp->state != IB_QPS_RESET) {
/* Modify qp to reset before destroying qp */
ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
hr_qp->state, IB_QPS_RESET);
if (ret) {
if (ret)
ibdev_err(ibdev, "modify QP to Reset failed.\n");
return ret;
}
}
send_cq = to_hr_cq(hr_qp->ibqp.send_cq);
@ -4715,7 +4712,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
kfree(hr_qp->rq_inl_buf.wqe_list);
}
return 0;
return ret;
}
static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
@ -4725,16 +4722,11 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
int ret;
ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
if (ret) {
if (ret)
ibdev_err(&hr_dev->ib_dev, "Destroy qp 0x%06lx failed(%d)\n",
hr_qp->qpn, ret);
return ret;
}
if (hr_qp->ibqp.qp_type == IB_QPT_GSI)
kfree(hr_to_hr_sqp(hr_qp));
else
kfree(hr_qp);
kfree(hr_qp);
return 0;
}
@ -4951,10 +4943,7 @@ static void hns_roce_v2_init_irq_work(struct hns_roce_dev *hr_dev,
static void set_eq_cons_index_v2(struct hns_roce_eq *eq)
{
struct hns_roce_dev *hr_dev = eq->hr_dev;
__le32 doorbell[2];
doorbell[0] = 0;
doorbell[1] = 0;
__le32 doorbell[2] = {};
if (eq->type_flag == HNS_ROCE_AEQ) {
roce_set_field(doorbell[0], HNS_ROCE_V2_EQ_DB_CMD_M,
@ -6047,7 +6036,7 @@ static void hns_roce_v2_write_srqc(struct hns_roce_dev *hr_dev,
hr_dev->caps.srqwqe_hop_num));
roce_set_field(srq_context->byte_4_srqn_srqst,
SRQC_BYTE_4_SRQ_SHIFT_M, SRQC_BYTE_4_SRQ_SHIFT_S,
ilog2(srq->max));
ilog2(srq->wqe_cnt));
roce_set_field(srq_context->byte_4_srqn_srqst, SRQC_BYTE_4_SRQN_M,
SRQC_BYTE_4_SRQN_S, srq->srqn);
@ -6092,11 +6081,11 @@ static void hns_roce_v2_write_srqc(struct hns_roce_dev *hr_dev,
roce_set_field(srq_context->byte_44_idxbufpgsz_addr,
SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_M,
SRQC_BYTE_44_SRQ_IDX_BA_PG_SZ_S,
hr_dev->caps.idx_ba_pg_sz);
hr_dev->caps.idx_ba_pg_sz + PG_SHIFT_OFFSET);
roce_set_field(srq_context->byte_44_idxbufpgsz_addr,
SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_M,
SRQC_BYTE_44_SRQ_IDX_BUF_PG_SZ_S,
hr_dev->caps.idx_buf_pg_sz);
hr_dev->caps.idx_buf_pg_sz + PG_SHIFT_OFFSET);
srq_context->idx_nxt_blk_addr =
cpu_to_le32(mtts_idx[1] >> PAGE_ADDR_SHIFT);
@ -6133,7 +6122,7 @@ static int hns_roce_v2_modify_srq(struct ib_srq *ibsrq,
int ret;
if (srq_attr_mask & IB_SRQ_LIMIT) {
if (srq_attr->srq_limit >= srq->max)
if (srq_attr->srq_limit >= srq->wqe_cnt)
return -EINVAL;
mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
@ -6193,7 +6182,7 @@ static int hns_roce_v2_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr)
SRQC_BYTE_8_SRQ_LIMIT_WL_S);
attr->srq_limit = limit_wl;
attr->max_wr = srq->max - 1;
attr->max_wr = srq->wqe_cnt - 1;
attr->max_sge = srq->max_gs;
memcpy(srq_context, mailbox->buf, sizeof(*srq_context));
@ -6246,7 +6235,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
spin_lock_irqsave(&srq->lock, flags);
ind = srq->head & (srq->max - 1);
ind = srq->head & (srq->wqe_cnt - 1);
for (nreq = 0; wr; ++nreq, wr = wr->next) {
if (unlikely(wr->num_sge > srq->max_gs)) {
@ -6261,7 +6250,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
break;
}
wqe_idx = find_empty_entry(&srq->idx_que, srq->max);
wqe_idx = find_empty_entry(&srq->idx_que, srq->wqe_cnt);
if (wqe_idx < 0) {
ret = -ENOMEM;
*bad_wr = wr;
@ -6285,7 +6274,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
}
srq->wrid[wqe_idx] = wr->wr_id;
ind = (ind + 1) & (srq->max - 1);
ind = (ind + 1) & (srq->wqe_cnt - 1);
}
if (likely(nreq)) {
@ -6380,12 +6369,14 @@ static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = {
MODULE_DEVICE_TABLE(pci, hns_roce_hw_v2_pci_tbl);
static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
static void hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
struct hnae3_handle *handle)
{
struct hns_roce_v2_priv *priv = hr_dev->priv;
int i;
hr_dev->pci_dev = handle->pdev;
hr_dev->dev = &handle->pdev->dev;
hr_dev->hw = &hns_roce_hw_v2;
hr_dev->dfx = &hns_roce_dfx_hw_v2;
hr_dev->sdb_offset = ROCEE_DB_SQ_L_0_REG;
@ -6410,8 +6401,6 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
hr_dev->reset_cnt = handle->ae_algo->ops->ae_dev_reset_cnt(handle);
priv->handle = handle;
return 0;
}
static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
@ -6429,14 +6418,7 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
goto error_failed_kzalloc;
}
hr_dev->pci_dev = handle->pdev;
hr_dev->dev = &handle->pdev->dev;
ret = hns_roce_hw_v2_get_cfg(hr_dev, handle);
if (ret) {
dev_err(hr_dev->dev, "Get Configuration failed!\n");
goto error_failed_get_cfg;
}
hns_roce_hw_v2_get_cfg(hr_dev, handle);
ret = hns_roce_init(hr_dev);
if (ret) {


@ -87,8 +87,8 @@
#define HNS_ROCE_V2_MTT_ENTRY_SZ 64
#define HNS_ROCE_V2_CQE_ENTRY_SIZE 32
#define HNS_ROCE_V2_SCCC_ENTRY_SZ 32
#define HNS_ROCE_V2_QPC_TIMER_ENTRY_SZ 4096
#define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ 4096
#define HNS_ROCE_V2_QPC_TIMER_ENTRY_SZ PAGE_SIZE
#define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ PAGE_SIZE
#define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED 0xFFFFF000
#define HNS_ROCE_V2_MAX_INNER_MTPT_NUM 2
#define HNS_ROCE_INVALID_LKEY 0x100


@ -111,7 +111,7 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
netdev = hr_dev->iboe.netdevs[port];
if (!netdev) {
dev_err(dev, "port(%d) can't find netdev\n", port);
dev_err(dev, "Can't find netdev on port(%u)!\n", port);
return -ENODEV;
}
@ -253,7 +253,7 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u8 port_num,
net_dev = hr_dev->iboe.netdevs[port];
if (!net_dev) {
spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
dev_err(dev, "find netdev %d failed!\r\n", port);
dev_err(dev, "Find netdev %u failed!\n", port);
return -EINVAL;
}
@ -301,12 +301,6 @@ static int hns_roce_modify_device(struct ib_device *ib_dev, int mask,
return 0;
}
static int hns_roce_modify_port(struct ib_device *ib_dev, u8 port_num, int mask,
struct ib_port_modify *props)
{
return 0;
}
static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
struct ib_udata *udata)
{
@ -359,7 +353,8 @@ static int hns_roce_mmap(struct ib_ucontext *context,
return rdma_user_mmap_io(context, vma,
to_hr_ucontext(context)->uar.pfn,
PAGE_SIZE,
pgprot_noncached(vma->vm_page_prot));
pgprot_noncached(vma->vm_page_prot),
NULL);
/* vm_pgoff: 1 -- TPTR */
case 1:
@ -372,7 +367,8 @@ static int hns_roce_mmap(struct ib_ucontext *context,
return rdma_user_mmap_io(context, vma,
hr_dev->tptr_dma_addr >> PAGE_SHIFT,
hr_dev->tptr_size,
vma->vm_page_prot);
vma->vm_page_prot,
NULL);
default:
return -EINVAL;
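The extra NULL argument in both mmap cases above comes from the mmap-lifetime tracking added this cycle: rdma_user_mmap_io() now takes an optional struct rdma_user_mmap_entry so the core can tie a mapping's lifetime to a HW object. A minimal sketch of the updated call, assuming the 5.5 prototype; my_drv_mmap_uar is hypothetical and passing NULL keeps the old untracked behaviour:

/* Sketch only, not part of the diff. */
static int my_drv_mmap_uar(struct ib_ucontext *uctx,
			   struct vm_area_struct *vma, unsigned long uar_pfn)
{
	return rdma_user_mmap_io(uctx, vma, uar_pfn, PAGE_SIZE,
				 pgprot_noncached(vma->vm_page_prot),
				 NULL /* no rdma_user_mmap_entry to track */);
}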
@ -423,14 +419,14 @@ static const struct ib_device_ops hns_roce_dev_ops = {
.alloc_pd = hns_roce_alloc_pd,
.alloc_ucontext = hns_roce_alloc_ucontext,
.create_ah = hns_roce_create_ah,
.create_cq = hns_roce_ib_create_cq,
.create_cq = hns_roce_create_cq,
.create_qp = hns_roce_create_qp,
.dealloc_pd = hns_roce_dealloc_pd,
.dealloc_ucontext = hns_roce_dealloc_ucontext,
.del_gid = hns_roce_del_gid,
.dereg_mr = hns_roce_dereg_mr,
.destroy_ah = hns_roce_destroy_ah,
.destroy_cq = hns_roce_ib_destroy_cq,
.destroy_cq = hns_roce_destroy_cq,
.disassociate_ucontext = hns_roce_disassociate_ucontext,
.fill_res_entry = hns_roce_fill_res_entry,
.get_dma_mr = hns_roce_get_dma_mr,
@ -438,7 +434,6 @@ static const struct ib_device_ops hns_roce_dev_ops = {
.get_port_immutable = hns_roce_port_immutable,
.mmap = hns_roce_mmap,
.modify_device = hns_roce_modify_device,
.modify_port = hns_roce_modify_port,
.modify_qp = hns_roce_modify_qp,
.query_ah = hns_roce_query_ah,
.query_device = hns_roce_query_device,


@ -48,21 +48,21 @@ unsigned long key_to_hw_index(u32 key)
return (key << 24) | (key >> 8);
}
static int hns_roce_sw2hw_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index)
static int hns_roce_hw_create_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index)
{
return hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, mpt_index, 0,
HNS_ROCE_CMD_SW2HW_MPT,
HNS_ROCE_CMD_CREATE_MPT,
HNS_ROCE_CMD_TIMEOUT_MSECS);
}
int hns_roce_hw2sw_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index)
int hns_roce_hw_destroy_mpt(struct hns_roce_dev *hr_dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long mpt_index)
{
return hns_roce_cmd_mbox(hr_dev, 0, mailbox ? mailbox->dma : 0,
mpt_index, !mailbox, HNS_ROCE_CMD_HW2SW_MPT,
mpt_index, !mailbox, HNS_ROCE_CMD_DESTROY_MPT,
HNS_ROCE_CMD_TIMEOUT_MSECS);
}
@ -83,7 +83,7 @@ static int hns_roce_buddy_alloc(struct hns_roce_buddy *buddy, int order,
}
}
spin_unlock(&buddy->lock);
return -1;
return -EINVAL;
found:
clear_bit(*seg, buddy->bits[o]);
@ -206,13 +206,14 @@ static int hns_roce_alloc_mtt_range(struct hns_roce_dev *hr_dev, int order,
}
ret = hns_roce_buddy_alloc(buddy, order, seg);
if (ret == -1)
return -1;
if (ret)
return ret;
if (hns_roce_table_get_range(hr_dev, table, *seg,
*seg + (1 << order) - 1)) {
ret = hns_roce_table_get_range(hr_dev, table, *seg,
*seg + (1 << order) - 1);
if (ret) {
hns_roce_buddy_free(buddy, *seg, order);
return -1;
return ret;
}
return 0;
@ -578,7 +579,7 @@ static int hns_roce_mr_alloc(struct hns_roce_dev *hr_dev, u32 pd, u64 iova,
/* Allocate a key for mr from mr_table */
ret = hns_roce_bitmap_alloc(&hr_dev->mr_table.mtpt_bitmap, &index);
if (ret == -1)
if (ret)
return -ENOMEM;
mr->iova = iova; /* MR va starting addr */
@ -707,10 +708,11 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev,
int ret;
if (mr->enabled) {
ret = hns_roce_hw2sw_mpt(hr_dev, NULL, key_to_hw_index(mr->key)
& (hr_dev->caps.num_mtpts - 1));
ret = hns_roce_hw_destroy_mpt(hr_dev, NULL,
key_to_hw_index(mr->key) &
(hr_dev->caps.num_mtpts - 1));
if (ret)
dev_warn(dev, "HW2SW_MPT failed (%d)\n", ret);
dev_warn(dev, "DESTROY_MPT failed (%d)\n", ret);
}
if (mr->size != ~0ULL) {
@ -763,10 +765,10 @@ static int hns_roce_mr_enable(struct hns_roce_dev *hr_dev,
goto err_page;
}
ret = hns_roce_sw2hw_mpt(hr_dev, mailbox,
mtpt_idx & (hr_dev->caps.num_mtpts - 1));
ret = hns_roce_hw_create_mpt(hr_dev, mailbox,
mtpt_idx & (hr_dev->caps.num_mtpts - 1));
if (ret) {
dev_err(dev, "SW2HW_MPT failed (%d)\n", ret);
dev_err(dev, "CREATE_MPT failed (%d)\n", ret);
goto err_page;
}
@ -1143,7 +1145,7 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (!mr)
return ERR_PTR(-ENOMEM);
mr->umem = ib_umem_get(udata, start, length, access_flags, 0);
mr->umem = ib_umem_get(udata, start, length, access_flags);
if (IS_ERR(mr->umem)) {
ret = PTR_ERR(mr->umem);
goto err_free;
@ -1228,7 +1230,7 @@ static int rereg_mr_trans(struct ib_mr *ibmr, int flags,
}
ib_umem_release(mr->umem);
mr->umem = ib_umem_get(udata, start, length, mr_access_flags, 0);
mr->umem = ib_umem_get(udata, start, length, mr_access_flags);
if (IS_ERR(mr->umem)) {
ret = PTR_ERR(mr->umem);
mr->umem = NULL;
@ -1308,9 +1310,9 @@ int hns_roce_rereg_user_mr(struct ib_mr *ibmr, int flags, u64 start, u64 length,
if (ret)
goto free_cmd_mbox;
ret = hns_roce_hw2sw_mpt(hr_dev, NULL, mtpt_idx);
ret = hns_roce_hw_destroy_mpt(hr_dev, NULL, mtpt_idx);
if (ret)
dev_warn(dev, "HW2SW_MPT failed (%d)\n", ret);
dev_warn(dev, "DESTROY_MPT failed (%d)\n", ret);
mr->enabled = 0;
@ -1332,9 +1334,9 @@ int hns_roce_rereg_user_mr(struct ib_mr *ibmr, int flags, u64 start, u64 length,
goto free_cmd_mbox;
}
ret = hns_roce_sw2hw_mpt(hr_dev, mailbox, mtpt_idx);
ret = hns_roce_hw_create_mpt(hr_dev, mailbox, mtpt_idx);
if (ret) {
dev_err(dev, "SW2HW_MPT failed (%d)\n", ret);
dev_err(dev, "CREATE_MPT failed (%d)\n", ret);
ib_umem_release(mr->umem);
goto free_cmd_mbox;
}
@ -1448,10 +1450,11 @@ static void hns_roce_mw_free(struct hns_roce_dev *hr_dev,
int ret;
if (mw->enabled) {
ret = hns_roce_hw2sw_mpt(hr_dev, NULL, key_to_hw_index(mw->rkey)
& (hr_dev->caps.num_mtpts - 1));
ret = hns_roce_hw_destroy_mpt(hr_dev, NULL,
key_to_hw_index(mw->rkey) &
(hr_dev->caps.num_mtpts - 1));
if (ret)
dev_warn(dev, "MW HW2SW_MPT failed (%d)\n", ret);
dev_warn(dev, "MW DESTROY_MPT failed (%d)\n", ret);
hns_roce_table_put(hr_dev, &hr_dev->mr_table.mtpt_table,
key_to_hw_index(mw->rkey));
@ -1487,10 +1490,10 @@ static int hns_roce_mw_enable(struct hns_roce_dev *hr_dev,
goto err_page;
}
ret = hns_roce_sw2hw_mpt(hr_dev, mailbox,
mtpt_idx & (hr_dev->caps.num_mtpts - 1));
ret = hns_roce_hw_create_mpt(hr_dev, mailbox,
mtpt_idx & (hr_dev->caps.num_mtpts - 1));
if (ret) {
dev_err(dev, "MW sw2hw_mpt failed (%d)\n", ret);
dev_err(dev, "MW CREATE_MPT failed (%d)\n", ret);
goto err_page;
}


@ -96,7 +96,7 @@ int hns_roce_uar_alloc(struct hns_roce_dev *hr_dev, struct hns_roce_uar *uar)
/* Using bitmap to manager UAR index */
ret = hns_roce_bitmap_alloc(&hr_dev->uar_table.bitmap, &uar->logic_idx);
if (ret == -1)
if (ret)
return -ENOMEM;
if (uar->logic_idx > 0 && hr_dev->caps.phy_num_uars > 1)


@ -318,7 +318,7 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
* hr_qp->rq.max_gs);
}
cap->max_recv_wr = hr_qp->rq.max_post = hr_qp->rq.wqe_cnt;
cap->max_recv_wr = hr_qp->rq.wqe_cnt;
cap->max_recv_sge = hr_qp->rq.max_gs;
return 0;
@ -332,9 +332,8 @@ static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
u8 max_sq_stride = ilog2(roundup_sq_stride);
/* Sanity check SQ size before proceeding */
if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes ||
ucmd->log_sq_stride > max_sq_stride ||
ucmd->log_sq_stride < HNS_ROCE_IB_MIN_SQ_STRIDE) {
if (ucmd->log_sq_stride > max_sq_stride ||
ucmd->log_sq_stride < HNS_ROCE_IB_MIN_SQ_STRIDE) {
ibdev_err(&hr_dev->ib_dev, "check SQ size error!\n");
return -EINVAL;
}
@ -358,13 +357,16 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
u32 max_cnt;
int ret;
if (check_shl_overflow(1, ucmd->log_sq_bb_count, &hr_qp->sq.wqe_cnt) ||
hr_qp->sq.wqe_cnt > hr_dev->caps.max_wqes)
return -EINVAL;
ret = check_sq_size_with_integrity(hr_dev, cap, ucmd);
if (ret) {
ibdev_err(&hr_dev->ib_dev, "Sanity check sq size failed\n");
return ret;
}
hr_qp->sq.wqe_cnt = 1 << ucmd->log_sq_bb_count;
hr_qp->sq.wqe_shift = ucmd->log_sq_stride;
max_cnt = max(1U, cap->max_send_sge);
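The added guard relies on check_shl_overflow() from linux/overflow.h: it stores 1 << log_sq_bb_count into hr_qp->sq.wqe_cnt and returns true if the shift overflowed, so oversized user values are rejected before the max_wqes comparison. Roughly equivalent open-coded logic, as a sketch:

/* Sketch of the semantics only, not the macro from linux/overflow.h. */
static bool sketch_shl_overflows(u32 a, u32 shift, u32 *res)
{
	if (shift >= 32)
		return true;		/* shift count itself out of range */
	*res = a << shift;
	return (*res >> shift) != a;	/* high bits were lost */
}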
@ -391,37 +393,37 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
/* Get buf size, SQ and RQ are aligned to page_szie */
if (hr_dev->caps.max_sq_sg <= 2) {
hr_qp->buff_size = HNS_ROCE_ALOGN_UP((hr_qp->rq.wqe_cnt <<
hr_qp->buff_size = HNS_ROCE_ALIGN_UP((hr_qp->rq.wqe_cnt <<
hr_qp->rq.wqe_shift), PAGE_SIZE) +
HNS_ROCE_ALOGN_UP((hr_qp->sq.wqe_cnt <<
HNS_ROCE_ALIGN_UP((hr_qp->sq.wqe_cnt <<
hr_qp->sq.wqe_shift), PAGE_SIZE);
hr_qp->sq.offset = 0;
hr_qp->rq.offset = HNS_ROCE_ALOGN_UP((hr_qp->sq.wqe_cnt <<
hr_qp->rq.offset = HNS_ROCE_ALIGN_UP((hr_qp->sq.wqe_cnt <<
hr_qp->sq.wqe_shift), PAGE_SIZE);
} else {
page_size = 1 << (hr_dev->caps.mtt_buf_pg_sz + PAGE_SHIFT);
hr_qp->sge.sge_cnt = ex_sge_num ?
max(page_size / (1 << hr_qp->sge.sge_shift), ex_sge_num) : 0;
hr_qp->buff_size = HNS_ROCE_ALOGN_UP((hr_qp->rq.wqe_cnt <<
hr_qp->buff_size = HNS_ROCE_ALIGN_UP((hr_qp->rq.wqe_cnt <<
hr_qp->rq.wqe_shift), page_size) +
HNS_ROCE_ALOGN_UP((hr_qp->sge.sge_cnt <<
HNS_ROCE_ALIGN_UP((hr_qp->sge.sge_cnt <<
hr_qp->sge.sge_shift), page_size) +
HNS_ROCE_ALOGN_UP((hr_qp->sq.wqe_cnt <<
HNS_ROCE_ALIGN_UP((hr_qp->sq.wqe_cnt <<
hr_qp->sq.wqe_shift), page_size);
hr_qp->sq.offset = 0;
if (ex_sge_num) {
hr_qp->sge.offset = HNS_ROCE_ALOGN_UP(
hr_qp->sge.offset = HNS_ROCE_ALIGN_UP(
(hr_qp->sq.wqe_cnt <<
hr_qp->sq.wqe_shift),
page_size);
hr_qp->rq.offset = hr_qp->sge.offset +
HNS_ROCE_ALOGN_UP((hr_qp->sge.sge_cnt <<
HNS_ROCE_ALIGN_UP((hr_qp->sge.sge_cnt <<
hr_qp->sge.sge_shift),
page_size);
} else {
hr_qp->rq.offset = HNS_ROCE_ALOGN_UP(
hr_qp->rq.offset = HNS_ROCE_ALIGN_UP(
(hr_qp->sq.wqe_cnt <<
hr_qp->sq.wqe_shift),
page_size);
@ -591,24 +593,24 @@ static int hns_roce_set_kernel_sq_size(struct hns_roce_dev *hr_dev,
/* Get buf size, SQ and RQ are aligned to PAGE_SIZE */
page_size = 1 << (hr_dev->caps.mtt_buf_pg_sz + PAGE_SHIFT);
hr_qp->sq.offset = 0;
size = HNS_ROCE_ALOGN_UP(hr_qp->sq.wqe_cnt << hr_qp->sq.wqe_shift,
size = HNS_ROCE_ALIGN_UP(hr_qp->sq.wqe_cnt << hr_qp->sq.wqe_shift,
page_size);
if (hr_dev->caps.max_sq_sg > 2 && hr_qp->sge.sge_cnt) {
hr_qp->sge.sge_cnt = max(page_size/(1 << hr_qp->sge.sge_shift),
(u32)hr_qp->sge.sge_cnt);
hr_qp->sge.offset = size;
size += HNS_ROCE_ALOGN_UP(hr_qp->sge.sge_cnt <<
size += HNS_ROCE_ALIGN_UP(hr_qp->sge.sge_cnt <<
hr_qp->sge.sge_shift, page_size);
}
hr_qp->rq.offset = size;
size += HNS_ROCE_ALOGN_UP((hr_qp->rq.wqe_cnt << hr_qp->rq.wqe_shift),
size += HNS_ROCE_ALIGN_UP((hr_qp->rq.wqe_cnt << hr_qp->rq.wqe_shift),
page_size);
hr_qp->buff_size = size;
/* Get wr and sge number which send */
cap->max_send_wr = hr_qp->sq.max_post = hr_qp->sq.wqe_cnt;
cap->max_send_wr = hr_qp->sq.wqe_cnt;
cap->max_send_sge = hr_qp->sq.max_gs;
/* We don't support inline sends for kernel QPs (yet) */
@ -743,7 +745,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
}
hr_qp->umem = ib_umem_get(udata, ucmd.buf_addr,
hr_qp->buff_size, 0, 0);
hr_qp->buff_size, 0);
if (IS_ERR(hr_qp->umem)) {
dev_err(dev, "ib_umem_get error for create qp\n");
ret = PTR_ERR(hr_qp->umem);
@ -1017,7 +1019,6 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
{
struct hns_roce_dev *hr_dev = to_hr_dev(pd->device);
struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_sqp *hr_sqp;
struct hns_roce_qp *hr_qp;
int ret;
@ -1030,7 +1031,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata, 0,
hr_qp);
if (ret) {
ibdev_err(ibdev, "Create RC QP 0x%06lx failed(%d)\n",
ibdev_err(ibdev, "Create QP 0x%06lx failed(%d)\n",
hr_qp->qpn, ret);
kfree(hr_qp);
return ERR_PTR(ret);
@ -1047,11 +1048,10 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
return ERR_PTR(-EINVAL);
}
hr_sqp = kzalloc(sizeof(*hr_sqp), GFP_KERNEL);
if (!hr_sqp)
hr_qp = kzalloc(sizeof(*hr_qp), GFP_KERNEL);
if (!hr_qp)
return ERR_PTR(-ENOMEM);
hr_qp = &hr_sqp->hr_qp;
hr_qp->port = init_attr->port_num - 1;
hr_qp->phy_port = hr_dev->iboe.phy_port[hr_qp->port];
@ -1066,7 +1066,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
hr_qp->ibqp.qp_num, hr_qp);
if (ret) {
ibdev_err(ibdev, "Create GSI QP failed!\n");
kfree(hr_sqp);
kfree(hr_qp);
return ERR_PTR(ret);
}
@ -1289,7 +1289,7 @@ bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, int nreq,
u32 cur;
cur = hr_wq->head - hr_wq->tail;
if (likely(cur + nreq < hr_wq->max_post))
if (likely(cur + nreq < hr_wq->wqe_cnt))
return false;
hr_cq = to_hr_cq(ib_cq);
@ -1297,7 +1297,7 @@ bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, int nreq,
cur = hr_wq->head - hr_wq->tail;
spin_unlock(&hr_cq->lock);
return cur + nreq >= hr_wq->max_post;
return cur + nreq >= hr_wq->wqe_cnt;
}
int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
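Dropping max_post works because, as the earlier hunks in this file show, it was only ever assigned the same value as wqe_cnt, so the occupancy test in hns_roce_wq_overflow() is unchanged in effect. As a sketch of that test, assuming head and tail are free-running u32 counters:

/* Sketch only, not part of the diff. */
static bool sketch_wq_full(u32 head, u32 tail, u32 nreq, u32 wqe_cnt)
{
	u32 cur = head - tail;		/* outstanding WQEs; u32 wrap is benign */

	return cur + nreq >= wqe_cnt;	/* no room left for nreq more */
}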


@ -98,11 +98,15 @@ static int hns_roce_fill_res_cq_entry(struct sk_buff *msg,
goto err;
table_attr = nla_nest_start(msg, RDMA_NLDEV_ATTR_DRIVER);
if (!table_attr)
if (!table_attr) {
ret = -EMSGSIZE;
goto err;
}
if (hns_roce_fill_cq(msg, context))
if (hns_roce_fill_cq(msg, context)) {
ret = -EMSGSIZE;
goto err_cancel_table;
}
nla_nest_end(msg, table_attr);
kfree(context);
@ -113,7 +117,7 @@ err_cancel_table:
nla_nest_cancel(msg, table_attr);
err:
kfree(context);
return -EMSGSIZE;
return ret;
}
int hns_roce_fill_res_entry(struct sk_buff *msg,


@ -59,21 +59,21 @@ static void hns_roce_ib_srq_event(struct hns_roce_srq *srq,
}
}
static int hns_roce_sw2hw_srq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long srq_num)
static int hns_roce_hw_create_srq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long srq_num)
{
return hns_roce_cmd_mbox(dev, mailbox->dma, 0, srq_num, 0,
HNS_ROCE_CMD_SW2HW_SRQ,
HNS_ROCE_CMD_CREATE_SRQ,
HNS_ROCE_CMD_TIMEOUT_MSECS);
}
static int hns_roce_hw2sw_srq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long srq_num)
static int hns_roce_hw_destroy_srq(struct hns_roce_dev *dev,
struct hns_roce_cmd_mailbox *mailbox,
unsigned long srq_num)
{
return hns_roce_cmd_mbox(dev, 0, mailbox ? mailbox->dma : 0, srq_num,
mailbox ? 0 : 1, HNS_ROCE_CMD_HW2SW_SRQ,
mailbox ? 0 : 1, HNS_ROCE_CMD_DESTROY_SRQ,
HNS_ROCE_CMD_TIMEOUT_MSECS);
}
@ -95,8 +95,7 @@ static int hns_roce_srq_alloc(struct hns_roce_dev *hr_dev, u32 pdn, u32 cqn,
srq->mtt.first_seg,
&dma_handle_wqe);
if (!mtts_wqe) {
dev_err(hr_dev->dev,
"SRQ alloc.Failed to find srq buf addr.\n");
dev_err(hr_dev->dev, "Failed to find mtt for srq buf.\n");
return -EINVAL;
}
@ -106,13 +105,14 @@ static int hns_roce_srq_alloc(struct hns_roce_dev *hr_dev, u32 pdn, u32 cqn,
&dma_handle_idx);
if (!mtts_idx) {
dev_err(hr_dev->dev,
"SRQ alloc.Failed to find idx que buf addr.\n");
"Failed to find mtt for srq idx queue buf.\n");
return -EINVAL;
}
ret = hns_roce_bitmap_alloc(&srq_table->bitmap, &srq->srqn);
if (ret == -1) {
dev_err(hr_dev->dev, "SRQ alloc.Failed to alloc index.\n");
if (ret) {
dev_err(hr_dev->dev,
"Failed to alloc a bit from srq bitmap.\n");
return -ENOMEM;
}
@ -134,7 +134,7 @@ static int hns_roce_srq_alloc(struct hns_roce_dev *hr_dev, u32 pdn, u32 cqn,
mtts_wqe, mtts_idx, dma_handle_wqe,
dma_handle_idx);
ret = hns_roce_sw2hw_srq(hr_dev, mailbox, srq->srqn);
ret = hns_roce_hw_create_srq(hr_dev, mailbox, srq->srqn);
hns_roce_free_cmd_mailbox(hr_dev, mailbox);
if (ret)
goto err_xa;
@ -160,9 +160,9 @@ static void hns_roce_srq_free(struct hns_roce_dev *hr_dev,
struct hns_roce_srq_table *srq_table = &hr_dev->srq_table;
int ret;
ret = hns_roce_hw2sw_srq(hr_dev, NULL, srq->srqn);
ret = hns_roce_hw_destroy_srq(hr_dev, NULL, srq->srqn);
if (ret)
dev_err(hr_dev->dev, "HW2SW_SRQ failed (%d) for CQN %06lx\n",
dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
ret, srq->srqn);
xa_erase(&srq_table->xa, srq->srqn);
@ -180,22 +180,23 @@ static int create_user_srq(struct hns_roce_srq *srq, struct ib_udata *udata,
{
struct hns_roce_dev *hr_dev = to_hr_dev(srq->ibsrq.device);
struct hns_roce_ib_create_srq ucmd;
u32 page_shift;
u32 npages;
struct hns_roce_buf *buf;
int ret;
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
return -EFAULT;
srq->umem = ib_umem_get(udata, ucmd.buf_addr, srq_buf_size, 0, 0);
srq->umem = ib_umem_get(udata, ucmd.buf_addr, srq_buf_size, 0);
if (IS_ERR(srq->umem))
return PTR_ERR(srq->umem);
npages = (ib_umem_page_count(srq->umem) +
(1 << hr_dev->caps.srqwqe_buf_pg_sz) - 1) /
(1 << hr_dev->caps.srqwqe_buf_pg_sz);
page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz;
ret = hns_roce_mtt_init(hr_dev, npages, page_shift, &srq->mtt);
buf = &srq->buf;
buf->npages = (ib_umem_page_count(srq->umem) +
(1 << hr_dev->caps.srqwqe_buf_pg_sz) - 1) /
(1 << hr_dev->caps.srqwqe_buf_pg_sz);
buf->page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz;
ret = hns_roce_mtt_init(hr_dev, buf->npages, buf->page_shift,
&srq->mtt);
if (ret)
goto err_user_buf;
@ -205,16 +206,19 @@ static int create_user_srq(struct hns_roce_srq *srq, struct ib_udata *udata,
/* config index queue BA */
srq->idx_que.umem = ib_umem_get(udata, ucmd.que_addr,
srq->idx_que.buf_size, 0, 0);
srq->idx_que.buf_size, 0);
if (IS_ERR(srq->idx_que.umem)) {
dev_err(hr_dev->dev, "ib_umem_get error for index queue\n");
ret = PTR_ERR(srq->idx_que.umem);
goto err_user_srq_mtt;
}
ret = hns_roce_mtt_init(hr_dev, ib_umem_page_count(srq->idx_que.umem),
PAGE_SHIFT, &srq->idx_que.mtt);
buf = &srq->idx_que.idx_buf;
buf->npages = DIV_ROUND_UP(ib_umem_page_count(srq->idx_que.umem),
1 << hr_dev->caps.idx_buf_pg_sz);
buf->page_shift = PAGE_SHIFT + hr_dev->caps.idx_buf_pg_sz;
ret = hns_roce_mtt_init(hr_dev, buf->npages, buf->page_shift,
&srq->idx_que.mtt);
if (ret) {
dev_err(hr_dev->dev, "hns_roce_mtt_init error for idx que\n");
goto err_user_idx_mtt;
@ -251,7 +255,7 @@ static int hns_roce_create_idx_que(struct ib_pd *pd, struct hns_roce_srq *srq,
struct hns_roce_dev *hr_dev = to_hr_dev(pd->device);
struct hns_roce_idx_que *idx_que = &srq->idx_que;
idx_que->bitmap = bitmap_zalloc(srq->max, GFP_KERNEL);
idx_que->bitmap = bitmap_zalloc(srq->wqe_cnt, GFP_KERNEL);
if (!idx_que->bitmap)
return -ENOMEM;
@ -277,7 +281,7 @@ static int create_kernel_srq(struct hns_roce_srq *srq, int srq_buf_size)
return -ENOMEM;
srq->head = 0;
srq->tail = srq->max - 1;
srq->tail = srq->wqe_cnt - 1;
ret = hns_roce_mtt_init(hr_dev, srq->buf.npages, srq->buf.page_shift,
&srq->mtt);
@ -308,7 +312,7 @@ static int create_kernel_srq(struct hns_roce_srq *srq, int srq_buf_size)
if (ret)
goto err_kernel_idx_buf;
srq->wrid = kvmalloc_array(srq->max, sizeof(u64), GFP_KERNEL);
srq->wrid = kvmalloc_array(srq->wqe_cnt, sizeof(u64), GFP_KERNEL);
if (!srq->wrid) {
ret = -ENOMEM;
goto err_kernel_idx_buf;
@ -354,7 +358,7 @@ static void destroy_kernel_srq(struct hns_roce_dev *hr_dev,
}
int hns_roce_create_srq(struct ib_srq *ib_srq,
struct ib_srq_init_attr *srq_init_attr,
struct ib_srq_init_attr *init_attr,
struct ib_udata *udata)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ib_srq->device);
@ -366,24 +370,24 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
u32 cqn;
/* Check the actual SRQ wqe and SRQ sge num */
if (srq_init_attr->attr.max_wr >= hr_dev->caps.max_srq_wrs ||
srq_init_attr->attr.max_sge > hr_dev->caps.max_srq_sges)
if (init_attr->attr.max_wr >= hr_dev->caps.max_srq_wrs ||
init_attr->attr.max_sge > hr_dev->caps.max_srq_sges)
return -EINVAL;
mutex_init(&srq->mutex);
spin_lock_init(&srq->lock);
srq->max = roundup_pow_of_two(srq_init_attr->attr.max_wr + 1);
srq->max_gs = srq_init_attr->attr.max_sge;
srq->wqe_cnt = roundup_pow_of_two(init_attr->attr.max_wr + 1);
srq->max_gs = init_attr->attr.max_sge;
srq_desc_size = roundup_pow_of_two(max(16, 16 * srq->max_gs));
srq->wqe_shift = ilog2(srq_desc_size);
srq_buf_size = srq->max * srq_desc_size;
srq_buf_size = srq->wqe_cnt * srq_desc_size;
srq->idx_que.entry_sz = HNS_ROCE_IDX_QUE_ENTRY_SZ;
srq->idx_que.buf_size = srq->max * srq->idx_que.entry_sz;
srq->idx_que.buf_size = srq->wqe_cnt * srq->idx_que.entry_sz;
srq->mtt.mtt_type = MTT_TYPE_SRQWQE;
srq->idx_que.mtt.mtt_type = MTT_TYPE_IDX;
@ -401,8 +405,8 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
}
}
cqn = ib_srq_has_cq(srq_init_attr->srq_type) ?
to_hr_cq(srq_init_attr->ext.cq)->cqn : 0;
cqn = ib_srq_has_cq(init_attr->srq_type) ?
to_hr_cq(init_attr->ext.cq)->cqn : 0;
srq->db_reg_l = hr_dev->reg_base + SRQ_DB_REG;
@ -449,7 +453,7 @@ void hns_roce_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
hns_roce_mtt_cleanup(hr_dev, &srq->idx_que.mtt);
} else {
kvfree(srq->wrid);
hns_roce_buf_free(hr_dev, srq->max << srq->wqe_shift,
hns_roce_buf_free(hr_dev, srq->wqe_cnt << srq->wqe_shift,
&srq->buf);
}
ib_umem_release(srq->idx_que.umem);

View File

@ -2079,9 +2079,9 @@ static int i40iw_addr_resolve_neigh_ipv6(struct i40iw_device *iwdev,
dst = i40iw_get_dst_ipv6(&src_addr, &dst_addr);
if (!dst || dst->error) {
if (dst) {
dst_release(dst);
i40iw_pr_err("ip6_route_output returned dst->error = %d\n",
dst->error);
dst_release(dst);
}
return rc;
}


@ -1763,7 +1763,7 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd,
if (length > I40IW_MAX_MR_SIZE)
return ERR_PTR(-EINVAL);
region = ib_umem_get(udata, start, length, acc, 0);
region = ib_umem_get(udata, start, length, acc);
if (IS_ERR(region))
return (struct ib_mr *)region;


@ -145,7 +145,7 @@ static int mlx4_ib_get_cq_umem(struct mlx4_ib_dev *dev, struct ib_udata *udata,
int n;
*umem = ib_umem_get(udata, buf_addr, cqe * cqe_size,
IB_ACCESS_LOCAL_WRITE, 1);
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(*umem))
return PTR_ERR(*umem);


@ -64,7 +64,7 @@ int mlx4_ib_db_map_user(struct ib_udata *udata, unsigned long virt,
page->user_virt = (virt & PAGE_MASK);
page->refcnt = 0;
page->umem = ib_umem_get(udata, virt & PAGE_MASK, PAGE_SIZE, 0, 0);
page->umem = ib_umem_get(udata, virt & PAGE_MASK, PAGE_SIZE, 0);
if (IS_ERR(page->umem)) {
err = PTR_ERR(page->umem);
kfree(page);


@ -966,7 +966,6 @@ static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
}
mutex_unlock(&dev->counters_table[port_num - 1].mutex);
if (stats_avail) {
memset(out_mad->data, 0, sizeof out_mad->data);
switch (counter_stats.counter_mode & 0xf) {
case 0:
edit_counter(&counter_stats,
@ -984,38 +983,31 @@ static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad_hdr *in, size_t in_mad_size,
struct ib_mad_hdr *out, size_t *out_mad_size,
u16 *out_mad_pkey_index)
const struct ib_mad *in, struct ib_mad *out,
size_t *out_mad_size, u16 *out_mad_pkey_index)
{
struct mlx4_ib_dev *dev = to_mdev(ibdev);
const struct ib_mad *in_mad = (const struct ib_mad *)in;
struct ib_mad *out_mad = (struct ib_mad *)out;
enum rdma_link_layer link = rdma_port_get_link_layer(ibdev, port_num);
if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) ||
*out_mad_size != sizeof(*out_mad)))
return IB_MAD_RESULT_FAILURE;
/* iboe_process_mad() which uses the HCA flow-counters to implement IB PMA
* queries, should be called only by VFs and for that specific purpose
*/
if (link == IB_LINK_LAYER_INFINIBAND) {
if (mlx4_is_slave(dev->dev) &&
(in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
(in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS ||
in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS_EXT ||
in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)))
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
(in->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
(in->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS ||
in->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS_EXT ||
in->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)))
return iboe_process_mad(ibdev, mad_flags, port_num,
in_wc, in_grh, in, out);
return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
return ib_process_mad(ibdev, mad_flags, port_num, in_wc, in_grh,
in, out);
}
if (link == IB_LINK_LAYER_ETHERNET)
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
in_grh, in, out);
return -EINVAL;
}

View File

@ -256,6 +256,8 @@ static int mlx4_ib_add_gid(const struct ib_gid_attr *attr, void **context)
int hw_update = 0;
int i;
struct gid_entry *gids = NULL;
u16 vlan_id = 0xffff;
u8 mac[ETH_ALEN];
if (!rdma_cap_roce_gid_table(attr->device, attr->port_num))
return -EINVAL;
@ -266,12 +268,16 @@ static int mlx4_ib_add_gid(const struct ib_gid_attr *attr, void **context)
if (!context)
return -EINVAL;
ret = rdma_read_gid_l2_fields(attr, &vlan_id, &mac[0]);
if (ret)
return ret;
port_gid_table = &iboe->gids[attr->port_num - 1];
spin_lock_bh(&iboe->lock);
for (i = 0; i < MLX4_MAX_PORT_GIDS; ++i) {
if (!memcmp(&port_gid_table->gids[i].gid,
&attr->gid, sizeof(attr->gid)) &&
port_gid_table->gids[i].gid_type == attr->gid_type) {
port_gid_table->gids[i].gid_type == attr->gid_type &&
port_gid_table->gids[i].vlan_id == vlan_id) {
found = i;
break;
}
@ -291,6 +297,7 @@ static int mlx4_ib_add_gid(const struct ib_gid_attr *attr, void **context)
memcpy(&port_gid_table->gids[free].gid,
&attr->gid, sizeof(attr->gid));
port_gid_table->gids[free].gid_type = attr->gid_type;
port_gid_table->gids[free].vlan_id = vlan_id;
port_gid_table->gids[free].ctx->real_index = free;
port_gid_table->gids[free].ctx->refcount = 1;
hw_update = 1;
@ -1146,7 +1153,8 @@ static int mlx4_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
return rdma_user_mmap_io(context, vma,
to_mucontext(context)->uar.pfn,
PAGE_SIZE,
pgprot_noncached(vma->vm_page_prot));
pgprot_noncached(vma->vm_page_prot),
NULL);
case 1:
if (dev->dev->caps.bf_reg_size == 0)
@ -1155,7 +1163,8 @@ static int mlx4_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
context, vma,
to_mucontext(context)->uar.pfn +
dev->dev->caps.num_uars,
PAGE_SIZE, pgprot_writecombine(vma->vm_page_prot));
PAGE_SIZE, pgprot_writecombine(vma->vm_page_prot),
NULL);
case 3: {
struct mlx4_clock_params params;
@ -1171,7 +1180,8 @@ static int mlx4_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
params.bar) +
params.offset) >>
PAGE_SHIFT,
PAGE_SIZE, pgprot_noncached(vma->vm_page_prot));
PAGE_SIZE, pgprot_noncached(vma->vm_page_prot),
NULL);
}
default:


@ -508,6 +508,7 @@ struct gid_entry {
union ib_gid gid;
enum ib_gid_type gid_type;
struct gid_cache_context *ctx;
u16 vlan_id;
};
struct mlx4_port_gid_table {
@ -786,11 +787,10 @@ int mlx4_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
int mlx4_MAD_IFC(struct mlx4_ib_dev *dev, int mad_ifc_flags,
int port, const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const void *in_mad, void *response_mad);
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
int mlx4_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad_hdr *in, size_t in_mad_size,
struct ib_mad_hdr *out, size_t *out_mad_size,
u16 *out_mad_pkey_index);
const struct ib_mad *in, struct ib_mad *out,
size_t *out_mad_size, u16 *out_mad_pkey_index);
int mlx4_ib_mad_init(struct mlx4_ib_dev *dev);
void mlx4_ib_mad_cleanup(struct mlx4_ib_dev *dev);


@ -398,7 +398,7 @@ static struct ib_umem *mlx4_get_umem_mr(struct ib_udata *udata, u64 start,
up_read(&current->mm->mmap_sem);
}
return ib_umem_get(udata, start, length, access_flags, 0);
return ib_umem_get(udata, start, length, access_flags);
}
struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,


@ -916,7 +916,7 @@ static int create_rq(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
qp->buf_size = (qp->rq.wqe_cnt << qp->rq.wqe_shift) +
(qp->sq.wqe_cnt << qp->sq.wqe_shift);
qp->umem = ib_umem_get(udata, wq.buf_addr, qp->buf_size, 0, 0);
qp->umem = ib_umem_get(udata, wq.buf_addr, qp->buf_size, 0);
if (IS_ERR(qp->umem)) {
err = PTR_ERR(qp->umem);
goto err;
@ -1110,8 +1110,7 @@ static int create_qp_common(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
if (err)
goto err;
qp->umem =
ib_umem_get(udata, ucmd.buf_addr, qp->buf_size, 0, 0);
qp->umem = ib_umem_get(udata, ucmd.buf_addr, qp->buf_size, 0);
if (IS_ERR(qp->umem)) {
err = PTR_ERR(qp->umem);
goto err;


@ -110,7 +110,7 @@ int mlx4_ib_create_srq(struct ib_srq *ib_srq,
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
return -EFAULT;
srq->umem = ib_umem_get(udata, ucmd.buf_addr, buf_size, 0, 0);
srq->umem = ib_umem_get(udata, ucmd.buf_addr, buf_size, 0);
if (IS_ERR(srq->umem))
return PTR_ERR(srq->umem);


@ -3,7 +3,7 @@ obj-$(CONFIG_MLX5_INFINIBAND) += mlx5_ib.o
mlx5_ib-y := main.o cq.o doorbell.o qp.o mem.o srq_cmd.o \
srq.o mr.o ah.o mad.o gsi.o ib_virt.o cmd.o \
cong.o
cong.o restrack.o
mlx5_ib-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += odp.o
mlx5_ib-$(CONFIG_MLX5_ESWITCH) += ib_rep.o
mlx5_ib-$(CONFIG_INFINIBAND_USER_ACCESS) += devx.o


@ -423,9 +423,6 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
struct mlx5_cqe64 *cqe64;
struct mlx5_core_qp *mqp;
struct mlx5_ib_wq *wq;
struct mlx5_sig_err_cqe *sig_err_cqe;
struct mlx5_core_mkey *mmkey;
struct mlx5_ib_mr *mr;
uint8_t opcode;
uint32_t qpn;
u16 wqe_ctr;
@ -519,27 +516,29 @@ repoll:
}
}
break;
case MLX5_CQE_SIG_ERR:
sig_err_cqe = (struct mlx5_sig_err_cqe *)cqe64;
case MLX5_CQE_SIG_ERR: {
struct mlx5_sig_err_cqe *sig_err_cqe =
(struct mlx5_sig_err_cqe *)cqe64;
struct mlx5_core_sig_ctx *sig;
xa_lock(&dev->mdev->priv.mkey_table);
mmkey = xa_load(&dev->mdev->priv.mkey_table,
xa_lock(&dev->sig_mrs);
sig = xa_load(&dev->sig_mrs,
mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey)));
mr = to_mibmr(mmkey);
get_sig_err_item(sig_err_cqe, &mr->sig->err_item);
mr->sig->sig_err_exists = true;
mr->sig->sigerr_count++;
get_sig_err_item(sig_err_cqe, &sig->err_item);
sig->sig_err_exists = true;
sig->sigerr_count++;
mlx5_ib_warn(dev, "CQN: 0x%x Got SIGERR on key: 0x%x err_type %x err_offset %llx expected %x actual %x\n",
cq->mcq.cqn, mr->sig->err_item.key,
mr->sig->err_item.err_type,
mr->sig->err_item.sig_err_offset,
mr->sig->err_item.expected,
mr->sig->err_item.actual);
cq->mcq.cqn, sig->err_item.key,
sig->err_item.err_type,
sig->err_item.sig_err_offset,
sig->err_item.expected,
sig->err_item.actual);
xa_unlock(&dev->mdev->priv.mkey_table);
xa_unlock(&dev->sig_mrs);
goto repoll;
}
}
return 0;
}
@ -710,7 +709,7 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
cq->buf.umem =
ib_umem_get(udata, ucmd.buf_addr, entries * ucmd.cqe_size,
IB_ACCESS_LOCAL_WRITE, 1);
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(cq->buf.umem)) {
err = PTR_ERR(cq->buf.umem);
return err;
@ -1111,7 +1110,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
umem = ib_umem_get(udata, ucmd.buf_addr,
(size_t)ucmd.cqe_size * entries,
IB_ACCESS_LOCAL_WRITE, 1);
IB_ACCESS_LOCAL_WRITE);
if (IS_ERR(umem)) {
err = PTR_ERR(umem);
return err;


@ -100,6 +100,7 @@ struct devx_obj {
struct mlx5_ib_devx_mr devx_mr;
struct mlx5_core_dct core_dct;
struct mlx5_core_cq core_cq;
u32 flow_counter_bulk_size;
};
struct list_head event_sub; /* holds devx_event_subscription entries */
};
@ -192,15 +193,20 @@ bool mlx5_ib_devx_is_flow_dest(void *obj, int *dest_id, int *dest_type)
}
}
bool mlx5_ib_devx_is_flow_counter(void *obj, u32 *counter_id)
bool mlx5_ib_devx_is_flow_counter(void *obj, u32 offset, u32 *counter_id)
{
struct devx_obj *devx_obj = obj;
u16 opcode = MLX5_GET(general_obj_in_cmd_hdr, devx_obj->dinbox, opcode);
if (opcode == MLX5_CMD_OP_DEALLOC_FLOW_COUNTER) {
if (offset && offset >= devx_obj->flow_counter_bulk_size)
return false;
*counter_id = MLX5_GET(dealloc_flow_counter_in,
devx_obj->dinbox,
flow_counter_id);
*counter_id += offset;
return true;
}
@ -1265,8 +1271,8 @@ static int devx_handle_mkey_indirect(struct devx_obj *obj,
mkey->pd = MLX5_GET(mkc, mkc, pd);
devx_mr->ndescs = MLX5_GET(mkc, mkc, translations_octword_size);
return xa_err(xa_store(&dev->mdev->priv.mkey_table,
mlx5_base_mkey(mkey->key), mkey, GFP_KERNEL));
return xa_err(xa_store(&dev->odp_mkeys, mlx5_base_mkey(mkey->key), mkey,
GFP_KERNEL));
}
static int devx_handle_mkey_create(struct mlx5_ib_dev *dev,
@ -1345,9 +1351,9 @@ static int devx_obj_cleanup(struct ib_uobject *uobject,
* the mmkey, we must wait for that to stop before freeing the
* mkey, as another allocation could get the same mkey #.
*/
xa_erase(&obj->ib_dev->mdev->priv.mkey_table,
xa_erase(&obj->ib_dev->odp_mkeys,
mlx5_base_mkey(obj->devx_mr.mmkey.key));
synchronize_srcu(&dev->mr_srcu);
synchronize_srcu(&dev->odp_srcu);
}
if (obj->flags & DEVX_OBJ_FLAGS_DCT)
@ -1463,6 +1469,13 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(
if (err)
goto obj_free;
if (opcode == MLX5_CMD_OP_ALLOC_FLOW_COUNTER) {
u8 bulk = MLX5_GET(alloc_flow_counter_in,
cmd_in,
flow_counter_bulk);
obj->flow_counter_bulk_size = 128UL * bulk;
}
uobj->object = obj;
INIT_LIST_HEAD(&obj->event_sub);
obj->ib_dev = dev;
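Taken together, the devx hunks above record the bulk size when a flow counter object is created (128 counters per unit of the flow_counter_bulk field) and later validate the caller-supplied offset before handing back a counter id. A small worked sketch of the combined behaviour; names are illustrative only:

/* Sketch only, not part of the diff. */
static bool sketch_flow_counter_id(u32 bulk_field, u32 base_id, u32 offset,
				   u32 *counter_id)
{
	u32 bulk_size = 128 * bulk_field;	/* e.g. bulk_field 2 -> 256 counters */

	if (offset && offset >= bulk_size)	/* offset beyond the bulk is rejected */
		return false;
	*counter_id = base_id + offset;		/* counters in a bulk are consecutive */
	return true;
}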
@ -2121,7 +2134,7 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext,
if (err)
return err;
obj->umem = ib_umem_get(&attrs->driver_udata, addr, size, access, 0);
obj->umem = ib_umem_get(&attrs->driver_udata, addr, size, access);
if (IS_ERR(obj->umem))
return PTR_ERR(obj->umem);

Some files were not shown because too many files have changed in this diff Show More