
SCSI misc on 20200806

This series consists of the usual driver updates (ufs, qla2xxx, tcmu,
 lpfc, hpsa, zfcp, scsi_debug) and minor bug fixes.  We also have a
 huge docbook fix update like most other subsystems and no major update
 to the core (the few non trivial updates are either minor fixes or
 removing an unused feature [scsi_sdb_cache]).
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCXyxq1yYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishSoAAQChZ4i8
 ZqYW3pL33JO3fA8vdjvLuyC489Hj4wzIsl3/bQEAxYyM6BSLvMoLWR2Plq/JmTLm
 4W/LDptarpTiDI3NuDc=
 =4b0W
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This consists of the usual driver updates (ufs, qla2xxx, tcmu, lpfc,
  hpsa, zfcp, scsi_debug) and minor bug fixes.

  We also have a huge docbook fix update like most other subsystems and
  no major update to the core (the few non trivial updates are either
  minor fixes or removing an unused feature [scsi_sdb_cache])"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (307 commits)
  scsi: scsi_transport_srp: Sanitize scsi_target_block/unblock sequences
  scsi: ufs-mediatek: Apply DELAY_AFTER_LPM quirk to Micron devices
  scsi: ufs: Introduce device quirk "DELAY_AFTER_LPM"
  scsi: virtio-scsi: Correctly handle the case where all LUNs are unplugged
  scsi: scsi_debug: Implement tur_ms_to_ready parameter
  scsi: scsi_debug: Fix request sense
  scsi: lpfc: Fix typo in comment for ULP
  scsi: ufs-mediatek: Prevent LPM operation on undeclared VCC
  scsi: iscsi: Do not put host in iscsi_set_flashnode_param()
  scsi: hpsa: Correct ctrl queue depth
  scsi: target: tcmu: Make TMR notification optional
  scsi: target: tcmu: Implement tmr_notify callback
  scsi: target: tcmu: Fix and simplify timeout handling
  scsi: target: tcmu: Factor out new helper ring_insert_padding
  scsi: target: tcmu: Do not queue aborted commands
  scsi: target: tcmu: Use priv pointer in se_cmd
  scsi: target: Add tmr_notify backend function
  scsi: target: Modify core_tmr_abort_task()
  scsi: target: iscsi: Fix inconsistent debug message
  scsi: target: iscsi: Fix login error when receiving
  ...
Linus Torvalds 2020-08-06 16:50:07 -07:00
commit dfdf16ecfd
221 changed files with 7788 additions and 3395 deletions


@ -883,3 +883,139 @@ Contact: Subhash Jadavani <subhashj@codeaurora.org>
Description: This entry shows the target state of a UFS UIC link
for the chosen system power management level.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows if preserve user-space was configured
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the shared allocated units of WB buffer
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the configured WB type.
0x1 for shared buffer mode. 0x0 for dedicated buffer mode.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the total user-space decrease in shared
buffer mode.
The value of this parameter is 3 for TLC NAND when SLC mode
is used as WriteBooster Buffer. 2 for MLC NAND.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the Maximum total WriteBooster Buffer size
which is supported by the entire device.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the maximum number of luns that can support
WriteBooster.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: The supportability of user space reduction mode
and preserve user space mode.
00h: WriteBooster Buffer can be configured only in
user space reduction type.
01h: WriteBooster Buffer can be configured only in
preserve user space type.
02h: Device can be configured in either user space
reduction type or preserve user space type.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: The supportability of WriteBooster Buffer type.
00h: LU based WriteBooster Buffer configuration
01h: Single shared WriteBooster Buffer
configuration
02h: Supporting both LU based WriteBooster
Buffer and Single shared WriteBooster Buffer
configuration
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the status of WriteBooster.
0: WriteBooster is not enabled.
1: WriteBooster is enabled
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows if flush is enabled.
0: Flush operation is not performed.
1: Flush operation is performed.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: Flush WriteBooster Buffer during hibernate state.
0: Device is not allowed to flush the
WriteBooster Buffer during link hibernate
state.
1: Device is allowed to flush the
WriteBooster Buffer during link hibernate
state
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the amount of unused WriteBooster buffer
available.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the amount of unused current buffer.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the flush operation status.
00h: idle
01h: Flush operation in progress
02h: Flush operation stopped prematurely.
03h: Flush operation completed successfully
04h: Flush operation general failure
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows an indication of the WriteBooster Buffer
lifetime based on the amount of performed program/erase cycles
01h: 0% - 10% WriteBooster Buffer life time used
...
0Ah: 90% - 100% WriteBooster Buffer life time used
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units
Date: June 2020
Contact: Asutosh Das <asutoshd@codeaurora.org>
Description: This entry shows the configured size of WriteBooster buffer.
0400h corresponds to 4GB.
The file is read only.
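The attributes above are ordinary read-only sysfs files, so they can be inspected with plain file I/O. A minimal user-space sketch in C, assuming the '*' in the documented paths has been expanded to the actual ufshcd platform device on the system ("1d84000.ufshc" below is only a placeholder):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Placeholder device name; substitute the real ufshcd platform
	 * device for the '*' in the documented path. */
	const char *path =
		"/sys/bus/platform/drivers/ufshcd/1d84000.ufshc/flags/wb_enable";
	char buf[16];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("wb_enable: %s", buf);	/* "0" or "1" per the doc above */
	fclose(f);
	return EXIT_SUCCESS;
}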


@ -125,7 +125,7 @@ The following constants can be defined in the source file.
c. klogd is started with the appropriate -c parameter
(e.g. klogd -c 8)
This will cause printk() messages to be be displayed on the
This will cause printk() messages to be displayed on the
current console. Refer to the klogd(8) and syslogd(8) man pages
for details.


@ -94,7 +94,7 @@ parameters may be changed at runtime by the command
(/proc/sys/dev/scsi/logging_level).
There is also a nice 'scsi_logging_level' script in the
S390-tools package, available for download at
http://www-128.ibm.com/developerworks/linux/linux390/s390-tools-1.5.4.html
https://github.com/ibm-s390-tools/s390-tools/blob/master/scripts/scsi_logging_level
scsi_mod.scan= [SCSI] sync (default) scans SCSI busses as they are
discovered. async scans them in kernel threads,


@ -2309,7 +2309,7 @@ F: drivers/pci/controller/dwc/pcie-qcom.c
F: drivers/phy/qualcomm/
F: drivers/power/*/msm*
F: drivers/reset/reset-qcom-*
F: drivers/scsi/ufs/ufs-qcom.*
F: drivers/scsi/ufs/ufs-qcom*
F: drivers/spi/spi-geni-qcom.c
F: drivers/spi/spi-qcom-qspi.c
F: drivers/spi/spi-qup.c


@ -164,9 +164,8 @@ EXPORT_SYMBOL(blk_pre_runtime_resume);
*
* Description:
* Update the queue's runtime status according to the return value of the
* device's runtime_resume function. If it is successfully resumed, process
* the requests that are queued into the device's queue when it is resuming
* and then mark last busy and initiate autosuspend for it.
* device's runtime_resume function. If the resume was successful, call
* blk_set_runtime_active() to do the real work of restarting the queue.
*
* This function should be called near the end of the device's
* runtime_resume callback.
@ -175,19 +174,13 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
{
if (!q->dev)
return;
spin_lock_irq(&q->queue_lock);
if (!err) {
q->rpm_status = RPM_ACTIVE;
pm_runtime_mark_last_busy(q->dev);
pm_request_autosuspend(q->dev);
blk_set_runtime_active(q);
} else {
spin_lock_irq(&q->queue_lock);
q->rpm_status = RPM_SUSPENDED;
spin_unlock_irq(&q->queue_lock);
}
spin_unlock_irq(&q->queue_lock);
if (!err)
blk_clear_pm_only(q);
}
EXPORT_SYMBOL(blk_post_runtime_resume);
@ -204,15 +197,25 @@ EXPORT_SYMBOL(blk_post_runtime_resume);
* This function can be used in driver's resume hook to correct queue
* runtime PM status and re-enable peeking requests from the queue. It
* should be called before first request is added to the queue.
*
* This function is also called by blk_post_runtime_resume() for successful
* runtime resumes. It does everything necessary to restart the queue.
*/
void blk_set_runtime_active(struct request_queue *q)
{
if (q->dev) {
spin_lock_irq(&q->queue_lock);
q->rpm_status = RPM_ACTIVE;
pm_runtime_mark_last_busy(q->dev);
pm_request_autosuspend(q->dev);
spin_unlock_irq(&q->queue_lock);
}
int old_status;
if (!q->dev)
return;
spin_lock_irq(&q->queue_lock);
old_status = q->rpm_status;
q->rpm_status = RPM_ACTIVE;
pm_runtime_mark_last_busy(q->dev);
pm_request_autosuspend(q->dev);
spin_unlock_irq(&q->queue_lock);
if (old_status != RPM_ACTIVE)
blk_clear_pm_only(q);
}
EXPORT_SYMBOL(blk_set_runtime_active);
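The calling convention described in the comments is unchanged by this refactoring: a driver reports the result of waking its hardware from near the end of its runtime_resume callback, and on success blk_post_runtime_resume() now delegates the queue restart to blk_set_runtime_active(). A hedged sketch of that convention; struct my_driver and my_dev_wake() are hypothetical stand-ins, not part of the patched API:

#include <linux/blk-pm.h>
#include <linux/device.h>

struct my_driver {			/* hypothetical driver data */
	struct request_queue *queue;
};

static int my_runtime_resume(struct device *dev)
{
	struct my_driver *drv = dev_get_drvdata(dev);
	int err;

	err = my_dev_wake(drv);		/* hypothetical: power up the HW */

	/*
	 * On success this funnels into blk_set_runtime_active(), which
	 * marks the queue RPM_ACTIVE, re-arms autosuspend and clears the
	 * pm-only state so queued requests are processed again.
	 */
	blk_post_runtime_resume(drv->queue, err);
	return err;
}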


@ -922,6 +922,107 @@ int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size)
}
EXPORT_SYMBOL(qcom_scm_ocmem_unlock);
/**
* qcom_scm_ice_available() - Is the ICE key programming interface available?
*
* Return: true iff the SCM calls wrapped by qcom_scm_ice_invalidate_key() and
* qcom_scm_ice_set_key() are available.
*/
bool qcom_scm_ice_available(void)
{
return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
QCOM_SCM_ES_INVALIDATE_ICE_KEY) &&
__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
QCOM_SCM_ES_CONFIG_SET_ICE_KEY);
}
EXPORT_SYMBOL(qcom_scm_ice_available);
/**
* qcom_scm_ice_invalidate_key() - Invalidate an inline encryption key
* @index: the keyslot to invalidate
*
* The UFSHCI standard defines a standard way to do this, but it doesn't work on
* these SoCs; only this SCM call does.
*
* Return: 0 on success; -errno on failure.
*/
int qcom_scm_ice_invalidate_key(u32 index)
{
struct qcom_scm_desc desc = {
.svc = QCOM_SCM_SVC_ES,
.cmd = QCOM_SCM_ES_INVALIDATE_ICE_KEY,
.arginfo = QCOM_SCM_ARGS(1),
.args[0] = index,
.owner = ARM_SMCCC_OWNER_SIP,
};
return qcom_scm_call(__scm->dev, &desc, NULL);
}
EXPORT_SYMBOL(qcom_scm_ice_invalidate_key);
/**
* qcom_scm_ice_set_key() - Set an inline encryption key
* @index: the keyslot into which to set the key
* @key: the key to program
* @key_size: the size of the key in bytes
* @cipher: the encryption algorithm the key is for
* @data_unit_size: the encryption data unit size, i.e. the size of each
* individual plaintext and ciphertext. Given in 512-byte
* units, e.g. 1 = 512 bytes, 8 = 4096 bytes, etc.
*
* Program a key into a keyslot of Qualcomm ICE (Inline Crypto Engine), where it
* can then be used to encrypt/decrypt UFS I/O requests inline.
*
* The UFSHCI standard defines a standard way to do this, but it doesn't work on
* these SoCs; only this SCM call does.
*
* Return: 0 on success; -errno on failure.
*/
int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size,
enum qcom_scm_ice_cipher cipher, u32 data_unit_size)
{
struct qcom_scm_desc desc = {
.svc = QCOM_SCM_SVC_ES,
.cmd = QCOM_SCM_ES_CONFIG_SET_ICE_KEY,
.arginfo = QCOM_SCM_ARGS(5, QCOM_SCM_VAL, QCOM_SCM_RW,
QCOM_SCM_VAL, QCOM_SCM_VAL,
QCOM_SCM_VAL),
.args[0] = index,
.args[2] = key_size,
.args[3] = cipher,
.args[4] = data_unit_size,
.owner = ARM_SMCCC_OWNER_SIP,
};
void *keybuf;
dma_addr_t key_phys;
int ret;
/*
* 'key' may point to vmalloc()'ed memory, but we need to pass a
* physical address that's been properly flushed. The sanctioned way to
* do this is by using the DMA API. But as is best practice for crypto
* keys, we also must wipe the key after use. This makes kmemdup() +
* dma_map_single() not clearly correct, since the DMA API can use
* bounce buffers. Instead, just use dma_alloc_coherent(). Programming
* keys is normally rare and thus not performance-critical.
*/
keybuf = dma_alloc_coherent(__scm->dev, key_size, &key_phys,
GFP_KERNEL);
if (!keybuf)
return -ENOMEM;
memcpy(keybuf, key, key_size);
desc.args[1] = key_phys;
ret = qcom_scm_call(__scm->dev, &desc, NULL);
memzero_explicit(keybuf, key_size);
dma_free_coherent(__scm->dev, key_size, keybuf, key_phys);
return ret;
}
EXPORT_SYMBOL(qcom_scm_ice_set_key);
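A hedged usage sketch for the two calls added above. The keyslot index, key contents, and data-unit size are illustrative only; QCOM_SCM_ICE_CIPHER_AES_256_XTS comes from the qcom_scm_ice_cipher enum this interface is declared with in include/linux/qcom_scm.h:

#include <linux/errno.h>
#include <linux/qcom_scm.h>
#include <linux/string.h>

static int program_demo_key(void)
{
	/* AES-256-XTS keys are 64 bytes; real key material would come
	 * from the block layer's keyring, not a zeroed stack buffer. */
	u8 key[64] = { 0 };
	int ret;

	if (!qcom_scm_ice_available())
		return -ENODEV;

	/* Keyslot 0, 4096-byte crypto data units (8 * 512 bytes). */
	ret = qcom_scm_ice_set_key(0, key, sizeof(key),
				   QCOM_SCM_ICE_CIPHER_AES_256_XTS, 8);

	memzero_explicit(key, sizeof(key));	/* wipe the local copy */
	return ret;
}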
/**
* qcom_scm_hdcp_available() - Check if secure environment supports HDCP.
*


@ -103,6 +103,10 @@ extern int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc,
#define QCOM_SCM_OCMEM_LOCK_CMD 0x01
#define QCOM_SCM_OCMEM_UNLOCK_CMD 0x02
#define QCOM_SCM_SVC_ES 0x10 /* Enterprise Security */
#define QCOM_SCM_ES_INVALIDATE_ICE_KEY 0x03
#define QCOM_SCM_ES_CONFIG_SET_ICE_KEY 0x04
#define QCOM_SCM_SVC_HDCP 0x11
#define QCOM_SCM_HDCP_INVOKE 0x01


@ -124,13 +124,12 @@ static void zfcp_ccw_remove(struct ccw_device *cdev)
return;
write_lock_irq(&adapter->port_list_lock);
list_for_each_entry_safe(port, p, &adapter->port_list, list) {
list_for_each_entry(port, &adapter->port_list, list) {
write_lock(&port->unit_list_lock);
list_for_each_entry_safe(unit, u, &port->unit_list, list)
list_move(&unit->list, &unit_remove_lh);
list_splice_init(&port->unit_list, &unit_remove_lh);
write_unlock(&port->unit_list_lock);
list_move(&port->list, &port_remove_lh);
}
list_splice_init(&adapter->port_list, &port_remove_lh);
write_unlock_irq(&adapter->port_list_lock);
zfcp_ccw_adapter_put(adapter); /* put from zfcp_ccw_adapter_by_cdev */


@ -68,7 +68,7 @@ static void zfcp_erp_action_ready(struct zfcp_erp_action *act)
{
struct zfcp_adapter *adapter = act->adapter;
list_move(&act->list, &act->adapter->erp_ready_head);
list_move(&act->list, &adapter->erp_ready_head);
zfcp_dbf_rec_run("erardy1", act);
wake_up(&adapter->erp_ready_wq);
zfcp_dbf_rec_run("erardy2", act);


@ -48,7 +48,7 @@ unsigned int zfcp_fc_port_scan_backoff(void)
{
if (!port_scan_backoff)
return 0;
return get_random_int() % port_scan_backoff;
return prandom_u32_max(port_scan_backoff);
}
static void zfcp_fc_port_scan_time(struct zfcp_adapter *adapter)


@ -246,7 +246,7 @@ int zfcp_qdio_sbal_get(struct zfcp_qdio *qdio)
}
/**
* zfcp_qdio_send - set PCI flag in first SBALE and send req to QDIO
* zfcp_qdio_send - send req to QDIO
* @qdio: pointer to struct zfcp_qdio
* @q_req: pointer to struct zfcp_qdio_req
* Returns: 0 on success, error otherwise
@ -260,17 +260,20 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
zfcp_qdio_account(qdio);
spin_unlock(&qdio->stat_lock);
atomic_sub(sbal_number, &qdio->req_q_free);
retval = do_QDIO(qdio->adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0,
q_req->sbal_first, sbal_number);
if (unlikely(retval)) {
/* Failed to submit the IO, roll back our modifications. */
atomic_add(sbal_number, &qdio->req_q_free);
zfcp_qdio_zero_sbals(qdio->req_q, q_req->sbal_first,
sbal_number);
return retval;
}
/* account for transferred buffers */
atomic_sub(sbal_number, &qdio->req_q_free);
qdio->req_q_idx += sbal_number;
qdio->req_q_idx %= QDIO_MAX_BUFFERS_PER_Q;
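The reordering above is the usual reserve-then-undo idiom: the free-buffer count is debited before the request is handed to the hardware, and the failure path credits it back (and clears the SBALs) before propagating the error. The same shape in miniature, with the counter and submit callback as generic stand-ins rather than zfcp types:

#include <linux/atomic.h>

static int submit_with_rollback(atomic_t *free_count, int needed,
				int (*submit_fn)(void))
{
	int ret;

	atomic_sub(needed, free_count);		/* reserve up front */
	ret = submit_fn();
	if (ret)
		atomic_add(needed, free_count);	/* undo on failure */
	return ret;
}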


@ -1154,6 +1154,7 @@ source "drivers/scsi/qedf/Kconfig"
config SCSI_LPFC
tristate "Emulex LightPulse Fibre Channel Support"
depends on PCI && SCSI
depends on CPU_FREQ
depends on SCSI_FC_ATTRS
depends on NVME_TARGET_FC || NVME_TARGET_FC=n
depends on NVME_FC || NVME_FC=n
@ -1469,14 +1470,19 @@ config SCSI_SUNESP
module will be called sun_esp.
config ZFCP
tristate "FCP host bus adapter driver for IBM eServer zSeries"
tristate "FCP host bus adapter driver for IBM mainframes"
depends on S390 && QDIO && SCSI
depends on SCSI_FC_ATTRS
help
If you want to access SCSI devices attached to your IBM eServer
zSeries by means of Fibre Channel interfaces say Y.
For details please refer to the documentation provided by IBM at
<http://oss.software.ibm.com/developerworks/opensource/linux390>
If you want to access SCSI devices attached to your IBM mainframe by
means of Fibre Channel Protocol host bus adapters say Y.
Supported HBAs include different models of the FICON Express and FCP
Express I/O cards.
For a more complete list, and for more details about setup and
operation refer to the IBM publication "Device Drivers, Features, and
Commands", SC33-8411.
This driver is also available as a module. This module will be
called zfcp. If you want to compile it as a module, say M here


@ -350,7 +350,8 @@ static inline int aac_valid_context(struct scsi_cmnd *scsicmd,
/**
* aac_get_config_status - check the adapter configuration
* @common: adapter to query
* @dev: aac driver data
* @commit_flag: force sending CT_COMMIT_CONFIG
*
* Query config status, and commit the configuration if needed.
*/
@ -442,7 +443,7 @@ static void aac_expose_phy_device(struct scsi_cmnd *scsicmd)
/**
* aac_get_containers - list containers
* @common: adapter to probe
* @dev: aac driver data
*
* Make a list of all containers on this controller
*/
@ -561,7 +562,7 @@ static void get_container_name_callback(void *context, struct fib * fibptr)
scsicmd->scsi_done(scsicmd);
}
/**
/*
* aac_get_container_name - get container name, non-blocking.
*/
static int aac_get_container_name(struct scsi_cmnd * scsicmd)
@ -786,8 +787,7 @@ static int _aac_probe_container(struct scsi_cmnd * scsicmd, int (*callback)(stru
/**
* aac_probe_container - query a logical volume
* @dev: device to query
* @cid: container identifier
* @scsicmd: the scsi command block
*
* Queries the controller about the given volume. The volume information
* is updated in the struct fsa_dev_info structure rather than returned.
@ -1098,7 +1098,7 @@ static void get_container_serial_callback(void *context, struct fib * fibptr)
scsicmd->scsi_done(scsicmd);
}
/**
/*
* aac_get_container_serial - get container serial, non-blocking.
*/
static int aac_get_container_serial(struct scsi_cmnd * scsicmd)
@ -1952,8 +1952,6 @@ free_identify_resp:
/**
* aac_set_safw_attr_all_targets- update current hba map with data from FW
* @dev: aac_dev structure
* @phys_luns: FW information from report phys luns
* @rescan: Indicates scan type
*
* Update our hba map with the information gathered from the FW
*/
@ -3391,15 +3389,12 @@ int aac_dev_ioctl(struct aac_dev *dev, unsigned int cmd, void __user *arg)
}
/**
*
* aac_srb_callback
* @context: the context set in the fib - here it is scsi cmd
* @fibptr: pointer to the fib
*
* Handles the completion of a scsi command to a non dasd device
*
*/
static void aac_srb_callback(void *context, struct fib * fibptr)
{
struct aac_srb_reply *srbreply;
@ -3684,13 +3679,11 @@ static void hba_resp_task_failure(struct aac_dev *dev,
}
/**
*
* aac_hba_callback
* @context: the context set in the fib - here it is scsi cmd
* @fibptr: pointer to the fib
*
* Handles the completion of a native HBA scsi command
*
*/
void aac_hba_callback(void *context, struct fib *fibptr)
{
@ -3749,14 +3742,12 @@ out:
}
/**
*
* aac_send_srb_fib
* @scsicmd: the scsi command block
*
* This routine will form a FIB and fill in the aac_srb from the
* scsicmd passed in.
*/
static int aac_send_srb_fib(struct scsi_cmnd* scsicmd)
{
struct fib* cmd_fibcontext;
@ -3792,7 +3783,6 @@ static int aac_send_srb_fib(struct scsi_cmnd* scsicmd)
}
/**
*
* aac_send_hba_fib
* @scsicmd: the scsi command block
*


@ -32,6 +32,8 @@
#include "aacraid.h"
# define AAC_DEBUG_PREAMBLE KERN_INFO
# define AAC_DEBUG_POSTAMBLE
/**
* ioctl_send_fib - send a FIB from userspace
* @dev: adapter is being processed
@ -40,9 +42,6 @@
* This routine sends a fib to the adapter on behalf of a user level
* program.
*/
# define AAC_DEBUG_PREAMBLE KERN_INFO
# define AAC_DEBUG_POSTAMBLE
static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
{
struct hw_fib * kfib;
@ -158,11 +157,12 @@ cleanup:
/**
* open_getadapter_fib - Get the next fib
* @dev: adapter is being processed
* @arg: arguments to the open call
*
* This routine will get the next Fib, if available, from the AdapterFibContext
* passed in from the user.
*/
static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
{
struct aac_fib_context * fibctx;
@ -234,7 +234,6 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
* This routine will get the next Fib, if available, from the AdapterFibContext
* passed in from the user.
*/
static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
{
struct fib_ioctl f;
@ -455,11 +454,10 @@ static int check_revision(struct aac_dev *dev, void __user *arg)
/**
*
* aac_send_raw_scb
*
* @dev: adapter is being processed
* @arg: arguments to the send call
*/
static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
{
struct fib* srbfib;


@ -214,6 +214,7 @@ int aac_fib_setup(struct aac_dev * dev)
/**
* aac_fib_alloc_tag-allocate a fib using tags
* @dev: Adapter to allocate the fib for
* @scmd: SCSI command
*
* Allocate a fib from the adapter fib pool using tags
* from the blk layer.
@ -405,8 +406,8 @@ static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entr
* aac_queue_get - get the next free QE
* @dev: Adapter
* @index: Returned index
* @priority: Priority of fib
* @fib: Fib to associate with the queue entry
* @qid: Queue number
* @hw_fib: Fib to associate with the queue entry
* @wait: Wait if queue full
* @fibptr: Driver fib object to go with fib
* @nonotify: Don't notify the adapter
@ -934,7 +935,7 @@ int aac_fib_adapter_complete(struct fib *fibptr, unsigned short size)
/**
* aac_fib_complete - fib completion handler
* @fib: FIB to complete
* @fibptr: FIB to complete
*
* Will do all necessary work to complete a FIB.
*/
@ -1049,6 +1050,7 @@ static void aac_handle_aif_bu(struct aac_dev *dev, struct aac_aifcmd *aifcmd)
}
}
#define AIF_SNIFF_TIMEOUT (500*HZ)
/**
* aac_handle_aif - Handle a message from the firmware
* @dev: Which adapter this fib is from
@ -1057,8 +1059,6 @@ static void aac_handle_aif_bu(struct aac_dev *dev, struct aac_aifcmd *aifcmd)
* This routine handles a driver notify fib from the adapter and
* dispatches it to the appropriate routine for handling.
*/
#define AIF_SNIFF_TIMEOUT (500*HZ)
static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
{
struct hw_fib * hw_fib = fibptr->hw_fib_va;
@ -2416,7 +2416,7 @@ out:
/**
* aac_command_thread - command processing thread
* @dev: Adapter to monitor
* @data: Adapter to monitor
*
* Waits on the commandready event in its queue. When the event gets set
* it will pull FIBs off its queue. It will continue to pull FIBs off


@ -99,10 +99,11 @@ unsigned int aac_response_normal(struct aac_queue * q)
}
if (hwfib->header.XferState & cpu_to_le32(NoResponseExpected | Async))
{
if (hwfib->header.XferState & cpu_to_le32(NoResponseExpected))
if (hwfib->header.XferState & cpu_to_le32(NoResponseExpected)) {
FIB_COUNTER_INCREMENT(aac_config.NoResponseRecved);
else
} else {
FIB_COUNTER_INCREMENT(aac_config.AsyncRecved);
}
/*
* NOTE: we cannot touch the fib after this
* call, because it may have been deallocated.
@ -229,7 +230,6 @@ static void aac_aif_callback(void *context, struct fib * fibptr)
struct fib *fibctx;
struct aac_dev *dev;
struct aac_aifcmd *cmd;
int status;
fibctx = (struct fib *)context;
BUG_ON(fibptr == NULL);
@ -249,7 +249,7 @@ static void aac_aif_callback(void *context, struct fib * fibptr)
cmd = (struct aac_aifcmd *) fib_data(fibctx);
cmd->command = cpu_to_le32(AifReqEvent);
status = aac_fib_send(AifRequest,
aac_fib_send(AifRequest,
fibctx,
sizeof(struct hw_fib)-sizeof(struct aac_fibhdr),
FsaNormal,
@ -258,7 +258,7 @@ static void aac_aif_callback(void *context, struct fib * fibptr)
}
/**
/*
* aac_intr_normal - Handle command replies
* @dev: Device
* @index: completion reference
@ -403,12 +403,13 @@ unsigned int aac_intr_normal(struct aac_dev *dev, u32 index, int isAif,
if (hwfib->header.XferState &
cpu_to_le32(NoResponseExpected | Async)) {
if (hwfib->header.XferState & cpu_to_le32(
NoResponseExpected))
NoResponseExpected)) {
FIB_COUNTER_INCREMENT(
aac_config.NoResponseRecved);
else
} else {
FIB_COUNTER_INCREMENT(
aac_config.AsyncRecved);
}
start_callback = 1;
} else {
unsigned long flagv;


@ -230,8 +230,8 @@ static struct aac_driver_ident aac_drivers[] = {
/**
* aac_queuecommand - queue a SCSI command
* @shost: Scsi host to queue command on
* @cmd: SCSI command to queue
* @done: Function to call on command completion
*
* Queues a command for execution by the associated Host Adapter.
*
@ -363,9 +363,10 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
param->cylinders = cap_to_cyls(capacity, param->heads * param->sectors);
if (num < 4 && end_sec == param->sectors) {
if (param->cylinders != saved_cylinders)
if (param->cylinders != saved_cylinders) {
dprintk((KERN_DEBUG "Adopting geometry: heads=%d, sectors=%d from partition table %d.\n",
param->heads, param->sectors, num));
}
} else if (end_head > 0 || end_sec > 0) {
dprintk((KERN_DEBUG "Strange geometry: heads=%d, sectors=%d in partition table %d.\n",
end_head + 1, end_sec, num));
@ -1159,7 +1160,6 @@ static int aac_cfg_open(struct inode *inode, struct file *file)
/**
* aac_cfg_ioctl - AAC configuration request
* @inode: inode of device
* @file: file handle
* @cmd: ioctl command code
* @arg: argument


@ -24,6 +24,7 @@
/**
* aac_nark_ioremap
* @dev: device to ioremap
* @size: mapping resize request
*
*/


@ -57,6 +57,7 @@ static int aac_rkt_select_comm(struct aac_dev *dev, int comm)
/**
* aac_rkt_ioremap
* @dev: device to ioremap
* @size: mapping resize request
*
*/
@ -77,8 +78,8 @@ static int aac_rkt_ioremap(struct aac_dev * dev, u32 size)
* aac_rkt_init - initialize an i960 based AAC card
* @dev: device to configure
*
* Allocate and set up resources for the i960 based AAC variants. The
* device_interface in the commregion will be allocated and linked
* Allocate and set up resources for the i960 based AAC variants. The
* device_interface in the commregion will be allocated and linked
* to the comm region.
*/


@ -144,7 +144,16 @@ static void aac_rx_enable_interrupt_message(struct aac_dev *dev)
* @dev: Adapter
* @command: Command to execute
* @p1: first parameter
* @ret: adapter status
* @p2: second parameter
* @p3: third parameter
* @p4: fourth parameter
* @p5: fifth parameter
* @p6: sixth parameter
* @status: adapter status
* @r1: first return value
* @r2: second return value
* @r3: third return value
* @r4: fourth return value
*
* This routine will send a synchronous command to the adapter and wait
* for its completion.
@ -443,6 +452,7 @@ static int aac_rx_deliver_message(struct fib * fib)
/**
* aac_rx_ioremap
* @dev: adapter
* @size: mapping resize request
*
*/


@ -135,13 +135,21 @@ static void aac_sa_notify_adapter(struct aac_dev *dev, u32 event)
* @dev: Adapter
* @command: Command to execute
* @p1: first parameter
* @p2: second parameter
* @p3: third parameter
* @p4: fourth parameter
* @p5: fifth parameter
* @p6: sixth parameter
* @ret: adapter status
* @r1: first return value
* @r2: second return value
* @r3: third return value
* @r4: fourth return value
*
* This routine will send a synchronous command to the adapter and wait
* This routine will send a synchronous command to the adapter and wait
* for its completion.
*/
static int sa_sync_cmd(struct aac_dev *dev, u32 command,
static int sa_sync_cmd(struct aac_dev *dev, u32 command,
u32 p1, u32 p2, u32 p3, u32 p4, u32 p5, u32 p6,
u32 *ret, u32 *r1, u32 *r2, u32 *r3, u32 *r4)
{
@ -283,6 +291,7 @@ static int aac_sa_check_health(struct aac_dev *dev)
/**
* aac_sa_ioremap
* @dev: device to ioremap
* @size: mapping resize request
*
*/
@ -300,8 +309,8 @@ static int aac_sa_ioremap(struct aac_dev * dev, u32 size)
* aac_sa_init - initialize an ARM based AAC card
* @dev: device to configure
*
* Allocate and set up resources for the ARM based AAC variants. The
* device_interface in the commregion will be allocated and linked
* Allocate and set up resources for the ARM based AAC variants. The
* device_interface in the commregion will be allocated and linked
* to the comm region.
*/


@ -191,7 +191,16 @@ static void aac_src_enable_interrupt_message(struct aac_dev *dev)
* @dev: Adapter
* @command: Command to execute
* @p1: first parameter
* @ret: adapter status
* @p2: second parameter
* @p3: third parameter
* @p4: fourth parameter
* @p5: fifth parameter
* @p6: sixth parameter
* @status: adapter status
* @r1: first return value
* @r2: second return value
* @r3: third return value
* @r4: fourth return value
*
* This routine will send a synchronous command to the adapter and wait
* for its completion.
@ -602,6 +611,7 @@ static int aac_src_deliver_message(struct fib *fib)
/**
* aac_src_ioremap
* @dev: device ioremap
* @size: mapping resize request
*
*/
@ -632,6 +642,7 @@ static int aac_src_ioremap(struct aac_dev *dev, u32 size)
/**
* aac_srcv_ioremap
* @dev: device ioremap
* @size: mapping resize request
*
*/


@ -2030,8 +2030,7 @@ static void datai_run(struct Scsi_Host *shpnt)
fifodata, GETPORT(FIFOSTAT));
SETPORT(DMACNTRL0, ENDMA|_8BIT);
while(fifodata>0) {
int data;
data=GETPORT(DATAPORT);
GETPORT(DATAPORT);
fifodata--;
DATA_LEN++;
}


@ -1735,10 +1735,8 @@ ahd_dump_sglist(struct scb *scb)
sg_list = (struct ahd_dma64_seg*)scb->sg_list;
for (i = 0; i < scb->sg_count; i++) {
uint64_t addr;
uint32_t len;
addr = ahd_le64toh(sg_list[i].addr);
len = ahd_le32toh(sg_list[i].len);
printk("sg[%d] - Addr 0x%x%x : Length %d%s\n",
i,
(uint32_t)((addr >> 32) & 0xFFFFFFFF),
@ -1906,9 +1904,6 @@ ahd_handle_seqint(struct ahd_softc *ahd, u_int intstat)
{
struct ahd_devinfo devinfo;
struct scb *scb;
struct ahd_initiator_tinfo *targ_info;
struct ahd_tmode_tstate *tstate;
struct ahd_transinfo *tinfo;
u_int scbid;
/*
@ -1936,12 +1931,6 @@ ahd_handle_seqint(struct ahd_softc *ahd, u_int intstat)
SCB_GET_LUN(scb),
SCB_GET_CHANNEL(ahd, scb),
ROLE_INITIATOR);
targ_info = ahd_fetch_transinfo(ahd,
devinfo.channel,
devinfo.our_scsiid,
devinfo.target,
&tstate);
tinfo = &targ_info->curr;
ahd_set_width(ahd, &devinfo, MSG_EXT_WDTR_BUS_8_BIT,
AHD_TRANS_ACTIVE, /*paused*/TRUE);
ahd_set_syncrate(ahd, &devinfo, /*period*/0,
@ -2669,7 +2658,6 @@ ahd_handle_transmission_error(struct ahd_softc *ahd)
struct scb *scb;
u_int scbid;
u_int lqistat1;
u_int lqistat2;
u_int msg_out;
u_int curphase;
u_int lastphase;
@ -2680,7 +2668,7 @@ ahd_handle_transmission_error(struct ahd_softc *ahd)
scb = NULL;
ahd_set_modes(ahd, AHD_MODE_SCSI, AHD_MODE_SCSI);
lqistat1 = ahd_inb(ahd, LQISTAT1) & ~(LQIPHASE_LQ|LQIPHASE_NLQ);
lqistat2 = ahd_inb(ahd, LQISTAT2);
ahd_inb(ahd, LQISTAT2);
if ((lqistat1 & (LQICRCI_NLQ|LQICRCI_LQ)) == 0
&& (ahd->bugs & AHD_NLQICRC_DELAYED_BUG) != 0) {
u_int lqistate;
@ -4218,13 +4206,11 @@ ahd_update_pending_scbs(struct ahd_softc *ahd)
pending_scb_count = 0;
LIST_FOREACH(pending_scb, &ahd->pending_scbs, pending_links) {
struct ahd_devinfo devinfo;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
ahd_scb_devinfo(ahd, &devinfo, pending_scb);
tinfo = ahd_fetch_transinfo(ahd, devinfo.channel,
devinfo.our_scsiid,
devinfo.target, &tstate);
ahd_fetch_transinfo(ahd, devinfo.channel, devinfo.our_scsiid,
devinfo.target, &tstate);
if ((tstate->auto_negotiate & devinfo.target_mask) == 0
&& (pending_scb->flags & SCB_AUTO_NEGOTIATE) != 0) {
pending_scb->flags &= ~SCB_AUTO_NEGOTIATE;


@ -700,9 +700,6 @@ ahd_linux_slave_alloc(struct scsi_device *sdev)
static int
ahd_linux_slave_configure(struct scsi_device *sdev)
{
struct ahd_softc *ahd;
ahd = *((struct ahd_softc **)sdev->host->hostdata);
if (bootverbose)
sdev_printk(KERN_INFO, sdev, "Slave Configure\n");
@ -778,16 +775,13 @@ ahd_linux_dev_reset(struct scsi_cmnd *cmd)
struct scb *reset_scb;
u_int cdb_byte;
int retval = SUCCESS;
int paused;
int wait;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
unsigned long flags;
DECLARE_COMPLETION_ONSTACK(done);
reset_scb = NULL;
paused = FALSE;
wait = FALSE;
ahd = *(struct ahd_softc **)cmd->device->host->hostdata;
scmd_printk(KERN_INFO, cmd,
@ -1793,10 +1787,12 @@ ahd_done(struct ahd_softc *ahd, struct scb *scb)
*/
cmd->sense_buffer[0] = 0;
if (ahd_get_transaction_status(scb) == CAM_REQ_INPROG) {
#ifdef AHD_REPORT_UNDERFLOWS
uint32_t amount_xferred;
amount_xferred =
ahd_get_transfer_length(scb) - ahd_get_residual(scb);
#endif
if ((scb->flags & SCB_TRANSMISSION_ERROR) != 0) {
#ifdef AHD_DEBUG
if ((ahd_debug & AHD_SHOW_MISC) != 0) {
@ -2147,7 +2143,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
u_int last_phase;
u_int saved_scsiid;
u_int cdb_byte;
int retval;
int retval = SUCCESS;
int was_paused;
int paused;
int wait;
@ -2185,8 +2181,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
* so we must not still own the command.
*/
scmd_printk(KERN_INFO, cmd, "Is not an active device\n");
retval = SUCCESS;
goto no_cmd;
goto done;
}
/*
@ -2199,7 +2194,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
if (pending_scb == NULL) {
scmd_printk(KERN_INFO, cmd, "Command not found\n");
goto no_cmd;
goto done;
}
if ((pending_scb->flags & SCB_RECOVERY_SCB) != 0) {
@ -2207,7 +2202,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
* We can't queue two recovery actions using the same SCB
*/
retval = FAILED;
goto done;
goto done;
}
/*
@ -2222,7 +2217,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
if ((pending_scb->flags & SCB_ACTIVE) == 0) {
scmd_printk(KERN_INFO, cmd, "Command already completed\n");
goto no_cmd;
goto done;
}
printk("%s: At time of recovery, card was %spaused\n",
@ -2239,7 +2234,6 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
printk("%s:%d:%d:%d: Cmd aborted from QINFIFO\n",
ahd_name(ahd), cmd->device->channel,
cmd->device->id, (u8)cmd->device->lun);
retval = SUCCESS;
goto done;
}
@ -2336,17 +2330,10 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
} else {
scmd_printk(KERN_INFO, cmd, "Unable to deliver message\n");
retval = FAILED;
goto done;
}
no_cmd:
/*
* Our assumption is that if we don't have the command, no
* recovery action was required, so we return success. Again,
* the semantics of the mid-layer recovery engine are not
* well defined, so this may change in time.
*/
retval = SUCCESS;
ahd_restore_modes(ahd, saved_modes);
done:
if (paused)
ahd_unpause(ahd);


@ -564,8 +564,6 @@ ahc_linux_target_alloc(struct scsi_target *starget)
struct scsi_target **ahc_targp = ahc_linux_target_in_softc(starget);
unsigned short scsirate;
struct ahc_devinfo devinfo;
struct ahc_initiator_tinfo *tinfo;
struct ahc_tmode_tstate *tstate;
char channel = starget->channel + 'A';
unsigned int our_id = ahc->our_id;
unsigned int target_offset;
@ -612,9 +610,6 @@ ahc_linux_target_alloc(struct scsi_target *starget)
spi_max_offset(starget) = 0;
spi_min_period(starget) =
ahc_find_period(ahc, scsirate, maxsync);
tinfo = ahc_fetch_transinfo(ahc, channel, ahc->our_id,
starget->id, &tstate);
}
ahc_compile_devinfo(&devinfo, our_id, starget->id,
CAM_LUN_WILDCARD, channel,
@ -671,10 +666,6 @@ ahc_linux_slave_alloc(struct scsi_device *sdev)
static int
ahc_linux_slave_configure(struct scsi_device *sdev)
{
struct ahc_softc *ahc;
ahc = *((struct ahc_softc **)sdev->host->hostdata);
if (bootverbose)
sdev_printk(KERN_INFO, sdev, "Slave Configure\n");
@ -1601,7 +1592,6 @@ ahc_send_async(struct ahc_softc *ahc, char channel,
case AC_TRANSFER_NEG:
{
struct scsi_target *starget;
struct ahc_linux_target *targ;
struct ahc_initiator_tinfo *tinfo;
struct ahc_tmode_tstate *tstate;
int target_offset;
@ -1635,7 +1625,6 @@ ahc_send_async(struct ahc_softc *ahc, char channel,
starget = ahc->platform_data->starget[target_offset];
if (starget == NULL)
break;
targ = scsi_transport_target_data(starget);
target_ppr_options =
(spi_dt(starget) ? MSG_EXT_PPR_DT_REQ : 0)
@ -1722,10 +1711,12 @@ ahc_done(struct ahc_softc *ahc, struct scb *scb)
*/
cmd->sense_buffer[0] = 0;
if (ahc_get_transaction_status(scb) == CAM_REQ_INPROG) {
#ifdef AHC_REPORT_UNDERFLOWS
uint32_t amount_xferred;
amount_xferred =
ahc_get_transfer_length(scb) - ahc_get_residual(scb);
#endif
if ((scb->flags & SCB_TRANSMISSION_ERROR) != 0) {
#ifdef AHC_DEBUG
if ((ahc_debug & AHC_SHOW_MISC) != 0) {


@ -236,7 +236,7 @@ static int asd_init_sata_pm_table_ddb(struct domain_device *dev)
/**
* asd_init_sata_pm_port_ddb -- SATA Port Multiplier Port
* dev: pointer to domain device
* @dev: pointer to domain device
*
* For SATA Port Multiplier Ports we need to allocate one SATA Port
* Multiplier Port DDB and depending on whether the target on it
@ -281,7 +281,7 @@ static int asd_init_initiator_ddb(struct domain_device *dev)
/**
* asd_init_sata_pm_ddb -- SATA Port Multiplier
* dev: pointer to domain device
* @dev: pointer to domain device
*
* For STP and direct-attached SATA Port Multipliers we need
* one target port DDB entry and one SATA PM table DDB entry.


@ -575,7 +575,7 @@ static int asd_extend_cmdctx(struct asd_ha_struct *asd_ha)
/**
* asd_init_ctxmem -- initialize context memory
* asd_ha: pointer to host adapter structure
* @asd_ha: pointer to host adapter structure
*
* This function sets the maximum number of SCBs and
* DDBs which can be used by the sequencer. This is normally
@ -1146,7 +1146,6 @@ static void asd_swap_head_scb(struct asd_ha_struct *asd_ha,
/**
* asd_start_timers -- (add and) start timers of SCBs
* @list: pointer to struct list_head of the scbs
* @to: timeout in jiffies
*
* If an SCB in the @list has no timer function, assign the default
* one, then start the timer of the SCB. This function is


@ -530,7 +530,7 @@ static int asd_create_ha_caches(struct asd_ha_struct *asd_ha)
return 0;
}
/**
/*
* asd_free_edbs -- free empty data buffers
* asd_ha: pointer to host adapter structure
*/


@ -123,8 +123,8 @@ static unsigned ord_phy(struct asd_ha_struct *asd_ha, struct asd_phy *phy)
/**
* asd_get_attached_sas_addr -- extract/generate attached SAS address
* phy: pointer to asd_phy
* sas_addr: pointer to buffer where the SAS address is to be written
* @phy: pointer to asd_phy
* @sas_addr: pointer to buffer where the SAS address is to be written
*
* This function extracts the SAS address from an IDENTIFY frame
* received. If OOB is SATA, then a SAS address is generated from the
@ -847,7 +847,7 @@ void asd_build_initiate_link_adm_task(struct asd_ascb *ascb, int phy_id,
/**
* asd_ascb_timedout -- called when a pending SCB's timer has expired
* @data: unsigned long, a pointer to the ascb in question
* @t: Timer context used to fetch the SCB
*
* This is the default timeout function which does the most necessary.
* Upper layers can implement their own timeout function, say to free


@ -582,6 +582,7 @@ static void asd_init_cseq_scratch(struct asd_ha_struct *asd_ha)
/**
* asd_init_lseq_mip -- initialize LSEQ Mode independent pages 0-3
* @asd_ha: pointer to host adapter structure
* @lseq: link sequencer
*/
static void asd_init_lseq_mip(struct asd_ha_struct *asd_ha, u8 lseq)
{
@ -669,6 +670,7 @@ static void asd_init_lseq_mip(struct asd_ha_struct *asd_ha, u8 lseq)
/**
* asd_init_lseq_mdp -- initialize LSEQ mode dependent pages.
* @asd_ha: pointer to host adapter structure
* @lseq: link sequencer
*/
static void asd_init_lseq_mdp(struct asd_ha_struct *asd_ha, int lseq)
{
@ -953,6 +955,7 @@ static void asd_init_cseq_cio(struct asd_ha_struct *asd_ha)
/**
* asd_init_lseq_cio -- initialize LmSEQ CIO registers
* @asd_ha: pointer to host adapter structure
* @lseq: link sequencer
*/
static void asd_init_lseq_cio(struct asd_ha_struct *asd_ha, int lseq)
{
@ -1345,7 +1348,8 @@ int asd_start_seqs(struct asd_ha_struct *asd_ha)
/**
* asd_update_port_links -- update port_map_by_links and phy_is_up
* @sas_phy: pointer to the phy which has been added to a port
* @asd_ha: pointer to host adapter structure
* @phy: pointer to the phy which has been added to a port
*
* 1) When a link reset has completed and we got BYTES DMAED with a
* valid frame we call this function for that phy, to indicate that


@ -673,7 +673,7 @@ int asd_lu_reset(struct domain_device *dev, u8 *lun)
/**
* asd_query_task -- send a QUERY TASK TMF to an I_T_L_Q nexus
* task: pointer to sas_task struct of interest
* @task: pointer to sas_task struct of interest
*
* Returns: TMF_RESP_FUNC_COMPLETE if the task is not in the task set,
* or TMF_RESP_FUNC_SUCC if the task is in the task set.


@ -283,11 +283,10 @@ static bool arcmsr_remap_pciregion(struct AdapterControlBlock *acb)
}
case ACB_ADAPTER_TYPE_D: {
void __iomem *mem_base0;
unsigned long addr, range, flags;
unsigned long addr, range;
addr = (unsigned long)pci_resource_start(pdev, 0);
range = pci_resource_len(pdev, 0);
flags = pci_resource_flags(pdev, 0);
mem_base0 = ioremap(addr, range);
if (!mem_base0) {
pr_notice("arcmsr%d: memory mapping region fail\n",
@ -1067,12 +1066,11 @@ static void arcmsr_free_irq(struct pci_dev *pdev,
static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)
{
uint32_t intmask_org;
struct Scsi_Host *host = pci_get_drvdata(pdev);
struct AdapterControlBlock *acb =
(struct AdapterControlBlock *)host->hostdata;
intmask_org = arcmsr_disable_outbound_ints(acb);
arcmsr_disable_outbound_ints(acb);
arcmsr_free_irq(pdev, acb);
del_timer_sync(&acb->eternal_timer);
if (set_date_time)
@ -1407,7 +1405,7 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
struct ARCMSR_CDB *pARCMSR_CDB;
bool error;
struct CommandControlBlock *pCCB;
unsigned long ccb_cdb_phy, cdb_phy_hipart;
unsigned long ccb_cdb_phy;
switch (acb->adapter_type) {
@ -1489,8 +1487,6 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
spin_unlock_irqrestore(&acb->doneq_lock, flags);
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
addressLow = pmu->done_qbuffer[doneq_index &
0xFFF].addressLow;
ccb_cdb_phy = (addressLow & 0xFFFFFFF0);
@ -2445,7 +2441,7 @@ static void arcmsr_hbaD_postqueue_isr(struct AdapterControlBlock *acb)
struct MessageUnit_D *pmu;
struct ARCMSR_CDB *arcmsr_cdb;
struct CommandControlBlock *ccb;
unsigned long flags, ccb_cdb_phy, cdb_phy_hipart;
unsigned long flags, ccb_cdb_phy;
spin_lock_irqsave(&acb->doneq_lock, flags);
pmu = acb->pmuD;
@ -2459,8 +2455,6 @@ static void arcmsr_hbaD_postqueue_isr(struct AdapterControlBlock *acb)
pmu->doneq_index = index_stripped ? (index_stripped | toggle) :
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
addressLow = pmu->done_qbuffer[doneq_index &
0xFFF].addressLow;
ccb_cdb_phy = (addressLow & 0xFFFFFFF0);
@ -3495,7 +3489,7 @@ static int arcmsr_hbaD_polling_ccbdone(struct AdapterControlBlock *acb,
bool error;
uint32_t poll_ccb_done = 0, poll_count = 0, flag_ccb;
int rtn, doneq_index, index_stripped, outbound_write_pointer, toggle;
unsigned long flags, ccb_cdb_phy, cdb_phy_hipart;
unsigned long flags, ccb_cdb_phy;
struct ARCMSR_CDB *arcmsr_cdb;
struct CommandControlBlock *pCCB;
struct MessageUnit_D *pmu = acb->pmuD;
@ -3527,8 +3521,6 @@ polling_hbaD_ccb_retry:
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
spin_unlock_irqrestore(&acb->doneq_lock, flags);
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
flag_ccb = pmu->done_qbuffer[doneq_index & 0xFFF].addressLow;
ccb_cdb_phy = (flag_ccb & 0xFFFFFFF0);
if (acb->cdb_phyadd_hipart)


@ -450,7 +450,7 @@ static int cumanascsi2_probe(struct expansion_card *ec,
if (info->info.scsi.dma != NO_DMA)
free_dma(info->info.scsi.dma);
free_irq(ec->irq, host);
free_irq(ec->irq, info);
out_release:
fas216_release(host);


@ -571,7 +571,7 @@ static int eesoxscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
if (info->info.scsi.dma != NO_DMA)
free_dma(info->info.scsi.dma);
free_irq(ec->irq, host);
free_irq(ec->irq, info);
out_remove:
fas216_remove(host);


@ -378,7 +378,7 @@ static int powertecscsi_probe(struct expansion_card *ec,
if (info->info.scsi.dma != NO_DMA)
free_dma(info->info.scsi.dma);
free_irq(ec->irq, host);
free_irq(ec->irq, info);
out_release:
fas216_release(host);

View File

@ -27,6 +27,7 @@ extern struct iscsi_transport beiscsi_iscsi_transport;
/**
* beiscsi_session_create - creates a new iscsi session
* @ep: pointer to iscsi ep
* @cmds_max: max commands supported
* @qdepth: max queue depth supported
* @initial_cmdsn: initial iscsi CMDSN
@ -164,6 +165,7 @@ beiscsi_conn_create(struct iscsi_cls_session *cls_session, u32 cid)
* @cls_session: pointer to iscsi cls session
* @cls_conn: pointer to iscsi cls conn
* @transport_fd: EP handle(64 bit)
* @is_leading: indicate if this is the session leading connection (MCS)
*
* This function binds the TCP Conn with iSCSI Connection and Session.
*/
@ -992,7 +994,7 @@ static void beiscsi_put_cid(struct beiscsi_hba *phba, unsigned short cid)
/**
* beiscsi_free_ep - free endpoint
* @ep: pointer to iscsi endpoint structure
* @beiscsi_ep: pointer to device endpoint struct
*/
static void beiscsi_free_ep(struct beiscsi_endpoint *beiscsi_ep)
{
@ -1027,9 +1029,10 @@ static void beiscsi_free_ep(struct beiscsi_endpoint *beiscsi_ep)
/**
* beiscsi_open_conn - Ask FW to open a TCP connection
* @ep: endpoint to be used
* @ep: pointer to device endpoint struct
* @src_addr: The source IP address
* @dst_addr: The Destination IP address
* @non_blocking: blocking or non-blocking call
*
* Asks the FW to open a TCP connection
*/
@ -1123,7 +1126,7 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
/**
* beiscsi_ep_connect - Ask chip to create TCP Conn
* @scsi_host: Pointer to scsi_host structure
* @shost: Pointer to scsi_host structure
* @dst_addr: The IP address of Target
* @non_blocking: blocking or non-blocking call
*
@ -1228,7 +1231,7 @@ static void beiscsi_flush_cq(struct beiscsi_hba *phba)
/**
* beiscsi_conn_close - Invalidate and upload connection
* @ep: The iscsi endpoint
* @beiscsi_ep: pointer to device endpoint struct
*
* Returns 0 on success, -1 on failure.
*/


@ -977,7 +977,7 @@ beiscsi_get_wrb_handle(struct hwi_wrb_context *pwrb_context,
* alloc_wrb_handle - To allocate a wrb handle
* @phba: The hba pointer
* @cid: The cid to use for allocation
* @pwrb_context: ptr to ptr to wrb context
* @pcontext: ptr to ptr to wrb context
*
* This happens under session_lock until submission to chip
*/
@ -1394,7 +1394,7 @@ static void hwi_complete_cmd(struct beiscsi_conn *beiscsi_conn,
spin_unlock_bh(&session->back_lock);
}
/**
/*
* ASYNC PDUs include
* a. Unsolicited NOP-In (target initiated NOP-In)
* b. ASYNC Messages


@ -97,6 +97,7 @@ unsigned int mgmt_vendor_specific_fw_cmd(struct be_ctrl_info *ctrl,
/**
* mgmt_open_connection()- Establish a TCP CXN
* @phba: driver priv structure
* @dst_addr: Destination Address
* @beiscsi_ep: ptr to device endpoint struct
* @nonemb_cmd: ptr to memory allocated for command
@ -209,7 +210,7 @@ int mgmt_open_connection(struct beiscsi_hba *phba,
return tag;
}
/*
/**
* beiscsi_exec_nemb_cmd()- execute non-embedded MBX cmd
* @phba: driver priv structure
* @nonemb_cmd: DMA address of the MBX command to be issued


@ -1237,7 +1237,7 @@ bfa_iocfc_disable_cb(void *bfa_arg, bfa_boolean_t compl)
complete(&bfad->disable_comp);
}
/**
/*
* configure queue registers from firmware response
*/
static void


@ -2335,9 +2335,7 @@ bfa_fcpim_lunmask_delete(struct bfa_s *bfa, u16 vf_id, wwn_t *pwwn,
wwn_t rpwwn, struct scsi_lun lun)
{
struct bfa_lun_mask_s *lunm_list;
struct bfa_rport_s *rp = NULL;
struct bfa_fcs_lport_s *port = NULL;
struct bfa_fcs_rport_s *rp_fcs;
int i;
/* in min cfg lunm_list could be NULL but no commands should run. */
@ -2353,12 +2351,8 @@ bfa_fcpim_lunmask_delete(struct bfa_s *bfa, u16 vf_id, wwn_t *pwwn,
port = bfa_fcs_lookup_port(
&((struct bfad_s *)bfa->bfad)->bfa_fcs,
vf_id, *pwwn);
if (port) {
if (port)
*pwwn = port->port_cfg.pwwn;
rp_fcs = bfa_fcs_lport_get_rport_by_pwwn(port, rpwwn);
if (rp_fcs)
rp = rp_fcs->bfa_rport;
}
}
lunm_list = bfa_get_lun_mask_list(bfa);
@ -3818,7 +3812,7 @@ bfa_iotag_attach(struct bfa_fcp_mod_s *fcp)
}
/**
/*
* To send config req, first try to use throttle value from flash
* If 0, then use driver parameter
* We need to use min(flash_val, drv_val) because


@ -2240,15 +2240,12 @@ bfa_fcs_rport_process_adisc(struct bfa_fcs_rport_s *rport,
struct bfa_fcxp_s *fcxp;
struct fchs_s fchs;
struct bfa_fcs_lport_s *port = rport->port;
struct fc_adisc_s *adisc;
bfa_trc(port->fcs, rx_fchs->s_id);
bfa_trc(port->fcs, rx_fchs->d_id);
rport->stats.adisc_rcvd++;
adisc = (struct fc_adisc_s *) (rx_fchs + 1);
/*
* Accept if the itnim for this rport is online.
* Else reject the ADISC.


@ -701,7 +701,7 @@ static void
bfa_iocpf_sm_fwcheck_entry(struct bfa_iocpf_s *iocpf)
{
struct bfi_ioc_image_hdr_s fwhdr;
u32 r32, fwstate, pgnum, pgoff, loff = 0;
u32 r32, fwstate, pgnum, loff = 0;
int i;
/*
@ -731,7 +731,6 @@ bfa_iocpf_sm_fwcheck_entry(struct bfa_iocpf_s *iocpf)
* Clear fwver hdr
*/
pgnum = PSS_SMEM_PGNUM(iocpf->ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, iocpf->ioc->ioc_regs.host_page_num_fn);
for (i = 0; i < sizeof(struct bfi_ioc_image_hdr_s) / sizeof(u32); i++) {
@ -1440,13 +1439,12 @@ bfa_ioc_lpu_stop(struct bfa_ioc_s *ioc)
void
bfa_ioc_fwver_get(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr)
{
u32 pgnum, pgoff;
u32 pgnum;
u32 loff = 0;
int i;
u32 *fwsig = (u32 *) fwhdr;
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
for (i = 0; i < (sizeof(struct bfi_ioc_image_hdr_s) / sizeof(u32));
@ -1662,7 +1660,7 @@ bfa_status_t
bfa_ioc_fwsig_invalidate(struct bfa_ioc_s *ioc)
{
u32 pgnum, pgoff;
u32 pgnum;
u32 loff = 0;
enum bfi_ioc_state ioc_fwstate;
@ -1671,7 +1669,6 @@ bfa_ioc_fwsig_invalidate(struct bfa_ioc_s *ioc)
return BFA_STATUS_ADAPTER_ENABLED;
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
bfa_mem_write(ioc->ioc_regs.smem_page_start, loff, BFA_IOC_FW_INV_SIGN);
@ -1863,7 +1860,7 @@ bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
u32 boot_env)
{
u32 *fwimg;
u32 pgnum, pgoff;
u32 pgnum;
u32 loff = 0;
u32 chunkno = 0;
u32 i;
@ -1892,8 +1889,6 @@ bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
for (i = 0; i < fwimg_size; i++) {
@ -4763,11 +4758,9 @@ bfa_diag_memtest_done(void *cbarg)
struct bfa_ioc_s *ioc = diag->ioc;
struct bfa_diag_memtest_result *res = diag->result;
u32 loff = BFI_BOOT_MEMTEST_RES_ADDR;
u32 pgnum, pgoff, i;
u32 pgnum, i;
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
for (i = 0; i < (sizeof(struct bfa_diag_memtest_result) /
@ -5026,7 +5019,7 @@ diag_portbeacon_comp(struct bfa_diag_s *diag)
/*
* Diag hmbox handler
*/
void
static void
bfa_diag_intr(void *diagarg, struct bfi_mbmsg_s *msg)
{
struct bfa_diag_s *diag = diagarg;
@ -6649,8 +6642,8 @@ enum bfa_flash_cmd {
BFA_FLASH_READ_STATUS = 0x05, /* read status */
};
/**
* @brief hardware error definition
/*
* Hardware error definition
*/
enum bfa_flash_err {
BFA_FLASH_NOT_PRESENT = -1, /*!< flash not present */
@ -6664,8 +6657,8 @@ enum bfa_flash_err {
BFA_FLASH_ERR_LEN = -9, /*!< invalid length */
};
/**
* @brief flash command register data structure
/*
* Flash command register data structure
*/
union bfa_flash_cmd_reg_u {
struct {
@ -6688,8 +6681,8 @@ union bfa_flash_cmd_reg_u {
u32 i;
};
/**
* @brief flash device status register data structure
/*
* Flash device status register data structure
*/
union bfa_flash_dev_status_reg_u {
struct {
@ -6714,8 +6707,8 @@ union bfa_flash_dev_status_reg_u {
u32 i;
};
/**
* @brief flash address register data structure
/*
* Flash address register data structure
*/
union bfa_flash_addr_reg_u {
struct {
@ -6730,7 +6723,7 @@ union bfa_flash_addr_reg_u {
u32 i;
};
/**
/*
* dg flash_raw_private Flash raw private functions
*/
static void
@ -6771,7 +6764,7 @@ bfa_flash_cmd_act_check(void __iomem *pci_bar)
return 0;
}
/**
/*
* @brief
* Flush FLI data fifo.
*
@ -6784,7 +6777,6 @@ static u32
bfa_flash_fifo_flush(void __iomem *pci_bar)
{
u32 i;
u32 t;
union bfa_flash_dev_status_reg_u dev_status;
dev_status.i = readl(pci_bar + FLI_DEV_STATUS_REG);
@ -6794,7 +6786,7 @@ bfa_flash_fifo_flush(void __iomem *pci_bar)
/* fifo counter in terms of words */
for (i = 0; i < dev_status.r.fifo_cnt; i++)
t = readl(pci_bar + FLI_RDDATA_REG);
readl(pci_bar + FLI_RDDATA_REG);
/*
* Check the device status. It may take some time.
@ -6811,7 +6803,7 @@ bfa_flash_fifo_flush(void __iomem *pci_bar)
return 0;
}
/**
/*
* @brief
* Read flash status.
*
@ -6856,7 +6848,7 @@ bfa_flash_status_read(void __iomem *pci_bar)
return ret_status;
}
/**
/*
* @brief
* Start flash read operation.
*
@ -6902,7 +6894,7 @@ bfa_flash_read_start(void __iomem *pci_bar, u32 offset, u32 len,
return 0;
}
/**
/*
* @brief
* Check flash read operation.
*
@ -6918,7 +6910,8 @@ bfa_flash_read_check(void __iomem *pci_bar)
return 0;
}
/**
/*
* @brief
* End flash read operation.
*
@ -6944,7 +6937,7 @@ bfa_flash_read_end(void __iomem *pci_bar, u32 len, char *buf)
bfa_flash_fifo_flush(pci_bar);
}
/**
/*
* @brief
* Perform flash raw read.
*
@ -6970,7 +6963,7 @@ bfa_raw_sem_get(void __iomem *bar)
}
bfa_status_t
static bfa_status_t
bfa_flash_sem_get(void __iomem *bar)
{
u32 n = FLASH_BLOCKING_OP_MAX;
@ -6983,7 +6976,7 @@ bfa_flash_sem_get(void __iomem *bar)
return BFA_STATUS_OK;
}
void
static void
bfa_flash_sem_put(void __iomem *bar)
{
writel(0, (bar + FLASH_SEM_LOCK_REG));


@ -496,7 +496,7 @@ bfa_ioc_ct_sync_complete(struct bfa_ioc_s *ioc)
return BFA_FALSE;
}
/**
/*
* Called from bfa_ioc_attach() to map asic specific calls.
*/
static void
@ -517,7 +517,7 @@ bfa_ioc_set_ctx_hwif(struct bfa_ioc_s *ioc, struct bfa_ioc_hwif_s *hwif)
hwif->ioc_get_alt_fwstate = bfa_ioc_ct_get_alt_ioc_fwstate;
}
/**
/*
* Called from bfa_ioc_attach() to map asic specific calls.
*/
void
@ -532,7 +532,7 @@ bfa_ioc_set_ct_hwif(struct bfa_ioc_s *ioc)
ioc->ioc_hwif = &hwif_ct;
}
/**
/*
* Called from bfa_ioc_attach() to map asic specific calls.
*/
void


@ -756,7 +756,7 @@ bfa_cee_reset_stats(struct bfa_cee_s *cee,
* @return void
*/
void
static void
bfa_cee_isr(void *cbarg, struct bfi_mbmsg_s *m)
{
union bfi_cee_i2h_msg_u *msg;
@ -792,7 +792,7 @@ bfa_cee_isr(void *cbarg, struct bfi_mbmsg_s *m)
* @return void
*/
void
static void
bfa_cee_notify(void *arg, enum bfa_ioc_event_e event)
{
struct bfa_cee_s *cee = (struct bfa_cee_s *) arg;


@ -2718,7 +2718,7 @@ bfa_fcport_sm_ddport(struct bfa_fcport_s *fcport,
case BFA_FCPORT_SM_DPORTDISABLE:
case BFA_FCPORT_SM_ENABLE:
case BFA_FCPORT_SM_START:
/**
/*
* Ignore event for a port that is ddport
*/
break;
@ -3839,7 +3839,7 @@ bfa_fcport_get_topology(struct bfa_s *bfa)
return fcport->topology;
}
/**
/*
* Get config topology.
*/
enum bfa_port_topology


@ -15,7 +15,7 @@
BFA_TRC_FILE(LDRV, BSG);
int
static int
bfad_iocmd_ioc_enable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -38,7 +38,7 @@ bfad_iocmd_ioc_enable(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_ioc_disable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -146,7 +146,7 @@ bfad_iocmd_ioc_get_stats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_ioc_get_fwstats(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -176,7 +176,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_ioc_reset_stats(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -194,7 +194,7 @@ bfad_iocmd_ioc_reset_stats(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_ioc_set_name(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_ioc_name_s *iocmd = (struct bfa_bsg_ioc_name_s *) cmd;
@ -208,7 +208,7 @@ bfad_iocmd_ioc_set_name(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_iocfc_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_iocfc_attr_s *iocmd = (struct bfa_bsg_iocfc_attr_s *)cmd;
@ -219,7 +219,7 @@ bfad_iocmd_iocfc_get_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_ioc_fw_sig_inv(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -231,7 +231,7 @@ bfad_iocmd_ioc_fw_sig_inv(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_iocfc_set_intr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_iocfc_intr_s *iocmd = (struct bfa_bsg_iocfc_intr_s *)cmd;
@ -244,7 +244,7 @@ bfad_iocmd_iocfc_set_intr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_port_enable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -265,7 +265,7 @@ bfad_iocmd_port_enable(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_port_disable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -315,7 +315,7 @@ bfad_iocmd_port_get_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_port_get_stats(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -349,7 +349,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_port_reset_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -370,7 +370,7 @@ bfad_iocmd_port_reset_stats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_set_port_cfg(struct bfad_s *bfad, void *iocmd, unsigned int v_cmd)
{
struct bfa_bsg_port_cfg_s *cmd = (struct bfa_bsg_port_cfg_s *)iocmd;
@ -390,7 +390,7 @@ bfad_iocmd_set_port_cfg(struct bfad_s *bfad, void *iocmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_port_cfg_maxfrsize(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_port_cfg_maxfrsize_s *iocmd =
@ -404,7 +404,7 @@ bfad_iocmd_port_cfg_maxfrsize(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_port_cfg_bbcr(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
{
struct bfa_bsg_bbcr_enable_s *iocmd =
@ -427,7 +427,7 @@ bfad_iocmd_port_cfg_bbcr(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_port_get_bbcr_attr(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_bbcr_attr_s *iocmd = (struct bfa_bsg_bbcr_attr_s *) pcmd;
@ -465,7 +465,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_lport_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_lport_s *fcs_port;
@ -489,7 +489,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_lport_reset_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_lport_s *fcs_port;
@ -523,7 +523,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_lport_get_iostats(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_lport_s *fcs_port;
@ -548,7 +548,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_lport_get_rports(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -590,7 +590,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_rport_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_rport_attr_s *iocmd = (struct bfa_bsg_rport_attr_s *)cmd;
@ -676,7 +676,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_rport_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_rport_stats_s *iocmd =
@ -717,7 +717,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_rport_clr_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_rport_reset_stats_s *iocmd =
@ -753,7 +753,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_rport_set_speed(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_rport_set_speed_s *iocmd =
@ -789,7 +789,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vport_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_vport_s *fcs_vport;
@ -812,7 +812,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vport_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_vport_s *fcs_vport;
@ -840,7 +840,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vport_clr_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_fcs_vport_s *fcs_vport;
@ -907,7 +907,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_qos_set_bw(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_qos_bw_s *iocmd = (struct bfa_bsg_qos_bw_s *)pcmd;
@ -920,7 +920,7 @@ bfad_iocmd_qos_set_bw(struct bfad_s *bfad, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_ratelim(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)pcmd;
@ -949,7 +949,7 @@ bfad_iocmd_ratelim(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_ratelim_speed(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
{
struct bfa_bsg_trl_speed_s *iocmd = (struct bfa_bsg_trl_speed_s *)pcmd;
@ -978,7 +978,7 @@ bfad_iocmd_ratelim_speed(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_cfg_fcpim(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_s *iocmd = (struct bfa_bsg_fcpim_s *)cmd;
@ -991,7 +991,7 @@ bfad_iocmd_cfg_fcpim(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_get_modstats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_modstats_s *iocmd =
@ -1013,7 +1013,7 @@ bfad_iocmd_fcpim_get_modstats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_clr_modstats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_modstatsclr_s *iocmd =
@ -1035,7 +1035,7 @@ bfad_iocmd_fcpim_clr_modstats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_get_del_itn_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_del_itn_stats_s *iocmd =
@ -1160,7 +1160,7 @@ bfad_iocmd_itnim_get_itnstats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcport_enable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -1173,7 +1173,7 @@ bfad_iocmd_fcport_enable(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcport_disable(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -1186,7 +1186,7 @@ bfad_iocmd_fcport_disable(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_ioc_get_pcifn_cfg(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_pcifn_cfg_s *iocmd = (struct bfa_bsg_pcifn_cfg_s *)cmd;
@ -1208,7 +1208,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_pcifn_create(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_pcifn_s *iocmd = (struct bfa_bsg_pcifn_s *)cmd;
@ -1231,7 +1231,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_pcifn_delete(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_pcifn_s *iocmd = (struct bfa_bsg_pcifn_s *)cmd;
@ -1253,7 +1253,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_pcifn_bw(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_pcifn_s *iocmd = (struct bfa_bsg_pcifn_s *)cmd;
@ -1277,7 +1277,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_adapter_cfg_mode(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_adapter_cfg_mode_s *iocmd =
@ -1300,7 +1300,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_port_cfg_mode(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_port_cfg_mode_s *iocmd =
@ -1324,7 +1324,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_ablk_optrom(struct bfad_s *bfad, unsigned int cmd, void *pcmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)pcmd;
@ -1350,7 +1350,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_faa_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_faa_attr_s *iocmd = (struct bfa_bsg_faa_attr_s *)cmd;
@ -1373,7 +1373,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_cee_attr(struct bfad_s *bfad, void *cmd, unsigned int payload_len)
{
struct bfa_bsg_cee_attr_s *iocmd =
@ -1409,7 +1409,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_cee_get_stats(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -1446,7 +1446,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_cee_reset_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -1460,7 +1460,7 @@ bfad_iocmd_cee_reset_stats(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_sfp_media(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_sfp_media_s *iocmd = (struct bfa_bsg_sfp_media_s *)cmd;
@ -1482,7 +1482,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_sfp_speed(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_sfp_speed_s *iocmd = (struct bfa_bsg_sfp_speed_s *)cmd;
@ -1503,7 +1503,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_flash_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_flash_attr_s *iocmd =
@ -1524,7 +1524,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_flash_erase_part(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_flash_s *iocmd = (struct bfa_bsg_flash_s *)cmd;
@ -1544,7 +1544,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_flash_update_part(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -1576,7 +1576,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_flash_read_part(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -1608,7 +1608,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_temp(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_get_temp_s *iocmd =
@ -1630,7 +1630,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_memtest(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_memtest_s *iocmd =
@ -1653,7 +1653,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_loopback(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_loopback_s *iocmd =
@ -1676,7 +1676,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_fwping(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_fwping_s *iocmd =
@ -1700,7 +1700,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_queuetest(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_qtest_s *iocmd = (struct bfa_bsg_diag_qtest_s *)cmd;
@ -1721,7 +1721,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_sfp(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_sfp_show_s *iocmd =
@ -1744,7 +1744,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_diag_led(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_led_s *iocmd = (struct bfa_bsg_diag_led_s *)cmd;
@ -1757,7 +1757,7 @@ bfad_iocmd_diag_led(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_diag_beacon_lport(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_beacon_s *iocmd =
@ -1772,7 +1772,7 @@ bfad_iocmd_diag_beacon_lport(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_diag_lb_stat(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_diag_lb_stat_s *iocmd =
@ -1787,7 +1787,7 @@ bfad_iocmd_diag_lb_stat(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_diag_dport_enable(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_dport_enable_s *iocmd =
@ -1809,7 +1809,7 @@ bfad_iocmd_diag_dport_enable(struct bfad_s *bfad, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_diag_dport_disable(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)pcmd;
@ -1829,7 +1829,7 @@ bfad_iocmd_diag_dport_disable(struct bfad_s *bfad, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_diag_dport_start(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_dport_enable_s *iocmd =
@ -1854,7 +1854,7 @@ bfad_iocmd_diag_dport_start(struct bfad_s *bfad, void *pcmd)
return 0;
}
int
static int
bfad_iocmd_diag_dport_show(struct bfad_s *bfad, void *pcmd)
{
struct bfa_bsg_diag_dport_show_s *iocmd =
@ -1869,7 +1869,7 @@ bfad_iocmd_diag_dport_show(struct bfad_s *bfad, void *pcmd)
}
int
static int
bfad_iocmd_phy_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_phy_attr_s *iocmd =
@ -1890,7 +1890,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_phy_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_phy_stats_s *iocmd =
@ -1911,7 +1911,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_phy_read(struct bfad_s *bfad, void *cmd, unsigned int payload_len)
{
struct bfa_bsg_phy_s *iocmd = (struct bfa_bsg_phy_s *)cmd;
@ -1943,7 +1943,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vhba_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_vhba_attr_s *iocmd =
@ -1962,7 +1962,7 @@ bfad_iocmd_vhba_query(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_phy_update(struct bfad_s *bfad, void *cmd, unsigned int payload_len)
{
struct bfa_bsg_phy_s *iocmd = (struct bfa_bsg_phy_s *)cmd;
@ -1992,7 +1992,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_porglog_get(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_debug_s *iocmd = (struct bfa_bsg_debug_s *)cmd;
@ -2012,7 +2012,7 @@ out:
}
#define BFA_DEBUG_FW_CORE_CHUNK_SZ 0x4000U /* 16K chunks for FW dump */
int
static int
bfad_iocmd_debug_fw_core(struct bfad_s *bfad, void *cmd,
unsigned int payload_len)
{
@ -2046,7 +2046,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_debug_ctl(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -2067,7 +2067,7 @@ bfad_iocmd_debug_ctl(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_porglog_ctl(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_portlogctl_s *iocmd = (struct bfa_bsg_portlogctl_s *)cmd;
@ -2081,7 +2081,7 @@ bfad_iocmd_porglog_ctl(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_cfg_profile(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_fcpim_profile_s *iocmd =
@ -2125,7 +2125,7 @@ bfad_iocmd_itnim_get_ioprofile(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcport_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcport_stats_s *iocmd =
@ -2150,7 +2150,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_fcport_reset_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -2174,7 +2174,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_boot_cfg(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_boot_s *iocmd = (struct bfa_bsg_boot_s *)cmd;
@ -2196,7 +2196,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_boot_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_boot_s *iocmd = (struct bfa_bsg_boot_s *)cmd;
@ -2218,7 +2218,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_preboot_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_preboot_s *iocmd = (struct bfa_bsg_preboot_s *)cmd;
@ -2237,7 +2237,7 @@ bfad_iocmd_preboot_query(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_ethboot_cfg(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_ethboot_s *iocmd = (struct bfa_bsg_ethboot_s *)cmd;
@ -2260,7 +2260,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_ethboot_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_ethboot_s *iocmd = (struct bfa_bsg_ethboot_s *)cmd;
@ -2283,7 +2283,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_cfg_trunk(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -2323,7 +2323,7 @@ bfad_iocmd_cfg_trunk(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_trunk_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_trunk_attr_s *iocmd = (struct bfa_bsg_trunk_attr_s *)cmd;
@ -2346,7 +2346,7 @@ bfad_iocmd_trunk_get_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_qos(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -2374,7 +2374,7 @@ bfad_iocmd_qos(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_qos_get_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_qos_attr_s *iocmd = (struct bfa_bsg_qos_attr_s *)cmd;
@ -2400,7 +2400,7 @@ bfad_iocmd_qos_get_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_qos_get_vc_attr(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_qos_vc_attr_s *iocmd =
@ -2432,7 +2432,7 @@ bfad_iocmd_qos_get_vc_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_qos_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcport_stats_s *iocmd =
@ -2464,7 +2464,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_qos_reset_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
@ -2495,7 +2495,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vf_get_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_vf_stats_s *iocmd =
@ -2518,7 +2518,7 @@ out:
return 0;
}
int
static int
bfad_iocmd_vf_clr_stats(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_vf_reset_stats_s *iocmd =
@ -2555,7 +2555,7 @@ bfad_iocmd_lunmask_reset_lunscan_mode(struct bfad_s *bfad, int lunmask_cfg)
bfad_reset_sdev_bflags(vport->drv_port.im_port, lunmask_cfg);
}
int
static int
bfad_iocmd_lunmask(struct bfad_s *bfad, void *pcmd, unsigned int v_cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)pcmd;
@ -2578,7 +2578,7 @@ bfad_iocmd_lunmask(struct bfad_s *bfad, void *pcmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_lunmask_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_lunmask_query_s *iocmd =
@ -2592,7 +2592,7 @@ bfad_iocmd_fcpim_lunmask_query(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_cfg_lunmask(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
{
struct bfa_bsg_fcpim_lunmask_s *iocmd =
@ -2611,7 +2611,7 @@ bfad_iocmd_fcpim_cfg_lunmask(struct bfad_s *bfad, void *cmd, unsigned int v_cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_throttle_query(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_throttle_s *iocmd =
@ -2626,7 +2626,7 @@ bfad_iocmd_fcpim_throttle_query(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fcpim_throttle_set(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fcpim_throttle_s *iocmd =
@ -2641,7 +2641,7 @@ bfad_iocmd_fcpim_throttle_set(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_tfru_read(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_tfru_s *iocmd =
@ -2663,7 +2663,7 @@ bfad_iocmd_tfru_read(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_tfru_write(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_tfru_s *iocmd =
@ -2685,7 +2685,7 @@ bfad_iocmd_tfru_write(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fruvpd_read(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fruvpd_s *iocmd =
@ -2707,7 +2707,7 @@ bfad_iocmd_fruvpd_read(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fruvpd_update(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fruvpd_s *iocmd =
@ -2729,7 +2729,7 @@ bfad_iocmd_fruvpd_update(struct bfad_s *bfad, void *cmd)
return 0;
}
int
static int
bfad_iocmd_fruvpd_get_max_size(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_fruvpd_max_size_s *iocmd =
@ -3177,7 +3177,7 @@ out:
}
/* FC passthru call backs */
u64
static u64
bfad_fcxp_get_req_sgaddr_cb(void *bfad_fcxp, int sgeid)
{
struct bfad_fcxp *drv_fcxp = bfad_fcxp;
@ -3189,7 +3189,7 @@ bfad_fcxp_get_req_sgaddr_cb(void *bfad_fcxp, int sgeid)
return addr;
}
u32
static u32
bfad_fcxp_get_req_sglen_cb(void *bfad_fcxp, int sgeid)
{
struct bfad_fcxp *drv_fcxp = bfad_fcxp;
@ -3199,7 +3199,7 @@ bfad_fcxp_get_req_sglen_cb(void *bfad_fcxp, int sgeid)
return sge->sg_len;
}
u64
static u64
bfad_fcxp_get_rsp_sgaddr_cb(void *bfad_fcxp, int sgeid)
{
struct bfad_fcxp *drv_fcxp = bfad_fcxp;
@ -3211,7 +3211,7 @@ bfad_fcxp_get_rsp_sgaddr_cb(void *bfad_fcxp, int sgeid)
return addr;
}
u32
static u32
bfad_fcxp_get_rsp_sglen_cb(void *bfad_fcxp, int sgeid)
{
struct bfad_fcxp *drv_fcxp = bfad_fcxp;
@ -3221,7 +3221,7 @@ bfad_fcxp_get_rsp_sglen_cb(void *bfad_fcxp, int sgeid)
return sge->sg_len;
}
void
static void
bfad_send_fcpt_cb(void *bfad_fcxp, struct bfa_fcxp_s *fcxp, void *cbarg,
bfa_status_t req_status, u32 rsp_len, u32 resid_len,
struct fchs_s *rsp_fchs)
@ -3236,7 +3236,7 @@ bfad_send_fcpt_cb(void *bfad_fcxp, struct bfa_fcxp_s *fcxp, void *cbarg,
complete(&drv_fcxp->comp);
}
struct bfad_buf_info *
static struct bfad_buf_info *
bfad_fcxp_map_sg(struct bfad_s *bfad, void *payload_kbuf,
uint32_t payload_len, uint32_t *num_sgles)
{
@ -3280,7 +3280,7 @@ out_free_mem:
return NULL;
}
void
static void
bfad_fcxp_free_mem(struct bfad_s *bfad, struct bfad_buf_info *buf_base,
uint32_t num_sgles)
{
@ -3298,7 +3298,7 @@ bfad_fcxp_free_mem(struct bfad_s *bfad, struct bfad_buf_info *buf_base,
}
}
int
static int
bfad_fcxp_bsg_send(struct bsg_job *job, struct bfad_fcxp *drv_fcxp,
bfa_bsg_fcpt_t *bsg_fcpt)
{
@ -3338,7 +3338,7 @@ bfad_fcxp_bsg_send(struct bsg_job *job, struct bfad_fcxp *drv_fcxp,
return BFA_STATUS_OK;
}
int
static int
bfad_im_bsg_els_ct_request(struct bsg_job *job)
{
struct bfa_bsg_data *bsg_data;


@ -1071,9 +1071,8 @@ static int bnx2fc_fip_recv(struct sk_buff *skb, struct net_device *dev,
/**
* bnx2fc_update_src_mac - Update Ethernet MAC filters.
*
* @fip: FCoE controller.
* @old: Unicast MAC address to delete if the MAC is non-zero.
* @new: Unicast MAC address to add.
* @lport: The local port
* @addr: Location of data to copy
*
* Remove any previously-set unicast MAC filter.
* Add secondary FCoE MAC address filter for our OUI.
@ -1659,8 +1658,7 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
/**
* bnx2fc_destroy - Destroy a bnx2fc FCoE interface
*
* @buffer: The name of the Ethernet interface to be destroyed
* @kp: The associated kernel parameter
* @netdev: The net device that the FCoE interface is on
*
* Called from sysfs.
*
@ -2101,7 +2099,7 @@ static int __bnx2fc_disable(struct fcoe_ctlr *ctlr)
return 0;
}
/**
/*
* Deprecated: Use bnx2fc_enabled()
*/
static int bnx2fc_disable(struct net_device *netdev)
@ -2229,7 +2227,7 @@ done:
return 0;
}
/**
/*
* Deprecated: Use bnx2fc_enabled()
*/
static int bnx2fc_enable(struct net_device *netdev)
@ -2523,7 +2521,7 @@ static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device
/**
* bnx2fc_ulp_exit - shuts down adapter instance and frees all resources
*
* @dev cnic device handle
* @dev: cnic device handle
*/
static void bnx2fc_ulp_exit(struct cnic_dev *dev)
{
@ -2956,7 +2954,7 @@ static struct device_attribute *bnx2fc_host_attrs[] = {
NULL,
};
/**
/*
* scsi_host_template structure used while registering with SCSI-ml
*/
static struct scsi_host_template bnx2fc_shost_template = {
@ -2989,7 +2987,7 @@ static struct libfc_function_template bnx2fc_libfc_fcn_templ = {
.rport_event_callback = bnx2fc_rport_event_handler,
};
/**
/*
* bnx2fc_cnic_cb - global template of bnx2fc - cnic driver interface
* structure carrying callback function pointers
*/


@ -485,7 +485,7 @@ int bnx2fc_send_session_disable_req(struct fcoe_port *port,
/**
* bnx2fc_send_session_destroy_req - initiates FCoE Session destroy
*
* @port: port structure pointer
* @hba: adapter structure pointer
* @tgt: bnx2fc_rport structure pointer
*/
int bnx2fc_send_session_destroy_req(struct bnx2fc_hba *hba,
@ -635,7 +635,6 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
struct bnx2fc_cmd *io_req = NULL;
struct bnx2fc_interface *interface = tgt->port->priv;
struct bnx2fc_hba *hba = interface->hba;
int task_idx, index;
int rc = 0;
u64 err_warn_bit_map;
u8 err_warn = 0xff;
@ -701,15 +700,12 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
BNX2FC_TGT_DBG(tgt, "buf_offsets - tx = 0x%x, rx = 0x%x\n",
err_entry->data.tx_buf_off, err_entry->data.rx_buf_off);
if (xid > hba->max_xid) {
BNX2FC_TGT_DBG(tgt, "xid(0x%x) out of FW range\n",
xid);
goto ret_err_rqe;
}
task_idx = xid / BNX2FC_TASKS_PER_PAGE;
index = xid % BNX2FC_TASKS_PER_PAGE;
io_req = (struct bnx2fc_cmd *)hba->cmd_mgr->cmds[xid];
if (!io_req)
@ -833,8 +829,6 @@ ret_err_rqe:
}
BNX2FC_TGT_DBG(tgt, "warn = 0x%x\n", err_warn);
task_idx = xid / BNX2FC_TASKS_PER_PAGE;
index = xid % BNX2FC_TASKS_PER_PAGE;
io_req = (struct bnx2fc_cmd *)hba->cmd_mgr->cmds[xid];
if (!io_req)
goto ret_warn_rqe;
@ -1008,7 +1002,6 @@ static bool bnx2fc_pending_work(struct bnx2fc_rport *tgt, unsigned int wqe)
unsigned char *rq_data = NULL;
unsigned char rq_data_buff[BNX2FC_RQ_BUF_SZ];
int task_idx, index;
unsigned char *dummy;
u16 xid;
u8 num_rq;
int i;
@ -1038,7 +1031,7 @@ static bool bnx2fc_pending_work(struct bnx2fc_rport *tgt, unsigned int wqe)
if (num_rq > 1) {
/* We do not need extra sense data */
for (i = 1; i < num_rq; i++)
dummy = bnx2fc_get_next_rqe(tgt, 1);
bnx2fc_get_next_rqe(tgt, 1);
}
if (rq_data)
@ -1341,8 +1334,8 @@ static void bnx2fc_init_failure(struct bnx2fc_hba *hba, u32 err_code)
/**
* bnx2fc_indicate_kcqe - process KCQE
*
* @hba: adapter structure pointer
* @kcqe: kcqe pointer
* @context: adapter structure pointer
* @kcq: kcqe pointer
* @num_cqe: Number of completion queue elements
*
* Generic KCQ event handler
@ -1510,7 +1503,6 @@ void bnx2fc_init_seq_cleanup_task(struct bnx2fc_cmd *seq_clnp_req,
u64 phys_addr = (u64)orig_io_req->bd_tbl->bd_tbl_dma;
u32 orig_offset = offset;
int bd_count;
int orig_task_idx, index;
int i;
memset(task, 0, sizeof(struct fcoe_task_ctx_entry));
@ -1560,8 +1552,6 @@ void bnx2fc_init_seq_cleanup_task(struct bnx2fc_cmd *seq_clnp_req,
offset; /* adjusted offset */
task->txwr_only.sgl_ctx.sgl.mul_sgl.cur_sge_idx = i;
} else {
orig_task_idx = orig_xid / BNX2FC_TASKS_PER_PAGE;
index = orig_xid % BNX2FC_TASKS_PER_PAGE;
/* Multiple SGEs were used for this IO */
sgl = &task->rxwr_only.union_ctx.read_info.sgl_ctx.sgl;
@ -2089,11 +2079,7 @@ static int bnx2fc_allocate_hash_table(struct bnx2fc_hba *hba)
pbl = hba->hash_tbl_pbl;
i = 0;
while (*pbl && *(pbl + 1)) {
u32 lo;
u32 hi;
lo = *pbl;
++pbl;
hi = *pbl;
++pbl;
++i;
}
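With the dead lo/hi loads gone, the loop's only remaining job is to count populated page-buffer-list entries; each entry is a 32-bit low/high address pair, hence the double pointer bump. A sketch of the surviving shape:

	while (*pbl && *(pbl + 1)) {	/* stop at the first empty lo/hi pair */
		pbl += 2;		/* skip one (lo, hi) address pair */
		++i;			/* i ends up holding the entry count */
	}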


@ -431,7 +431,7 @@ static int bnx2fc_init_tgt(struct bnx2fc_rport *tgt,
return 0;
}
/**
/*
* This event_callback is called after successful completion of libfc
* initiated target login. bnx2fc can proceed with initiating the session
* establishment.
@ -656,9 +656,8 @@ static void bnx2fc_free_conn_id(struct bnx2fc_hba *hba, u32 conn_id)
spin_unlock_bh(&hba->hba_lock);
}
/**
*bnx2fc_alloc_session_resc - Allocate qp resources for the session
*
/*
* bnx2fc_alloc_session_resc - Allocate qp resources for the session
*/
static int bnx2fc_alloc_session_resc(struct bnx2fc_hba *hba,
struct bnx2fc_rport *tgt)


@ -181,7 +181,7 @@ int bnx2i_arm_cq_event_coalescing(struct bnx2i_endpoint *ep, u8 action)
/**
* bnx2i_get_rq_buf - copy RQ buffer contents to driver buffer
* @conn: iscsi connection on which RQ event occurred
* @bnx2i_conn: iscsi connection on which RQ event occurred
* @ptr: driver buffer to which RQ buffer contents are to
* be copied
* @len: length of valid data inside RQ buf
@ -223,7 +223,7 @@ static void bnx2i_ring_577xx_doorbell(struct bnx2i_conn *conn)
/**
* bnx2i_put_rq_buf - Replenish RQ buffer and, if required, ring the chip doorbell
* @conn: iscsi connection on which event to post
* @bnx2i_conn: iscsi connection on which event to post
* @count: number of RQ buffer being posted to chip
*
* No need to ring hardware doorbell for 57710 family of devices
@ -258,7 +258,7 @@ void bnx2i_put_rq_buf(struct bnx2i_conn *bnx2i_conn, int count)
/**
* bnx2i_ring_sq_dbell - Ring SQ doorbell to wake-up the processing engine
* @conn: iscsi connection to which new SQ entries belong
* @bnx2i_conn: iscsi connection to which new SQ entries belong
* @count: number of SQ WQEs to post
*
* SQ DB is updated in host memory and TX Doorbell is rung for 57710 family
@ -283,7 +283,7 @@ static void bnx2i_ring_sq_dbell(struct bnx2i_conn *bnx2i_conn, int count)
/**
* bnx2i_ring_dbell_update_sq_params - update SQ driver parameters
* @conn: iscsi connection to which new SQ entries belong
* @bnx2i_conn: iscsi connection to which new SQ entries belong
* @count: number of SQ WQEs to post
*
* this routine will update SQ driver parameters and ring the doorbell
@ -320,9 +320,9 @@ static void bnx2i_ring_dbell_update_sq_params(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_login - post iSCSI login request MP WQE to hardware
* @conn: iscsi connection
* @cmd: driver command structure which is requesting
* a WQE to be sent to chip for further processing
* @bnx2i_conn: iscsi connection
* @task: transport layer's command structure pointer which is requesting
* a WQE to be sent to chip for further processing
*
* prepare and post an iSCSI Login request WQE to CNIC firmware
*/
@ -373,7 +373,7 @@ int bnx2i_send_iscsi_login(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_tmf - post iSCSI task management request MP WQE to hardware
* @conn: iscsi connection
* @bnx2i_conn: iscsi connection
* @mtask: driver command structure which is requesting
* a WQE to be sent to chip for further processing
*
@ -447,7 +447,7 @@ int bnx2i_send_iscsi_tmf(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_text - post iSCSI text WQE to hardware
* @conn: iscsi connection
* @bnx2i_conn: iscsi connection
* @mtask: driver command structure which is requesting
* a WQE to be sent to chip for further processing
*
@ -495,7 +495,7 @@ int bnx2i_send_iscsi_text(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_scsicmd - post iSCSI scsicmd request WQE to hardware
* @conn: iscsi connection
* @bnx2i_conn: iscsi connection
* @cmd: driver command structure which is requesting
* a WQE to be sent to chip for further processing
*
@ -517,9 +517,9 @@ int bnx2i_send_iscsi_scsicmd(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_nopout - post iSCSI NOPOUT request WQE to hardware
* @conn: iscsi connection
* @cmd: driver command structure which is requesting
* a WQE to be sent to chip for further processing
* @bnx2i_conn: iscsi connection
* @task: transport layer's command structure pointer which is
* requesting a WQE to be sent to chip for further processing
* @datap: payload buffer pointer
* @data_len: payload data length
* @unsol: indicates whether nopout pdu is unsolicited pdu or
@ -579,9 +579,9 @@ int bnx2i_send_iscsi_nopout(struct bnx2i_conn *bnx2i_conn,
/**
* bnx2i_send_iscsi_logout - post iSCSI logout request WQE to hardware
* @conn: iscsi connection
* @cmd: driver command structure which is requesting
* a WQE to be sent to chip for further processing
* @bnx2i_conn: iscsi connection
* @task: transport layer's command structure pointer which is
* requesting a WQE to be sent to chip for further processing
*
* prepare and post logout request WQE to CNIC firmware
*/
@ -678,7 +678,8 @@ void bnx2i_update_iscsi_conn(struct iscsi_conn *conn)
/**
* bnx2i_ep_ofld_timer - post iSCSI logout request WQE to hardware
* @data: endpoint (transport handle) structure pointer
* @t: timer context used to fetch the endpoint (transport
* handle) structure pointer
*
* routine to handle connection offload/destroy request timeout
*/
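The new @t parameter reflects the kernel's timer_setup() conversion: the callback receives the timer itself and recovers its container with from_timer(). A minimal sketch of the assumed shape (the ofld_timer field name is taken on trust from the driver):

static void bnx2i_ep_ofld_timer(struct timer_list *t)
{
	struct bnx2i_endpoint *ep = from_timer(ep, t, ofld_timer);

	/* ... mark the offload/destroy request as timed out for ep ... */
}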
@ -1662,7 +1663,7 @@ static void bnx2i_process_nopin_local_cmpl(struct iscsi_session *session,
/**
* bnx2i_unsol_pdu_adjust_rq - makes adjustments to RQ after unsol pdu is recvd
* @conn: iscsi connection
* @bnx2i_conn: iscsi connection
*
* Firmware advances RQ producer index for every unsolicited PDU even if
* payload data length is '0'. This function makes corresponding
@ -1885,7 +1886,9 @@ int bnx2i_percpu_io_thread(void *arg)
/**
* bnx2i_queue_scsi_cmd_resp - queue cmd completion to the percpu thread
* @session: iscsi session
* @bnx2i_conn: bnx2i connection
* @cqe: pointer to newly DMA'ed CQE entry for processing
*
* this function is called by generic KCQ handler to queue all pending cmd
* completion CQEs
@ -2466,8 +2469,9 @@ static void bnx2i_process_ofld_cmpl(struct bnx2i_hba *hba,
/**
* bnx2i_indicate_kcqe - process iscsi conn update completion KCQE
* @hba: adapter structure pointer
* @update_kcqe: kcqe pointer
* @context: adapter structure pointer
* @kcqe: kcqe pointer
* @num_cqe: number of kcqes to process
*
* Generic KCQ event handler/dispatcher
*/
@ -2614,8 +2618,7 @@ static void bnx2i_cm_abort_cmpl(struct cnic_sock *cm_sk)
/**
* bnx2i_cm_remote_close - process received TCP FIN
* @hba: adapter structure pointer
* @update_kcqe: kcqe pointer
* @cm_sk: cnic sock structure pointer
*
* function callback exported via bnx2i - cnic driver interface to indicate
* async TCP events such as FIN
@ -2631,8 +2634,7 @@ static void bnx2i_cm_remote_close(struct cnic_sock *cm_sk)
/**
* bnx2i_cm_remote_abort - process TCP RST and start conn cleanup
* @hba: adapter structure pointer
* @update_kcqe: kcqe pointer
* @cm_sk: cnic sock structure pointer
*
* function callback exported via bnx2i - cnic driver interface to
* indicate async TCP events (RST) sent by the peer.
@ -2669,10 +2671,9 @@ static int bnx2i_send_nl_mesg(void *context, u32 msg_type,
}
/**
/*
* bnx2i_cnic_cb - global template of bnx2i - cnic driver interface structure
* carrying callback function pointers
*
*/
struct cnic_ulp_ops bnx2i_cnic_cb = {
.cnic_init = bnx2i_ulp_init,


@ -73,7 +73,7 @@ DEFINE_PER_CPU(struct bnx2i_percpu_s, bnx2i_percpu);
/**
* bnx2i_identify_device - identifies NetXtreme II device type
* @hba: Adapter structure pointer
* @cnic: Corresponding cnic device
* @dev: Corresponding cnic device
*
* This function identifies the NX2 device type and sets appropriate
* queue mailbox register access method, 5709 requires driver to


@ -228,7 +228,7 @@ static void bnx2i_setup_cmd_wqe_template(struct bnx2i_cmd *cmd)
/**
* bnx2i_bind_conn_to_iscsi_cid - bind conn structure to 'iscsi_cid'
* @hba: pointer to adapter instance
* @conn: pointer to iscsi connection
* @bnx2i_conn: pointer to iscsi connection
* @iscsi_cid: iscsi context ID, range 0 - (MAX_CONN - 1)
*
* update iscsi cid table entry with connection pointer. This enables
@ -463,7 +463,6 @@ static int bnx2i_alloc_bdt(struct bnx2i_hba *hba, struct iscsi_session *session,
* bnx2i_destroy_cmd_pool - destroys iscsi command pool and release BD table
* @hba: adapter instance pointer
* @session: iscsi session pointer
* @cmd: iscsi command structure
*/
static void bnx2i_destroy_cmd_pool(struct bnx2i_hba *hba,
struct iscsi_session *session)
@ -582,8 +581,7 @@ static void bnx2i_free_mp_bdt(struct bnx2i_hba *hba)
/**
* bnx2i_drop_session - notifies iscsid of connection error.
* @hba: adapter instance pointer
* @session: iscsi session pointer
* @cls_session: iscsi cls session pointer
*
* This notifies iscsid that there is a error, so it can initiate
* recovery.
@ -1277,7 +1275,8 @@ static int bnx2i_task_xmit(struct iscsi_task *task)
/**
* bnx2i_session_create - create a new iscsi session
* @cmds_max: max commands supported
* @ep: pointer to iscsi endpoint
* @cmds_max: user specified maximum commands
* @qdepth: scsi queue depth to support
* @initial_cmdsn: initial iscsi CMDSN to be used for this session
*
@ -1971,7 +1970,7 @@ static int bnx2i_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
/**
* bnx2i_ep_tcp_conn_active - check EP state transition
* @ep: endpoint pointer
* @bnx2i_ep: endpoint pointer
*
* check if underlying TCP connection is active
*/
@ -2014,9 +2013,9 @@ static int bnx2i_ep_tcp_conn_active(struct bnx2i_endpoint *bnx2i_ep)
}
/*
/**
* bnx2i_hw_ep_disconnect - executes TCP connection teardown process in the hw
* @ep: TCP connection (bnx2i endpoint) handle
* @bnx2i_ep: TCP connection (bnx2i endpoint) handle
*
* executes TCP connection teardown process
*/
@ -2171,8 +2170,8 @@ out:
/**
* bnx2i_nl_set_path - ISCSI_UEVENT_PATH_UPDATE user message handler
* @buf: pointer to buffer containing iscsi path message
*
* @shost: scsi host pointer
* @params: pointer to buffer containing iscsi path message
*/
static int bnx2i_nl_set_path(struct Scsi_Host *shost, struct iscsi_path *params)
{


@ -30,6 +30,7 @@ static inline struct bnx2i_hba *bnx2i_dev_to_hba(struct device *dev)
/**
* bnx2i_show_sq_info - returns currently configured send queue (SQ) size
* @dev: device pointer
* @attr: device attribute (unused)
* @buf: buffer to return current SQ size parameter
*
* Returns current SQ size parameter; this parameter determines the number
@ -47,6 +48,7 @@ static ssize_t bnx2i_show_sq_info(struct device *dev,
/**
* bnx2i_set_sq_info - update send queue (SQ) size parameter
* @dev: device pointer
* @attr: device attribute (unused)
* @buf: buffer to return current SQ size parameter
* @count: parameter buffer size
*
@ -87,6 +89,7 @@ skip_config:
/**
* bnx2i_show_ccell_info - returns command cell (HQ) size
* @dev: device pointer
* @attr: device attribute (unused)
* @buf: buffer to return current SQ size parameter
*
* returns per-connection TCP history queue size parameter
@ -103,6 +106,7 @@ static ssize_t bnx2i_show_ccell_info(struct device *dev,
/**
* bnx2i_get_link_state - returns the current adapter link state
* @dev: device pointer
* @attr: device attribute (unused)
* @buf: buffer to return current SQ size parameter
* @count: parameter buffer size
*


@ -306,7 +306,7 @@ csio_hw_get_vpd_params(struct csio_hw *hw, struct csio_vpd *p)
uint8_t *vpd, csum;
const struct t4_vpd_hdr *v;
/* To get around compilation warning from strstrip */
char *s;
char __always_unused *s;
if (csio_is_valid_vpd(hw))
return 0;


@ -148,12 +148,11 @@ csio_t5_mc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
{
int i;
uint32_t mc_bist_cmd_reg, mc_bist_cmd_addr_reg, mc_bist_cmd_len_reg;
uint32_t mc_bist_status_rdata_reg, mc_bist_data_pattern_reg;
uint32_t mc_bist_data_pattern_reg;
mc_bist_cmd_reg = MC_REG(MC_P_BIST_CMD_A, idx);
mc_bist_cmd_addr_reg = MC_REG(MC_P_BIST_CMD_ADDR_A, idx);
mc_bist_cmd_len_reg = MC_REG(MC_P_BIST_CMD_LEN_A, idx);
mc_bist_status_rdata_reg = MC_REG(MC_P_BIST_STATUS_RDATA_A, idx);
mc_bist_data_pattern_reg = MC_REG(MC_P_BIST_DATA_PATTERN_A, idx);
if (csio_rd_reg32(hw, mc_bist_cmd_reg) & START_BIST_F)
@ -196,7 +195,7 @@ csio_t5_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
{
int i;
uint32_t edc_bist_cmd_reg, edc_bist_cmd_addr_reg, edc_bist_cmd_len_reg;
uint32_t edc_bist_cmd_data_pattern, edc_bist_status_rdata_reg;
uint32_t edc_bist_cmd_data_pattern;
/*
* These macros are missing in the t4_regs.h file.
@ -208,7 +207,6 @@ csio_t5_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
edc_bist_cmd_addr_reg = EDC_REG_T5(EDC_H_BIST_CMD_ADDR_A, idx);
edc_bist_cmd_len_reg = EDC_REG_T5(EDC_H_BIST_CMD_LEN_A, idx);
edc_bist_cmd_data_pattern = EDC_REG_T5(EDC_H_BIST_DATA_PATTERN_A, idx);
edc_bist_status_rdata_reg = EDC_REG_T5(EDC_H_BIST_STATUS_RDATA_A, idx);
#undef EDC_REG_T5
#undef EDC_STRIDE_T5


@ -582,7 +582,7 @@ csio_hw_free(struct csio_hw *hw)
* @hw: The HW module.
* @dev: The device associated with this invocation.
* @probe: Called from probe context or not?
* @os_pln: Parent lnode if any.
* @pln: Parent lnode if any.
*
* Allocates lnode structure via scsi_host_alloc, initializes
* shost, initializes lnode module and registers with SCSI ML


@ -2068,10 +2068,9 @@ csio_ln_exit(struct csio_lnode *ln)
ln->fcfinfo = NULL;
}
/**
/*
* csio_lnode_init - Initialize the members of an lnode.
* @ln: lnode
*
*/
int
csio_lnode_init(struct csio_lnode *ln, struct csio_hw *hw,


@ -862,7 +862,7 @@ csio_rnode_devloss_handler(struct csio_rnode *rn)
/**
* csio_rnode_fwevt_handler - Event handler for firmware rnode events.
* @rn: rnode
*
* @fwevt: firmware event to handle
*/
void
csio_rnode_fwevt_handler(struct csio_rnode *rn, uint8_t fwevt)


@ -361,7 +361,7 @@ static inline void make_tx_data_wr(struct cxgbi_sock *csk, struct sk_buff *skb,
/* len includes the length of any HW ULP additions */
req->len = htonl(len);
/* V_TX_ULP_SUBMODE sets both the mode and submode */
req->flags = htonl(V_TX_ULP_SUBMODE(cxgbi_skcb_ulp_mode(skb)) |
req->flags = htonl(V_TX_ULP_SUBMODE(cxgbi_skcb_tx_ulp_mode(skb)) |
V_TX_SHOVE((skb_peek(&csk->write_queue) ? 0 : 1)));
req->sndseq = htonl(csk->snd_nxt);
req->param = htonl(V_TX_PORT(l2t->smt_idx));
@ -375,10 +375,8 @@ static inline void make_tx_data_wr(struct cxgbi_sock *csk, struct sk_buff *skb,
}
}
/**
/*
* push_tx_frames -- start transmit
* @c3cn: the offloaded connection
* @req_completion: request wr_ack or not
*
* Prepends TX_DATA_WR or CPL_CLOSE_CON_REQ headers to buffers waiting in a
* connection's send queue and sends them on to T3. Must be called with the
@ -442,7 +440,7 @@ static int push_tx_frames(struct cxgbi_sock *csk, int req_completion)
req_completion = 1;
csk->wr_una_cred = 0;
}
len += cxgbi_ulp_extra_len(cxgbi_skcb_ulp_mode(skb));
len += cxgbi_ulp_extra_len(cxgbi_skcb_tx_ulp_mode(skb));
make_tx_data_wr(csk, skb, len, req_completion);
csk->snd_nxt += len;
cxgbi_skcb_clear_flag(skb, SKCBF_TX_NEED_HDR);
@ -886,11 +884,6 @@ free_cpl_skbs:
return -ENOMEM;
}
/**
* release_offload_resources - release offload resource
* @c3cn: the offloaded iscsi tcp connection.
* Release resources held by an offload connection (TID, L2T entry, etc.)
*/
static void l2t_put(struct cxgbi_sock *csk)
{
struct t3cdev *t3dev = (struct t3cdev *)csk->cdev->lldev;
@ -902,6 +895,10 @@ static void l2t_put(struct cxgbi_sock *csk)
}
}
/*
* release_offload_resources - release offload resource
* Release resources held by an offload connection (TID, L2T entry, etc.)
*/
static void release_offload_resources(struct cxgbi_sock *csk)
{
struct t3cdev *t3dev = (struct t3cdev *)csk->cdev->lldev;


@ -197,7 +197,10 @@ static inline bool is_ofld_imm(const struct sk_buff *skb)
if (likely(cxgbi_skcb_test_flag(skb, SKCBF_TX_NEED_HDR)))
len += sizeof(struct fw_ofld_tx_data_wr);
return len <= MAX_IMM_TX_PKT_LEN;
if (likely(cxgbi_skcb_test_flag((struct sk_buff *)skb, SKCBF_TX_ISO)))
len += sizeof(struct cpl_tx_data_iso);
return (len <= MAX_IMM_OFLD_TX_DATA_WR_LEN);
}
static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
@ -641,7 +644,10 @@ static inline int send_tx_flowc_wr(struct cxgbi_sock *csk)
flowc->mnemval[8].mnemonic = 0;
flowc->mnemval[8].val = 0;
flowc->mnemval[8].mnemonic = FW_FLOWC_MNEM_TXDATAPLEN_MAX;
flowc->mnemval[8].val = 16384;
if (csk->cdev->skb_iso_txhdr)
flowc->mnemval[8].val = cpu_to_be32(CXGBI_MAX_ISO_DATA_IN_SKB);
else
flowc->mnemval[8].val = cpu_to_be32(16128);
#ifdef CONFIG_CHELSIO_T4_DCB
flowc->mnemval[9].mnemonic = FW_FLOWC_MNEM_DCBPRIO;
if (vlan == CPL_L2T_VLAN_NONE) {
@ -667,38 +673,86 @@ static inline int send_tx_flowc_wr(struct cxgbi_sock *csk)
return flowclen16;
}
static inline void make_tx_data_wr(struct cxgbi_sock *csk, struct sk_buff *skb,
int dlen, int len, u32 credits, int compl)
static void
cxgb4i_make_tx_iso_cpl(struct sk_buff *skb, struct cpl_tx_data_iso *cpl)
{
struct cxgbi_iso_info *info = (struct cxgbi_iso_info *)skb->head;
u32 imm_en = !!(info->flags & CXGBI_ISO_INFO_IMM_ENABLE);
u32 fslice = !!(info->flags & CXGBI_ISO_INFO_FSLICE);
u32 lslice = !!(info->flags & CXGBI_ISO_INFO_LSLICE);
u32 pdu_type = (info->op == ISCSI_OP_SCSI_CMD) ? 0 : 1;
u32 submode = cxgbi_skcb_tx_ulp_mode(skb) & 0x3;
cpl->op_to_scsi = cpu_to_be32(CPL_TX_DATA_ISO_OP_V(CPL_TX_DATA_ISO) |
CPL_TX_DATA_ISO_FIRST_V(fslice) |
CPL_TX_DATA_ISO_LAST_V(lslice) |
CPL_TX_DATA_ISO_CPLHDRLEN_V(0) |
CPL_TX_DATA_ISO_HDRCRC_V(submode & 1) |
CPL_TX_DATA_ISO_PLDCRC_V(((submode >> 1) & 1)) |
CPL_TX_DATA_ISO_IMMEDIATE_V(imm_en) |
CPL_TX_DATA_ISO_SCSI_V(pdu_type));
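	/*
	 * submode is the 2-bit ULP digest mask (assumed from the ULP2 iSCSI
	 * convention): bit 0 enables the header CRC and bit 1 the payload
	 * CRC, which is why HDRCRC/PLDCRC above are filled from (submode & 1)
	 * and ((submode >> 1) & 1).
	 */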
cpl->ahs_len = info->ahs;
cpl->mpdu = cpu_to_be16(DIV_ROUND_UP(info->mpdu, 4));
cpl->burst_size = cpu_to_be32(info->burst_size);
cpl->len = cpu_to_be32(info->len);
cpl->reserved2_seglen_offset =
cpu_to_be32(CPL_TX_DATA_ISO_SEGLEN_OFFSET_V(info->segment_offset));
cpl->datasn_offset = cpu_to_be32(info->datasn_offset);
cpl->buffer_offset = cpu_to_be32(info->buffer_offset);
cpl->reserved3 = cpu_to_be32(0);
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"iso: flags 0x%x, op %u, ahs %u, num_pdu %u, mpdu %u, "
"burst_size %u, iso_len %u\n",
info->flags, info->op, info->ahs, info->num_pdu,
info->mpdu, info->burst_size << 2, info->len);
}
static void
cxgb4i_make_tx_data_wr(struct cxgbi_sock *csk, struct sk_buff *skb, int dlen,
int len, u32 credits, int compl)
{
struct cxgbi_device *cdev = csk->cdev;
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct fw_ofld_tx_data_wr *req;
unsigned int submode = cxgbi_skcb_ulp_mode(skb) & 3;
unsigned int wr_ulp_mode = 0, val;
bool imm = is_ofld_imm(skb);
struct cpl_tx_data_iso *cpl;
u32 submode = cxgbi_skcb_tx_ulp_mode(skb) & 0x3;
u32 wr_ulp_mode = 0;
u32 hdr_size = sizeof(*req);
u32 opcode = FW_OFLD_TX_DATA_WR;
u32 immlen = 0;
u32 force = is_t5(lldi->adapter_type) ? TX_FORCE_V(!submode) :
T6_TX_FORCE_F;
req = __skb_push(skb, sizeof(*req));
if (imm) {
req->op_to_immdlen = htonl(FW_WR_OP_V(FW_OFLD_TX_DATA_WR) |
FW_WR_COMPL_F |
FW_WR_IMMDLEN_V(dlen));
req->flowid_len16 = htonl(FW_WR_FLOWID_V(csk->tid) |
FW_WR_LEN16_V(credits));
} else {
req->op_to_immdlen =
cpu_to_be32(FW_WR_OP_V(FW_OFLD_TX_DATA_WR) |
FW_WR_COMPL_F |
FW_WR_IMMDLEN_V(0));
req->flowid_len16 =
cpu_to_be32(FW_WR_FLOWID_V(csk->tid) |
FW_WR_LEN16_V(credits));
if (cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO)) {
hdr_size += sizeof(struct cpl_tx_data_iso);
opcode = FW_ISCSI_TX_DATA_WR;
immlen += sizeof(struct cpl_tx_data_iso);
submode |= 8;
}
if (is_ofld_imm(skb))
immlen += dlen;
req = (struct fw_ofld_tx_data_wr *)__skb_push(skb, hdr_size);
req->op_to_immdlen = cpu_to_be32(FW_WR_OP_V(opcode) |
FW_WR_COMPL_V(compl) |
FW_WR_IMMDLEN_V(immlen));
req->flowid_len16 = cpu_to_be32(FW_WR_FLOWID_V(csk->tid) |
FW_WR_LEN16_V(credits));
req->plen = cpu_to_be32(len);
cpl = (struct cpl_tx_data_iso *)(req + 1);
if (likely(cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO)))
cxgb4i_make_tx_iso_cpl(skb, cpl);
if (submode)
wr_ulp_mode = FW_OFLD_TX_DATA_WR_ULPMODE_V(ULP2_MODE_ISCSI) |
FW_OFLD_TX_DATA_WR_ULPSUBMODE_V(submode);
val = skb_peek(&csk->write_queue) ? 0 : 1;
req->tunnel_to_proxy = htonl(wr_ulp_mode |
FW_OFLD_TX_DATA_WR_SHOVE_V(val));
req->plen = htonl(len);
FW_OFLD_TX_DATA_WR_ULPSUBMODE_V(submode);
req->tunnel_to_proxy = cpu_to_be32(wr_ulp_mode | force |
FW_OFLD_TX_DATA_WR_SHOVE_V(1U));
if (!cxgbi_sock_flag(csk, CTPF_TX_DATA_SENT))
cxgbi_sock_set_flag(csk, CTPF_TX_DATA_SENT);
}
@ -716,30 +770,34 @@ static int push_tx_frames(struct cxgbi_sock *csk, int req_completion)
if (unlikely(csk->state < CTP_ESTABLISHED ||
csk->state == CTP_CLOSE_WAIT_1 || csk->state >= CTP_ABORTING)) {
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK |
1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, in closing state.\n",
csk, csk->state, csk->flags, csk->tid);
1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, in closing state.\n",
csk, csk->state, csk->flags, csk->tid);
return 0;
}
while (csk->wr_cred && (skb = skb_peek(&csk->write_queue)) != NULL) {
int dlen = skb->len;
int len = skb->len;
unsigned int credits_needed;
int flowclen16 = 0;
while (csk->wr_cred && ((skb = skb_peek(&csk->write_queue)) != NULL)) {
struct cxgbi_iso_info *iso_cpl;
u32 dlen = skb->len;
u32 len = skb->len;
u32 iso_cpl_len = 0;
u32 flowclen16 = 0;
u32 credits_needed;
u32 num_pdu = 1, hdr_len;
if (cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO))
iso_cpl_len = sizeof(struct cpl_tx_data_iso);
skb_reset_transport_header(skb);
if (is_ofld_imm(skb))
credits_needed = DIV_ROUND_UP(dlen, 16);
credits_needed = DIV_ROUND_UP(dlen + iso_cpl_len, 16);
else
credits_needed = DIV_ROUND_UP(
8 * calc_tx_flits_ofld(skb),
16);
credits_needed =
DIV_ROUND_UP((8 * calc_tx_flits_ofld(skb)) +
iso_cpl_len, 16);
if (likely(cxgbi_skcb_test_flag(skb, SKCBF_TX_NEED_HDR)))
credits_needed += DIV_ROUND_UP(
sizeof(struct fw_ofld_tx_data_wr),
16);
credits_needed +=
DIV_ROUND_UP(sizeof(struct fw_ofld_tx_data_wr), 16);
/*
* Assumes the initial credits is large enough to support
@ -754,14 +812,19 @@ static int push_tx_frames(struct cxgbi_sock *csk, int req_completion)
if (csk->wr_cred < credits_needed) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p, skb %u/%u, wr %d < %u.\n",
csk, skb->len, skb->data_len,
credits_needed, csk->wr_cred);
"csk 0x%p, skb %u/%u, wr %d < %u.\n",
csk, skb->len, skb->data_len,
credits_needed, csk->wr_cred);
csk->no_tx_credits++;
break;
}
csk->no_tx_credits = 0;
__skb_unlink(skb, &csk->write_queue);
set_wr_txq(skb, CPL_PRIORITY_DATA, csk->port_id);
skb->csum = credits_needed + flowclen16;
skb->csum = (__force __wsum)(credits_needed + flowclen16);
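		/*
		 * skb->csum is repurposed to remember how many WR credits
		 * this skb consumed; they are returned when the firmware
		 * acks the WR. The __force cast only quiets sparse, since
		 * no checksum is ever computed on this offload path.
		 */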
csk->wr_cred -= credits_needed;
csk->wr_una_cred += credits_needed;
cxgbi_sock_enqueue_wr(csk, skb);
@ -771,25 +834,42 @@ static int push_tx_frames(struct cxgbi_sock *csk, int req_completion)
csk, skb->len, skb->data_len, credits_needed,
csk->wr_cred, csk->wr_una_cred);
if (!req_completion &&
((csk->wr_una_cred >= (csk->wr_max_cred / 2)) ||
after(csk->write_seq, (csk->snd_una + csk->snd_win / 2))))
req_completion = 1;
if (likely(cxgbi_skcb_test_flag(skb, SKCBF_TX_NEED_HDR))) {
len += cxgbi_ulp_extra_len(cxgbi_skcb_ulp_mode(skb));
make_tx_data_wr(csk, skb, dlen, len, credits_needed,
req_completion);
u32 ulp_mode = cxgbi_skcb_tx_ulp_mode(skb);
if (cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO)) {
iso_cpl = (struct cxgbi_iso_info *)skb->head;
num_pdu = iso_cpl->num_pdu;
hdr_len = cxgbi_skcb_tx_iscsi_hdrlen(skb);
len += (cxgbi_ulp_extra_len(ulp_mode) * num_pdu) +
(hdr_len * (num_pdu - 1));
} else {
len += cxgbi_ulp_extra_len(ulp_mode);
}
cxgb4i_make_tx_data_wr(csk, skb, dlen, len,
credits_needed, req_completion);
csk->snd_nxt += len;
cxgbi_skcb_clear_flag(skb, SKCBF_TX_NEED_HDR);
} else if (cxgbi_skcb_test_flag(skb, SKCBF_TX_FLAG_COMPL) &&
(csk->wr_una_cred >= (csk->wr_max_cred / 2))) {
struct cpl_close_con_req *req =
(struct cpl_close_con_req *)skb->data;
req->wr.wr_hi |= htonl(FW_WR_COMPL_F);
req->wr.wr_hi |= cpu_to_be32(FW_WR_COMPL_F);
}
total_size += skb->truesize;
t4_set_arp_err_handler(skb, csk, arp_failure_skb_discard);
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, skb 0x%p, %u.\n",
csk, csk->state, csk->flags, csk->tid, skb, len);
"csk 0x%p,%u,0x%lx,%u, skb 0x%p, %u.\n",
csk, csk->state, csk->flags, csk->tid, skb, len);
cxgb4_l2t_send(csk->cdev->ports[csk->port_id], skb, csk->l2t);
}
return total_size;
@ -2111,10 +2191,30 @@ static int cxgb4i_ddp_init(struct cxgbi_device *cdev)
return 0;
}
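/*
 * A "memory-free" (SO) adapter has no external MC/EDC memory enabled in the
 * MA target-memory-enable register; t4_uld_add() below turns ISO off and
 * shrinks the queue depth for such parts.
 */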
static bool is_memfree(struct adapter *adap)
{
u32 io;
io = t4_read_reg(adap, MA_TARGET_MEM_ENABLE_A);
if (is_t5(adap->params.chip)) {
if ((io & EXT_MEM0_ENABLE_F) || (io & EXT_MEM1_ENABLE_F))
return false;
} else if (io & EXT_MEM_ENABLE_F) {
return false;
}
return true;
}
static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
{
struct cxgbi_device *cdev;
struct port_info *pi;
struct net_device *ndev;
struct adapter *adap;
struct tid_info *t;
u32 max_cmds = CXGB4I_SCSI_HOST_QDEPTH;
u32 max_conn = CXGBI_MAX_CONN;
int i, rc;
cdev = cxgbi_device_register(sizeof(*lldi), lldi->nports);
@ -2154,14 +2254,40 @@ static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
pr_info("t4 0x%p ddp init failed %d.\n", cdev, rc);
goto err_out;
}
ndev = cdev->ports[0];
adap = netdev2adap(ndev);
if (adap) {
t = &adap->tids;
if (t->ntids <= CXGBI_MAX_CONN)
max_conn = t->ntids;
if (is_memfree(adap)) {
cdev->flags |= CXGBI_FLAG_DEV_ISO_OFF;
max_cmds = CXGB4I_SCSI_HOST_QDEPTH >> 2;
pr_info("%s: 0x%p, tid %u, SO adapter.\n",
ndev->name, cdev, t->ntids);
}
} else {
pr_info("%s, 0x%p, NO adapter struct.\n", ndev->name, cdev);
}
/* ISO is enabled in T5/T6 firmware version >= 1.13.43.0 */
if (!is_t4(lldi->adapter_type) &&
(lldi->fw_vers >= 0x10d2b00) &&
!(cdev->flags & CXGBI_FLAG_DEV_ISO_OFF))
cdev->skb_iso_txhdr = sizeof(struct cpl_tx_data_iso);
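	/*
	 * 0x10d2b00 is 1.13.43.0 under the usual one-byte-per-field version
	 * packing: 0x01 (major), 0x0d = 13 (minor), 0x2b = 43 (micro),
	 * 0x00 (build).
	 */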
rc = cxgb4i_ofld_init(cdev);
if (rc) {
pr_info("t4 0x%p ofld init failed.\n", cdev);
goto err_out;
}
rc = cxgbi_hbas_add(cdev, CXGB4I_MAX_LUN, CXGBI_MAX_CONN,
&cxgb4i_host_template, cxgb4i_stt);
cxgb4i_host_template.can_queue = max_cmds;
rc = cxgbi_hbas_add(cdev, CXGB4I_MAX_LUN, max_conn,
&cxgb4i_host_template, cxgb4i_stt);
if (rc)
goto err_out;


@ -359,13 +359,15 @@ int cxgbi_hbas_add(struct cxgbi_device *cdev, u64 max_lun,
shost->max_lun = max_lun;
shost->max_id = max_id;
shost->max_channel = 0;
shost->max_cmd_len = 16;
shost->max_cmd_len = SCSI_MAX_VARLEN_CDB_SIZE;
chba = iscsi_host_priv(shost);
chba->cdev = cdev;
chba->ndev = cdev->ports[i];
chba->shost = shost;
shost->can_queue = sht->can_queue - ISCSI_MGMT_CMDS_MAX;
log_debug(1 << CXGBI_DBG_DEV,
"cdev 0x%p, p#%d %s: chba 0x%p.\n",
cdev, i, cdev->ports[i]->name, chba);
@ -1136,82 +1138,6 @@ void cxgbi_sock_check_wr_invariants(const struct cxgbi_sock *csk)
}
EXPORT_SYMBOL_GPL(cxgbi_sock_check_wr_invariants);
static int cxgbi_sock_send_pdus(struct cxgbi_sock *csk, struct sk_buff *skb)
{
struct cxgbi_device *cdev = csk->cdev;
struct sk_buff *next;
int err, copied = 0;
spin_lock_bh(&csk->lock);
if (csk->state != CTP_ESTABLISHED) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, EAGAIN.\n",
csk, csk->state, csk->flags, csk->tid);
err = -EAGAIN;
goto out_err;
}
if (csk->err) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, EPIPE %d.\n",
csk, csk->state, csk->flags, csk->tid, csk->err);
err = -EPIPE;
goto out_err;
}
if (csk->write_seq - csk->snd_una >= csk->snd_win) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, FULL %u-%u >= %u.\n",
csk, csk->state, csk->flags, csk->tid, csk->write_seq,
csk->snd_una, csk->snd_win);
err = -ENOBUFS;
goto out_err;
}
while (skb) {
int frags = skb_shinfo(skb)->nr_frags +
(skb->len != skb->data_len);
if (unlikely(skb_headroom(skb) < cdev->skb_tx_rsvd)) {
pr_err("csk 0x%p, skb head %u < %u.\n",
csk, skb_headroom(skb), cdev->skb_tx_rsvd);
err = -EINVAL;
goto out_err;
}
if (frags >= SKB_WR_LIST_SIZE) {
pr_err("csk 0x%p, frags %d, %u,%u >%u.\n",
csk, skb_shinfo(skb)->nr_frags, skb->len,
skb->data_len, (uint)(SKB_WR_LIST_SIZE));
err = -EINVAL;
goto out_err;
}
next = skb->next;
skb->next = NULL;
cxgbi_skcb_set_flag(skb, SKCBF_TX_NEED_HDR);
cxgbi_sock_skb_entail(csk, skb);
copied += skb->len;
csk->write_seq += skb->len +
cxgbi_ulp_extra_len(cxgbi_skcb_ulp_mode(skb));
skb = next;
}
if (likely(skb_queue_len(&csk->write_queue)))
cdev->csk_push_tx_frames(csk, 1);
done:
spin_unlock_bh(&csk->lock);
return copied;
out_err:
if (copied == 0 && err == -EPIPE)
copied = csk->err ? csk->err : -EPIPE;
else
copied = err;
goto done;
}
static inline void
scmd_get_params(struct scsi_cmnd *sc, struct scatterlist **sgl,
unsigned int *sgcnt, unsigned int *dlen,
@ -1284,8 +1210,6 @@ EXPORT_SYMBOL_GPL(cxgbi_ddp_set_one_ppod);
* APIs interacting with open-iscsi libraries
*/
static unsigned char padding[4];
int cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *cdev,
struct cxgbi_tag_format *tformat,
unsigned int iscsi_size, unsigned int llimit,
@ -1833,9 +1757,10 @@ static int sgl_seek_offset(struct scatterlist *sgl, unsigned int sgcnt,
return -EFAULT;
}
static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
unsigned int dlen, struct page_frag *frags,
int frag_max)
static int
sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
unsigned int dlen, struct page_frag *frags,
int frag_max, u32 *dlimit)
{
unsigned int datalen = dlen;
unsigned int sglen = sg->length - sgoffset;
@ -1867,6 +1792,7 @@ static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
if (i >= frag_max) {
pr_warn("too many pages %u, dlen %u.\n",
frag_max, dlen);
*dlimit = dlen - datalen;
return -EINVAL;
}
@ -1883,38 +1809,220 @@ static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
return i;
}
int cxgbi_conn_alloc_pdu(struct iscsi_task *task, u8 opcode)
static void cxgbi_task_data_sgl_check(struct iscsi_task *task)
{
struct iscsi_tcp_conn *tcp_conn = task->conn->dd_data;
struct scsi_cmnd *sc = task->sc;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct scatterlist *sg, *sgl = NULL;
u32 sgcnt = 0;
int i;
tdata->flags = CXGBI_TASK_SGL_CHECKED;
if (!sc)
return;
scmd_get_params(sc, &sgl, &sgcnt, &tdata->dlen, 0);
if (!sgl || !sgcnt) {
tdata->flags |= CXGBI_TASK_SGL_COPY;
return;
}
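	/*
	 * Pages with no elevated refcount (page_count() < 1, e.g. buffers not
	 * backed by the page allocator) cannot safely be handed to the TX
	 * path by reference, so mark the task for a payload copy instead
	 * (rationale assumed; the check itself is the driver's).
	 */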
for_each_sg(sgl, sg, sgcnt, i) {
if (page_count(sg_page(sg)) < 1) {
tdata->flags |= CXGBI_TASK_SGL_COPY;
return;
}
}
}
static int
cxgbi_task_data_sgl_read(struct iscsi_task *task, u32 offset, u32 count,
u32 *dlimit)
{
struct scsi_cmnd *sc = task->sc;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct scatterlist *sgl = NULL;
struct scatterlist *sg;
u32 dlen = 0;
u32 sgcnt;
int err;
if (!sc)
return 0;
scmd_get_params(sc, &sgl, &sgcnt, &dlen, 0);
if (!sgl || !sgcnt)
return 0;
err = sgl_seek_offset(sgl, sgcnt, offset, &tdata->sgoffset, &sg);
if (err < 0) {
pr_warn("tpdu max, sgl %u, bad offset %u/%u.\n",
sgcnt, offset, tdata->dlen);
return err;
}
err = sgl_read_to_frags(sg, tdata->sgoffset, count,
tdata->frags, MAX_SKB_FRAGS, dlimit);
if (err < 0) {
log_debug(1 << CXGBI_DBG_ISCSI,
"sgl max limit, sgl %u, offset %u, %u/%u, dlimit %u.\n",
sgcnt, offset, count, tdata->dlen, *dlimit);
return err;
}
tdata->offset = offset;
tdata->count = count;
tdata->nr_frags = err;
tdata->total_count = count;
tdata->total_offset = offset;
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"%s: offset %u, count %u,\n"
"err %u, total_count %u, total_offset %u\n",
__func__, offset, count, err, tdata->total_count, tdata->total_offset);
return 0;
}
int cxgbi_conn_alloc_pdu(struct iscsi_task *task, u8 op)
{
struct iscsi_conn *conn = task->conn;
struct iscsi_session *session = task->conn->session;
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgbi_conn *cconn = tcp_conn->dd_data;
struct cxgbi_device *cdev = cconn->chba->cdev;
struct iscsi_conn *conn = task->conn;
struct cxgbi_sock *csk = cconn->cep ? cconn->cep->csk : NULL;
struct iscsi_tcp_task *tcp_task = task->dd_data;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct scsi_cmnd *sc = task->sc;
struct cxgbi_sock *csk = cconn->cep->csk;
struct net_device *ndev = cdev->ports[csk->port_id];
int headroom = SKB_TX_ISCSI_PDU_HEADER_MAX;
u32 headroom = SKB_TX_ISCSI_PDU_HEADER_MAX;
u32 max_txdata_len = conn->max_xmit_dlength;
u32 iso_tx_rsvd = 0, local_iso_info = 0;
u32 last_tdata_offset, last_tdata_count;
int err = 0;
if (!tcp_task) {
pr_err("task 0x%p, tcp_task 0x%p, tdata 0x%p.\n",
task, tcp_task, tdata);
return -ENOMEM;
}
if (!csk) {
pr_err("task 0x%p, csk gone.\n", task);
return -EPIPE;
}
op &= ISCSI_OPCODE_MASK;
tcp_task->dd_data = tdata;
task->hdr = NULL;
if (SKB_MAX_HEAD(cdev->skb_tx_rsvd) > (512 * MAX_SKB_FRAGS) &&
(opcode == ISCSI_OP_SCSI_DATA_OUT ||
(opcode == ISCSI_OP_SCSI_CMD &&
sc->sc_data_direction == DMA_TO_DEVICE)))
/* data could go into skb head */
headroom += min_t(unsigned int,
SKB_MAX_HEAD(cdev->skb_tx_rsvd),
conn->max_xmit_dlength);
last_tdata_count = tdata->count;
last_tdata_offset = tdata->offset;
tdata->skb = alloc_skb(cdev->skb_tx_rsvd + headroom, GFP_ATOMIC);
if (!tdata->skb) {
ndev->stats.tx_dropped++;
return -ENOMEM;
if ((op == ISCSI_OP_SCSI_DATA_OUT) ||
((op == ISCSI_OP_SCSI_CMD) &&
(sc->sc_data_direction == DMA_TO_DEVICE))) {
u32 remaining_data_tosend, dlimit = 0;
u32 max_pdu_size, max_num_pdu, num_pdu;
u32 count;
/* Preserve conn->max_xmit_dlength because it can get updated to
* ISO data size.
*/
if (task->state == ISCSI_TASK_PENDING)
tdata->max_xmit_dlength = conn->max_xmit_dlength;
if (!tdata->offset)
cxgbi_task_data_sgl_check(task);
remaining_data_tosend =
tdata->dlen - tdata->offset - tdata->count;
recalculate_sgl:
max_txdata_len = tdata->max_xmit_dlength;
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"tdata->dlen %u, remaining to send %u "
"conn->max_xmit_dlength %u, "
"tdata->max_xmit_dlength %u\n",
tdata->dlen, remaining_data_tosend,
conn->max_xmit_dlength, tdata->max_xmit_dlength);
if (cdev->skb_iso_txhdr && !csk->disable_iso &&
(remaining_data_tosend > tdata->max_xmit_dlength) &&
!(remaining_data_tosend % 4)) {
u32 max_iso_data;
if ((op == ISCSI_OP_SCSI_CMD) &&
session->initial_r2t_en)
goto no_iso;
max_pdu_size = tdata->max_xmit_dlength +
ISCSI_PDU_NONPAYLOAD_LEN;
max_iso_data = rounddown(CXGBI_MAX_ISO_DATA_IN_SKB,
csk->advmss);
max_num_pdu = max_iso_data / max_pdu_size;
num_pdu = (remaining_data_tosend +
tdata->max_xmit_dlength - 1) /
tdata->max_xmit_dlength;
if (num_pdu > max_num_pdu)
num_pdu = max_num_pdu;
conn->max_xmit_dlength = tdata->max_xmit_dlength * num_pdu;
max_txdata_len = conn->max_xmit_dlength;
iso_tx_rsvd = cdev->skb_iso_txhdr;
local_iso_info = sizeof(struct cxgbi_iso_info);
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"max_pdu_size %u, max_num_pdu %u, "
"max_txdata %u, num_pdu %u\n",
max_pdu_size, max_num_pdu,
max_txdata_len, num_pdu);
}
no_iso:
count = min_t(u32, max_txdata_len, remaining_data_tosend);
err = cxgbi_task_data_sgl_read(task,
tdata->offset + tdata->count,
count, &dlimit);
if (unlikely(err < 0)) {
log_debug(1 << CXGBI_DBG_ISCSI,
"task 0x%p, tcp_task 0x%p, tdata 0x%p, "
"sgl err %d, count %u, dlimit %u\n",
task, tcp_task, tdata, err, count, dlimit);
if (dlimit) {
remaining_data_tosend =
rounddown(dlimit,
tdata->max_xmit_dlength);
if (!remaining_data_tosend)
remaining_data_tosend = dlimit;
dlimit = 0;
conn->max_xmit_dlength = remaining_data_tosend;
goto recalculate_sgl;
}
pr_err("task 0x%p, tcp_task 0x%p, tdata 0x%p, "
"sgl err %d\n",
task, tcp_task, tdata, err);
goto ret_err;
}
if ((tdata->flags & CXGBI_TASK_SGL_COPY) ||
(tdata->nr_frags > MAX_SKB_FRAGS))
headroom += conn->max_xmit_dlength;
}
skb_reserve(tdata->skb, cdev->skb_tx_rsvd);
tdata->skb = alloc_skb(local_iso_info + cdev->skb_tx_rsvd +
iso_tx_rsvd + headroom, GFP_ATOMIC);
if (!tdata->skb) {
tdata->count = last_tdata_count;
tdata->offset = last_tdata_offset;
err = -ENOMEM;
goto ret_err;
}
skb_reserve(tdata->skb, local_iso_info + cdev->skb_tx_rsvd +
iso_tx_rsvd);
if (task->sc) {
task->hdr = (struct iscsi_hdr *)tdata->skb->data;
@ -1923,25 +2031,100 @@ int cxgbi_conn_alloc_pdu(struct iscsi_task *task, u8 opcode)
if (!task->hdr) {
__kfree_skb(tdata->skb);
tdata->skb = NULL;
ndev->stats.tx_dropped++;
return -ENOMEM;
}
}
task->hdr_max = SKB_TX_ISCSI_PDU_HEADER_MAX; /* BHS + AHS */
task->hdr_max = SKB_TX_ISCSI_PDU_HEADER_MAX;
if (iso_tx_rsvd)
cxgbi_skcb_set_flag(tdata->skb, SKCBF_TX_ISO);
/* data_out uses scsi_cmd's itt */
if (opcode != ISCSI_OP_SCSI_DATA_OUT)
if (op != ISCSI_OP_SCSI_DATA_OUT)
task_reserve_itt(task, &task->hdr->itt);
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"task 0x%p, op 0x%x, skb 0x%p,%u+%u/%u, itt 0x%x.\n",
task, opcode, tdata->skb, cdev->skb_tx_rsvd, headroom,
conn->max_xmit_dlength, ntohl(task->hdr->itt));
"task 0x%p, op 0x%x, skb 0x%p,%u+%u/%u, itt 0x%x.\n",
task, op, tdata->skb, cdev->skb_tx_rsvd, headroom,
conn->max_xmit_dlength, be32_to_cpu(task->hdr->itt));
return 0;
ret_err:
conn->max_xmit_dlength = tdata->max_xmit_dlength;
return err;
}
EXPORT_SYMBOL_GPL(cxgbi_conn_alloc_pdu);
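A minimal sketch of the recalculate_sgl retry above, with a hypothetical helper name: when sgl_read_to_frags() cannot fit the request into MAX_SKB_FRAGS page fragments, it reports the bytes that did fit through *dlimit, and the send size is shrunk to a whole number of PDUs inside that limit before retrying.
/* Hypothetical helper mirroring the dlimit retry in cxgbi_conn_alloc_pdu(). */
static u32 shrink_tosend_to_dlimit(u32 dlimit, u32 max_xmit_dlength)
{
        /* keep only whole PDUs that fit in the mappable fragments */
        u32 tosend = rounddown(dlimit, max_xmit_dlength);

        /* less than one full PDU fit: fall back to a short final PDU */
        if (!tosend)
                tosend = dlimit;
        return tosend;
}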
static int
cxgbi_prep_iso_info(struct iscsi_task *task, struct sk_buff *skb,
u32 count)
{
struct cxgbi_iso_info *iso_info = (struct cxgbi_iso_info *)skb->head;
struct iscsi_r2t_info *r2t;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct iscsi_conn *conn = task->conn;
struct iscsi_session *session = conn->session;
struct iscsi_tcp_task *tcp_task = task->dd_data;
u32 burst_size = 0, r2t_dlength = 0, dlength;
u32 max_pdu_len = tdata->max_xmit_dlength;
u32 segment_offset = 0;
u32 num_pdu;
if (unlikely(!cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO)))
return 0;
memset(iso_info, 0, sizeof(struct cxgbi_iso_info));
if (task->hdr->opcode == ISCSI_OP_SCSI_CMD && session->imm_data_en) {
iso_info->flags |= CXGBI_ISO_INFO_IMM_ENABLE;
burst_size = count;
}
dlength = ntoh24(task->hdr->dlength);
dlength = min(dlength, max_pdu_len);
hton24(task->hdr->dlength, dlength);
num_pdu = (count + max_pdu_len - 1) / max_pdu_len;
if (iscsi_task_has_unsol_data(task))
r2t = &task->unsol_r2t;
else
r2t = tcp_task->r2t;
if (r2t) {
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"count %u, tdata->count %u, num_pdu %u,"
"task->hdr_len %u, r2t->data_length %u, r2t->sent %u\n",
count, tdata->count, num_pdu, task->hdr_len,
r2t->data_length, r2t->sent);
r2t_dlength = r2t->data_length - r2t->sent;
segment_offset = r2t->sent;
r2t->datasn += num_pdu - 1;
}
if (!r2t || !r2t->sent)
iso_info->flags |= CXGBI_ISO_INFO_FSLICE;
if (task->hdr->flags & ISCSI_FLAG_CMD_FINAL)
iso_info->flags |= CXGBI_ISO_INFO_LSLICE;
task->hdr->flags &= ~ISCSI_FLAG_CMD_FINAL;
iso_info->op = task->hdr->opcode;
iso_info->ahs = task->hdr->hlength;
iso_info->num_pdu = num_pdu;
iso_info->mpdu = max_pdu_len;
iso_info->burst_size = (burst_size + r2t_dlength) >> 2;
iso_info->len = count + task->hdr_len;
iso_info->segment_offset = segment_offset;
cxgbi_skcb_tx_iscsi_hdrlen(skb) = task->hdr_len;
return 0;
}
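Worked numbers for the slicing above, using assumed values rather than anything from this patch: an 8192-byte PDU payload limit turns a 20000-byte ISO slice into three PDUs, the wire header's dlength is clamped to a single PDU's payload, and burst_size is programmed in 4-byte words, hence the >> 2.
/* Assumed example values for the cxgbi_prep_iso_info() arithmetic. */
u32 max_pdu_len = 8192;                         /* tdata->max_xmit_dlength */
u32 count = 20000;                              /* payload in this ISO skb */
u32 num_pdu = (count + max_pdu_len - 1) / max_pdu_len;  /* == 3 */
u32 dlength = min_t(u32, count, max_pdu_len);   /* hdr dlength == 8192 */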
static inline void tx_skb_setmode(struct sk_buff *skb, int hcrc, int dcrc)
{
if (hcrc || dcrc) {
@ -1951,133 +2134,260 @@ static inline void tx_skb_setmode(struct sk_buff *skb, int hcrc, int dcrc)
submode |= 1;
if (dcrc)
submode |= 2;
cxgbi_skcb_ulp_mode(skb) = (ULP2_MODE_ISCSI << 4) | submode;
cxgbi_skcb_tx_ulp_mode(skb) = (ULP2_MODE_ISCSI << 4) | submode;
} else
cxgbi_skcb_ulp_mode(skb) = 0;
cxgbi_skcb_tx_ulp_mode(skb) = 0;
}
static struct page *rsvd_page;
int cxgbi_conn_init_pdu(struct iscsi_task *task, unsigned int offset,
unsigned int count)
{
struct iscsi_conn *conn = task->conn;
struct iscsi_tcp_task *tcp_task = task->dd_data;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct sk_buff *skb = tdata->skb;
unsigned int datalen = count;
int i, padlen = iscsi_padding(count);
struct sk_buff *skb;
struct scsi_cmnd *sc = task->sc;
u32 expected_count, expected_offset;
u32 datalen = count, dlimit = 0;
u32 i, padlen = iscsi_padding(count);
struct page *pg;
int err;
if (!tcp_task || (tcp_task->dd_data != tdata)) {
pr_err("task 0x%p,0x%p, tcp_task 0x%p, tdata 0x%p/0x%p.\n",
task, task->sc, tcp_task,
tcp_task ? tcp_task->dd_data : NULL, tdata);
return -EINVAL;
}
skb = tdata->skb;
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"task 0x%p,0x%p, skb 0x%p, 0x%x,0x%x,0x%x, %u+%u.\n",
task, task->sc, skb, (*skb->data) & ISCSI_OPCODE_MASK,
ntohl(task->cmdsn), ntohl(task->hdr->itt), offset, count);
"task 0x%p,0x%p, skb 0x%p, 0x%x,0x%x,0x%x, %u+%u.\n",
task, task->sc, skb, (*skb->data) & ISCSI_OPCODE_MASK,
be32_to_cpu(task->cmdsn), be32_to_cpu(task->hdr->itt), offset, count);
skb_put(skb, task->hdr_len);
tx_skb_setmode(skb, conn->hdrdgst_en, datalen ? conn->datadgst_en : 0);
if (!count)
return 0;
if (task->sc) {
struct scsi_data_buffer *sdb = &task->sc->sdb;
struct scatterlist *sg = NULL;
int err;
tdata->offset = offset;
if (!count) {
tdata->count = count;
err = sgl_seek_offset(
sdb->table.sgl, sdb->table.nents,
tdata->offset, &tdata->sgoffset, &sg);
if (err < 0) {
pr_warn("tpdu, sgl %u, bad offset %u/%u.\n",
sdb->table.nents, tdata->offset, sdb->length);
return err;
}
err = sgl_read_to_frags(sg, tdata->sgoffset, tdata->count,
tdata->frags, MAX_PDU_FRAGS);
if (err < 0) {
pr_warn("tpdu, sgl %u, bad offset %u + %u.\n",
sdb->table.nents, tdata->offset, tdata->count);
return err;
}
tdata->nr_frags = err;
tdata->offset = offset;
tdata->nr_frags = 0;
tdata->total_offset = 0;
tdata->total_count = 0;
if (tdata->max_xmit_dlength)
conn->max_xmit_dlength = tdata->max_xmit_dlength;
cxgbi_skcb_clear_flag(skb, SKCBF_TX_ISO);
return 0;
}
if (tdata->nr_frags > MAX_SKB_FRAGS ||
(padlen && tdata->nr_frags == MAX_SKB_FRAGS)) {
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"data->total_count %u, tdata->total_offset %u\n",
tdata->total_count, tdata->total_offset);
expected_count = tdata->total_count;
expected_offset = tdata->total_offset;
if ((count != expected_count) ||
(offset != expected_offset)) {
err = cxgbi_task_data_sgl_read(task, offset, count, &dlimit);
if (err < 0) {
pr_err("task 0x%p,0x%p, tcp_task 0x%p, tdata 0x%p/0x%p "
"dlimit %u, sgl err %d.\n", task, task->sc,
tcp_task, tcp_task ? tcp_task->dd_data : NULL,
tdata, dlimit, err);
return err;
}
}
/* Restore original value of conn->max_xmit_dlength because
* it can get updated to ISO data size.
*/
conn->max_xmit_dlength = tdata->max_xmit_dlength;
if (sc) {
struct page_frag *frag = tdata->frags;
if ((tdata->flags & CXGBI_TASK_SGL_COPY) ||
(tdata->nr_frags > MAX_SKB_FRAGS) ||
(padlen && (tdata->nr_frags ==
MAX_SKB_FRAGS))) {
char *dst = skb->data + task->hdr_len;
struct page_frag *frag = tdata->frags;
/* data fits in the skb's headroom */
for (i = 0; i < tdata->nr_frags; i++, frag++) {
char *src = kmap_atomic(frag->page);
memcpy(dst, src+frag->offset, frag->size);
memcpy(dst, src + frag->offset, frag->size);
dst += frag->size;
kunmap_atomic(src);
}
if (padlen) {
memset(dst, 0, padlen);
padlen = 0;
}
skb_put(skb, count + padlen);
} else {
/* data fits into frag_list */
for (i = 0; i < tdata->nr_frags; i++) {
__skb_fill_page_desc(skb, i,
tdata->frags[i].page,
tdata->frags[i].offset,
tdata->frags[i].size);
skb_frag_ref(skb, i);
for (i = 0; i < tdata->nr_frags; i++, frag++) {
get_page(frag->page);
skb_fill_page_desc(skb, i, frag->page,
frag->offset, frag->size);
}
skb_shinfo(skb)->nr_frags = tdata->nr_frags;
skb->len += count;
skb->data_len += count;
skb->truesize += count;
}
} else {
pg = virt_to_page(task->data);
pg = virt_to_head_page(task->data);
get_page(pg);
skb_fill_page_desc(skb, 0, pg, offset_in_page(task->data),
count);
skb_fill_page_desc(skb, 0, pg,
task->data - (char *)page_address(pg),
count);
skb->len += count;
skb->data_len += count;
skb->truesize += count;
}
if (padlen) {
i = skb_shinfo(skb)->nr_frags;
get_page(rsvd_page);
skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
virt_to_page(padding), offset_in_page(padding),
padlen);
rsvd_page, 0, padlen);
skb->data_len += padlen;
skb->truesize += padlen;
skb->len += padlen;
}
if (likely(count > tdata->max_xmit_dlength))
cxgbi_prep_iso_info(task, skb, count);
else
cxgbi_skcb_clear_flag(skb, SKCBF_TX_ISO);
return 0;
}
EXPORT_SYMBOL_GPL(cxgbi_conn_init_pdu);
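A design note on the padding change visible above: pad bytes now come from rsvd_page, a zeroed page allocated once at module init, and each use takes a page reference so the put_page() performed when the skb is freed stays balanced. A sketch of the pattern (helper name hypothetical):
/* Pad an skb from a shared zeroed page; each use takes its own page ref. */
static void skb_add_zero_pad(struct sk_buff *skb, struct page *zero_page,
                             unsigned int padlen)
{
        get_page(zero_page);    /* balanced by put_page() on skb free */
        skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
                           zero_page, 0, padlen);
        skb->len += padlen;
        skb->data_len += padlen;
        skb->truesize += padlen;
}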
static int cxgbi_sock_tx_queue_up(struct cxgbi_sock *csk, struct sk_buff *skb)
{
struct cxgbi_device *cdev = csk->cdev;
struct cxgbi_iso_info *iso_cpl;
u32 frags = skb_shinfo(skb)->nr_frags;
u32 extra_len, num_pdu, hdr_len;
u32 iso_tx_rsvd = 0;
if (csk->state != CTP_ESTABLISHED) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, EAGAIN.\n",
csk, csk->state, csk->flags, csk->tid);
return -EPIPE;
}
if (csk->err) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, EPIPE %d.\n",
csk, csk->state, csk->flags, csk->tid, csk->err);
return -EPIPE;
}
if ((cdev->flags & CXGBI_FLAG_DEV_T3) &&
before((csk->snd_win + csk->snd_una), csk->write_seq)) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"csk 0x%p,%u,0x%lx,%u, FULL %u-%u >= %u.\n",
csk, csk->state, csk->flags, csk->tid, csk->write_seq,
csk->snd_una, csk->snd_win);
return -ENOBUFS;
}
if (cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO))
iso_tx_rsvd = cdev->skb_iso_txhdr;
if (unlikely(skb_headroom(skb) < (cdev->skb_tx_rsvd + iso_tx_rsvd))) {
pr_err("csk 0x%p, skb head %u < %u.\n",
csk, skb_headroom(skb), cdev->skb_tx_rsvd);
return -EINVAL;
}
if (skb->len != skb->data_len)
frags++;
if (frags >= SKB_WR_LIST_SIZE) {
pr_err("csk 0x%p, frags %u, %u,%u >%lu.\n",
csk, skb_shinfo(skb)->nr_frags, skb->len,
skb->data_len, SKB_WR_LIST_SIZE);
return -EINVAL;
}
cxgbi_skcb_set_flag(skb, SKCBF_TX_NEED_HDR);
skb_reset_transport_header(skb);
cxgbi_sock_skb_entail(csk, skb);
extra_len = cxgbi_ulp_extra_len(cxgbi_skcb_tx_ulp_mode(skb));
if (likely(cxgbi_skcb_test_flag(skb, SKCBF_TX_ISO))) {
iso_cpl = (struct cxgbi_iso_info *)skb->head;
num_pdu = iso_cpl->num_pdu;
hdr_len = cxgbi_skcb_tx_iscsi_hdrlen(skb);
extra_len = (cxgbi_ulp_extra_len(cxgbi_skcb_tx_ulp_mode(skb)) *
num_pdu) + (hdr_len * (num_pdu - 1));
}
csk->write_seq += (skb->len + extra_len);
return 0;
}
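The ISO write_seq accounting above, restated with assumed numbers: skb->len already includes one iSCSI header plus the payload, so each additional PDU the hardware carves out needs its own header counted, and every PDU carries its own digest/padding overhead.
/* Assumed example values for the ISO extra_len computed above. */
u32 num_pdu = 3;                /* PDUs the HW will carve from this skb */
u32 hdr_len = 48;               /* BHS; one copy is already in skb->len */
u32 ulp_extra = 8;              /* per-PDU digest/padding bytes */
u32 extra_len = ulp_extra * num_pdu + hdr_len * (num_pdu - 1);  /* 120 */
/* csk->write_seq advances by skb->len + extra_len */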
static int cxgbi_sock_send_skb(struct cxgbi_sock *csk, struct sk_buff *skb)
{
struct cxgbi_device *cdev = csk->cdev;
int len = skb->len;
int err;
spin_lock_bh(&csk->lock);
err = cxgbi_sock_tx_queue_up(csk, skb);
if (err < 0) {
spin_unlock_bh(&csk->lock);
return err;
}
if (likely(skb_queue_len(&csk->write_queue)))
cdev->csk_push_tx_frames(csk, 0);
spin_unlock_bh(&csk->lock);
return len;
}
int cxgbi_conn_xmit_pdu(struct iscsi_task *task)
{
struct iscsi_tcp_conn *tcp_conn = task->conn->dd_data;
struct cxgbi_conn *cconn = tcp_conn->dd_data;
struct iscsi_tcp_task *tcp_task = task->dd_data;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct cxgbi_task_tag_info *ttinfo = &tdata->ttinfo;
struct sk_buff *skb = tdata->skb;
struct sk_buff *skb;
struct cxgbi_sock *csk = NULL;
unsigned int datalen;
u32 pdulen = 0;
u32 datalen;
int err;
if (!tcp_task || (tcp_task->dd_data != tdata)) {
pr_err("task 0x%p,0x%p, tcp_task 0x%p, tdata 0x%p/0x%p.\n",
task, task->sc, tcp_task,
tcp_task ? tcp_task->dd_data : NULL, tdata);
return -EINVAL;
}
skb = tdata->skb;
if (!skb) {
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"task 0x%p\n", task);
"task 0x%p, skb NULL.\n", task);
return 0;
}
if (cconn && cconn->cep)
csk = cconn->cep->csk;
if (!csk) {
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"task 0x%p, csk gone.\n", task);
@ -2101,13 +2411,12 @@ int cxgbi_conn_xmit_pdu(struct iscsi_task *task)
if (!task->sc)
memcpy(skb->data, task->hdr, SKB_TX_ISCSI_PDU_HEADER_MAX);
err = cxgbi_sock_send_pdus(cconn->cep->csk, skb);
err = cxgbi_sock_send_skb(csk, skb);
if (err > 0) {
int pdulen = err;
pdulen += err;
log_debug(1 << CXGBI_DBG_PDU_TX,
"task 0x%p,0x%p, skb 0x%p, len %u/%u, rv %d.\n",
task, task->sc, skb, skb->len, skb->data_len, err);
log_debug(1 << CXGBI_DBG_PDU_TX, "task 0x%p,0x%p, rv %d.\n",
task, task->sc, err);
if (task->conn->hdrdgst_en)
pdulen += ISCSI_DIGEST_SIZE;
@ -2116,24 +2425,42 @@ int cxgbi_conn_xmit_pdu(struct iscsi_task *task)
pdulen += ISCSI_DIGEST_SIZE;
task->conn->txdata_octets += pdulen;
if (unlikely(cxgbi_is_iso_config(csk) && cxgbi_is_iso_disabled(csk))) {
if (time_after(jiffies, csk->prev_iso_ts + HZ)) {
csk->disable_iso = false;
csk->prev_iso_ts = 0;
log_debug(1 << CXGBI_DBG_PDU_TX,
"enable iso: csk 0x%p\n", csk);
}
}
return 0;
}
if (err == -EAGAIN || err == -ENOBUFS) {
log_debug(1 << CXGBI_DBG_PDU_TX,
"task 0x%p, skb 0x%p, len %u/%u, %d EAGAIN.\n",
task, skb, skb->len, skb->data_len, err);
"task 0x%p, skb 0x%p, len %u/%u, %d EAGAIN.\n",
task, skb, skb->len, skb->data_len, err);
/* reset skb to send when we are called again */
tdata->skb = skb;
if (cxgbi_is_iso_config(csk) && !cxgbi_is_iso_disabled(csk) &&
(csk->no_tx_credits++ >= 2)) {
csk->disable_iso = true;
csk->prev_iso_ts = jiffies;
log_debug(1 << CXGBI_DBG_PDU_TX,
"disable iso:csk 0x%p, ts:%lu\n",
csk, csk->prev_iso_ts);
}
return err;
}
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"itt 0x%x, skb 0x%p, len %u/%u, xmit err %d.\n",
task->itt, skb, skb->len, skb->data_len, err);
__kfree_skb(skb);
log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_PDU_TX,
"itt 0x%x, skb 0x%p, len %u/%u, xmit err %d.\n",
task->itt, skb, skb->len, skb->data_len, err);
iscsi_conn_printk(KERN_ERR, task->conn, "xmit err %d.\n", err);
iscsi_conn_failure(task->conn, ISCSI_ERR_XMIT_FAILED);
return err;
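A hedged consolidation of the ISO throttle spread across the success and -EAGAIN/-ENOBUFS paths above (helper name hypothetical, bookkeeping simplified): repeated credit stalls disable ISO and stamp the time, and a later successful transmit re-enables it once roughly a second has passed.
/* Hypothetical restatement of the ISO throttle in cxgbi_conn_xmit_pdu(). */
static void iso_throttle(struct cxgbi_sock *csk, bool tx_stalled)
{
        if (tx_stalled) {
                if (csk->no_tx_credits++ >= 2) {        /* repeated stalls */
                        csk->disable_iso = true;
                        csk->prev_iso_ts = jiffies;
                }
        } else if (csk->disable_iso &&
                   time_after(jiffies, csk->prev_iso_ts + HZ)) {
                csk->disable_iso = false;               /* back on after ~1s */
                csk->prev_iso_ts = 0;
        }
}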
@ -2145,7 +2472,7 @@ void cxgbi_cleanup_task(struct iscsi_task *task)
struct iscsi_tcp_task *tcp_task = task->dd_data;
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
if (!tcp_task || !tdata || (tcp_task->dd_data != tdata)) {
if (!tcp_task || (tcp_task->dd_data != tdata)) {
pr_info("task 0x%p,0x%p, tcp_task 0x%p, tdata 0x%p/0x%p.\n",
task, task->sc, tcp_task,
tcp_task ? tcp_task->dd_data : NULL, tdata);
@ -2749,12 +3076,17 @@ static int __init libcxgbi_init_module(void)
BUILD_BUG_ON(sizeof_field(struct sk_buff, cb) <
sizeof(struct cxgbi_skb_cb));
rsvd_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
if (!rsvd_page)
return -ENOMEM;
return 0;
}
static void __exit libcxgbi_exit_module(void)
{
cxgbi_device_unregister_all(0xFF);
put_page(rsvd_page);
return;
}


@ -76,6 +76,14 @@ do { \
#define ULP2_MAX_PDU_PAYLOAD \
(ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_LEN)
#define CXGBI_ULP2_MAX_ISO_PAYLOAD 65535
#define CXGBI_MAX_ISO_DATA_IN_SKB \
min_t(u32, MAX_SKB_FRAGS << PAGE_SHIFT, CXGBI_ULP2_MAX_ISO_PAYLOAD)
#define cxgbi_is_iso_config(csk) ((csk)->cdev->skb_iso_txhdr)
#define cxgbi_is_iso_disabled(csk) ((csk)->disable_iso)
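For scale, under an assumed configuration of 4 KiB pages and MAX_SKB_FRAGS of 17 (both config-dependent, not stated in this patch), the fragment bound works out as follows, so the min_t() clamps the per-skb ISO payload to the 65535-byte ULP2 ceiling:
/* Assumed: PAGE_SHIFT == 12, MAX_SKB_FRAGS == 17 (config-dependent). */
/* MAX_SKB_FRAGS << PAGE_SHIFT = 17 << 12 = 69632 */
/* min_t(u32, 69632, 65535) = 65535 ISO payload bytes per skb */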
/*
* For iscsi connections HW may inserts digest bytes into the pdu. Those digest
* bytes are not sent by the host but are part of the TCP payload and therefore
@ -162,6 +170,10 @@ struct cxgbi_sock {
u32 write_seq;
u32 snd_win;
u32 rcv_win;
bool disable_iso;
u32 no_tx_credits;
unsigned long prev_iso_ts;
};
/*
@ -203,6 +215,8 @@ struct cxgbi_skb_tx_cb {
void *handle;
void *arp_err_handler;
struct sk_buff *wr_next;
u16 iscsi_hdr_len;
u8 ulp_mode;
};
enum cxgbi_skcb_flags {
@ -218,6 +232,7 @@ enum cxgbi_skcb_flags {
SKCBF_RX_HCRC_ERR, /* header digest error */
SKCBF_RX_DCRC_ERR, /* data digest error */
SKCBF_RX_PAD_ERR, /* padding byte error */
SKCBF_TX_ISO, /* iso cpl in tx skb */
};
struct cxgbi_skb_cb {
@ -225,18 +240,18 @@ struct cxgbi_skb_cb {
struct cxgbi_skb_rx_cb rx;
struct cxgbi_skb_tx_cb tx;
};
unsigned char ulp_mode;
unsigned long flags;
unsigned int seq;
};
#define CXGBI_SKB_CB(skb) ((struct cxgbi_skb_cb *)&((skb)->cb[0]))
#define cxgbi_skcb_flags(skb) (CXGBI_SKB_CB(skb)->flags)
#define cxgbi_skcb_ulp_mode(skb) (CXGBI_SKB_CB(skb)->ulp_mode)
#define cxgbi_skcb_tcp_seq(skb) (CXGBI_SKB_CB(skb)->seq)
#define cxgbi_skcb_rx_ddigest(skb) (CXGBI_SKB_CB(skb)->rx.ddigest)
#define cxgbi_skcb_rx_pdulen(skb) (CXGBI_SKB_CB(skb)->rx.pdulen)
#define cxgbi_skcb_tx_wr_next(skb) (CXGBI_SKB_CB(skb)->tx.wr_next)
#define cxgbi_skcb_tx_iscsi_hdrlen(skb) (CXGBI_SKB_CB(skb)->tx.iscsi_hdr_len)
#define cxgbi_skcb_tx_ulp_mode(skb) (CXGBI_SKB_CB(skb)->tx.ulp_mode)
static inline void cxgbi_skcb_set_flag(struct sk_buff *skb,
enum cxgbi_skcb_flags flag)
@ -458,6 +473,7 @@ struct cxgbi_ports_map {
#define CXGBI_FLAG_IPV4_SET 0x10
#define CXGBI_FLAG_USE_PPOD_OFLDQ 0x40
#define CXGBI_FLAG_DDP_OFF 0x100
#define CXGBI_FLAG_DEV_ISO_OFF 0x400
struct cxgbi_device {
struct list_head list_head;
@ -477,6 +493,7 @@ struct cxgbi_device {
unsigned int pfvf;
unsigned int rx_credit_thres;
unsigned int skb_tx_rsvd;
u32 skb_iso_txhdr;
unsigned int skb_rx_extra; /* for msg coalesced mode */
unsigned int tx_max_size;
unsigned int rx_max_size;
@ -523,20 +540,41 @@ struct cxgbi_endpoint {
struct cxgbi_sock *csk;
};
#define MAX_PDU_FRAGS ((ULP2_MAX_PDU_PAYLOAD + 512 - 1) / 512)
struct cxgbi_task_data {
#define CXGBI_TASK_SGL_CHECKED 0x1
#define CXGBI_TASK_SGL_COPY 0x2
u8 flags;
unsigned short nr_frags;
struct page_frag frags[MAX_PDU_FRAGS];
struct page_frag frags[MAX_SKB_FRAGS];
struct sk_buff *skb;
unsigned int dlen;
unsigned int offset;
unsigned int count;
unsigned int sgoffset;
u32 total_count;
u32 total_offset;
u32 max_xmit_dlength;
struct cxgbi_task_tag_info ttinfo;
};
#define iscsi_task_cxgbi_data(task) \
((task)->dd_data + sizeof(struct iscsi_tcp_task))
struct cxgbi_iso_info {
#define CXGBI_ISO_INFO_FSLICE 0x1
#define CXGBI_ISO_INFO_LSLICE 0x2
#define CXGBI_ISO_INFO_IMM_ENABLE 0x4
u8 flags;
u8 op;
u8 ahs;
u8 num_pdu;
u32 mpdu;
u32 burst_size;
u32 len;
u32 segment_offset;
u32 datasn_offset;
u32 buffer_offset;
};
static inline void *cxgbi_alloc_big_mem(unsigned int size,
gfp_t gfp)
{


@ -1331,7 +1331,6 @@ static s32 adpt_i2o_reset_hba(adpt_hba* pHba)
printk(KERN_ERR"IOP reset failed - no free memory.\n");
return -ENOMEM;
}
memset(status,0,4);
msg[0]=EIGHT_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1]=I2O_CMD_ADAPTER_RESET<<24|HOST_TID<<12|ADAPTER_TID;
@ -2784,7 +2783,6 @@ static s32 adpt_i2o_init_outbound_q(adpt_hba* pHba)
pHba->name);
return -ENOMEM;
}
memset(status, 0, 4);
writel(EIGHT_WORD_MSG_SIZE| SGL_OFFSET_6, &msg[0]);
writel(I2O_CMD_OUTBOUND_INIT<<24 | HOST_TID<<12 | ADAPTER_TID, &msg[1]);
@ -2838,7 +2836,6 @@ static s32 adpt_i2o_init_outbound_q(adpt_hba* pHba)
printk(KERN_ERR "%s: Could not allocate reply pool\n", pHba->name);
return -ENOMEM;
}
memset(pHba->reply_pool, 0 , pHba->reply_fifo_size * REPLY_FRAME_SIZE * 4);
for(i = 0; i < pHba->reply_fifo_size; i++) {
writel(pHba->reply_pool_pa + (i * REPLY_FRAME_SIZE * 4),
@ -3073,7 +3070,6 @@ static int adpt_i2o_build_sys_table(void)
printk(KERN_WARNING "SysTab Set failed. Out of memory.\n");
return -ENOMEM;
}
memset(sys_tbl, 0, sys_tbl_len);
sys_tbl->num_entries = hba_count;
sys_tbl->version = I2OVERSION;


@ -1225,8 +1225,9 @@ static inline void esas2r_rq_init_request(struct esas2r_request *rq,
/* req_table entry should be NULL at this point - if not, halt */
if (a->req_table[LOWORD(vrq->scsi.handle)])
if (a->req_table[LOWORD(vrq->scsi.handle)]) {
esas2r_bugon();
}
/* fill in the table for this handle so we can get back to the
* request.


@ -75,7 +75,7 @@ static char event_buffer[EVENT_LOG_BUFF_SIZE];
/* A lock to protect the shared buffer used for formatting messages. */
static DEFINE_SPINLOCK(event_buffer_lock);
/**
/*
* translates an esas2r-defined logging event level to a kernel logging level.
*
* @param [in] level the esas2r-defined logging event level to translate
@ -101,7 +101,7 @@ static const char *translate_esas2r_event_level_to_kernel(const long level)
}
}
/**
/*
* the master logging function. this function will format the message as
* outlined by the formatting string, the input device information and the
* substitution arguments and output the resulting string to the system log.
@ -170,7 +170,7 @@ static int esas2r_log_master(const long level,
return 0;
}
/**
/*
* formats and logs a message to the system log.
*
* @param [in] level the event level of the message
@ -193,7 +193,7 @@ int esas2r_log(const long level, const char *format, ...)
return retval;
}
/**
/*
* formats and logs a message to the system log. this message will include
* device information.
*
@ -221,7 +221,7 @@ int esas2r_log_dev(const long level,
return retval;
}
/**
/*
* formats and logs a message to the system log. this message will include
* device information.
*


@ -645,7 +645,7 @@ static int fcoe_lport_config(struct fc_lport *lport)
return 0;
}
/**
/*
* fcoe_netdev_features_change - Updates the lport's offload flags based
* on the LLD netdev's FCoE feature flags
*/
@ -2029,7 +2029,7 @@ static int fcoe_ctlr_enabled(struct fcoe_ctlr_device *cdev)
/**
* fcoe_ctlr_mode() - Switch FIP mode
* @cdev: The FCoE Controller that is being modified
* @ctlr_dev: The FCoE Controller that is being modified
*
* When the FIP mode has been changed we need to update
* the multicast addresses to ensure we get the correct
@ -2136,9 +2136,7 @@ static bool fcoe_match(struct net_device *netdev)
/**
* fcoe_dcb_create() - Initialize DCB attributes and hooks
* @netdev: The net_device object of the L2 link that should be queried
* @port: The fcoe_port to bind FCoE APP priority with
* @
* @fcoe: The new FCoE interface
*/
static void fcoe_dcb_create(struct fcoe_interface *fcoe)
{
@ -2609,7 +2607,7 @@ static void fcoe_logo_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
fc_lport_logo_resp(seq, fp, lport);
}
/**
/*
* fcoe_elsct_send - FCoE specific ELS handler
*
* This does special case handling of FIP encapsulated ELS exchanges for FCoE,


@ -134,6 +134,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
/**
* fcoe_ctlr_init() - Initialize the FCoE Controller instance
* @fip: The FCoE controller to initialize
* @mode: FIP mode to set
*/
void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
{
@ -336,7 +337,7 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
printk(KERN_NOTICE "libfcoe: host%d: "
"FIP Fibre-Channel Forwarder MAC %pM deselected\n",
fip->lp->host->host_no, fip->dest_addr);
memset(fip->dest_addr, 0, ETH_ALEN);
eth_zero_addr(fip->dest_addr);
}
if (sel) {
printk(KERN_INFO "libfcoe: host%d: FIP selected "
@ -587,6 +588,7 @@ static void fcoe_ctlr_send_keep_alive(struct fcoe_ctlr *fip,
/**
* fcoe_ctlr_encaps() - Encapsulate an ELS frame for FIP, without sending it
* @fip: The FCoE controller for the ELS frame
* @lport: The local port
* @dtype: The FIP descriptor type for the frame
* @skb: The FCoE ELS frame including FC header but no FCoE headers
* @d_id: The destination port ID.
@ -1302,7 +1304,7 @@ drop:
/**
* fcoe_ctlr_recv_els() - Handle an incoming link reset frame
* @fip: The FCoE controller that received the frame
* @fh: The received FIP header
* @skb: The received FIP packet
*
* There may be multiple VN_Port descriptors.
* The overall length has already been checked.
@ -1775,7 +1777,7 @@ unlock:
/**
* fcoe_ctlr_timeout() - FIP timeout handler
* @arg: The FCoE controller that timed out
* @t: Timer context use to obtain the controller reference
*/
static void fcoe_ctlr_timeout(struct timer_list *t)
{
@ -1887,6 +1889,7 @@ static void fcoe_ctlr_recv_work(struct work_struct *recv_work)
/**
* fcoe_ctlr_recv_flogi() - Snoop pre-FIP receipt of FLOGI response
* @fip: The FCoE controller
* @lport: The local port
* @fp: The FC frame to snoop
*
* Snoop potential response to FLOGI or even incoming FLOGI.
@ -2158,7 +2161,7 @@ static struct fc_rport_operations fcoe_ctlr_vn_rport_ops = {
/**
* fcoe_ctlr_disc_stop_locked() - stop discovery in VN2VN mode
* @fip: The FCoE controller
* @lport: The local port
*
* Called with ctlr_mutex held.
*/
@ -2179,7 +2182,7 @@ static void fcoe_ctlr_disc_stop_locked(struct fc_lport *lport)
/**
* fcoe_ctlr_disc_stop() - stop discovery in VN2VN mode
* @fip: The FCoE controller
* @lport: The local port
*
* Called through the local port template for discovery.
* Called without the ctlr_mutex held.
@ -2195,7 +2198,7 @@ static void fcoe_ctlr_disc_stop(struct fc_lport *lport)
/**
* fcoe_ctlr_disc_stop_final() - stop discovery for shutdown in VN2VN mode
* @fip: The FCoE controller
* @lport: The local port
*
* Called through the local port template for discovery.
* Called without the ctlr_mutex held.
@ -2262,7 +2265,7 @@ static void fcoe_ctlr_vn_start(struct fcoe_ctlr *fip)
* fcoe_ctlr_vn_parse - parse probe request or response
* @fip: The FCoE controller
* @skb: incoming packet
* @rdata: buffer for resulting parsed VN entry plus fcoe_rport
* @frport: parsed FCoE rport from the probe request
*
* Returns non-zero error number on error.
* Does not consume the packet.
@ -2793,7 +2796,7 @@ drop:
* fcoe_ctlr_vlan_parse - parse vlan discovery request or response
* @fip: The FCoE controller
* @skb: incoming packet
* @rdata: buffer for resulting parsed VLAN entry plus fcoe_rport
* @frport: parsed FCoE rport from the probe request
*
* Returns non-zero error number on error.
* Does not consume the packet.
@ -2892,7 +2895,6 @@ len_err:
* @fip: The FCoE controller
* @sub: sub-opcode for vlan notification or vn2vn vlan notification
* @dest: The destination Ethernet MAC address
* @min_len: minimum size of the Ethernet payload to be sent
*/
static void fcoe_ctlr_vlan_send(struct fcoe_ctlr *fip,
enum fip_vlan_subcode sub,
@ -2969,9 +2971,8 @@ static void fcoe_ctlr_vlan_disc_reply(struct fcoe_ctlr *fip,
/**
* fcoe_ctlr_vlan_recv - vlan request receive handler for VN2VN mode.
* @lport: The local port
* @fp: The received frame
*
* @fip: The FCoE controller
* @skb: The received FIP packet
*/
static int fcoe_ctlr_vlan_recv(struct fcoe_ctlr *fip, struct sk_buff *skb)
{
@ -3015,9 +3016,8 @@ static void fcoe_ctlr_disc_recv(struct fc_lport *lport, struct fc_frame *fp)
fc_frame_free(fp);
}
/**
* fcoe_ctlr_disc_recv - start discovery for VN2VN mode.
* @fip: The FCoE controller
/*
* fcoe_ctlr_disc_start - start discovery for VN2VN mode.
*
* This sets a flag indicating that remote ports should be created
* and started for the peers we discover. We use the disc_callback


@ -382,6 +382,7 @@ EXPORT_SYMBOL_GPL(fcoe_clean_pending_queue);
/**
* fcoe_check_wait_queue() - Attempt to clear the transmit backlog
* @lport: The local port whose backlog is to be cleared
* @skb: The received FIP packet
*
* This empties the wait_queue, dequeues the head of the wait_queue queue
* and calls fcoe_start_io() for each packet. If all skb have been
@ -439,7 +440,7 @@ EXPORT_SYMBOL_GPL(fcoe_check_wait_queue);
/**
* fcoe_queue_timer() - The fcoe queue timer
* @lport: The local port
* @t: Timer context use to obtain the FCoE port
*
* Calls fcoe_check_wait_queue on timeout
*/
@ -672,6 +673,7 @@ static void fcoe_del_netdev_mapping(struct net_device *netdev)
/**
* fcoe_netdev_map_lookup - find the fcoe transport that matches the netdev on which
* it was created
* @netdev: The net device that the FCoE interface is on
*
* Returns : ptr to the fcoe transport that supports this netdev or NULL
* if not found.


@ -103,7 +103,7 @@ enum {
#define REG_FIFO_COUNT 14 /* R: FIFO Data Count */
#ifdef CONFIG_PM_SLEEP
static const struct dev_pm_ops fdomain_pm_ops;
static const struct dev_pm_ops __maybe_unused fdomain_pm_ops;
#define FDOMAIN_PM_OPS (&fdomain_pm_ops)
#else
#define FDOMAIN_PM_OPS NULL


@ -23,6 +23,7 @@
#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/delay.h>
@ -275,7 +276,7 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id)
}
if (fnic->ctlr.map_dest) {
memset(gw_mac, 0xff, ETH_ALEN);
eth_broadcast_addr(gw_mac);
format = FCPIO_FLOGI_REG_DEF_DEST;
} else {
memcpy(gw_mac, fnic->ctlr.dest_addr, ETH_ALEN);


@ -1258,8 +1258,10 @@ static void slot_complete_v1_hw(struct hisi_hba *hisi_hba,
!(cmplt_hdr_data & CMPLT_HDR_RSPNS_XFRD_MSK)) {
slot_err_v1_hw(hisi_hba, task, slot);
if (unlikely(slot->abort))
if (unlikely(slot->abort)) {
sas_task_abort(task);
return;
}
goto out;
}


@ -2404,8 +2404,10 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
error_info[0], error_info[1],
error_info[2], error_info[3]);
if (unlikely(slot->abort))
if (unlikely(slot->abort)) {
sas_task_abort(task);
return;
}
goto out;
}
@ -3300,7 +3302,7 @@ static irq_handler_t fatal_interrupts[HISI_SAS_FATAL_INT_NR] = {
fatal_axi_int_v2_hw
};
/**
/*
* There is a limitation in the hip06 chipset that we need
* to map in all mbigen interrupts, even if they are not used.
*/


@ -2235,8 +2235,10 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
dw0, dw1, complete_hdr->act, dw3,
error_info[0], error_info[1],
error_info[2], error_info[3]);
if (unlikely(slot->abort))
if (unlikely(slot->abort)) {
sas_task_abort(task);
return;
}
goto out;
}


@ -272,8 +272,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
if (shost->transportt->create_work_queue) {
snprintf(shost->work_q_name, sizeof(shost->work_q_name),
"scsi_wq_%d", shost->host_no);
shost->work_q = create_singlethread_workqueue(
shost->work_q_name);
shost->work_q = alloc_workqueue("%s",
WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_UNBOUND,
1, shost->work_q_name);
if (!shost->work_q) {
error = -EINVAL;
goto out_free_shost_data;
@ -487,7 +489,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
}
shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d",
WQ_UNBOUND | WQ_MEM_RECLAIM,
WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS,
1, shost->host_no);
if (!shost->tmf_work_q) {
shost_printk(KERN_WARNING, shost,


@ -59,7 +59,7 @@
* HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
* with an optional trailing '-' followed by a byte value (0-255).
*/
#define HPSA_DRIVER_VERSION "3.4.20-170"
#define HPSA_DRIVER_VERSION "3.4.20-200"
#define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
#define HPSA "hpsa"
@ -2134,6 +2134,7 @@ static int hpsa_slave_alloc(struct scsi_device *sdev)
}
/* configure scsi device based on internal per-device structure */
#define CTLR_TIMEOUT (120 * HZ)
static int hpsa_slave_configure(struct scsi_device *sdev)
{
struct hpsa_scsi_dev_t *sd;
@ -2144,17 +2145,21 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
if (sd) {
sd->was_removed = 0;
queue_depth = sd->queue_depth != 0 ?
sd->queue_depth : sdev->host->can_queue;
if (sd->external) {
queue_depth = EXTERNAL_QD;
sdev->eh_timeout = HPSA_EH_PTRAID_TIMEOUT;
blk_queue_rq_timeout(sdev->request_queue,
HPSA_EH_PTRAID_TIMEOUT);
} else {
queue_depth = sd->queue_depth != 0 ?
sd->queue_depth : sdev->host->can_queue;
}
} else
if (is_hba_lunid(sd->scsi3addr)) {
sdev->eh_timeout = CTLR_TIMEOUT;
blk_queue_rq_timeout(sdev->request_queue, CTLR_TIMEOUT);
}
} else {
queue_depth = sdev->host->can_queue;
}
scsi_change_queue_depth(sdev, queue_depth);
@ -3443,9 +3448,14 @@ static void hpsa_get_enclosure_info(struct ctlr_info *h,
struct ErrorInfo *ei = NULL;
struct bmic_sense_storage_box_params *bssbp = NULL;
struct bmic_identify_physical_device *id_phys = NULL;
struct ext_report_lun_entry *rle = &rlep->LUN[rle_index];
struct ext_report_lun_entry *rle;
u16 bmic_device_index = 0;
if (rle_index < 0 || rle_index >= HPSA_MAX_PHYS_LUN)
return;
rle = &rlep->LUN[rle_index];
encl_dev->eli =
hpsa_get_enclosure_logical_identifier(h, scsi3addr);
@ -4174,6 +4184,9 @@ static void hpsa_get_ioaccel_drive_info(struct ctlr_info *h,
int rc;
struct ext_report_lun_entry *rle;
if (rle_index < 0 || rle_index >= HPSA_MAX_PHYS_LUN)
return;
rle = &rlep->LUN[rle_index];
dev->ioaccel_handle = rle->ioaccel_handle;
@ -4198,7 +4211,12 @@ static void hpsa_get_path_info(struct hpsa_scsi_dev_t *this_device,
struct ReportExtendedLUNdata *rlep, int rle_index,
struct bmic_identify_physical_device *id_phys)
{
struct ext_report_lun_entry *rle = &rlep->LUN[rle_index];
struct ext_report_lun_entry *rle;
if (rle_index < 0 || rle_index >= HPSA_MAX_PHYS_LUN)
return;
rle = &rlep->LUN[rle_index];
if ((rle->device_flags & 0x08) && this_device->ioaccel_handle)
this_device->hba_ioaccel_enabled = 1;
@ -4420,7 +4438,8 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
/*
* Skip over some devices such as a spare.
*/
if (!tmpdevice->external && physical_device) {
if (phys_dev_index >= 0 && !tmpdevice->external &&
physical_device) {
skip_device = hpsa_skip_device(h, lunaddrbytes,
&physdev_list->LUN[phys_dev_index]);
if (skip_device)


@ -57,7 +57,7 @@ struct hpsa_sas_phy {
bool added_to_port;
};
#define EXTERNAL_QD 7
#define EXTERNAL_QD 128
struct hpsa_scsi_dev_t {
unsigned int devtype;
int bus, target, lun; /* as presented to the OS */


@ -1344,7 +1344,7 @@ static void ibmvfc_map_sg_list(struct scsi_cmnd *scmd, int nseg,
}
/**
* ibmvfc_map_sg_data - Maps dma for a scatterlist and initializes decriptor fields
* ibmvfc_map_sg_data - Maps dma for a scatterlist and initializes descriptor fields
* @scmd: struct scsi_cmnd with the scatterlist
* @evt: ibmvfc event struct
* @vfc_cmd: vfc_cmd that contains the memory descriptor


@ -669,7 +669,7 @@ static int map_sg_list(struct scsi_cmnd *cmd, int nseg,
}
/**
* map_sg_data: - Maps dma for a scatterlist and initializes decriptor fields
* map_sg_data: - Maps dma for a scatterlist and initializes descriptor fields
* @cmd: struct scsi_cmnd with the scatterlist
* @srp_cmd: srp_cmd that contains the memory descriptor
* @dev: device for which to map dma memory


@ -903,7 +903,6 @@ static int imm_engine(imm_struct *dev, struct scsi_cmnd *cmd)
w_ctr(ppb, 0x4);
}
return 0; /* Finished */
break;
default:
printk("imm: Invalid scsi phase\n");
@ -969,10 +968,8 @@ static int imm_abort(struct scsi_cmnd *cmd)
case 1: /* Have not connected to interface */
dev->cur_cmd = NULL; /* Forget the problem */
return SUCCESS;
break;
default: /* SCSI command sent, can not abort */
return FAILED;
break;
}
}


@ -670,6 +670,7 @@ static void ipr_reinit_ipr_cmnd(struct ipr_cmnd *ipr_cmd)
/**
* ipr_init_ipr_cmnd - Initialize an IPR Cmnd block
* @ipr_cmd: ipr command struct
* @fast_done: fast done function call-back
*
* Return value:
* none
@ -687,7 +688,7 @@ static void ipr_init_ipr_cmnd(struct ipr_cmnd *ipr_cmd,
/**
* __ipr_get_free_ipr_cmnd - Get a free IPR Cmnd block
* @ioa_cfg: ioa config struct
* @hrrq: hrr queue
*
* Return value:
* pointer to ipr command struct
@ -737,7 +738,6 @@ struct ipr_cmnd *ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
static void ipr_mask_and_clear_interrupts(struct ipr_ioa_cfg *ioa_cfg,
u32 clr_ints)
{
volatile u32 int_reg;
int i;
/* Stop new interrupts */
@ -757,7 +757,7 @@ static void ipr_mask_and_clear_interrupts(struct ipr_ioa_cfg *ioa_cfg,
if (ioa_cfg->sis64)
writel(~0, ioa_cfg->regs.clr_interrupt_reg);
writel(clr_ints, ioa_cfg->regs.clr_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
readl(ioa_cfg->regs.sense_interrupt_reg);
}
/**
@ -1287,7 +1287,7 @@ static int ipr_is_same_device(struct ipr_resource_entry *res,
/**
* __ipr_format_res_path - Format the resource path for printing.
* @res_path: resource path
* @buf: buffer
* @buffer: buffer
* @len: length of buffer provided
*
* Return value:
@ -1310,7 +1310,7 @@ static char *__ipr_format_res_path(u8 *res_path, char *buffer, int len)
* ipr_format_res_path - Format the resource path for printing.
* @ioa_cfg: ioa config struct
* @res_path: resource path
* @buf: buffer
* @buffer: buffer
* @len: length of buffer provided
*
* Return value:
@ -1391,7 +1391,6 @@ static void ipr_update_res_entry(struct ipr_resource_entry *res,
* ipr_clear_res_target - Clear the bit in the bit map representing the target
* for the resource.
* @res: resource entry struct
* @cfgtew: config table entry wrapper struct
*
* Return value:
* none
@ -2667,7 +2666,7 @@ static void ipr_process_error(struct ipr_cmnd *ipr_cmd)
/**
* ipr_timeout - An internally generated op has timed out.
* @ipr_cmd: ipr command struct
* @t: Timer context used to fetch ipr command struct
*
* This function blocks host requests and initiates an
* adapter reset.
@ -2700,7 +2699,7 @@ static void ipr_timeout(struct timer_list *t)
/**
* ipr_oper_timeout - Adapter timed out transitioning to operational
* @ipr_cmd: ipr command struct
* @t: Timer context used to fetch ipr command struct
*
* This function blocks host requests and initiates an
* adapter reset.
@ -3484,6 +3483,7 @@ static struct bin_attribute ipr_trace_attr = {
/**
* ipr_show_fw_version - Show the firmware version
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -3518,6 +3518,7 @@ static struct device_attribute ipr_fw_version_attr = {
/**
* ipr_show_log_level - Show the adapter's error logging level
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -3540,7 +3541,9 @@ static ssize_t ipr_show_log_level(struct device *dev,
/**
* ipr_store_log_level - Change the adapter's error logging level
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
* Return value:
* number of bytes printed to buffer
@ -3571,6 +3574,7 @@ static struct device_attribute ipr_log_level_attr = {
/**
* ipr_store_diagnostics - IOA Diagnostics interface
* @dev: device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
@ -3631,7 +3635,8 @@ static struct device_attribute ipr_diagnostics_attr = {
/**
* ipr_show_adapter_state - Show the adapter's state
* @class_dev: device struct
* @dev: device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -3657,6 +3662,7 @@ static ssize_t ipr_show_adapter_state(struct device *dev,
/**
* ipr_store_adapter_state - Change adapter state
* @dev: device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
@ -3708,6 +3714,7 @@ static struct device_attribute ipr_ioa_state_attr = {
/**
* ipr_store_reset_adapter - Reset the adapter
* @dev: device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
@ -3749,6 +3756,7 @@ static int ipr_iopoll(struct irq_poll *iop, int budget);
/**
* ipr_show_iopoll_weight - Show ipr polling mode
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -3772,7 +3780,9 @@ static ssize_t ipr_show_iopoll_weight(struct device *dev,
/**
* ipr_store_iopoll_weight - Change the adapter's polling mode
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
* Return value:
* number of bytes printed to buffer
@ -3871,7 +3881,7 @@ static struct ipr_sglist *ipr_alloc_ucode_buffer(int buf_len)
/**
* ipr_free_ucode_buffer - Frees a microcode download buffer
* @p_dnld: scatter/gather list pointer
* @sglist: scatter/gather list pointer
*
* Free a DMA'able ucode download buffer previously allocated with
* ipr_alloc_ucode_buffer
@ -4059,7 +4069,8 @@ static int ipr_update_ioa_ucode(struct ipr_ioa_cfg *ioa_cfg,
/**
* ipr_store_update_fw - Update the firmware on the adapter
* @class_dev: device struct
* @dev: device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
@ -4139,6 +4150,7 @@ static struct device_attribute ipr_update_fw_attr = {
/**
* ipr_show_fw_type - Show the adapter's firmware type.
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -4480,7 +4492,6 @@ static int ipr_free_dump(struct ipr_ioa_cfg *ioa_cfg) { return 0; };
* ipr_change_queue_depth - Change the device's queue depth
* @sdev: scsi device struct
* @qdepth: depth to set
* @reason: calling context
*
* Return value:
* actual depth set
@ -4650,6 +4661,7 @@ static struct device_attribute ipr_resource_type_attr = {
/**
* ipr_show_raw_mode - Show the adapter's raw mode
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
*
* Return value:
@ -4677,7 +4689,9 @@ static ssize_t ipr_show_raw_mode(struct device *dev,
/**
* ipr_store_raw_mode - Change the adapter's raw mode
* @dev: class device struct
* @attr: device attribute (unused)
* @buf: buffer
* @count: buffer size
*
* Return value:
* number of bytes printed to buffer
@ -5060,7 +5074,7 @@ static int ipr_match_lun(struct ipr_cmnd *ipr_cmd, void *device)
/**
* ipr_cmnd_is_free - Check if a command is free or not
* @ipr_cmd ipr command struct
* @ipr_cmd: ipr command struct
*
* Returns:
* true / false
@ -5096,7 +5110,7 @@ static int ipr_match_res(struct ipr_cmnd *ipr_cmd, void *resource)
/**
* ipr_wait_for_ops - Wait for matching commands to complete
* @ipr_cmd: ipr command struct
* @ioa_cfg: ioa config struct
* @device: device to match (sdev)
* @match: match function to use
*
@ -5261,6 +5275,7 @@ static int ipr_device_reset(struct ipr_ioa_cfg *ioa_cfg,
* ipr_sata_reset - Reset the SATA port
* @link: SATA link to reset
* @classes: class of the attached device
* @deadline: unused
*
* This function issues a SATA phy reset to the affected ATA link.
*
@ -5440,7 +5455,7 @@ static void ipr_bus_reset_done(struct ipr_cmnd *ipr_cmd)
/**
* ipr_abort_timeout - An abort task has timed out
* @ipr_cmd: ipr command struct
* @t: Timer context used to fetch ipr command struct
*
* This function handles when an abort task times out. If this
* happens we issue a bus reset since we have resources tied
@ -5494,7 +5509,7 @@ static int ipr_cancel_op(struct scsi_cmnd *scsi_cmd)
struct ipr_ioa_cfg *ioa_cfg;
struct ipr_resource_entry *res;
struct ipr_cmd_pkt *cmd_pkt;
u32 ioasc, int_reg;
u32 ioasc;
int i, op_found = 0;
struct ipr_hrr_queue *hrrq;
@ -5517,7 +5532,7 @@ static int ipr_cancel_op(struct scsi_cmnd *scsi_cmd)
* by a still not detected EEH error. In such cases, reading a register will
* trigger the EEH recovery infrastructure.
*/
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
readl(ioa_cfg->regs.sense_interrupt_reg);
if (!ipr_is_gscsi(res))
return FAILED;
@ -5569,7 +5584,8 @@ static int ipr_cancel_op(struct scsi_cmnd *scsi_cmd)
/**
* ipr_eh_abort - Abort a single op
* @scsi_cmd: scsi command struct
* @shost: scsi host struct
* @elapsed_time: elapsed time
*
* Return value:
* 0 if scan in progress / 1 if scan is complete
@ -5696,6 +5712,7 @@ static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg,
* ipr_isr_eh - Interrupt service routine error handler
* @ioa_cfg: ioa config struct
* @msg: message to log
* @number: various meanings depending on the caller/message
*
* Return value:
* none
@ -5762,7 +5779,6 @@ static int ipr_process_hrrq(struct ipr_hrr_queue *hrr_queue, int budget,
static int ipr_iopoll(struct irq_poll *iop, int budget)
{
struct ipr_ioa_cfg *ioa_cfg;
struct ipr_hrr_queue *hrrq;
struct ipr_cmnd *ipr_cmd, *temp;
unsigned long hrrq_flags;
@ -5770,7 +5786,6 @@ static int ipr_iopoll(struct irq_poll *iop, int budget)
LIST_HEAD(doneq);
hrrq = container_of(iop, struct ipr_hrr_queue, iopoll);
ioa_cfg = hrrq->ioa_cfg;
spin_lock_irqsave(hrrq->lock, hrrq_flags);
completed_ops = ipr_process_hrrq(hrrq, budget, &doneq);
@ -6268,8 +6283,7 @@ static void ipr_dump_ioasa(struct ipr_ioa_cfg *ioa_cfg,
/**
* ipr_gen_sense - Generate SCSI sense data from an IOASA
* @ioasa: IOASA
* @sense_buf: sense data buffer
* @ipr_cmd: ipr command struct
*
* Return value:
* none
@ -6702,7 +6716,7 @@ static int ipr_ioctl(struct scsi_device *sdev, unsigned int cmd,
/**
* ipr_info - Get information about the card/driver
* @scsi_host: scsi host struct
* @host: scsi host struct
*
* Return value:
* pointer to buffer with description string
@ -7592,7 +7606,7 @@ static int ipr_ioafp_mode_select_page28(struct ipr_cmnd *ipr_cmd)
/**
* ipr_build_mode_sense - Builds a mode sense command
* @ipr_cmd: ipr command struct
* @res: resource entry struct
* @res_handle: resource entry struct
* @parm: Byte 2 of mode sense command
* @dma_addr: DMA address of mode sense buffer
* @xfer_len: Size of DMA buffer
@ -7939,6 +7953,7 @@ static void ipr_build_ioa_service_action(struct ipr_cmnd *ipr_cmd,
/**
* ipr_ioafp_set_caching_parameters - Issue Set Cache parameters service
* action
* @ipr_cmd: ipr command struct
*
* Return value:
* none
@ -7975,6 +7990,10 @@ static int ipr_ioafp_set_caching_parameters(struct ipr_cmnd *ipr_cmd)
/**
* ipr_ioafp_inquiry - Send an Inquiry to the adapter.
* @ipr_cmd: ipr command struct
* @flags: flags to send
* @page: page to inquire
* @dma_addr: DMA address
* @xfer_len: transfer data length
*
* This utility function sends an inquiry to the adapter.
*
@ -8265,7 +8284,7 @@ static int ipr_ioafp_identify_hrrq(struct ipr_cmnd *ipr_cmd)
/**
* ipr_reset_timer_done - Adapter reset timer function
* @ipr_cmd: ipr command struct
* @t: Timer context used to fetch ipr command struct
*
* Description: This function is used in adapter reset processing
* for timing events. If the reset_cmd pointer in the IOA
@ -8659,7 +8678,6 @@ static int ipr_dump_mailbox_wait(struct ipr_cmnd *ipr_cmd)
static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
{
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
u32 int_reg;
ENTER;
ioa_cfg->pdev->state_saved = true;
@ -8675,7 +8693,7 @@ static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
if (ioa_cfg->sis64) {
/* Set the adapter to the correct endian mode. */
writel(IPR_ENDIAN_SWAP_KEY, ioa_cfg->regs.endian_swap_reg);
int_reg = readl(ioa_cfg->regs.endian_swap_reg);
readl(ioa_cfg->regs.endian_swap_reg);
}
if (ioa_cfg->ioa_unit_checked) {
@ -9483,7 +9501,6 @@ static pci_ers_result_t ipr_pci_error_detected(struct pci_dev *pdev,
* Description: This is the second phase of adapter initialization
* This function takes care of initializing the adapter to the point
* where it can accept new commands.
* Return value:
* 0 on success / -EIO on failure
**/
@ -9597,7 +9614,7 @@ static void ipr_free_irqs(struct ipr_ioa_cfg *ioa_cfg)
/**
* ipr_free_all_resources - Free all allocated resources for an adapter.
* @ipr_cmd: ipr command struct
* @ioa_cfg: ioa config struct
*
* This function frees all allocated resources for the
* specified adapter.
@ -10059,7 +10076,8 @@ static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg,
/**
* ipr_test_intr - Handle the interrupt generated in ipr_test_msi().
* @pdev: PCI device struct
* @devp: PCI device struct
* @irq: IRQ number
*
* Description: Simply set the msi_received flag to 1 indicating that
* Message Signaled Interrupts are supported.
@ -10085,6 +10103,7 @@ static irqreturn_t ipr_test_intr(int irq, void *devp)
/**
* ipr_test_msi - Test for Message Signaled Interrupt (MSI) support.
* @ioa_cfg: ioa config struct
* @pdev: PCI device struct
*
* Description: This routine sets up and initiates a test interrupt to determine
@ -10097,7 +10116,6 @@ static irqreturn_t ipr_test_intr(int irq, void *devp)
static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
{
int rc;
volatile u32 int_reg;
unsigned long lock_flags = 0;
int irq = pci_irq_vector(pdev, 0);
@ -10108,7 +10126,7 @@ static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
ioa_cfg->msi_received = 0;
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE, ioa_cfg->regs.clr_interrupt_mask_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
readl(ioa_cfg->regs.sense_interrupt_mask_reg);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
rc = request_irq(irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
@ -10119,7 +10137,7 @@ static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
dev_info(&pdev->dev, "IRQ assigned: %d\n", irq);
writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE, ioa_cfg->regs.sense_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
readl(ioa_cfg->regs.sense_interrupt_reg);
wait_event_timeout(ioa_cfg->msi_wait_q, ioa_cfg->msi_received, HZ);
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
@ -10530,6 +10548,8 @@ static void ipr_remove(struct pci_dev *pdev)
/**
* ipr_probe - Adapter hot plug add entry point
* @pdev: pci device struct
* @dev_id: pci device ID
*
* Return value:
* 0 on success / non-zero on failure
@ -10786,6 +10806,7 @@ static struct pci_driver ipr_driver = {
/**
* ipr_halt_done - Shutdown prepare completion
* @ipr_cmd: ipr command struct
*
* Return value:
* none
@ -10797,6 +10818,9 @@ static void ipr_halt_done(struct ipr_cmnd *ipr_cmd)
/**
* ipr_halt - Issue shutdown prepare to all adapters
* @nb: Notifier block
* @event: Notifier event
* @buf: Notifier data (unused)
*
* Return value:
* NOTIFY_OK on success / NOTIFY_DONE on failure


@ -1684,7 +1684,7 @@ struct ipr_dump_entry_header {
struct ipr_dump_location_entry {
struct ipr_dump_entry_header hdr;
u8 location[20];
}__attribute__((packed));
}__attribute__((packed, aligned (4)));
struct ipr_dump_trace_entry {
struct ipr_dump_entry_header hdr;
@ -1708,7 +1708,7 @@ struct ipr_driver_dump {
struct ipr_dump_location_entry location_entry;
struct ipr_dump_ioa_type_entry ioa_type_entry;
struct ipr_dump_trace_entry trace_entry;
}__attribute__((packed));
}__attribute__((packed, aligned (4)));
struct ipr_ioa_dump {
struct ipr_dump_entry_header hdr;


@ -2239,7 +2239,7 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
major = 0;
minor = 0;
strncpy(ha->bios_version, " ?", 8);
memcpy(ha->bios_version, " ?", 8);
if (ha->pcidev->device == IPS_DEVICEID_COPPERHEAD) {
if (IPS_USE_MEMIO(ha)) {
@ -3515,11 +3515,11 @@ ips_send_cmd(ips_ha_t * ha, ips_scb_t * scb)
inquiry.Flags[1] =
IPS_SCSI_INQ_WBus16 |
IPS_SCSI_INQ_Sync;
strncpy(inquiry.VendorId, "IBM ",
memcpy(inquiry.VendorId, "IBM ",
8);
strncpy(inquiry.ProductId,
memcpy(inquiry.ProductId,
"SERVERAID ", 16);
strncpy(inquiry.ProductRevisionLevel,
memcpy(inquiry.ProductRevisionLevel,
"1.00", 4);
ips_scmd_buf_write(scb->scsi_cmd,
@ -4036,9 +4036,9 @@ ips_inquiry(ips_ha_t * ha, ips_scb_t * scb)
inquiry.Flags[0] = IPS_SCSI_INQ_Address16;
inquiry.Flags[1] =
IPS_SCSI_INQ_WBus16 | IPS_SCSI_INQ_Sync | IPS_SCSI_INQ_CmdQue;
strncpy(inquiry.VendorId, "IBM ", 8);
strncpy(inquiry.ProductId, "SERVERAID ", 16);
strncpy(inquiry.ProductRevisionLevel, "1.00", 4);
memcpy(inquiry.VendorId, "IBM ", 8);
memcpy(inquiry.ProductId, "SERVERAID ", 16);
memcpy(inquiry.ProductRevisionLevel, "1.00", 4);
ips_scmd_buf_write(scb->scsi_cmd, &inquiry, sizeof (inquiry));
@ -4697,7 +4697,6 @@ ips_init_copperhead(ips_ha_t * ha)
uint8_t Isr;
uint8_t Cbsp;
uint8_t PostByte[IPS_MAX_POST_BYTES];
uint8_t ConfigByte[IPS_MAX_CONFIG_BYTES];
int i, j;
METHOD_TRACE("ips_init_copperhead", 1);
@ -4742,7 +4741,7 @@ ips_init_copperhead(ips_ha_t * ha)
/* error occurred */
return (0);
ConfigByte[i] = inb(ha->io_addr + IPS_REG_ISPR);
inb(ha->io_addr + IPS_REG_ISPR);
outb(Isr, ha->io_addr + IPS_REG_HISR);
}
@ -4791,7 +4790,6 @@ ips_init_copperhead_memio(ips_ha_t * ha)
uint8_t Isr = 0;
uint8_t Cbsp;
uint8_t PostByte[IPS_MAX_POST_BYTES];
uint8_t ConfigByte[IPS_MAX_CONFIG_BYTES];
int i, j;
METHOD_TRACE("ips_init_copperhead_memio", 1);
@ -4836,7 +4834,7 @@ ips_init_copperhead_memio(ips_ha_t * ha)
/* error occurred */
return (0);
ConfigByte[i] = readb(ha->mem_ptr + IPS_REG_ISPR);
readb(ha->mem_ptr + IPS_REG_ISPR);
writeb(Isr, ha->mem_ptr + IPS_REG_HISR);
}
@ -5622,10 +5620,10 @@ ips_write_driver_status(ips_ha_t * ha, int intr)
/* change values (as needed) */
ha->nvram->operating_system = IPS_OS_LINUX;
ha->nvram->adapter_type = ha->ad_type;
strncpy((char *) ha->nvram->driver_high, IPS_VERSION_HIGH, 4);
strncpy((char *) ha->nvram->driver_low, IPS_VERSION_LOW, 4);
strncpy((char *) ha->nvram->bios_high, ha->bios_version, 4);
strncpy((char *) ha->nvram->bios_low, ha->bios_version + 4, 4);
memcpy((char *) ha->nvram->driver_high, IPS_VERSION_HIGH, 4);
memcpy((char *) ha->nvram->driver_low, IPS_VERSION_LOW, 4);
memcpy((char *) ha->nvram->bios_high, ha->bios_version, 4);
memcpy((char *) ha->nvram->bios_low, ha->bios_version + 4, 4);
ha->nvram->versioning = 0; /* Indicate the Driver Does Not Support Versioning */
@ -6835,8 +6833,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
uint32_t mem_addr;
uint32_t io_len;
uint32_t mem_len;
uint8_t bus;
uint8_t func;
int j;
int index;
dma_addr_t dma_address;
@ -6856,10 +6852,6 @@ ips_init_phase1(struct pci_dev *pci_dev, int *indexPtr)
if (index >= IPS_MAX_ADAPTERS)
return -1;
/* stuff that we get in dev */
bus = pci_dev->bus->number;
func = pci_dev->devfn;
/* Init MEM/IO addresses to 0 */
mem_addr = 0;
io_addr = 0;

drivers/scsi/isci/request.c

@ -3444,7 +3444,7 @@ struct isci_request *isci_tmf_request_from_tag(struct isci_host *ihost,
int isci_request_execute(struct isci_host *ihost, struct isci_remote_device *idev,
struct sas_task *task, u16 tag)
{
enum sci_status status = SCI_FAILURE_UNSUPPORTED_PROTOCOL;
enum sci_status status;
struct isci_request *ireq;
unsigned long flags;
int ret = 0;

drivers/scsi/libfc/fc_disc.c

@ -337,7 +337,7 @@ static void fc_disc_error(struct fc_disc *disc, struct fc_frame *fp)
/**
* fc_disc_gpn_ft_req() - Send Get Port Names by FC-4 type (GPN_FT) request
* @lport: The discovery context
* @disc: The discovery context
*/
static void fc_disc_gpn_ft_req(struct fc_disc *disc)
{
@ -370,7 +370,7 @@ err:
/**
* fc_disc_gpn_ft_parse() - Parse the body of the dNS GPN_FT response.
* @lport: The local port the GPN_FT was received on
* @disc: The discovery context
* @buf: The GPN_FT response buffer
* @len: The size of response buffer
*
@ -488,7 +488,7 @@ static void fc_disc_timeout(struct work_struct *work)
* fc_disc_gpn_ft_resp() - Handle a response frame from Get Port Names (GPN_FT)
* @sp: The sequence that the GPN_FT response was received on
* @fp: The GPN_FT response frame
* @lp_arg: The discovery context
* @disc_arg: The discovery context
*
* Locking Note: This function is called without disc mutex held, and
* should do all its processing with the mutex held
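The libfc changes in this file are pure kernel-doc repairs: each @name: line must match an actual parameter of the function it documents, otherwise scripts/kernel-doc reports excess or missing parameters. A minimal sketch of a well-formed comment (the function is hypothetical, not from libfc):

#include <stddef.h>

struct fc_disc;	/* opaque here; the real definition lives in libfc */

/**
 * fc_example_send() - Send one example frame (hypothetical helper)
 * @disc: The discovery context that owns the frame
 * @len: Payload length in bytes
 *
 * Every @name above must match a parameter in the signature below,
 * which is exactly what the hunks here fix (@lport had drifted from
 * the real parameter names).
 */
static int fc_example_send(struct fc_disc *disc, size_t len)
{
	(void)disc;
	(void)len;
	return 0;
}

int main(void)
{
	return fc_example_send((struct fc_disc *)0, 0);
}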

drivers/scsi/libfc/fc_exch.c

@ -49,6 +49,8 @@ static struct workqueue_struct *fc_exch_workqueue;
* @total_exches: Total allocated exchanges
* @lock: Exch pool lock
* @ex_list: List of exchanges
* @left: Cache of free slot in exch array
* @right: Cache of free slot in exch array
*
* This structure manages per cpu exchanges in array of exchange pointers.
* This array is allocated followed by struct fc_exch_pool memory for
@ -60,7 +62,6 @@ struct fc_exch_pool {
u16 next_index;
u16 total_exches;
/* two cache of free slot in exch array */
u16 left;
u16 right;
} ____cacheline_aligned_in_smp;
@ -74,6 +75,7 @@ struct fc_exch_pool {
* @ep_pool: Reserved exchange pointers
* @pool_max_index: Max exch array index in exch pool
* @pool: Per cpu exch pool
* @lport: Local exchange port
* @stats: Statistics structure
*
* This structure is the center for creating exchanges and sequences.
@ -702,6 +704,9 @@ int fc_seq_exch_abort(const struct fc_seq *req_sp, unsigned int timer_msec)
/**
* fc_invoke_resp() - invoke ep->resp()
* @ep: The exchange to be operated on
* @fp: The frame pointer to pass through to ->resp()
* @sp: The sequence pointer to pass through to ->resp()
*
* Notes:
* It is assumed that after initialization finished (this means the

drivers/scsi/libfc/fc_fcp.c

@ -289,6 +289,7 @@ static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
/**
* fc_fcp_retry_cmd() - Retry a fcp_pkt
* @fsp: The FCP packet to be retried
* @status_code: The FCP status code to set
*
* Sets the status code to be FC_ERROR and then calls
* fc_fcp_complete_locked() which in turn calls fc_io_compl().
@ -580,7 +581,7 @@ err:
/**
* fc_fcp_send_data() - Send SCSI data to a target
* @fsp: The FCP packet the data is on
* @sp: The sequence the data is to be sent on
* @seq: The sequence the data is to be sent on
* @offset: The starting offset for this data request
* @seq_blen: The burst length for this data request
*
@ -1283,7 +1284,7 @@ static int fc_fcp_pkt_abort(struct fc_fcp_pkt *fsp)
/**
* fc_lun_reset_send() - Send LUN reset command
* @data: The FCP packet that identifies the LUN to be reset
* @t: Timer context used to fetch the FSP packet
*/
static void fc_lun_reset_send(struct timer_list *t)
{
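The @data: to @t: doc changes track the kernel's timer conversion: since the timer_setup() API, a callback receives the struct timer_list pointer itself and recovers its enclosing object with from_timer(), which is container_of() underneath. A stand-alone sketch of the idiom in plain C, with container_of spelled out; fcp_pkt_demo is an invented stand-in for struct fc_fcp_pkt:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct timer_list_demo { int expires; };

struct fcp_pkt_demo {
	int lun;
	struct timer_list_demo timer;	/* embedded, as in fc_fcp_pkt */
};

/* Mirrors fc_lun_reset_send(struct timer_list *t): the callback gets
 * the timer pointer and walks back to the containing packet. */
static void lun_reset_send_demo(struct timer_list_demo *t)
{
	struct fcp_pkt_demo *fsp =
		container_of(t, struct fcp_pkt_demo, timer);

	printf("reset LUN %d\n", fsp->lun);
}

int main(void)
{
	struct fcp_pkt_demo pkt = { .lun = 3 };

	lun_reset_send_demo(&pkt.timer);
	return 0;
}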
@ -1409,7 +1410,7 @@ static void fc_fcp_cleanup(struct fc_lport *lport)
/**
* fc_fcp_timeout() - Handler for fcp_pkt timeouts
* @data: The FCP packet that has timed out
* @t: Timer context used to fetch the FSP packet
*
* If REC is supported then just issue it and return. The REC exchange will
* complete or time out and recovery can continue at that point. Otherwise,
@ -1691,6 +1692,7 @@ out:
/**
* fc_fcp_recovery() - Handler for fcp_pkt recovery
* @fsp: The FCP pkt that needs to be aborted
* @code: The FCP status code to set
*/
static void fc_fcp_recovery(struct fc_fcp_pkt *fsp, u8 code)
{
@ -1709,6 +1711,7 @@ static void fc_fcp_recovery(struct fc_fcp_pkt *fsp, u8 code)
* fc_fcp_srr() - Send a SRR request (Sequence Retransmission Request)
* @fsp: The FCP packet the SRR is to be sent on
* @r_ctl: The R_CTL field for the SRR request
* @offset: The SRR relative offset
* This is called after receiving status but insufficient data, or
* when expecting status but the request has timed out.
*/
@ -1851,7 +1854,7 @@ static inline int fc_fcp_lport_queue_ready(struct fc_lport *lport)
/**
* fc_queuecommand() - The queuecommand function of the SCSI template
* @shost: The Scsi_Host that the command was issued to
* @cmd: The scsi_cmnd to be executed
* @sc_cmd: The scsi_cmnd to be executed
*
* This is the i/o strategy routine, called by the SCSI layer.
*/

drivers/scsi/libfc/fc_lport.c

@ -405,7 +405,7 @@ static void fc_lport_recv_rlir_req(struct fc_lport *lport, struct fc_frame *fp)
/**
* fc_lport_recv_echo_req() - Handle received ECHO request
* @lport: The local port receiving the ECHO
* @fp: ECHO request frame
* @in_fp: ECHO request frame
*/
static void fc_lport_recv_echo_req(struct fc_lport *lport,
struct fc_frame *in_fp)
@ -440,7 +440,7 @@ static void fc_lport_recv_echo_req(struct fc_lport *lport,
/**
* fc_lport_recv_rnid_req() - Handle received Request Node ID data request
* @lport: The local port receiving the RNID
* @fp: The RNID request frame
* @in_fp: The RNID request frame
*/
static void fc_lport_recv_rnid_req(struct fc_lport *lport,
struct fc_frame *in_fp)
@ -1325,6 +1325,7 @@ static void fc_lport_enter_scr(struct fc_lport *lport)
/**
* fc_lport_enter_ns() - register some object with the name server
* @lport: Fibre Channel local port to register
* @state: Local port state
*/
static void fc_lport_enter_ns(struct fc_lport *lport, enum fc_lport_state state)
{
@ -1423,6 +1424,7 @@ err:
/**
* fc_lport_enter_ms() - management server commands
* @lport: Fibre Channel local port to register
* @state: Local port state
*/
static void fc_lport_enter_ms(struct fc_lport *lport, enum fc_lport_state state)
{
@ -1932,6 +1934,7 @@ static void fc_lport_bsg_resp(struct fc_seq *sp, struct fc_frame *fp,
* @job: The BSG Passthrough job
* @lport: The local port sending the request
* @did: The destination port id
* @tov: The timeout period (in ms)
*/
static int fc_lport_els_request(struct bsg_job *job,
struct fc_lport *lport,

drivers/scsi/libfc/fc_rport.c

@ -121,7 +121,7 @@ EXPORT_SYMBOL(fc_rport_lookup);
/**
* fc_rport_create() - Create a new remote port
* @lport: The local port this remote port will be associated with
* @ids: The identifiers for the new remote port
* @port_id: The identifiers for the new remote port
*
* The remote port will start in the INIT state.
*/
@ -1445,7 +1445,7 @@ drop:
* fc_rport_logo_resp() - Handler for logout (LOGO) responses
* @sp: The sequence the LOGO was on
* @fp: The LOGO response frame
* @lport_arg: The local port
* @rdata_arg: The remote port
*/
static void fc_rport_logo_resp(struct fc_seq *sp, struct fc_frame *fp,
void *rdata_arg)

drivers/scsi/libsas/sas_ata.c

@ -507,10 +507,23 @@ void sas_ata_end_eh(struct ata_port *ap)
spin_unlock_irqrestore(&ha->lock, flags);
}
static int sas_ata_prereset(struct ata_link *link, unsigned long deadline)
{
struct ata_port *ap = link->ap;
struct domain_device *dev = ap->private_data;
struct sas_phy *local_phy = sas_get_local_phy(dev);
int res = 0;
if (!local_phy->enabled || test_bit(SAS_DEV_GONE, &dev->state))
res = -ENOENT;
sas_put_local_phy(local_phy);
return res;
}
static struct ata_port_operations sas_sata_ops = {
.prereset = ata_std_prereset,
.prereset = sas_ata_prereset,
.hardreset = sas_ata_hard_reset,
.postreset = ata_std_postreset,
.error_handler = ata_std_error_handler,
.post_internal_cmd = sas_ata_post_internal,
.qc_defer = ata_std_qc_defer,

drivers/scsi/libsas/sas_expander.c

@ -427,7 +427,7 @@ out_err:
static int sas_expander_discover(struct domain_device *dev)
{
struct expander_device *ex = &dev->ex_dev;
int res = -ENOMEM;
int res;
ex->ex_phy = kcalloc(ex->num_phys, sizeof(*ex->ex_phy), GFP_KERNEL);
if (!ex->ex_phy)
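Both this hunk and the earlier isci one (dropping the SCI_FAILURE_UNSUPPORTED_PROTOCOL initializer) remove initializers whose value is always overwritten before use. Beyond being noise, such dead stores hide real bugs: with the dummy value gone, the compiler's uninitialized-variable analysis can flag any path that forgets the assignment. A tiny illustration:

/* Compile with: gcc -O2 -Wall -Wmaybe-uninitialized dead_store.c */
#include <stdio.h>

static int discover(int have_phys)
{
	int res;	/* no dummy "= -ENOMEM": let the compiler check flow */

	if (have_phys)
		res = 0;
	else
		res = -1;
	/* If either branch forgot to set res, -Wmaybe-uninitialized
	 * would warn at the return below; a dummy initializer would
	 * have silenced that warning forever. */
	return res;
}

int main(void)
{
	printf("%d\n", discover(1));
	return 0;
}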

drivers/scsi/lpfc/lpfc.h

@ -627,6 +627,14 @@ struct lpfc_ras_fwlog {
enum ras_state state; /* RAS logging running state */
};
#define DBG_LOG_STR_SZ 256
#define DBG_LOG_SZ 256
struct dbg_log_ent {
char log[DBG_LOG_STR_SZ];
u64 t_ns;
};
enum lpfc_irq_chann_mode {
/* Assign IRQs to all possible cpus that have hardware queues */
NORMAL_MODE,
@ -709,6 +717,9 @@ struct lpfc_hba {
struct workqueue_struct *wq;
struct delayed_work eq_delay_work;
#define LPFC_IDLE_STAT_DELAY 1000
struct delayed_work idle_stat_delay_work;
struct lpfc_sli sli;
uint8_t pci_dev_grp; /* lpfc PCI dev group: 0x0, 0x1, 0x2,... */
uint32_t sli_rev; /* SLI2, SLI3, or SLI4 */
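idle_stat_delay_work joins eq_delay_work as periodic housekeeping, with LPFC_IDLE_STAT_DELAY (1000 ms, per the define above) presumably its period. The hunk only adds the field; a hedged kernel-side sketch of how such a delayed work item is normally wired up (handler and setup names invented):

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct delayed_work idle_stat_delay_work;

static void idle_stat_fn(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);

	/* ... sample idle statistics here ... */

	/* re-arm: the usual pattern for a periodic delayed work */
	queue_delayed_work(system_wq, dwork, msecs_to_jiffies(1000));
}

static void idle_stat_start(void)
{
	INIT_DELAYED_WORK(&idle_stat_delay_work, idle_stat_fn);
	queue_delayed_work(system_wq, &idle_stat_delay_work,
			   msecs_to_jiffies(1000));
}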
@ -1237,6 +1248,10 @@ struct lpfc_hba {
struct scsi_host_template port_template;
/* SCSI host template information - for all vports */
struct scsi_host_template vport_template;
atomic_t dbg_log_idx;
atomic_t dbg_log_cnt;
atomic_t dbg_log_dmping;
struct dbg_log_ent dbg_log[DBG_LOG_SZ];
};
static inline struct Scsi_Host *
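The four new fields above form a small lock-free trace ring: dbg_log_idx is the atomic write cursor into the 256-entry dbg_log[], dbg_log_cnt counts valid entries until the ring wraps, and dbg_log_dmping flags that a dump is in progress so writers back off. A stand-alone sketch of that structure (userspace stdatomic, names shortened):

#include <stdatomic.h>
#include <stdio.h>

#define DBG_LOG_SZ     256
#define DBG_LOG_STR_SZ 256

struct dbg_ent { char msg[DBG_LOG_STR_SZ]; };

static struct dbg_ent ring[DBG_LOG_SZ];
static atomic_uint log_idx;	/* monotonic write cursor (wraps via %) */
static atomic_uint dumping;	/* nonzero while a dump is in progress */

static void dbg_log(const char *msg)
{
	unsigned int i;

	if (atomic_load(&dumping))
		return;		/* writers back off during a dump */
	/* fetch_add hands each writer a unique slot without a lock */
	i = atomic_fetch_add(&log_idx, 1) % DBG_LOG_SZ;
	snprintf(ring[i].msg, DBG_LOG_STR_SZ, "%s", msg);
}

static void dbg_dump(void)
{
	unsigned int end = atomic_load(&log_idx), i;

	atomic_store(&dumping, 1);
	for (i = end > DBG_LOG_SZ ? end - DBG_LOG_SZ : 0; i < end; i++)
		printf("%u: %s\n", i, ring[i % DBG_LOG_SZ].msg);
	atomic_store(&dumping, 0);
}

int main(void)
{
	dbg_log("link up");
	dbg_log("flogi sent");
	dbg_dump();
	return 0;
}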

drivers/scsi/lpfc/lpfc_bsg.c

@ -2404,33 +2404,27 @@ lpfc_sli4_bsg_link_diag_test(struct bsg_job *job)
union lpfc_sli4_cfg_shdr *shdr;
uint32_t shdr_status, shdr_add_status;
struct diag_status *diag_status_reply;
int mbxstatus, rc = 0;
int mbxstatus, rc = -ENODEV, rc1 = 0;
shost = fc_bsg_to_shost(job);
if (!shost) {
rc = -ENODEV;
if (!shost)
goto job_error;
}
vport = shost_priv(shost);
if (!vport) {
rc = -ENODEV;
goto job_error;
}
phba = vport->phba;
if (!phba) {
rc = -ENODEV;
goto job_error;
}
if (phba->sli_rev < LPFC_SLI_REV4) {
rc = -ENODEV;
vport = shost_priv(shost);
if (!vport)
goto job_error;
}
phba = vport->phba;
if (!phba)
goto job_error;
if (phba->sli_rev < LPFC_SLI_REV4)
goto job_error;
if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) <
LPFC_SLI_INTF_IF_TYPE_2) {
rc = -ENODEV;
LPFC_SLI_INTF_IF_TYPE_2)
goto job_error;
}
if (job->request_len < sizeof(struct fc_bsg_request) +
sizeof(struct sli4_link_diag)) {
@ -2465,8 +2459,10 @@ lpfc_sli4_bsg_link_diag_test(struct bsg_job *job)
alloc_len = lpfc_sli4_config(phba, pmboxq, LPFC_MBOX_SUBSYSTEM_FCOE,
LPFC_MBOX_OPCODE_FCOE_LINK_DIAG_STATE,
req_len, LPFC_SLI4_MBX_EMBED);
if (alloc_len != req_len)
if (alloc_len != req_len) {
rc = -ENOMEM;
goto link_diag_test_exit;
}
run_link_diag_test = &pmboxq->u.mqe.un.link_diag_test;
bf_set(lpfc_mbx_run_diag_test_link_num, &run_link_diag_test->u.req,
@ -2515,7 +2511,7 @@ lpfc_sli4_bsg_link_diag_test(struct bsg_job *job)
diag_status_reply->shdr_add_status = shdr_add_status;
link_diag_test_exit:
rc = lpfc_sli4_bsg_set_link_diag_state(phba, 0);
rc1 = lpfc_sli4_bsg_set_link_diag_state(phba, 0);
if (pmboxq)
mempool_free(pmboxq, phba->mbox_mem_pool);
@ -2524,6 +2520,8 @@ link_diag_test_exit:
job_error:
/* make error code available to userspace */
if (rc1 && !rc)
rc = rc1;
bsg_reply->result = rc;
/* complete the job back to userspace if no error */
if (rc == 0)
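The rewritten prologue presets rc = -ENODEV once, so every validation failure becomes a bare goto job_error; rc1 then keeps the cleanup call's status out of rc, and a cleanup failure is reported only when the main path otherwise succeeded. A compact sketch of this two-variable goto-cleanup idiom (helpers invented):

#include <stdio.h>

#define ENODEV 19

static int validate(int ok)	{ return ok; }
static int do_work(void)	{ return 0; }
static int cleanup_hw(void)	{ return 0; } /* may fail independently */

static int diag_test(int ok)
{
	int rc = -ENODEV, rc1 = 0;

	if (!validate(ok))
		goto job_error;	/* rc already says why */

	rc = do_work();		/* 0 or a specific error */
	rc1 = cleanup_hw();	/* never clobbers rc directly */

job_error:
	if (rc1 && !rc)		/* surface a cleanup failure only if */
		rc = rc1;	/* the main path was otherwise fine */
	return rc;
}

int main(void)
{
	printf("%d %d\n", diag_test(1), diag_test(0));
	return 0;
}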
@ -4306,6 +4304,7 @@ lpfc_bsg_handle_sli_cfg_mbox(struct lpfc_hba *phba, struct bsg_job *job,
case COMN_OPCODE_GET_CNTL_ADDL_ATTRIBUTES:
case COMN_OPCODE_GET_CNTL_ATTRIBUTES:
case COMN_OPCODE_GET_PROFILE_CONFIG:
case COMN_OPCODE_SET_FEATURES:
lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
"3106 Handled SLI_CONFIG "
"subsys_comn, opcode:x%x\n",

drivers/scsi/lpfc/lpfc_bsg.h

@ -225,6 +225,10 @@ struct lpfc_sli_config_hdr {
uint32_t reserved5;
};
#define LPFC_CSF_BOOT_DEV 0x1D
#define LPFC_CSF_QUERY 0
#define LPFC_CSF_SAVE 1
struct lpfc_sli_config_emb0_subsys {
struct lpfc_sli_config_hdr sli_config_hdr;
#define LPFC_MBX_SLI_CONFIG_MAX_MSE 19
@ -243,6 +247,15 @@ struct lpfc_sli_config_emb0_subsys {
#define FCOE_OPCODE_ADD_FCF 0x09
#define FCOE_OPCODE_SET_DPORT_MODE 0x27
#define FCOE_OPCODE_GET_DPORT_RESULTS 0x28
uint32_t timeout; /* comn_set_feature timeout */
uint32_t request_length; /* comn_set_feature request len */
uint32_t version; /* comn_set_feature version */
uint32_t csf_feature; /* comn_set_feature feature */
uint32_t word69; /* comn_set_feature parameter len */
uint32_t word70; /* comn_set_feature parameter val0 */
#define lpfc_emb0_subcmnd_csf_p0_SHIFT 0
#define lpfc_emb0_subcmnd_csf_p0_MASK 0x3
#define lpfc_emb0_subcmnd_csf_p0_WORD word70
};
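The _SHIFT/_MASK/_WORD triplet added for word70 is lpfc's convention for hardware word access: generic bf_set()/bf_get() macros paste the field name to find all three. A hedged reconstruction of how such macros typically expand (the exact lpfc definitions live in lpfc_hw4.h; this shows the standard shape):

#include <stdint.h>
#include <stdio.h>

#define bf_set(name, ptr, value) \
	((ptr)->name##_WORD = (((value) & name##_MASK) << name##_SHIFT) | \
	 ((ptr)->name##_WORD & ~(name##_MASK << name##_SHIFT)))
#define bf_get(name, ptr) \
	(((ptr)->name##_WORD >> name##_SHIFT) & name##_MASK)

struct emb0_demo {
	uint32_t word70;
#define csf_p0_SHIFT 0
#define csf_p0_MASK  0x3
#define csf_p0_WORD  word70
};

int main(void)
{
	struct emb0_demo d = { .word70 = 0 };

	bf_set(csf_p0, &d, 2);			/* write the 2-bit field */
	printf("p0 = %u\n", bf_get(csf_p0, &d));	/* -> 2 */
	return 0;
}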
struct lpfc_sli_config_emb1_subsys {
@ -261,6 +274,7 @@ struct lpfc_sli_config_emb1_subsys {
#define COMN_OPCODE_WRITE_OBJECT 0xAC
#define COMN_OPCODE_READ_OBJECT_LIST 0xAD
#define COMN_OPCODE_DELETE_OBJECT 0xAE
#define COMN_OPCODE_SET_FEATURES 0xBF
#define COMN_OPCODE_GET_CNTL_ADDL_ATTRIBUTES 0x79
#define COMN_OPCODE_GET_CNTL_ATTRIBUTES 0x20
uint32_t timeout;

drivers/scsi/lpfc/lpfc_crtn.h

@ -386,7 +386,7 @@ void lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp);
int lpfc_link_reset(struct lpfc_vport *vport);
/* Function prototypes. */
int lpfc_check_pci_resettable(const struct lpfc_hba *phba);
int lpfc_check_pci_resettable(struct lpfc_hba *phba);
const char* lpfc_info(struct Scsi_Host *);
int lpfc_scan_finished(struct Scsi_Host *, unsigned long);

drivers/scsi/lpfc/lpfc_ct.c

@ -300,7 +300,7 @@ lpfc_ct_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocb)
return 0;
}
/**
/*
* lpfc_gen_req - Build and issue a GEN_REQUEST command to the SLI Layer
* @vport: pointer to a host virtual N_Port data structure.
* @bmp: Pointer to BPL for SLI command
@ -394,7 +394,7 @@ lpfc_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
return 0;
}
/**
/*
* lpfc_ct_cmd - Build and issue a CT command
* @vport: pointer to a host virtual N_Port data structure.
* @inmp: Pointer to data buffer for response data.
@ -750,7 +750,7 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if (vport->fc_flag & FC_RSCN_MODE)
lpfc_els_flush_rscn(vport);
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0257 GID_FT Query error: 0x%x 0x%x\n",
irsp->ulpStatus, vport->fc_ns_retry);
} else {
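Dozens of hunks from here on make the same substitution: error-severity messages move from per-subsystem masks (LOG_ELS, LOG_DISCOVERY, LOG_MBOX, ...) to LOG_TRACE_EVENT, which plausibly ties into the new dbg_log ring added in lpfc.h so that hitting one of these errors can dump recent trace entries. The underlying verbosity gate is a simple mask test; a sketch of that mechanism, with illustrative mask values rather than lpfc's real ones:

#include <stdio.h>

#define LOG_ELS		0x00000001
#define LOG_DISCOVERY	0x00000002
#define LOG_TRACE_EVENT	0x80000000	/* illustrative bit */

static unsigned int cfg_log_verbose = LOG_DISCOVERY;

/* Simplified model: a message prints when its mask is enabled, and
 * trace-event class messages always pass the gate. */
#define vlog(mask, fmt, ...)						\
	do {								\
		if ((mask) & (cfg_log_verbose | LOG_TRACE_EVENT))	\
			printf(fmt "\n", ##__VA_ARGS__);		\
	} while (0)

int main(void)
{
	vlog(LOG_ELS, "suppressed: ELS verbosity not enabled");
	vlog(LOG_DISCOVERY, "printed: discovery verbosity on");
	vlog(LOG_TRACE_EVENT, "always printed: trace-event class");
	return 0;
}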
@ -811,7 +811,7 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
} else {
/* NameServer Rsp Error */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0241 NameServer Rsp Error "
"Data: x%x x%x x%x x%x\n",
CTrsp->CommandResponse.bits.CmdRsp,
@ -951,7 +951,7 @@ lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if (vport->fc_flag & FC_RSCN_MODE)
lpfc_els_flush_rscn(vport);
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"4103 GID_FT Query error: 0x%x 0x%x\n",
irsp->ulpStatus, vport->fc_ns_retry);
} else {
@ -1012,7 +1012,7 @@ lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
}
} else {
/* NameServer Rsp Error */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"4109 NameServer Rsp Error "
"Data: x%x x%x x%x x%x\n",
CTrsp->CommandResponse.bits.CmdRsp,
@ -1143,7 +1143,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
}
}
}
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0267 NameServer GFF Rsp "
"x%x Error (%d %d) Data: x%x x%x\n",
did, irsp->ulpStatus, irsp->un.ulpWord[4],
@ -1271,7 +1271,7 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
}
}
} else
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"3065 GFT_ID failed x%08x\n", irsp->ulpStatus);
lpfc_ct_free_iocb(phba, cmdiocb);
@ -1320,7 +1320,7 @@ lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
irsp->ulpStatus, irsp->un.ulpWord[4], cmdcode);
if (irsp->ulpStatus) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0268 NS cmd x%x Error (x%x x%x)\n",
cmdcode, irsp->ulpStatus, irsp->un.ulpWord[4]);
@ -1843,7 +1843,7 @@ ns_cmd_free_mpvirt:
ns_cmd_free_mp:
kfree(mp);
ns_cmd_exit:
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0266 Issue NameServer Req x%x err %d Data: x%x x%x\n",
cmdcode, rc, vport->fc_flag, vport->fc_rscn_id_cnt);
return 1;
@ -3019,8 +3019,8 @@ int (*lpfc_fdmi_port_action[])
* lpfc_fdmi_cmd - Build and send a FDMI cmd to the specified NPort
* @vport: pointer to a host virtual N_Port data structure.
* @ndlp: ndlp to send FDMI cmd to (if NULL use FDMI_DID)
* cmdcode: FDMI command to send
* mask: Mask of HBA or PORT Attributes to send
* @cmdcode: FDMI command to send
* @new_mask: Mask of HBA or PORT Attributes to send
*
* Builds and sends a FDMI command using the CT subsystem.
*/
@ -3262,7 +3262,7 @@ fdmi_cmd_exit:
/**
* lpfc_delayed_disc_tmo - Timeout handler for delayed discovery timer.
* @ptr - Context object of the timer.
* @t: Context object of the timer.
*
* This function set the WORKER_DELAYED_DISC_TMO flag and wake up
* the worker thread.

drivers/scsi/lpfc/lpfc_els.c

@ -100,7 +100,7 @@ lpfc_els_chk_latt(struct lpfc_vport *vport)
return 0;
/* Pending Link Event during Discovery */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0237 Pending Link Event during "
"Discovery: State x%x\n",
phba->pport->port_state);
@ -440,8 +440,9 @@ fail_free_mbox:
fail:
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"0249 Cannot issue Register Fabric login: Err %d\n", err);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0249 Cannot issue Register Fabric login: Err %d\n",
err);
return -ENXIO;
}
@ -524,8 +525,8 @@ fail:
}
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"0289 Issue Register VFI failed: Err %d\n", rc);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0289 Issue Register VFI failed: Err %d\n", rc);
return rc;
}
@ -550,7 +551,7 @@ lpfc_issue_unreg_vfi(struct lpfc_vport *vport)
mboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mboxq) {
lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2556 UNREG_VFI mbox allocation failed"
"HBA state x%x\n", phba->pport->port_state);
return -ENOMEM;
@ -562,7 +563,7 @@ lpfc_issue_unreg_vfi(struct lpfc_vport *vport)
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2557 UNREG_VFI issue mbox failed rc x%x "
"HBA state x%x\n",
rc, phba->pport->port_state);
@ -1041,18 +1042,18 @@ stop_rr_fcf_flogi:
if (!(irsp->ulpStatus == IOSTAT_LOCAL_REJECT &&
((irsp->un.ulpWord[4] & IOERR_PARAM_MASK) ==
IOERR_LOOP_OPEN_FAILURE)))
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"2858 FLOGI failure Status:x%x/x%x "
"TMO:x%x Data x%x x%x\n",
irsp->ulpStatus, irsp->un.ulpWord[4],
irsp->ulpTimeout, phba->hba_flag,
phba->fcf.fcf_flag);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2858 FLOGI failure Status:x%x/x%x TMO"
":x%x Data x%x x%x\n",
irsp->ulpStatus, irsp->un.ulpWord[4],
irsp->ulpTimeout, phba->hba_flag,
phba->fcf.fcf_flag);
/* Check for retry */
if (lpfc_els_retry(phba, cmdiocb, rspiocb))
goto out;
lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
lpfc_printf_vlog(vport, KERN_WARNING, LOG_TRACE_EVENT,
"0150 FLOGI failure Status:x%x/x%x "
"xri x%x TMO:x%x\n",
irsp->ulpStatus, irsp->un.ulpWord[4],
@ -1132,8 +1133,7 @@ stop_rr_fcf_flogi:
else if (!(phba->hba_flag & HBA_FCOE_MODE))
rc = lpfc_cmpl_els_flogi_nport(vport, ndlp, sp);
else {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_FIP | LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2831 FLOGI response with cleared Fabric "
"bit fcf_index 0x%x "
"Switch Name %02x%02x%02x%02x%02x%02x%02x%02x "
@ -1934,7 +1934,7 @@ lpfc_cmpl_els_rrq(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
ndlp = lpfc_findnode_did(vport, irsp->un.elsreq64.remoteID);
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) || ndlp != rrq->ndlp) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2882 RRQ completes to NPort x%x "
"with no ndlp. Data: x%x x%x x%x\n",
irsp->un.elsreq64.remoteID,
@ -1957,10 +1957,11 @@ lpfc_cmpl_els_rrq(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
(((irsp->un.ulpWord[4]) >> 16 != LSRJT_INVALID_CMD) &&
((irsp->un.ulpWord[4]) >> 16 != LSRJT_UNABLE_TPC)) ||
(phba)->pport->cfg_log_verbose & LOG_ELS)
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"2881 RRQ failure DID:%06X Status:x%x/x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
irsp->un.ulpWord[4]);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2881 RRQ failure DID:%06X Status:"
"x%x/x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
irsp->un.ulpWord[4]);
}
out:
if (rrq)
@ -2010,7 +2011,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
ndlp = lpfc_findnode_did(vport, irsp->un.elsreq64.remoteID);
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0136 PLOGI completes to NPort x%x "
"with no ndlp. Data: x%x x%x x%x\n",
irsp->un.elsreq64.remoteID,
@ -2059,7 +2060,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
(((irsp->un.ulpWord[4]) >> 16 != LSRJT_INVALID_CMD) &&
((irsp->un.ulpWord[4]) >> 16 != LSRJT_UNABLE_TPC)) ||
(phba)->pport->cfg_log_verbose & LOG_ELS)
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2753 PLOGI failure DID:%06X Status:x%x/x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
irsp->un.ulpWord[4]);
@ -2237,6 +2238,7 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
IOCB_t *irsp;
struct lpfc_nodelist *ndlp;
char *mode;
u32 loglevel;
/* we pass cmdiocb to state machine which needs rspiocb as well */
cmdiocb->context_un.rsp_iocb = rspiocb;
@ -2278,13 +2280,16 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
* could be expected.
*/
if ((vport->fc_flag & FC_FABRIC) ||
(vport->cfg_enable_fc4_type != LPFC_ENABLE_BOTH))
(vport->cfg_enable_fc4_type != LPFC_ENABLE_BOTH)) {
mode = KERN_ERR;
else
loglevel = LOG_TRACE_EVENT;
} else {
mode = KERN_INFO;
loglevel = LOG_ELS;
}
/* PRLI failed */
lpfc_printf_vlog(vport, mode, LOG_ELS,
lpfc_printf_vlog(vport, mode, loglevel,
"2754 PRLI failure DID:%06X Status:x%x/x%x, "
"data: x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
@ -2695,7 +2700,7 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
goto out;
}
/* ADISC failed */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2755 ADISC failure DID:%06X Status:x%x/x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
irsp->un.ulpWord[4]);
@ -2853,7 +2858,7 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
*/
if (irsp->ulpStatus) {
/* LOGO failed */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2756 LOGO failure, No Retry DID:%06X Status:x%x/x%x\n",
ndlp->nlp_DID, irsp->ulpStatus,
irsp->un.ulpWord[4]);
@ -3597,7 +3602,7 @@ lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp)
/**
* lpfc_els_retry_delay - Timer function with a ndlp delayed function timer
* @ptr: holder for the pointer to the timer function associated data (ndlp).
* @t: pointer to the timer function associated data (ndlp).
*
* This routine is invoked by the ndlp delayed-function timer to check
* whether there is any pending ELS retry event(s) with the node. If not, it
@ -3734,7 +3739,7 @@ lpfc_link_reset(struct lpfc_vport *vport)
"2851 Attempt link reset\n");
mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mbox) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2852 Failed to allocate mbox memory");
return 1;
}
@ -3756,7 +3761,7 @@ lpfc_link_reset(struct lpfc_vport *vport)
mbox->vport = vport;
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
if ((rc != MBX_BUSY) && (rc != MBX_SUCCESS)) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2853 Failed to issue INIT_LINK "
"mbox command, rc:x%x\n", rc);
mempool_free(mbox, phba->mbox_mem_pool);
@ -3860,7 +3865,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
break;
case IOERR_ILLEGAL_COMMAND:
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0124 Retry illegal cmd x%x "
"retry:x%x delay:x%x\n",
cmd, cmdiocb->retry, delay);
@ -3970,7 +3975,8 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
(cmd == ELS_CMD_FDISC) &&
(stat.un.b.lsRjtRsnCodeExp == LSEXP_OUT_OF_RESOURCE)){
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0125 FDISC Failed (x%x). "
"Fabric out of resources\n",
stat.un.lsRjtError);
@ -4009,7 +4015,8 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
LSEXP_NOTHING_MORE) {
vport->fc_sparam.cmn.bbRcvSizeMsb &= 0xf;
retry = 1;
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0820 FLOGI Failed (x%x). "
"BBCredit Not Supported\n",
stat.un.lsRjtError);
@ -4022,7 +4029,8 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
((stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_PNAME) ||
(stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_NPORT_ID))
) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0122 FDISC Failed (x%x). "
"Fabric Detected Bad WWN\n",
stat.un.lsRjtError);
@ -4200,7 +4208,7 @@ out_retry:
}
/* No retry ELS command <elsCmd> to remote NPORT <did> */
if (logerr) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0137 No retry ELS command x%x to remote "
"NPORT x%x: Out of Resources: Error:x%x/%x\n",
cmd, did, irsp->ulpStatus,
@ -4499,7 +4507,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
irsp = &rspiocb->iocb;
if (!vport) {
lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3177 ELS response failed\n");
goto out;
}
@ -4605,7 +4613,7 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND;
/* ELS rsp: Cannot issue reg_login for <NPortid> */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0138 ELS rsp: Cannot issue reg_login for x%x "
"Data: x%x x%x x%x\n",
ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
@ -4843,7 +4851,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
/**
* lpfc_els_rsp_reject - Prepare and issue a rjt response iocb command
* @vport: pointer to a virtual N_Port data structure.
* @rejectError:
* @rejectError: reject response to issue
* @oldiocb: pointer to the original lpfc command iocb data structure.
* @ndlp: pointer to a node-list data structure.
* @mbox: pointer to the driver internal queue element for mailbox command.
@ -6411,8 +6419,8 @@ lpfc_els_rcv_lcb(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
lcb_context->rx_id = cmdiocb->iocb.ulpContext;
lcb_context->ndlp = lpfc_nlp_get(ndlp);
if (lpfc_sli4_set_beacon(vport, lcb_context, state)) {
lpfc_printf_vlog(ndlp->vport, KERN_ERR,
LOG_ELS, "0193 failed to send mail box");
lpfc_printf_vlog(ndlp->vport, KERN_ERR, LOG_TRACE_EVENT,
"0193 failed to send mail box");
kfree(lcb_context);
lpfc_nlp_put(ndlp);
rjt_err = LSRJT_UNABLE_TPC;
@ -6621,7 +6629,7 @@ lpfc_send_rscn_event(struct lpfc_vport *vport,
rscn_event_data = kmalloc(sizeof(struct lpfc_rscn_event_header) +
payload_len, GFP_KERNEL);
if (!rscn_event_data) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0147 Failed to allocate memory for RSCN event\n");
return;
}
@ -6998,7 +7006,7 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
/* An FLOGI ELS command <elsCmd> was received from DID <did> in
Loop Mode */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0113 An FLOGI ELS command x%x was "
"received from DID x%x in Loop Mode\n",
cmd, did);
@ -7879,7 +7887,7 @@ lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
/**
* lpfc_els_timeout - Handler function for the els timer
* @ptr: holder for the timer function associated data.
* @t: timer context used to obtain the vport.
*
* This routine is invoked by the ELS timer after timeout. It posts the ELS
* timer timeout event by setting the WORKER_ELS_TMO bit to the work port
@ -7988,7 +7996,7 @@ lpfc_els_timeout_handler(struct lpfc_vport *vport)
list_for_each_entry_safe(piocb, tmp_iocb, &abort_list, dlist) {
cmd = &piocb->iocb;
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0127 ELS timeout Data: x%x x%x x%x "
"x%x\n", els_command,
remote_ID, cmd->ulpCommand, cmd->ulpIoTag);
@ -8098,7 +8106,7 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
spin_unlock_irqrestore(&phba->hbalock, iflags);
}
if (!list_empty(&abort_list))
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"3387 abort list for txq not empty\n");
INIT_LIST_HEAD(&abort_list);
@ -8252,7 +8260,7 @@ lpfc_send_els_failure_event(struct lpfc_hba *phba,
* lpfc_send_els_event - Posts unsolicited els event
* @vport: Pointer to vport object.
* @ndlp: Pointer FC node object.
* @cmd: ELS command code.
* @payload: ELS command code type.
*
* This function posts an event when there is an incoming
* unsolicited ELS command.
@ -8269,7 +8277,7 @@ lpfc_send_els_event(struct lpfc_vport *vport,
if (*payload == ELS_CMD_LOGO) {
logo_data = kmalloc(sizeof(struct lpfc_logo_event), GFP_KERNEL);
if (!logo_data) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0148 Failed to allocate memory "
"for LOGO event\n");
return;
@ -8279,7 +8287,7 @@ lpfc_send_els_event(struct lpfc_vport *vport,
els_data = kmalloc(sizeof(struct lpfc_els_event_header),
GFP_KERNEL);
if (!els_data) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0149 Failed to allocate memory "
"for ELS event\n");
return;
@ -8337,7 +8345,7 @@ DECLARE_ENUM2STR_LOOKUP(lpfc_get_fpin_li_event_nm, fc_fpin_li_event_types,
/**
* lpfc_els_rcv_fpin_li - Process an FPIN Link Integrity Event.
* @vport: Pointer to vport object.
* @lnk_not: Pointer to the Link Integrity Notification Descriptor.
* @tlv: Pointer to the Link Integrity Notification Descriptor.
*
* This function processes a link integrity FPIN event by
* logging a message
@ -8396,7 +8404,7 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, struct fc_els_fpin *fpin,
break;
default:
dtag_nm = lpfc_get_tlv_dtag_nm(dtag);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"4678 skipped FPIN descriptor[%d]: "
"tag x%x (%s)\n",
desc_cnt, dtag, dtag_nm);
@ -8811,7 +8819,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
rjt_exp = LSEXP_NOTHING_MORE;
/* Unknown ELS command <elsCmd> received from NPORT <did> */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0115 Unknown ELS command x%x "
"received from NPORT x%x\n", cmd, did);
if (newnode)
@ -8856,7 +8864,7 @@ lsrjt:
dropit:
if (vport && !(vport->load_flag & FC_UNLOADING))
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0111 Dropping received ELS cmd "
"Data: x%x x%x x%x\n",
icmd->ulpStatus, icmd->un.ulpWord[4], icmd->ulpTimeout);
@ -9006,7 +9014,7 @@ lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
spin_lock_irq(shost->host_lock);
if (vport->fc_flag & FC_DISC_DELAYED) {
spin_unlock_irq(shost->host_lock);
lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3334 Delay fc port discovery for %d seconds\n",
phba->fc_ratov);
mod_timer(&vport->delayed_disc_tmo,
@ -9024,7 +9032,7 @@ lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
return;
}
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0251 NameServer login: no memory\n");
return;
}
@ -9036,7 +9044,7 @@ lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
return;
}
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0348 NameServer login: node freed\n");
return;
}
@ -9047,7 +9055,7 @@ lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
if (lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0)) {
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0252 Cannot issue NameServer login\n");
return;
}
@ -9084,7 +9092,7 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
spin_unlock_irq(shost->host_lock);
if (mb->mbxStatus) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0915 Register VPI failed : Status: x%x"
" upd bit: x%x \n", mb->mbxStatus,
mb->un.varRegVpi.upd);
@ -9114,8 +9122,8 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
rc = lpfc_sli_issue_mbox(phba, pmb,
MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
lpfc_printf_vlog(vport,
KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"2732 Failed to issue INIT_VPI"
" mailbox command\n");
} else {
@ -9203,12 +9211,12 @@ lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
lpfc_nlp_put(ndlp);
mempool_free(mbox, phba->mbox_mem_pool);
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0253 Register VPI: Can't send mbox\n");
goto mbox_err_exit;
}
} else {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0254 Register VPI: no memory\n");
goto mbox_err_exit;
}
@ -9370,7 +9378,7 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if (lpfc_els_retry(phba, cmdiocb, rspiocb))
goto out;
/* FDISC failed */
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0126 FDISC failed. (x%x/x%x)\n",
irsp->ulpStatus, irsp->un.ulpWord[4]);
goto fdisc_failed;
@ -9492,7 +9500,7 @@ lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
ELS_CMD_FDISC);
if (!elsiocb) {
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0255 Issue FDISC: no IOCB\n");
return 1;
}
@ -9546,7 +9554,7 @@ lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (rc == IOCB_ERROR) {
lpfc_els_free_iocb(phba, elsiocb);
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0256 Issue FDISC: Cannot send IOCB\n");
return 1;
}
@ -9666,7 +9674,7 @@ lpfc_issue_els_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
/**
* lpfc_fabric_block_timeout - Handler function to the fabric block timer
* @ptr: holder for the timer function associated data.
* @t: timer context used to obtain the lpfc hba.
*
* This routine is invoked by the fabric iocb block timer after
* timeout. It posts the fabric iocb block timeout event by setting the
@ -10127,8 +10135,7 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport,
"rport in state 0x%x\n", ndlp->nlp_state);
return;
}
lpfc_printf_log(phba, KERN_ERR,
LOG_ELS | LOG_FCP_ERROR | LOG_NVME_IOERR,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3094 Start rport recovery on shost id 0x%x "
"fc_id 0x%06x vpi 0x%x rpi 0x%x state 0x%x "
"flags 0x%x\n",

drivers/scsi/lpfc/lpfc_hbadisc.c

@ -155,17 +155,17 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
return;
if (rport->port_name != wwn_to_u64(ndlp->nlp_portname.u.wwn))
lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE,
"6789 rport name %llx != node port name %llx",
rport->port_name,
wwn_to_u64(ndlp->nlp_portname.u.wwn));
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6789 rport name %llx != node port name %llx",
rport->port_name,
wwn_to_u64(ndlp->nlp_portname.u.wwn));
evtp = &ndlp->dev_loss_evt;
if (!list_empty(&evtp->evt_listp)) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE,
"6790 rport name %llx dev_loss_evt pending",
rport->port_name);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6790 rport name %llx dev_loss_evt pending",
rport->port_name);
return;
}
@ -295,7 +295,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
}
if (warn_on) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0203 Devloss timeout on "
"WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x "
"NPort x%06x Data: x%x x%x x%x\n",
@ -304,7 +304,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
ndlp->nlp_DID, ndlp->nlp_flag,
ndlp->nlp_state, ndlp->nlp_rpi);
} else {
lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_INFO, LOG_TRACE_EVENT,
"0204 Devloss timeout on "
"WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x "
"NPort x%06x Data: x%x x%x x%x\n",
@ -755,7 +755,7 @@ lpfc_do_work(void *p)
|| kthread_should_stop()));
/* Signal wakeup shall terminate the worker thread */
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0433 Wakeup on signal: rc=x%x\n", rc);
break;
}
@ -1092,7 +1092,7 @@ lpfc_mbx_cmpl_clear_la(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
/* Check for error */
if ((mb->mbxStatus) && (mb->mbxStatus != 0x1601)) {
/* CLEAR_LA mbox error <mbxStatus> state <hba_state> */
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0320 CLEAR_LA mbxStatus error x%x hba "
"state x%x\n",
mb->mbxStatus, vport->port_state);
@ -1180,7 +1180,7 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
return;
out:
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0306 CONFIG_LINK mbxStatus error x%x "
"HBA state x%x\n",
pmb->u.mb.mbxStatus, vport->port_state);
@ -1188,7 +1188,7 @@ out:
lpfc_linkdown(phba);
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0200 CONFIG_LINK bad hba state x%x\n",
vport->port_state);
@ -1198,7 +1198,7 @@ out:
/**
* lpfc_sli4_clear_fcf_rr_bmask
* @phba pointer to the struct lpfc_hba for this port.
* @phba: pointer to the struct lpfc_hba for this port.
* This function resets the round robin bit mask and clears the
* fcf priority list. The list deletions are done while holding the
* hbalock. The ON_LIST flag and the FLOGI_FAILED flags are cleared
@ -1224,10 +1224,10 @@ lpfc_mbx_cmpl_reg_fcfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
struct lpfc_vport *vport = mboxq->vport;
if (mboxq->u.mb.mbxStatus) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
"2017 REG_FCFI mbxStatus error x%x "
"HBA state x%x\n",
mboxq->u.mb.mbxStatus, vport->port_state);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2017 REG_FCFI mbxStatus error x%x "
"HBA state x%x\n", mboxq->u.mb.mbxStatus,
vport->port_state);
goto fail_out;
}
@ -1297,7 +1297,7 @@ lpfc_fab_name_match(uint8_t *fab_name, struct fcf_record *new_fcf_record)
/**
* lpfc_sw_name_match - Check if the fcf switch name match.
* @fab_name: pointer to fabric name.
* @sw_name: pointer to switch name.
* @new_fcf_record: pointer to fcf record.
*
* This routine compares the fcf record's switch name with the provided
@ -1385,7 +1385,7 @@ __lpfc_update_fcf_record_pri(struct lpfc_hba *phba, uint16_t fcf_index,
/**
* lpfc_copy_fcf_record - Copy fcf information to lpfc_hba.
* @fcf: pointer to driver fcf record.
* @fcf_rec: pointer to driver fcf record.
* @new_fcf_record: pointer to fcf record.
*
* This routine copies the FCF information from the FCF
@ -1848,7 +1848,7 @@ lpfc_sli4_fcf_rec_mbox_parse(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
*/
lpfc_sli4_mbx_sge_get(mboxq, 0, &sge);
if (unlikely(!mboxq->sge_array)) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2524 Failed to get the non-embedded SGE "
"virtual address\n");
return NULL;
@ -1864,11 +1864,12 @@ lpfc_sli4_fcf_rec_mbox_parse(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
if (shdr_status || shdr_add_status) {
if (shdr_status == STATUS_FCF_TABLE_EMPTY ||
if_type == LPFC_SLI_INTF_IF_TYPE_2)
lpfc_printf_log(phba, KERN_ERR, LOG_FIP,
lpfc_printf_log(phba, KERN_ERR,
LOG_TRACE_EVENT,
"2726 READ_FCF_RECORD Indicates empty "
"FCF table.\n");
else
lpfc_printf_log(phba, KERN_ERR, LOG_FIP,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2521 READ_FCF_RECORD mailbox failed "
"with status x%x add_status x%x, "
"mbx\n", shdr_status, shdr_add_status);
@ -1952,7 +1953,7 @@ lpfc_sli4_log_fcf_record_info(struct lpfc_hba *phba,
}
/**
lpfc_sli4_fcf_record_match - testing new FCF record for matching existing FCF
* lpfc_sli4_fcf_record_match - testing new FCF record for matching existing FCF
* @phba: pointer to lpfc hba data structure.
* @fcf_rec: pointer to an existing FCF record.
* @new_fcf_record: pointer to a new FCF record.
@ -2066,7 +2067,7 @@ stop_flogi_current_fcf:
/**
* lpfc_sli4_fcf_pri_list_del
* @phba: pointer to lpfc hba data structure.
* @fcf_index the index of the fcf record to delete
* @fcf_index: the index of the fcf record to delete
* This routine checks the on list flag of the fcf_index to be deleted.
* If it is one the list then it is removed from the list, and the flag
* is cleared. This routine grab the hbalock before removing the fcf
@ -2096,7 +2097,7 @@ static void lpfc_sli4_fcf_pri_list_del(struct lpfc_hba *phba,
/**
* lpfc_sli4_set_fcf_flogi_fail
* @phba: pointer to lpfc hba data structure.
* @fcf_index the index of the fcf record to update
* @fcf_index: the index of the fcf record to update
* This routine acquires the hbalock and then sets the LPFC_FCF_FLOGI_FAILED
* flag so the round robin selection for the particular priority level
* will try a different fcf record that does not have this bit set.
@ -2116,7 +2117,8 @@ lpfc_sli4_set_fcf_flogi_fail(struct lpfc_hba *phba, uint16_t fcf_index)
/**
* lpfc_sli4_fcf_pri_list_add
* @phba: pointer to lpfc hba data structure.
* @fcf_index the index of the fcf record to add
* @fcf_index: the index of the fcf record to add
* @new_fcf_record: pointer to a new FCF record.
* This routine checks the priority of the fcf_index to be added.
* If it is a lower priority than the current head of the fcf_pri list
* then it is added to the list in the right order.
@ -2246,7 +2248,7 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
new_fcf_record = lpfc_sli4_fcf_rec_mbox_parse(phba, mboxq,
&next_fcf_index);
if (!new_fcf_record) {
lpfc_printf_log(phba, KERN_ERR, LOG_FIP,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2765 Mailbox command READ_FCF_RECORD "
"failed to retrieve a FCF record.\n");
/* Let next new FCF event trigger fast failover */
@ -2290,7 +2292,8 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
new_fcf_record, LPFC_FCOE_IGNORE_VID)) {
if (bf_get(lpfc_fcf_record_fcf_index, new_fcf_record) !=
phba->fcf.current_rec.fcf_indx) {
lpfc_printf_log(phba, KERN_ERR, LOG_FIP,
lpfc_printf_log(phba, KERN_ERR,
LOG_TRACE_EVENT,
"2862 FCF (x%x) matches property "
"of in-use FCF (x%x)\n",
bf_get(lpfc_fcf_record_fcf_index,
@ -2360,7 +2363,7 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
phba->pport->fc_flag);
goto out;
} else
lpfc_printf_log(phba, KERN_ERR, LOG_FIP,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2863 New FCF (x%x) matches "
"property of in-use FCF (x%x)\n",
bf_get(lpfc_fcf_record_fcf_index,
@ -2774,10 +2777,9 @@ lpfc_init_vfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) !=
LPFC_SLI_INTF_IF_TYPE_0) &&
mboxq->u.mb.mbxStatus != MBX_VFI_IN_USE) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX,
"2891 Init VFI mailbox failed 0x%x\n",
mboxq->u.mb.mbxStatus);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2891 Init VFI mailbox failed 0x%x\n",
mboxq->u.mb.mbxStatus);
mempool_free(mboxq, phba->mbox_mem_pool);
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
return;
@ -2805,7 +2807,7 @@ lpfc_issue_init_vfi(struct lpfc_vport *vport)
mboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mboxq) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX, "2892 Failed to allocate "
LOG_TRACE_EVENT, "2892 Failed to allocate "
"init_vfi mailbox\n");
return;
}
@ -2813,8 +2815,8 @@ lpfc_issue_init_vfi(struct lpfc_vport *vport)
mboxq->mbox_cmpl = lpfc_init_vfi_cmpl;
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX, "2893 Failed to issue init_vfi mailbox\n");
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2893 Failed to issue init_vfi mailbox\n");
mempool_free(mboxq, vport->phba->mbox_mem_pool);
}
}
@ -2834,10 +2836,9 @@ lpfc_init_vpi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (mboxq->u.mb.mbxStatus) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX,
"2609 Init VPI mailbox failed 0x%x\n",
mboxq->u.mb.mbxStatus);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2609 Init VPI mailbox failed 0x%x\n",
mboxq->u.mb.mbxStatus);
mempool_free(mboxq, phba->mbox_mem_pool);
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
return;
@ -2851,7 +2852,7 @@ lpfc_init_vpi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
ndlp = lpfc_findnode_did(vport, Fabric_DID);
if (!ndlp)
lpfc_printf_vlog(vport, KERN_ERR,
LOG_DISCOVERY,
LOG_TRACE_EVENT,
"2731 Cannot find fabric "
"controller node\n");
else
@ -2864,7 +2865,7 @@ lpfc_init_vpi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
lpfc_initial_fdisc(vport);
else {
lpfc_vport_set_state(vport, FC_VPORT_NO_FABRIC_SUPP);
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2606 No NPIV Fabric support\n");
}
mempool_free(mboxq, phba->mbox_mem_pool);
@ -2887,8 +2888,7 @@ lpfc_issue_init_vpi(struct lpfc_vport *vport)
if ((vport->port_type != LPFC_PHYSICAL_PORT) && (!vport->vpi)) {
vpi = lpfc_alloc_vpi(vport->phba);
if (!vpi) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"3303 Failed to obtain vport vpi\n");
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
return;
@ -2899,7 +2899,7 @@ lpfc_issue_init_vpi(struct lpfc_vport *vport)
mboxq = mempool_alloc(vport->phba->mbox_mem_pool, GFP_KERNEL);
if (!mboxq) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX, "2607 Failed to allocate "
LOG_TRACE_EVENT, "2607 Failed to allocate "
"init_vpi mailbox\n");
return;
}
@ -2908,8 +2908,8 @@ lpfc_issue_init_vpi(struct lpfc_vport *vport)
mboxq->mbox_cmpl = lpfc_init_vpi_cmpl;
rc = lpfc_sli_issue_mbox(vport->phba, mboxq, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
lpfc_printf_vlog(vport, KERN_ERR,
LOG_MBOX, "2608 Failed to issue init_vpi mailbox\n");
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2608 Failed to issue init_vpi mailbox\n");
mempool_free(mboxq, vport->phba->mbox_mem_pool);
}
}
@ -2953,7 +2953,7 @@ lpfc_start_fdiscs(struct lpfc_hba *phba)
lpfc_vport_set_state(vports[i],
FC_VPORT_NO_FABRIC_SUPP);
lpfc_printf_vlog(vports[i], KERN_ERR,
LOG_ELS,
LOG_TRACE_EVENT,
"0259 No NPIV "
"Fabric support\n");
}
@ -2977,10 +2977,10 @@ lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) !=
LPFC_SLI_INTF_IF_TYPE_0) &&
mboxq->u.mb.mbxStatus != MBX_VFI_IN_USE) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
"2018 REG_VFI mbxStatus error x%x "
"HBA state x%x\n",
mboxq->u.mb.mbxStatus, vport->port_state);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2018 REG_VFI mbxStatus error x%x "
"HBA state x%x\n",
mboxq->u.mb.mbxStatus, vport->port_state);
if (phba->fc_topology == LPFC_TOPOLOGY_LOOP) {
/* FLOGI failed, use loop map to make discovery list */
lpfc_disc_list_loopmap(vport);
@ -3067,7 +3067,7 @@ lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
/* Check for error */
if (mb->mbxStatus) {
/* READ_SPARAM mbox error <mbxStatus> state <hba_state> */
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0319 READ_SPARAM mbxStatus error x%x "
"hba state x%x>\n",
mb->mbxStatus, vport->port_state);
@ -3286,7 +3286,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
GFP_KERNEL);
if (unlikely(!fcf_record)) {
lpfc_printf_log(phba, KERN_ERR,
LOG_MBOX | LOG_SLI,
LOG_TRACE_EVENT,
"2554 Could not allocate memory for "
"fcf record\n");
rc = -ENODEV;
@ -3298,7 +3298,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
rc = lpfc_sli4_add_fcf_record(phba, fcf_record);
if (unlikely(rc)) {
lpfc_printf_log(phba, KERN_ERR,
LOG_MBOX | LOG_SLI,
LOG_TRACE_EVENT,
"2013 Could not manually add FCF "
"record 0, status %d\n", rc);
rc = -ENODEV;
@ -3344,7 +3344,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
return;
out:
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0263 Discovery Mailbox error: state: 0x%x : x%px x%px\n",
vport->port_state, sparam_mbox, cfglink_mbox);
lpfc_issue_clear_la(phba, vport);
@ -3617,7 +3617,7 @@ lpfc_mbx_cmpl_unreg_vpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
break;
/* If VPI is busy, reset the HBA */
case 0x9700:
lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2798 Unreg_vpi failed vpi 0x%x, mb status = 0x%x\n",
vport->vpi, mb->mbxStatus);
if (!(phba->pport->load_flag & FC_UNLOADING))
@ -3655,7 +3655,7 @@ lpfc_mbx_unreg_vpi(struct lpfc_vport *vport)
mbox->mbox_cmpl = lpfc_mbx_cmpl_unreg_vpi;
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"1800 Could not issue unreg_vpi\n");
mempool_free(mbox, phba->mbox_mem_pool);
vport->unreg_vpi_cmpl = VPORT_ERROR;
@ -3742,7 +3742,7 @@ lpfc_create_static_vport(struct lpfc_hba *phba)
pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!pmb) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0542 lpfc_create_static_vport failed to"
" allocate mailbox memory\n");
return;
@ -3752,7 +3752,7 @@ lpfc_create_static_vport(struct lpfc_hba *phba)
vport_info = kzalloc(sizeof(struct static_vport_info), GFP_KERNEL);
if (!vport_info) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0543 lpfc_create_static_vport failed to"
" allocate vport_info\n");
mempool_free(pmb, phba->mbox_mem_pool);
@ -3813,11 +3813,12 @@ lpfc_create_static_vport(struct lpfc_hba *phba)
if ((le32_to_cpu(vport_info->signature) != VPORT_INFO_SIG) ||
((le32_to_cpu(vport_info->rev) & VPORT_INFO_REV_MASK)
!= VPORT_INFO_REV)) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0545 lpfc_create_static_vport bad"
" information header 0x%x 0x%x\n",
le32_to_cpu(vport_info->signature),
le32_to_cpu(vport_info->rev) & VPORT_INFO_REV_MASK);
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0545 lpfc_create_static_vport bad"
" information header 0x%x 0x%x\n",
le32_to_cpu(vport_info->signature),
le32_to_cpu(vport_info->rev) &
VPORT_INFO_REV_MASK);
goto out;
}
@ -3881,7 +3882,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
pmb->ctx_buf = NULL;
if (mb->mbxStatus) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0258 Register Fabric login error: 0x%x\n",
mb->mbxStatus);
lpfc_mbuf_free(phba, mp->virt, mp->phys);
@ -3954,7 +3955,8 @@ lpfc_issue_gidft(struct lpfc_vport *vport)
/* Cannot issue NameServer FCP Query, so finish up
* discovery
*/
lpfc_printf_vlog(vport, KERN_ERR, LOG_SLI,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0604 %s FC TYPE %x %s\n",
"Failed to issue GID_FT to ",
FC_TYPE_FCP,
@ -3970,7 +3972,8 @@ lpfc_issue_gidft(struct lpfc_vport *vport)
/* Cannot issue NameServer NVME Query, so finish up
* discovery
*/
lpfc_printf_vlog(vport, KERN_ERR, LOG_SLI,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0605 %s FC_TYPE %x %s %d\n",
"Failed to issue GID_FT to ",
FC_TYPE_NVME,
@ -4002,7 +4005,7 @@ lpfc_issue_gidpt(struct lpfc_vport *vport)
/* Cannot issue NameServer FCP Query, so finish up
* discovery
*/
lpfc_printf_vlog(vport, KERN_ERR, LOG_SLI,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0606 %s Port TYPE %x %s\n",
"Failed to issue GID_PT to ",
GID_PT_N_PORT,
@ -4032,7 +4035,7 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
vport->gidft_inp = 0;
if (mb->mbxStatus) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0260 Register NameServer error: 0x%x\n",
mb->mbxStatus);
@ -4344,7 +4347,7 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
GFP_KERNEL);
if (!ndlp->lat_data)
lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0286 lpfc_nlp_state_cleanup failed to "
"allocate statistical data buffer DID "
"0x%x\n", ndlp->nlp_DID);
@ -5013,8 +5016,8 @@ lpfc_unreg_hba_rpis(struct lpfc_hba *phba)
vports = lpfc_create_vport_work_array(phba);
if (!vports) {
lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
"2884 Vport array allocation failed \n");
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2884 Vport array allocation failed \n");
return;
}
for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
@ -5057,9 +5060,10 @@ lpfc_unreg_all_rpis(struct lpfc_vport *vport)
mempool_free(mbox, phba->mbox_mem_pool);
if ((rc == MBX_TIMEOUT) || (rc == MBX_NOT_FINISHED))
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
"1836 Could not issue "
"unreg_login(all_rpis) status %d\n", rc);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"1836 Could not issue "
"unreg_login(all_rpis) status %d\n",
rc);
}
}
@ -5086,7 +5090,7 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
mempool_free(mbox, phba->mbox_mem_pool);
if ((rc == MBX_TIMEOUT) || (rc == MBX_NOT_FINISHED))
lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"1815 Could not issue "
"unreg_did (default rpis) status %d\n",
rc);
@ -5907,7 +5911,8 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
case LPFC_FLOGI:
/* port_state is identically LPFC_FLOGI while waiting for FLOGI cmpl */
/* Initial FLOGI timeout */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0222 Initial %s timeout\n",
vport->vpi ? "FDISC" : "FLOGI");
@ -5925,7 +5930,8 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
case LPFC_FABRIC_CFG_LINK:
/* hba_state is identically LPFC_FABRIC_CFG_LINK while waiting for
NameServer login */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0223 Timeout while waiting for "
"NameServer login\n");
/* Next look for NameServer ndlp */
@ -5938,7 +5944,8 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport)
case LPFC_NS_QRY:
/* Check for wait for NameServer Rsp timeout */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0224 NameServer Query timeout "
"Data: x%x x%x\n",
vport->fc_ns_retry, LPFC_MAX_NS_RETRY);
@ -5971,7 +5978,8 @@ restart_disc:
/* Setup and issue mailbox INITIALIZE LINK command */
initlinkmbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!initlinkmbox) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0206 Device Discovery "
"completion error\n");
phba->link_state = LPFC_HBA_ERROR;
@ -5993,7 +6001,8 @@ restart_disc:
case LPFC_DISC_AUTH:
/* Node Authentication timeout */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0227 Node Authentication timeout\n");
lpfc_disc_flush_list(vport);
@ -6013,7 +6022,8 @@ restart_disc:
case LPFC_VPORT_READY:
if (vport->fc_flag & FC_RSCN_MODE) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0231 RSCN timeout Data: x%x "
"x%x\n",
vport->fc_ns_retry, LPFC_MAX_NS_RETRY);
@ -6027,7 +6037,8 @@ restart_disc:
break;
default:
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0273 Unexpected discovery timeout, "
"vport State x%x\n", vport->port_state);
break;
@ -6036,7 +6047,8 @@ restart_disc:
switch (phba->link_state) {
case LPFC_CLEAR_LA:
/* CLEAR LA timeout */
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"0228 CLEAR LA timeout\n");
clrlaerr = 1;
break;
@@ -6050,7 +6062,8 @@ restart_disc:
case LPFC_INIT_MBX_CMDS:
case LPFC_LINK_DOWN:
case LPFC_HBA_ERROR:
- lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+ lpfc_printf_vlog(vport, KERN_ERR,
+ LOG_TRACE_EVENT,
"0230 Unexpected timeout, hba link "
"state x%x\n", phba->link_state);
clrlaerr = 1;
@@ -6241,9 +6254,9 @@ lpfc_find_vport_by_vpid(struct lpfc_hba *phba, uint16_t vpi)
}
if (i >= phba->max_vpi) {
- lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
- "2936 Could not find Vport mapped "
- "to vpi %d\n", vpi);
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2936 Could not find Vport mapped "
+ "to vpi %d\n", vpi);
return NULL;
}
}
@@ -6547,10 +6560,10 @@ lpfc_unregister_vfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (mboxq->u.mb.mbxStatus) {
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
- "2555 UNREG_VFI mbxStatus error x%x "
- "HBA state x%x\n",
- mboxq->u.mb.mbxStatus, vport->port_state);
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2555 UNREG_VFI mbxStatus error x%x "
+ "HBA state x%x\n",
+ mboxq->u.mb.mbxStatus, vport->port_state);
}
spin_lock_irq(shost->host_lock);
phba->pport->fc_flag &= ~FC_VFI_REGISTERED;
@@ -6572,10 +6585,10 @@ lpfc_unregister_fcfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
struct lpfc_vport *vport = mboxq->vport;
if (mboxq->u.mb.mbxStatus) {
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
- "2550 UNREG_FCFI mbxStatus error x%x "
- "HBA state x%x\n",
- mboxq->u.mb.mbxStatus, vport->port_state);
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2550 UNREG_FCFI mbxStatus error x%x "
+ "HBA state x%x\n",
+ mboxq->u.mb.mbxStatus, vport->port_state);
}
mempool_free(mboxq, phba->mbox_mem_pool);
return;
@@ -6664,7 +6677,7 @@ lpfc_sli4_unregister_fcf(struct lpfc_hba *phba)
mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mbox) {
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2551 UNREG_FCFI mbox allocation failed"
"HBA state x%x\n", phba->pport->port_state);
return -ENOMEM;
@@ -6675,7 +6688,7 @@ lpfc_sli4_unregister_fcf(struct lpfc_hba *phba)
rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) {
- lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2552 Unregister FCFI command failed rc x%x "
"HBA state x%x\n",
rc, phba->pport->port_state);
@@ -6699,7 +6712,7 @@ lpfc_unregister_fcf_rescan(struct lpfc_hba *phba)
/* Preparation for unregistering fcf */
rc = lpfc_unregister_fcf_prep(phba);
if (rc) {
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2748 Failed to prepare for unregistering "
"HBA's FCF record: rc=%d\n", rc);
return;
@@ -6735,7 +6748,7 @@ lpfc_unregister_fcf_rescan(struct lpfc_hba *phba)
spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag &= ~FCF_INIT_DISC;
spin_unlock_irq(&phba->hbalock);
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY|LOG_MBOX,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2553 lpfc_unregister_unused_fcf failed "
"to read FCF record HBA state x%x\n",
phba->pport->port_state);
@@ -6757,7 +6770,7 @@ lpfc_unregister_fcf(struct lpfc_hba *phba)
/* Preparation for unregistering fcf */
rc = lpfc_unregister_fcf_prep(phba);
if (rc) {
- lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2749 Failed to prepare for unregistering "
"HBA's FCF record: rc=%d\n", rc);
return;
@@ -6844,9 +6857,9 @@ lpfc_read_fcf_conn_tbl(struct lpfc_hba *phba,
conn_entry = kzalloc(sizeof(struct lpfc_fcf_conn_entry),
GFP_KERNEL);
if (!conn_entry) {
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "2566 Failed to allocate connection"
- " table entry\n");
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2566 Failed to allocate connection"
+ " table entry\n");
return;
}
@@ -6990,7 +7003,7 @@ lpfc_parse_fcoe_conf(struct lpfc_hba *phba,
/* Check the region signature first */
if (memcmp(buff, LPFC_REGION23_SIGNATURE, 4)) {
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2567 Config region 23 has bad signature\n");
return;
}
@@ -6999,8 +7012,8 @@ lpfc_parse_fcoe_conf(struct lpfc_hba *phba,
/* Check the data structure version */
if (buff[offset] != LPFC_REGION23_VERSION) {
- lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
- "2568 Config region 23 has bad version\n");
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2568 Config region 23 has bad version\n");
return;
}
offset += 4;

View File

@@ -650,6 +650,9 @@ struct lpfc_register {
#define lpfc_sliport_status_oti_SHIFT 29
#define lpfc_sliport_status_oti_MASK 0x1
#define lpfc_sliport_status_oti_WORD word0
+ #define lpfc_sliport_status_dip_SHIFT 25
+ #define lpfc_sliport_status_dip_MASK 0x1
+ #define lpfc_sliport_status_dip_WORD word0
#define lpfc_sliport_status_rn_SHIFT 24
#define lpfc_sliport_status_rn_MASK 0x1
#define lpfc_sliport_status_rn_WORD word0
@@ -3531,7 +3534,7 @@ struct lpfc_sli4_parameters {
};
#define LPFC_SET_UE_RECOVERY 0x10
- #define LPFC_SET_MDS_DIAGS 0x11
+ #define LPFC_SET_MDS_DIAGS 0x12
#define LPFC_SET_DUAL_DUMP 0x1e
struct lpfc_mbx_set_feature {
struct mbox_header header;

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff