
SCSI for-linus on 20141007

This patch set consists of the usual driver updates (megaraid_sas, arcmsr,
 be2iscsi, lpfc, mpt2sas, mpt3sas, qla2xxx, ufs) plus several assorted fixes
 and miscellaneous updates (including the pci_enable_msix_range() changes that
 have been pending for a while).
 
 Signed-off-by: James Bottomley <JBottomley@Parallels.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJUNHvtAAoJEDeqqVYsXL0MzaoIAJ/R2JW/Xm50rD6iUj6RfjEc
 6OOi3sOJe6yivPFLLTmIZyLcgHuZKPVXsjcjBXENsrjJeyu2aTq+vs2bOJN9BRYU
 gHGyAEhVPKsvecYhEj/78ClRIMzwkr7KQMQConbClDa0sVr62M/dPQVHNvjaTDeS
 rtmPGZbNpv9rCl0itNBLMrnOBT/MduuWtS2VNCAkV5yFU8kvEax5pizB+W4ztjoe
 BnVnF8OJC70wAM4vpiUcgwCR5AGmYv5SQKn3AHNBayrJic0MLUSIhrnCptc9TSir
 zWJAyoW2iQY1LKmihjwjDlXP40jbfOaBacEycqTUKNkfMRKbBl3qQa4IUrR1XsQ=
 =I1Yg
 -----END PGP SIGNATURE-----

Merge tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This patch set consists of the usual driver updates (megaraid_sas,
  arcmsr, be2iscsi, lpfc, mpt2sas, mpt3sas, qla2xxx, ufs) plus several
  assorted fixes and miscellaneous updates (including the
  pci_enable_msix_range() changes that have been pending for a while)"

* tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (202 commits)
  scsi: add a CONFIG_SCSI_MQ_DEFAULT option
  ufs: definitions for phy interface
  ufs: tune bkops during power management events
  ufs: Add support for clock scaling using devfreq framework
  ufs: Add freq-table-hz property for UFS device
  ufs: Add support for clock gating
  ufs: refactor configuring power mode
  ufs: add UFS power management support
  ufs: introduce well known logical unit in ufs
  ufs: manually add well known logical units
  ufs: Active Power Mode - configuring bActiveICCLevel
  ufs: improve init sequence
  ufs: refactor query descriptor API support
  ufs: add voting support for host controller power
  ufs: Add clock initialization support
  ufs: Add regulator enable support
  ufs: Allow vendor specific initialization
  scsi: don't add scsi_device if it's already visible
  scsi: fix the type for well known LUs
  scsi: fix comment in struct Scsi_Host definition
  ...
Linus Torvalds 2014-10-07 21:29:18 -04:00
commit 9a50aaefc1
142 changed files with 10389 additions and 3045 deletions

View File

@ -8,9 +8,50 @@ Required properties:
- interrupts : <interrupt mapping for UFS host controller IRQ>
- reg : <registers mapping>
Optional properties:
- vdd-hba-supply : phandle to UFS host controller supply regulator node
- vcc-supply : phandle to VCC supply regulator node
- vccq-supply : phandle to VCCQ supply regulator node
- vccq2-supply : phandle to VCCQ2 supply regulator node
- vcc-supply-1p8 : For embedded UFS devices, valid VCC range is 1.7-1.95V
or 2.7-3.6V. This boolean property when set, specifies
to use low voltage range of 1.7-1.95V. Note for external
UFS cards this property is invalid and valid VCC range is
always 2.7-3.6V.
- vcc-max-microamp : specifies max. load that can be drawn from vcc supply
- vccq-max-microamp : specifies max. load that can be drawn from vccq supply
- vccq2-max-microamp : specifies max. load that can be drawn from vccq2 supply
- <name>-fixed-regulator : boolean property specifying that <name>-supply is a fixed regulator
- clocks : List of phandle and clock specifier pairs
- clock-names : List of clock input name strings sorted in the same
order as the clocks property.
- freq-table-hz : Array of <min max> operating frequencies stored in the same
order as the clocks property. If this property is not
defined or a value in the array is "0" then it is assumed
that the frequency is set by the parent clock or a
fixed rate clock source.
Note: If above properties are not defined it can be assumed that the supply
regulators or clocks are always on.
Example:
ufshc@0xfc598000 {
compatible = "jedec,ufs-1.1";
reg = <0xfc598000 0x800>;
interrupts = <0 28 0>;
vdd-hba-supply = <&xxx_reg0>;
vdd-hba-fixed-regulator;
vcc-supply = <&xxx_reg1>;
vcc-supply-1p8;
vccq-supply = <&xxx_reg2>;
vccq2-supply = <&xxx_reg3>;
vcc-max-microamp = <500000>;
vccq-max-microamp = <200000>;
vccq2-max-microamp = <200000>;
clocks = <&core 0>, <&ref 0>, <&iface 0>;
clock-names = "core_clk", "ref_clk", "iface_clk";
freq-table-hz = <100000000 200000000>, <0 0>, <0 0>;
};

View File

@ -1,3 +1,17 @@
Release Date : Thu. Jun 19, 2014 17:00:00 PST 2014 -
(emaild-id:megaraidlinux@lsi.com)
Adam Radford
Kashyap Desai
Sumit Saxena
Uday Lingala
Current Version : 06.803.02.00-rc1
Old Version : 06.803.01.00-rc1
1. Fix reset_mutex leak in megasas_reset_fusion().
2. Remove unused variables in megasas_instance.
3. Fix LD/VF affiliation parsing.
4. Add missing initial call to megasas_get_ld_vf_affiliation().
5. Version and Changelog update.
-------------------------------------------------------------------------------
Release Date : Mon. Mar 10, 2014 17:00:00 PST 2014 -
(emaild-id:megaraidlinux@lsi.com)
Adam Radford

View File

@ -1400,7 +1400,6 @@ mpt_verify_adapter(int iocid, MPT_ADAPTER **iocpp)
* @vendor: pci vendor id
* @device: pci device id
* @revision: pci revision id
* @prod_name: string returned
*
* Returns product string displayed when driver loads,
* in /proc/mpt/summary and /sysfs/class/scsi_host/host<X>/version_product
@ -3172,12 +3171,7 @@ GetIocFacts(MPT_ADAPTER *ioc, int sleepFlag, int reason)
facts->FWImageSize = le32_to_cpu(facts->FWImageSize);
}
sz = facts->FWImageSize;
if ( sz & 0x01 )
sz += 1;
if ( sz & 0x02 )
sz += 2;
facts->FWImageSize = sz;
facts->FWImageSize = ALIGN(facts->FWImageSize, 4);
if (!facts->RequestFrameSize) {
/* Something is wrong! */
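
The hunk above replaces an open-coded round-up (bump odd sizes by one, then bump again past any remaining two's bit) with the kernel's ALIGN() macro, and the next file gets the same conversion. A standalone sketch showing the two forms agree; ALIGN() here is the simplified power-of-two form of the macro from include/linux/kernel.h, everything else is illustrative:

#include <stdio.h>

/* Simplified form of the kernel's ALIGN() for a power-of-two 'a'. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* The pattern removed by the hunks: round up to a multiple of 4. */
static unsigned int old_round4(unsigned int sz)
{
	if (sz & 0x01)
		sz += 1;
	if (sz & 0x02)
		sz += 2;
	return sz;
}

int main(void)
{
	for (unsigned int sz = 0; sz < 12; sz++)
		printf("%2u -> old:%2u ALIGN:%2u\n",
		       sz, old_round4(sz), ALIGN(sz, 4u));
	return 0;
}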

View File

@ -1741,12 +1741,7 @@ mptctl_replace_fw (unsigned long arg)
/* Allocate memory for the new FW image
*/
newFwSize = karg.newImageSize;
if (newFwSize & 0x01)
newFwSize += 1;
if (newFwSize & 0x02)
newFwSize += 2;
newFwSize = ALIGN(karg.newImageSize, 4);
mpt_alloc_fw_memory(ioc, newFwSize);
if (ioc->cached_fw == NULL)

View File

@ -1419,6 +1419,11 @@ mptspi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto out_mptspi_probe;
}
/* VMWare emulation doesn't properly implement WRITE_SAME
*/
if (pdev->subsystem_vendor == 0x15AD)
sh->no_write_same = 1;
spin_lock_irqsave(&ioc->FreeQlock, flags);
/* Attach the SCSI Host to the IOC structure

View File

@ -45,6 +45,17 @@ config SCSI_NETLINK
default n
depends on NET
config SCSI_MQ_DEFAULT
bool "SCSI: use blk-mq I/O path by default"
depends on SCSI
---help---
This option enables the new blk-mq based I/O path for SCSI
devices by default. With the option the scsi_mod.use_blk_mq
module/boot option defaults to Y, without it to N, but it can
still be overridden either way.
If unsure say N.
config SCSI_PROC_FS
bool "legacy /proc/scsi/ support"
depends on SCSI && PROC_FS
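
A usage note on CONFIG_SCSI_MQ_DEFAULT above: per its help text it only flips the default of the scsi_mod.use_blk_mq module parameter, so either I/O path remains selectable at boot, e.g. scsi_mod.use_blk_mq=Y on the kernel command line to opt into blk-mq on a kernel built without the option (or =N to force the legacy path).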

View File

@ -1152,6 +1152,7 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
shost->irq = pdev->irq;
shost->unique_id = unique_id;
shost->max_cmd_len = 16;
shost->use_cmd_list = 1;
aac = (struct aac_dev *)shost->hostdata;
aac->base_start = pci_resource_start(pdev, 0);

View File

@ -45,13 +45,14 @@
#include <linux/interrupt.h>
struct device_attribute;
/* The limit of outstanding SCSI commands that the firmware can handle */
#define ARCMSR_MAX_OUTSTANDING_CMD 256
#ifdef CONFIG_XEN
#define ARCMSR_MAX_FREECCB_NUM 160
#define ARCMSR_MAX_OUTSTANDING_CMD 155
#else
#define ARCMSR_MAX_FREECCB_NUM 320
#define ARCMSR_MAX_OUTSTANDING_CMD 255
#endif
#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.15 2010/08/05"
#define ARCMSR_DRIVER_VERSION "v1.30.00.04-20140919"
#define ARCMSR_SCSI_INITIATOR_ID 255
#define ARCMSR_MAX_XFER_SECTORS 512
#define ARCMSR_MAX_XFER_SECTORS_B 4096
@ -62,11 +63,17 @@ struct device_attribute;
#define ARCMSR_MAX_QBUFFER 4096
#define ARCMSR_DEFAULT_SG_ENTRIES 38
#define ARCMSR_MAX_HBB_POSTQUEUE 264
#define ARCMSR_MAX_ARC1214_POSTQUEUE 256
#define ARCMSR_MAX_ARC1214_DONEQUEUE 257
#define ARCMSR_MAX_XFER_LEN 0x26000 /* 152K */
#define ARCMSR_CDB_SG_PAGE_LENGTH 256
#define ARCMST_NUM_MSIX_VECTORS 4
#ifndef PCI_DEVICE_ID_ARECA_1880
#define PCI_DEVICE_ID_ARECA_1880 0x1880
#endif
#ifndef PCI_DEVICE_ID_ARECA_1214
#define PCI_DEVICE_ID_ARECA_1214 0x1214
#endif
/*
**********************************************************************************
**
@ -100,10 +107,11 @@ struct CMD_MESSAGE
** IOP Message Transfer Data for user space
*******************************************************************************
*/
#define ARCMSR_API_DATA_BUFLEN 1032
struct CMD_MESSAGE_FIELD
{
struct CMD_MESSAGE cmdmessage;
uint8_t messagedatabuffer[1032];
uint8_t messagedatabuffer[ARCMSR_API_DATA_BUFLEN];
};
/* IOP message transfer */
#define ARCMSR_MESSAGE_FAIL 0x0001
@ -337,6 +345,56 @@ struct FIRMWARE_INFO
#define ARCMSR_HBCMU_MESSAGE_FIRMWARE_OK 0x80000000
/*
*******************************************************************************
** SPEC. for Areca Type D adapter
*******************************************************************************
*/
#define ARCMSR_ARC1214_CHIP_ID 0x00004
#define ARCMSR_ARC1214_CPU_MEMORY_CONFIGURATION 0x00008
#define ARCMSR_ARC1214_I2_HOST_INTERRUPT_MASK 0x00034
#define ARCMSR_ARC1214_SAMPLE_RESET 0x00100
#define ARCMSR_ARC1214_RESET_REQUEST 0x00108
#define ARCMSR_ARC1214_MAIN_INTERRUPT_STATUS 0x00200
#define ARCMSR_ARC1214_PCIE_F0_INTERRUPT_ENABLE 0x0020C
#define ARCMSR_ARC1214_INBOUND_MESSAGE0 0x00400
#define ARCMSR_ARC1214_INBOUND_MESSAGE1 0x00404
#define ARCMSR_ARC1214_OUTBOUND_MESSAGE0 0x00420
#define ARCMSR_ARC1214_OUTBOUND_MESSAGE1 0x00424
#define ARCMSR_ARC1214_INBOUND_DOORBELL 0x00460
#define ARCMSR_ARC1214_OUTBOUND_DOORBELL 0x00480
#define ARCMSR_ARC1214_OUTBOUND_DOORBELL_ENABLE 0x00484
#define ARCMSR_ARC1214_INBOUND_LIST_BASE_LOW 0x01000
#define ARCMSR_ARC1214_INBOUND_LIST_BASE_HIGH 0x01004
#define ARCMSR_ARC1214_INBOUND_LIST_WRITE_POINTER 0x01018
#define ARCMSR_ARC1214_OUTBOUND_LIST_BASE_LOW 0x01060
#define ARCMSR_ARC1214_OUTBOUND_LIST_BASE_HIGH 0x01064
#define ARCMSR_ARC1214_OUTBOUND_LIST_COPY_POINTER 0x0106C
#define ARCMSR_ARC1214_OUTBOUND_LIST_READ_POINTER 0x01070
#define ARCMSR_ARC1214_OUTBOUND_INTERRUPT_CAUSE 0x01088
#define ARCMSR_ARC1214_OUTBOUND_INTERRUPT_ENABLE 0x0108C
#define ARCMSR_ARC1214_MESSAGE_WBUFFER 0x02000
#define ARCMSR_ARC1214_MESSAGE_RBUFFER 0x02100
#define ARCMSR_ARC1214_MESSAGE_RWBUFFER 0x02200
/* Host Interrupt Mask */
#define ARCMSR_ARC1214_ALL_INT_ENABLE 0x00001010
#define ARCMSR_ARC1214_ALL_INT_DISABLE 0x00000000
/* Host Interrupt Status */
#define ARCMSR_ARC1214_OUTBOUND_DOORBELL_ISR 0x00001000
#define ARCMSR_ARC1214_OUTBOUND_POSTQUEUE_ISR 0x00000010
/* DoorBell*/
#define ARCMSR_ARC1214_DRV2IOP_DATA_IN_READY 0x00000001
#define ARCMSR_ARC1214_DRV2IOP_DATA_OUT_READ 0x00000002
/*inbound message 0 ready*/
#define ARCMSR_ARC1214_IOP2DRV_DATA_WRITE_OK 0x00000001
/*outbound DATA WRITE isr door bell clear*/
#define ARCMSR_ARC1214_IOP2DRV_DATA_READ_OK 0x00000002
/*outbound message 0 ready*/
#define ARCMSR_ARC1214_IOP2DRV_MESSAGE_CMD_DONE 0x02000000
/*outbound message cmd isr door bell clear*/
/*ARCMSR_HBAMU_MESSAGE_FIRMWARE_OK*/
#define ARCMSR_ARC1214_MESSAGE_FIRMWARE_OK 0x80000000
#define ARCMSR_ARC1214_OUTBOUND_LIST_INTERRUPT_CLEAR 0x00000001
/*
*******************************************************************************
** ARECA SCSI COMMAND DESCRIPTOR BLOCK size 0x1F8 (504)
*******************************************************************************
*/
@ -357,7 +415,7 @@ struct ARCMSR_CDB
#define ARCMSR_CDB_FLAG_ORDEREDQ 0x10
uint8_t msgPages;
uint32_t Context;
uint32_t msgContext;
uint32_t DataLength;
uint8_t Cdb[16];
uint8_t DeviceStatus;
@ -494,6 +552,56 @@ struct MessageUnit_C{
uint32_t msgcode_rwbuffer[256]; /*2200 23FF*/
};
/*
*********************************************************************
** Messaging Unit (MU) of Type D processor
*********************************************************************
*/
struct InBound_SRB {
uint32_t addressLow; /* pointer to SRB block */
uint32_t addressHigh;
uint32_t length; /* in DWORDs */
uint32_t reserved0;
};
struct OutBound_SRB {
uint32_t addressLow; /* pointer to SRB block */
uint32_t addressHigh;
};
struct MessageUnit_D {
struct InBound_SRB post_qbuffer[ARCMSR_MAX_ARC1214_POSTQUEUE];
volatile struct OutBound_SRB
done_qbuffer[ARCMSR_MAX_ARC1214_DONEQUEUE];
u16 postq_index;
volatile u16 doneq_index;
u32 __iomem *chip_id; /* 0x00004 */
u32 __iomem *cpu_mem_config; /* 0x00008 */
u32 __iomem *i2o_host_interrupt_mask; /* 0x00034 */
u32 __iomem *sample_at_reset; /* 0x00100 */
u32 __iomem *reset_request; /* 0x00108 */
u32 __iomem *host_int_status; /* 0x00200 */
u32 __iomem *pcief0_int_enable; /* 0x0020C */
u32 __iomem *inbound_msgaddr0; /* 0x00400 */
u32 __iomem *inbound_msgaddr1; /* 0x00404 */
u32 __iomem *outbound_msgaddr0; /* 0x00420 */
u32 __iomem *outbound_msgaddr1; /* 0x00424 */
u32 __iomem *inbound_doorbell; /* 0x00460 */
u32 __iomem *outbound_doorbell; /* 0x00480 */
u32 __iomem *outbound_doorbell_enable; /* 0x00484 */
u32 __iomem *inboundlist_base_low; /* 0x01000 */
u32 __iomem *inboundlist_base_high; /* 0x01004 */
u32 __iomem *inboundlist_write_pointer; /* 0x01018 */
u32 __iomem *outboundlist_base_low; /* 0x01060 */
u32 __iomem *outboundlist_base_high; /* 0x01064 */
u32 __iomem *outboundlist_copy_pointer; /* 0x0106C */
u32 __iomem *outboundlist_read_pointer; /* 0x01070 0x01072 */
u32 __iomem *outboundlist_interrupt_cause; /* 0x1088 */
u32 __iomem *outboundlist_interrupt_enable; /* 0x108C */
u32 __iomem *message_wbuffer; /* 0x2000 */
u32 __iomem *message_rbuffer; /* 0x2100 */
u32 __iomem *msgcode_rwbuffer; /* 0x2200 */
};
/*
*******************************************************************************
** Adapter Control Block
*******************************************************************************
@ -505,19 +613,26 @@ struct AdapterControlBlock
#define ACB_ADAPTER_TYPE_B 0x00000002 /* hbb M IOP */
#define ACB_ADAPTER_TYPE_C 0x00000004 /* hbc P IOP */
#define ACB_ADAPTER_TYPE_D 0x00000008 /* hbd A IOP */
u32 roundup_ccbsize;
struct pci_dev * pdev;
struct Scsi_Host * host;
unsigned long vir2phy_offset;
struct msix_entry entries[ARCMST_NUM_MSIX_VECTORS];
/* Offset is used in making arc cdb physical to virtual calculations */
uint32_t outbound_int_enable;
uint32_t cdb_phyaddr_hi32;
uint32_t reg_mu_acc_handle0;
spinlock_t eh_lock;
spinlock_t ccblist_lock;
spinlock_t postq_lock;
spinlock_t doneq_lock;
spinlock_t rqbuffer_lock;
spinlock_t wqbuffer_lock;
union {
struct MessageUnit_A __iomem *pmuA;
struct MessageUnit_B *pmuB;
struct MessageUnit_C __iomem *pmuC;
struct MessageUnit_D *pmuD;
};
/* message unit ATU inbound base address0 */
void __iomem *mem_base0;
@ -544,6 +659,8 @@ struct AdapterControlBlock
/* iop init */
#define ACB_F_ABORT 0x0200
#define ACB_F_FIRMWARE_TRAP 0x0400
#define ACB_F_MSI_ENABLED 0x1000
#define ACB_F_MSIX_ENABLED 0x2000
struct CommandControlBlock * pccb_pool[ARCMSR_MAX_FREECCB_NUM];
/* used for memory free */
struct list_head ccb_free_list;
@ -557,19 +674,20 @@ struct AdapterControlBlock
/* dma_coherent used for memory free */
dma_addr_t dma_coherent_handle;
/* dma_coherent_handle used for memory free */
dma_addr_t dma_coherent_handle_hbb_mu;
dma_addr_t dma_coherent_handle2;
void *dma_coherent2;
unsigned int uncache_size;
uint8_t rqbuffer[ARCMSR_MAX_QBUFFER];
/* data collection buffer for read from 80331 */
int32_t rqbuf_firstindex;
int32_t rqbuf_getIndex;
/* first of read buffer */
int32_t rqbuf_lastindex;
int32_t rqbuf_putIndex;
/* last of read buffer */
uint8_t wqbuffer[ARCMSR_MAX_QBUFFER];
/* data collection buffer for write to 80331 */
int32_t wqbuf_firstindex;
int32_t wqbuf_getIndex;
/* first of write buffer */
int32_t wqbuf_lastindex;
int32_t wqbuf_putIndex;
/* last of write buffer */
uint8_t devstate[ARCMSR_MAX_TARGETID][ARCMSR_MAX_TARGETLUN];
/* id0 ..... id15, lun0...lun7 */
@ -594,6 +712,8 @@ struct AdapterControlBlock
#define FW_DEADLOCK 0x0010
atomic_t rq_map_token;
atomic_t ante_token_value;
uint32_t maxOutstanding;
int msix_vector_count;
};/* HW_DEVICE_EXTENSION */
/*
*******************************************************************************
@ -606,7 +726,7 @@ struct CommandControlBlock{
struct list_head list; /*x32: 8byte, x64: 16byte*/
struct scsi_cmnd *pcmd; /*8 bytes pointer of linux scsi command */
struct AdapterControlBlock *acb; /*x32: 4byte, x64: 8byte*/
uint32_t cdb_phyaddr_pattern; /*x32: 4byte, x64: 4byte*/
uint32_t cdb_phyaddr; /*x32: 4byte, x64: 4byte*/
uint32_t arc_cdb_size; /*x32:4byte,x64:4byte*/
uint16_t ccb_flags; /*x32: 2byte, x64: 2byte*/
#define CCB_FLAG_READ 0x0000
@ -684,8 +804,10 @@ struct SENSE_DATA
#define ARCMSR_MU_OUTBOUND_MESSAGE0_INTMASKENABLE 0x01
#define ARCMSR_MU_OUTBOUND_ALL_INTMASKENABLE 0x1F
extern void arcmsr_post_ioctldata2iop(struct AdapterControlBlock *);
extern void arcmsr_iop_message_read(struct AdapterControlBlock *);
extern void arcmsr_write_ioctldata2iop(struct AdapterControlBlock *);
extern uint32_t arcmsr_Read_iop_rqbuffer_data(struct AdapterControlBlock *,
struct QBUFFER __iomem *);
extern void arcmsr_clear_iop2drv_rqueue_buffer(struct AdapterControlBlock *);
extern struct QBUFFER __iomem *arcmsr_get_iop_rqbuffer(struct AdapterControlBlock *);
extern struct device_attribute *arcmsr_host_attrs[];
extern int arcmsr_alloc_sysfs_attr(struct AdapterControlBlock *);

View File

@ -50,6 +50,7 @@
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/circ_buf.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
@ -68,42 +69,42 @@ static ssize_t arcmsr_sysfs_iop_message_read(struct file *filp,
struct device *dev = container_of(kobj,struct device,kobj);
struct Scsi_Host *host = class_to_shost(dev);
struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
uint8_t *pQbuffer,*ptmpQbuffer;
uint8_t *ptmpQbuffer;
int32_t allxfer_len = 0;
unsigned long flags;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
/* do message unit read. */
ptmpQbuffer = (uint8_t *)buf;
while ((acb->rqbuf_firstindex != acb->rqbuf_lastindex)
&& (allxfer_len < 1031)) {
pQbuffer = &acb->rqbuffer[acb->rqbuf_firstindex];
memcpy(ptmpQbuffer, pQbuffer, 1);
acb->rqbuf_firstindex++;
acb->rqbuf_firstindex %= ARCMSR_MAX_QBUFFER;
ptmpQbuffer++;
allxfer_len++;
spin_lock_irqsave(&acb->rqbuffer_lock, flags);
if (acb->rqbuf_getIndex != acb->rqbuf_putIndex) {
unsigned int tail = acb->rqbuf_getIndex;
unsigned int head = acb->rqbuf_putIndex;
unsigned int cnt_to_end = CIRC_CNT_TO_END(head, tail, ARCMSR_MAX_QBUFFER);
allxfer_len = CIRC_CNT(head, tail, ARCMSR_MAX_QBUFFER);
if (allxfer_len > ARCMSR_API_DATA_BUFLEN)
allxfer_len = ARCMSR_API_DATA_BUFLEN;
if (allxfer_len <= cnt_to_end)
memcpy(ptmpQbuffer, acb->rqbuffer + tail, allxfer_len);
else {
memcpy(ptmpQbuffer, acb->rqbuffer + tail, cnt_to_end);
memcpy(ptmpQbuffer + cnt_to_end, acb->rqbuffer, allxfer_len - cnt_to_end);
}
acb->rqbuf_getIndex = (acb->rqbuf_getIndex + allxfer_len) % ARCMSR_MAX_QBUFFER;
}
if (acb->acb_flags & ACB_F_IOPDATA_OVERFLOW) {
struct QBUFFER __iomem *prbuffer;
uint8_t __iomem *iop_data;
int32_t iop_len;
acb->acb_flags &= ~ACB_F_IOPDATA_OVERFLOW;
prbuffer = arcmsr_get_iop_rqbuffer(acb);
iop_data = prbuffer->data;
iop_len = readl(&prbuffer->data_len);
while (iop_len > 0) {
acb->rqbuffer[acb->rqbuf_lastindex] = readb(iop_data);
acb->rqbuf_lastindex++;
acb->rqbuf_lastindex %= ARCMSR_MAX_QBUFFER;
iop_data++;
iop_len--;
}
arcmsr_iop_message_read(acb);
if (arcmsr_Read_iop_rqbuffer_data(acb, prbuffer) == 0)
acb->acb_flags |= ACB_F_IOPDATA_OVERFLOW;
}
return (allxfer_len);
spin_unlock_irqrestore(&acb->rqbuffer_lock, flags);
return allxfer_len;
}
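
The rewritten read path above replaces the old byte-at-a-time drain with the ring-buffer helpers from <linux/circ_buf.h>, which work because ARCMSR_MAX_QBUFFER (4096) is a power of two. A minimal userspace sketch of the same two-memcpy drain; the CIRC_* macros are quoted from the kernel header (GNU C statement expression), the rest is illustrative:

#include <stdio.h>
#include <string.h>

#define CIRC_CNT(head, tail, size)  (((head) - (tail)) & ((size) - 1))
#define CIRC_CNT_TO_END(head, tail, size) \
	({ int end = (size) - (tail); \
	   int n = (((head) + end) & ((size) - 1)); \
	   n < end ? n : end; })

#define RING_SIZE 4096	/* power of two, like ARCMSR_MAX_QBUFFER */

/* Copy all available bytes out of the ring in at most two memcpy()
 * calls, mirroring the rewritten sysfs read path above. */
static int ring_copy_out(unsigned char *dst, const unsigned char *ring,
			 unsigned int head, unsigned int *tail)
{
	int cnt = CIRC_CNT(head, *tail, RING_SIZE);
	int cnt_to_end = CIRC_CNT_TO_END(head, *tail, RING_SIZE);

	if (cnt <= cnt_to_end) {	/* contiguous region */
		memcpy(dst, ring + *tail, cnt);
	} else {			/* wraps: copy two chunks */
		memcpy(dst, ring + *tail, cnt_to_end);
		memcpy(dst + cnt_to_end, ring, cnt - cnt_to_end);
	}
	*tail = (*tail + cnt) & (RING_SIZE - 1);
	return cnt;
}

int main(void)
{
	unsigned char ring[RING_SIZE], out[RING_SIZE];
	unsigned int head = 10, tail = RING_SIZE - 4;	/* wrapped case */

	memset(ring, 'x', sizeof(ring));
	printf("copied %d bytes\n", ring_copy_out(out, ring, head, &tail));
	return 0;
}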
static ssize_t arcmsr_sysfs_iop_message_write(struct file *filp,
@ -115,43 +116,42 @@ static ssize_t arcmsr_sysfs_iop_message_write(struct file *filp,
struct device *dev = container_of(kobj,struct device,kobj);
struct Scsi_Host *host = class_to_shost(dev);
struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
int32_t my_empty_len, user_len, wqbuf_firstindex, wqbuf_lastindex;
int32_t user_len, cnt2end;
uint8_t *pQbuffer, *ptmpuserbuffer;
unsigned long flags;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (count > 1032)
if (count > ARCMSR_API_DATA_BUFLEN)
return -EINVAL;
/* do message unit write. */
ptmpuserbuffer = (uint8_t *)buf;
user_len = (int32_t)count;
wqbuf_lastindex = acb->wqbuf_lastindex;
wqbuf_firstindex = acb->wqbuf_firstindex;
if (wqbuf_lastindex != wqbuf_firstindex) {
arcmsr_post_ioctldata2iop(acb);
spin_lock_irqsave(&acb->wqbuffer_lock, flags);
if (acb->wqbuf_putIndex != acb->wqbuf_getIndex) {
arcmsr_write_ioctldata2iop(acb);
spin_unlock_irqrestore(&acb->wqbuffer_lock, flags);
return 0; /*need retry*/
} else {
my_empty_len = (wqbuf_firstindex-wqbuf_lastindex - 1)
&(ARCMSR_MAX_QBUFFER - 1);
if (my_empty_len >= user_len) {
while (user_len > 0) {
pQbuffer =
&acb->wqbuffer[acb->wqbuf_lastindex];
memcpy(pQbuffer, ptmpuserbuffer, 1);
acb->wqbuf_lastindex++;
acb->wqbuf_lastindex %= ARCMSR_MAX_QBUFFER;
ptmpuserbuffer++;
user_len--;
}
if (acb->acb_flags & ACB_F_MESSAGE_WQBUFFER_CLEARED) {
acb->acb_flags &=
~ACB_F_MESSAGE_WQBUFFER_CLEARED;
arcmsr_post_ioctldata2iop(acb);
}
return count;
} else {
return 0; /*need retry*/
pQbuffer = &acb->wqbuffer[acb->wqbuf_putIndex];
cnt2end = ARCMSR_MAX_QBUFFER - acb->wqbuf_putIndex;
if (user_len > cnt2end) {
memcpy(pQbuffer, ptmpuserbuffer, cnt2end);
ptmpuserbuffer += cnt2end;
user_len -= cnt2end;
acb->wqbuf_putIndex = 0;
pQbuffer = acb->wqbuffer;
}
memcpy(pQbuffer, ptmpuserbuffer, user_len);
acb->wqbuf_putIndex += user_len;
acb->wqbuf_putIndex %= ARCMSR_MAX_QBUFFER;
if (acb->acb_flags & ACB_F_MESSAGE_WQBUFFER_CLEARED) {
acb->acb_flags &=
~ACB_F_MESSAGE_WQBUFFER_CLEARED;
arcmsr_write_ioctldata2iop(acb);
}
spin_unlock_irqrestore(&acb->wqbuffer_lock, flags);
return count;
}
}
@ -165,22 +165,24 @@ static ssize_t arcmsr_sysfs_iop_message_clear(struct file *filp,
struct Scsi_Host *host = class_to_shost(dev);
struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata;
uint8_t *pQbuffer;
unsigned long flags;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (acb->acb_flags & ACB_F_IOPDATA_OVERFLOW) {
acb->acb_flags &= ~ACB_F_IOPDATA_OVERFLOW;
arcmsr_iop_message_read(acb);
}
arcmsr_clear_iop2drv_rqueue_buffer(acb);
acb->acb_flags |=
(ACB_F_MESSAGE_WQBUFFER_CLEARED
| ACB_F_MESSAGE_RQBUFFER_CLEARED
| ACB_F_MESSAGE_WQBUFFER_READED);
acb->rqbuf_firstindex = 0;
acb->rqbuf_lastindex = 0;
acb->wqbuf_firstindex = 0;
acb->wqbuf_lastindex = 0;
spin_lock_irqsave(&acb->rqbuffer_lock, flags);
acb->rqbuf_getIndex = 0;
acb->rqbuf_putIndex = 0;
spin_unlock_irqrestore(&acb->rqbuffer_lock, flags);
spin_lock_irqsave(&acb->wqbuffer_lock, flags);
acb->wqbuf_getIndex = 0;
acb->wqbuf_putIndex = 0;
spin_unlock_irqrestore(&acb->wqbuffer_lock, flags);
pQbuffer = acb->rqbuffer;
memset(pQbuffer, 0, sizeof (struct QBUFFER));
pQbuffer = acb->wqbuffer;
@ -193,7 +195,7 @@ static struct bin_attribute arcmsr_sysfs_message_read_attr = {
.name = "mu_read",
.mode = S_IRUSR ,
},
.size = 1032,
.size = ARCMSR_API_DATA_BUFLEN,
.read = arcmsr_sysfs_iop_message_read,
};
@ -202,7 +204,7 @@ static struct bin_attribute arcmsr_sysfs_message_write_attr = {
.name = "mu_write",
.mode = S_IWUSR,
},
.size = 1032,
.size = ARCMSR_API_DATA_BUFLEN,
.write = arcmsr_sysfs_iop_message_write,
};

File diff suppressed because it is too large

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -275,6 +275,19 @@ bool is_link_state_evt(u32 trailer)
ASYNC_EVENT_CODE_LINK_STATE);
}
static bool is_iscsi_evt(u32 trailer)
{
return ((trailer >> ASYNC_TRAILER_EVENT_CODE_SHIFT) &
ASYNC_TRAILER_EVENT_CODE_MASK) ==
ASYNC_EVENT_CODE_ISCSI;
}
static int iscsi_evt_type(u32 trailer)
{
return (trailer >> ASYNC_TRAILER_EVENT_TYPE_SHIFT) &
ASYNC_TRAILER_EVENT_TYPE_MASK;
}
static inline bool be_mcc_compl_is_new(struct be_mcc_compl *compl)
{
if (compl->flags != 0) {
@ -438,7 +451,7 @@ void beiscsi_async_link_state_process(struct beiscsi_hba *phba,
} else if ((evt->port_link_status & ASYNC_EVENT_LINK_UP) ||
((evt->port_link_status & ASYNC_EVENT_LOGICAL) &&
(evt->port_fault == BEISCSI_PHY_LINK_FAULT_NONE))) {
phba->state = BE_ADAPTER_LINK_UP;
phba->state = BE_ADAPTER_LINK_UP | BE_ADAPTER_CHECK_BOOT;
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_INIT,
@ -461,7 +474,28 @@ int beiscsi_process_mcc(struct beiscsi_hba *phba)
/* Interpret compl as a async link evt */
beiscsi_async_link_state_process(phba,
(struct be_async_event_link_state *) compl);
else
else if (is_iscsi_evt(compl->flags)) {
switch (iscsi_evt_type(compl->flags)) {
case ASYNC_EVENT_NEW_ISCSI_TGT_DISC:
case ASYNC_EVENT_NEW_ISCSI_CONN:
case ASYNC_EVENT_NEW_TCP_CONN:
phba->state |= BE_ADAPTER_CHECK_BOOT;
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG |
BEISCSI_LOG_MBOX,
"BC_%d : Async iscsi Event,"
" flags handled = 0x%08x\n",
compl->flags);
break;
default:
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG |
BEISCSI_LOG_MBOX,
"BC_%d : Unsupported Async"
" Event, flags = 0x%08x\n",
compl->flags);
}
} else
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG |
BEISCSI_LOG_MBOX,

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -26,9 +26,9 @@
* The commands are serviced by the ARM processor in the OneConnect's MPU.
*/
struct be_sge {
u32 pa_lo;
u32 pa_hi;
u32 len;
__le32 pa_lo;
__le32 pa_hi;
__le32 len;
};
#define MCC_WRB_SGE_CNT_SHIFT 3 /* bits 3 - 7 of dword 0 */
@ -118,6 +118,14 @@ struct be_mcc_compl {
#define ASYNC_TRAILER_EVENT_CODE_SHIFT 8 /* bits 8 - 15 */
#define ASYNC_TRAILER_EVENT_CODE_MASK 0xFF
#define ASYNC_EVENT_CODE_LINK_STATE 0x1
#define ASYNC_EVENT_CODE_ISCSI 0x4
#define ASYNC_TRAILER_EVENT_TYPE_SHIFT 16 /* bits 16 - 23 */
#define ASYNC_TRAILER_EVENT_TYPE_MASK 0xF
#define ASYNC_EVENT_NEW_ISCSI_TGT_DISC 0x4
#define ASYNC_EVENT_NEW_ISCSI_CONN 0x5
#define ASYNC_EVENT_NEW_TCP_CONN 0x7
struct be_async_event_trailer {
u32 code;
};
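
A small standalone sketch (hypothetical trailer value) of how is_iscsi_evt() and iscsi_evt_type() from the previous file decode a completion trailer using the masks added here, with the event code at bits 8-15 and the event type above it:

#include <stdint.h>
#include <stdio.h>

#define ASYNC_TRAILER_EVENT_CODE_SHIFT	8
#define ASYNC_TRAILER_EVENT_CODE_MASK	0xFF
#define ASYNC_EVENT_CODE_ISCSI		0x4
#define ASYNC_TRAILER_EVENT_TYPE_SHIFT	16
#define ASYNC_TRAILER_EVENT_TYPE_MASK	0xF
#define ASYNC_EVENT_NEW_ISCSI_CONN	0x5

int main(void)
{
	/* Build a trailer announcing a new iSCSI connection event. */
	uint32_t trailer =
		(ASYNC_EVENT_NEW_ISCSI_CONN << ASYNC_TRAILER_EVENT_TYPE_SHIFT) |
		(ASYNC_EVENT_CODE_ISCSI << ASYNC_TRAILER_EVENT_CODE_SHIFT);
	int code = (trailer >> ASYNC_TRAILER_EVENT_CODE_SHIFT) &
		   ASYNC_TRAILER_EVENT_CODE_MASK;
	int type = (trailer >> ASYNC_TRAILER_EVENT_TYPE_SHIFT) &
		   ASYNC_TRAILER_EVENT_TYPE_MASK;

	printf("iscsi event: %s, type 0x%x\n",
	       code == ASYNC_EVENT_CODE_ISCSI ? "yes" : "no", type);
	return 0;
}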
@ -624,11 +632,11 @@ static inline struct be_sge *nonembedded_sgl(struct be_mcc_wrb *wrb)
/******************** Modify EQ Delay *******************/
struct be_cmd_req_modify_eq_delay {
struct be_cmd_req_hdr hdr;
u32 num_eq;
__le32 num_eq;
struct {
u32 eq_id;
u32 phase;
u32 delay_multiplier;
__le32 eq_id;
__le32 phase;
__le32 delay_multiplier;
} delay[MAX_CPUS];
} __packed;

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -1273,6 +1273,31 @@ int beiscsi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
return 0;
}
/**
* beiscsi_flush_cq()- Flush the CQ created.
* @phba: ptr device priv structure.
*
* Before the connection resources are freed, flush
* all the CQ entries
**/
static void beiscsi_flush_cq(struct beiscsi_hba *phba)
{
uint16_t i;
struct be_eq_obj *pbe_eq;
struct hwi_controller *phwi_ctrlr;
struct hwi_context_memory *phwi_context;
phwi_ctrlr = phba->phwi_ctrlr;
phwi_context = phwi_ctrlr->phwi_ctxt;
for (i = 0; i < phba->num_cpus; i++) {
pbe_eq = &phwi_context->be_eq[i];
blk_iopoll_disable(&pbe_eq->iopoll);
beiscsi_process_cq(pbe_eq);
blk_iopoll_enable(&pbe_eq->iopoll);
}
}
/**
* beiscsi_close_conn - Upload the connection
* @ep: The iscsi endpoint
@ -1294,6 +1319,10 @@ static int beiscsi_close_conn(struct beiscsi_endpoint *beiscsi_ep, int flag)
}
ret = beiscsi_mccq_compl(phba, tag, NULL, NULL);
/* Flush the CQ entries */
beiscsi_flush_cq(phba);
return ret;
}

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -2068,7 +2068,7 @@ static void beiscsi_process_mcc_isr(struct beiscsi_hba *phba)
* return
* Number of Completion Entries processed.
**/
static unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq)
unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq)
{
struct be_queue_info *cq;
struct sol_cqe *sol;
@ -2110,6 +2110,18 @@ static unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq)
cri_index = BE_GET_CRI_FROM_CID(cid);
ep = phba->ep_array[cri_index];
if (ep == NULL) {
/* connection has already been freed
* just move on to next one
*/
beiscsi_log(phba, KERN_WARNING,
BEISCSI_LOG_INIT,
"BM_%d : proc cqe of disconn ep: cid %d\n",
cid);
goto proc_next_cqe;
}
beiscsi_ep = ep->dd_data;
beiscsi_conn = beiscsi_ep->conn;
@ -2219,6 +2231,7 @@ static unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq)
break;
}
proc_next_cqe:
AMAP_SET_BITS(struct amap_sol_cqe, valid, sol, 0);
queue_tail_inc(cq);
sol = queue_tail_node(cq);
@ -4377,6 +4390,10 @@ static int beiscsi_setup_boot_info(struct beiscsi_hba *phba)
{
struct iscsi_boot_kobj *boot_kobj;
/* it has been created previously */
if (phba->boot_kset)
return 0;
/* get boot info using mgmt cmd */
if (beiscsi_get_boot_info(phba))
/* Try to see if we can carry on without this */
@ -5206,6 +5223,7 @@ static void beiscsi_quiesce(struct beiscsi_hba *phba,
free_irq(phba->pcidev->irq, phba);
}
pci_disable_msix(phba->pcidev);
cancel_delayed_work_sync(&phba->beiscsi_hw_check_task);
for (i = 0; i < phba->num_cpus; i++) {
pbe_eq = &phwi_context->be_eq[i];
@ -5227,7 +5245,6 @@ static void beiscsi_quiesce(struct beiscsi_hba *phba,
hwi_cleanup(phba);
}
cancel_delayed_work_sync(&phba->beiscsi_hw_check_task);
}
static void beiscsi_remove(struct pci_dev *pcidev)
@ -5276,9 +5293,9 @@ static void beiscsi_msix_enable(struct beiscsi_hba *phba)
for (i = 0; i <= phba->num_cpus; i++)
phba->msix_entries[i].entry = i;
status = pci_enable_msix(phba->pcidev, phba->msix_entries,
(phba->num_cpus + 1));
if (!status)
status = pci_enable_msix_range(phba->pcidev, phba->msix_entries,
phba->num_cpus + 1, phba->num_cpus + 1);
if (status > 0)
phba->msix_enabled = true;
return;
@ -5335,6 +5352,14 @@ static void be_eqd_update(struct beiscsi_hba *phba)
}
}
static void be_check_boot_session(struct beiscsi_hba *phba)
{
if (beiscsi_setup_boot_info(phba))
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT,
"BM_%d : Could not set up "
"iSCSI boot info on async event.\n");
}
/*
* beiscsi_hw_health_check()- Check adapter health
* @work: work item to check HW health
@ -5350,6 +5375,11 @@ beiscsi_hw_health_check(struct work_struct *work)
be_eqd_update(phba);
if (phba->state & BE_ADAPTER_CHECK_BOOT) {
phba->state &= ~BE_ADAPTER_CHECK_BOOT;
be_check_boot_session(phba);
}
beiscsi_ue_detect(phba);
schedule_delayed_work(&phba->beiscsi_hw_check_task,

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -36,7 +36,7 @@
#include <scsi/scsi_transport_iscsi.h>
#define DRV_NAME "be2iscsi"
#define BUILD_STR "10.2.273.0"
#define BUILD_STR "10.4.114.0"
#define BE_NAME "Emulex OneConnect" \
"Open-iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver"
@ -104,6 +104,7 @@
#define BE_ADAPTER_LINK_DOWN 0x002
#define BE_ADAPTER_PCI_ERR 0x004
#define BE_ADAPTER_STATE_SHUTDOWN 0x008
#define BE_ADAPTER_CHECK_BOOT 0x010
#define BEISCSI_CLEAN_UNLOAD 0x01
@ -839,6 +840,9 @@ void beiscsi_free_mgmt_task_handles(struct beiscsi_conn *beiscsi_conn,
void hwi_ring_cq_db(struct beiscsi_hba *phba,
unsigned int id, unsigned int num_processed,
unsigned char rearm, unsigned char event);
unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq);
static inline bool beiscsi_error(struct beiscsi_hba *phba)
{
return phba->ue_detected || phba->fw_timeout;

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -943,17 +943,20 @@ mgmt_static_ip_modify(struct beiscsi_hba *phba,
if (ip_action == IP_ACTION_ADD) {
memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value,
ip_param->len);
sizeof(req->ip_params.ip_record.ip_addr.addr));
if (subnet_param)
memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
subnet_param->value, subnet_param->len);
subnet_param->value,
sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
} else {
memcpy(req->ip_params.ip_record.ip_addr.addr,
if_info->ip_addr.addr, ip_param->len);
if_info->ip_addr.addr,
sizeof(req->ip_params.ip_record.ip_addr.addr));
memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
if_info->ip_addr.subnet_mask, ip_param->len);
if_info->ip_addr.subnet_mask,
sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
}
rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
@ -981,7 +984,7 @@ static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr,
req->action = gtway_action;
req->ip_addr.ip_type = BE2_IPV4;
memcpy(req->ip_addr.addr, gt_addr, param_len);
memcpy(req->ip_addr.addr, gt_addr, sizeof(req->ip_addr.addr));
return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
}

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2013 Emulex
* Copyright (C) 2005 - 2014 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or

View File

@ -1654,6 +1654,10 @@ static int bnx2fc_map_sg(struct bnx2fc_cmd *io_req)
u64 addr;
int i;
/*
* Use dma_map_sg directly to ensure we're using the correct
* dev struct off of pcidev.
*/
sg_count = dma_map_sg(&hba->pcidev->dev, scsi_sglist(sc),
scsi_sg_count(sc), sc->sc_data_direction);
scsi_for_each_sg(sc, sg, sg_count, i) {
@ -1703,9 +1707,16 @@ static int bnx2fc_build_bd_list_from_sg(struct bnx2fc_cmd *io_req)
static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req)
{
struct scsi_cmnd *sc = io_req->sc_cmd;
struct bnx2fc_interface *interface = io_req->port->priv;
struct bnx2fc_hba *hba = interface->hba;
if (io_req->bd_tbl->bd_valid && sc) {
scsi_dma_unmap(sc);
/*
* Use dma_unmap_sg directly to ensure we're using the correct
* dev struct off of pcidev.
*/
if (io_req->bd_tbl->bd_valid && sc && scsi_sg_count(sc)) {
dma_unmap_sg(&hba->pcidev->dev, scsi_sglist(sc),
scsi_sg_count(sc), sc->sc_data_direction);
io_req->bd_tbl->bd_valid = 0;
}
}

View File

@ -2235,6 +2235,9 @@ static umode_t bnx2i_attr_is_visible(int param_type, int param)
case ISCSI_PARAM_TGT_RESET_TMO:
case ISCSI_PARAM_IFACE_NAME:
case ISCSI_PARAM_INITIATOR_NAME:
case ISCSI_PARAM_BOOT_ROOT:
case ISCSI_PARAM_BOOT_NIC:
case ISCSI_PARAM_BOOT_TARGET:
return S_IRUGO;
default:
return 0;

View File

@ -94,7 +94,7 @@ enum {
};
struct csio_msix_entries {
unsigned short vector; /* Vector assigned by pci_enable_msix */
unsigned short vector; /* Assigned MSI-X vector */
void *dev_id; /* Priv object associated w/ this msix*/
char desc[24]; /* Description of this vector */
};

View File

@ -499,7 +499,7 @@ csio_reduce_sqsets(struct csio_hw *hw, int cnt)
static int
csio_enable_msix(struct csio_hw *hw)
{
int rv, i, j, k, n, min, cnt;
int i, j, k, n, min, cnt;
struct csio_msix_entries *entryp;
struct msix_entry *entries;
int extra = CSIO_EXTRA_VECS;
@ -521,21 +521,15 @@ csio_enable_msix(struct csio_hw *hw)
csio_dbg(hw, "FW supp #niq:%d, trying %d msix's\n", hw->cfg_niq, cnt);
while ((rv = pci_enable_msix(hw->pdev, entries, cnt)) >= min)
cnt = rv;
if (!rv) {
if (cnt < (hw->num_sqsets + extra)) {
csio_dbg(hw, "Reducing sqsets to %d\n", cnt - extra);
csio_reduce_sqsets(hw, cnt - extra);
}
} else {
if (rv > 0) {
pci_disable_msix(hw->pdev);
csio_info(hw, "Not using MSI-X, remainder:%d\n", rv);
}
cnt = pci_enable_msix_range(hw->pdev, entries, min, cnt);
if (cnt < 0) {
kfree(entries);
return -ENOMEM;
return cnt;
}
if (cnt < (hw->num_sqsets + extra)) {
csio_dbg(hw, "Reducing sqsets to %d\n", cnt - extra);
csio_reduce_sqsets(hw, cnt - extra);
}
/* Save off vectors */
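
The csio change above is the same conversion applied to be2iscsi earlier and hpsa later in this series: pci_enable_msix() returned a positive count to suggest retrying with fewer vectors, so every driver carried its own retry loop, whereas pci_enable_msix_range() takes a [minvec, maxvec] range and returns either the number of vectors allocated or a negative errno. A side-by-side sketch of the pattern (kernel context; the function names and the -ENOSPC policy are illustrative, not from this series):

#include <linux/pci.h>

/* Old style: retry loop around pci_enable_msix(). */
static int enable_msix_old(struct pci_dev *pdev, struct msix_entry *entries,
			   int nvec, int minvec)
{
	int rc;

	while ((rc = pci_enable_msix(pdev, entries, nvec)) > 0) {
		if (rc < minvec)
			return -ENOSPC;	/* fewer vectors than we can use */
		nvec = rc;		/* retry with the suggested count */
	}
	return rc ? rc : nvec;		/* <0 errno, or vectors enabled */
}

/* New style: one call, the PCI core handles the negotiation. */
static int enable_msix_new(struct pci_dev *pdev, struct msix_entry *entries,
			   int nvec, int minvec)
{
	return pci_enable_msix_range(pdev, entries, minvec, nvec);
}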

View File

@ -1852,7 +1852,7 @@ static void csk_return_rx_credits(struct cxgbi_sock *csk, int copied)
u32 credits;
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lu,%u, seq %u, wup %u, thre %u, %u.\n",
"csk 0x%p,%u,0x%lx,%u, seq %u, wup %u, thre %u, %u.\n",
csk, csk->state, csk->flags, csk->tid, csk->copied_seq,
csk->rcv_wup, cdev->rx_credit_thres,
cdev->rcv_win);

View File

@ -2363,6 +2363,7 @@ static s32 adpt_scsi_host_alloc(adpt_hba* pHba, struct scsi_host_template *sht)
host->unique_id = (u32)sys_tbl_pa + pHba->unit;
host->sg_tablesize = pHba->sg_tablesize;
host->can_queue = pHba->post_fifo_size;
host->use_cmd_list = 1;
return 0;
}

View File

@ -837,7 +837,6 @@ struct hostdata {
static struct Scsi_Host *sh[MAX_BOARDS];
static const char *driver_name = "EATA";
static char sha[MAX_BOARDS];
static DEFINE_SPINLOCK(driver_lock);
/* Initialize num_boards so that ihdlr can work while detect is in progress */
static unsigned int num_boards = MAX_BOARDS;
@ -1097,8 +1096,6 @@ static int port_detect(unsigned long port_base, unsigned int j,
goto fail;
}
spin_lock_irq(&driver_lock);
if (do_dma(port_base, 0, READ_CONFIG_PIO)) {
#if defined(DEBUG_DETECT)
printk("%s: detect, do_dma failed at 0x%03lx.\n", name,
@ -1264,10 +1261,7 @@ static int port_detect(unsigned long port_base, unsigned int j,
}
#endif
spin_unlock_irq(&driver_lock);
sh[j] = shost = scsi_register(tpnt, sizeof(struct hostdata));
spin_lock_irq(&driver_lock);
if (shost == NULL) {
printk("%s: unable to register host, detaching.\n", name);
goto freedma;
@ -1344,8 +1338,6 @@ static int port_detect(unsigned long port_base, unsigned int j,
else
sprintf(dma_name, "DMA %u", dma_channel);
spin_unlock_irq(&driver_lock);
for (i = 0; i < shost->can_queue; i++)
ha->cp[i].cp_dma_addr = pci_map_single(ha->pdev,
&ha->cp[i],
@ -1438,7 +1430,6 @@ static int port_detect(unsigned long port_base, unsigned int j,
freeirq:
free_irq(irq, &sha[j]);
freelock:
spin_unlock_irq(&driver_lock);
release_region(port_base, REGION_SIZE);
fail:
return 0;

View File

@ -96,14 +96,32 @@ int fcoe_link_speed_update(struct fc_lport *lport)
struct ethtool_cmd ecmd;
if (!__ethtool_get_settings(netdev, &ecmd)) {
lport->link_supported_speeds &=
~(FC_PORTSPEED_1GBIT | FC_PORTSPEED_10GBIT);
lport->link_supported_speeds &= ~(FC_PORTSPEED_1GBIT |
FC_PORTSPEED_10GBIT |
FC_PORTSPEED_20GBIT |
FC_PORTSPEED_40GBIT);
if (ecmd.supported & (SUPPORTED_1000baseT_Half |
SUPPORTED_1000baseT_Full))
SUPPORTED_1000baseT_Full |
SUPPORTED_1000baseKX_Full))
lport->link_supported_speeds |= FC_PORTSPEED_1GBIT;
if (ecmd.supported & SUPPORTED_10000baseT_Full)
lport->link_supported_speeds |=
FC_PORTSPEED_10GBIT;
if (ecmd.supported & (SUPPORTED_10000baseT_Full |
SUPPORTED_10000baseKX4_Full |
SUPPORTED_10000baseKR_Full |
SUPPORTED_10000baseR_FEC))
lport->link_supported_speeds |= FC_PORTSPEED_10GBIT;
if (ecmd.supported & (SUPPORTED_20000baseMLD2_Full |
SUPPORTED_20000baseKR2_Full))
lport->link_supported_speeds |= FC_PORTSPEED_20GBIT;
if (ecmd.supported & (SUPPORTED_40000baseKR4_Full |
SUPPORTED_40000baseCR4_Full |
SUPPORTED_40000baseSR4_Full |
SUPPORTED_40000baseLR4_Full))
lport->link_supported_speeds |= FC_PORTSPEED_40GBIT;
switch (ethtool_cmd_speed(&ecmd)) {
case SPEED_1000:
lport->link_speed = FC_PORTSPEED_1GBIT;
@ -111,6 +129,15 @@ int fcoe_link_speed_update(struct fc_lport *lport)
case SPEED_10000:
lport->link_speed = FC_PORTSPEED_10GBIT;
break;
case 20000:
lport->link_speed = FC_PORTSPEED_20GBIT;
break;
case 40000:
lport->link_speed = FC_PORTSPEED_40GBIT;
break;
default:
lport->link_speed = FC_PORTSPEED_UNKNOWN;
break;
}
return 0;
}

View File

@ -39,7 +39,7 @@
#define DRV_NAME "fnic"
#define DRV_DESCRIPTION "Cisco FCoE HBA Driver"
#define DRV_VERSION "1.6.0.10"
#define DRV_VERSION "1.6.0.11"
#define PFX DRV_NAME ": "
#define DFX DRV_NAME "%d: "

View File

@ -35,7 +35,7 @@
#include "cq_enet_desc.h"
#include "cq_exch_desc.h"
static u8 fcoe_all_fcfs[ETH_ALEN];
static u8 fcoe_all_fcfs[ETH_ALEN] = FIP_ALL_FCF_MACS;
struct workqueue_struct *fnic_fip_queue;
struct workqueue_struct *fnic_event_queue;
@ -101,13 +101,14 @@ void fnic_handle_link(struct work_struct *work)
FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host,
"link up\n");
fcoe_ctlr_link_up(&fnic->ctlr);
} else
} else {
/* UP -> UP */
spin_unlock_irqrestore(&fnic->fnic_lock, flags);
fnic_fc_trace_set_data(
fnic->lport->host->host_no, FNIC_FC_LE,
"Link Status: UP_UP",
strlen("Link Status: UP_UP"));
}
}
} else if (fnic->link_status) {
/* DOWN -> UP */

View File

@ -743,7 +743,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
fmt = "%02d:%02d:%04ld %02d:%02d:%02d.%09lu ns%8x %c%8x\t";
len += snprintf(fnic_dbgfs_prt->buffer + len,
(fnic_fc_trace_max_pages * PAGE_SIZE * 3) - len,
max_size - len,
fmt,
tm.tm_mon + 1, tm.tm_mday, tm.tm_year + 1900,
tm.tm_hour, tm.tm_min, tm.tm_sec,
@ -767,8 +767,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
j == ethhdr_len + fcoehdr_len + fchdr_len ||
(i > 3 && j%fchdr_len == 0)) {
len += snprintf(fnic_dbgfs_prt->buffer
+ len, (fnic_fc_trace_max_pages
* PAGE_SIZE * 3) - len,
+ len, max_size - len,
"\n\t\t\t\t\t\t\t\t");
i++;
}

View File

@ -5971,10 +5971,6 @@ static int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
/* Save the PCI command register */
pci_read_config_word(pdev, 4, &command_register);
/* Turn the board off. This is so that later pci_restore_state()
* won't turn the board on before the rest of config space is ready.
*/
pci_disable_device(pdev);
pci_save_state(pdev);
/* find the first memory BAR, so we can find the cfg table */
@ -6022,11 +6018,6 @@ static int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
goto unmap_cfgtable;
pci_restore_state(pdev);
rc = pci_enable_device(pdev);
if (rc) {
dev_warn(&pdev->dev, "failed to enable device.\n");
goto unmap_cfgtable;
}
pci_write_config_word(pdev, 4, command_register);
/* Some devices (notably the HP Smart Array 5i Controller)
@ -6159,26 +6150,22 @@ static void hpsa_interrupt_mode(struct ctlr_info *h)
h->msix_vector = MAX_REPLY_QUEUES;
if (h->msix_vector > num_online_cpus())
h->msix_vector = num_online_cpus();
err = pci_enable_msix(h->pdev, hpsa_msix_entries,
h->msix_vector);
if (err > 0) {
err = pci_enable_msix_range(h->pdev, hpsa_msix_entries,
1, h->msix_vector);
if (err < 0) {
dev_warn(&h->pdev->dev, "MSI-X init failed %d\n", err);
h->msix_vector = 0;
goto single_msi_mode;
} else if (err < h->msix_vector) {
dev_warn(&h->pdev->dev, "only %d MSI-X vectors "
"available\n", err);
h->msix_vector = err;
err = pci_enable_msix(h->pdev, hpsa_msix_entries,
h->msix_vector);
}
if (!err) {
for (i = 0; i < h->msix_vector; i++)
h->intr[i] = hpsa_msix_entries[i].vector;
return;
} else {
dev_warn(&h->pdev->dev, "MSI-X init failed %d\n",
err);
h->msix_vector = 0;
goto default_int_mode;
}
h->msix_vector = err;
for (i = 0; i < h->msix_vector; i++)
h->intr[i] = hpsa_msix_entries[i].vector;
return;
}
single_msi_mode:
if (pci_find_capability(h->pdev, PCI_CAP_ID_MSI)) {
dev_info(&h->pdev->dev, "MSI\n");
if (!pci_enable_msi(h->pdev))
@ -6541,6 +6528,23 @@ static int hpsa_init_reset_devices(struct pci_dev *pdev)
if (!reset_devices)
return 0;
/* kdump kernel is loading, we don't know what state the PCI
* interface is in. dev->enable_cnt is zero, so we call
* enable+disable, wait a while and switch it on.
*/
rc = pci_enable_device(pdev);
if (rc) {
dev_warn(&pdev->dev, "Failed to enable PCI device\n");
return -ENODEV;
}
pci_disable_device(pdev);
msleep(260); /* a randomly chosen number */
rc = pci_enable_device(pdev);
if (rc) {
dev_warn(&pdev->dev, "failed to enable device.\n");
return -ENODEV;
}
pci_set_master(pdev);
/* Reset the controller with a PCI power-cycle or via doorbell */
rc = hpsa_kdump_hard_reset_controller(pdev);
@ -6549,10 +6553,11 @@ static int hpsa_init_reset_devices(struct pci_dev *pdev)
* "performant mode". Or, it might be 640x, which can't reset
* due to concerns about shared bbwc between 6402/6404 pair.
*/
if (rc == -ENOTSUPP)
return rc; /* just try to do the kdump anyhow. */
if (rc)
return -ENODEV;
if (rc) {
if (rc != -ENOTSUPP) /* just try to do the kdump anyhow. */
rc = -ENODEV;
goto out_disable;
}
/* Now try to get the controller to respond to a no-op */
dev_warn(&pdev->dev, "Waiting for controller to respond to no-op\n");
@ -6563,7 +6568,11 @@ static int hpsa_init_reset_devices(struct pci_dev *pdev)
dev_warn(&pdev->dev, "no-op failed%s\n",
(i < 11 ? "; re-trying" : ""));
}
return 0;
out_disable:
pci_disable_device(pdev);
return rc;
}
static int hpsa_allocate_cmd_pool(struct ctlr_info *h)
@ -6743,6 +6752,7 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
iounmap(h->transtable);
if (h->cfgtable)
iounmap(h->cfgtable);
pci_disable_device(h->pdev);
pci_release_regions(h->pdev);
kfree(h);
}

View File

@ -2440,6 +2440,7 @@ static void ipr_handle_log_data(struct ipr_ioa_cfg *ioa_cfg,
{
u32 ioasc;
int error_index;
struct ipr_hostrcb_type_21_error *error;
if (hostrcb->hcam.notify_type != IPR_HOST_RCB_NOTIF_TYPE_ERROR_LOG_ENTRY)
return;
@ -2464,6 +2465,15 @@ static void ipr_handle_log_data(struct ipr_ioa_cfg *ioa_cfg,
if (!ipr_error_table[error_index].log_hcam)
return;
if (ioasc == IPR_IOASC_HW_CMD_FAILED &&
hostrcb->hcam.overlay_id == IPR_HOST_RCB_OVERLAY_ID_21) {
error = &hostrcb->hcam.u.error64.u.type_21_error;
if (((be32_to_cpu(error->sense_data[0]) & 0x0000ff00) >> 8) == ILLEGAL_REQUEST &&
ioa_cfg->log_level <= IPR_DEFAULT_LOG_LEVEL)
return;
}
ipr_hcam_err(hostrcb, "%s\n", ipr_error_table[error_index].error);
/* Set indication we have logged an error */

View File

@ -130,6 +130,7 @@
#define IPR_IOASC_HW_DEV_BUS_STATUS 0x04448500
#define IPR_IOASC_IOASC_MASK 0xFFFFFF00
#define IPR_IOASC_SCSI_STATUS_MASK 0x000000FF
#define IPR_IOASC_HW_CMD_FAILED 0x046E0000
#define IPR_IOASC_IR_INVALID_REQ_TYPE_OR_PKT 0x05240000
#define IPR_IOASC_IR_RESOURCE_HANDLE 0x05250000
#define IPR_IOASC_IR_NO_CMDS_TO_2ND_IOA 0x05258100

View File

@ -726,13 +726,18 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
switch(param) {
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_LOCAL_PORT:
spin_lock_bh(&conn->session->frwd_lock);
if (!tcp_sw_conn || !tcp_sw_conn->sock) {
spin_unlock_bh(&conn->session->frwd_lock);
return -ENOTCONN;
}
rc = kernel_getpeername(tcp_sw_conn->sock,
(struct sockaddr *)&addr, &len);
if (param == ISCSI_PARAM_LOCAL_PORT)
rc = kernel_getsockname(tcp_sw_conn->sock,
(struct sockaddr *)&addr, &len);
else
rc = kernel_getpeername(tcp_sw_conn->sock,
(struct sockaddr *)&addr, &len);
spin_unlock_bh(&conn->session->frwd_lock);
if (rc)
return rc;
@ -895,6 +900,7 @@ static umode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
case ISCSI_PARAM_DATADGST_EN:
case ISCSI_PARAM_CONN_ADDRESS:
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_LOCAL_PORT:
case ISCSI_PARAM_EXP_STATSN:
case ISCSI_PARAM_PERSISTENT_ADDRESS:
case ISCSI_PARAM_PERSISTENT_PORT:
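
The new ISCSI_PARAM_LOCAL_PORT above reports the connection's local endpoint via kernel_getsockname(), while CONN_PORT and CONN_ADDRESS keep using kernel_getpeername(). A userspace analogue of the distinction, as a helper sketch to be called with a connected IPv4 TCP socket fd (names are illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Print both endpoints of a connected IPv4 TCP socket. */
static void print_ports(int fd)
{
	struct sockaddr_in local, peer;
	socklen_t len = sizeof(local);

	if (getsockname(fd, (struct sockaddr *)&local, &len) == 0)
		printf("local port %u\n", ntohs(local.sin_port));	/* LOCAL_PORT */
	len = sizeof(peer);
	if (getpeername(fd, (struct sockaddr *)&peer, &len) == 0)
		printf("peer port  %u\n", ntohs(peer.sin_port));	/* CONN_PORT */
}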

View File

@ -296,9 +296,9 @@ void fc_fc4_deregister_provider(enum fc_fh_type type, struct fc4_prov *prov)
BUG_ON(type >= FC_FC4_PROV_SIZE);
mutex_lock(&fc_prov_mutex);
if (prov->recv)
rcu_assign_pointer(fc_passive_prov[type], NULL);
RCU_INIT_POINTER(fc_passive_prov[type], NULL);
else
rcu_assign_pointer(fc_active_prov[type], NULL);
RCU_INIT_POINTER(fc_active_prov[type], NULL);
mutex_unlock(&fc_prov_mutex);
synchronize_rcu();
}

View File

@ -3505,6 +3505,7 @@ int iscsi_conn_get_addr_param(struct sockaddr_storage *addr,
len = sprintf(buf, "%pI6\n", &sin6->sin6_addr);
break;
case ISCSI_PARAM_CONN_PORT:
case ISCSI_PARAM_LOCAL_PORT:
if (sin)
len = sprintf(buf, "%hu\n", be16_to_cpu(sin->sin_port));
else

View File

@ -3385,7 +3385,7 @@ lpfc_stat_data_ctrl_store(struct device *dev, struct device_attribute *attr,
if (strlen(buf) > (LPFC_MAX_DATA_CTRL_LEN - 1))
return -EINVAL;
strcpy(bucket_data, buf);
strncpy(bucket_data, buf, LPFC_MAX_DATA_CTRL_LEN);
str_ptr = &bucket_data[0];
/* Ignore this token - this is command token */
token = strsep(&str_ptr, "\t ");

View File

@ -656,7 +656,6 @@ lpfc_bsg_rport_els(struct fc_bsg_job *job)
struct lpfc_nodelist *ndlp = rdata->pnode;
uint32_t elscmd;
uint32_t cmdsize;
uint32_t rspsize;
struct lpfc_iocbq *cmdiocbq;
uint16_t rpi = 0;
struct bsg_job_data *dd_data;
@ -687,7 +686,6 @@ lpfc_bsg_rport_els(struct fc_bsg_job *job)
elscmd = job->request->rqst_data.r_els.els_code;
cmdsize = job->request_payload.payload_len;
rspsize = job->reply_payload.payload_len;
if (!lpfc_nlp_get(ndlp)) {
rc = -ENODEV;
@ -2251,7 +2249,6 @@ lpfc_sli4_bsg_diag_mode_end(struct fc_bsg_job *job)
i = 0;
while (phba->link_state != LPFC_LINK_DOWN) {
if (i++ > timeout) {
rc = -ETIMEDOUT;
lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
"3140 Timeout waiting for link to "
"diagnostic mode_end, timeout:%d ms\n",
@ -2291,7 +2288,6 @@ lpfc_sli4_bsg_link_diag_test(struct fc_bsg_job *job)
LPFC_MBOXQ_t *pmboxq;
struct sli4_link_diag *link_diag_test_cmd;
uint32_t req_len, alloc_len;
uint32_t timeout;
struct lpfc_mbx_run_link_diag_test *run_link_diag_test;
union lpfc_sli4_cfg_shdr *shdr;
uint32_t shdr_status, shdr_add_status;
@ -2342,7 +2338,6 @@ lpfc_sli4_bsg_link_diag_test(struct fc_bsg_job *job)
link_diag_test_cmd = (struct sli4_link_diag *)
job->request->rqst_data.h_vendor.vendor_cmd;
timeout = link_diag_test_cmd->timeout * 100;
rc = lpfc_sli4_bsg_set_link_diag_state(phba, 1);
@ -2693,14 +2688,13 @@ lpfc_bsg_dma_page_alloc(struct lpfc_hba *phba)
INIT_LIST_HEAD(&dmabuf->list);
/* now, allocate dma buffer */
dmabuf->virt = dma_alloc_coherent(&pcidev->dev, BSG_MBOX_SIZE,
&(dmabuf->phys), GFP_KERNEL);
dmabuf->virt = dma_zalloc_coherent(&pcidev->dev, BSG_MBOX_SIZE,
&(dmabuf->phys), GFP_KERNEL);
if (!dmabuf->virt) {
kfree(dmabuf);
return NULL;
}
memset((uint8_t *)dmabuf->virt, 0, BSG_MBOX_SIZE);
return dmabuf;
}
@ -2828,8 +2822,10 @@ diag_cmd_data_alloc(struct lpfc_hba *phba,
size -= cnt;
}
mlist->flag = i;
return mlist;
if (mlist) {
mlist->flag = i;
return mlist;
}
out:
diag_cmd_data_free(phba, mlist);
return NULL;
@ -3344,7 +3340,7 @@ job_error:
* will wake up thread waiting on the wait queue pointed by context1
* of the mailbox.
**/
void
static void
lpfc_bsg_issue_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
{
struct bsg_job_data *dd_data;
@ -4593,7 +4589,7 @@ sli_cfg_ext_error:
* being reset) and com-plete the job, otherwise issue the mailbox command and
* let our completion handler finish the command.
**/
static uint32_t
static int
lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
struct lpfc_vport *vport)
{

View File

@ -451,7 +451,6 @@ int lpfc_send_rrq(struct lpfc_hba *, struct lpfc_node_rrq *);
int lpfc_set_rrq_active(struct lpfc_hba *, struct lpfc_nodelist *,
uint16_t, uint16_t, uint16_t);
uint16_t lpfc_sli4_xri_inrange(struct lpfc_hba *, uint16_t);
void lpfc_cleanup_wt_rrqs(struct lpfc_hba *);
void lpfc_cleanup_vports_rrqs(struct lpfc_vport *, struct lpfc_nodelist *);
struct lpfc_node_rrq *lpfc_get_active_rrq(struct lpfc_vport *, uint16_t,
uint32_t);

View File

@ -1439,7 +1439,7 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #2 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(MANUFACTURER);
strcpy(ae->un.Manufacturer, "Emulex Corporation");
strncpy(ae->un.Manufacturer, "Emulex Corporation", 64);
len = strlen(ae->un.Manufacturer);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
@ -1449,7 +1449,7 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #3 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(SERIAL_NUMBER);
strcpy(ae->un.SerialNumber, phba->SerialNumber);
strncpy(ae->un.SerialNumber, phba->SerialNumber, 64);
len = strlen(ae->un.SerialNumber);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
@ -1459,7 +1459,7 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #4 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(MODEL);
strcpy(ae->un.Model, phba->ModelName);
strncpy(ae->un.Model, phba->ModelName, 256);
len = strlen(ae->un.Model);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
@ -1469,7 +1469,7 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #5 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(MODEL_DESCRIPTION);
strcpy(ae->un.ModelDescription, phba->ModelDesc);
strncpy(ae->un.ModelDescription, phba->ModelDesc, 256);
len = strlen(ae->un.ModelDescription);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
@ -1500,7 +1500,8 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #7 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(DRIVER_VERSION);
strcpy(ae->un.DriverVersion, lpfc_release_version);
strncpy(ae->un.DriverVersion,
lpfc_release_version, 256);
len = strlen(ae->un.DriverVersion);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
@ -1510,7 +1511,8 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
/* #8 HBA attribute entry */
ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
ae->ad.bits.AttrType = be16_to_cpu(OPTION_ROM_VERSION);
strcpy(ae->un.OptionROMVersion, phba->OptionROMVersion);
strncpy(ae->un.OptionROMVersion,
phba->OptionROMVersion, 256);
len = strlen(ae->un.OptionROMVersion);
len += (len & 3) ? (4 - (len & 3)) : 4;
ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + len);
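
One caveat on the strcpy() to strncpy() conversions above: strncpy() bounds the copy, but it does not NUL-terminate the destination when the source reaches the size limit, so the strlen() calls that follow each copy still assume the source strings are shorter than the destination fields. A tiny standalone sketch of the edge case (sizes are illustrative):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char dst[8];
	const char *src = "exactly8";	/* 8 chars: fills dst, no NUL */

	strncpy(dst, src, sizeof(dst));
	/* dst is NOT NUL-terminated here; strlen(dst) would read past
	 * the array. Safe use needs src shorter than sizeof(dst), or an
	 * explicit terminator as below. */
	dst[sizeof(dst) - 1] = '\0';
	printf("%s\n", dst);		/* prints "exactly" */
	return 0;
}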

View File

@ -269,7 +269,7 @@ static int
lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
{
int len = 0;
int cnt, i, j, found, posted, low;
int i, j, found, posted, low;
uint32_t phys, raw_index, getidx;
struct lpfc_hbq_init *hip;
struct hbq_s *hbqs;
@ -279,7 +279,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
if (phba->sli_rev != 3)
return 0;
cnt = LPFC_HBQINFO_SIZE;
spin_lock_irq(&phba->hbalock);
/* toggle between multiple hbqs, if any */

View File

@ -78,7 +78,8 @@ struct lpfc_nodelist {
struct list_head nlp_listp;
struct lpfc_name nlp_portname;
struct lpfc_name nlp_nodename;
uint32_t nlp_flag; /* entry flags */
uint32_t nlp_flag; /* entry flags */
uint32_t nlp_add_flag; /* additional flags */
uint32_t nlp_DID; /* FC D_ID of entry */
uint32_t nlp_last_elscmd; /* Last ELS cmd sent */
uint16_t nlp_type;
@ -157,6 +158,9 @@ struct lpfc_node_rrq {
#define NLP_FIRSTBURST 0x40000000 /* Target supports FirstBurst */
#define NLP_RPI_REGISTERED 0x80000000 /* nlp_rpi is valid */
/* Defines for nlp_add_flag (uint32) */
#define NLP_IN_DEV_LOSS 0x00000001 /* Dev Loss processing in progress */
/* ndlp usage management macros */
#define NLP_CHK_NODE_ACT(ndlp) (((ndlp)->nlp_usg_map \
& NLP_USG_NODE_ACT_BIT) \

View File

@ -1084,7 +1084,8 @@ stop_rr_fcf_flogi:
* accessing it.
*/
prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
if (!prsp)
goto out;
sp = prsp->virt + sizeof(uint32_t);
/* FLOGI completes successfully */
@ -1828,7 +1829,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
IOCB_t *irsp;
struct lpfc_nodelist *ndlp;
struct lpfc_dmabuf *prsp;
int disc, rc, did, type;
int disc, rc;
/* we pass cmdiocb to state machine which needs rspiocb as well */
cmdiocb->context_un.rsp_iocb = rspiocb;
@ -1873,10 +1874,6 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
goto out;
}
/* ndlp could be freed in DSM, save these values now */
type = ndlp->nlp_type;
did = ndlp->nlp_DID;
if (irsp->ulpStatus) {
/* Check for retry */
if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
@ -2269,8 +2266,6 @@ lpfc_adisc_done(struct lpfc_vport *vport)
void
lpfc_more_adisc(struct lpfc_vport *vport)
{
int sentadisc;
if (vport->num_disc_nodes)
vport->num_disc_nodes--;
/* Continue discovery with <num_disc_nodes> ADISCs to go */
@ -2283,7 +2278,7 @@ lpfc_more_adisc(struct lpfc_vport *vport)
if (vport->fc_flag & FC_NLP_MORE) {
lpfc_set_disctmo(vport);
/* go thru NPR nodes and issue any remaining ELS ADISCs */
sentadisc = lpfc_els_disc_adisc(vport);
lpfc_els_disc_adisc(vport);
}
if (!vport->num_disc_nodes)
lpfc_adisc_done(vport);
@ -3027,10 +3022,9 @@ lpfc_els_retry_delay_handler(struct lpfc_nodelist *ndlp)
{
struct lpfc_vport *vport = ndlp->vport;
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
uint32_t cmd, did, retry;
uint32_t cmd, retry;
spin_lock_irq(shost->host_lock);
did = ndlp->nlp_DID;
cmd = ndlp->nlp_last_elscmd;
ndlp->nlp_last_elscmd = 0;
@ -5288,10 +5282,9 @@ lpfc_els_rcv_rnid(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
IOCB_t *icmd;
RNID *rn;
struct ls_rjt stat;
uint32_t cmd, did;
uint32_t cmd;
icmd = &cmdiocb->iocb;
did = icmd->un.elsreq64.remoteID;
pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
lp = (uint32_t *) pcmd->virt;
@ -6693,6 +6686,13 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
phba->fc_stat.elsRcvFrame++;
/*
* Do not process any unsolicited ELS commands
* if the ndlp is in DEV_LOSS
*/
if (ndlp->nlp_add_flag & NLP_IN_DEV_LOSS)
goto dropit;
elsiocb->context1 = lpfc_nlp_get(ndlp);
elsiocb->vport = vport;
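The new NLP_IN_DEV_LOSS bit in nlp_add_flag lets the unsolicited-ELS path above drop work while dev-loss teardown runs; the handler hunks below clear the bit on every exit path. A minimal single-threaded sketch of that flag lifecycle (the real driver serializes these updates under its host and hba locks):

#include <stdbool.h>
#include <stdint.h>

#define NLP_IN_DEV_LOSS 0x00000001	/* dev-loss processing in progress */

struct node {
	uint32_t add_flag;
};

/* Entry: mark the node so other paths back off. */
static void dev_loss_start(struct node *n)
{
	n->add_flag |= NLP_IN_DEV_LOSS;
}

/* Fast-path check: drop unsolicited frames while dev-loss runs. */
static bool should_drop(const struct node *n)
{
	return n->add_flag & NLP_IN_DEV_LOSS;
}

/* Every exit path must clear the bit, as the hunks below do. */
static void dev_loss_done(struct node *n)
{
	n->add_flag &= ~NLP_IN_DEV_LOSS;
}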
@ -7514,6 +7514,8 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
vport->fc_myDID = irsp->un.ulpWord[4] & Mask_DID;
lpfc_vport_set_state(vport, FC_VPORT_ACTIVE);
prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
if (!prsp)
goto out;
sp = prsp->virt + sizeof(uint32_t);
fabric_param_changed = lpfc_check_clean_addr_bit(vport, sp);
memcpy(&vport->fabric_portname, &sp->portName,
@ -8187,9 +8189,11 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hba *phba,
list_del(&sglq_entry->list);
ndlp = sglq_entry->ndlp;
sglq_entry->ndlp = NULL;
spin_lock(&pring->ring_lock);
list_add_tail(&sglq_entry->list,
&phba->sli4_hba.lpfc_sgl_list);
sglq_entry->state = SGL_FREED;
spin_unlock(&pring->ring_lock);
spin_unlock(&phba->sli4_hba.abts_sgl_list_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
lpfc_set_rrq_active(phba, ndlp,
@ -8208,12 +8212,15 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hba *phba,
spin_unlock_irqrestore(&phba->hbalock, iflag);
return;
}
spin_lock(&pring->ring_lock);
sglq_entry = __lpfc_get_active_sglq(phba, lxri);
if (!sglq_entry || (sglq_entry->sli4_xritag != xri)) {
spin_unlock(&pring->ring_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
return;
}
sglq_entry->state = SGL_XRI_ABORTED;
spin_unlock(&pring->ring_lock);
spin_unlock_irqrestore(&phba->hbalock, iflag);
return;
}


@ -150,9 +150,30 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
/* If the WWPN of the rport and ndlp don't match, ignore it */
if (rport->port_name != wwn_to_u64(ndlp->nlp_portname.u.wwn)) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE,
"6789 rport name %lx != node port name %lx",
(unsigned long)rport->port_name,
(unsigned long)wwn_to_u64(
ndlp->nlp_portname.u.wwn));
put_node = rdata->pnode != NULL;
put_rport = ndlp->rport != NULL;
rdata->pnode = NULL;
ndlp->rport = NULL;
if (put_node)
lpfc_nlp_put(ndlp);
put_device(&rport->dev);
return;
}
put_node = rdata->pnode != NULL;
put_rport = ndlp->rport != NULL;
rdata->pnode = NULL;
ndlp->rport = NULL;
if (put_node)
lpfc_nlp_put(ndlp);
if (put_rport)
put_device(&rport->dev);
return;
}
evtp = &ndlp->dev_loss_evt;
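The callback now compares WWPNs before trusting the rport-to-node binding, since the rport may have been reused for a different remote node. wwn_to_u64() folds the 8-byte world-wide name into an integer, most significant byte first; a user-space sketch of that comparison:

#include <stdint.h>
#include <stdio.h>

/* Fold an 8-byte FC world-wide name into a u64, MSB first,
 * mirroring the scsi_transport_fc wwn_to_u64() helper. */
static uint64_t wwn_to_u64(const uint8_t wwn[8])
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | wwn[i];
	return v;
}

int main(void)
{
	/* Example WWNs; values illustrative only. */
	const uint8_t a[8] = { 0x10, 0x00, 0x00, 0x90, 0xfa, 0x01, 0x02, 0x03 };
	const uint8_t b[8] = { 0x10, 0x00, 0x00, 0x90, 0xfa, 0x01, 0x02, 0x04 };

	/* A mismatch means the rport no longer belongs to this node. */
	printf("match=%d\n", wwn_to_u64(a) == wwn_to_u64(b));
	return 0;
}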
@ -161,6 +182,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
return;
evtp->evt_arg1 = lpfc_nlp_get(ndlp);
ndlp->nlp_add_flag |= NLP_IN_DEV_LOSS;
spin_lock_irq(&phba->hbalock);
/* We need to hold the node by incrementing the reference
@ -201,8 +223,10 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
rport = ndlp->rport;
if (!rport)
if (!rport) {
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
return fcf_inuse;
}
rdata = rport->dd_data;
name = (uint8_t *) &ndlp->nlp_portname;
@ -235,6 +259,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
put_rport = ndlp->rport != NULL;
rdata->pnode = NULL;
ndlp->rport = NULL;
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
if (put_node)
lpfc_nlp_put(ndlp);
if (put_rport)
@ -250,6 +275,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
*name, *(name+1), *(name+2), *(name+3),
*(name+4), *(name+5), *(name+6), *(name+7),
ndlp->nlp_DID);
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
return fcf_inuse;
}
@ -259,6 +285,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
put_rport = ndlp->rport != NULL;
rdata->pnode = NULL;
ndlp->rport = NULL;
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
if (put_node)
lpfc_nlp_put(ndlp);
if (put_rport)
@ -269,6 +296,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
if (ndlp->nlp_sid != NLP_NO_SID) {
warn_on = 1;
/* flush the target */
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
lpfc_sli_abort_iocb(vport, &phba->sli.ring[phba->sli.fcp_ring],
ndlp->nlp_sid, 0, LPFC_CTX_TGT);
}
@ -297,6 +325,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
put_rport = ndlp->rport != NULL;
rdata->pnode = NULL;
ndlp->rport = NULL;
ndlp->nlp_add_flag &= ~NLP_IN_DEV_LOSS;
if (put_node)
lpfc_nlp_put(ndlp);
if (put_rport)
@ -995,7 +1024,6 @@ lpfc_linkup(struct lpfc_hba *phba)
struct lpfc_vport **vports;
int i;
lpfc_cleanup_wt_rrqs(phba);
phba->link_state = LPFC_LINK_UP;
/* Unblock fabric iocbs if they are blocked */
@ -2042,7 +2070,8 @@ lpfc_sli4_set_fcf_flogi_fail(struct lpfc_hba *phba, uint16_t fcf_index)
* returns:
* 0=success 1=failure
**/
int lpfc_sli4_fcf_pri_list_add(struct lpfc_hba *phba, uint16_t fcf_index,
static int lpfc_sli4_fcf_pri_list_add(struct lpfc_hba *phba,
uint16_t fcf_index,
struct fcf_record *new_fcf_record)
{
uint16_t current_fcf_pri;
@ -2146,7 +2175,6 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
uint16_t fcf_index, next_fcf_index;
struct lpfc_fcf_rec *fcf_rec = NULL;
uint16_t vlan_id;
uint32_t seed;
bool select_new_fcf;
int rc;
@ -2383,9 +2411,6 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
phba->fcf.fcf_flag |= FCF_AVAILABLE;
/* Setup initial running random FCF selection count */
phba->fcf.eligible_fcf_cnt = 1;
/* Seeding the random number generator for random selection */
seed = (uint32_t)(0xFFFFFFFF & jiffies);
prandom_seed(seed);
}
spin_unlock_irq(&phba->hbalock);
goto read_next_fcf;
@ -2678,7 +2703,7 @@ out:
*
* This function handles completion of init vfi mailbox command.
*/
void
static void
lpfc_init_vfi_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
{
struct lpfc_vport *vport = mboxq->vport;
@ -4438,7 +4463,7 @@ lpfc_no_rpi(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
* This function will issue an ELS LOGO command after completing
* the UNREG_RPI.
**/
void
static void
lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
{
struct lpfc_vport *vport = pmb->vport;
@ -5006,7 +5031,6 @@ lpfc_disc_start(struct lpfc_vport *vport)
struct lpfc_hba *phba = vport->phba;
uint32_t num_sent;
uint32_t clear_la_pending;
int did_changed;
if (!lpfc_is_link_up(phba)) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
@ -5025,11 +5049,6 @@ lpfc_disc_start(struct lpfc_vport *vport)
lpfc_set_disctmo(vport);
if (vport->fc_prevDID == vport->fc_myDID)
did_changed = 0;
else
did_changed = 1;
vport->fc_prevDID = vport->fc_myDID;
vport->num_disc_nodes = 0;
@ -6318,7 +6337,7 @@ lpfc_parse_fcoe_conf(struct lpfc_hba *phba,
uint8_t *buff,
uint32_t size)
{
uint32_t offset = 0, rec_length;
uint32_t offset = 0;
uint8_t *rec_ptr;
/*
@ -6345,8 +6364,6 @@ lpfc_parse_fcoe_conf(struct lpfc_hba *phba,
}
offset += 4;
rec_length = buff[offset + 1];
/* Read FCoE param record */
rec_ptr = lpfc_get_rec_conf23(&buff[offset],
size - offset, FCOE_PARAM_TYPE);


@ -306,10 +306,10 @@ lpfc_dump_wakeup_param_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
dist = dist_char[prg->dist];
if ((prg->dist == 3) && (prg->num == 0))
sprintf(phba->OptionROMVersion, "%d.%d%d",
snprintf(phba->OptionROMVersion, 32, "%d.%d%d",
prg->ver, prg->rev, prg->lev);
else
sprintf(phba->OptionROMVersion, "%d.%d%d%c%d",
snprintf(phba->OptionROMVersion, 32, "%d.%d%d%c%d",
prg->ver, prg->rev, prg->lev,
dist, prg->num);
mempool_free(pmboxq, phba->mbox_mem_pool);
@ -649,7 +649,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
* 0 - success
* Any other value - error
**/
int
static int
lpfc_hba_init_link(struct lpfc_hba *phba, uint32_t flag)
{
return lpfc_hba_init_link_fc_topology(phba, phba->cfg_topology, flag);
@ -750,7 +750,7 @@ lpfc_hba_init_link_fc_topology(struct lpfc_hba *phba, uint32_t fc_topology,
* 0 - success
* Any other value - error
**/
int
static int
lpfc_hba_down_link(struct lpfc_hba *phba, uint32_t flag)
{
LPFC_MBOXQ_t *pmb;
@ -988,9 +988,12 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
LIST_HEAD(aborts);
unsigned long iflag = 0;
struct lpfc_sglq *sglq_entry = NULL;
struct lpfc_sli *psli = &phba->sli;
struct lpfc_sli_ring *pring;
lpfc_hba_free_post_buf(phba);
lpfc_hba_clean_txcmplq(phba);
pring = &psli->ring[LPFC_ELS_RING];
/* At this point in time the HBA is either reset or DOA. Either
* way, nothing should be on lpfc_abts_els_sgl_list, it needs to be
@ -1008,8 +1011,10 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
&phba->sli4_hba.lpfc_abts_els_sgl_list, list)
sglq_entry->state = SGL_FREED;
spin_lock(&pring->ring_lock);
list_splice_init(&phba->sli4_hba.lpfc_abts_els_sgl_list,
&phba->sli4_hba.lpfc_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock(&phba->sli4_hba.abts_sgl_list_lock);
/* abts_scsi_buf_list_lock required because worker thread uses this
* list.
@ -3047,6 +3052,7 @@ lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
LIST_HEAD(els_sgl_list);
LIST_HEAD(scsi_sgl_list);
int rc;
struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
/*
* update on pci function's els xri-sgl list
@ -3087,7 +3093,9 @@ lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
list_add_tail(&sglq_entry->list, &els_sgl_list);
}
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&els_sgl_list, &phba->sli4_hba.lpfc_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
} else if (els_xri_cnt < phba->sli4_hba.els_xri_cnt) {
/* els xri-sgl shrunk */
@ -3097,7 +3105,9 @@ lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
"%d to %d\n", phba->sli4_hba.els_xri_cnt,
els_xri_cnt);
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&phba->sli4_hba.lpfc_sgl_list, &els_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
/* release extra els sgls from list */
for (i = 0; i < xri_cnt; i++) {
@ -3110,7 +3120,9 @@ lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
}
}
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&els_sgl_list, &phba->sli4_hba.lpfc_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
} else
lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
@ -3165,9 +3177,11 @@ lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
for (i = 0; i < scsi_xri_cnt; i++) {
list_remove_head(&scsi_sgl_list, psb,
struct lpfc_scsi_buf, list);
pci_pool_free(phba->lpfc_scsi_dma_buf_pool, psb->data,
psb->dma_handle);
kfree(psb);
if (psb) {
pci_pool_free(phba->lpfc_scsi_dma_buf_pool,
psb->data, psb->dma_handle);
kfree(psb);
}
}
spin_lock_irq(&phba->scsi_buf_list_get_lock);
phba->sli4_hba.scsi_xri_cnt -= scsi_xri_cnt;
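The hunk above adds a NULL check before freeing, because the list can drain earlier than the loop count expects. In kernels of this vintage the same pattern can be written with list_first_entry_or_null(); a sketch under that assumption, names illustrative:

#include <linux/list.h>
#include <linux/slab.h>

struct scsi_buf {
	struct list_head list;
	void *data;
};

/* Pop entries until the list drains; the _or_null helper makes the
 * empty-list case explicit instead of risking a stale pointer. */
static void free_all(struct list_head *head)
{
	struct scsi_buf *psb;

	while ((psb = list_first_entry_or_null(head, struct scsi_buf, list))) {
		list_del(&psb->list);
		kfree(psb->data);	/* payload, however it was allocated */
		kfree(psb);
	}
}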
@ -3550,7 +3564,7 @@ lpfc_fcf_redisc_wait_start_timer(struct lpfc_hba *phba)
* list, and then the worker thread shall be woken up for processing from the
* worker thread context.
**/
void
static void
lpfc_sli4_fcf_redisc_wait_tmo(unsigned long ptr)
{
struct lpfc_hba *phba = (struct lpfc_hba *)ptr;
@ -5680,10 +5694,13 @@ static void
lpfc_free_els_sgl_list(struct lpfc_hba *phba)
{
LIST_HEAD(sglq_list);
struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
/* Retrieve all els sgls from driver list */
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&phba->sli4_hba.lpfc_sgl_list, &sglq_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
/* Now free the sgl list */
@ -5848,16 +5865,14 @@ lpfc_sli4_create_rpi_hdr(struct lpfc_hba *phba)
if (!dmabuf)
return NULL;
dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,
LPFC_HDR_TEMPLATE_SIZE,
&dmabuf->phys,
GFP_KERNEL);
dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev,
LPFC_HDR_TEMPLATE_SIZE,
&dmabuf->phys, GFP_KERNEL);
if (!dmabuf->virt) {
rpi_hdr = NULL;
goto err_free_dmabuf;
}
memset(dmabuf->virt, 0, LPFC_HDR_TEMPLATE_SIZE);
if (!IS_ALIGNED(dmabuf->phys, LPFC_HDR_TEMPLATE_SIZE)) {
rpi_hdr = NULL;
goto err_free_coherent;
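This and several later hunks fold dma_alloc_coherent() plus memset() into dma_zalloc_coherent(), which hands back memory already zeroed (a helper available in kernels of this era). A minimal sketch of the allocate/free pairing, names illustrative:

#include <linux/dma-mapping.h>

/* One call returns a zeroed DMA-coherent buffer; pair every
 * allocation with dma_free_coherent() using the same size. */
static void *alloc_zeroed_dma(struct device *dev, size_t size,
			      dma_addr_t *phys)
{
	return dma_zalloc_coherent(dev, size, phys, GFP_KERNEL);
}

static void free_dma(struct device *dev, size_t size, void *virt,
		     dma_addr_t phys)
{
	dma_free_coherent(dev, size, virt, phys);
}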
@ -6246,14 +6261,11 @@ lpfc_sli_pci_mem_setup(struct lpfc_hba *phba)
}
/* Allocate memory for SLI-2 structures */
phba->slim2p.virt = dma_alloc_coherent(&pdev->dev,
SLI2_SLIM_SIZE,
&phba->slim2p.phys,
GFP_KERNEL);
phba->slim2p.virt = dma_zalloc_coherent(&pdev->dev, SLI2_SLIM_SIZE,
&phba->slim2p.phys, GFP_KERNEL);
if (!phba->slim2p.virt)
goto out_iounmap;
memset(phba->slim2p.virt, 0, SLI2_SLIM_SIZE);
phba->mbox = phba->slim2p.virt + offsetof(struct lpfc_sli2_slim, mbx);
phba->mbox_ext = (phba->slim2p.virt +
offsetof(struct lpfc_sli2_slim, mbx_ext_words));
@ -6618,15 +6630,12 @@ lpfc_create_bootstrap_mbox(struct lpfc_hba *phba)
* plus an alignment restriction of 16 bytes.
*/
bmbx_size = sizeof(struct lpfc_bmbx_create) + (LPFC_ALIGN_16_BYTE - 1);
dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,
bmbx_size,
&dmabuf->phys,
GFP_KERNEL);
dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, bmbx_size,
&dmabuf->phys, GFP_KERNEL);
if (!dmabuf->virt) {
kfree(dmabuf);
return -ENOMEM;
}
memset(dmabuf->virt, 0, bmbx_size);
/*
* Initialize the bootstrap mailbox pointers now so that the register
@ -6710,7 +6719,6 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
struct lpfc_mbx_get_func_cfg *get_func_cfg;
struct lpfc_rsrc_desc_fcfcoe *desc;
char *pdesc_0;
uint32_t desc_count;
int length, i, rc = 0, rc2;
pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@ -6841,7 +6849,6 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
/* search for fc_fcoe resource descriptor */
get_func_cfg = &pmb->u.mqe.un.get_func_cfg;
desc_count = get_func_cfg->func_cfg.rsrc_desc_count;
pdesc_0 = (char *)&get_func_cfg->func_cfg.desc[0];
desc = (struct lpfc_rsrc_desc_fcfcoe *)pdesc_0;
@ -7417,7 +7424,8 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0523 Failed setup of fast-path EQ "
"(%d), rc = 0x%x\n", fcp_eqidx, rc);
"(%d), rc = 0x%x\n", fcp_eqidx,
(uint32_t)rc);
goto out_destroy_hba_eq;
}
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@ -7448,7 +7456,8 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0527 Failed setup of fast-path FCP "
"CQ (%d), rc = 0x%x\n", fcp_cqidx, rc);
"CQ (%d), rc = 0x%x\n", fcp_cqidx,
(uint32_t)rc);
goto out_destroy_fcp_cq;
}
@ -7488,7 +7497,8 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0535 Failed setup of fast-path FCP "
"WQ (%d), rc = 0x%x\n", fcp_wqidx, rc);
"WQ (%d), rc = 0x%x\n", fcp_wqidx,
(uint32_t)rc);
goto out_destroy_fcp_wq;
}
@ -7521,7 +7531,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0529 Failed setup of slow-path mailbox CQ: "
"rc = 0x%x\n", rc);
"rc = 0x%x\n", (uint32_t)rc);
goto out_destroy_fcp_wq;
}
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@ -7541,7 +7551,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0531 Failed setup of slow-path ELS CQ: "
"rc = 0x%x\n", rc);
"rc = 0x%x\n", (uint32_t)rc);
goto out_destroy_mbx_cq;
}
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@ -7585,7 +7595,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0537 Failed setup of slow-path ELS WQ: "
"rc = 0x%x\n", rc);
"rc = 0x%x\n", (uint32_t)rc);
goto out_destroy_mbx_wq;
}
@ -7617,7 +7627,7 @@ lpfc_sli4_queue_setup(struct lpfc_hba *phba)
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"0541 Failed setup of Receive Queue: "
"rc = 0x%x\n", rc);
"rc = 0x%x\n", (uint32_t)rc);
goto out_destroy_fcp_wq;
}
@ -7896,7 +7906,8 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
LPFC_MBOXQ_t *mboxq;
uint32_t rc = 0, if_type;
uint32_t shdr_status, shdr_add_status;
uint32_t rdy_chk, num_resets = 0, reset_again = 0;
uint32_t rdy_chk;
uint32_t port_reset = 0;
union lpfc_sli4_cfg_shdr *shdr;
struct lpfc_register reg_data;
uint16_t devid;
@ -7936,9 +7947,42 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
}
break;
case LPFC_SLI_INTF_IF_TYPE_2:
for (num_resets = 0;
num_resets < MAX_IF_TYPE_2_RESETS;
num_resets++) {
wait:
/*
* Poll the Port Status Register and wait for RDY for
* up to 30 seconds. If the port doesn't respond, treat
* it as an error.
*/
for (rdy_chk = 0; rdy_chk < 3000; rdy_chk++) {
if (lpfc_readl(phba->sli4_hba.u.if_type2.
STATUSregaddr, &reg_data.word0)) {
rc = -ENODEV;
goto out;
}
if (bf_get(lpfc_sliport_status_rdy, &reg_data))
break;
msleep(20);
}
if (!bf_get(lpfc_sliport_status_rdy, &reg_data)) {
phba->work_status[0] = readl(
phba->sli4_hba.u.if_type2.ERR1regaddr);
phba->work_status[1] = readl(
phba->sli4_hba.u.if_type2.ERR2regaddr);
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2890 Port not ready, port status reg "
"0x%x error 1=0x%x, error 2=0x%x\n",
reg_data.word0,
phba->work_status[0],
phba->work_status[1]);
rc = -ENODEV;
goto out;
}
if (!port_reset) {
/*
* Reset the port now
*/
reg_data.word0 = 0;
bf_set(lpfc_sliport_ctrl_end, &reg_data,
LPFC_SLIPORT_LITTLE_ENDIAN);
@ -7949,64 +7993,16 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
/* flush */
pci_read_config_word(phba->pcidev,
PCI_DEVICE_ID, &devid);
/*
* Poll the Port Status Register and wait for RDY for
* up to 10 seconds. If the port doesn't respond, treat
* it as an error. If the port responds with RN, start
* the loop again.
*/
for (rdy_chk = 0; rdy_chk < 1000; rdy_chk++) {
msleep(10);
if (lpfc_readl(phba->sli4_hba.u.if_type2.
STATUSregaddr, &reg_data.word0)) {
rc = -ENODEV;
goto out;
}
if (bf_get(lpfc_sliport_status_rn, &reg_data))
reset_again++;
if (bf_get(lpfc_sliport_status_rdy, &reg_data))
break;
}
/*
* If the port responds to the init request with
* reset needed, delay for a bit and restart the loop.
*/
if (reset_again && (rdy_chk < 1000)) {
msleep(10);
reset_again = 0;
continue;
}
/* Detect any port errors. */
if ((bf_get(lpfc_sliport_status_err, &reg_data)) ||
(rdy_chk >= 1000)) {
phba->work_status[0] = readl(
phba->sli4_hba.u.if_type2.ERR1regaddr);
phba->work_status[1] = readl(
phba->sli4_hba.u.if_type2.ERR2regaddr);
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"2890 Port error detected during port "
"reset(%d): wait_tmo:%d ms, "
"port status reg 0x%x, "
"error 1=0x%x, error 2=0x%x\n",
num_resets, rdy_chk*10,
reg_data.word0,
phba->work_status[0],
phba->work_status[1]);
rc = -ENODEV;
}
/*
* Terminate the outer loop provided the Port indicated
* ready within 10 seconds.
*/
if (rdy_chk < 1000)
break;
port_reset = 1;
msleep(20);
goto wait;
} else if (bf_get(lpfc_sliport_status_rn, &reg_data)) {
rc = -ENODEV;
goto out;
}
/* delay driver action following IF_TYPE_2 function reset */
msleep(100);
break;
case LPFC_SLI_INTF_IF_TYPE_1:
default:
break;
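The rewritten reset path polls the port status register under a hard iteration bound instead of the old retry counter, and issues the reset only once: the wait: label re-runs the poll after the reset is triggered. A generic sketch of such a bounded ready-poll, with illustrative names and limits:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

/* Poll a ready bit with a fixed iteration budget so a dead port
 * cannot wedge the caller forever. */
static int wait_port_ready(void __iomem *status_reg, u32 rdy_bit)
{
	int i;

	for (i = 0; i < 3000; i++) {
		if (readl(status_reg) & rdy_bit)
			return 0;	/* port came ready */
		msleep(20);
	}
	return -ENODEV;			/* bounded timeout: give up */
}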
@ -8014,11 +8010,10 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
out:
/* Catch the not-ready port failure after a port reset. */
if (num_resets >= MAX_IF_TYPE_2_RESETS) {
if (rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3317 HBA not functional: IP Reset Failed "
"after (%d) retries, try: "
"echo fw_reset > board_mode\n", num_resets);
"try: echo fw_reset > board_mode\n");
rc = -ENODEV;
}
@ -8211,9 +8206,9 @@ lpfc_sli4_pci_mem_unset(struct lpfc_hba *phba)
* @phba: pointer to lpfc hba data structure.
*
* This routine is invoked to enable the MSI-X interrupt vectors to device
* with SLI-3 interface specs. The kernel function pci_enable_msix() is
* called to enable the MSI-X vectors. Note that pci_enable_msix(), once
* invoked, enables either all or nothing, depending on the current
* with SLI-3 interface specs. The kernel function pci_enable_msix_exact()
* is called to enable the MSI-X vectors. Note that pci_enable_msix_exact(),
* once invoked, enables either all or nothing, depending on the current
* availability of PCI vector resources. The device driver is responsible
* for calling the individual request_irq() to register each MSI-X vector
* with an interrupt handler, which is done in this function. Note that
@ -8237,8 +8232,8 @@ lpfc_sli_enable_msix(struct lpfc_hba *phba)
phba->msix_entries[i].entry = i;
/* Configure MSI-X capability structure */
rc = pci_enable_msix(phba->pcidev, phba->msix_entries,
ARRAY_SIZE(phba->msix_entries));
rc = pci_enable_msix_exact(phba->pcidev, phba->msix_entries,
LPFC_MSIX_VECTORS);
if (rc) {
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"0420 PCI enable MSI-X failed (%d)\n", rc);
@ -8775,16 +8770,14 @@ out:
* @phba: pointer to lpfc hba data structure.
*
* This routine is invoked to enable the MSI-X interrupt vectors to device
* with SLI-4 interface spec. The kernel function pci_enable_msix() is called
* to enable the MSI-X vectors. Note that pci_enable_msix(), once invoked,
* enables either all or nothing, depending on the current availability of
* PCI vector resources. The device driver is responsible for calling the
* individual request_irq() to register each MSI-X vector with a interrupt
* handler, which is done in this function. Note that later when device is
* unloading, the driver should always call free_irq() on all MSI-X vectors
* it has done request_irq() on before calling pci_disable_msix(). Failure
* to do so results in a BUG_ON() and a device will be left with MSI-X
* enabled and leaks its vectors.
* with SLI-4 interface spec. The kernel function pci_enable_msix_range()
* is called to enable the MSI-X vectors. The device driver is responsible
* for calling the individual request_irq() to register each MSI-X vector
* with an interrupt handler, which is done in this function. Note that
* later when device is unloading, the driver should always call free_irq()
* on all MSI-X vectors it has done request_irq() on before calling
* pci_disable_msix(). Failure to do so results in a BUG_ON() and a device
* will be left with MSI-X enabled and leaks its vectors.
*
* Return codes
* 0 - successful
@ -8805,17 +8798,14 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
phba->sli4_hba.msix_entries[index].entry = index;
vectors++;
}
enable_msix_vectors:
rc = pci_enable_msix(phba->pcidev, phba->sli4_hba.msix_entries,
vectors);
if (rc > 1) {
vectors = rc;
goto enable_msix_vectors;
} else if (rc) {
rc = pci_enable_msix_range(phba->pcidev, phba->sli4_hba.msix_entries,
2, vectors);
if (rc < 0) {
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"0484 PCI enable MSI-X failed (%d)\n", rc);
goto vec_fail_out;
}
vectors = rc;
/* Log MSI-X vector assignment */
for (index = 0; index < vectors; index++)
@ -8828,7 +8818,8 @@ enable_msix_vectors:
/* Assign MSI-X vectors to interrupt handlers */
for (index = 0; index < vectors; index++) {
memset(&phba->sli4_hba.handler_name[index], 0, 16);
sprintf((char *)&phba->sli4_hba.handler_name[index],
snprintf((char *)&phba->sli4_hba.handler_name[index],
LPFC_SLI4_HANDLER_NAME_SZ,
LPFC_DRIVER_HANDLER_NAME"%d", index);
phba->sli4_hba.fcp_eq_hdl[index].idx = index;
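For SLI-4 the shrinking retry loop around pci_enable_msix() collapses into a single pci_enable_msix_range() call: the kernel negotiates anywhere between the minimum (2 here) and the requested maximum, and a positive return value is the vector count actually enabled. A sketch under those semantics:

#include <linux/pci.h>

static int enable_msix_ranged(struct pci_dev *pdev,
			      struct msix_entry *entries, int max_vec)
{
	int i, got;

	for (i = 0; i < max_vec; i++)
		entries[i].entry = i;

	/* Negative errno if even 2 vectors are unavailable; otherwise
	 * the number of vectors (2..max_vec) that were enabled. */
	got = pci_enable_msix_range(pdev, entries, 2, max_vec);
	return got;
}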


@ -1811,12 +1811,12 @@ lpfc_sli4_config(struct lpfc_hba *phba, struct lpfcMboxq *mbox,
* page, this is used as a priori size of SLI4_PAGE_SIZE for
* the later DMA memory free.
*/
viraddr = dma_alloc_coherent(&phba->pcidev->dev, SLI4_PAGE_SIZE,
&phyaddr, GFP_KERNEL);
viraddr = dma_zalloc_coherent(&phba->pcidev->dev,
SLI4_PAGE_SIZE, &phyaddr,
GFP_KERNEL);
/* If the allocation fails, proceed with whatever we have */
if (!viraddr)
break;
memset(viraddr, 0, SLI4_PAGE_SIZE);
mbox->sge_array->addr[pagen] = viraddr;
/* Keep the first page for later sub-header construction */
if (pagen == 0)


@ -1031,6 +1031,8 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
if (!prsp)
goto out;
lp = (uint32_t *) prsp->virt;
sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));


@ -306,7 +306,7 @@ lpfc_send_sdev_queuedepth_change_event(struct lpfc_hba *phba,
* depth for a scsi device. This function sets the queue depth to the new
* value and sends an event out to log the queue depth change.
**/
int
static int
lpfc_change_queue_depth(struct scsi_device *sdev, int qdepth, int reason)
{
struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
@ -380,12 +380,14 @@ lpfc_rampdown_queue_depth(struct lpfc_hba *phba)
{
unsigned long flags;
uint32_t evt_posted;
unsigned long expires;
spin_lock_irqsave(&phba->hbalock, flags);
atomic_inc(&phba->num_rsrc_err);
phba->last_rsrc_error_time = jiffies;
if ((phba->last_ramp_down_time + QUEUE_RAMP_DOWN_INTERVAL) > jiffies) {
expires = phba->last_ramp_down_time + QUEUE_RAMP_DOWN_INTERVAL;
if (time_after(expires, jiffies)) {
spin_unlock_irqrestore(&phba->hbalock, flags);
return;
}
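The queue-ramp-down hunk replaces a raw ">" on jiffies values with time_after(), which stays correct when the counter wraps. A sketch of the wrap-safe idiom:

#include <linux/jiffies.h>
#include <linux/types.h>

/* True once 'interval' has elapsed since 'last', even across a
 * jiffies wraparound; a plain "last + interval > jiffies" is not. */
static bool interval_elapsed(unsigned long last, unsigned long interval)
{
	unsigned long expires = last + interval;

	return !time_after(expires, jiffies);
}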
@ -741,7 +743,7 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
*
* Returns: 0 = failure, non-zero number of successfully posted buffers.
**/
int
static int
lpfc_sli4_post_scsi_sgl_list(struct lpfc_hba *phba,
struct list_head *post_sblist, int sb_count)
{
@ -2965,7 +2967,7 @@ err:
* on the specified data using a CRC algorithm
* using crc_t10dif.
*/
uint16_t
static uint16_t
lpfc_bg_crc(uint8_t *data, int count)
{
uint16_t crc = 0;
@ -2981,7 +2983,7 @@ lpfc_bg_crc(uint8_t *data, int count)
* on the specified data using a CSUM algorithm
* using ip_compute_csum.
*/
uint16_t
static uint16_t
lpfc_bg_csum(uint8_t *data, int count)
{
uint16_t ret;
@ -2994,7 +2996,7 @@ lpfc_bg_csum(uint8_t *data, int count)
* This function examines the protection data to try to determine
* what type of T10-DIF error occurred.
*/
void
static void
lpfc_calc_bg_err(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
{
struct scatterlist *sgpe; /* s/g prot entry */
@ -3464,7 +3466,7 @@ lpfc_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
*/
if ((phba->cfg_fof) && ((struct lpfc_device_data *)
scsi_cmnd->device->hostdata)->oas_enabled)
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_OAS;
lpfc_cmd->cur_iocbq.iocb_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
return 0;
}
@ -3604,6 +3606,14 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
*/
iocb_cmd->un.fcpi.fcpi_parm = fcpdl;
/*
* If the OAS driver feature is enabled and the lun is enabled for
* OAS, set the oas iocb related flags.
*/
if ((phba->cfg_fof) && ((struct lpfc_device_data *)
scsi_cmnd->device->hostdata)->oas_enabled)
lpfc_cmd->cur_iocbq.iocb_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
return 0;
err:
if (lpfc_cmd->seg_cnt)
@ -4874,6 +4884,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
/* ABTS WQE must go to the same WQ as the WQE to be aborted */
abtsiocb->fcp_wqidx = iocb->fcp_wqidx;
abtsiocb->iocb_flag |= LPFC_USE_FCPWQIDX;
if (iocb->iocb_flag & LPFC_IO_FOF)
abtsiocb->iocb_flag |= LPFC_IO_FOF;
if (lpfc_is_link_up(phba))
icmd->ulpCommand = CMD_ABORT_XRI_CN;
@ -5327,7 +5339,13 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd)
if (status == FAILED) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
"0722 Target Reset rport failure: rdata x%p\n", rdata);
return FAILED;
spin_lock_irq(shost->host_lock);
pnode->nlp_flag &= ~NLP_NPR_ADISC;
pnode->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
spin_unlock_irq(shost->host_lock);
lpfc_reset_flush_io_context(vport, tgt_id, lun_id,
LPFC_CTX_TGT);
return FAST_IO_FAIL;
}
scsi_event.event_type = FC_REG_SCSI_EVENT;


@ -187,7 +187,6 @@ lpfc_sli4_mq_put(struct lpfc_queue *q, struct lpfc_mqe *mqe)
{
struct lpfc_mqe *temp_mqe;
struct lpfc_register doorbell;
uint32_t host_index;
/* sanity check on queue memory */
if (unlikely(!q))
@ -202,7 +201,6 @@ lpfc_sli4_mq_put(struct lpfc_queue *q, struct lpfc_mqe *mqe)
q->phba->mbox = (MAILBOX_t *)temp_mqe;
/* Update the host index before invoking device */
host_index = q->host_index;
q->host_index = ((q->host_index + 1) % q->entry_count);
/* Ring Doorbell */
@ -785,42 +783,6 @@ lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
}
}
/**
* lpfc_cleanup_wt_rrqs - Remove all rrq's from the active list.
* @phba: Pointer to HBA context object.
*
* Remove all rrqs from the phba->active_rrq_list and free them by
* calling __lpfc_clr_active_rrq
*
**/
void
lpfc_cleanup_wt_rrqs(struct lpfc_hba *phba)
{
struct lpfc_node_rrq *rrq;
struct lpfc_node_rrq *nextrrq;
unsigned long next_time;
unsigned long iflags;
LIST_HEAD(rrq_list);
if (phba->sli_rev != LPFC_SLI_REV4)
return;
spin_lock_irqsave(&phba->hbalock, iflags);
phba->hba_flag &= ~HBA_RRQ_ACTIVE;
next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov * 2));
list_splice_init(&phba->active_rrq_list, &rrq_list);
spin_unlock_irqrestore(&phba->hbalock, iflags);
list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) {
list_del(&rrq->list);
lpfc_clr_rrq_active(phba, rrq->xritag, rrq);
}
if ((!list_empty(&phba->active_rrq_list)) &&
(!(phba->pport->load_flag & FC_UNLOADING)))
mod_timer(&phba->rrq_tmr, next_time);
}
/**
* lpfc_test_rrq_active - Test RRQ bit in xri_bitmap.
* @phba: Pointer to HBA context object.
@ -937,7 +899,7 @@ out:
* @phba: Pointer to HBA context object.
* @piocb: Pointer to the iocbq.
*
* This function is called with hbalock held. This function
* This function is called with the ring lock held. This function
* gets a new driver sglq object from the sglq list. If the
* list is not empty then it is successful, it returns pointer to the newly
* allocated sglq object else it returns NULL.
@ -1053,10 +1015,12 @@ __lpfc_sli_release_iocbq_s4(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
spin_unlock_irqrestore(
&phba->sli4_hba.abts_sgl_list_lock, iflag);
} else {
spin_lock_irqsave(&pring->ring_lock, iflag);
sglq->state = SGL_FREED;
sglq->ndlp = NULL;
list_add_tail(&sglq->list,
&phba->sli4_hba.lpfc_sgl_list);
spin_unlock_irqrestore(&pring->ring_lock, iflag);
/* Check if TXQ queue needs to be serviced */
if (!list_empty(&pring->txq))
@ -2469,11 +2433,9 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
IOCB_t * irsp;
WORD5 * w5p;
uint32_t Rctl, Type;
uint32_t match;
struct lpfc_iocbq *iocbq;
struct lpfc_dmabuf *dmzbuf;
match = 0;
irsp = &(saveq->iocb);
if (irsp->ulpCommand == CMD_ASYNC_STATUS) {
@ -2899,7 +2861,7 @@ lpfc_sli_rsp_pointers_error(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
void lpfc_poll_eratt(unsigned long ptr)
{
struct lpfc_hba *phba;
uint32_t eratt = 0, rem;
uint32_t eratt = 0;
uint64_t sli_intr, cnt;
phba = (struct lpfc_hba *)ptr;
@ -2914,7 +2876,7 @@ void lpfc_poll_eratt(unsigned long ptr)
cnt = (sli_intr - phba->sli.slistat.sli_prev_intr);
/* 64-bit integer division not supported on 32-bit x86 - use do_div */
rem = do_div(cnt, LPFC_ERATT_POLL_INTERVAL);
do_div(cnt, LPFC_ERATT_POLL_INTERVAL);
phba->sli.slistat.sli_ips = cnt;
phba->sli.slistat.sli_prev_intr = sli_intr;
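The error-attention poller divides a 64-bit interrupt delta by the poll interval; on 32-bit x86 that must go through do_div(), which divides the u64 operand in place and returns the remainder. A sketch of the calling convention:

#include <asm/div64.h>
#include <linux/types.h>

/* do_div(n, base) turns n into the quotient and returns the
 * remainder, avoiding a libgcc 64-bit divide on 32-bit CPUs. */
static u64 interrupts_per_interval(u64 delta, u32 interval_ms)
{
	u32 rem = do_div(delta, interval_ms);	/* delta /= interval_ms */

	(void)rem;	/* remainder discarded, as in the hunk above */
	return delta;	/* quotient */
}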
@ -4864,15 +4826,12 @@ lpfc_sli4_read_rev(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
* mailbox command.
*/
dma_size = *vpd_size;
dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,
dma_size,
&dmabuf->phys,
GFP_KERNEL);
dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, dma_size,
&dmabuf->phys, GFP_KERNEL);
if (!dmabuf->virt) {
kfree(dmabuf);
return -ENOMEM;
}
memset(dmabuf->virt, 0, dma_size);
/*
* The SLI4 implementation of READ_REV conflicts at word1,
@ -5990,9 +5949,6 @@ lpfc_sli4_get_allocated_extnts(struct lpfc_hba *phba, uint16_t type,
curr_blks++;
}
/* Calculate the total requested length of the dma memory. */
req_len = curr_blks * sizeof(uint16_t);
/*
* Calculate the size of an embedded mailbox. The uint32_t
* accounts for extents-specific word.
@ -6101,14 +6057,18 @@ lpfc_sli4_repost_els_sgl_list(struct lpfc_hba *phba)
struct lpfc_sglq *sglq_entry_first = NULL;
int status, total_cnt, post_cnt = 0, num_posted = 0, block_cnt = 0;
int last_xritag = NO_XRI;
struct lpfc_sli_ring *pring;
LIST_HEAD(prep_sgl_list);
LIST_HEAD(blck_sgl_list);
LIST_HEAD(allc_sgl_list);
LIST_HEAD(post_sgl_list);
LIST_HEAD(free_sgl_list);
pring = &phba->sli.ring[LPFC_ELS_RING];
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&phba->sli4_hba.lpfc_sgl_list, &allc_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
total_cnt = phba->sli4_hba.els_xri_cnt;
@ -6210,8 +6170,10 @@ lpfc_sli4_repost_els_sgl_list(struct lpfc_hba *phba)
/* push els sgls posted to the available list */
if (!list_empty(&post_sgl_list)) {
spin_lock_irq(&phba->hbalock);
spin_lock(&pring->ring_lock);
list_splice_init(&post_sgl_list,
&phba->sli4_hba.lpfc_sgl_list);
spin_unlock(&pring->ring_lock);
spin_unlock_irq(&phba->hbalock);
} else {
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
@ -6797,13 +6759,16 @@ void
lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
{
LPFC_MBOXQ_t *pmbox = phba->sli.mbox_active;
MAILBOX_t *mb = &pmbox->u.mb;
MAILBOX_t *mb = NULL;
struct lpfc_sli *psli = &phba->sli;
/* If the mailbox completed, process the completion and return */
if (lpfc_sli4_process_missed_mbox_completions(phba))
return;
if (pmbox != NULL)
mb = &pmbox->u.mb;
/* Check the pmbox pointer first. There is a race condition
* between the mbox timeout handler getting executed in the
* worklist and the mailbox actually completing. When this
@ -8138,7 +8103,7 @@ lpfc_sli4_bpl2sgl(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
*
* Return: index into SLI4 fast-path FCP queue index.
**/
static inline uint32_t
static inline int
lpfc_sli4_scmd_to_wqidx_distr(struct lpfc_hba *phba)
{
struct lpfc_vector_map_info *cpup;
@ -8152,7 +8117,6 @@ lpfc_sli4_scmd_to_wqidx_distr(struct lpfc_hba *phba)
cpup += cpu;
return cpup->channel_id;
}
chann = cpu;
}
chann = atomic_add_return(1, &phba->fcp_qidx);
chann = (chann % phba->cfg_fcp_io_channel);
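When no CPU-affinity mapping pins the work-queue index, the driver falls back to atomic round-robin over its FCP channels: the atomic increment hands every submitter a unique ticket and the modulo folds it into range. A sketch of that distribution:

#include <linux/atomic.h>
#include <linux/types.h>

/* Round-robin channel selection, safe against concurrent callers;
 * unsigned arithmetic keeps the index valid after counter wrap. */
static u32 next_channel(atomic_t *qidx, u32 nr_channels)
{
	u32 ticket = atomic_add_return(1, qidx);

	return ticket % nr_channels;
}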
@ -8784,6 +8748,37 @@ lpfc_sli_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
return 0;
}
int
lpfc_sli_calc_ring(struct lpfc_hba *phba, uint32_t ring_number,
struct lpfc_iocbq *piocb)
{
uint32_t idx;
if (phba->sli_rev == LPFC_SLI_REV4) {
if (piocb->iocb_flag & (LPFC_IO_FCP | LPFC_USE_FCPWQIDX)) {
/*
* fcp_wqidx should already be set up based on what
* completion queue we want to use.
*/
if (!(phba->cfg_fof) ||
(!(piocb->iocb_flag & LPFC_IO_FOF))) {
if (unlikely(!phba->sli4_hba.fcp_wq))
return LPFC_HBA_ERROR;
idx = lpfc_sli4_scmd_to_wqidx_distr(phba);
piocb->fcp_wqidx = idx;
ring_number = MAX_SLI3_CONFIGURED_RINGS + idx;
} else {
if (unlikely(!phba->sli4_hba.oas_wq))
return LPFC_HBA_ERROR;
idx = 0;
piocb->fcp_wqidx = idx;
ring_number = LPFC_FCP_OAS_RING;
}
}
}
return ring_number;
}
/**
* lpfc_sli_issue_iocb - Wrapper function for __lpfc_sli_issue_iocb
* @phba: Pointer to HBA context object.
@ -8809,61 +8804,42 @@ lpfc_sli_issue_iocb(struct lpfc_hba *phba, uint32_t ring_number,
int rc, idx;
if (phba->sli_rev == LPFC_SLI_REV4) {
if (piocb->iocb_flag & LPFC_IO_FCP) {
if (!phba->cfg_fof || (!(piocb->iocb_flag &
LPFC_IO_OAS))) {
if (unlikely(!phba->sli4_hba.fcp_wq))
return IOCB_ERROR;
idx = lpfc_sli4_scmd_to_wqidx_distr(phba);
piocb->fcp_wqidx = idx;
ring_number = MAX_SLI3_CONFIGURED_RINGS + idx;
} else {
if (unlikely(!phba->sli4_hba.oas_wq))
return IOCB_ERROR;
idx = 0;
piocb->fcp_wqidx = 0;
ring_number = LPFC_FCP_OAS_RING;
}
pring = &phba->sli.ring[ring_number];
spin_lock_irqsave(&pring->ring_lock, iflags);
rc = __lpfc_sli_issue_iocb(phba, ring_number, piocb,
flag);
spin_unlock_irqrestore(&pring->ring_lock, iflags);
ring_number = lpfc_sli_calc_ring(phba, ring_number, piocb);
if (unlikely(ring_number == LPFC_HBA_ERROR))
return IOCB_ERROR;
idx = piocb->fcp_wqidx;
if (lpfc_fcp_look_ahead) {
fcp_eq_hdl = &phba->sli4_hba.fcp_eq_hdl[idx];
pring = &phba->sli.ring[ring_number];
spin_lock_irqsave(&pring->ring_lock, iflags);
rc = __lpfc_sli_issue_iocb(phba, ring_number, piocb, flag);
spin_unlock_irqrestore(&pring->ring_lock, iflags);
if (atomic_dec_and_test(&fcp_eq_hdl->
fcp_eq_in_use)) {
if (lpfc_fcp_look_ahead && (piocb->iocb_flag & LPFC_IO_FCP)) {
fcp_eq_hdl = &phba->sli4_hba.fcp_eq_hdl[idx];
/* Get associated EQ with this index */
fpeq = phba->sli4_hba.hba_eq[idx];
if (atomic_dec_and_test(&fcp_eq_hdl->
fcp_eq_in_use)) {
/* Turn off interrupts from this EQ */
lpfc_sli4_eq_clr_intr(fpeq);
/* Get associated EQ with this index */
fpeq = phba->sli4_hba.hba_eq[idx];
/*
* Process all the events on FCP EQ
*/
while ((eqe = lpfc_sli4_eq_get(fpeq))) {
lpfc_sli4_hba_handle_eqe(phba,
eqe, idx);
fpeq->EQ_processed++;
}
/* Turn off interrupts from this EQ */
lpfc_sli4_eq_clr_intr(fpeq);
/* Always clear and re-arm the EQ */
lpfc_sli4_eq_release(fpeq,
LPFC_QUEUE_REARM);
/*
* Process all the events on FCP EQ
*/
while ((eqe = lpfc_sli4_eq_get(fpeq))) {
lpfc_sli4_hba_handle_eqe(phba,
eqe, idx);
fpeq->EQ_processed++;
}
atomic_inc(&fcp_eq_hdl->fcp_eq_in_use);
}
} else {
pring = &phba->sli.ring[ring_number];
spin_lock_irqsave(&pring->ring_lock, iflags);
rc = __lpfc_sli_issue_iocb(phba, ring_number, piocb,
flag);
spin_unlock_irqrestore(&pring->ring_lock, iflags);
/* Always clear and re-arm the EQ */
lpfc_sli4_eq_release(fpeq,
LPFC_QUEUE_REARM);
}
atomic_inc(&fcp_eq_hdl->fcp_eq_in_use);
}
} else {
/* For now, SLI2/3 will still use hbalock */
@ -9746,6 +9722,7 @@ lpfc_sli_abort_iotag_issue(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
struct lpfc_iocbq *abtsiocbp;
IOCB_t *icmd = NULL;
IOCB_t *iabt = NULL;
int ring_number;
int retval;
unsigned long iflags;
@ -9786,6 +9763,8 @@ lpfc_sli_abort_iotag_issue(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
abtsiocbp->fcp_wqidx = cmdiocb->fcp_wqidx;
if (cmdiocb->iocb_flag & LPFC_IO_FCP)
abtsiocbp->iocb_flag |= LPFC_USE_FCPWQIDX;
if (cmdiocb->iocb_flag & LPFC_IO_FOF)
abtsiocbp->iocb_flag |= LPFC_IO_FOF;
if (phba->link_state >= LPFC_LINK_UP)
iabt->ulpCommand = CMD_ABORT_XRI_CN;
@ -9802,6 +9781,11 @@ lpfc_sli_abort_iotag_issue(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
abtsiocbp->iotag);
if (phba->sli_rev == LPFC_SLI_REV4) {
ring_number =
lpfc_sli_calc_ring(phba, pring->ringno, abtsiocbp);
if (unlikely(ring_number == LPFC_HBA_ERROR))
return 0;
pring = &phba->sli.ring[ring_number];
/* Note: both hbalock and ring_lock need to be set here */
spin_lock_irqsave(&pring->ring_lock, iflags);
retval = __lpfc_sli_issue_iocb(phba, pring->ringno,
@ -10099,6 +10083,8 @@ lpfc_sli_abort_iocb(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
abtsiocb->fcp_wqidx = iocbq->fcp_wqidx;
if (iocbq->iocb_flag & LPFC_IO_FCP)
abtsiocb->iocb_flag |= LPFC_USE_FCPWQIDX;
if (iocbq->iocb_flag & LPFC_IO_FOF)
abtsiocb->iocb_flag |= LPFC_IO_FOF;
if (lpfc_is_link_up(phba))
abtsiocb->iocb.ulpCommand = CMD_ABORT_XRI_CN;
@ -10146,7 +10132,9 @@ lpfc_sli_abort_taskmgmt(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
uint16_t tgt_id, uint64_t lun_id, lpfc_ctx_cmd cmd)
{
struct lpfc_hba *phba = vport->phba;
struct lpfc_scsi_buf *lpfc_cmd;
struct lpfc_iocbq *abtsiocbq;
struct lpfc_nodelist *ndlp;
struct lpfc_iocbq *iocbq;
IOCB_t *icmd;
int sum, i, ret_val;
@ -10198,8 +10186,14 @@ lpfc_sli_abort_taskmgmt(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
abtsiocbq->fcp_wqidx = iocbq->fcp_wqidx;
if (iocbq->iocb_flag & LPFC_IO_FCP)
abtsiocbq->iocb_flag |= LPFC_USE_FCPWQIDX;
if (iocbq->iocb_flag & LPFC_IO_FOF)
abtsiocbq->iocb_flag |= LPFC_IO_FOF;
if (lpfc_is_link_up(phba))
lpfc_cmd = container_of(iocbq, struct lpfc_scsi_buf, cur_iocbq);
ndlp = lpfc_cmd->rdata->pnode;
if (lpfc_is_link_up(phba) &&
(ndlp && ndlp->nlp_state == NLP_STE_MAPPED_NODE))
abtsiocbq->iocb.ulpCommand = CMD_ABORT_XRI_CN;
else
abtsiocbq->iocb.ulpCommand = CMD_CLOSE_XRI_CN;
@ -12611,6 +12605,9 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id)
* Process all the event on FCP fast-path EQ
*/
while ((eqe = lpfc_sli4_eq_get(fpeq))) {
if (eqe == NULL)
break;
lpfc_sli4_hba_handle_eqe(phba, eqe, fcp_eqidx);
if (!(++ecount % fpeq->entry_repost))
lpfc_sli4_eq_release(fpeq, LPFC_QUEUE_NOARM);
@ -12760,14 +12757,13 @@ lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t entry_size,
dmabuf = kzalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
if (!dmabuf)
goto out_fail;
dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,
hw_page_size, &dmabuf->phys,
GFP_KERNEL);
dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev,
hw_page_size, &dmabuf->phys,
GFP_KERNEL);
if (!dmabuf->virt) {
kfree(dmabuf);
goto out_fail;
}
memset(dmabuf->virt, 0, hw_page_size);
dmabuf->buffer_tag = x;
list_add_tail(&dmabuf->list, &queue->page_list);
/* initialize queue's entry array */
@ -12845,7 +12841,7 @@ lpfc_dual_chute_pci_bar_map(struct lpfc_hba *phba, uint16_t pci_barset)
* memory this function will return -ENOMEM. If the queue create mailbox command
* fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_modify_fcp_eq_delay(struct lpfc_hba *phba, uint16_t startq)
{
struct lpfc_mbx_modify_eq_delay *eq_delay;
@ -12931,7 +12927,7 @@ lpfc_modify_fcp_eq_delay(struct lpfc_hba *phba, uint16_t startq)
* memory this function will return -ENOMEM. If the queue create mailbox command
* fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
{
struct lpfc_mbx_eq_create *eq_create;
@ -13053,7 +13049,7 @@ lpfc_eq_create(struct lpfc_hba *phba, struct lpfc_queue *eq, uint32_t imax)
* memory this function will return -ENOMEM. If the queue create mailbox command
* fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_cq_create(struct lpfc_hba *phba, struct lpfc_queue *cq,
struct lpfc_queue *eq, uint32_t type, uint32_t subtype)
{
@ -13394,7 +13390,7 @@ out:
* memory this function will return -ENOMEM. If the queue create mailbox command
* fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_wq_create(struct lpfc_hba *phba, struct lpfc_queue *wq,
struct lpfc_queue *cq, uint32_t subtype)
{
@ -13630,7 +13626,7 @@ lpfc_rq_adjust_repost(struct lpfc_hba *phba, struct lpfc_queue *rq, int qno)
* memory this function will return -ENOMEM. If the queue create mailbox command
* fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_rq_create(struct lpfc_hba *phba, struct lpfc_queue *hrq,
struct lpfc_queue *drq, struct lpfc_queue *cq, uint32_t subtype)
{
@ -13895,7 +13891,7 @@ out:
* On success this function will return a zero. If the queue destroy mailbox
* command fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
{
LPFC_MBOXQ_t *mbox;
@ -13951,7 +13947,7 @@ lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
* On success this function will return a zero. If the queue destroy mailbox
* command fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
{
LPFC_MBOXQ_t *mbox;
@ -14005,7 +14001,7 @@ lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
* On success this function will return a zero. If the queue destroy mailbox
* command fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
{
LPFC_MBOXQ_t *mbox;
@ -14059,7 +14055,7 @@ lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
* On success this function will return a zero. If the queue destroy mailbox
* command fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
{
LPFC_MBOXQ_t *mbox;
@ -14112,7 +14108,7 @@ lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
* On success this function will return a zero. If the queue destroy mailbox
* command fails this function will return -ENXIO.
**/
uint32_t
int
lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
struct lpfc_queue *drq)
{
@ -14252,7 +14248,6 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
"2511 POST_SGL mailbox failed with "
"status x%x add_status x%x, mbx status x%x\n",
shdr_status, shdr_add_status, rc);
rc = -ENXIO;
}
return 0;
}
@ -14270,7 +14265,7 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
* A nonzero rpi defined as rpi_base <= rpi < max_rpi if successful
* LPFC_RPI_ALLOC_ERROR if no rpis are available.
**/
uint16_t
static uint16_t
lpfc_sli4_alloc_xri(struct lpfc_hba *phba)
{
unsigned long xri;
@ -14300,7 +14295,7 @@ lpfc_sli4_alloc_xri(struct lpfc_hba *phba)
* This routine is invoked to release an xri to the pool of
* available rpis maintained by the driver.
**/
void
static void
__lpfc_sli4_free_xri(struct lpfc_hba *phba, int xri)
{
if (test_and_clear_bit(xri, phba->sli4_hba.xri_bmask)) {
@ -14720,7 +14715,7 @@ lpfc_fc_frame_to_vport(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr,
* the driver uses this time stamp to indicate if any received sequences have
* timed out.
**/
void
static void
lpfc_update_rcv_time_stamp(struct lpfc_vport *vport)
{
struct lpfc_dmabuf *h_buf;
@ -15019,7 +15014,7 @@ uint16_t
lpfc_sli4_xri_inrange(struct lpfc_hba *phba,
uint16_t xri)
{
int i;
uint16_t i;
for (i = 0; i < phba->sli4_hba.max_cfg_param.max_xri; i++) {
if (xri == phba->sli4_hba.xri_ids[i])
@ -15189,7 +15184,7 @@ lpfc_sli4_seq_abort_rsp(struct lpfc_vport *vport,
* unsolicited sequence has been aborted. After that, it will issue a basic
* accept to accept the abort.
**/
void
static void
lpfc_sli4_handle_unsol_abort(struct lpfc_vport *vport,
struct hbq_dmabuf *dmabuf)
{
@ -15734,7 +15729,7 @@ lpfc_sli4_alloc_rpi(struct lpfc_hba *phba)
* This routine is invoked to release an rpi to the pool of
* available rpis maintained by the driver.
**/
void
static void
__lpfc_sli4_free_rpi(struct lpfc_hba *phba, int rpi)
{
if (test_and_clear_bit(rpi, phba->sli4_hba.rpi_bmask)) {
@ -16172,7 +16167,7 @@ fail_fcf_read:
* returns:
* 1=success 0=failure
**/
int
static int
lpfc_check_next_fcf_pri_level(struct lpfc_hba *phba)
{
uint16_t next_fcf_pri;
@ -16403,7 +16398,7 @@ lpfc_sli4_fcf_rr_index_clear(struct lpfc_hba *phba, uint16_t fcf_index)
* command. If the mailbox command returned failure, it will try to stop the
* FCF rediscover wait timer.
**/
void
static void
lpfc_mbx_cmpl_redisc_fcf_table(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
{
struct lpfc_mbx_redisc_fcf_tbl *redisc_fcf;
@ -16956,7 +16951,7 @@ lpfc_drain_txq(struct lpfc_hba *phba)
char *fail_msg = NULL;
struct lpfc_sglq *sglq;
union lpfc_wqe wqe;
int txq_cnt = 0;
uint32_t txq_cnt = 0;
spin_lock_irqsave(&pring->ring_lock, iflags);
list_for_each_entry(piocbq, &pring->txq, list) {


@ -79,6 +79,7 @@ struct lpfc_iocbq {
#define LPFC_FIP_ELS_ID_SHIFT 14
#define LPFC_IO_OAS 0x10000 /* OAS FCP IO */
#define LPFC_IO_FOF 0x20000 /* FOF FCP IO */
uint32_t drvrTimeout; /* driver timeout in seconds */
uint32_t fcp_wqidx; /* index to FCP work queue */


@ -670,22 +670,22 @@ void lpfc_sli4_hba_reset(struct lpfc_hba *);
struct lpfc_queue *lpfc_sli4_queue_alloc(struct lpfc_hba *, uint32_t,
uint32_t);
void lpfc_sli4_queue_free(struct lpfc_queue *);
uint32_t lpfc_eq_create(struct lpfc_hba *, struct lpfc_queue *, uint32_t);
uint32_t lpfc_modify_fcp_eq_delay(struct lpfc_hba *, uint16_t);
uint32_t lpfc_cq_create(struct lpfc_hba *, struct lpfc_queue *,
int lpfc_eq_create(struct lpfc_hba *, struct lpfc_queue *, uint32_t);
int lpfc_modify_fcp_eq_delay(struct lpfc_hba *, uint16_t);
int lpfc_cq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, uint32_t, uint32_t);
int32_t lpfc_mq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, uint32_t);
uint32_t lpfc_wq_create(struct lpfc_hba *, struct lpfc_queue *,
int lpfc_wq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, uint32_t);
uint32_t lpfc_rq_create(struct lpfc_hba *, struct lpfc_queue *,
int lpfc_rq_create(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *, struct lpfc_queue *, uint32_t);
void lpfc_rq_adjust_repost(struct lpfc_hba *, struct lpfc_queue *, int);
uint32_t lpfc_eq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_cq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_mq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_wq_destroy(struct lpfc_hba *, struct lpfc_queue *);
uint32_t lpfc_rq_destroy(struct lpfc_hba *, struct lpfc_queue *,
int lpfc_eq_destroy(struct lpfc_hba *, struct lpfc_queue *);
int lpfc_cq_destroy(struct lpfc_hba *, struct lpfc_queue *);
int lpfc_mq_destroy(struct lpfc_hba *, struct lpfc_queue *);
int lpfc_wq_destroy(struct lpfc_hba *, struct lpfc_queue *);
int lpfc_rq_destroy(struct lpfc_hba *, struct lpfc_queue *,
struct lpfc_queue *);
int lpfc_sli4_queue_setup(struct lpfc_hba *);
void lpfc_sli4_queue_unset(struct lpfc_hba *);


@ -18,7 +18,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "10.2.8001.0."
#define LPFC_DRIVER_VERSION "10.4.8000.0."
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */


@ -33,9 +33,9 @@
/*
* MegaRAID SAS Driver meta data
*/
#define MEGASAS_VERSION "06.803.01.00-rc1"
#define MEGASAS_RELDATE "Mar. 10, 2014"
#define MEGASAS_EXT_VERSION "Mon. Mar. 10 17:00:00 PDT 2014"
#define MEGASAS_VERSION "06.805.06.00-rc1"
#define MEGASAS_RELDATE "Sep. 4, 2014"
#define MEGASAS_EXT_VERSION "Thu. Sep. 4 17:00:00 PDT 2014"
/*
* Device IDs
@ -105,6 +105,9 @@
#define MFI_STATE_READY 0xB0000000
#define MFI_STATE_OPERATIONAL 0xC0000000
#define MFI_STATE_FAULT 0xF0000000
#define MFI_STATE_FORCE_OCR 0x00000080
#define MFI_STATE_DMADONE 0x00000008
#define MFI_STATE_CRASH_DUMP_DONE 0x00000004
#define MFI_RESET_REQUIRED 0x00000001
#define MFI_RESET_ADAPTER 0x00000002
#define MEGAMFI_FRAME_SIZE 64
@ -191,6 +194,9 @@
#define MR_DCMD_CLUSTER_RESET_LD 0x08010200
#define MR_DCMD_PD_LIST_QUERY 0x02010100
#define MR_DCMD_CTRL_SET_CRASH_DUMP_PARAMS 0x01190100
#define MR_DRIVER_SET_APP_CRASHDUMP_MODE (0xF0010000 | 0x0600)
/*
* Global functions
*/
@ -263,6 +269,25 @@ enum MFI_STAT {
MFI_STAT_INVALID_STATUS = 0xFF
};
/*
* Crash dump related defines
*/
#define MAX_CRASH_DUMP_SIZE 512
#define CRASH_DMA_BUF_SIZE (1024 * 1024)
enum MR_FW_CRASH_DUMP_STATE {
UNAVAILABLE = 0,
AVAILABLE = 1,
COPYING = 2,
COPIED = 3,
COPY_ERROR = 4,
};
enum _MR_CRASH_BUF_STATUS {
MR_CRASH_BUF_TURN_OFF = 0,
MR_CRASH_BUF_TURN_ON = 1,
};
/*
* Number of mailbox bytes in DCMD message frame
*/
@ -365,7 +390,6 @@ enum MR_LD_QUERY_TYPE {
#define MR_EVT_FOREIGN_CFG_IMPORTED 0x00db
#define MR_EVT_LD_OFFLINE 0x00fc
#define MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED 0x0152
#define MAX_LOGICAL_DRIVES 64
enum MR_PD_STATE {
MR_PD_STATE_UNCONFIGURED_GOOD = 0x00,
@ -443,14 +467,14 @@ struct MR_LD_LIST {
u8 state;
u8 reserved[3];
u64 size;
} ldList[MAX_LOGICAL_DRIVES];
} ldList[MAX_LOGICAL_DRIVES_EXT];
} __packed;
struct MR_LD_TARGETID_LIST {
u32 size;
u32 count;
u8 pad[3];
u8 targetId[MAX_LOGICAL_DRIVES];
u8 targetId[MAX_LOGICAL_DRIVES_EXT];
};
@ -916,6 +940,15 @@ struct megasas_ctrl_info {
* HA cluster information
*/
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:26;
u32 premiumFeatureMismatch:1;
u32 ctrlPropIncompatible:1;
u32 fwVersionMismatch:1;
u32 hwIncompatible:1;
u32 peerIsIncompatible:1;
u32 peerIsPresent:1;
#else
u32 peerIsPresent:1;
u32 peerIsIncompatible:1;
u32 hwIncompatible:1;
@ -923,6 +956,7 @@ struct megasas_ctrl_info {
u32 ctrlPropIncompatible:1;
u32 premiumFeatureMismatch:1;
u32 reserved:26;
#endif
} cluster;
char clusterId[16]; /*7D4h */
@ -933,7 +967,27 @@ struct megasas_ctrl_info {
u8 reserved; /*0x7E7*/
} iov;
u8 pad[0x800-0x7E8]; /*0x7E8 pad to 2k */
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:25;
u32 supportCrashDump:1;
u32 supportMaxExtLDs:1;
u32 supportT10RebuildAssist:1;
u32 supportDisableImmediateIO:1;
u32 supportThermalPollInterval:1;
u32 supportPersonalityChange:2;
#else
u32 supportPersonalityChange:2;
u32 supportThermalPollInterval:1;
u32 supportDisableImmediateIO:1;
u32 supportT10RebuildAssist:1;
u32 supportMaxExtLDs:1;
u32 supportCrashDump:1;
u32 reserved:25;
#endif
} adapterOperations3;
u8 pad[0x800-0x7EC];
} __packed;
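C leaves bitfield bit order to the ABI, so every firmware-shared word in this header now carries two mirrored declarations selected by __BIG_ENDIAN_BITFIELD; each flag then lands on the same wire bit regardless of host endianness. A trimmed sketch of the pattern (fields reduced for brevity):

#include <asm/byteorder.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* The same 32-bit firmware word, declared in both bit orders so
 * "supportCrashDump" always occupies the same wire bit. */
struct adapter_ops3_sketch {
#if defined(__BIG_ENDIAN_BITFIELD)
	u32 reserved:30;
	u32 supportCrashDump:1;
	u32 supportMaxExtLDs:1;
#else
	u32 supportMaxExtLDs:1;
	u32 supportCrashDump:1;
	u32 reserved:30;
#endif
} __packed;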
/*
@ -942,13 +996,12 @@ struct megasas_ctrl_info {
* ===============================
*/
#define MEGASAS_MAX_PD_CHANNELS 2
#define MEGASAS_MAX_LD_CHANNELS 1
#define MEGASAS_MAX_LD_CHANNELS 2
#define MEGASAS_MAX_CHANNELS (MEGASAS_MAX_PD_CHANNELS + \
MEGASAS_MAX_LD_CHANNELS)
#define MEGASAS_MAX_DEV_PER_CHANNEL 128
#define MEGASAS_DEFAULT_INIT_ID -1
#define MEGASAS_MAX_LUN 8
#define MEGASAS_MAX_LD 64
#define MEGASAS_DEFAULT_CMD_PER_LUN 256
#define MEGASAS_MAX_PD (MEGASAS_MAX_PD_CHANNELS * \
MEGASAS_MAX_DEV_PER_CHANNEL)
@ -961,6 +1014,14 @@ struct megasas_ctrl_info {
#define MEGASAS_FW_BUSY 1
#define VD_EXT_DEBUG 0
enum MR_MFI_MPT_PTHR_FLAGS {
MFI_MPT_DETACHED = 0,
MFI_LIST_ADDED = 1,
MFI_MPT_ATTACHED = 2,
};
/* Frame Type */
#define IO_FRAME 0
#define PTHRU_FRAME 1
@ -978,7 +1039,7 @@ struct megasas_ctrl_info {
#define MEGASAS_IOCTL_CMD 0
#define MEGASAS_DEFAULT_CMD_TIMEOUT 90
#define MEGASAS_THROTTLE_QUEUE_DEPTH 16
#define MEGASAS_BLOCKED_CMD_TIMEOUT 60
/*
* FW reports the maximum number of commands that it can accept (maximum
* commands that can be outstanding) at any time. The driver must report a
@ -1133,13 +1194,19 @@ union megasas_sgl_frame {
typedef union _MFI_CAPABILITIES {
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:30;
u32 reserved:27;
u32 support_ndrive_r1_lb:1;
u32 support_max_255lds:1;
u32 reserved1:1;
u32 support_additional_msix:1;
u32 support_fp_remote_lun:1;
#else
u32 support_fp_remote_lun:1;
u32 support_additional_msix:1;
u32 reserved:30;
u32 reserved1:1;
u32 support_max_255lds:1;
u32 support_ndrive_r1_lb:1;
u32 reserved:27;
#endif
} mfi_capabilities;
u32 reg;
@ -1559,6 +1626,20 @@ struct megasas_instance {
u32 *reply_queue;
dma_addr_t reply_queue_h;
u32 *crash_dump_buf;
dma_addr_t crash_dump_h;
void *crash_buf[MAX_CRASH_DUMP_SIZE];
u32 crash_buf_pages;
unsigned int fw_crash_buffer_size;
unsigned int fw_crash_state;
unsigned int fw_crash_buffer_offset;
u32 drv_buf_index;
u32 drv_buf_alloc;
u32 crash_dump_fw_support;
u32 crash_dump_drv_support;
u32 crash_dump_app_support;
spinlock_t crashdump_lock;
struct megasas_register_set __iomem *reg_set;
u32 *reply_post_host_index_addr[MR_MAX_MSIX_REG_ARRAY];
struct megasas_pd_list pd_list[MEGASAS_MAX_PD];
@ -1577,7 +1658,7 @@ struct megasas_instance {
struct megasas_cmd **cmd_list;
struct list_head cmd_pool;
/* used to sync fire the cmd to fw */
spinlock_t cmd_pool_lock;
spinlock_t mfi_pool_lock;
/* used to sync fire the cmd to fw */
spinlock_t hba_lock;
/* used to synch producer, consumer ptrs in dpc */
@ -1606,6 +1687,7 @@ struct megasas_instance {
struct megasas_instance_template *instancet;
struct tasklet_struct isr_tasklet;
struct work_struct work_init;
struct work_struct crash_init;
u8 flag;
u8 unload;
@ -1613,6 +1695,14 @@ struct megasas_instance {
u8 issuepend_done;
u8 disableOnlineCtrlReset;
u8 UnevenSpanSupport;
u8 supportmax256vd;
u16 fw_supported_vd_count;
u16 fw_supported_pd_count;
u16 drv_supported_vd_count;
u16 drv_supported_pd_count;
u8 adprecovery;
unsigned long last_time;
u32 mfiStatus;
@ -1622,6 +1712,8 @@ struct megasas_instance {
/* Ptr to hba specific information */
void *ctrl_context;
u32 ctrl_context_pages;
struct megasas_ctrl_info *ctrl_info;
unsigned int msix_vectors;
struct msix_entry msixentry[MEGASAS_MAX_MSIX_QUEUES];
struct megasas_irq_context irq_context[MEGASAS_MAX_MSIX_QUEUES];
@ -1633,8 +1725,6 @@ struct megasas_instance {
struct timer_list sriov_heartbeat_timer;
char skip_heartbeat_timer_del;
u8 requestorId;
u64 initiator_sas_address;
u64 ld_sas_address[64];
char PlasmaFW111;
char mpio;
int throttlequeuedepth;
@ -1661,6 +1751,7 @@ struct MR_LD_VF_AFFILIATION {
/* Plasma 1.11 FW backward compatibility structures */
#define IOV_111_OFFSET 0x7CE
#define MAX_VIRTUAL_FUNCTIONS 8
#define MR_LD_ACCESS_HIDDEN 15
struct IOV_111 {
u8 maxVFsSupported;
@ -1754,6 +1845,11 @@ struct megasas_cmd {
struct list_head list;
struct scsi_cmnd *scmd;
void *mpt_pthr_cmd_blocked;
atomic_t mfi_mpt_pthr;
u8 is_wait_event;
struct megasas_instance *instance;
union {
struct {
@ -1823,12 +1919,33 @@ u8
MR_BuildRaidContext(struct megasas_instance *instance,
struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map, u8 **raidLUN);
u8 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_FW_RAID_MAP_ALL *map);
struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_FW_RAID_MAP_ALL *map);
u16 MR_ArPdGet(u32 ar, u32 arm, struct MR_FW_RAID_MAP_ALL *map);
u16 MR_LdSpanArrayGet(u32 ld, u32 span, struct MR_FW_RAID_MAP_ALL *map);
u16 MR_PdDevHandleGet(u32 pd, struct MR_FW_RAID_MAP_ALL *map);
u16 MR_GetLDTgtId(u32 ld, struct MR_FW_RAID_MAP_ALL *map);
struct MR_DRV_RAID_MAP_ALL *map, u8 **raidLUN);
u8 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_DRV_RAID_MAP_ALL *map);
struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_DRV_RAID_MAP_ALL *map);
u16 MR_ArPdGet(u32 ar, u32 arm, struct MR_DRV_RAID_MAP_ALL *map);
u16 MR_LdSpanArrayGet(u32 ld, u32 span, struct MR_DRV_RAID_MAP_ALL *map);
u16 MR_PdDevHandleGet(u32 pd, struct MR_DRV_RAID_MAP_ALL *map);
u16 MR_GetLDTgtId(u32 ld, struct MR_DRV_RAID_MAP_ALL *map);
u16 get_updated_dev_handle(struct megasas_instance *instance,
struct LD_LOAD_BALANCE_INFO *lbInfo, struct IO_REQUEST_INFO *in_info);
void mr_update_load_balance_params(struct MR_DRV_RAID_MAP_ALL *map,
struct LD_LOAD_BALANCE_INFO *lbInfo);
int megasas_get_ctrl_info(struct megasas_instance *instance,
struct megasas_ctrl_info *ctrl_info);
int megasas_set_crash_dump_params(struct megasas_instance *instance,
u8 crash_buf_state);
void megasas_free_host_crash_buffer(struct megasas_instance *instance);
void megasas_fusion_crash_dump_wq(struct work_struct *work);
void megasas_return_cmd_fusion(struct megasas_instance *instance,
struct megasas_cmd_fusion *cmd);
int megasas_issue_blocked_cmd(struct megasas_instance *instance,
struct megasas_cmd *cmd, int timeout);
void __megasas_return_cmd(struct megasas_instance *instance,
struct megasas_cmd *cmd);
void megasas_return_mfi_mpt_pthr(struct megasas_instance *instance,
struct megasas_cmd *cmd_mfi, struct megasas_cmd_fusion *cmd_fusion);
#endif /*LSI_MEGARAID_SAS_H */


@ -55,6 +55,13 @@
#include "megaraid_sas.h"
#include <asm/div64.h>
#define LB_PENDING_CMDS_DEFAULT 4
static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
module_param(lb_pending_cmds, int, S_IRUGO);
MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding "
"threshold. Valid values are 1-128. Default: 4");
#define ABS_DIFF(a, b) (((a) > (b)) ? ((a) - (b)) : ((b) - (a)))
#define MR_LD_STATE_OPTIMAL 3
#define FALSE 0
@ -66,16 +73,13 @@
#define SPAN_INVALID 0xff
/* Prototypes */
void mr_update_load_balance_params(struct MR_FW_RAID_MAP_ALL *map,
struct LD_LOAD_BALANCE_INFO *lbInfo);
static void mr_update_span_set(struct MR_FW_RAID_MAP_ALL *map,
static void mr_update_span_set(struct MR_DRV_RAID_MAP_ALL *map,
PLD_SPAN_INFO ldSpanInfo);
static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
u64 stripRow, u16 stripRef, struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context, struct MR_FW_RAID_MAP_ALL *map);
struct RAID_CONTEXT *pRAID_Context, struct MR_DRV_RAID_MAP_ALL *map);
static u64 get_row_from_strip(struct megasas_instance *instance, u32 ld,
u64 strip, struct MR_FW_RAID_MAP_ALL *map);
u64 strip, struct MR_DRV_RAID_MAP_ALL *map);
u32 mega_mod64(u64 dividend, u32 divisor)
{
@ -109,94 +113,183 @@ u64 mega_div64_32(uint64_t dividend, uint32_t divisor)
return d;
}
struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_FW_RAID_MAP_ALL *map)
struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_DRV_RAID_MAP_ALL *map)
{
return &map->raidMap.ldSpanMap[ld].ldRaid;
}
static struct MR_SPAN_BLOCK_INFO *MR_LdSpanInfoGet(u32 ld,
struct MR_FW_RAID_MAP_ALL
struct MR_DRV_RAID_MAP_ALL
*map)
{
return &map->raidMap.ldSpanMap[ld].spanBlock[0];
}
static u8 MR_LdDataArmGet(u32 ld, u32 armIdx, struct MR_FW_RAID_MAP_ALL *map)
static u8 MR_LdDataArmGet(u32 ld, u32 armIdx, struct MR_DRV_RAID_MAP_ALL *map)
{
return map->raidMap.ldSpanMap[ld].dataArmMap[armIdx];
}
u16 MR_ArPdGet(u32 ar, u32 arm, struct MR_FW_RAID_MAP_ALL *map)
u16 MR_ArPdGet(u32 ar, u32 arm, struct MR_DRV_RAID_MAP_ALL *map)
{
return le16_to_cpu(map->raidMap.arMapInfo[ar].pd[arm]);
}
u16 MR_LdSpanArrayGet(u32 ld, u32 span, struct MR_FW_RAID_MAP_ALL *map)
u16 MR_LdSpanArrayGet(u32 ld, u32 span, struct MR_DRV_RAID_MAP_ALL *map)
{
return le16_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].span.arrayRef);
}
u16 MR_PdDevHandleGet(u32 pd, struct MR_FW_RAID_MAP_ALL *map)
u16 MR_PdDevHandleGet(u32 pd, struct MR_DRV_RAID_MAP_ALL *map)
{
return map->raidMap.devHndlInfo[pd].curDevHdl;
}
u16 MR_GetLDTgtId(u32 ld, struct MR_FW_RAID_MAP_ALL *map)
u16 MR_GetLDTgtId(u32 ld, struct MR_DRV_RAID_MAP_ALL *map)
{
return le16_to_cpu(map->raidMap.ldSpanMap[ld].ldRaid.targetId);
}
u8 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_FW_RAID_MAP_ALL *map)
u8 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_DRV_RAID_MAP_ALL *map)
{
return map->raidMap.ldTgtIdToLd[ldTgtId];
}
static struct MR_LD_SPAN *MR_LdSpanPtrGet(u32 ld, u32 span,
struct MR_FW_RAID_MAP_ALL *map)
struct MR_DRV_RAID_MAP_ALL *map)
{
return &map->raidMap.ldSpanMap[ld].spanBlock[span].span;
}
/*
* This function populates the driver map using the firmware raid map
*/
void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_FW_RAID_MAP_ALL *fw_map_old = NULL;
struct MR_FW_RAID_MAP *pFwRaidMap = NULL;
int i;
struct MR_DRV_RAID_MAP_ALL *drv_map =
fusion->ld_drv_map[(instance->map_id & 1)];
struct MR_DRV_RAID_MAP *pDrvRaidMap = &drv_map->raidMap;
if (instance->supportmax256vd) {
memcpy(fusion->ld_drv_map[instance->map_id & 1],
fusion->ld_map[instance->map_id & 1],
fusion->current_map_sz);
/* New Raid map will not set totalSize, so keep expected value
* for legacy code in ValidateMapInfo
*/
pDrvRaidMap->totalSize = sizeof(struct MR_FW_RAID_MAP_EXT);
} else {
fw_map_old = (struct MR_FW_RAID_MAP_ALL *)
fusion->ld_map[(instance->map_id & 1)];
pFwRaidMap = &fw_map_old->raidMap;
#if VD_EXT_DEBUG
for (i = 0; i < pFwRaidMap->ldCount; i++) {
dev_dbg(&instance->pdev->dev, "(%d) :Index 0x%x "
"Target Id 0x%x Seq Num 0x%x Size 0/%llx\n",
instance->unique_id, i,
fw_map_old->raidMap.ldSpanMap[i].ldRaid.targetId,
fw_map_old->raidMap.ldSpanMap[i].ldRaid.seqNum,
fw_map_old->raidMap.ldSpanMap[i].ldRaid.size);
}
#endif
memset(drv_map, 0, fusion->drv_map_sz);
pDrvRaidMap->totalSize = pFwRaidMap->totalSize;
pDrvRaidMap->ldCount = pFwRaidMap->ldCount;
pDrvRaidMap->fpPdIoTimeoutSec = pFwRaidMap->fpPdIoTimeoutSec;
for (i = 0; i < MAX_RAIDMAP_LOGICAL_DRIVES + MAX_RAIDMAP_VIEWS; i++)
pDrvRaidMap->ldTgtIdToLd[i] =
(u8)pFwRaidMap->ldTgtIdToLd[i];
for (i = 0; i < pDrvRaidMap->ldCount; i++) {
pDrvRaidMap->ldSpanMap[i] = pFwRaidMap->ldSpanMap[i];
#if VD_EXT_DEBUG
dev_dbg(&instance->pdev->dev,
"pFwRaidMap->ldSpanMap[%d].ldRaid.targetId 0x%x "
"pFwRaidMap->ldSpanMap[%d].ldRaid.seqNum 0x%x "
"size 0x%x\n", i, i,
pFwRaidMap->ldSpanMap[i].ldRaid.targetId,
pFwRaidMap->ldSpanMap[i].ldRaid.seqNum,
(u32)pFwRaidMap->ldSpanMap[i].ldRaid.rowSize);
dev_dbg(&instance->pdev->dev,
"pDrvRaidMap->ldSpanMap[%d].ldRaid.targetId 0x%x "
"pDrvRaidMap->ldSpanMap[%d].ldRaid.seqNum 0x%x "
"size 0x%x\n", i, i,
pDrvRaidMap->ldSpanMap[i].ldRaid.targetId,
pDrvRaidMap->ldSpanMap[i].ldRaid.seqNum,
(u32)pDrvRaidMap->ldSpanMap[i].ldRaid.rowSize);
dev_dbg(&instance->pdev->dev, "Driver raid map all %p "
"raid map %p LD RAID MAP %p/%p\n", drv_map,
pDrvRaidMap, &pFwRaidMap->ldSpanMap[i].ldRaid,
&pDrvRaidMap->ldSpanMap[i].ldRaid);
#endif
}
memcpy(pDrvRaidMap->arMapInfo, pFwRaidMap->arMapInfo,
sizeof(struct MR_ARRAY_INFO) * MAX_RAIDMAP_ARRAYS);
memcpy(pDrvRaidMap->devHndlInfo, pFwRaidMap->devHndlInfo,
sizeof(struct MR_DEV_HANDLE_INFO) *
MAX_RAIDMAP_PHYSICAL_DEVICES);
}
}
/*
* This function validates the map info data provided by the FW
*/
u8 MR_ValidateMapInfo(struct megasas_instance *instance)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_FW_RAID_MAP_ALL *map = fusion->ld_map[(instance->map_id & 1)];
struct LD_LOAD_BALANCE_INFO *lbInfo = fusion->load_balance_info;
PLD_SPAN_INFO ldSpanInfo = fusion->log_to_span;
struct MR_FW_RAID_MAP *pFwRaidMap = &map->raidMap;
struct fusion_context *fusion;
struct MR_DRV_RAID_MAP_ALL *drv_map;
struct MR_DRV_RAID_MAP *pDrvRaidMap;
struct LD_LOAD_BALANCE_INFO *lbInfo;
PLD_SPAN_INFO ldSpanInfo;
struct MR_LD_RAID *raid;
int ldCount, num_lds;
u16 ld;
u32 expected_size;
if (le32_to_cpu(pFwRaidMap->totalSize) !=
(sizeof(struct MR_FW_RAID_MAP) -sizeof(struct MR_LD_SPAN_MAP) +
(sizeof(struct MR_LD_SPAN_MAP) * le32_to_cpu(pFwRaidMap->ldCount)))) {
printk(KERN_ERR "megasas: map info structure size 0x%x is not matching with ld count\n",
(unsigned int)((sizeof(struct MR_FW_RAID_MAP) -
sizeof(struct MR_LD_SPAN_MAP)) +
(sizeof(struct MR_LD_SPAN_MAP) *
le32_to_cpu(pFwRaidMap->ldCount))));
printk(KERN_ERR "megasas: span map %x, pFwRaidMap->totalSize "
": %x\n", (unsigned int)sizeof(struct MR_LD_SPAN_MAP),
le32_to_cpu(pFwRaidMap->totalSize));
MR_PopulateDrvRaidMap(instance);
fusion = instance->ctrl_context;
drv_map = fusion->ld_drv_map[(instance->map_id & 1)];
pDrvRaidMap = &drv_map->raidMap;
lbInfo = fusion->load_balance_info;
ldSpanInfo = fusion->log_to_span;
if (instance->supportmax256vd)
expected_size = sizeof(struct MR_FW_RAID_MAP_EXT);
else
expected_size =
(sizeof(struct MR_FW_RAID_MAP) - sizeof(struct MR_LD_SPAN_MAP) +
(sizeof(struct MR_LD_SPAN_MAP) * le32_to_cpu(pDrvRaidMap->ldCount)));
if (le32_to_cpu(pDrvRaidMap->totalSize) != expected_size) {
dev_err(&instance->pdev->dev, "map info structure size 0x%x is not matching with ld count\n",
(unsigned int) expected_size);
dev_err(&instance->pdev->dev, "megasas: span map %x, pDrvRaidMap->totalSize : %x\n",
(unsigned int)sizeof(struct MR_LD_SPAN_MAP),
le32_to_cpu(pDrvRaidMap->totalSize));
return 0;
}
if (instance->UnevenSpanSupport)
mr_update_span_set(map, ldSpanInfo);
mr_update_span_set(drv_map, ldSpanInfo);
mr_update_load_balance_params(map, lbInfo);
mr_update_load_balance_params(drv_map, lbInfo);
num_lds = le32_to_cpu(map->raidMap.ldCount);
num_lds = le32_to_cpu(drv_map->raidMap.ldCount);
/* Convert RAID capability values to CPU byte order */
for (ldCount = 0; ldCount < num_lds; ldCount++) {
ld = MR_TargetIdToLdGet(ldCount, map);
raid = MR_LdRaidGet(ld, map);
ld = MR_TargetIdToLdGet(ldCount, drv_map);
raid = MR_LdRaidGet(ld, drv_map);
le32_to_cpus((u32 *)&raid->capability);
}
@ -204,7 +297,7 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance)
}
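
/*
 * A minimal userspace sketch of the expected-size check above: the
 * legacy map embeds one MR_LD_SPAN_MAP, so the expected total is the
 * base size minus one span map plus one span map per logical drive.
 * The struct sizes here are stand-ins, not the real driver values.
 */
#include <stdio.h>

static unsigned int expected_legacy_map_size(unsigned int fw_raid_map_sz,
		unsigned int ld_span_map_sz, unsigned int ld_count)
{
	return fw_raid_map_sz - ld_span_map_sz + ld_span_map_sz * ld_count;
}

int main(void)
{
	/* hypothetical sizes, for illustration only */
	printf("expected size for 4 LDs: %u\n",
	       expected_legacy_map_size(1024, 64, 4));
	return 0;
}
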
u32 MR_GetSpanBlock(u32 ld, u64 row, u64 *span_blk,
struct MR_FW_RAID_MAP_ALL *map)
struct MR_DRV_RAID_MAP_ALL *map)
{
struct MR_SPAN_BLOCK_INFO *pSpanBlock = MR_LdSpanInfoGet(ld, map);
struct MR_QUAD_ELEMENT *quad;
@ -246,7 +339,8 @@ u32 MR_GetSpanBlock(u32 ld, u64 row, u64 *span_blk,
* ldSpanInfo - ldSpanInfo per HBA instance
*/
#if SPAN_DEBUG
static int getSpanInfo(struct MR_FW_RAID_MAP_ALL *map, PLD_SPAN_INFO ldSpanInfo)
static int getSpanInfo(struct MR_DRV_RAID_MAP_ALL *map,
PLD_SPAN_INFO ldSpanInfo)
{
u8 span;
@ -257,9 +351,9 @@ static int getSpanInfo(struct MR_FW_RAID_MAP_ALL *map, PLD_SPAN_INFO ldSpanInfo)
int ldCount;
u16 ld;
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES; ldCount++) {
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) {
ld = MR_TargetIdToLdGet(ldCount, map);
if (ld >= MAX_LOGICAL_DRIVES)
if (ld >= MAX_LOGICAL_DRIVES_EXT)
continue;
raid = MR_LdRaidGet(ld, map);
dev_dbg(&instance->pdev->dev, "LD %x: span_depth=%x\n",
@ -339,7 +433,7 @@ static int getSpanInfo(struct MR_FW_RAID_MAP_ALL *map, PLD_SPAN_INFO ldSpanInfo)
*/
u32 mr_spanset_get_span_block(struct megasas_instance *instance,
u32 ld, u64 row, u64 *span_blk, struct MR_FW_RAID_MAP_ALL *map)
u32 ld, u64 row, u64 *span_blk, struct MR_DRV_RAID_MAP_ALL *map)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
@ -402,7 +496,7 @@ u32 mr_spanset_get_span_block(struct megasas_instance *instance,
*/
static u64 get_row_from_strip(struct megasas_instance *instance,
u32 ld, u64 strip, struct MR_FW_RAID_MAP_ALL *map)
u32 ld, u64 strip, struct MR_DRV_RAID_MAP_ALL *map)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
@ -471,7 +565,7 @@ static u64 get_row_from_strip(struct megasas_instance *instance,
*/
static u64 get_strip_from_row(struct megasas_instance *instance,
u32 ld, u64 row, struct MR_FW_RAID_MAP_ALL *map)
u32 ld, u64 row, struct MR_DRV_RAID_MAP_ALL *map)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
@ -532,7 +626,7 @@ static u64 get_strip_from_row(struct megasas_instance *instance,
*/
static u32 get_arm_from_strip(struct megasas_instance *instance,
u32 ld, u64 strip, struct MR_FW_RAID_MAP_ALL *map)
u32 ld, u64 strip, struct MR_DRV_RAID_MAP_ALL *map)
{
struct fusion_context *fusion = instance->ctrl_context;
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
@ -580,7 +674,7 @@ static u32 get_arm_from_strip(struct megasas_instance *instance,
/* This Function will return Phys arm */
u8 get_arm(struct megasas_instance *instance, u32 ld, u8 span, u64 stripe,
struct MR_FW_RAID_MAP_ALL *map)
struct MR_DRV_RAID_MAP_ALL *map)
{
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
/* Need to check correct default value */
@ -624,7 +718,7 @@ u8 get_arm(struct megasas_instance *instance, u32 ld, u8 span, u64 stripe,
static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
u64 stripRow, u16 stripRef, struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map)
struct MR_DRV_RAID_MAP_ALL *map)
{
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
u32 pd, arRef;
@ -682,6 +776,7 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
*pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk);
pRAID_Context->spanArm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) |
physArm;
io_info->span_arm = pRAID_Context->spanArm;
return retval;
}
@ -705,7 +800,7 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
u16 stripRef, struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map)
struct MR_DRV_RAID_MAP_ALL *map)
{
struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
u32 pd, arRef;
@ -778,6 +873,7 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
*pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk);
pRAID_Context->spanArm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) |
physArm;
io_info->span_arm = pRAID_Context->spanArm;
return retval;
}
@ -794,7 +890,7 @@ u8
MR_BuildRaidContext(struct megasas_instance *instance,
struct IO_REQUEST_INFO *io_info,
struct RAID_CONTEXT *pRAID_Context,
struct MR_FW_RAID_MAP_ALL *map, u8 **raidLUN)
struct MR_DRV_RAID_MAP_ALL *map, u8 **raidLUN)
{
struct MR_LD_RAID *raid;
u32 ld, stripSize, stripe_mask;
@ -1043,8 +1139,8 @@ MR_BuildRaidContext(struct megasas_instance *instance,
* ldSpanInfo - ldSpanInfo per HBA instance
*
*/
void mr_update_span_set(struct MR_FW_RAID_MAP_ALL *map,
PLD_SPAN_INFO ldSpanInfo)
void mr_update_span_set(struct MR_DRV_RAID_MAP_ALL *map,
PLD_SPAN_INFO ldSpanInfo)
{
u8 span, count;
u32 element, span_row_width;
@ -1056,9 +1152,9 @@ void mr_update_span_set(struct MR_FW_RAID_MAP_ALL *map,
u16 ld;
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES; ldCount++) {
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) {
ld = MR_TargetIdToLdGet(ldCount, map);
if (ld >= MAX_LOGICAL_DRIVES)
if (ld >= MAX_LOGICAL_DRIVES_EXT)
continue;
raid = MR_LdRaidGet(ld, map);
for (element = 0; element < MAX_QUAD_DEPTH; element++) {
@ -1152,90 +1248,105 @@ void mr_update_span_set(struct MR_FW_RAID_MAP_ALL *map,
}
void
mr_update_load_balance_params(struct MR_FW_RAID_MAP_ALL *map,
struct LD_LOAD_BALANCE_INFO *lbInfo)
void mr_update_load_balance_params(struct MR_DRV_RAID_MAP_ALL *drv_map,
struct LD_LOAD_BALANCE_INFO *lbInfo)
{
int ldCount;
u16 ld;
struct MR_LD_RAID *raid;
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES; ldCount++) {
ld = MR_TargetIdToLdGet(ldCount, map);
if (ld >= MAX_LOGICAL_DRIVES) {
if (lb_pending_cmds > 128 || lb_pending_cmds < 1)
lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) {
ld = MR_TargetIdToLdGet(ldCount, drv_map);
if (ld >= MAX_LOGICAL_DRIVES_EXT) {
lbInfo[ldCount].loadBalanceFlag = 0;
continue;
}
raid = MR_LdRaidGet(ld, map);
/* Two drive Optimal RAID 1 */
if ((raid->level == 1) && (raid->rowSize == 2) &&
(raid->spanDepth == 1) && raid->ldState ==
MR_LD_STATE_OPTIMAL) {
u32 pd, arRef;
lbInfo[ldCount].loadBalanceFlag = 1;
/* Get the array on which this span is present */
arRef = MR_LdSpanArrayGet(ld, 0, map);
/* Get the Pd */
pd = MR_ArPdGet(arRef, 0, map);
/* Get dev handle from Pd */
lbInfo[ldCount].raid1DevHandle[0] =
MR_PdDevHandleGet(pd, map);
/* Get the Pd */
pd = MR_ArPdGet(arRef, 1, map);
/* Get the dev handle from Pd */
lbInfo[ldCount].raid1DevHandle[1] =
MR_PdDevHandleGet(pd, map);
} else
raid = MR_LdRaidGet(ld, drv_map);
if ((raid->level != 1) ||
(raid->ldState != MR_LD_STATE_OPTIMAL)) {
lbInfo[ldCount].loadBalanceFlag = 0;
continue;
}
lbInfo[ldCount].loadBalanceFlag = 1;
}
}
u8 megasas_get_best_arm(struct LD_LOAD_BALANCE_INFO *lbInfo, u8 arm, u64 block,
u32 count)
u8 megasas_get_best_arm_pd(struct megasas_instance *instance,
struct LD_LOAD_BALANCE_INFO *lbInfo, struct IO_REQUEST_INFO *io_info)
{
u16 pend0, pend1;
struct fusion_context *fusion;
struct MR_LD_RAID *raid;
struct MR_DRV_RAID_MAP_ALL *drv_map;
u16 pend0, pend1, ld;
u64 diff0, diff1;
u8 bestArm;
u8 bestArm, pd0, pd1, span, arm;
u32 arRef, span_row_size;
u64 block = io_info->ldStartBlock;
u32 count = io_info->numBlocks;
span = ((io_info->span_arm & RAID_CTX_SPANARM_SPAN_MASK)
>> RAID_CTX_SPANARM_SPAN_SHIFT);
arm = (io_info->span_arm & RAID_CTX_SPANARM_ARM_MASK);
fusion = instance->ctrl_context;
drv_map = fusion->ld_drv_map[(instance->map_id & 1)];
ld = MR_TargetIdToLdGet(io_info->ldTgtId, drv_map);
raid = MR_LdRaidGet(ld, drv_map);
span_row_size = instance->UnevenSpanSupport ?
SPAN_ROW_SIZE(drv_map, ld, span) : raid->rowSize;
arRef = MR_LdSpanArrayGet(ld, span, drv_map);
pd0 = MR_ArPdGet(arRef, arm, drv_map);
pd1 = MR_ArPdGet(arRef, (arm + 1) >= span_row_size ?
(arm + 1 - span_row_size) : arm + 1, drv_map);
/* get the pending cmds for the data and mirror arms */
pend0 = atomic_read(&lbInfo->scsi_pending_cmds[0]);
pend1 = atomic_read(&lbInfo->scsi_pending_cmds[1]);
pend0 = atomic_read(&lbInfo->scsi_pending_cmds[pd0]);
pend1 = atomic_read(&lbInfo->scsi_pending_cmds[pd1]);
/* Determine the disk whose head is nearer to the req. block */
diff0 = ABS_DIFF(block, lbInfo->last_accessed_block[0]);
diff1 = ABS_DIFF(block, lbInfo->last_accessed_block[1]);
bestArm = (diff0 <= diff1 ? 0 : 1);
diff0 = ABS_DIFF(block, lbInfo->last_accessed_block[pd0]);
diff1 = ABS_DIFF(block, lbInfo->last_accessed_block[pd1]);
bestArm = (diff0 <= diff1 ? arm : arm ^ 1);
/*Make balance count from 16 to 4 to keep driver in sync with Firmware*/
if ((bestArm == arm && pend0 > pend1 + 4) ||
(bestArm != arm && pend1 > pend0 + 4))
if ((bestArm == arm && pend0 > pend1 + lb_pending_cmds) ||
(bestArm != arm && pend1 > pend0 + lb_pending_cmds))
bestArm ^= 1;
/* Update the last accessed block on the correct pd */
lbInfo->last_accessed_block[bestArm] = block + count - 1;
return bestArm;
io_info->pd_after_lb = (bestArm == arm) ? pd0 : pd1;
lbInfo->last_accessed_block[io_info->pd_after_lb] = block + count - 1;
io_info->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | bestArm;
#if SPAN_DEBUG
if (arm != bestArm)
dev_dbg(&instance->pdev->dev, "LSI Debug R1 Load balance "
"occur - span 0x%x arm 0x%x bestArm 0x%x "
"io_info->span_arm 0x%x\n",
span, arm, bestArm, io_info->span_arm);
#endif
return io_info->pd_after_lb;
}
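
/*
 * A standalone sketch of the RAID-1 arm selection heuristic used in
 * megasas_get_best_arm_pd() above: prefer the arm whose head is
 * nearer to the requested block, but fall back to the other arm when
 * the preferred one has more than lb_pending_cmds extra outstanding
 * commands.  Inputs are plain values here; the driver reads them from
 * lbInfo and the per-PD pending counters.
 */
#include <stdio.h>

#define ABS_DIFF(a, b) (((a) > (b)) ? ((a) - (b)) : ((b) - (a)))

static int pick_best_arm(unsigned long long block,
		unsigned long long last0, unsigned long long last1,
		unsigned int pend0, unsigned int pend1,
		unsigned int lb_pending_cmds)
{
	int best = (ABS_DIFF(block, last0) <= ABS_DIFF(block, last1)) ? 0 : 1;

	if ((best == 0 && pend0 > pend1 + lb_pending_cmds) ||
	    (best == 1 && pend1 > pend0 + lb_pending_cmds))
		best ^= 1;	/* seek proximity loses to queue depth */
	return best;
}

int main(void)
{
	/* arm 0 is nearer but heavily loaded: expect arm 1 */
	printf("best arm: %d\n", pick_best_arm(100, 90, 500, 10, 2, 4));
	return 0;
}
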
u16 get_updated_dev_handle(struct LD_LOAD_BALANCE_INFO *lbInfo,
struct IO_REQUEST_INFO *io_info)
u16 get_updated_dev_handle(struct megasas_instance *instance,
struct LD_LOAD_BALANCE_INFO *lbInfo, struct IO_REQUEST_INFO *io_info)
{
u8 arm, old_arm;
u8 arm_pd;
u16 devHandle;
struct fusion_context *fusion;
struct MR_DRV_RAID_MAP_ALL *drv_map;
old_arm = lbInfo->raid1DevHandle[0] == io_info->devHandle ? 0 : 1;
/* get best new arm */
arm = megasas_get_best_arm(lbInfo, old_arm, io_info->ldStartBlock,
io_info->numBlocks);
devHandle = lbInfo->raid1DevHandle[arm];
atomic_inc(&lbInfo->scsi_pending_cmds[arm]);
fusion = instance->ctrl_context;
drv_map = fusion->ld_drv_map[(instance->map_id & 1)];
/* get best new arm (PD ID) */
arm_pd = megasas_get_best_arm_pd(instance, lbInfo, io_info);
devHandle = MR_PdDevHandleGet(arm_pd, drv_map);
atomic_inc(&lbInfo->scsi_pending_cmds[arm_pd]);
return devHandle;
}


@ -50,6 +50,7 @@
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_dbg.h>
#include "megaraid_sas_fusion.h"
#include "megaraid_sas.h"
@ -76,8 +77,6 @@ megasas_issue_polled(struct megasas_instance *instance,
void
megasas_check_and_restore_queue_depth(struct megasas_instance *instance);
u16 get_updated_dev_handle(struct LD_LOAD_BALANCE_INFO *lbInfo,
struct IO_REQUEST_INFO *in_info);
int megasas_transition_to_ready(struct megasas_instance *instance, int ocr);
void megaraid_sas_kill_hba(struct megasas_instance *instance);
@ -91,6 +90,8 @@ void megasas_start_timer(struct megasas_instance *instance,
extern struct megasas_mgmt_info megasas_mgmt_info;
extern int resetwaittime;
/**
* megasas_enable_intr_fusion - Enables interrupts
* @regs: MFI register set
@ -163,7 +164,7 @@ struct megasas_cmd_fusion *megasas_get_cmd_fusion(struct megasas_instance
(struct fusion_context *)instance->ctrl_context;
struct megasas_cmd_fusion *cmd = NULL;
spin_lock_irqsave(&fusion->cmd_pool_lock, flags);
spin_lock_irqsave(&fusion->mpt_pool_lock, flags);
if (!list_empty(&fusion->cmd_pool)) {
cmd = list_entry((&fusion->cmd_pool)->next,
@ -173,7 +174,7 @@ struct megasas_cmd_fusion *megasas_get_cmd_fusion(struct megasas_instance
printk(KERN_ERR "megasas: Command pool (fusion) empty!\n");
}
spin_unlock_irqrestore(&fusion->cmd_pool_lock, flags);
spin_unlock_irqrestore(&fusion->mpt_pool_lock, flags);
return cmd;
}
@ -182,21 +183,47 @@ struct megasas_cmd_fusion *megasas_get_cmd_fusion(struct megasas_instance
* @instance: Adapter soft state
* @cmd: Command packet to be returned to free command pool
*/
static inline void
megasas_return_cmd_fusion(struct megasas_instance *instance,
struct megasas_cmd_fusion *cmd)
inline void megasas_return_cmd_fusion(struct megasas_instance *instance,
struct megasas_cmd_fusion *cmd)
{
unsigned long flags;
struct fusion_context *fusion =
(struct fusion_context *)instance->ctrl_context;
spin_lock_irqsave(&fusion->cmd_pool_lock, flags);
spin_lock_irqsave(&fusion->mpt_pool_lock, flags);
cmd->scmd = NULL;
cmd->sync_cmd_idx = (u32)ULONG_MAX;
list_add_tail(&cmd->list, &fusion->cmd_pool);
list_add(&cmd->list, (&fusion->cmd_pool)->next);
spin_unlock_irqrestore(&fusion->cmd_pool_lock, flags);
spin_unlock_irqrestore(&fusion->mpt_pool_lock, flags);
}
/**
* megasas_return_mfi_mpt_pthr - Return an MFI and an MPT command to the free command pool
* @instance: Adapter soft state
* @cmd_mfi: MFI Command packet to be returned to free command pool
* @cmd_fusion: MPT Command packet to be returned to free command pool
*/
inline void megasas_return_mfi_mpt_pthr(struct megasas_instance *instance,
struct megasas_cmd *cmd_mfi,
struct megasas_cmd_fusion *cmd_fusion)
{
unsigned long flags;
/*
* TODO: optimize this code to use only one lock instead of the
* two used currently: mpt_pool_lock is acquired inside
* mfi_pool_lock.
*/
spin_lock_irqsave(&instance->mfi_pool_lock, flags);
megasas_return_cmd_fusion(instance, cmd_fusion);
if (atomic_read(&cmd_mfi->mfi_mpt_pthr) != MFI_MPT_ATTACHED)
dev_err(&instance->pdev->dev, "Possible bug from %s %d\n",
__func__, __LINE__);
atomic_set(&cmd_mfi->mfi_mpt_pthr, MFI_MPT_DETACHED);
__megasas_return_cmd(instance, cmd_mfi);
spin_unlock_irqrestore(&instance->mfi_pool_lock, flags);
}
/**
@ -562,9 +589,11 @@ wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd,
{
int i;
struct megasas_header *frame_hdr = &cmd->frame->hdr;
struct fusion_context *fusion;
u32 msecs = seconds * 1000;
fusion = instance->ctrl_context;
/*
* Wait for cmd_status to change
*/
@ -573,8 +602,12 @@ wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd,
msleep(20);
}
if (frame_hdr->cmd_status == 0xff)
if (frame_hdr->cmd_status == 0xff) {
if (fusion)
megasas_return_mfi_mpt_pthr(instance, cmd,
cmd->mpt_pthr_cmd_blocked);
return -ETIME;
}
return 0;
}
@ -650,6 +683,10 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
/* driver supports HA / Remote LUN over Fast Path interface */
init_frame->driver_operations.mfi_capabilities.support_fp_remote_lun
= 1;
init_frame->driver_operations.mfi_capabilities.support_max_255lds
= 1;
init_frame->driver_operations.mfi_capabilities.support_ndrive_r1_lb
= 1;
/* Convert capability to LE32 */
cpu_to_le32s((u32 *)&init_frame->driver_operations.mfi_capabilities);
@ -709,6 +746,13 @@ fail_get_cmd:
* Issues an internal command (DCMD) to get the FW's LD map
* structure (MR_DCMD_LD_MAP_GET_INFO).
* dcmd.mbox value setting for MR_DCMD_LD_MAP_GET_INFO
* dcmd.mbox.b[0] - number of LDs being sync'd
* dcmd.mbox.b[1] - 0 - complete command immediately.
* - 1 - pend till config change
* dcmd.mbox.b[2] - 0 - supports max 64 lds and uses legacy MR_FW_RAID_MAP
* - 1 - supports max MAX_LOGICAL_DRIVES_EXT lds and
* uses extended struct MR_FW_RAID_MAP_EXT
*/
static int
megasas_get_ld_map_info(struct megasas_instance *instance)
@ -716,7 +760,7 @@ megasas_get_ld_map_info(struct megasas_instance *instance)
int ret = 0;
struct megasas_cmd *cmd;
struct megasas_dcmd_frame *dcmd;
struct MR_FW_RAID_MAP_ALL *ci;
void *ci;
dma_addr_t ci_h = 0;
u32 size_map_info;
struct fusion_context *fusion;
@ -737,10 +781,9 @@ megasas_get_ld_map_info(struct megasas_instance *instance)
dcmd = &cmd->frame->dcmd;
size_map_info = sizeof(struct MR_FW_RAID_MAP) +
(sizeof(struct MR_LD_SPAN_MAP) *(MAX_LOGICAL_DRIVES - 1));
size_map_info = fusion->current_map_sz;
ci = fusion->ld_map[(instance->map_id & 1)];
ci = (void *) fusion->ld_map[(instance->map_id & 1)];
ci_h = fusion->ld_map_phys[(instance->map_id & 1)];
if (!ci) {
@ -749,9 +792,13 @@ megasas_get_ld_map_info(struct megasas_instance *instance)
return -ENOMEM;
}
memset(ci, 0, sizeof(*ci));
memset(ci, 0, fusion->max_map_sz);
memset(dcmd->mbox.b, 0, MFI_MBOX_SIZE);
#if VD_EXT_DEBUG
dev_dbg(&instance->pdev->dev,
"%s sending MR_DCMD_LD_MAP_GET_INFO with size %d\n",
__func__, cpu_to_le32(size_map_info));
#endif
dcmd->cmd = MFI_CMD_DCMD;
dcmd->cmd_status = 0xFF;
dcmd->sge_count = 1;
@ -763,14 +810,17 @@ megasas_get_ld_map_info(struct megasas_instance *instance)
dcmd->sgl.sge32[0].phys_addr = cpu_to_le32(ci_h);
dcmd->sgl.sge32[0].length = cpu_to_le32(size_map_info);
if (!megasas_issue_polled(instance, cmd))
ret = 0;
else {
printk(KERN_ERR "megasas: Get LD Map Info Failed\n");
ret = -1;
}
if (instance->ctrl_context && !instance->mask_interrupts)
ret = megasas_issue_blocked_cmd(instance, cmd,
MEGASAS_BLOCKED_CMD_TIMEOUT);
else
ret = megasas_issue_polled(instance, cmd);
megasas_return_cmd(instance, cmd);
if (instance->ctrl_context && cmd->mpt_pthr_cmd_blocked)
megasas_return_mfi_mpt_pthr(instance, cmd,
cmd->mpt_pthr_cmd_blocked);
else
megasas_return_cmd(instance, cmd);
return ret;
}
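
/*
 * A hedged sketch of filling the MR_DCMD_LD_MAP_GET_INFO mailbox
 * bytes documented above.  MFI_MBOX_SIZE and the helper name are
 * stand-ins for illustration; the driver fills dcmd->mbox.b directly.
 */
#include <stdio.h>
#include <string.h>

#define MFI_MBOX_SIZE 12	/* stand-in value */

static void fill_map_get_info_mbox(unsigned char *mbox,
		unsigned char num_lds, int pend_till_config_change,
		int use_extended_map)
{
	memset(mbox, 0, MFI_MBOX_SIZE);
	mbox[0] = num_lds;			   /* LDs being sync'd */
	mbox[1] = pend_till_config_change ? 1 : 0; /* 0: complete now */
	mbox[2] = use_extended_map ? 1 : 0;	   /* 1: MR_FW_RAID_MAP_EXT */
}

int main(void)
{
	unsigned char mbox[MFI_MBOX_SIZE];

	fill_map_get_info_mbox(mbox, 4, 1, 1);
	printf("mbox: %u %u %u\n", mbox[0], mbox[1], mbox[2]);
	return 0;
}
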
@ -807,7 +857,7 @@ megasas_sync_map_info(struct megasas_instance *instance)
u32 size_sync_info, num_lds;
struct fusion_context *fusion;
struct MR_LD_TARGET_SYNC *ci = NULL;
struct MR_FW_RAID_MAP_ALL *map;
struct MR_DRV_RAID_MAP_ALL *map;
struct MR_LD_RAID *raid;
struct MR_LD_TARGET_SYNC *ld_sync;
dma_addr_t ci_h = 0;
@ -828,7 +878,7 @@ megasas_sync_map_info(struct megasas_instance *instance)
return 1;
}
map = fusion->ld_map[instance->map_id & 1];
map = fusion->ld_drv_map[instance->map_id & 1];
num_lds = le32_to_cpu(map->raidMap.ldCount);
@ -840,7 +890,7 @@ megasas_sync_map_info(struct megasas_instance *instance)
ci = (struct MR_LD_TARGET_SYNC *)
fusion->ld_map[(instance->map_id - 1) & 1];
memset(ci, 0, sizeof(struct MR_FW_RAID_MAP_ALL));
memset(ci, 0, fusion->max_map_sz);
ci_h = fusion->ld_map_phys[(instance->map_id - 1) & 1];
@ -852,8 +902,7 @@ megasas_sync_map_info(struct megasas_instance *instance)
ld_sync->seqNum = raid->seqNum;
}
size_map_info = sizeof(struct MR_FW_RAID_MAP) +
(sizeof(struct MR_LD_SPAN_MAP) *(MAX_LOGICAL_DRIVES - 1));
size_map_info = fusion->current_map_sz;
dcmd->cmd = MFI_CMD_DCMD;
dcmd->cmd_status = 0xFF;
@ -971,7 +1020,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
max_cmd = instance->max_fw_cmds;
fusion->reply_q_depth = ((max_cmd + 1 + 15)/16)*16;
fusion->reply_q_depth = 2 * (((max_cmd + 1 + 15)/16)*16);
fusion->request_alloc_sz =
sizeof(union MEGASAS_REQUEST_DESCRIPTOR_UNION) *max_cmd;
@ -988,8 +1037,8 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
fusion->max_sge_in_chain =
MEGASAS_MAX_SZ_CHAIN_FRAME / sizeof(union MPI2_SGE_IO_UNION);
instance->max_num_sge = fusion->max_sge_in_main_msg +
fusion->max_sge_in_chain - 2;
instance->max_num_sge = rounddown_pow_of_two(
fusion->max_sge_in_main_msg + fusion->max_sge_in_chain - 2);
/* Used for pass thru MFI frame (DCMD) */
fusion->chain_offset_mfi_pthru =
@ -1016,17 +1065,75 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
goto fail_ioc_init;
megasas_display_intel_branding(instance);
if (megasas_get_ctrl_info(instance, instance->ctrl_info)) {
dev_err(&instance->pdev->dev,
"Could not get controller info. Fail from %s %d\n",
__func__, __LINE__);
goto fail_ioc_init;
}
instance->supportmax256vd =
instance->ctrl_info->adapterOperations3.supportMaxExtLDs;
/* Additional check to handle a future FW enhancement */
if (instance->ctrl_info->max_lds > 64)
instance->supportmax256vd = 1;
instance->drv_supported_vd_count = MEGASAS_MAX_LD_CHANNELS
* MEGASAS_MAX_DEV_PER_CHANNEL;
instance->drv_supported_pd_count = MEGASAS_MAX_PD_CHANNELS
* MEGASAS_MAX_DEV_PER_CHANNEL;
if (instance->supportmax256vd) {
instance->fw_supported_vd_count = MAX_LOGICAL_DRIVES_EXT;
instance->fw_supported_pd_count = MAX_PHYSICAL_DEVICES;
} else {
instance->fw_supported_vd_count = MAX_LOGICAL_DRIVES;
instance->fw_supported_pd_count = MAX_PHYSICAL_DEVICES;
}
dev_info(&instance->pdev->dev, "Firmware supports %d VDs %d PDs\n"
"Driver supports %d VDs %d PDs\n",
instance->fw_supported_vd_count,
instance->fw_supported_pd_count,
instance->drv_supported_vd_count,
instance->drv_supported_pd_count);
instance->flag_ieee = 1;
fusion->map_sz = sizeof(struct MR_FW_RAID_MAP) +
(sizeof(struct MR_LD_SPAN_MAP) *(MAX_LOGICAL_DRIVES - 1));
fusion->fast_path_io = 0;
fusion->old_map_sz =
sizeof(struct MR_FW_RAID_MAP) + (sizeof(struct MR_LD_SPAN_MAP) *
(instance->fw_supported_vd_count - 1));
fusion->new_map_sz =
sizeof(struct MR_FW_RAID_MAP_EXT);
fusion->drv_map_sz =
sizeof(struct MR_DRV_RAID_MAP) + (sizeof(struct MR_LD_SPAN_MAP) *
(instance->drv_supported_vd_count - 1));
fusion->drv_map_pages = get_order(fusion->drv_map_sz);
for (i = 0; i < 2; i++) {
fusion->ld_map[i] = NULL;
fusion->ld_drv_map[i] = (void *)__get_free_pages(GFP_KERNEL,
fusion->drv_map_pages);
if (!fusion->ld_drv_map[i]) {
dev_err(&instance->pdev->dev, "Could not allocate "
"memory for local map info for %d pages\n",
fusion->drv_map_pages);
if (i == 1)
free_pages((ulong)fusion->ld_drv_map[0],
fusion->drv_map_pages);
goto fail_ioc_init;
}
}
fusion->max_map_sz = max(fusion->old_map_sz, fusion->new_map_sz);
if (instance->supportmax256vd)
fusion->current_map_sz = fusion->new_map_sz;
else
fusion->current_map_sz = fusion->old_map_sz;
for (i = 0; i < 2; i++) {
fusion->ld_map[i] = dma_alloc_coherent(&instance->pdev->dev,
fusion->map_sz,
fusion->max_map_sz,
&fusion->ld_map_phys[i],
GFP_KERNEL);
if (!fusion->ld_map[i]) {
@ -1043,7 +1150,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
fail_map_info:
if (i == 1)
dma_free_coherent(&instance->pdev->dev, fusion->map_sz,
dma_free_coherent(&instance->pdev->dev, fusion->max_map_sz,
fusion->ld_map[0], fusion->ld_map_phys[0]);
fail_ioc_init:
megasas_free_cmds_fusion(instance);
@ -1065,6 +1172,11 @@ megasas_fire_cmd_fusion(struct megasas_instance *instance,
u32 req_desc_hi,
struct megasas_register_set __iomem *regs)
{
#if defined(writeq) && defined(CONFIG_64BIT)
u64 req_data = (((u64)req_desc_hi << 32) | (u32)req_desc_lo);
writeq(le64_to_cpu(req_data), &(regs)->inbound_low_queue_port);
#else
unsigned long flags;
spin_lock_irqsave(&instance->hba_lock, flags);
@ -1072,6 +1184,7 @@ megasas_fire_cmd_fusion(struct megasas_instance *instance,
writel(le32_to_cpu(req_desc_lo), &(regs)->inbound_low_queue_port);
writel(le32_to_cpu(req_desc_hi), &(regs)->inbound_high_queue_port);
spin_unlock_irqrestore(&instance->hba_lock, flags);
#endif
}
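
/*
 * A small sketch of the 64-bit doorbell path added above: where
 * writeq() exists, the two 32-bit request descriptor halves are
 * combined into a single atomic 64-bit store, removing the need for
 * the hba_lock around the paired 32-bit writes.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t combine_desc(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	printf("0x%016llx\n",
	       (unsigned long long)combine_desc(0x11223344u, 0xaabbccddu));
	return 0;
}
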
/**
@ -1224,7 +1337,7 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
void
megasas_set_pd_lba(struct MPI2_RAID_SCSI_IO_REQUEST *io_request, u8 cdb_len,
struct IO_REQUEST_INFO *io_info, struct scsi_cmnd *scp,
struct MR_FW_RAID_MAP_ALL *local_map_ptr, u32 ref_tag)
struct MR_DRV_RAID_MAP_ALL *local_map_ptr, u32 ref_tag)
{
struct MR_LD_RAID *raid;
u32 ld;
@ -1409,7 +1522,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
struct IO_REQUEST_INFO io_info;
struct fusion_context *fusion;
struct MR_FW_RAID_MAP_ALL *local_map_ptr;
struct MR_DRV_RAID_MAP_ALL *local_map_ptr;
u8 *raidLUN;
device_id = MEGASAS_DEV_INDEX(instance, scp);
@ -1486,10 +1599,10 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
if (scp->sc_data_direction == PCI_DMA_FROMDEVICE)
io_info.isRead = 1;
local_map_ptr = fusion->ld_map[(instance->map_id & 1)];
local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
if ((MR_TargetIdToLdGet(device_id, local_map_ptr) >=
MAX_LOGICAL_DRIVES) || (!fusion->fast_path_io)) {
instance->fw_supported_vd_count) || (!fusion->fast_path_io)) {
io_request->RaidContext.regLockFlags = 0;
fp_possible = 0;
} else {
@ -1529,10 +1642,11 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
if ((fusion->load_balance_info[device_id].loadBalanceFlag) &&
(io_info.isRead)) {
io_info.devHandle =
get_updated_dev_handle(
get_updated_dev_handle(instance,
&fusion->load_balance_info[device_id],
&io_info);
scp->SCp.Status |= MEGASAS_LOAD_BALANCE_FLAG;
cmd->pd_r1_lb = io_info.pd_after_lb;
} else
scp->SCp.Status &= ~MEGASAS_LOAD_BALANCE_FLAG;
cmd->request_desc->SCSIIO.DevHandle = io_info.devHandle;
@ -1579,7 +1693,7 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
u32 device_id;
struct MPI2_RAID_SCSI_IO_REQUEST *io_request;
u16 pd_index = 0;
struct MR_FW_RAID_MAP_ALL *local_map_ptr;
struct MR_DRV_RAID_MAP_ALL *local_map_ptr;
struct fusion_context *fusion = instance->ctrl_context;
u8 span, physArm;
u16 devHandle;
@ -1591,7 +1705,7 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
device_id = MEGASAS_DEV_INDEX(instance, scmd);
pd_index = (scmd->device->channel * MEGASAS_MAX_DEV_PER_CHANNEL)
+scmd->device->id;
local_map_ptr = fusion->ld_map[(instance->map_id & 1)];
local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
io_request->DataLength = cpu_to_le32(scsi_bufflen(scmd));
@ -1639,7 +1753,8 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
goto NonFastPath;
ld = MR_TargetIdToLdGet(device_id, local_map_ptr);
if ((ld >= MAX_LOGICAL_DRIVES) || (!fusion->fast_path_io))
if ((ld >= instance->fw_supported_vd_count) ||
(!fusion->fast_path_io))
goto NonFastPath;
raid = MR_LdRaidGet(ld, local_map_ptr);
@ -1864,10 +1979,11 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
struct megasas_cmd *cmd_mfi;
struct megasas_cmd_fusion *cmd_fusion;
u16 smid, num_completed;
u8 reply_descript_type, arm;
u8 reply_descript_type;
u32 status, extStatus, device_id;
union desc_value d_val;
struct LD_LOAD_BALANCE_INFO *lbinfo;
int threshold_reply_count = 0;
fusion = instance->ctrl_context;
@ -1914,10 +2030,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
lbinfo = &fusion->load_balance_info[device_id];
if (cmd_fusion->scmd->SCp.Status &
MEGASAS_LOAD_BALANCE_FLAG) {
arm = lbinfo->raid1DevHandle[0] ==
cmd_fusion->io_request->DevHandle ? 0 :
1;
atomic_dec(&lbinfo->scsi_pending_cmds[arm]);
atomic_dec(&lbinfo->scsi_pending_cmds[cmd_fusion->pd_r1_lb]);
cmd_fusion->scmd->SCp.Status &=
~MEGASAS_LOAD_BALANCE_FLAG;
}
@ -1941,10 +2054,19 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
break;
case MEGASAS_MPI2_FUNCTION_PASSTHRU_IO_REQUEST: /*MFI command */
cmd_mfi = instance->cmd_list[cmd_fusion->sync_cmd_idx];
if (!cmd_mfi->mpt_pthr_cmd_blocked) {
if (megasas_dbg_lvl == 5)
dev_info(&instance->pdev->dev,
"freeing mfi/mpt pass-through "
"from %s %d\n",
__func__, __LINE__);
megasas_return_mfi_mpt_pthr(instance, cmd_mfi,
cmd_fusion);
}
megasas_complete_cmd(instance, cmd_mfi, DID_OK);
cmd_fusion->flags = 0;
megasas_return_cmd_fusion(instance, cmd_fusion);
break;
}
@ -1955,6 +2077,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
desc->Words = ULLONG_MAX;
num_completed++;
threshold_reply_count++;
/* Get the next reply descriptor */
if (!fusion->last_reply_idx[MSIxIndex])
@ -1974,6 +2097,25 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
if (reply_descript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
break;
/*
* Write to the reply post host index register after completing a
* threshold number of replies while more replies are still pending
* in the reply queue.
*/
if (threshold_reply_count >= THRESHOLD_REPLY_COUNT) {
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY))
writel(((MSIxIndex & 0x7) << 24) |
fusion->last_reply_idx[MSIxIndex],
instance->reply_post_host_index_addr[MSIxIndex/8]);
else
writel((MSIxIndex << 24) |
fusion->last_reply_idx[MSIxIndex],
instance->reply_post_host_index_addr[0]);
threshold_reply_count = 0;
}
}
if (!num_completed)
@ -2028,7 +2170,7 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp)
{
struct megasas_irq_context *irq_context = devp;
struct megasas_instance *instance = irq_context->instance;
u32 mfiStatus, fw_state;
u32 mfiStatus, fw_state, dma_state;
if (instance->mask_interrupts)
return IRQ_NONE;
@ -2050,7 +2192,16 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp)
/* If we didn't complete any commands, check for FW fault */
fw_state = instance->instancet->read_fw_status_reg(
instance->reg_set) & MFI_STATE_MASK;
if (fw_state == MFI_STATE_FAULT) {
dma_state = instance->instancet->read_fw_status_reg
(instance->reg_set) & MFI_STATE_DMADONE;
if (instance->crash_dump_drv_support &&
instance->crash_dump_app_support) {
/* Start collecting crash, if DMA bit is done */
if ((fw_state == MFI_STATE_FAULT) && dma_state)
schedule_work(&instance->crash_init);
else if (fw_state == MFI_STATE_FAULT)
schedule_work(&instance->work_init);
} else if (fw_state == MFI_STATE_FAULT) {
printk(KERN_WARNING "megaraid_sas: Iop2SysDoorbellInt"
"for scsi%d\n", instance->host->host_no);
schedule_work(&instance->work_init);
@ -2075,6 +2226,7 @@ build_mpt_mfi_pass_thru(struct megasas_instance *instance,
struct megasas_cmd_fusion *cmd;
struct fusion_context *fusion;
struct megasas_header *frame_hdr = &mfi_cmd->frame->hdr;
u32 opcode;
cmd = megasas_get_cmd_fusion(instance);
if (!cmd)
@ -2082,9 +2234,20 @@ build_mpt_mfi_pass_thru(struct megasas_instance *instance,
/* Save the smid. To be used for returning the cmd */
mfi_cmd->context.smid = cmd->index;
cmd->sync_cmd_idx = mfi_cmd->index;
/* Set this only for Blocked commands */
opcode = le32_to_cpu(mfi_cmd->frame->dcmd.opcode);
if ((opcode == MR_DCMD_LD_MAP_GET_INFO)
&& (mfi_cmd->frame->dcmd.mbox.b[1] == 1))
mfi_cmd->is_wait_event = 1;
if (opcode == MR_DCMD_CTRL_EVENT_WAIT)
mfi_cmd->is_wait_event = 1;
if (mfi_cmd->is_wait_event)
mfi_cmd->mpt_pthr_cmd_blocked = cmd;
/*
* For cmds where the flag is set, store the flag and check
* on completion. For cmds with this flag, don't call
@ -2173,6 +2336,7 @@ megasas_issue_dcmd_fusion(struct megasas_instance *instance,
printk(KERN_ERR "Couldn't issue MFI pass thru cmd\n");
return;
}
atomic_set(&cmd->mfi_mpt_pthr, MFI_MPT_ATTACHED);
instance->instancet->fire_cmd(instance, req_desc->u.low,
req_desc->u.high, instance->reg_set);
}
@ -2202,6 +2366,49 @@ megasas_read_fw_status_reg_fusion(struct megasas_register_set __iomem *regs)
return readl(&(regs)->outbound_scratch_pad);
}
/**
* megasas_alloc_host_crash_buffer - Allocate host buffers for FW crash dump collection
* @instance: Controller's soft instance
*
* Stores the number of buffers actually allocated in instance->drv_buf_alloc.
*/
static void
megasas_alloc_host_crash_buffer(struct megasas_instance *instance)
{
unsigned int i;
instance->crash_buf_pages = get_order(CRASH_DMA_BUF_SIZE);
for (i = 0; i < MAX_CRASH_DUMP_SIZE; i++) {
instance->crash_buf[i] = (void *)__get_free_pages(GFP_KERNEL,
instance->crash_buf_pages);
if (!instance->crash_buf[i]) {
dev_info(&instance->pdev->dev, "Firmware crash dump "
"memory allocation failed at index %d\n", i);
break;
}
}
instance->drv_buf_alloc = i;
}
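
/*
 * A userspace sketch of get_order() as used above: the smallest order
 * such that 2^order pages cover the requested size.  The driver pairs
 * each __get_free_pages(GFP_KERNEL, order) with a free_pages() of the
 * same order when the crash buffers are released.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* stand-in for the kernel constant */

static int order_for(unsigned long size)
{
	unsigned long pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	printf("order for 1 MB: %d\n", order_for(1UL << 20));	/* 8 */
	return 0;
}
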
/**
* megasas_free_host_crash_buffer - Free the host buffers used for FW crash dump collection
* @instance: Controller's soft instance
*/
void
megasas_free_host_crash_buffer(struct megasas_instance *instance)
{
unsigned int i;
for (i = 0; i < instance->drv_buf_alloc; i++) {
if (instance->crash_buf[i])
free_pages((ulong)instance->crash_buf[i],
instance->crash_buf_pages);
}
instance->drv_buf_index = 0;
instance->drv_buf_alloc = 0;
instance->fw_crash_state = UNAVAILABLE;
instance->fw_crash_buffer_size = 0;
}
/**
* megasas_adp_reset_fusion - For controller reset
* @regs: MFI register set
@ -2345,6 +2552,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
struct megasas_cmd *cmd_mfi;
union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
u32 host_diag, abs_state, status_reg, reset_adapter;
u32 io_timeout_in_crash_mode = 0;
instance = (struct megasas_instance *)shost->hostdata;
fusion = instance->ctrl_context;
@ -2355,8 +2563,45 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
printk(KERN_WARNING "megaraid_sas: Hardware critical error, "
"returning FAILED for scsi%d.\n",
instance->host->host_no);
mutex_unlock(&instance->reset_mutex);
return FAILED;
}
status_reg = instance->instancet->read_fw_status_reg(instance->reg_set);
abs_state = status_reg & MFI_STATE_MASK;
/* IO timeout detected, forcibly put FW in FAULT state */
if (abs_state != MFI_STATE_FAULT && instance->crash_dump_buf &&
instance->crash_dump_app_support && iotimeout) {
dev_info(&instance->pdev->dev, "IO timeout is detected, "
"forcibly FAULT Firmware\n");
instance->adprecovery = MEGASAS_ADPRESET_SM_INFAULT;
status_reg = readl(&instance->reg_set->doorbell);
writel(status_reg | MFI_STATE_FORCE_OCR,
&instance->reg_set->doorbell);
readl(&instance->reg_set->doorbell);
mutex_unlock(&instance->reset_mutex);
do {
ssleep(3);
io_timeout_in_crash_mode++;
dev_dbg(&instance->pdev->dev, "waiting for [%d] "
"seconds for crash dump collection and OCR "
"to be done\n", (io_timeout_in_crash_mode * 3));
} while ((instance->adprecovery != MEGASAS_HBA_OPERATIONAL) &&
(io_timeout_in_crash_mode < 80));
if (instance->adprecovery == MEGASAS_HBA_OPERATIONAL) {
dev_info(&instance->pdev->dev, "OCR done for IO "
"timeout case\n");
retval = SUCCESS;
} else {
dev_info(&instance->pdev->dev, "Controller is not "
"operational after 240 seconds wait for IO "
"timeout case in FW crash dump mode\n do "
"OCR/kill adapter\n");
retval = megasas_reset_fusion(shost, 0);
}
return retval;
}
if (instance->requestorId && !instance->skip_heartbeat_timer_del)
del_timer_sync(&instance->sriov_heartbeat_timer);
@ -2563,10 +2808,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
cmd_list[cmd_fusion->sync_cmd_idx];
if (cmd_mfi->frame->dcmd.opcode ==
cpu_to_le32(MR_DCMD_LD_MAP_GET_INFO)) {
megasas_return_cmd(instance,
cmd_mfi);
megasas_return_cmd_fusion(
instance, cmd_fusion);
megasas_return_mfi_mpt_pthr(instance, cmd_mfi, cmd_fusion);
} else {
req_desc =
megasas_get_request_descriptor(
@ -2603,7 +2845,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
/* Reset load balance info */
memset(fusion->load_balance_info, 0,
sizeof(struct LD_LOAD_BALANCE_INFO)
*MAX_LOGICAL_DRIVES);
*MAX_LOGICAL_DRIVES_EXT);
if (!megasas_get_map_info(instance))
megasas_sync_map_info(instance);
@ -2623,6 +2865,15 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
printk(KERN_WARNING "megaraid_sas: Reset "
"successful for scsi%d.\n",
instance->host->host_no);
if (instance->crash_dump_drv_support) {
if (instance->crash_dump_app_support)
megasas_set_crash_dump_params(instance,
MR_CRASH_BUF_TURN_ON);
else
megasas_set_crash_dump_params(instance,
MR_CRASH_BUF_TURN_OFF);
}
retval = SUCCESS;
goto out;
}
@ -2651,6 +2902,74 @@ out:
return retval;
}
/* Fusion Crash dump collection work queue */
void megasas_fusion_crash_dump_wq(struct work_struct *work)
{
struct megasas_instance *instance =
container_of(work, struct megasas_instance, crash_init);
u32 status_reg;
u8 partial_copy = 0;
status_reg = instance->instancet->read_fw_status_reg(instance->reg_set);
/*
* Allocate host crash buffers to copy data from 1 MB DMA crash buffer
* to host crash buffers
*/
if (instance->drv_buf_index == 0) {
/* Buffer is already allocated for old Crash dump.
* Do OCR and do not wait for crash dump collection
*/
if (instance->drv_buf_alloc) {
dev_info(&instance->pdev->dev, "earlier crash dump is "
"not yet copied by application, ignoring this "
"crash dump and initiating OCR\n");
status_reg |= MFI_STATE_CRASH_DUMP_DONE;
writel(status_reg,
&instance->reg_set->outbound_scratch_pad);
readl(&instance->reg_set->outbound_scratch_pad);
return;
}
megasas_alloc_host_crash_buffer(instance);
dev_info(&instance->pdev->dev, "Number of host crash buffers "
"allocated: %d\n", instance->drv_buf_alloc);
}
/*
* If the driver has already allocated the maximum number of
* buffers and the FW has more crash dump data, the driver
* ignores the extra data.
*/
if (instance->drv_buf_index >= (instance->drv_buf_alloc)) {
dev_info(&instance->pdev->dev, "Driver is done copying "
"the buffer: %d\n", instance->drv_buf_alloc);
status_reg |= MFI_STATE_CRASH_DUMP_DONE;
partial_copy = 1;
} else {
memcpy(instance->crash_buf[instance->drv_buf_index],
instance->crash_dump_buf, CRASH_DMA_BUF_SIZE);
instance->drv_buf_index++;
status_reg &= ~MFI_STATE_DMADONE;
}
if (status_reg & MFI_STATE_CRASH_DUMP_DONE) {
dev_info(&instance->pdev->dev, "Crash Dump is available,number "
"of copied buffers: %d\n", instance->drv_buf_index);
instance->fw_crash_buffer_size = instance->drv_buf_index;
instance->fw_crash_state = AVAILABLE;
instance->drv_buf_index = 0;
writel(status_reg, &instance->reg_set->outbound_scratch_pad);
readl(&instance->reg_set->outbound_scratch_pad);
if (!partial_copy)
megasas_reset_fusion(instance->host, 0);
} else {
writel(status_reg, &instance->reg_set->outbound_scratch_pad);
readl(&instance->reg_set->outbound_scratch_pad);
}
}
/* Fusion OCR work queue */
void megasas_fusion_ocr_wq(struct work_struct *work)
{


@ -86,6 +86,7 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
#define MEGASAS_FP_CMD_LEN 16
#define MEGASAS_FUSION_IN_RESET 0
#define THRESHOLD_REPLY_COUNT 50
/*
* Raid Context structure which describes MegaRAID specific IO Parameters
@ -478,10 +479,13 @@ struct MPI2_IOC_INIT_REQUEST {
#define MAX_ROW_SIZE 32
#define MAX_RAIDMAP_ROW_SIZE (MAX_ROW_SIZE)
#define MAX_LOGICAL_DRIVES 64
#define MAX_LOGICAL_DRIVES_EXT 256
#define MAX_RAIDMAP_LOGICAL_DRIVES (MAX_LOGICAL_DRIVES)
#define MAX_RAIDMAP_VIEWS (MAX_LOGICAL_DRIVES)
#define MAX_ARRAYS 128
#define MAX_RAIDMAP_ARRAYS (MAX_ARRAYS)
#define MAX_ARRAYS_EXT 256
#define MAX_API_ARRAYS_EXT (MAX_ARRAYS_EXT)
#define MAX_PHYSICAL_DEVICES 256
#define MAX_RAIDMAP_PHYSICAL_DEVICES (MAX_PHYSICAL_DEVICES)
#define MR_DCMD_LD_MAP_GET_INFO 0x0300e101
@ -601,7 +605,6 @@ struct MR_FW_RAID_MAP {
u32 maxArrays;
} validationInfo;
u32 version[5];
u32 reserved1[5];
};
u32 ldCount;
@ -627,6 +630,8 @@ struct IO_REQUEST_INFO {
u8 start_span;
u8 reserved;
u64 start_row;
u8 span_arm; /* span[7:5], arm[4:0] */
u8 pd_after_lb;
};
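
/*
 * A sketch of the span_arm encoding noted above (span in bits 7:5,
 * arm in bits 4:0), using the RAID_CTX_SPANARM_* constants that
 * mr_spanset_get_phy_params() applies when building spanArm.
 */
#include <stdio.h>

#define RAID_CTX_SPANARM_ARM_MASK	(0x1f)
#define RAID_CTX_SPANARM_SPAN_SHIFT	(5)
#define RAID_CTX_SPANARM_SPAN_MASK	(0xe0)

static unsigned char pack_span_arm(unsigned char span, unsigned char arm)
{
	return (unsigned char)((span << RAID_CTX_SPANARM_SPAN_SHIFT) |
			       (arm & RAID_CTX_SPANARM_ARM_MASK));
}

int main(void)
{
	unsigned char v = pack_span_arm(2, 7);

	printf("span=%u arm=%u\n",
	       (v & RAID_CTX_SPANARM_SPAN_MASK) >> RAID_CTX_SPANARM_SPAN_SHIFT,
	       v & RAID_CTX_SPANARM_ARM_MASK);
	return 0;
}
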
struct MR_LD_TARGET_SYNC {
@ -678,14 +683,14 @@ struct megasas_cmd_fusion {
u32 sync_cmd_idx;
u32 index;
u8 flags;
u8 pd_r1_lb;
};
struct LD_LOAD_BALANCE_INFO {
u8 loadBalanceFlag;
u8 reserved1;
u16 raid1DevHandle[2];
atomic_t scsi_pending_cmds[2];
u64 last_accessed_block[2];
atomic_t scsi_pending_cmds[MAX_PHYSICAL_DEVICES];
u64 last_accessed_block[MAX_PHYSICAL_DEVICES];
};
/* SPAN_SET is info calculated from the span info in the Raid map, per LD */
@ -713,11 +718,86 @@ struct MR_FW_RAID_MAP_ALL {
struct MR_LD_SPAN_MAP ldSpanMap[MAX_LOGICAL_DRIVES - 1];
} __attribute__ ((packed));
struct MR_DRV_RAID_MAP {
/* total size of this structure, including this field.
* This field will be manipulated by the driver for the ext raid map;
* otherwise the value is picked from the firmware raid map.
*/
u32 totalSize;
union {
struct {
u32 maxLd;
u32 maxSpanDepth;
u32 maxRowSize;
u32 maxPdCount;
u32 maxArrays;
} validationInfo;
u32 version[5];
};
/* timeout value used by the driver in FP IOs */
u8 fpPdIoTimeoutSec;
u8 reserved2[7];
u16 ldCount;
u16 arCount;
u16 spanCount;
u16 reserve3;
struct MR_DEV_HANDLE_INFO devHndlInfo[MAX_RAIDMAP_PHYSICAL_DEVICES];
u8 ldTgtIdToLd[MAX_LOGICAL_DRIVES_EXT];
struct MR_ARRAY_INFO arMapInfo[MAX_API_ARRAYS_EXT];
struct MR_LD_SPAN_MAP ldSpanMap[1];
};
/* The driver raid map size is the same as the extended raid map.
* MR_DRV_RAID_MAP_ALL mirrors the old raid map layout and exists
* mainly for code reuse.
*/
struct MR_DRV_RAID_MAP_ALL {
struct MR_DRV_RAID_MAP raidMap;
struct MR_LD_SPAN_MAP ldSpanMap[MAX_LOGICAL_DRIVES_EXT - 1];
} __packed;
struct MR_FW_RAID_MAP_EXT {
/* Not used in the new map */
u32 reserved;
union {
struct {
u32 maxLd;
u32 maxSpanDepth;
u32 maxRowSize;
u32 maxPdCount;
u32 maxArrays;
} validationInfo;
u32 version[5];
};
u8 fpPdIoTimeoutSec;
u8 reserved2[7];
u16 ldCount;
u16 arCount;
u16 spanCount;
u16 reserve3;
struct MR_DEV_HANDLE_INFO devHndlInfo[MAX_RAIDMAP_PHYSICAL_DEVICES];
u8 ldTgtIdToLd[MAX_LOGICAL_DRIVES_EXT];
struct MR_ARRAY_INFO arMapInfo[MAX_API_ARRAYS_EXT];
struct MR_LD_SPAN_MAP ldSpanMap[MAX_LOGICAL_DRIVES_EXT];
};
struct fusion_context {
struct megasas_cmd_fusion **cmd_list;
struct list_head cmd_pool;
spinlock_t cmd_pool_lock;
spinlock_t mpt_pool_lock;
dma_addr_t req_frames_desc_phys;
u8 *req_frames_desc;
@ -749,10 +829,18 @@ struct fusion_context {
struct MR_FW_RAID_MAP_ALL *ld_map[2];
dma_addr_t ld_map_phys[2];
u32 map_sz;
/* Non-DMA-able memory; the driver's local copy. */
struct MR_DRV_RAID_MAP_ALL *ld_drv_map[2];
u32 max_map_sz;
u32 current_map_sz;
u32 old_map_sz;
u32 new_map_sz;
u32 drv_map_sz;
u32 drv_map_pages;
u8 fast_path_io;
struct LD_LOAD_BALANCE_INFO load_balance_info[MAX_LOGICAL_DRIVES];
LD_SPAN_INFO log_to_span[MAX_LOGICAL_DRIVES];
struct LD_LOAD_BALANCE_INFO load_balance_info[MAX_LOGICAL_DRIVES_EXT];
LD_SPAN_INFO log_to_span[MAX_LOGICAL_DRIVES_EXT];
};
union desc_value {
@ -763,4 +851,5 @@ union desc_value {
} u;
};
#endif /* _MEGARAID_SAS_FUSION_H_ */


@ -2,7 +2,7 @@
# Kernel configuration file for the MPT2SAS
#
# This code is based on drivers/scsi/mpt2sas/Kconfig
# Copyright (C) 2007-2012 LSI Corporation
# Copyright (C) 2007-2014 LSI Corporation
# (mailto:DL-MPTFusionLinux@lsi.com)
# This program is free software; you can redistribute it and/or


@ -1,5 +1,5 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2.h
@ -8,7 +8,7 @@
* scatter/gather formats.
* Creation Date: June 21, 2006
*
* mpi2.h Version: 02.00.28
* mpi2.h Version: 02.00.32
*
* Version History
* ---------------
@ -78,6 +78,11 @@
* 07-10-12 02.00.26 Bumped MPI2_HEADER_VERSION_UNIT.
* 07-26-12 02.00.27 Bumped MPI2_HEADER_VERSION_UNIT.
* 11-27-12 02.00.28 Bumped MPI2_HEADER_VERSION_UNIT.
* 12-20-12 02.00.29 Bumped MPI2_HEADER_VERSION_UNIT.
* Added MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET.
* 04-09-13 02.00.30 Bumped MPI2_HEADER_VERSION_UNIT.
* 04-17-13 02.00.31 Bumped MPI2_HEADER_VERSION_UNIT.
* 08-19-13 02.00.32 Bumped MPI2_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
*/
@ -103,7 +108,7 @@
#define MPI2_VERSION_02_00 (0x0200)
/* versioning for this MPI header set */
#define MPI2_HEADER_VERSION_UNIT (0x1C)
#define MPI2_HEADER_VERSION_UNIT (0x20)
#define MPI2_HEADER_VERSION_DEV (0x00)
#define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI2_HEADER_VERSION_UNIT_SHIFT (8)
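
/*
 * A sketch of how the combined header version is composed from the
 * unit and dev bytes (mirroring the MPI2_HEADER_VERSION macro): the
 * bump above moves the unit byte from 0x1C to 0x20, i.e. 0x2000.
 */
#include <stdio.h>

int main(void)
{
	unsigned int unit = 0x20, dev = 0x00, shift = 8;

	printf("MPI2 header version: 0x%04x\n", (unit << shift) | dev);
	return 0;
}
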
@ -263,6 +268,7 @@ typedef volatile struct _MPI2_SYSTEM_INTERFACE_REGS
#define MPI2_REPLY_POST_HOST_INDEX_MASK (0x00FFFFFF)
#define MPI2_RPHI_MSIX_INDEX_MASK (0xFF000000)
#define MPI2_RPHI_MSIX_INDEX_SHIFT (24)
#define MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET (0x0000030C) /* MPI v2.5 only */
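
/*
 * A sketch of composing a Reply Post Host Index register value from
 * an MSI-X index and a reply queue index, matching the writel()
 * calls in complete_cmd_fusion() earlier in this series.
 */
#include <stdint.h>
#include <stdio.h>

#define MPI2_REPLY_POST_HOST_INDEX_MASK (0x00FFFFFF)
#define MPI2_RPHI_MSIX_INDEX_MASK	(0xFF000000)
#define MPI2_RPHI_MSIX_INDEX_SHIFT	(24)

static uint32_t rphi_value(uint32_t msix_index, uint32_t reply_index)
{
	return ((msix_index << MPI2_RPHI_MSIX_INDEX_SHIFT) &
		MPI2_RPHI_MSIX_INDEX_MASK) |
	       (reply_index & MPI2_REPLY_POST_HOST_INDEX_MASK);
}

int main(void)
{
	printf("0x%08x\n", rphi_value(3, 0x1234));	/* 0x03001234 */
	return 0;
}
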
/*
* Defines for the HCBSize and address


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_cnfg.h
* Title: MPI Configuration messages and pages
* Creation Date: November 10, 2006
*
* mpi2_cnfg.h Version: 02.00.23
* mpi2_cnfg.h Version: 02.00.26
*
* Version History
* ---------------
@ -150,7 +150,13 @@
* Added UEFIVersion field to BIOS Page 1 and defined new
* BiosOptions bits.
* 11-27-12 02.00.23 Added MPI2_MANPAGE7_FLAG_EVENTREPLAY_SLOT_ORDER.
* Added MPI2_BIOSPAGE1_OPTIONS_MASK_OEM_ID.
* Added MPI2_BIOSPAGE1_OPTIONS_MASK_OEM_ID.
* 12-20-12 02.00.24 Marked MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION as
* obsolete for MPI v2.5 and later.
* Added some defines for 12G SAS speeds.
* 04-09-13 02.00.25 Added MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK.
* Fixed MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS to
* match the specification.
* --------------------------------------------------------------------------
*/
@ -773,6 +779,7 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1
#define MPI2_IOUNITPAGE1_PAGEVERSION (0x04)
/* IO Unit Page 1 Flags defines */
#define MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK (0x00004000)
#define MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY (0x00000800)
#define MPI2_IOUNITPAGE1_MASK_SATA_WRITE_CACHE (0x00000600)
#define MPI2_IOUNITPAGE1_SATA_WRITE_CACHE_SHIFT (9)
@ -844,7 +851,7 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_5 {
#define MPI2_IOUNITPAGE5_PAGEVERSION (0x00)
/* defines for IO Unit Page 5 DmaEngineCapabilities field */
#define MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS (0xFF00)
#define MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS (0xFFFF0000)
#define MPI2_IOUNITPAGE5_DMA_CAP_SHIFT_MAX_REQUESTS (16)
#define MPI2_IOUNITPAGE5_DMA_CAP_EEDP (0x0008)
@ -885,13 +892,17 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_7 {
U16 IOCTemperature; /* 0x10 */
U8 IOCTemperatureUnits; /* 0x12 */
U8 IOCSpeed; /* 0x13 */
U16 BoardTemperature; /* 0x14 */
U8 BoardTemperatureUnits; /* 0x16 */
U8 Reserved3; /* 0x17 */
U16 BoardTemperature; /* 0x14 */
U8 BoardTemperatureUnits; /* 0x16 */
U8 Reserved3; /* 0x17 */
U32 Reserved4; /* 0x18 */
U32 Reserved5; /* 0x1C */
U32 Reserved6; /* 0x20 */
U32 Reserved7; /* 0x24 */
} MPI2_CONFIG_PAGE_IO_UNIT_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_7,
Mpi2IOUnitPage7_t, MPI2_POINTER pMpi2IOUnitPage7_t;
#define MPI2_IOUNITPAGE7_PAGEVERSION (0x02)
#define MPI2_IOUNITPAGE7_PAGEVERSION (0x04)
/* defines for IO Unit Page 7 PCIeWidth field */
#define MPI2_IOUNITPAGE7_PCIE_WIDTH_X1 (0x01)
@ -1801,6 +1812,7 @@ typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_1
#define MPI2_SAS_PRATE_MAX_RATE_1_5 (0x80)
#define MPI2_SAS_PRATE_MAX_RATE_3_0 (0x90)
#define MPI2_SAS_PRATE_MAX_RATE_6_0 (0xA0)
#define MPI25_SAS_PRATE_MAX_RATE_12_0 (0xB0)
#define MPI2_SAS_PRATE_MIN_RATE_MASK (0x0F)
#define MPI2_SAS_PRATE_MIN_RATE_NOT_PROGRAMMABLE (0x00)
#define MPI2_SAS_PRATE_MIN_RATE_1_5 (0x08)
@ -1813,6 +1825,7 @@ typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_1
#define MPI2_SAS_HWRATE_MAX_RATE_1_5 (0x80)
#define MPI2_SAS_HWRATE_MAX_RATE_3_0 (0x90)
#define MPI2_SAS_HWRATE_MAX_RATE_6_0 (0xA0)
#define MPI25_SAS_HWRATE_MAX_RATE_12_0 (0xB0)
#define MPI2_SAS_HWRATE_MIN_RATE_MASK (0x0F)
#define MPI2_SAS_HWRATE_MIN_RATE_1_5 (0x08)
#define MPI2_SAS_HWRATE_MIN_RATE_3_0 (0x09)


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_init.h
* Title: MPI SCSI initiator mode messages and structures
* Creation Date: June 23, 2006
*
* mpi2_init.h Version: 02.00.14
* mpi2_init.h Version: 02.00.15
*
* Version History
* ---------------
@ -37,6 +37,8 @@
* 02-06-12 02.00.13 Added alternate defines for Task Priority / Command
* Priority to match SAM-4.
* 07-10-12 02.00.14 Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION.
* 04-09-13 02.00.15 Added SCSIStatusQualifier field to MPI2_SCSI_IO_REPLY,
* replacing the Reserved4 field.
* --------------------------------------------------------------------------
*/
@ -234,7 +236,7 @@ typedef struct _MPI2_SCSI_IO_REPLY
U32 SenseCount; /* 0x18 */
U32 ResponseInfo; /* 0x1C */
U16 TaskTag; /* 0x20 */
U16 Reserved4; /* 0x22 */
U16 SCSIStatusQualifier; /* 0x22 */
U32 BidirectionalTransferCount; /* 0x24 */
U32 Reserved5; /* 0x28 */
U32 Reserved6; /* 0x2C */


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_ioc.h
* Title: MPI IOC, Port, Event, FW Download, and FW Upload messages
* Creation Date: October 11, 2006
*
* mpi2_ioc.h Version: 02.00.22
* mpi2_ioc.h Version: 02.00.23
*
* Version History
* ---------------
@ -121,6 +121,11 @@
* 07-26-12 02.00.22 Added MPI2_IOCFACTS_EXCEPT_PARTIAL_MEMORY_FAILURE.
* Added ElapsedSeconds field to
* MPI2_EVENT_DATA_IR_OPERATION_STATUS.
* 08-19-13 02.00.23 For IOCInit, added MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE
* and MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY.
* Added MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE.
* Added MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY.
* Added Encrypted Hash Extended Image.
* --------------------------------------------------------------------------
*/
@ -177,6 +182,9 @@ typedef struct _MPI2_IOC_INIT_REQUEST
#define MPI2_WHOINIT_HOST_DRIVER (0x04)
#define MPI2_WHOINIT_MANUFACTURER (0x05)
/* MsgFlags */
#define MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE (0x01)
/* MsgVersion */
#define MPI2_IOCINIT_MSGVERSION_MAJOR_MASK (0xFF00)
#define MPI2_IOCINIT_MSGVERSION_MAJOR_SHIFT (8)
@ -189,9 +197,17 @@ typedef struct _MPI2_IOC_INIT_REQUEST
#define MPI2_IOCINIT_HDRVERSION_DEV_MASK (0x00FF)
#define MPI2_IOCINIT_HDRVERSION_DEV_SHIFT (0)
/* minimum depth for the Reply Descriptor Post Queue */
/* minimum depth for a Reply Descriptor Post Queue */
#define MPI2_RDPQ_DEPTH_MIN (16)
/* Reply Descriptor Post Queue Array Entry */
typedef struct _MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY {
U64 RDPQBaseAddress; /* 0x00 */
U32 Reserved1; /* 0x08 */
U32 Reserved2; /* 0x0C */
} MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY,
MPI2_POINTER PTR_MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY,
Mpi2IOCInitRDPQArrayEntry, MPI2_POINTER pMpi2IOCInitRDPQArrayEntry;
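For orientation: when the IOC reports RDPQ-array capability and the driver sets MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE, the IOCInit request's ReplyDescriptorPostQueueAddress points at an array of these entries, one per reply queue. A minimal sketch of filling that array (the helper name and rdpq_dma[] are hypothetical; the real driver code appears later in this patch):

/* Hypothetical helper: publish one RDPQ array entry per reply queue. */
static void
fill_rdpq_array(Mpi2IOCInitRequest_t *req,
	Mpi2IOCInitRDPQArrayEntry *array, dma_addr_t array_dma,
	dma_addr_t *rdpq_dma, int queue_count)
{
	int i;

	for (i = 0; i < queue_count; i++)
		array[i].RDPQBaseAddress = cpu_to_le64((u64)rdpq_dma[i]);
	req->MsgFlags = MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE;
	req->ReplyDescriptorPostQueueAddress = cpu_to_le64((u64)array_dma);
}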
/* IOCInit Reply message */
typedef struct _MPI2_IOC_INIT_REPLY
@ -307,6 +323,7 @@ typedef struct _MPI2_IOC_FACTS_REPLY
/* ProductID field uses MPI2_FW_HEADER_PID_ */
/* IOCCapabilities */
#define MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE (0x00040000)
#define MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY (0x00010000)
#define MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX (0x00008000)
#define MPI2_IOCFACTS_CAPABILITY_RAID_ACCELERATOR (0x00004000)
@ -1153,6 +1170,7 @@ typedef struct _MPI2_FW_DOWNLOAD_REQUEST
#define MPI2_FW_DOWNLOAD_ITYPE_MEGARAID (0x09)
#define MPI2_FW_DOWNLOAD_ITYPE_COMPLETE (0x0A)
#define MPI2_FW_DOWNLOAD_ITYPE_COMMON_BOOT_BLOCK (0x0B)
#define MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY (0x0C)
#define MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC (0xF0)
/* FWDownload TransactionContext Element */
@ -1379,14 +1397,15 @@ typedef struct _MPI2_EXT_IMAGE_HEADER
#define MPI2_EXT_IMAGE_HEADER_SIZE (0x40)
/* defines for the ImageType field */
#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED (0x00)
#define MPI2_EXT_IMAGE_TYPE_FW (0x01)
#define MPI2_EXT_IMAGE_TYPE_NVDATA (0x03)
#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER (0x04)
#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION (0x05)
#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06)
#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES (0x07)
#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08)
#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED (0x00)
#define MPI2_EXT_IMAGE_TYPE_FW (0x01)
#define MPI2_EXT_IMAGE_TYPE_NVDATA (0x03)
#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER (0x04)
#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION (0x05)
#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06)
#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES (0x07)
#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08)
#define MPI2_EXT_IMAGE_TYPE_ENCRYPTED_HASH (0x09)
#define MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC (0x80)
#define MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC (0xFF)
#define MPI2_EXT_IMAGE_TYPE_MAX \
@ -1555,6 +1574,39 @@ typedef struct _MPI2_INIT_IMAGE_FOOTER
#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET (0x14)
/* Encrypted Hash Extended Image Data */
typedef struct _MPI25_ENCRYPTED_HASH_ENTRY {
U8 HashImageType; /* 0x00 */
U8 HashAlgorithm; /* 0x01 */
U8 EncryptionAlgorithm; /* 0x02 */
U8 Reserved1; /* 0x03 */
U32 Reserved2; /* 0x04 */
U32 EncryptedHash[1]; /* 0x08 */
} MPI25_ENCRYPTED_HASH_ENTRY, MPI2_POINTER PTR_MPI25_ENCRYPTED_HASH_ENTRY,
Mpi25EncryptedHashEntry_t, MPI2_POINTER pMpi25EncryptedHashEntry_t;
/* values for HashImageType */
#define MPI25_HASH_IMAGE_TYPE_UNUSED (0x00)
#define MPI25_HASH_IMAGE_TYPE_FIRMWARE (0x01)
/* values for HashAlgorithm */
#define MPI25_HASH_ALGORITHM_UNUSED (0x00)
#define MPI25_HASH_ALGORITHM_SHA256 (0x01)
/* values for EncryptionAlgorithm */
#define MPI25_ENCRYPTION_ALG_UNUSED (0x00)
#define MPI25_ENCRYPTION_ALG_RSA256 (0x01)
typedef struct _MPI25_ENCRYPTED_HASH_DATA {
U8 ImageVersion; /* 0x00 */
U8 NumHash; /* 0x01 */
U16 Reserved1; /* 0x02 */
U32 Reserved2; /* 0x04 */
MPI25_ENCRYPTED_HASH_ENTRY EncryptedHashEntry[1]; /* 0x08 */
} MPI25_ENCRYPTED_HASH_DATA, MPI2_POINTER PTR_MPI25_ENCRYPTED_HASH_DATA,
Mpi25EncryptedHashData_t, MPI2_POINTER pMpi25EncryptedHashData_t;
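Since EncryptedHash[] is variable length, consecutive entries in EncryptedHashEntry[] are not a fixed stride; the entry size has to be derived from the hash algorithm. A sketch under the assumption that SHA-256 (32 bytes, i.e. 8 U32 words) is the only defined digest — that length rule is an assumption, not stated in this header:

/* Assumption: SHA-256 digests are 8 U32 words; other values are unused. */
static Mpi25EncryptedHashEntry_t *
next_hash_entry(Mpi25EncryptedHashEntry_t *entry)
{
	u32 hash_words = (entry->HashAlgorithm == MPI25_HASH_ALGORITHM_SHA256)
	    ? 8 : 0;

	return (Mpi25EncryptedHashEntry_t *)((u8 *)entry +
	    offsetof(Mpi25EncryptedHashEntry_t, EncryptedHash) +
	    hash_words * sizeof(U32));
}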
/****************************************************************************
* PowerManagementControl message
****************************************************************************/


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_raid.h
* Title: MPI Integrated RAID messages and structures
* Creation Date: April 26, 2007
*
* mpi2_raid.h Version: 02.00.09
* mpi2_raid.h Version: 02.00.10
*
* Version History
* ---------------
@ -29,6 +29,7 @@
* 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN.
* 07-26-12 02.00.09 Added ElapsedSeconds field to MPI2_RAID_VOL_INDICATOR.
* Added MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID define.
* 04-17-13 02.00.10 Added MPI25_RAID_ACTION_ADATA_ALLOW_PI.
* --------------------------------------------------------------------------
*/
@ -45,6 +46,9 @@
* RAID Action messages
****************************************************************************/
/* ActionDataWord defines for use with MPI2_RAID_ACTION_CREATE_VOLUME action */
#define MPI25_RAID_ACTION_ADATA_ALLOW_PI (0x80000000)
/* ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */
#define MPI2_RAID_ACTION_ADATA_KEEP_LBA0 (0x00000000)
#define MPI2_RAID_ACTION_ADATA_ZERO_LBA0 (0x00000001)


@ -1,5 +1,5 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_sas.h


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_tool.h
* Title: MPI diagnostic tool structures and definitions
* Creation Date: March 26, 2007
*
* mpi2_tool.h Version: 02.00.10
* mpi2_tool.h Version: 02.00.11
*
* Version History
* ---------------
@ -29,6 +29,7 @@
* MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST.
* 07-26-12 02.00.10 Modified MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST so that
* it uses MPI Chain SGE as well as MPI Simple SGE.
* 08-19-13 02.00.11 Added MPI2_TOOLBOX_TEXT_DISPLAY_TOOL and related info.
* --------------------------------------------------------------------------
*/
@ -48,6 +49,7 @@
#define MPI2_TOOLBOX_ISTWI_READ_WRITE_TOOL (0x03)
#define MPI2_TOOLBOX_BEACON_TOOL (0x05)
#define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06)
#define MPI2_TOOLBOX_TEXT_DISPLAY_TOOL (0x07)
/****************************************************************************
@ -321,6 +323,44 @@ typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY {
MPI2_POINTER pMpi2ToolboxDiagnosticCliReply_t;
/****************************************************************************
* Toolbox Console Text Display Tool
****************************************************************************/
/* Toolbox Console Text Display Tool request message */
typedef struct _MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 Console; /* 0x0C */
U8 Flags; /* 0x0D */
U16 Reserved6; /* 0x0E */
U8 TextToDisplay[4]; /* 0x10 */
} MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
MPI2_POINTER PTR_MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
Mpi2ToolboxTextDisplayRequest_t,
MPI2_POINTER pMpi2ToolboxTextDisplayRequest_t;
/* defines for the Console field */
#define MPI2_TOOLBOX_CONSOLE_TYPE_MASK (0xF0)
#define MPI2_TOOLBOX_CONSOLE_TYPE_DEFAULT (0x00)
#define MPI2_TOOLBOX_CONSOLE_TYPE_UART (0x10)
#define MPI2_TOOLBOX_CONSOLE_TYPE_ETHERNET (0x20)
#define MPI2_TOOLBOX_CONSOLE_NUMBER_MASK (0x0F)
/* defines for the Flags field */
#define MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP (0x01)
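Putting the new tool together: a request that echoes a short string to the first UART console with timestamps enabled could be set up as below (MPI2_FUNCTION_TOOLBOX is the usual toolbox function code; the actual submission path is driver-specific and omitted):

Mpi2ToolboxTextDisplayRequest_t req;

memset(&req, 0, sizeof(req));
req.Tool = MPI2_TOOLBOX_TEXT_DISPLAY_TOOL;
req.Function = MPI2_FUNCTION_TOOLBOX;
/* high nibble selects the console type, low nibble the console number */
req.Console = MPI2_TOOLBOX_CONSOLE_TYPE_UART |
    (0 & MPI2_TOOLBOX_CONSOLE_NUMBER_MASK);
req.Flags = MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP;
memcpy(req.TextToDisplay, "ok\n", 4);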
/*****************************************************************************
*
* Diagnostic Buffer Messages


@ -1,5 +1,5 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_type.h


@ -3,7 +3,7 @@
* for access to MPT (Message Passing Technology) firmware.
*
* This code is based on drivers/scsi/mpt2sas/mpt2_base.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -80,6 +80,10 @@ static int msix_disable = -1;
module_param(msix_disable, int, 0);
MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)");
static int max_msix_vectors = -1;
module_param(max_msix_vectors, int, 0);
MODULE_PARM_DESC(max_msix_vectors, " max msix vectors ");
static int mpt2sas_fwfault_debug;
MODULE_PARM_DESC(mpt2sas_fwfault_debug, " enable detection of firmware fault "
"and halt firmware - (default=0)");
@ -88,6 +92,12 @@ static int disable_discovery = -1;
module_param(disable_discovery, int, 0);
MODULE_PARM_DESC(disable_discovery, " disable discovery ");
static int
_base_get_ioc_facts(struct MPT2SAS_ADAPTER *ioc, int sleep_flag);
static int
_base_diag_reset(struct MPT2SAS_ADAPTER *ioc, int sleep_flag);
/**
* _scsih_set_fwfault_debug - global setting of ioc->fwfault_debug.
*
@ -1175,17 +1185,22 @@ static int
_base_config_dma_addressing(struct MPT2SAS_ADAPTER *ioc, struct pci_dev *pdev)
{
struct sysinfo s;
char *desc = NULL;
u64 consistent_dma_mask;
if (ioc->dma_mask)
consistent_dma_mask = DMA_BIT_MASK(64);
else
consistent_dma_mask = DMA_BIT_MASK(32);
if (sizeof(dma_addr_t) > 4) {
const uint64_t required_mask =
dma_get_required_mask(&pdev->dev);
if ((required_mask > DMA_BIT_MASK(32)) && !pci_set_dma_mask(pdev,
DMA_BIT_MASK(64)) && !pci_set_consistent_dma_mask(pdev,
DMA_BIT_MASK(64))) {
if ((required_mask > DMA_BIT_MASK(32)) &&
!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) &&
!pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) {
ioc->base_add_sg_single = &_base_add_sg_single_64;
ioc->sge_size = sizeof(Mpi2SGESimple64_t);
desc = "64";
ioc->dma_mask = 64;
goto out;
}
}
@ -1194,18 +1209,29 @@ _base_config_dma_addressing(struct MPT2SAS_ADAPTER *ioc, struct pci_dev *pdev)
&& !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
ioc->base_add_sg_single = &_base_add_sg_single_32;
ioc->sge_size = sizeof(Mpi2SGESimple32_t);
desc = "32";
ioc->dma_mask = 32;
} else
return -ENODEV;
out:
si_meminfo(&s);
printk(MPT2SAS_INFO_FMT "%s BIT PCI BUS DMA ADDRESSING SUPPORTED, "
"total mem (%ld kB)\n", ioc->name, desc, convert_to_kb(s.totalram));
printk(MPT2SAS_INFO_FMT
"%d BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
ioc->name, ioc->dma_mask, convert_to_kb(s.totalram));
return 0;
}
static int
_base_change_consistent_dma_mask(struct MPT2SAS_ADAPTER *ioc,
struct pci_dev *pdev)
{
if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
return -ENODEV;
}
return 0;
}
/**
 * _base_check_enable_msix - checks MSIX capability.
* @ioc: per adapter object
@ -1402,6 +1428,20 @@ _base_enable_msix(struct MPT2SAS_ADAPTER *ioc)
ioc->reply_queue_count = min_t(int, ioc->cpu_count,
ioc->msix_vector_count);
if (!ioc->rdpq_array_enable && max_msix_vectors == -1)
max_msix_vectors = 8;
if (max_msix_vectors > 0) {
ioc->reply_queue_count = min_t(int, max_msix_vectors,
ioc->reply_queue_count);
ioc->msix_vector_count = ioc->reply_queue_count;
} else if (max_msix_vectors == 0)
goto try_ioapic;
printk(MPT2SAS_INFO_FMT
"MSI-X vectors supported: %d, no of cores: %d, max_msix_vectors: %d\n",
ioc->name, ioc->msix_vector_count, ioc->cpu_count, max_msix_vectors);
entries = kcalloc(ioc->reply_queue_count, sizeof(struct msix_entry),
GFP_KERNEL);
if (!entries) {
@ -1414,10 +1454,10 @@ _base_enable_msix(struct MPT2SAS_ADAPTER *ioc)
for (i = 0, a = entries; i < ioc->reply_queue_count; i++, a++)
a->entry = i;
r = pci_enable_msix(ioc->pdev, entries, ioc->reply_queue_count);
r = pci_enable_msix_exact(ioc->pdev, entries, ioc->reply_queue_count);
if (r) {
dfailprintk(ioc, printk(MPT2SAS_INFO_FMT "pci_enable_msix "
"failed (r=%d) !!!\n", ioc->name, r));
dfailprintk(ioc, printk(MPT2SAS_INFO_FMT
"pci_enable_msix_exact failed (r=%d) !!!\n", ioc->name, r));
kfree(entries);
goto try_ioapic;
}
@ -1439,6 +1479,7 @@ _base_enable_msix(struct MPT2SAS_ADAPTER *ioc)
/* failback to io_apic interrupt routing */
try_ioapic:
ioc->reply_queue_count = 1;
r = _base_request_irq(ioc, 0, ioc->pdev->irq);
return r;
@ -1520,6 +1561,16 @@ mpt2sas_base_map_resources(struct MPT2SAS_ADAPTER *ioc)
}
_base_mask_interrupts(ioc);
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)
goto out_fail;
if (!ioc->rdpq_array_enable_assigned) {
ioc->rdpq_array_enable = ioc->rdpq_array_capable;
ioc->rdpq_array_enable_assigned = 1;
}
r = _base_enable_msix(ioc);
if (r)
goto out_fail;
@ -2317,7 +2368,8 @@ _base_static_config_pages(struct MPT2SAS_ADAPTER *ioc)
static void
_base_release_memory_pools(struct MPT2SAS_ADAPTER *ioc)
{
int i;
int i = 0;
struct reply_post_struct *rps;
dexitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__));
@ -2358,15 +2410,25 @@ _base_release_memory_pools(struct MPT2SAS_ADAPTER *ioc)
ioc->reply_free = NULL;
}
if (ioc->reply_post_free) {
pci_pool_free(ioc->reply_post_free_dma_pool,
ioc->reply_post_free, ioc->reply_post_free_dma);
if (ioc->reply_post) {
do {
rps = &ioc->reply_post[i];
if (rps->reply_post_free) {
pci_pool_free(
ioc->reply_post_free_dma_pool,
rps->reply_post_free,
rps->reply_post_free_dma);
dexitprintk(ioc, printk(MPT2SAS_INFO_FMT
"reply_post_free_pool(0x%p): free\n",
ioc->name, rps->reply_post_free));
rps->reply_post_free = NULL;
}
} while (ioc->rdpq_array_enable &&
(++i < ioc->reply_queue_count));
if (ioc->reply_post_free_dma_pool)
pci_pool_destroy(ioc->reply_post_free_dma_pool);
dexitprintk(ioc, printk(MPT2SAS_INFO_FMT
"reply_post_free_pool(0x%p): free\n", ioc->name,
ioc->reply_post_free));
ioc->reply_post_free = NULL;
kfree(ioc->reply_post);
}
if (ioc->config_page) {
@ -2509,6 +2571,65 @@ _base_allocate_memory_pools(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
ioc->max_sges_in_chain_message, ioc->shost->sg_tablesize,
ioc->chains_needed_per_io));
/* reply post queue, 16 byte align */
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
sz = reply_post_free_sz;
if (_base_is_controller_msix_enabled(ioc) && !ioc->rdpq_array_enable)
sz *= ioc->reply_queue_count;
ioc->reply_post = kcalloc((ioc->rdpq_array_enable) ?
(ioc->reply_queue_count):1,
sizeof(struct reply_post_struct), GFP_KERNEL);
if (!ioc->reply_post) {
printk(MPT2SAS_ERR_FMT "reply_post_free pool: kcalloc failed\n",
ioc->name);
goto out;
}
ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
ioc->pdev, sz, 16, 0);
if (!ioc->reply_post_free_dma_pool) {
printk(MPT2SAS_ERR_FMT
"reply_post_free pool: pci_pool_create failed\n",
ioc->name);
goto out;
}
i = 0;
do {
ioc->reply_post[i].reply_post_free =
pci_pool_alloc(ioc->reply_post_free_dma_pool,
GFP_KERNEL,
&ioc->reply_post[i].reply_post_free_dma);
if (!ioc->reply_post[i].reply_post_free) {
printk(MPT2SAS_ERR_FMT
"reply_post_free pool: pci_pool_alloc failed\n",
ioc->name);
goto out;
}
memset(ioc->reply_post[i].reply_post_free, 0, sz);
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT
"reply post free pool (0x%p): depth(%d),"
"element_size(%d), pool_size(%d kB)\n", ioc->name,
ioc->reply_post[i].reply_post_free,
ioc->reply_post_queue_depth, 8, sz/1024));
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT
"reply_post_free_dma = (0x%llx)\n", ioc->name,
(unsigned long long)
ioc->reply_post[i].reply_post_free_dma));
total_sz += sz;
} while (ioc->rdpq_array_enable && (++i < ioc->reply_queue_count));
if (ioc->dma_mask == 64) {
if (_base_change_consistent_dma_mask(ioc, ioc->pdev) != 0) {
printk(MPT2SAS_WARN_FMT
"no suitable consistent DMA mask for %s\n",
ioc->name, pci_name(ioc->pdev));
goto out;
}
}
ioc->scsiio_depth = ioc->hba_queue_depth -
ioc->hi_priority_depth - ioc->internal_depth;
@ -2720,37 +2841,6 @@ chain_done:
"(0x%llx)\n", ioc->name, (unsigned long long)ioc->reply_free_dma));
total_sz += sz;
/* reply post queue, 16 byte align */
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
if (_base_is_controller_msix_enabled(ioc))
sz = reply_post_free_sz * ioc->reply_queue_count;
else
sz = reply_post_free_sz;
ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
ioc->pdev, sz, 16, 0);
if (!ioc->reply_post_free_dma_pool) {
printk(MPT2SAS_ERR_FMT "reply_post_free pool: pci_pool_create "
"failed\n", ioc->name);
goto out;
}
ioc->reply_post_free = pci_pool_alloc(ioc->reply_post_free_dma_pool ,
GFP_KERNEL, &ioc->reply_post_free_dma);
if (!ioc->reply_post_free) {
printk(MPT2SAS_ERR_FMT "reply_post_free pool: pci_pool_alloc "
"failed\n", ioc->name);
goto out;
}
memset(ioc->reply_post_free, 0, sz);
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply post free pool"
"(0x%p): depth(%d), element_size(%d), pool_size(%d kB)\n",
ioc->name, ioc->reply_post_free, ioc->reply_post_queue_depth, 8,
sz/1024));
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_post_free_dma = "
"(0x%llx)\n", ioc->name, (unsigned long long)
ioc->reply_post_free_dma));
total_sz += sz;
ioc->config_page_sz = 512;
ioc->config_page = pci_alloc_consistent(ioc->pdev,
ioc->config_page_sz, &ioc->config_page_dma);
@ -3373,6 +3463,64 @@ _base_get_port_facts(struct MPT2SAS_ADAPTER *ioc, int port, int sleep_flag)
return 0;
}
/**
* _base_wait_for_iocstate - Wait until the card is in READY or OPERATIONAL
* @ioc: per adapter object
* @timeout:
* @sleep_flag: CAN_SLEEP or NO_SLEEP
*
* Returns 0 for success, non-zero for failure.
*/
static int
_base_wait_for_iocstate(struct MPT2SAS_ADAPTER *ioc, int timeout,
int sleep_flag)
{
u32 ioc_state, doorbell;
int rc;
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__));
if (ioc->pci_error_recovery)
return 0;
doorbell = mpt2sas_base_get_iocstate(ioc, 0);
ioc_state = doorbell & MPI2_IOC_STATE_MASK;
dhsprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: ioc_state(0x%08x)\n",
ioc->name, __func__, ioc_state));
switch (ioc_state) {
case MPI2_IOC_STATE_READY:
case MPI2_IOC_STATE_OPERATIONAL:
return 0;
}
if (doorbell & MPI2_DOORBELL_USED) {
dhsprintk(ioc, printk(MPT2SAS_INFO_FMT
"unexpected doorbell activ!e\n", ioc->name));
goto issue_diag_reset;
}
if (ioc_state == MPI2_IOC_STATE_FAULT) {
mpt2sas_base_fault_info(ioc, doorbell &
MPI2_DOORBELL_DATA_MASK);
goto issue_diag_reset;
}
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY,
timeout, sleep_flag);
if (ioc_state) {
printk(MPT2SAS_ERR_FMT
"%s: failed going to ready state (ioc_state=0x%x)\n",
ioc->name, __func__, ioc_state);
return -EFAULT;
}
issue_diag_reset:
rc = _base_diag_reset(ioc, sleep_flag);
return rc;
}
/**
* _base_get_ioc_facts - obtain ioc facts reply and save in ioc
* @ioc: per adapter object
@ -3391,6 +3539,13 @@ _base_get_ioc_facts(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__));
r = _base_wait_for_iocstate(ioc, 10, sleep_flag);
if (r) {
printk(MPT2SAS_ERR_FMT "%s: failed getting to correct state\n",
ioc->name, __func__);
return r;
}
mpi_reply_sz = sizeof(Mpi2IOCFactsReply_t);
mpi_request_sz = sizeof(Mpi2IOCFactsRequest_t);
memset(&mpi_request, 0, mpi_request_sz);
@ -3422,6 +3577,9 @@ _base_get_ioc_facts(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
facts->IOCCapabilities = le32_to_cpu(mpi_reply.IOCCapabilities);
if ((facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID))
ioc->ir_firmware = 1;
if ((facts->IOCCapabilities &
MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE))
ioc->rdpq_array_capable = 1;
facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
facts->IOCRequestFrameSize =
le16_to_cpu(mpi_reply.IOCRequestFrameSize);
@ -3457,9 +3615,12 @@ _base_send_ioc_init(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
{
Mpi2IOCInitRequest_t mpi_request;
Mpi2IOCInitReply_t mpi_reply;
int r;
int i, r = 0;
struct timeval current_time;
u16 ioc_status;
u32 reply_post_free_array_sz = 0;
Mpi2IOCInitRDPQArrayEntry *reply_post_free_array = NULL;
dma_addr_t reply_post_free_array_dma;
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__));
@ -3488,9 +3649,31 @@ _base_send_ioc_init(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
cpu_to_le64((u64)ioc->request_dma);
mpi_request.ReplyFreeQueueAddress =
cpu_to_le64((u64)ioc->reply_free_dma);
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)ioc->reply_post_free_dma);
if (ioc->rdpq_array_enable) {
reply_post_free_array_sz = ioc->reply_queue_count *
sizeof(Mpi2IOCInitRDPQArrayEntry);
reply_post_free_array = pci_alloc_consistent(ioc->pdev,
reply_post_free_array_sz, &reply_post_free_array_dma);
if (!reply_post_free_array) {
printk(MPT2SAS_ERR_FMT
"reply_post_free_array: pci_alloc_consistent failed\n",
ioc->name);
r = -ENOMEM;
goto out;
}
memset(reply_post_free_array, 0, reply_post_free_array_sz);
for (i = 0; i < ioc->reply_queue_count; i++)
reply_post_free_array[i].RDPQBaseAddress =
cpu_to_le64(
(u64)ioc->reply_post[i].reply_post_free_dma);
mpi_request.MsgFlags = MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE;
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)reply_post_free_array_dma);
} else {
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)ioc->reply_post[0].reply_post_free_dma);
}
/* This time stamp specifies number of milliseconds
* since epoch ~ midnight January 1, 1970.
@ -3518,7 +3701,7 @@ _base_send_ioc_init(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
if (r != 0) {
printk(MPT2SAS_ERR_FMT "%s: handshake failed (r=%d)\n",
ioc->name, __func__, r);
return r;
goto out;
}
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK;
@ -3528,7 +3711,12 @@ _base_send_ioc_init(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
r = -EIO;
}
return 0;
out:
if (reply_post_free_array)
pci_free_consistent(ioc->pdev, reply_post_free_array_sz,
reply_post_free_array,
reply_post_free_array_dma);
return r;
}
/**
@ -4061,7 +4249,7 @@ _base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
u8 hide_flag;
struct adapter_reply_queue *reply_q;
long reply_post_free;
u32 reply_post_free_sz;
u32 reply_post_free_sz, index = 0;
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__));
@ -4132,19 +4320,27 @@ _base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
_base_assign_reply_queues(ioc);
/* initialize Reply Post Free Queue */
reply_post_free = (long)ioc->reply_post_free;
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
reply_post_free = (long)ioc->reply_post[index].reply_post_free;
list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
reply_q->reply_post_host_index = 0;
reply_q->reply_post_free = (Mpi2ReplyDescriptorsUnion_t *)
reply_post_free;
for (i = 0; i < ioc->reply_post_queue_depth; i++)
reply_q->reply_post_free[i].Words =
cpu_to_le64(ULLONG_MAX);
cpu_to_le64(ULLONG_MAX);
if (!_base_is_controller_msix_enabled(ioc))
goto skip_init_reply_post_free_queue;
reply_post_free += reply_post_free_sz;
/*
* If RDPQ is enabled, switch to the next allocation.
* Otherwise advance within the contiguous region.
*/
if (ioc->rdpq_array_enable)
reply_post_free = (long)
ioc->reply_post[++index].reply_post_free;
else
reply_post_free += reply_post_free_sz;
}
skip_init_reply_post_free_queue:
@ -4272,6 +4468,8 @@ mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc)
}
}
ioc->rdpq_array_enable_assigned = 0;
ioc->dma_mask = 0;
r = mpt2sas_base_map_resources(ioc);
if (r)
goto out_free_resources;
@ -4633,6 +4831,16 @@ mpt2sas_base_hard_reset_handler(struct MPT2SAS_ADAPTER *ioc, int sleep_flag,
r = -EFAULT;
goto out;
}
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)
goto out;
if (ioc->rdpq_array_enable && !ioc->rdpq_array_capable)
panic("%s: Issue occurred with flashing controller firmware."
"Please reboot the system and ensure that the correct"
" firmware version is running\n", ioc->name);
r = _base_make_ioc_operational(ioc, sleep_flag);
if (!r)
_base_reset_handler(ioc, MPT2_IOC_DONE_RESET);


@ -3,7 +3,7 @@
* for access to MPT (Message Passing Technology) firmware.
*
* This code is based on drivers/scsi/mpt2sas/mpt2_base.h
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -69,8 +69,8 @@
#define MPT2SAS_DRIVER_NAME "mpt2sas"
#define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>"
#define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver"
#define MPT2SAS_DRIVER_VERSION "16.100.00.00"
#define MPT2SAS_MAJOR_VERSION 16
#define MPT2SAS_DRIVER_VERSION "18.100.00.00"
#define MPT2SAS_MAJOR_VERSION 18
#define MPT2SAS_MINOR_VERSION 100
#define MPT2SAS_BUILD_VERSION 00
#define MPT2SAS_RELEASE_VERSION 00
@ -355,6 +355,7 @@ struct _internal_cmd {
 * @slot: slot number
* @phy: phy identifier provided in sas device page 0
* @responding: used in _scsih_sas_device_mark_responding
* @pfa_led_on: flag for PFA LED status
*/
struct _sas_device {
struct list_head list;
@ -373,6 +374,7 @@ struct _sas_device {
u16 slot;
u8 phy;
u8 responding;
u8 pfa_led_on;
};
/**
@ -634,6 +636,11 @@ struct mpt2sas_port_facts {
u16 MaxPostedCmdBuffers;
};
struct reply_post_struct {
Mpi2ReplyDescriptorsUnion_t *reply_post_free;
dma_addr_t reply_post_free_dma;
};
/**
* enum mutex_type - task management mutex type
 * @TM_MUTEX_OFF: mutex is not required because calling function is acquiring it
@ -661,6 +668,7 @@ typedef void (*MPT2SAS_FLUSH_RUNNING_CMDS)(struct MPT2SAS_ADAPTER *ioc);
* @ir_firmware: IR firmware present
* @bars: bitmask of BAR's that must be configured
* @mask_interrupts: ignore interrupt
* @dma_mask: used to set the consistent dma mask
* @fault_reset_work_q_name: fw fault work queue
* @fault_reset_work_q: ""
* @fault_reset_work: ""
@ -777,8 +785,11 @@ typedef void (*MPT2SAS_FLUSH_RUNNING_CMDS)(struct MPT2SAS_ADAPTER *ioc);
* @reply_free_dma_pool:
 * @reply_free_host_index: tail index in pool to insert free replies
* @reply_post_queue_depth: reply post queue depth
* @reply_post_free: pool for reply post (64bit descriptor)
* @reply_post_free_dma:
* @reply_post_struct: struct for reply_post_free physical & virt address
* @rdpq_array_capable: FW supports multiple reply queue addresses in ioc_init
* @rdpq_array_enable: rdpq_array support is enabled in the driver
* @rdpq_array_enable_assigned: this ensures that rdpq_array_enable flag
 * is assigned only once
* @reply_queue_count: number of reply queue's
 * @reply_queue_list: linked list containing the reply queue info
* @reply_post_host_index: head index in the pool where FW completes IO
@ -800,6 +811,7 @@ struct MPT2SAS_ADAPTER {
u8 ir_firmware;
int bars;
u8 mask_interrupts;
int dma_mask;
/* fw fault handler */
char fault_reset_work_q_name[20];
@ -970,8 +982,10 @@ struct MPT2SAS_ADAPTER {
/* reply post queue */
u16 reply_post_queue_depth;
Mpi2ReplyDescriptorsUnion_t *reply_post_free;
dma_addr_t reply_post_free_dma;
struct reply_post_struct *reply_post;
u8 rdpq_array_capable;
u8 rdpq_array_enable;
u8 rdpq_array_enable_assigned;
struct dma_pool *reply_post_free_dma_pool;
u8 reply_queue_count;
struct list_head reply_queue_list;


@ -2,7 +2,7 @@
* This module provides common API for accessing firmware configuration pages
*
* This code is based on drivers/scsi/mpt2sas/mpt2_base.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or


@ -3,7 +3,7 @@
* controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_ctl.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or


@ -3,7 +3,7 @@
* controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_ctl.h
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or


@ -2,7 +2,7 @@
* Logging Support for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_debug.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or


@ -2,7 +2,7 @@
* Scsi Host Layer for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_scsih.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -55,6 +55,8 @@
#include <linux/raid_class.h>
#include <linux/slab.h>
#include <asm/unaligned.h>
#include "mpt2sas_base.h"
MODULE_AUTHOR(MPT2SAS_AUTHOR);
@ -145,7 +147,7 @@ struct sense_info {
};
#define MPT2SAS_TURN_ON_FAULT_LED (0xFFFC)
#define MPT2SAS_TURN_ON_PFA_LED (0xFFFC)
#define MPT2SAS_PORT_ENABLE_COMPLETE (0xFFFD)
#define MPT2SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
/**
@ -3858,85 +3860,46 @@ _scsih_setup_direct_io(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid)
{
u32 v_lba, p_lba, stripe_off, stripe_unit, column, io_size;
sector_t v_lba, p_lba, stripe_off, column, io_size;
u32 stripe_sz, stripe_exp;
u8 num_pds, *cdb_ptr, i;
u8 cdb0 = scmd->cmnd[0];
u64 v_llba;
u8 num_pds, cmd = scmd->cmnd[0];
/*
 * Try Direct I/O to RAID member disks
*/
if (cdb0 == READ_16 || cdb0 == READ_10 ||
cdb0 == WRITE_16 || cdb0 == WRITE_10) {
cdb_ptr = mpi_request->CDB.CDB32;
if (cmd != READ_10 && cmd != WRITE_10 &&
cmd != READ_16 && cmd != WRITE_16)
return;
if ((cdb0 < READ_16) || !(cdb_ptr[2] | cdb_ptr[3] | cdb_ptr[4]
| cdb_ptr[5])) {
io_size = scsi_bufflen(scmd) >>
raid_device->block_exponent;
i = (cdb0 < READ_16) ? 2 : 6;
/* get virtual lba */
v_lba = be32_to_cpu(*(__be32 *)(&cdb_ptr[i]));
if (cmd == READ_10 || cmd == WRITE_10)
v_lba = get_unaligned_be32(&mpi_request->CDB.CDB32[2]);
else
v_lba = get_unaligned_be64(&mpi_request->CDB.CDB32[2]);
if (((u64)v_lba + (u64)io_size - 1) <=
(u32)raid_device->max_lba) {
stripe_sz = raid_device->stripe_sz;
stripe_exp = raid_device->stripe_exponent;
stripe_off = v_lba & (stripe_sz - 1);
io_size = scsi_bufflen(scmd) >> raid_device->block_exponent;
/* Check whether IO falls within a stripe */
if ((stripe_off + io_size) <= stripe_sz) {
num_pds = raid_device->num_pds;
p_lba = v_lba >> stripe_exp;
stripe_unit = p_lba / num_pds;
column = p_lba % num_pds;
p_lba = (stripe_unit << stripe_exp) +
stripe_off;
mpi_request->DevHandle =
cpu_to_le16(raid_device->
pd_handle[column]);
(*(__be32 *)(&cdb_ptr[i])) =
cpu_to_be32(p_lba);
/*
 * WD: To indicate this I/O is direct I/O
*/
_scsih_scsi_direct_io_set(ioc, smid, 1);
}
}
} else {
io_size = scsi_bufflen(scmd) >>
raid_device->block_exponent;
/* get virtual lba */
v_llba = be64_to_cpu(*(__be64 *)(&cdb_ptr[2]));
if (v_lba + io_size - 1 > raid_device->max_lba)
return;
if ((v_llba + (u64)io_size - 1) <=
raid_device->max_lba) {
stripe_sz = raid_device->stripe_sz;
stripe_exp = raid_device->stripe_exponent;
stripe_off = (u32) (v_llba & (stripe_sz - 1));
stripe_sz = raid_device->stripe_sz;
stripe_exp = raid_device->stripe_exponent;
stripe_off = v_lba & (stripe_sz - 1);
/* Check whether IO falls within a stripe */
if ((stripe_off + io_size) <= stripe_sz) {
num_pds = raid_device->num_pds;
p_lba = (u32)(v_llba >> stripe_exp);
stripe_unit = p_lba / num_pds;
column = p_lba % num_pds;
p_lba = (stripe_unit << stripe_exp) +
stripe_off;
mpi_request->DevHandle =
cpu_to_le16(raid_device->
pd_handle[column]);
(*(__be64 *)(&cdb_ptr[2])) =
cpu_to_be64((u64)p_lba);
/*
 * WD: To indicate this I/O is direct I/O
*/
_scsih_scsi_direct_io_set(ioc, smid, 1);
}
}
}
}
/* Return unless IO falls within a stripe */
if (stripe_off + io_size > stripe_sz)
return;
num_pds = raid_device->num_pds;
p_lba = v_lba >> stripe_exp;
column = sector_div(p_lba, num_pds);
p_lba = (p_lba << stripe_exp) + stripe_off;
mpi_request->DevHandle = cpu_to_le16(raid_device->pd_handle[column]);
if (cmd == READ_10 || cmd == WRITE_10)
put_unaligned_be32(lower_32_bits(p_lba),
&mpi_request->CDB.CDB32[2]);
else
put_unaligned_be64(p_lba, &mpi_request->CDB.CDB32[2]);
_scsih_scsi_direct_io_set(ioc, smid, 1);
}
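The rewritten mapping reduces to three steps: split the virtual LBA into a stripe offset and a stripe number, divide the stripe number across the member disks (the remainder picks the disk, the quotient is the row on that disk), then rebuild the physical LBA. The same arithmetic as a standalone sketch (sector_div() divides its first argument in place and returns the remainder; the helper name is hypothetical):

/* Illustration only: map a virtual LBA to (member disk, physical LBA). */
static void
map_direct_io(sector_t v_lba, u32 stripe_exp, u8 num_pds,
	u32 *column, sector_t *p_lba)
{
	sector_t stripe_off = v_lba & (((sector_t)1 << stripe_exp) - 1);
	sector_t stripe_num = v_lba >> stripe_exp;

	*column = sector_div(stripe_num, num_pds);	/* remainder: which disk */
	*p_lba = (stripe_num << stripe_exp) + stripe_off; /* quotient: row */
}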
/**
@ -4308,7 +4271,7 @@ _scsih_scsi_ioc_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
#endif
/**
* _scsih_turn_on_fault_led - illuminate Fault LED
* _scsih_turn_on_pfa_led - illuminate PFA LED
* @ioc: per adapter object
* @handle: device handle
* Context: process
@ -4316,10 +4279,15 @@ _scsih_scsi_ioc_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
* Return nothing.
*/
static void
_scsih_turn_on_fault_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
_scsih_turn_on_pfa_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
{
Mpi2SepReply_t mpi_reply;
Mpi2SepRequest_t mpi_request;
struct _sas_device *sas_device;
sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
if (!sas_device)
return;
memset(&mpi_request, 0, sizeof(Mpi2SepRequest_t));
mpi_request.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
@ -4334,6 +4302,47 @@ _scsih_turn_on_fault_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
__FILE__, __LINE__, __func__);
return;
}
sas_device->pfa_led_on = 1;
if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT
"enclosure_processor: ioc_status (0x%04x), loginfo(0x%08x)\n",
ioc->name, le16_to_cpu(mpi_reply.IOCStatus),
le32_to_cpu(mpi_reply.IOCLogInfo)));
return;
}
}
/**
* _scsih_turn_off_pfa_led - turn off PFA LED
* @ioc: per adapter object
 * @sas_device: sas device whose PFA LED has to be turned off
* Context: process
*
* Return nothing.
*/
static void
_scsih_turn_off_pfa_led(struct MPT2SAS_ADAPTER *ioc,
struct _sas_device *sas_device)
{
Mpi2SepReply_t mpi_reply;
Mpi2SepRequest_t mpi_request;
memset(&mpi_request, 0, sizeof(Mpi2SepRequest_t));
mpi_request.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
mpi_request.Action = MPI2_SEP_REQ_ACTION_WRITE_STATUS;
mpi_request.SlotStatus = 0;
mpi_request.Slot = cpu_to_le16(sas_device->slot);
mpi_request.DevHandle = 0;
mpi_request.EnclosureHandle = cpu_to_le16(sas_device->enclosure_handle);
mpi_request.Flags = MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS;
if ((mpt2sas_base_scsi_enclosure_processor(ioc, &mpi_reply,
&mpi_request)) != 0) {
printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n", ioc->name,
__FILE__, __LINE__, __func__);
return;
}
if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "enclosure_processor: "
@ -4345,7 +4354,7 @@ _scsih_turn_on_fault_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
}
/**
* _scsih_send_event_to_turn_on_fault_led - fire delayed event
* _scsih_send_event_to_turn_on_pfa_led - fire delayed event
* @ioc: per adapter object
* @handle: device handle
* Context: interrupt.
@ -4353,14 +4362,14 @@ _scsih_turn_on_fault_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
* Return nothing.
*/
static void
_scsih_send_event_to_turn_on_fault_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
_scsih_send_event_to_turn_on_pfa_led(struct MPT2SAS_ADAPTER *ioc, u16 handle)
{
struct fw_event_work *fw_event;
fw_event = kzalloc(sizeof(struct fw_event_work), GFP_ATOMIC);
if (!fw_event)
return;
fw_event->event = MPT2SAS_TURN_ON_FAULT_LED;
fw_event->event = MPT2SAS_TURN_ON_PFA_LED;
fw_event->device_handle = handle;
fw_event->ioc = ioc;
_scsih_fw_event_add(ioc, fw_event);
@ -4404,7 +4413,7 @@ _scsih_smart_predicted_fault(struct MPT2SAS_ADAPTER *ioc, u16 handle)
spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
if (ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_IBM)
_scsih_send_event_to_turn_on_fault_led(ioc, handle);
_scsih_send_event_to_turn_on_pfa_led(ioc, handle);
/* insert into event log */
sz = offsetof(Mpi2EventNotificationReply_t, EventData) +
@ -5325,6 +5334,12 @@ _scsih_remove_device(struct MPT2SAS_ADAPTER *ioc,
{
struct MPT2SAS_TARGET *sas_target_priv_data;
if ((ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_IBM) &&
(sas_device->pfa_led_on)) {
_scsih_turn_off_pfa_led(ioc, sas_device);
sas_device->pfa_led_on = 0;
}
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter: "
"handle(0x%04x), sas_addr(0x%016llx)\n", ioc->name, __func__,
sas_device->handle, (unsigned long long)
@ -7441,8 +7456,8 @@ _firmware_event_work(struct work_struct *work)
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "port enable: complete "
"from worker thread\n", ioc->name));
break;
case MPT2SAS_TURN_ON_FAULT_LED:
_scsih_turn_on_fault_led(ioc, fw_event->device_handle);
case MPT2SAS_TURN_ON_PFA_LED:
_scsih_turn_on_pfa_led(ioc, fw_event->device_handle);
break;
case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
_scsih_sas_topology_change_event(ioc, fw_event);
@ -8132,6 +8147,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct MPT2SAS_ADAPTER *ioc;
struct Scsi_Host *shost;
int rv;
shost = scsi_host_alloc(&scsih_driver_template,
sizeof(struct MPT2SAS_ADAPTER));
@ -8227,6 +8243,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!ioc->firmware_event_thread) {
printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
rv = -ENODEV;
goto out_thread_fail;
}
@ -8234,6 +8251,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if ((mpt2sas_base_attach(ioc))) {
printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
rv = -ENODEV;
goto out_attach_fail;
}
@ -8251,7 +8269,8 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
} else
ioc->hide_drives = 0;
if ((scsi_add_host(shost, &pdev->dev))) {
rv = scsi_add_host(shost, &pdev->dev);
if (rv) {
printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
goto out_add_shost_fail;
@ -8268,7 +8287,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
out_thread_fail:
list_del(&ioc->list);
scsi_host_put(shost);
return -ENODEV;
return rv;
}
#ifdef CONFIG_PM


@ -2,7 +2,7 @@
* SAS Transport Layer for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_transport.c
* Copyright (C) 2007-2013 LSI Corporation
* Copyright (C) 2007-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or


@ -2,7 +2,7 @@
# Kernel configuration file for the MPT3SAS
#
# This code is based on drivers/scsi/mpt3sas/Kconfig
# Copyright (C) 2012-2013 LSI Corporation
# Copyright (C) 2012-2014 LSI Corporation
# (mailto:DL-MPTFusionLinux@lsi.com)
# This program is free software; you can redistribute it and/or


@ -1,5 +1,5 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2.h
@ -8,7 +8,7 @@
* scatter/gather formats.
* Creation Date: June 21, 2006
*
* mpi2.h Version: 02.00.29
* mpi2.h Version: 02.00.31
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
@ -86,6 +86,8 @@
* 11-27-12 02.00.28 Bumped MPI2_HEADER_VERSION_UNIT.
* 12-20-12 02.00.29 Bumped MPI2_HEADER_VERSION_UNIT.
* Added MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET.
* 04-09-13 02.00.30 Bumped MPI2_HEADER_VERSION_UNIT.
* 04-17-13 02.00.31 Bumped MPI2_HEADER_VERSION_UNIT.
* --------------------------------------------------------------------------
*/
@ -119,7 +121,7 @@
#define MPI2_VERSION_02_05 (0x0205)
/*Unit and Dev versioning for this MPI header set */
#define MPI2_HEADER_VERSION_UNIT (0x1D)
#define MPI2_HEADER_VERSION_UNIT (0x1F)
#define MPI2_HEADER_VERSION_DEV (0x00)
#define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI2_HEADER_VERSION_UNIT_SHIFT (8)


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_cnfg.h
* Title: MPI Configuration messages and pages
* Creation Date: November 10, 2006
*
* mpi2_cnfg.h Version: 02.00.24
* mpi2_cnfg.h Version: 02.00.26
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
@ -160,6 +160,11 @@
* 12-20-12 02.00.24 Marked MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION as
* obsolete for MPI v2.5 and later.
* Added some defines for 12G SAS speeds.
* 04-09-13 02.00.25 Added MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK.
* Fixed MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS to
* match the specification.
* 08-19-13 02.00.26 Added reserved words to MPI2_CONFIG_PAGE_IO_UNIT_7 for
* future use.
* --------------------------------------------------------------------------
*/
@ -792,6 +797,7 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1 {
#define MPI2_IOUNITPAGE1_PAGEVERSION (0x04)
/*IO Unit Page 1 Flags defines */
#define MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK (0x00004000)
#define MPI25_IOUNITPAGE1_NEW_DEVICE_FAST_PATH_DISABLE (0x00002000)
#define MPI25_IOUNITPAGE1_DISABLE_FAST_PATH (0x00001000)
#define MPI2_IOUNITPAGE1_ENABLE_HOST_BASED_DISCOVERY (0x00000800)
@ -870,7 +876,7 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_5 {
#define MPI2_IOUNITPAGE5_PAGEVERSION (0x00)
/*defines for IO Unit Page 5 DmaEngineCapabilities field */
#define MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS (0xFF00)
#define MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS (0xFFFF0000)
#define MPI2_IOUNITPAGE5_DMA_CAP_SHIFT_MAX_REQUESTS (16)
#define MPI2_IOUNITPAGE5_DMA_CAP_EEDP (0x0008)
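The mask fix matters because the field is extracted with a 16-bit shift: masking with the old 0xFF00 value and then shifting by 16 always yielded zero. With the corrected mask the extraction works as intended (page5 here is a hypothetical pointer to a fetched IO Unit Page 5):

/* Max outstanding DMA engine requests from IO Unit Page 5 (sketch). */
u32 max_requests = (le32_to_cpu(page5->DmaEngineCapabilities) &
    MPI2_IOUNITPAGE5_DMA_CAP_MASK_MAX_REQUESTS) >>
    MPI2_IOUNITPAGE5_DMA_CAP_SHIFT_MAX_REQUESTS;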
@ -920,11 +926,15 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_7 {
U8
BoardTemperatureUnits; /*0x16 */
U8 Reserved3; /*0x17 */
U32 Reserved4; /* 0x18 */
U32 Reserved5; /* 0x1C */
U32 Reserved6; /* 0x20 */
U32 Reserved7; /* 0x24 */
} MPI2_CONFIG_PAGE_IO_UNIT_7,
*PTR_MPI2_CONFIG_PAGE_IO_UNIT_7,
Mpi2IOUnitPage7_t, *pMpi2IOUnitPage7_t;
#define MPI2_IOUNITPAGE7_PAGEVERSION (0x02)
#define MPI2_IOUNITPAGE7_PAGEVERSION (0x04)
/*defines for IO Unit Page 7 CurrentPowerMode and PreviousPowerMode fields */
#define MPI25_IOUNITPAGE7_PM_INIT_MASK (0xC0)


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_init.h
* Title: MPI SCSI initiator mode messages and structures
* Creation Date: June 23, 2006
*
* mpi2_init.h Version: 02.00.14
* mpi2_init.h Version: 02.00.15
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
@ -44,6 +44,8 @@
* Priority to match SAM-4.
* Added EEDPErrorOffset to MPI2_SCSI_IO_REPLY.
* 07-10-12 02.00.14 Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION.
* 04-09-13 02.00.15 Added SCSIStatusQualifier field to MPI2_SCSI_IO_REPLY,
* replacing the Reserved4 field.
* --------------------------------------------------------------------------
*/
@ -347,7 +349,7 @@ typedef struct _MPI2_SCSI_IO_REPLY {
U32 SenseCount; /*0x18 */
U32 ResponseInfo; /*0x1C */
U16 TaskTag; /*0x20 */
U16 Reserved4; /*0x22 */
U16 SCSIStatusQualifier; /* 0x22 */
U32 BidirectionalTransferCount; /*0x24 */
U32 EEDPErrorOffset; /*0x28 *//*MPI 2.5 only; Reserved in MPI 2.0*/
U32 Reserved6; /*0x2C */


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_ioc.h
* Title: MPI IOC, Port, Event, FW Download, and FW Upload messages
* Creation Date: October 11, 2006
*
* mpi2_ioc.h Version: 02.00.22
* mpi2_ioc.h Version: 02.00.23
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
@ -127,6 +127,11 @@
* 07-26-12 02.00.22 Added MPI2_IOCFACTS_EXCEPT_PARTIAL_MEMORY_FAILURE.
* Added ElapsedSeconds field to
* MPI2_EVENT_DATA_IR_OPERATION_STATUS.
* 08-19-13 02.00.23 For IOCInit, added MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE
* and MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY.
* Added MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE.
* Added MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY.
* Added Encrypted Hash Extended Image.
* --------------------------------------------------------------------------
*/
@ -182,6 +187,10 @@ typedef struct _MPI2_IOC_INIT_REQUEST {
#define MPI2_WHOINIT_HOST_DRIVER (0x04)
#define MPI2_WHOINIT_MANUFACTURER (0x05)
/* MsgFlags */
#define MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE (0x01)
/*MsgVersion */
#define MPI2_IOCINIT_MSGVERSION_MAJOR_MASK (0xFF00)
#define MPI2_IOCINIT_MSGVERSION_MAJOR_SHIFT (8)
@ -194,9 +203,19 @@ typedef struct _MPI2_IOC_INIT_REQUEST {
#define MPI2_IOCINIT_HDRVERSION_DEV_MASK (0x00FF)
#define MPI2_IOCINIT_HDRVERSION_DEV_SHIFT (0)
/*minimum depth for the Reply Descriptor Post Queue */
/*minimum depth for a Reply Descriptor Post Queue */
#define MPI2_RDPQ_DEPTH_MIN (16)
/* Reply Descriptor Post Queue Array Entry */
typedef struct _MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY {
U64 RDPQBaseAddress; /* 0x00 */
U32 Reserved1; /* 0x08 */
U32 Reserved2; /* 0x0C */
} MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY,
*PTR_MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY,
Mpi2IOCInitRDPQArrayEntry, *pMpi2IOCInitRDPQArrayEntry;
/*IOCInit Reply message */
typedef struct _MPI2_IOC_INIT_REPLY {
U8 WhoInit; /*0x00 */
@ -306,6 +325,7 @@ typedef struct _MPI2_IOC_FACTS_REPLY {
/*ProductID field uses MPI2_FW_HEADER_PID_ */
/*IOCCapabilities */
#define MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE (0x00040000)
#define MPI25_IOCFACTS_CAPABILITY_FAST_PATH_CAPABLE (0x00020000)
#define MPI2_IOCFACTS_CAPABILITY_HOST_BASED_DISCOVERY (0x00010000)
#define MPI2_IOCFACTS_CAPABILITY_MSI_X_INDEX (0x00008000)
@ -1140,6 +1160,7 @@ typedef struct _MPI2_FW_DOWNLOAD_REQUEST {
#define MPI2_FW_DOWNLOAD_ITYPE_MEGARAID (0x09)
#define MPI2_FW_DOWNLOAD_ITYPE_COMPLETE (0x0A)
#define MPI2_FW_DOWNLOAD_ITYPE_COMMON_BOOT_BLOCK (0x0B)
#define MPI2_FW_DOWNLOAD_ITYPE_PUBLIC_KEY (0x0C)
#define MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC (0xF0)
/*MPI v2.0 FWDownload TransactionContext Element */
@ -1404,6 +1425,7 @@ typedef struct _MPI2_EXT_IMAGE_HEADER {
#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT (0x06)
#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES (0x07)
#define MPI2_EXT_IMAGE_TYPE_MEGARAID (0x08)
#define MPI2_EXT_IMAGE_TYPE_ENCRYPTED_HASH (0x09)
#define MPI2_EXT_IMAGE_TYPE_MIN_PRODUCT_SPECIFIC (0x80)
#define MPI2_EXT_IMAGE_TYPE_MAX_PRODUCT_SPECIFIC (0xFF)
@ -1560,6 +1582,42 @@ typedef struct _MPI2_INIT_IMAGE_FOOTER {
/*defines for the ResetVector field */
#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET (0x14)
/* Encrypted Hash Extended Image Data */
typedef struct _MPI25_ENCRYPTED_HASH_ENTRY {
U8 HashImageType; /* 0x00 */
U8 HashAlgorithm; /* 0x01 */
U8 EncryptionAlgorithm; /* 0x02 */
U8 Reserved1; /* 0x03 */
U32 Reserved2; /* 0x04 */
U32 EncryptedHash[1]; /* 0x08 */ /* variable length */
} MPI25_ENCRYPTED_HASH_ENTRY, *PTR_MPI25_ENCRYPTED_HASH_ENTRY,
Mpi25EncryptedHashEntry_t, *pMpi25EncryptedHashEntry_t;
/* values for HashImageType */
#define MPI25_HASH_IMAGE_TYPE_UNUSED (0x00)
#define MPI25_HASH_IMAGE_TYPE_FIRMWARE (0x01)
/* values for HashAlgorithm */
#define MPI25_HASH_ALGORITHM_UNUSED (0x00)
#define MPI25_HASH_ALGORITHM_SHA256 (0x01)
/* values for EncryptionAlgorithm */
#define MPI25_ENCRYPTION_ALG_UNUSED (0x00)
#define MPI25_ENCRYPTION_ALG_RSA256 (0x01)
typedef struct _MPI25_ENCRYPTED_HASH_DATA {
U8 ImageVersion; /* 0x00 */
U8 NumHash; /* 0x01 */
U16 Reserved1; /* 0x02 */
U32 Reserved2; /* 0x04 */
MPI25_ENCRYPTED_HASH_ENTRY EncryptedHashEntry[1]; /* 0x08 */
} MPI25_ENCRYPTED_HASH_DATA, *PTR_MPI25_ENCRYPTED_HASH_DATA,
Mpi25EncryptedHashData_t, *pMpi25EncryptedHashData_t;
/****************************************************************************
* PowerManagementControl message
****************************************************************************/


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_raid.h
* Title: MPI Integrated RAID messages and structures
* Creation Date: April 26, 2007
*
* mpi2_raid.h Version: 02.00.09
* mpi2_raid.h Version: 02.00.10
*
* Version History
* ---------------
@ -30,6 +30,7 @@
* 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN.
* 07-26-12 02.00.09 Added ElapsedSeconds field to MPI2_RAID_VOL_INDICATOR.
* Added MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID define.
* 04-17-13 02.00.10 Added MPI25_RAID_ACTION_ADATA_ALLOW_PI.
* --------------------------------------------------------------------------
*/
@ -46,6 +47,9 @@
* RAID Action messages
****************************************************************************/
/* ActionDataWord defines for use with MPI2_RAID_ACTION_CREATE_VOLUME action */
#define MPI25_RAID_ACTION_ADATA_ALLOW_PI (0x80000000)
/*ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */
#define MPI2_RAID_ACTION_ADATA_KEEP_LBA0 (0x00000000)
#define MPI2_RAID_ACTION_ADATA_ZERO_LBA0 (0x00000001)


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_sas.h
* Title: MPI Serial Attached SCSI structures and definitions
* Creation Date: February 9, 2007
*
* mpi2_sas.h Version: 02.00.07
* mpi2_sas.h Version: 02.00.08
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
@ -30,6 +30,8 @@
* 11-18-11 02.00.06 Incorporating additions for MPI v2.5.
* 07-10-12 02.00.07 Added MPI2_SATA_PT_SGE_UNION for use in the SATA
* Passthrough Request message.
* 08-19-13 02.00.08 Made MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL obsolete
* for anything newer than MPI v2.0.
* --------------------------------------------------------------------------
*/
@ -251,7 +253,7 @@ typedef struct _MPI2_SAS_IOUNIT_CONTROL_REQUEST {
#define MPI2_SAS_OP_PHY_CLEAR_ERROR_LOG (0x08)
#define MPI2_SAS_OP_SEND_PRIMITIVE (0x0A)
#define MPI2_SAS_OP_FORCE_FULL_DISCOVERY (0x0B)
#define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C)
#define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C) /* MPI v2.0 only */
#define MPI2_SAS_OP_REMOVE_DEVICE (0x0D)
#define MPI2_SAS_OP_LOOKUP_MAPPING (0x0E)
#define MPI2_SAS_OP_SET_IOC_PARAMETER (0x0F)


@ -1,12 +1,12 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_tool.h
* Title: MPI diagnostic tool structures and definitions
* Creation Date: March 26, 2007
*
* mpi2_tool.h Version: 02.00.10
* mpi2_tool.h Version: 02.00.11
*
* Version History
* ---------------
@ -32,6 +32,7 @@
* message.
* 07-26-12 02.00.10 Modified MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST so that
* it uses MPI Chain SGE as well as MPI Simple SGE.
* 08-19-13 02.00.11 Added MPI2_TOOLBOX_TEXT_DISPLAY_TOOL and related info.
* --------------------------------------------------------------------------
*/
@ -51,6 +52,7 @@
#define MPI2_TOOLBOX_ISTWI_READ_WRITE_TOOL (0x03)
#define MPI2_TOOLBOX_BEACON_TOOL (0x05)
#define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06)
#define MPI2_TOOLBOX_TEXT_DISPLAY_TOOL (0x07)
/****************************************************************************
* Toolbox reply
@ -331,6 +333,45 @@ typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY {
Mpi2ToolboxDiagnosticCliReply_t,
*pMpi2ToolboxDiagnosticCliReply_t;
/****************************************************************************
* Toolbox Console Text Display Tool
****************************************************************************/
/* Toolbox Console Text Display Tool request message */
typedef struct _MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 Console; /* 0x0C */
U8 Flags; /* 0x0D */
U16 Reserved6; /* 0x0E */
U8 TextToDisplay[4]; /* 0x10 */
} MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
*PTR_MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
Mpi2ToolboxTextDisplayRequest_t,
*pMpi2ToolboxTextDisplayRequest_t;
/* defines for the Console field */
#define MPI2_TOOLBOX_CONSOLE_TYPE_MASK (0xF0)
#define MPI2_TOOLBOX_CONSOLE_TYPE_DEFAULT (0x00)
#define MPI2_TOOLBOX_CONSOLE_TYPE_UART (0x10)
#define MPI2_TOOLBOX_CONSOLE_TYPE_ETHERNET (0x20)
#define MPI2_TOOLBOX_CONSOLE_NUMBER_MASK (0x0F)
/* defines for the Flags field */
#define MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP (0x01)
/*****************************************************************************
*
* Diagnostic Buffer Messages


@ -1,5 +1,5 @@
/*
* Copyright (c) 2000-2013 LSI Corporation.
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_type.h


@ -3,7 +3,7 @@
* for access to MPT (Message Passing Technology) firmware.
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_base.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -91,6 +91,8 @@ static int mpt3sas_fwfault_debug;
MODULE_PARM_DESC(mpt3sas_fwfault_debug,
" enable detection of firmware fault and halt firmware - (default=0)");
static int
_base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag);
/**
* _scsih_set_fwfault_debug - global setting of ioc->fwfault_debug.
@ -1482,17 +1484,22 @@ static int
_base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
{
struct sysinfo s;
char *desc = NULL;
u64 consistent_dma_mask;
if (ioc->dma_mask)
consistent_dma_mask = DMA_BIT_MASK(64);
else
consistent_dma_mask = DMA_BIT_MASK(32);
if (sizeof(dma_addr_t) > 4) {
const uint64_t required_mask =
dma_get_required_mask(&pdev->dev);
if ((required_mask > DMA_BIT_MASK(32)) &&
!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) &&
!pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
!pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) {
ioc->base_add_sg_single = &_base_add_sg_single_64;
ioc->sge_size = sizeof(Mpi2SGESimple64_t);
desc = "64";
ioc->dma_mask = 64;
goto out;
}
}
@ -1501,19 +1508,30 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
&& !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
ioc->base_add_sg_single = &_base_add_sg_single_32;
ioc->sge_size = sizeof(Mpi2SGESimple32_t);
desc = "32";
ioc->dma_mask = 32;
} else
return -ENODEV;
out:
si_meminfo(&s);
pr_info(MPT3SAS_FMT
"%s BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
ioc->name, desc, convert_to_kb(s.totalram));
"%d BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
ioc->name, ioc->dma_mask, convert_to_kb(s.totalram));
return 0;
}
static int
_base_change_consistent_dma_mask(struct MPT3SAS_ADAPTER *ioc,
struct pci_dev *pdev)
{
if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
return -ENODEV;
}
return 0;
}
/**
 * _base_check_enable_msix - checks MSIX capability.
* @ioc: per adapter object
@ -1698,11 +1716,15 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
": %d, max_msix_vectors: %d\n", ioc->name, ioc->msix_vector_count,
ioc->cpu_count, max_msix_vectors);
if (!ioc->rdpq_array_enable && max_msix_vectors == -1)
max_msix_vectors = 8;
if (max_msix_vectors > 0) {
ioc->reply_queue_count = min_t(int, max_msix_vectors,
ioc->reply_queue_count);
ioc->msix_vector_count = ioc->reply_queue_count;
}
} else if (max_msix_vectors == 0)
goto try_ioapic;
entries = kcalloc(ioc->reply_queue_count, sizeof(struct msix_entry),
GFP_KERNEL);
@ -1716,10 +1738,10 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
for (i = 0, a = entries; i < ioc->reply_queue_count; i++, a++)
a->entry = i;
r = pci_enable_msix(ioc->pdev, entries, ioc->reply_queue_count);
r = pci_enable_msix_exact(ioc->pdev, entries, ioc->reply_queue_count);
if (r) {
dfailprintk(ioc, pr_info(MPT3SAS_FMT
"pci_enable_msix failed (r=%d) !!!\n",
"pci_enable_msix_exact failed (r=%d) !!!\n",
ioc->name, r));
kfree(entries);
goto try_ioapic;
@ -1742,6 +1764,7 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
/* failback to io_apic interrupt routing */
try_ioapic:
ioc->reply_queue_count = 1;
r = _base_request_irq(ioc, 0, ioc->pdev->irq);
return r;
@ -1821,6 +1844,16 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
}
_base_mask_interrupts(ioc);
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)
goto out_fail;
if (!ioc->rdpq_array_enable_assigned) {
ioc->rdpq_array_enable = ioc->rdpq_array_capable;
ioc->rdpq_array_enable_assigned = 1;
}
r = _base_enable_msix(ioc);
if (r)
goto out_fail;
@ -2185,6 +2218,53 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
&ioc->scsi_lookup_lock);
}
/**
* _base_display_intel_branding - Display branding string
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_base_display_intel_branding(struct MPT3SAS_ADAPTER *ioc)
{
if (ioc->pdev->subsystem_vendor != PCI_VENDOR_ID_INTEL)
return;
switch (ioc->pdev->device) {
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_INTEL_RMS3JC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RMS3JC080_BRANDING);
break;
case MPT3SAS_INTEL_RS3GC008_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3GC008_BRANDING);
break;
case MPT3SAS_INTEL_RS3FC044_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3FC044_BRANDING);
break;
case MPT3SAS_INTEL_RS3UC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3UC080_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
}
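The nested switch above could equally be expressed as a lookup table over the SSDID/branding macros this series introduces; a hypothetical (not in-tree) variant for comparison:

#include <linux/types.h>

/* Hypothetical table-driven form of the SSDID -> branding lookup;
 * the macros are the ones added in mpt3sas_base.h below. */
static const struct {
        u16 ssdid;
        const char *branding;
} example_intel_branding[] = {
        { MPT3SAS_INTEL_RMS3JC080_SSDID, MPT3SAS_INTEL_RMS3JC080_BRANDING },
        { MPT3SAS_INTEL_RS3GC008_SSDID,  MPT3SAS_INTEL_RS3GC008_BRANDING },
        { MPT3SAS_INTEL_RS3FC044_SSDID,  MPT3SAS_INTEL_RS3FC044_BRANDING },
        { MPT3SAS_INTEL_RS3UC080_SSDID,  MPT3SAS_INTEL_RS3UC080_BRANDING },
};

The switch form presumably survives because later device IDs can hang their own sub-switches off the outer case.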
/**
@ -2216,6 +2296,8 @@ _base_display_ioc_capabilities(struct MPT3SAS_ADAPTER *ioc)
(bios_version & 0x0000FF00) >> 8,
bios_version & 0x000000FF);
_base_display_intel_branding(ioc);
pr_info(MPT3SAS_FMT "Protocol=(", ioc->name);
if (ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_SCSI_INITIATOR) {
@ -2447,7 +2529,8 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc)
static void
_base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
{
int i;
int i = 0;
struct reply_post_struct *rps;
dexitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__));
@ -2492,15 +2575,25 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
ioc->reply_free = NULL;
}
if (ioc->reply_post_free) {
pci_pool_free(ioc->reply_post_free_dma_pool,
ioc->reply_post_free, ioc->reply_post_free_dma);
if (ioc->reply_post) {
do {
rps = &ioc->reply_post[i];
if (rps->reply_post_free) {
pci_pool_free(
ioc->reply_post_free_dma_pool,
rps->reply_post_free,
rps->reply_post_free_dma);
dexitprintk(ioc, pr_info(MPT3SAS_FMT
"reply_post_free_pool(0x%p): free\n",
ioc->name, rps->reply_post_free));
rps->reply_post_free = NULL;
}
} while (ioc->rdpq_array_enable &&
(++i < ioc->reply_queue_count));
if (ioc->reply_post_free_dma_pool)
pci_pool_destroy(ioc->reply_post_free_dma_pool);
dexitprintk(ioc, pr_info(MPT3SAS_FMT
"reply_post_free_pool(0x%p): free\n", ioc->name,
ioc->reply_post_free));
ioc->reply_post_free = NULL;
kfree(ioc->reply_post);
}
if (ioc->config_page) {
@ -2647,6 +2740,65 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
ioc->max_sges_in_chain_message, ioc->shost->sg_tablesize,
ioc->chains_needed_per_io));
/* reply post queue, 16 byte align */
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
sz = reply_post_free_sz;
if (_base_is_controller_msix_enabled(ioc) && !ioc->rdpq_array_enable)
sz *= ioc->reply_queue_count;
ioc->reply_post = kcalloc((ioc->rdpq_array_enable) ?
(ioc->reply_queue_count):1,
sizeof(struct reply_post_struct), GFP_KERNEL);
if (!ioc->reply_post) {
pr_err(MPT3SAS_FMT "reply_post_free pool: kcalloc failed\n",
ioc->name);
goto out;
}
ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
ioc->pdev, sz, 16, 0);
if (!ioc->reply_post_free_dma_pool) {
pr_err(MPT3SAS_FMT
"reply_post_free pool: pci_pool_create failed\n",
ioc->name);
goto out;
}
i = 0;
do {
ioc->reply_post[i].reply_post_free =
pci_pool_alloc(ioc->reply_post_free_dma_pool,
GFP_KERNEL,
&ioc->reply_post[i].reply_post_free_dma);
if (!ioc->reply_post[i].reply_post_free) {
pr_err(MPT3SAS_FMT
"reply_post_free pool: pci_pool_alloc failed\n",
ioc->name);
goto out;
}
memset(ioc->reply_post[i].reply_post_free, 0, sz);
dinitprintk(ioc, pr_info(MPT3SAS_FMT
"reply post free pool (0x%p): depth(%d),"
"element_size(%d), pool_size(%d kB)\n", ioc->name,
ioc->reply_post[i].reply_post_free,
ioc->reply_post_queue_depth, 8, sz/1024));
dinitprintk(ioc, pr_info(MPT3SAS_FMT
"reply_post_free_dma = (0x%llx)\n", ioc->name,
(unsigned long long)
ioc->reply_post[i].reply_post_free_dma));
total_sz += sz;
} while (ioc->rdpq_array_enable && (++i < ioc->reply_queue_count));
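To summarize the sizing logic just above: each region holds reply_post_queue_depth descriptors of 8 bytes (the element_size the dinitprintk prints), and the do/while allocates that region once per reply queue when the RDPQ array is enabled, or a single region covering every queue otherwise. An illustrative helper capturing just the arithmetic, under those assumptions:

#include <linux/types.h>

/* Sketch only: size of one pci_pool element under the two layouts. */
static size_t example_region_size(int queue_depth, int queue_count,
                                  bool rdpq_enabled, bool msix_enabled)
{
        size_t sz = queue_depth * 8;    /* 8-byte reply descriptors */

        if (msix_enabled && !rdpq_enabled)
                sz *= queue_count;      /* one contiguous block for all queues */
        return sz;      /* with RDPQ: this size, once per reply queue */
}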
if (ioc->dma_mask == 64) {
if (_base_change_consistent_dma_mask(ioc, ioc->pdev) != 0) {
pr_warn(MPT3SAS_FMT
"no suitable consistent DMA mask for %s\n",
ioc->name, pci_name(ioc->pdev));
goto out;
}
}
ioc->scsiio_depth = ioc->hba_queue_depth -
ioc->hi_priority_depth - ioc->internal_depth;
@ -2861,40 +3013,6 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
ioc->name, (unsigned long long)ioc->reply_free_dma));
total_sz += sz;
/* reply post queue, 16 byte align */
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
if (_base_is_controller_msix_enabled(ioc))
sz = reply_post_free_sz * ioc->reply_queue_count;
else
sz = reply_post_free_sz;
ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
ioc->pdev, sz, 16, 0);
if (!ioc->reply_post_free_dma_pool) {
pr_err(MPT3SAS_FMT
"reply_post_free pool: pci_pool_create failed\n",
ioc->name);
goto out;
}
ioc->reply_post_free = pci_pool_alloc(ioc->reply_post_free_dma_pool ,
GFP_KERNEL, &ioc->reply_post_free_dma);
if (!ioc->reply_post_free) {
pr_err(MPT3SAS_FMT
"reply_post_free pool: pci_pool_alloc failed\n",
ioc->name);
goto out;
}
memset(ioc->reply_post_free, 0, sz);
dinitprintk(ioc, pr_info(MPT3SAS_FMT "reply post free pool" \
"(0x%p): depth(%d), element_size(%d), pool_size(%d kB)\n",
ioc->name, ioc->reply_post_free, ioc->reply_post_queue_depth, 8,
sz/1024));
dinitprintk(ioc, pr_info(MPT3SAS_FMT
"reply_post_free_dma = (0x%llx)\n",
ioc->name, (unsigned long long)
ioc->reply_post_free_dma));
total_sz += sz;
ioc->config_page_sz = 512;
ioc->config_page = pci_alloc_consistent(ioc->pdev,
ioc->config_page_sz, &ioc->config_page_dma);
@ -3577,6 +3695,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
facts->IOCCapabilities = le32_to_cpu(mpi_reply.IOCCapabilities);
if ((facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID))
ioc->ir_firmware = 1;
if ((facts->IOCCapabilities &
MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE))
ioc->rdpq_array_capable = 1;
facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
facts->IOCRequestFrameSize =
le16_to_cpu(mpi_reply.IOCRequestFrameSize);
@ -3613,9 +3734,12 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
{
Mpi2IOCInitRequest_t mpi_request;
Mpi2IOCInitReply_t mpi_reply;
int r;
int i, r = 0;
struct timeval current_time;
u16 ioc_status;
u32 reply_post_free_array_sz = 0;
Mpi2IOCInitRDPQArrayEntry *reply_post_free_array = NULL;
dma_addr_t reply_post_free_array_dma;
dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__));
@ -3644,9 +3768,31 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
cpu_to_le64((u64)ioc->request_dma);
mpi_request.ReplyFreeQueueAddress =
cpu_to_le64((u64)ioc->reply_free_dma);
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)ioc->reply_post_free_dma);
if (ioc->rdpq_array_enable) {
reply_post_free_array_sz = ioc->reply_queue_count *
sizeof(Mpi2IOCInitRDPQArrayEntry);
reply_post_free_array = pci_alloc_consistent(ioc->pdev,
reply_post_free_array_sz, &reply_post_free_array_dma);
if (!reply_post_free_array) {
pr_err(MPT3SAS_FMT
"reply_post_free_array: pci_alloc_consistent failed\n",
ioc->name);
r = -ENOMEM;
goto out;
}
memset(reply_post_free_array, 0, reply_post_free_array_sz);
for (i = 0; i < ioc->reply_queue_count; i++)
reply_post_free_array[i].RDPQBaseAddress =
cpu_to_le64(
(u64)ioc->reply_post[i].reply_post_free_dma);
mpi_request.MsgFlags = MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE;
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)reply_post_free_array_dma);
} else {
mpi_request.ReplyDescriptorPostQueueAddress =
cpu_to_le64((u64)ioc->reply_post[0].reply_post_free_dma);
}
/* This time stamp specifies number of milliseconds
* since epoch ~ midnight January 1, 1970.
@ -3674,7 +3820,7 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
if (r != 0) {
pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n",
ioc->name, __func__, r);
return r;
goto out;
}
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK;
@ -3684,7 +3830,12 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
r = -EIO;
}
return 0;
out:
if (reply_post_free_array)
pci_free_consistent(ioc->pdev, reply_post_free_array_sz,
reply_post_free_array,
reply_post_free_array_dma);
return r;
}
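With the RDPQ array enabled, IOCInit no longer hands the firmware a single queue base address: it passes a DMA-able array with one Mpi2IOCInitRDPQArrayEntry per reply queue and advertises the mode via MPI2_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE. A sketch of the array fill, mirroring the loop above (the helper name is ours):

/* Sketch: one entry per reply queue, each carrying that queue's
 * reply-post base address for the firmware to poll. */
static void example_fill_rdpq_array(Mpi2IOCInitRDPQArrayEntry *array,
                                    struct reply_post_struct *reply_post,
                                    int queue_count)
{
        int i;

        for (i = 0; i < queue_count; i++)
                array[i].RDPQBaseAddress =
                    cpu_to_le64((u64)reply_post[i].reply_post_free_dma);
}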
/**
@ -4234,7 +4385,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
struct _tr_list *delayed_tr, *delayed_tr_next;
struct adapter_reply_queue *reply_q;
long reply_post_free;
u32 reply_post_free_sz;
u32 reply_post_free_sz, index = 0;
dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__));
@ -4305,9 +4456,9 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
_base_assign_reply_queues(ioc);
/* initialize Reply Post Free Queue */
reply_post_free = (long)ioc->reply_post_free;
reply_post_free_sz = ioc->reply_post_queue_depth *
sizeof(Mpi2DefaultReplyDescriptor_t);
reply_post_free = (long)ioc->reply_post[index].reply_post_free;
list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
reply_q->reply_post_host_index = 0;
reply_q->reply_post_free = (Mpi2ReplyDescriptorsUnion_t *)
@ -4317,7 +4468,15 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
cpu_to_le64(ULLONG_MAX);
if (!_base_is_controller_msix_enabled(ioc))
goto skip_init_reply_post_free_queue;
reply_post_free += reply_post_free_sz;
/*
* If RDPQ is enabled, switch to the next allocation.
* Otherwise advance within the contiguous region.
*/
if (ioc->rdpq_array_enable)
reply_post_free = (long)
ioc->reply_post[++index].reply_post_free;
else
reply_post_free += reply_post_free_sz;
}
skip_init_reply_post_free_queue:
@ -4428,6 +4587,8 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
goto out_free_resources;
}
ioc->rdpq_array_enable_assigned = 0;
ioc->dma_mask = 0;
r = mpt3sas_base_map_resources(ioc);
if (r)
goto out_free_resources;
@ -4804,6 +4965,12 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)
goto out;
if (ioc->rdpq_array_enable && !ioc->rdpq_array_capable)
panic("%s: Issue occurred with flashing controller firmware."
"Please reboot the system and ensure that the correct"
" firmware version is running\n", ioc->name);
r = _base_make_ioc_operational(ioc, sleep_flag);
if (!r)
_base_reset_handler(ioc, MPT3_IOC_DONE_RESET);

View File

@ -3,7 +3,7 @@
* for access to MPT (Message Passing Technology) firmware.
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_base.h
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -70,8 +70,8 @@
#define MPT3SAS_DRIVER_NAME "mpt3sas"
#define MPT3SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>"
#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver"
#define MPT3SAS_DRIVER_VERSION "02.100.00.00"
#define MPT3SAS_MAJOR_VERSION 2
#define MPT3SAS_DRIVER_VERSION "04.100.00.00"
#define MPT3SAS_MAJOR_VERSION 4
#define MPT3SAS_MINOR_VERSION 100
#define MPT3SAS_BUILD_VERSION 0
#define MPT3SAS_RELEASE_VERSION 00
@ -130,7 +130,25 @@
#define MPT_TARGET_FLAGS_DELETED 0x04
#define MPT_TARGET_FASTPATH_IO 0x08
/*
* Intel HBA branding
*/
#define MPT3SAS_INTEL_RMS3JC080_BRANDING \
"Intel(R) Integrated RAID Module RMS3JC080"
#define MPT3SAS_INTEL_RS3GC008_BRANDING \
"Intel(R) RAID Controller RS3GC008"
#define MPT3SAS_INTEL_RS3FC044_BRANDING \
"Intel(R) RAID Controller RS3FC044"
#define MPT3SAS_INTEL_RS3UC080_BRANDING \
"Intel(R) RAID Controller RS3UC080"
/*
* Intel HBA SSDIDs
*/
#define MPT3SAS_INTEL_RMS3JC080_SSDID 0x3521
#define MPT3SAS_INTEL_RS3GC008_SSDID 0x3522
#define MPT3SAS_INTEL_RS3FC044_SSDID 0x3523
#define MPT3SAS_INTEL_RS3UC080_SSDID 0x3524
/*
* status bits for ioc->diag_buffer_status
@ -272,8 +290,10 @@ struct _internal_cmd {
* @channel: target channel
* @slot: slot number
* @phy: phy identifier provided in sas device page 0
* @fast_path: fast path feature enable bit
* @responding: used in _scsih_sas_device_mark_responding
* @fast_path: fast path feature enable bit
* @pfa_led_on: flag for PFA LED status
*
*/
struct _sas_device {
struct list_head list;
@ -293,6 +313,7 @@ struct _sas_device {
u8 phy;
u8 responding;
u8 fast_path;
u8 pfa_led_on;
};
/**
@ -548,6 +569,11 @@ struct mpt3sas_port_facts {
u16 MaxPostedCmdBuffers;
};
struct reply_post_struct {
Mpi2ReplyDescriptorsUnion_t *reply_post_free;
dma_addr_t reply_post_free_dma;
};
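Each element pairs one region's kernel virtual base with its bus address; with RDPQ enabled there is one element per reply queue, otherwise only element 0 is populated. An illustrative walk (debug-style, not driver code):

/* Sketch: dump the per-queue reply post regions. */
static void example_dump_reply_post(struct MPT3SAS_ADAPTER *ioc)
{
        int i, n = ioc->rdpq_array_enable ? ioc->reply_queue_count : 1;

        for (i = 0; i < n; i++)
                pr_info("queue %d: virt %p, dma 0x%llx\n", i,
                        ioc->reply_post[i].reply_post_free,
                        (unsigned long long)
                        ioc->reply_post[i].reply_post_free_dma);
}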
/**
* enum mutex_type - task management mutex type
* @TM_MUTEX_OFF: mutex is not required because the calling function is acquiring it
@ -576,6 +602,7 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
* @ir_firmware: IR firmware present
* @bars: bitmask of BAR's that must be configured
* @mask_interrupts: ignore interrupt
* @dma_mask: used to set the consistent dma mask
* @fault_reset_work_q_name: fw fault work queue
* @fault_reset_work_q: ""
* @fault_reset_work: ""
@ -691,8 +718,11 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
* @reply_free_dma_pool:
* @reply_free_host_index: tail index in pool to insert free replies
* @reply_post_queue_depth: reply post queue depth
* @reply_post_free: pool for reply post (64bit descriptor)
* @reply_post_free_dma:
* @reply_post: struct for reply_post_free physical & virtual addresses
* @rdpq_array_capable: FW supports multiple reply queue addresses in ioc_init
* @rdpq_array_enable: rdpq_array support is enabled in the driver
* @rdpq_array_enable_assigned: this ensures that rdpq_array_enable flag
* is assigned only once
* @reply_queue_count: number of reply queues
* @reply_queue_list: linked list containing the reply queue info
* @reply_post_host_index: head index in the pool where FW completes IO
@ -714,6 +744,7 @@ struct MPT3SAS_ADAPTER {
u8 ir_firmware;
int bars;
u8 mask_interrupts;
int dma_mask;
/* fw fault handler */
char fault_reset_work_q_name[20];
@ -893,8 +924,10 @@ struct MPT3SAS_ADAPTER {
/* reply post queue */
u16 reply_post_queue_depth;
Mpi2ReplyDescriptorsUnion_t *reply_post_free;
dma_addr_t reply_post_free_dma;
struct reply_post_struct *reply_post;
u8 rdpq_array_capable;
u8 rdpq_array_enable;
u8 rdpq_array_enable_assigned;
struct dma_pool *reply_post_free_dma_pool;
u8 reply_queue_count;
struct list_head reply_queue_list;

View File

@ -2,7 +2,7 @@
* This module provides common API for accessing firmware configuration pages
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_base.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -3,7 +3,7 @@
* controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_ctl.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -3,7 +3,7 @@
* controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_ctl.h
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -2,7 +2,7 @@
* Logging Support for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_debug.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -2,7 +2,7 @@
* Scsi Host Layer for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_scsih.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
@ -159,7 +159,7 @@ struct sense_info {
};
#define MPT3SAS_PROCESS_TRIGGER_DIAG (0xFFFB)
#define MPT3SAS_TURN_ON_FAULT_LED (0xFFFC)
#define MPT3SAS_TURN_ON_PFA_LED (0xFFFC)
#define MPT3SAS_PORT_ENABLE_COMPLETE (0xFFFD)
#define MPT3SAS_ABRT_TASK_SET (0xFFFE)
#define MPT3SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
@ -3885,7 +3885,7 @@ _scsih_scsi_ioc_info(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
#endif
/**
* _scsih_turn_on_fault_led - illuminate Fault LED
* _scsih_turn_on_pfa_led - illuminate PFA LED
* @ioc: per adapter object
* @handle: device handle
* Context: process
@ -3893,10 +3893,15 @@ _scsih_scsi_ioc_info(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
* Return nothing.
*/
static void
_scsih_turn_on_fault_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
_scsih_turn_on_pfa_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
{
Mpi2SepReply_t mpi_reply;
Mpi2SepRequest_t mpi_request;
struct _sas_device *sas_device;
sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
if (!sas_device)
return;
memset(&mpi_request, 0, sizeof(Mpi2SepRequest_t));
mpi_request.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
@ -3911,6 +3916,7 @@ _scsih_turn_on_fault_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
__FILE__, __LINE__, __func__);
return;
}
sas_device->pfa_led_on = 1;
if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
dewtprintk(ioc, pr_info(MPT3SAS_FMT
@ -3920,9 +3926,46 @@ _scsih_turn_on_fault_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
return;
}
}
/**
* _scsih_send_event_to_turn_on_fault_led - fire delayed event
* _scsih_turn_off_pfa_led - turn off PFA LED
* @ioc: per adapter object
* @sas_device: sas device whose PFA LED has to be turned off
* Context: process
*
* Return nothing.
*/
static void
_scsih_turn_off_pfa_led(struct MPT3SAS_ADAPTER *ioc,
struct _sas_device *sas_device)
{
Mpi2SepReply_t mpi_reply;
Mpi2SepRequest_t mpi_request;
memset(&mpi_request, 0, sizeof(Mpi2SepRequest_t));
mpi_request.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
mpi_request.Action = MPI2_SEP_REQ_ACTION_WRITE_STATUS;
mpi_request.SlotStatus = 0;
mpi_request.Slot = cpu_to_le16(sas_device->slot);
mpi_request.DevHandle = 0;
mpi_request.EnclosureHandle = cpu_to_le16(sas_device->enclosure_handle);
mpi_request.Flags = MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS;
if ((mpt3sas_base_scsi_enclosure_processor(ioc, &mpi_reply,
&mpi_request)) != 0) {
printk(MPT3SAS_FMT "failure at %s:%d/%s()!\n", ioc->name,
__FILE__, __LINE__, __func__);
return;
}
if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
dewtprintk(ioc, printk(MPT3SAS_FMT
"enclosure_processor: ioc_status (0x%04x), loginfo(0x%08x)\n",
ioc->name, le16_to_cpu(mpi_reply.IOCStatus),
le32_to_cpu(mpi_reply.IOCLogInfo)));
return;
}
}
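The on and off paths differ only in the SlotStatus written to the enclosure processor, so they could in principle share one helper. A hypothetical sketch, assuming the predicted-fault status bit from mpi2_init.h is the one the turn-on path sets:

/* Hypothetical combined helper; not in-tree. */
static void example_set_pfa_led(struct MPT3SAS_ADAPTER *ioc,
                                struct _sas_device *sas_device, bool on)
{
        Mpi2SepReply_t reply;
        Mpi2SepRequest_t req;

        memset(&req, 0, sizeof(req));
        req.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
        req.Action = MPI2_SEP_REQ_ACTION_WRITE_STATUS;
        req.SlotStatus = on ?
            cpu_to_le32(MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT) : 0;
        req.Slot = cpu_to_le16(sas_device->slot);
        req.EnclosureHandle = cpu_to_le16(sas_device->enclosure_handle);
        req.Flags = MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS;
        mpt3sas_base_scsi_enclosure_processor(ioc, &reply, &req);
}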
/**
* _scsih_send_event_to_turn_on_pfa_led - fire delayed event
* @ioc: per adapter object
* @handle: device handle
* Context: interrupt.
@ -3930,14 +3973,14 @@ _scsih_turn_on_fault_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
* Return nothing.
*/
static void
_scsih_send_event_to_turn_on_fault_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
_scsih_send_event_to_turn_on_pfa_led(struct MPT3SAS_ADAPTER *ioc, u16 handle)
{
struct fw_event_work *fw_event;
fw_event = kzalloc(sizeof(struct fw_event_work), GFP_ATOMIC);
if (!fw_event)
return;
fw_event->event = MPT3SAS_TURN_ON_FAULT_LED;
fw_event->event = MPT3SAS_TURN_ON_PFA_LED;
fw_event->device_handle = handle;
fw_event->ioc = ioc;
_scsih_fw_event_add(ioc, fw_event);
@ -3981,7 +4024,7 @@ _scsih_smart_predicted_fault(struct MPT3SAS_ADAPTER *ioc, u16 handle)
spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
if (ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_IBM)
_scsih_send_event_to_turn_on_fault_led(ioc, handle);
_scsih_send_event_to_turn_on_pfa_led(ioc, handle);
/* insert into event log */
sz = offsetof(Mpi2EventNotificationReply_t, EventData) +
@ -4911,7 +4954,11 @@ _scsih_remove_device(struct MPT3SAS_ADAPTER *ioc,
{
struct MPT3SAS_TARGET *sas_target_priv_data;
if ((ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_IBM) &&
(sas_device->pfa_led_on)) {
_scsih_turn_off_pfa_led(ioc, sas_device);
sas_device->pfa_led_on = 0;
}
dewtprintk(ioc, pr_info(MPT3SAS_FMT
"%s: enter: handle(0x%04x), sas_addr(0x%016llx)\n",
ioc->name, __func__,
@ -7065,8 +7112,8 @@ _mpt3sas_fw_work(struct MPT3SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
"port enable: complete from worker thread\n",
ioc->name));
break;
case MPT3SAS_TURN_ON_FAULT_LED:
_scsih_turn_on_fault_led(ioc, fw_event->device_handle);
case MPT3SAS_TURN_ON_PFA_LED:
_scsih_turn_on_pfa_led(ioc, fw_event->device_handle);
break;
case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
_scsih_sas_topology_change_event(ioc, fw_event);
@ -7734,6 +7781,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct MPT3SAS_ADAPTER *ioc;
struct Scsi_Host *shost;
int rv;
shost = scsi_host_alloc(&scsih_driver_template,
sizeof(struct MPT3SAS_ADAPTER));
@ -7826,6 +7874,7 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!ioc->firmware_event_thread) {
pr_err(MPT3SAS_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
rv = -ENODEV;
goto out_thread_fail;
}
@ -7833,12 +7882,13 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if ((mpt3sas_base_attach(ioc))) {
pr_err(MPT3SAS_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
rv = -ENODEV;
goto out_attach_fail;
}
if ((scsi_add_host(shost, &pdev->dev))) {
rv = scsi_add_host(shost, &pdev->dev);
if (rv) {
pr_err(MPT3SAS_FMT "failure at %s:%d/%s()!\n",
ioc->name, __FILE__, __LINE__, __func__);
list_del(&ioc->list);
goto out_add_shost_fail;
}
@ -7851,7 +7901,7 @@ out_add_shost_fail:
out_thread_fail:
list_del(&ioc->list);
scsi_host_put(shost);
return -ENODEV;
return rv;
}
#ifdef CONFIG_PM

View File

@ -2,7 +2,7 @@
* SAS Transport Layer for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_transport.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -3,7 +3,7 @@
* (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_trigger_diag.c
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -4,7 +4,7 @@
* controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_base.h
* Copyright (C) 2012-2013 LSI Corporation
* Copyright (C) 2012-2014 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or

View File

@ -915,7 +915,7 @@ static int nsp32_queuecommand_lck(struct scsi_cmnd *SCpnt, void (*done)(struct s
int ret;
nsp32_dbg(NSP32_DEBUG_QUEUECOMMAND,
"enter. target: 0x%x LUN: 0x%llu cmnd: 0x%x cmndlen: 0x%x "
"enter. target: 0x%x LUN: 0x%llx cmnd: 0x%x cmndlen: 0x%x "
"use_sg: 0x%x reqbuf: 0x%lx reqlen: 0x%x",
SCpnt->device->id, SCpnt->device->lun, SCpnt->cmnd[0], SCpnt->cmd_len,
scsi_sg_count(SCpnt), scsi_sglist(SCpnt), scsi_bufflen(SCpnt));
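The one-character change above matters because scsi_device->lun is now a 64-bit quantity: "0x%llu" printed a decimal value behind a hex prefix, while %llx keeps prefix and radix consistent. A trivial illustration:

/* Sketch: with lun = 0x26, "0x%llu" prints the misleading "0x38",
 * while "0x%llx" prints "0x26". */
static void example_print_lun(u64 lun)
{
        printk(KERN_DEBUG "LUN: 0x%llx\n", (unsigned long long)lun);
}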

View File

@ -385,7 +385,6 @@ static ssize_t pm8001_ctl_bios_version_show(struct device *cdev,
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
char *str = buf;
void *virt_addr;
int bios_index;
DECLARE_COMPLETION_ONSTACK(completion);
struct pm8001_ioctl_payload payload;
@ -402,11 +401,10 @@ static ssize_t pm8001_ctl_bios_version_show(struct device *cdev,
return -ENOMEM;
}
wait_for_completion(&completion);
virt_addr = pm8001_ha->memoryMap.region[NVMD].virt_ptr;
for (bios_index = BIOSOFFSET; bios_index < BIOS_OFFSET_LIMIT;
bios_index++)
str += sprintf(str, "%c",
*((u8 *)((u8 *)virt_addr+bios_index)));
*(payload.func_specific+bios_index));
kfree(payload.func_specific);
return str - buf;
}

View File

@ -3132,6 +3132,7 @@ void pm8001_mpi_set_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
void
pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
{
struct fw_control_ex *fw_control_context;
struct get_nvm_data_resp *pPayload =
(struct get_nvm_data_resp *)(piomb + 4);
u32 tag = le32_to_cpu(pPayload->tag);
@ -3140,6 +3141,7 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
u32 ir_tds_bn_dps_das_nvm =
le32_to_cpu(pPayload->ir_tda_bn_dps_das_nvm);
void *virt_addr = pm8001_ha->memoryMap.region[NVMD].virt_ptr;
fw_control_context = ccb->fw_control_context;
PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Get nvm data complete!\n"));
if ((dlen_status & NVMD_STAT) != 0) {
@ -3180,6 +3182,12 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
pm8001_printk("Get NVMD success, IR=0, dataLen=%d\n",
(dlen_status & NVMD_LEN) >> 24));
}
/* Though fw_control_context is freed below, usrAddr still needs
* to be updated, as it holds the response data for the requesting
* function.
*/
memcpy(fw_control_context->usrAddr,
pm8001_ha->memoryMap.region[NVMD].virt_ptr,
fw_control_context->len);
kfree(ccb->fw_control_context);
ccb->task = NULL;
ccb->ccb_tag = 0xFFFFFFFF;

View File

@ -4698,19 +4698,10 @@ pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance)
for (i = 0; i < PMCRAID_NUM_MSIX_VECTORS; i++)
entries[i].entry = i;
rc = pci_enable_msix(pdev, entries, num_hrrq);
if (rc < 0)
num_hrrq = pci_enable_msix_range(pdev, entries, 1, num_hrrq);
if (num_hrrq < 0)
goto pmcraid_isr_legacy;
/* Check how many MSIX vectors are allocated and register
* msi-x handlers for each of them giving appropriate buffer
*/
if (rc > 0) {
num_hrrq = rc;
if (pci_enable_msix(pdev, entries, num_hrrq))
goto pmcraid_isr_legacy;
}
for (i = 0; i < num_hrrq; i++) {
pinstance->hrrq_vector[i].hrrq_id = i;
pinstance->hrrq_vector[i].drv_inst = pinstance;
@ -4746,7 +4737,6 @@ pmcraid_isr_legacy:
pinstance->hrrq_vector[0].drv_inst = pinstance;
pinstance->hrrq_vector[0].vector = pdev->irq;
pinstance->num_hrrq = 1;
rc = 0;
rc = request_irq(pdev->irq, pmcraid_isr, IRQF_SHARED,
PMCRAID_DRIVER_NAME, &pinstance->hrrq_vector[0]);
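pmcraid's conversion shows the other new helper: pci_enable_msix_range() asks for anywhere between minvec and maxvec vectors in a single call and returns the count actually enabled, or a negative errno, replacing the call-twice retry dance deleted above. A hedged sketch of the idiom:

#include <linux/pci.h>

/* Sketch: returns how many MSI-X vectors are live, or <0 to signal
 * that the caller should fall back to legacy INTx. */
static int example_enable_range(struct pci_dev *pdev,
                                struct msix_entry *entries, int maxvec)
{
        return pci_enable_msix_range(pdev, entries, 1, maxvec);
}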

View File

@ -484,7 +484,8 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
start == (ha->flt_region_fw * 4))
valid = 1;
else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha)
|| IS_CNA_CAPABLE(ha) || IS_QLA2031(ha))
|| IS_CNA_CAPABLE(ha) || IS_QLA2031(ha)
|| IS_QLA27XX(ha))
valid = 1;
if (!valid) {
ql_log(ql_log_warn, vha, 0x7065,
@ -987,6 +988,8 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
continue;
if (iter->is4GBp_only == 3 && !(IS_CNA_CAPABLE(vha->hw)))
continue;
if (iter->is4GBp_only == 0x27 && !IS_QLA27XX(vha->hw))
continue;
sysfs_remove_bin_file(&host->shost_gendev.kobj,
iter->attr);
@ -1014,7 +1017,7 @@ qla2x00_fw_version_show(struct device *dev,
char fw_str[128];
return scnprintf(buf, PAGE_SIZE, "%s\n",
ha->isp_ops->fw_version_str(vha, fw_str));
ha->isp_ops->fw_version_str(vha, fw_str, sizeof(fw_str)));
}
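The qla2xxx hunks in this area thread a destination size through fw_version_str() and get_sym_node_name() so the callees can bound their writes snprintf-style rather than trusting a fixed 128-byte caller buffer. A generic sketch of the pattern, with sample version numbers only:

/* Sketch: bounded formatting into a caller-supplied buffer. */
static char *example_fw_version_str(char *str, size_t size)
{
        snprintf(str, size, "%d.%02d.%02d", 8, 7, 0);   /* illustrative */
        return str;
}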
static ssize_t
@ -1440,7 +1443,7 @@ qla2x00_fw_state_show(struct device *dev, struct device_attribute *attr,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
int rval = QLA_FUNCTION_FAILED;
uint16_t state[5];
uint16_t state[6];
uint32_t pstate;
if (IS_QLAFX00(vha->hw)) {
@ -1456,8 +1459,8 @@ qla2x00_fw_state_show(struct device *dev, struct device_attribute *attr,
if (rval != QLA_SUCCESS)
memset(state, -1, sizeof(state));
return scnprintf(buf, PAGE_SIZE, "0x%x 0x%x 0x%x 0x%x 0x%x\n", state[0],
state[1], state[2], state[3], state[4]);
return scnprintf(buf, PAGE_SIZE, "0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
state[0], state[1], state[2], state[3], state[4], state[5]);
}
static ssize_t
@ -1924,7 +1927,8 @@ qla2x00_get_host_symbolic_name(struct Scsi_Host *shost)
{
scsi_qla_host_t *vha = shost_priv(shost);
qla2x00_get_sym_node_name(vha, fc_host_symbolic_name(shost));
qla2x00_get_sym_node_name(vha, fc_host_symbolic_name(shost),
sizeof(fc_host_symbolic_name(shost)));
}
static void

View File

@ -1390,7 +1390,7 @@ qla2x00_optrom_setup(struct fc_bsg_job *bsg_job, scsi_qla_host_t *vha,
start == (ha->flt_region_fw * 4))
valid = 1;
else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha) ||
IS_CNA_CAPABLE(ha) || IS_QLA2031(ha))
IS_CNA_CAPABLE(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha))
valid = 1;
if (!valid) {
ql_log(ql_log_warn, vha, 0x7058,

View File

@ -11,19 +11,15 @@
* ----------------------------------------------------------------------
* | Level | Last Value Used | Holes |
* ----------------------------------------------------------------------
* | Module Init and Probe | 0x017d | 0x004b,0x0141 |
* | | | 0x0144,0x0146 |
* | Module Init and Probe | 0x017d | 0x0144,0x0146 |
* | | | 0x015b-0x0160 |
* | | | 0x016e-0x0170 |
* | Mailbox commands | 0x118d | 0x1018-0x1019 |
* | | | 0x10ca |
* | | | 0x1115-0x1116 |
* | | | 0x111a-0x111b |
* | | | 0x1155-0x1158 |
* | Device Discovery | 0x2095 | 0x2020-0x2022, |
* | Mailbox commands | 0x118d | 0x1115-0x1116 |
* | | | 0x111a-0x111b |
* | Device Discovery | 0x2016 | 0x2020-0x2022, |
* | | | 0x2011-0x2012, |
* | | | 0x2016 |
* | Queue Command and IO tracing | 0x3059 | 0x3006-0x300b |
* | | | 0x2099-0x20a4 |
* | Queue Command and IO tracing | 0x3059 | 0x300b |
* | | | 0x3027-0x3028 |
* | | | 0x303d-0x3041 |
* | | | 0x302d,0x3033 |
@ -31,10 +27,10 @@
* | | | 0x303a |
* | DPC Thread | 0x4023 | 0x4002,0x4013 |
* | Async Events | 0x5087 | 0x502b-0x502f |
* | | | 0x5047,0x5052 |
* | | | 0x5047 |
* | | | 0x5084,0x5075 |
* | | | 0x503d,0x5044 |
* | | | 0x507b |
* | | | 0x507b,0x505f |
* | Timer Routines | 0x6012 | |
* | User Space Interactions | 0x70e2 | 0x7018,0x702e |
* | | | 0x7020,0x7024 |
@ -64,13 +60,15 @@
* | | | 0xb13c-0xb140 |
* | | | 0xb149 |
* | MultiQ | 0xc00c | |
* | Misc | 0xd212 | 0xd017-0xd019 |
* | | | 0xd020 |
* | | | 0xd030-0xd0ff |
* | Misc | 0xd213 | 0xd011-0xd017 |
* | | | 0xd021,0xd024 |
* | | | 0xd025,0xd029 |
* | | | 0xd02a,0xd02e |
* | | | 0xd031-0xd0ff |
* | | | 0xd101-0xd1fe |
* | | | 0xd213-0xd2fe |
* | Target Mode | 0xe078 | |
* | Target Mode Management | 0xf072 | 0xf002-0xf003 |
* | | | 0xd214-0xd2fe |
* | Target Mode | 0xe079 | |
* | Target Mode Management | 0xf072 | 0xf002 |
* | | | 0xf046-0xf049 |
* | Target Mode Task Management | 0x1000b | |
* ----------------------------------------------------------------------

Some files were not shown because too many files have changed in this diff.