
SCSI misc on 20161213

This update includes the usual round of major driver updates (ncr5380,
 lpfc, hisi_sas, megaraid_sas, ufs, ibmvscsis, mpt3sas).  There's also
 an assortment of minor fixes, mostly in error legs or other not very
 user visible stuff.  The major change is the pci_alloc_irq_vectors
 replacement for the old pci_msix_.. calls; this effectively makes IRQ
 mapping generic for the drivers and allows blk_mq to use the
 information.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com>

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This update includes the usual round of major driver updates (ncr5380,
  lpfc, hisi_sas, megaraid_sas, ufs, ibmvscsis, mpt3sas).

  There's also an assortment of minor fixes, mostly in error legs or
  other not very user visible stuff. The major change is the
  pci_alloc_irq_vectors replacement for the old pci_msix_.. calls; this
  effectively makes IRQ mapping generic for the drivers and allows
  blk_mq to use the information"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (256 commits)
  scsi: qla4xxx: switch to pci_alloc_irq_vectors
  scsi: hisi_sas: support deferred probe for v2 hw
  scsi: megaraid_sas: switch to pci_alloc_irq_vectors
  scsi: scsi_devinfo: remove synchronous ALUA for NETAPP devices
  scsi: be2iscsi: set errno on error path
  scsi: be2iscsi: set errno on error path
  scsi: hpsa: fallback to use legacy REPORT PHYS command
  scsi: scsi_dh_alua: Fix RCU annotations
  scsi: hpsa: use %phN for short hex dumps
  scsi: hisi_sas: fix free'ing in probe and remove
  scsi: isci: switch to pci_alloc_irq_vectors
  scsi: ipr: Fix runaway IRQs when falling back from MSI to LSI
  scsi: dpt_i2o: double free on error path
  scsi: cxlflash: Migrate scsi command pointer to AFU command
  scsi: cxlflash: Migrate IOARRIN specific routines to function pointers
  scsi: cxlflash: Cleanup queuecommand()
  scsi: cxlflash: Cleanup send_tmf()
  scsi: cxlflash: Remove AFU command lock
  scsi: cxlflash: Wait for active AFU commands to timeout upon tear down
  scsi: cxlflash: Remove private command pool
  ...
Linus Torvalds 2016-12-14 10:49:33 -08:00
commit a829a8445f
142 changed files with 5358 additions and 4470 deletions


@@ -6,6 +6,7 @@ Main node required properties:
 - compatible : value should be as follows:
 	(a) "hisilicon,hip05-sas-v1" for v1 hw in hip05 chipset
 	(b) "hisilicon,hip06-sas-v2" for v2 hw in hip06 chipset
+	(c) "hisilicon,hip07-sas-v2" for v2 hw in hip07 chipset
 - sas-addr : array of 8 bytes for host SAS address
 - reg : Address and length of the SAS register
 - hisilicon,sas-syscon: phandle of syscon used for sas control


@@ -7,8 +7,11 @@ To bind UFS PHY with UFS host controller, the controller node should
 contain a phandle reference to UFS PHY node.
 
 Required properties:
-- compatible : compatible list, contains "qcom,ufs-phy-qmp-20nm"
-	or "qcom,ufs-phy-qmp-14nm" according to the relevant phy in use.
+- compatible : compatible list, contains one of the following -
+	"qcom,ufs-phy-qmp-20nm" for 20nm ufs phy,
+	"qcom,ufs-phy-qmp-14nm" for legacy 14nm ufs phy,
+	"qcom,msm8996-ufs-phy-qmp-14nm" for 14nm ufs phy
+	present on MSM8996 chipset.
 - reg : should contain PHY register address space (mandatory),
 - reg-names : indicates various resources passed to driver (via reg proptery) by name.
 	Required "reg-names" is "phy_mem".


@@ -3192,15 +3192,15 @@ S:	Supported
 F:	drivers/clocksource
 
 CISCO FCOE HBA DRIVER
-M:	Hiral Patel <hiralpat@cisco.com>
-M:	Suma Ramars <sramars@cisco.com>
-M:	Brian Uchino <buchino@cisco.com>
+M:	Satish Kharat <satishkh@cisco.com>
+M:	Sesidhar Baddela <sebaddel@cisco.com>
+M:	Karan Tilak Kumar <kartilak@cisco.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/fnic/
 
 CISCO SCSI HBA DRIVER
-M:	Narsimhulu Musini <nmusini@cisco.com>
+M:	Karan Tilak Kumar <kartilak@cisco.com>
 M:	Sesidhar Baddela <sebaddel@cisco.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
@@ -4787,11 +4787,11 @@ M:	David Woodhouse <dwmw2@infradead.org>
 L:	linux-embedded@vger.kernel.org
 S:	Maintained
 
-EMULEX/AVAGO LPFC FC/FCOE SCSI DRIVER
-M:	James Smart <james.smart@avagotech.com>
-M:	Dick Kennedy <dick.kennedy@avagotech.com>
+EMULEX/BROADCOM LPFC FC/FCOE SCSI DRIVER
+M:	James Smart <james.smart@broadcom.com>
+M:	Dick Kennedy <dick.kennedy@broadcom.com>
 L:	linux-scsi@vger.kernel.org
-W:	http://www.avagotech.com
+W:	http://www.broadcom.com
 S:	Supported
 F:	drivers/scsi/lpfc/
@@ -5717,7 +5717,6 @@ F:	drivers/watchdog/hpwdt.c
 
 HEWLETT-PACKARD SMART ARRAY RAID DRIVER (hpsa)
 M:	Don Brace <don.brace@microsemi.com>
-L:	iss_storagedev@hp.com
 L:	esc.storagedev@microsemi.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
@@ -5728,7 +5727,6 @@ F:	include/uapi/linux/cciss*.h
 
 HEWLETT-PACKARD SMART CISS RAID DRIVER (cciss)
 M:	Don Brace <don.brace@microsemi.com>
-L:	iss_storagedev@hp.com
 L:	esc.storagedev@microsemi.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
@@ -7968,12 +7966,12 @@ S:	Maintained
 F:	drivers/net/wireless/mediatek/mt7601u/
 
 MEGARAID SCSI/SAS DRIVERS
-M:	Kashyap Desai <kashyap.desai@avagotech.com>
-M:	Sumit Saxena <sumit.saxena@avagotech.com>
-M:	Uday Lingala <uday.lingala@avagotech.com>
-L:	megaraidlinux.pdl@avagotech.com
+M:	Kashyap Desai <kashyap.desai@broadcom.com>
+M:	Sumit Saxena <sumit.saxena@broadcom.com>
+M:	Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
+L:	megaraidlinux.pdl@broadcom.com
 L:	linux-scsi@vger.kernel.org
-W:	http://www.lsi.com
+W:	http://www.avagotech.com/support/
 S:	Maintained
 F:	Documentation/scsi/megaraid.txt
 F:	drivers/scsi/megaraid.*
@@ -8453,7 +8451,6 @@ F:	drivers/scsi/arm/oak.c
 F:	drivers/scsi/atari_scsi.*
 F:	drivers/scsi/dmx3191d.c
 F:	drivers/scsi/g_NCR5380.*
-F:	drivers/scsi/g_NCR5380_mmio.c
 F:	drivers/scsi/mac_scsi.*
 F:	drivers/scsi/sun3_scsi.*
 F:	drivers/scsi/sun3_scsi_vme.c
@@ -12547,7 +12544,8 @@ F:	Documentation/scsi/ufs.txt
 F:	drivers/scsi/ufs/
 
 UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER DWC HOOKS
-M:	Joao Pinto <Joao.Pinto@synopsys.com>
+M:	Manjunath M Bettegowda <manjumb@synopsys.com>
+M:	Prabu Thangamuthu <prabut@synopsys.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/ufs/*dwc*


@@ -87,6 +87,7 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set)
 	free_cpumask_var(cpus);
 	return 0;
 }
+EXPORT_SYMBOL_GPL(blk_mq_map_queues);
 
 /*
  * We have no quick way of doing reverse lookups. This is only used at


@@ -42,7 +42,6 @@ void blk_mq_disable_hotplug(void);
 /*
  * CPU -> queue mappings
  */
-int blk_mq_map_queues(struct blk_mq_tag_set *set);
 extern int blk_mq_hw_queue_to_node(unsigned int *map, unsigned int);
 
 static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,


@@ -32,8 +32,13 @@
 * bsg_destroy_job - routine to teardown/delete a bsg job
 * @job: bsg_job that is to be torn down
 */
-static void bsg_destroy_job(struct bsg_job *job)
+static void bsg_destroy_job(struct kref *kref)
 {
+	struct bsg_job *job = container_of(kref, struct bsg_job, kref);
+	struct request *rq = job->req;
+
+	blk_end_request_all(rq, rq->errors);
+
 	put_device(job->dev);	/* release reference for the request */
 
 	kfree(job->request_payload.sg_list);
@@ -41,6 +46,18 @@ static void bsg_destroy_job(struct bsg_job *job)
 	kfree(job);
 }
 
+void bsg_job_put(struct bsg_job *job)
+{
+	kref_put(&job->kref, bsg_destroy_job);
+}
+EXPORT_SYMBOL_GPL(bsg_job_put);
+
+int bsg_job_get(struct bsg_job *job)
+{
+	return kref_get_unless_zero(&job->kref);
+}
+EXPORT_SYMBOL_GPL(bsg_job_get);
+
 /**
 * bsg_job_done - completion routine for bsg requests
 * @job: bsg_job that is complete
@@ -83,8 +100,7 @@ static void bsg_softirq_done(struct request *rq)
 {
 	struct bsg_job *job = rq->special;
 
-	blk_end_request_all(rq, rq->errors);
-	bsg_destroy_job(job);
+	bsg_job_put(job);
 }
 
 static int bsg_map_buffer(struct bsg_buffer *buf, struct request *req)
@@ -142,6 +158,7 @@ static int bsg_create_job(struct device *dev, struct request *req)
 	job->dev = dev;
 	/* take a reference for the request */
 	get_device(job->dev);
+	kref_init(&job->kref);
 
 	return 0;
 
 failjob_rls_rqst_payload:


@@ -260,43 +260,6 @@ scsi_cmd_stack_free(ctlr_info_t *h)
 }
 
 #if 0
-static int xmargin=8;
-static int amargin=60;
-
-static void
-print_bytes (unsigned char *c, int len, int hex, int ascii)
-{
-	int i;
-	unsigned char *x;
-
-	if (hex)
-	{
-		x = c;
-		for (i=0;i<len;i++)
-		{
-			if ((i % xmargin) == 0 && i>0) printk("\n");
-			if ((i % xmargin) == 0) printk("0x%04x:", i);
-			printk(" %02x", *x);
-			x++;
-		}
-		printk("\n");
-	}
-	if (ascii)
-	{
-		x = c;
-		for (i=0;i<len;i++)
-		{
-			if ((i % amargin) == 0 && i>0) printk("\n");
-			if ((i % amargin) == 0) printk("0x%04x:", i);
-			if (*x > 26 && *x < 128) printk("%c", *x);
-			else printk(".");
-			x++;
-		}
-		printk("\n");
-	}
-}
-
 static void
 print_cmd(CommandList_struct *cp)
 {
@@ -305,30 +268,13 @@ print_cmd(CommandList_struct *cp)
 	printk("sgtot:%d\n", cp->Header.SGTotal);
 	printk("Tag:0x%08x/0x%08x\n", cp->Header.Tag.upper,
 			cp->Header.Tag.lower);
-	printk("LUN:0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-		cp->Header.LUN.LunAddrBytes[0],
-		cp->Header.LUN.LunAddrBytes[1],
-		cp->Header.LUN.LunAddrBytes[2],
-		cp->Header.LUN.LunAddrBytes[3],
-		cp->Header.LUN.LunAddrBytes[4],
-		cp->Header.LUN.LunAddrBytes[5],
-		cp->Header.LUN.LunAddrBytes[6],
-		cp->Header.LUN.LunAddrBytes[7]);
+	printk("LUN:0x%8phN\n", cp->Header.LUN.LunAddrBytes);
 	printk("CDBLen:%d\n", cp->Request.CDBLen);
 	printk("Type:%d\n",cp->Request.Type.Type);
 	printk("Attr:%d\n",cp->Request.Type.Attribute);
 	printk(" Dir:%d\n",cp->Request.Type.Direction);
 	printk("Timeout:%d\n",cp->Request.Timeout);
-	printk( "CDB: %02x %02x %02x %02x %02x %02x %02x %02x"
-		" %02x %02x %02x %02x %02x %02x %02x %02x\n",
-		cp->Request.CDB[0], cp->Request.CDB[1],
-		cp->Request.CDB[2], cp->Request.CDB[3],
-		cp->Request.CDB[4], cp->Request.CDB[5],
-		cp->Request.CDB[6], cp->Request.CDB[7],
-		cp->Request.CDB[8], cp->Request.CDB[9],
-		cp->Request.CDB[10], cp->Request.CDB[11],
-		cp->Request.CDB[12], cp->Request.CDB[13],
-		cp->Request.CDB[14], cp->Request.CDB[15]),
+	printk("CDB: %16ph\n", cp->Request.CDB);
 	printk("edesc.Addr: 0x%08x/0%08x, Len = %d\n",
 		cp->ErrDesc.Addr.upper, cp->ErrDesc.Addr.lower,
 			cp->ErrDesc.Len);
@@ -340,9 +286,7 @@ print_cmd(CommandList_struct *cp)
 	printk("offense size:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_size);
 	printk("offense byte:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_num);
 	printk("offense value:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_value);
-
 }
-
 #endif
 
 static int
@@ -782,8 +726,10 @@ static void complete_scsi_command(CommandList_struct *c, int timeout,
 				"reported\n", c);
 			break;
 			case CMD_INVALID: {
-				/* print_bytes(c, sizeof(*c), 1, 0);
-				print_cmd(c); */
+				/*
+				print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, c, sizeof(*c), false);
+				print_cmd(c);
+				*/
 	/* We get CMD_INVALID if you address a non-existent tape drive instead
 	   of a selection timeout (no response).  You will see this if you yank
 	   out a tape drive, then try to access it.  This is kind of a shame
@@ -985,8 +931,10 @@ cciss_scsi_interpret_error(ctlr_info_t *h, CommandList_struct *c)
 			dev_warn(&h->pdev->dev,
 				"%p is reported invalid (probably means "
 				"target device no longer present)\n", c);
-			/* print_bytes((unsigned char *) c, sizeof(*c), 1, 0);
-			print_cmd(c); */
+			/*
+			print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, c, sizeof(*c), false);
+			print_cmd(c);
+			*/
 			}
 			break;
 		case CMD_PROTOCOL_ERR:


@@ -2585,10 +2585,7 @@ mpt_do_ioc_recovery(MPT_ADAPTER *ioc, u32 reason, int sleepFlag)
 				(void) GetLanConfigPages(ioc);
 				a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
 				dprintk(ioc, printk(MYIOC_s_DEBUG_FMT
-					"LanAddr = %02X:%02X:%02X"
-					":%02X:%02X:%02X\n",
-					ioc->name, a[5], a[4],
-					a[3], a[2], a[1], a[0]));
+					"LanAddr = %pMR\n", ioc->name, a));
 			}
 			break;
@@ -2868,21 +2865,21 @@ MptDisplayIocCapabilities(MPT_ADAPTER *ioc)
 
 	printk(KERN_INFO "%s: ", ioc->name);
 	if (ioc->prod_name)
-		printk("%s: ", ioc->prod_name);
-	printk("Capabilities={");
+		pr_cont("%s: ", ioc->prod_name);
+	pr_cont("Capabilities={");
 
 	if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_INITIATOR) {
-		printk("Initiator");
+		pr_cont("Initiator");
 		i++;
 	}
 
 	if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_TARGET) {
-		printk("%sTarget", i ? "," : "");
+		pr_cont("%sTarget", i ? "," : "");
 		i++;
 	}
 
 	if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN) {
-		printk("%sLAN", i ? "," : "");
+		pr_cont("%sLAN", i ? "," : "");
 		i++;
 	}
@@ -2891,12 +2888,12 @@ MptDisplayIocCapabilities(MPT_ADAPTER *ioc)
 	 *  This would probably evoke more questions than it's worth
 	 */
 	if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_TARGET) {
-		printk("%sLogBusAddr", i ? "," : "");
+		pr_cont("%sLogBusAddr", i ? "," : "");
 		i++;
 	}
 #endif
 
-	printk("}\n");
+	pr_cont("}\n");
 }
 
 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
@@ -6783,8 +6780,7 @@ static int mpt_iocinfo_proc_show(struct seq_file *m, void *v)
 		if (ioc->bus_type == FC) {
 			if (ioc->pfacts[p].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN) {
 				u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
-				seq_printf(m, "    LanAddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
-						a[5], a[4], a[3], a[2], a[1], a[0]);
+				seq_printf(m, "    LanAddr = %pMR\n", a);
 			}
 			seq_printf(m, "    WWN = %08X%08X:%08X%08X\n",
 					ioc->fc_port_page0[p].WWNN.High,
@@ -6861,8 +6857,7 @@ mpt_print_ioc_summary(MPT_ADAPTER *ioc, char *buffer, int *size, int len, int sh
 	if (showlan && (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN)) {
 		u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
 
-		y += sprintf(buffer+len+y, ", LanAddr=%02X:%02X:%02X:%02X:%02X:%02X",
-			a[5], a[4], a[3], a[2], a[1], a[0]);
+		y += sprintf(buffer+len+y, ", LanAddr=%pMR", a);
 	}
 
 	y += sprintf(buffer+len+y, ", IRQ=%d", ioc->pci_irq);
@@ -6896,8 +6891,7 @@ static void seq_mpt_print_ioc_summary(MPT_ADAPTER *ioc, struct seq_file *m, int
 	if (showlan && (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN)) {
 		u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
 
-		seq_printf(m, ", LanAddr=%02X:%02X:%02X:%02X:%02X:%02X",
-			a[5], a[4], a[3], a[2], a[1], a[0]);
+		seq_printf(m, ", LanAddr=%pMR", a);
 	}
 	seq_printf(m, ", IRQ=%d", ioc->pci_irq);


@@ -1366,15 +1366,10 @@ mptscsih_qcmd(struct scsi_cmnd *SCpnt)
 	/* Default to untagged. Once a target structure has been allocated,
 	 * use the Inquiry data to determine if device supports tagged.
 	 */
-	if ((vdevice->vtarget->tflags & MPT_TARGET_FLAGS_Q_YES)
-	    && (SCpnt->device->tagged_supported)) {
+	if ((vdevice->vtarget->tflags & MPT_TARGET_FLAGS_Q_YES) &&
+	    SCpnt->device->tagged_supported)
 		scsictl = scsidir | MPI_SCSIIO_CONTROL_SIMPLEQ;
-		if (SCpnt->request && SCpnt->request->ioprio) {
-			if (((SCpnt->request->ioprio & 0x7) == 1) ||
-				!(SCpnt->request->ioprio & 0x7))
-				scsictl |= MPI_SCSIIO_CONTROL_HEADOFQ;
-		}
-	} else
+	else
 		scsictl = scsidir | MPI_SCSIIO_CONTROL_UNTAGGED;


@@ -141,11 +141,8 @@ struct ufs_qcom_phy_specific_ops {
 struct ufs_qcom_phy *get_ufs_qcom_phy(struct phy *generic_phy);
 int ufs_qcom_phy_power_on(struct phy *generic_phy);
 int ufs_qcom_phy_power_off(struct phy *generic_phy);
-int ufs_qcom_phy_exit(struct phy *generic_phy);
-int ufs_qcom_phy_init_clks(struct phy *generic_phy,
-			struct ufs_qcom_phy *phy_common);
-int ufs_qcom_phy_init_vregulators(struct phy *generic_phy,
-			struct ufs_qcom_phy *phy_common);
+int ufs_qcom_phy_init_clks(struct ufs_qcom_phy *phy_common);
+int ufs_qcom_phy_init_vregulators(struct ufs_qcom_phy *phy_common);
 int ufs_qcom_phy_remove(struct phy *generic_phy,
 		       struct ufs_qcom_phy *ufs_qcom_phy);
 struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,


@@ -44,30 +44,12 @@ void ufs_qcom_phy_qmp_14nm_advertise_quirks(struct ufs_qcom_phy *phy_common)
 
 static int ufs_qcom_phy_qmp_14nm_init(struct phy *generic_phy)
 {
-	struct ufs_qcom_phy_qmp_14nm *phy = phy_get_drvdata(generic_phy);
-	struct ufs_qcom_phy *phy_common = &phy->common_cfg;
-	int err;
-
-	err = ufs_qcom_phy_init_clks(generic_phy, phy_common);
-	if (err) {
-		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
-			__func__, err);
-		goto out;
-	}
-
-	err = ufs_qcom_phy_init_vregulators(generic_phy, phy_common);
-	if (err) {
-		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
-			__func__, err);
-		goto out;
-	}
-
-	phy_common->vdda_phy.max_uV = UFS_PHY_VDDA_PHY_UV;
-	phy_common->vdda_phy.min_uV = UFS_PHY_VDDA_PHY_UV;
-
-	ufs_qcom_phy_qmp_14nm_advertise_quirks(phy_common);
-
-out:
-	return err;
+	return 0;
+}
+
+static int ufs_qcom_phy_qmp_14nm_exit(struct phy *generic_phy)
+{
+	return 0;
 }
 
 static
@@ -117,7 +99,7 @@ static int ufs_qcom_phy_qmp_14nm_is_pcs_ready(struct ufs_qcom_phy *phy_common)
 
 static const struct phy_ops ufs_qcom_phy_qmp_14nm_phy_ops = {
 	.init		= ufs_qcom_phy_qmp_14nm_init,
-	.exit		= ufs_qcom_phy_exit,
+	.exit		= ufs_qcom_phy_qmp_14nm_exit,
 	.power_on	= ufs_qcom_phy_power_on,
 	.power_off	= ufs_qcom_phy_power_off,
 	.owner		= THIS_MODULE,
@@ -136,6 +118,7 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct phy *generic_phy;
 	struct ufs_qcom_phy_qmp_14nm *phy;
+	struct ufs_qcom_phy *phy_common;
 	int err = 0;
 
 	phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
@@ -143,8 +126,9 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
 		err = -ENOMEM;
 		goto out;
 	}
+	phy_common = &phy->common_cfg;
 
-	generic_phy = ufs_qcom_phy_generic_probe(pdev, &phy->common_cfg,
+	generic_phy = ufs_qcom_phy_generic_probe(pdev, phy_common,
 				&ufs_qcom_phy_qmp_14nm_phy_ops, &phy_14nm_ops);
 
 	if (!generic_phy) {
@@ -154,39 +138,43 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
 		goto out;
 	}
 
+	err = ufs_qcom_phy_init_clks(phy_common);
+	if (err) {
+		dev_err(phy_common->dev,
+			"%s: ufs_qcom_phy_init_clks() failed %d\n",
+			__func__, err);
+		goto out;
+	}
+
+	err = ufs_qcom_phy_init_vregulators(phy_common);
+	if (err) {
+		dev_err(phy_common->dev,
+			"%s: ufs_qcom_phy_init_vregulators() failed %d\n",
+			__func__, err);
+		goto out;
+	}
+
+	phy_common->vdda_phy.max_uV = UFS_PHY_VDDA_PHY_UV;
+	phy_common->vdda_phy.min_uV = UFS_PHY_VDDA_PHY_UV;
+
+	ufs_qcom_phy_qmp_14nm_advertise_quirks(phy_common);
+
 	phy_set_drvdata(generic_phy, phy);
 
-	strlcpy(phy->common_cfg.name, UFS_PHY_NAME,
-		sizeof(phy->common_cfg.name));
+	strlcpy(phy_common->name, UFS_PHY_NAME, sizeof(phy_common->name));
 
 out:
 	return err;
 }
 
-static int ufs_qcom_phy_qmp_14nm_remove(struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct phy *generic_phy = to_phy(dev);
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
-	int err = 0;
-
-	err = ufs_qcom_phy_remove(generic_phy, ufs_qcom_phy);
-	if (err)
-		dev_err(dev, "%s: ufs_qcom_phy_remove failed = %d\n",
-			__func__, err);
-
-	return err;
-}
-
 static const struct of_device_id ufs_qcom_phy_qmp_14nm_of_match[] = {
 	{.compatible = "qcom,ufs-phy-qmp-14nm"},
+	{.compatible = "qcom,msm8996-ufs-phy-qmp-14nm"},
 	{},
 };
 MODULE_DEVICE_TABLE(of, ufs_qcom_phy_qmp_14nm_of_match);
 
 static struct platform_driver ufs_qcom_phy_qmp_14nm_driver = {
 	.probe = ufs_qcom_phy_qmp_14nm_probe,
-	.remove = ufs_qcom_phy_qmp_14nm_remove,
 	.driver = {
 		.of_match_table = ufs_qcom_phy_qmp_14nm_of_match,
 		.name = "ufs_qcom_phy_qmp_14nm",


@@ -63,28 +63,12 @@ void ufs_qcom_phy_qmp_20nm_advertise_quirks(struct ufs_qcom_phy *phy_common)
 
 static int ufs_qcom_phy_qmp_20nm_init(struct phy *generic_phy)
 {
-	struct ufs_qcom_phy_qmp_20nm *phy = phy_get_drvdata(generic_phy);
-	struct ufs_qcom_phy *phy_common = &phy->common_cfg;
-	int err = 0;
-
-	err = ufs_qcom_phy_init_clks(generic_phy, phy_common);
-	if (err) {
-		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
-			__func__, err);
-		goto out;
-	}
-
-	err = ufs_qcom_phy_init_vregulators(generic_phy, phy_common);
-	if (err) {
-		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
-			__func__, err);
-		goto out;
-	}
-
-	ufs_qcom_phy_qmp_20nm_advertise_quirks(phy_common);
-
-out:
-	return err;
+	return 0;
+}
+
+static int ufs_qcom_phy_qmp_20nm_exit(struct phy *generic_phy)
+{
+	return 0;
 }
 
 static
@@ -173,7 +157,7 @@ static int ufs_qcom_phy_qmp_20nm_is_pcs_ready(struct ufs_qcom_phy *phy_common)
 
 static const struct phy_ops ufs_qcom_phy_qmp_20nm_phy_ops = {
 	.init		= ufs_qcom_phy_qmp_20nm_init,
-	.exit		= ufs_qcom_phy_exit,
+	.exit		= ufs_qcom_phy_qmp_20nm_exit,
 	.power_on	= ufs_qcom_phy_power_on,
 	.power_off	= ufs_qcom_phy_power_off,
 	.owner		= THIS_MODULE,
@@ -192,6 +176,7 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	struct phy *generic_phy;
 	struct ufs_qcom_phy_qmp_20nm *phy;
+	struct ufs_qcom_phy *phy_common;
 	int err = 0;
 
 	phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
@@ -199,8 +184,9 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
 		err = -ENOMEM;
 		goto out;
 	}
+	phy_common = &phy->common_cfg;
 
-	generic_phy = ufs_qcom_phy_generic_probe(pdev, &phy->common_cfg,
+	generic_phy = ufs_qcom_phy_generic_probe(pdev, phy_common,
 				&ufs_qcom_phy_qmp_20nm_phy_ops, &phy_20nm_ops);
 
 	if (!generic_phy) {
@@ -210,30 +196,30 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
 		goto out;
 	}
 
+	err = ufs_qcom_phy_init_clks(phy_common);
+	if (err) {
+		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
+			__func__, err);
+		goto out;
+	}
+
+	err = ufs_qcom_phy_init_vregulators(phy_common);
+	if (err) {
+		dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
+			__func__, err);
+		goto out;
+	}
+
+	ufs_qcom_phy_qmp_20nm_advertise_quirks(phy_common);
+
 	phy_set_drvdata(generic_phy, phy);
 
-	strlcpy(phy->common_cfg.name, UFS_PHY_NAME,
-		sizeof(phy->common_cfg.name));
+	strlcpy(phy_common->name, UFS_PHY_NAME, sizeof(phy_common->name));
 
 out:
 	return err;
 }
 
-static int ufs_qcom_phy_qmp_20nm_remove(struct platform_device *pdev)
-{
-	struct device *dev = &pdev->dev;
-	struct phy *generic_phy = to_phy(dev);
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
-	int err = 0;
-
-	err = ufs_qcom_phy_remove(generic_phy, ufs_qcom_phy);
-	if (err)
-		dev_err(dev, "%s: ufs_qcom_phy_remove failed = %d\n",
-			__func__, err);
-
-	return err;
-}
-
 static const struct of_device_id ufs_qcom_phy_qmp_20nm_of_match[] = {
 	{.compatible = "qcom,ufs-phy-qmp-20nm"},
 	{},
@@ -242,7 +228,6 @@ MODULE_DEVICE_TABLE(of, ufs_qcom_phy_qmp_20nm_of_match);
 
 static struct platform_driver ufs_qcom_phy_qmp_20nm_driver = {
 	.probe = ufs_qcom_phy_qmp_20nm_probe,
-	.remove = ufs_qcom_phy_qmp_20nm_remove,
 	.driver = {
 		.of_match_table = ufs_qcom_phy_qmp_20nm_of_match,
 		.name = "ufs_qcom_phy_qmp_20nm",


@@ -22,13 +22,6 @@
 #define VDDP_REF_CLK_MIN_UV	1200000
 #define VDDP_REF_CLK_MAX_UV	1200000
 
-static int __ufs_qcom_phy_init_vreg(struct phy *, struct ufs_qcom_phy_vreg *,
-		const char *, bool);
-static int ufs_qcom_phy_init_vreg(struct phy *, struct ufs_qcom_phy_vreg *,
-		const char *);
-static int ufs_qcom_phy_base_init(struct platform_device *pdev,
-		struct ufs_qcom_phy *phy_common);
-
 int ufs_qcom_phy_calibrate(struct ufs_qcom_phy *ufs_qcom_phy,
 			struct ufs_qcom_phy_calibration *tbl_A,
 			int tbl_size_A,
@@ -75,45 +68,6 @@ out:
 }
 EXPORT_SYMBOL_GPL(ufs_qcom_phy_calibrate);
 
-struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,
-				struct ufs_qcom_phy *common_cfg,
-				const struct phy_ops *ufs_qcom_phy_gen_ops,
-				struct ufs_qcom_phy_specific_ops *phy_spec_ops)
-{
-	int err;
-	struct device *dev = &pdev->dev;
-	struct phy *generic_phy = NULL;
-	struct phy_provider *phy_provider;
-
-	err = ufs_qcom_phy_base_init(pdev, common_cfg);
-	if (err) {
-		dev_err(dev, "%s: phy base init failed %d\n", __func__, err);
-		goto out;
-	}
-
-	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
-	if (IS_ERR(phy_provider)) {
-		err = PTR_ERR(phy_provider);
-		dev_err(dev, "%s: failed to register phy %d\n", __func__, err);
-		goto out;
-	}
-
-	generic_phy = devm_phy_create(dev, NULL, ufs_qcom_phy_gen_ops);
-	if (IS_ERR(generic_phy)) {
-		err = PTR_ERR(generic_phy);
-		dev_err(dev, "%s: failed to create phy %d\n", __func__, err);
-		generic_phy = NULL;
-		goto out;
-	}
-
-	common_cfg->phy_spec_ops = phy_spec_ops;
-	common_cfg->dev = dev;
-
-out:
-	return generic_phy;
-}
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_generic_probe);
-
 /*
  * This assumes the embedded phy structure inside generic_phy is of type
  * struct ufs_qcom_phy. In order to function properly it's crucial
@@ -154,13 +108,50 @@ int ufs_qcom_phy_base_init(struct platform_device *pdev,
 	return 0;
 }
 
-static int __ufs_qcom_phy_clk_get(struct phy *phy,
+struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,
+				struct ufs_qcom_phy *common_cfg,
+				const struct phy_ops *ufs_qcom_phy_gen_ops,
+				struct ufs_qcom_phy_specific_ops *phy_spec_ops)
+{
+	int err;
+	struct device *dev = &pdev->dev;
+	struct phy *generic_phy = NULL;
+	struct phy_provider *phy_provider;
+
+	err = ufs_qcom_phy_base_init(pdev, common_cfg);
+	if (err) {
+		dev_err(dev, "%s: phy base init failed %d\n", __func__, err);
+		goto out;
+	}
+
+	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+	if (IS_ERR(phy_provider)) {
+		err = PTR_ERR(phy_provider);
+		dev_err(dev, "%s: failed to register phy %d\n", __func__, err);
+		goto out;
+	}
+
+	generic_phy = devm_phy_create(dev, NULL, ufs_qcom_phy_gen_ops);
+	if (IS_ERR(generic_phy)) {
+		err = PTR_ERR(generic_phy);
+		dev_err(dev, "%s: failed to create phy %d\n", __func__, err);
+		generic_phy = NULL;
+		goto out;
+	}
+
+	common_cfg->phy_spec_ops = phy_spec_ops;
+	common_cfg->dev = dev;
+
+out:
+	return generic_phy;
+}
+EXPORT_SYMBOL_GPL(ufs_qcom_phy_generic_probe);
+
+static int __ufs_qcom_phy_clk_get(struct device *dev,
 			const char *name, struct clk **clk_out, bool err_print)
 {
 	struct clk *clk;
 	int err = 0;
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
-	struct device *dev = ufs_qcom_phy->dev;
 
 	clk = devm_clk_get(dev, name);
 	if (IS_ERR(clk)) {
@@ -174,42 +165,44 @@ static int __ufs_qcom_phy_clk_get(struct phy *phy,
 	return err;
 }
 
-static
-int ufs_qcom_phy_clk_get(struct phy *phy,
+static int ufs_qcom_phy_clk_get(struct device *dev,
 			const char *name, struct clk **clk_out)
 {
-	return __ufs_qcom_phy_clk_get(phy, name, clk_out, true);
+	return __ufs_qcom_phy_clk_get(dev, name, clk_out, true);
 }
 
-int
-ufs_qcom_phy_init_clks(struct phy *generic_phy,
-		       struct ufs_qcom_phy *phy_common)
+int ufs_qcom_phy_init_clks(struct ufs_qcom_phy *phy_common)
 {
 	int err;
 
-	err = ufs_qcom_phy_clk_get(generic_phy, "tx_iface_clk",
+	if (of_device_is_compatible(phy_common->dev->of_node,
+				"qcom,msm8996-ufs-phy-qmp-14nm"))
+		goto skip_txrx_clk;
+
+	err = ufs_qcom_phy_clk_get(phy_common->dev, "tx_iface_clk",
 				   &phy_common->tx_iface_clk);
 	if (err)
 		goto out;
 
-	err = ufs_qcom_phy_clk_get(generic_phy, "rx_iface_clk",
+	err = ufs_qcom_phy_clk_get(phy_common->dev, "rx_iface_clk",
 				   &phy_common->rx_iface_clk);
 	if (err)
 		goto out;
 
-	err = ufs_qcom_phy_clk_get(generic_phy, "ref_clk_src",
+	err = ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk_src",
 				   &phy_common->ref_clk_src);
 	if (err)
 		goto out;
 
+skip_txrx_clk:
 	/*
 	 * "ref_clk_parent" is optional hence don't abort init if it's not
 	 * found.
 	 */
-	__ufs_qcom_phy_clk_get(generic_phy, "ref_clk_parent",
+	__ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk_parent",
 				   &phy_common->ref_clk_parent, false);
 
-	err = ufs_qcom_phy_clk_get(generic_phy, "ref_clk",
+	err = ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk",
 				   &phy_common->ref_clk);
 
 out:
@@ -217,41 +210,14 @@ out:
 }
 EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_clks);
 
-int
-ufs_qcom_phy_init_vregulators(struct phy *generic_phy,
-			      struct ufs_qcom_phy *phy_common)
-{
-	int err;
-
-	err = ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vdda_pll,
-		"vdda-pll");
-	if (err)
-		goto out;
-
-	err = ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vdda_phy,
-		"vdda-phy");
-	if (err)
-		goto out;
-
-	/* vddp-ref-clk-* properties are optional */
-	__ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vddp_ref_clk,
-				 "vddp-ref-clk", true);
-out:
-	return err;
-}
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_vregulators);
-
-static int __ufs_qcom_phy_init_vreg(struct phy *phy,
+static int __ufs_qcom_phy_init_vreg(struct device *dev,
 	struct ufs_qcom_phy_vreg *vreg, const char *name, bool optional)
 {
 	int err = 0;
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
-	struct device *dev = ufs_qcom_phy->dev;
+
 	char prop_name[MAX_PROP_NAME];
 
-	vreg->name = kstrdup(name, GFP_KERNEL);
+	vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
 	if (!vreg->name) {
 		err = -ENOMEM;
 		goto out;
@@ -304,14 +270,36 @@ out:
 	return err;
 }
 
-static int ufs_qcom_phy_init_vreg(struct phy *phy,
+static int ufs_qcom_phy_init_vreg(struct device *dev,
 	struct ufs_qcom_phy_vreg *vreg, const char *name)
 {
-	return __ufs_qcom_phy_init_vreg(phy, vreg, name, false);
+	return __ufs_qcom_phy_init_vreg(dev, vreg, name, false);
 }
 
-int ufs_qcom_phy_cfg_vreg(struct phy *phy,
+static int ufs_qcom_phy_init_vregulators(struct ufs_qcom_phy *phy_common)
+{
+	int err;
+
+	err = ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vdda_pll,
+		"vdda-pll");
+	if (err)
+		goto out;
+
+	err = ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vdda_phy,
+		"vdda-phy");
+	if (err)
+		goto out;
+
+	/* vddp-ref-clk-* properties are optional */
+	__ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vddp_ref_clk,
+				 "vddp-ref-clk", true);
+out:
+	return err;
+}
+EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_vregulators);
+
+static int ufs_qcom_phy_cfg_vreg(struct device *dev,
 			  struct ufs_qcom_phy_vreg *vreg, bool on)
 {
 	int ret = 0;
@@ -319,10 +307,6 @@ int ufs_qcom_phy_cfg_vreg(struct phy *phy,
 	const char *name = vreg->name;
 	int min_uV;
 	int uA_load;
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
-	struct device *dev = ufs_qcom_phy->dev;
-
-	BUG_ON(!vreg);
 
 	if (regulator_count_voltages(reg) > 0) {
 		min_uV = on ? vreg->min_uV : 0;
@@ -350,18 +334,15 @@ out:
 	return ret;
 }
 
-static
-int ufs_qcom_phy_enable_vreg(struct phy *phy,
+static int ufs_qcom_phy_enable_vreg(struct device *dev,
 			struct ufs_qcom_phy_vreg *vreg)
 {
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
-	struct device *dev = ufs_qcom_phy->dev;
 	int ret = 0;
 
 	if (!vreg || vreg->enabled)
 		goto out;
 
-	ret = ufs_qcom_phy_cfg_vreg(phy, vreg, true);
+	ret = ufs_qcom_phy_cfg_vreg(dev, vreg, true);
 	if (ret) {
 		dev_err(dev, "%s: ufs_qcom_phy_cfg_vreg() failed, err=%d\n",
 			__func__, ret);
@@ -380,10 +361,9 @@ out:
 	return ret;
 }
 
-int ufs_qcom_phy_enable_ref_clk(struct phy *generic_phy)
+static int ufs_qcom_phy_enable_ref_clk(struct ufs_qcom_phy *phy)
 {
 	int ret = 0;
-	struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
 
 	if (phy->is_ref_clk_enabled)
 		goto out;
@@ -430,14 +410,10 @@ out_disable_src:
 out:
 	return ret;
 }
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_enable_ref_clk);
 
-static
-int ufs_qcom_phy_disable_vreg(struct phy *phy,
+static int ufs_qcom_phy_disable_vreg(struct device *dev,
 			struct ufs_qcom_phy_vreg *vreg)
 {
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
-	struct device *dev = ufs_qcom_phy->dev;
 	int ret = 0;
 
 	if (!vreg || !vreg->enabled || vreg->is_always_on)
@@ -447,7 +423,7 @@ int ufs_qcom_phy_disable_vreg(struct phy *phy,
 
 	if (!ret) {
 		/* ignore errors on applying disable config */
-		ufs_qcom_phy_cfg_vreg(phy, vreg, false);
+		ufs_qcom_phy_cfg_vreg(dev, vreg, false);
 		vreg->enabled = false;
 	} else {
 		dev_err(dev, "%s: %s disable failed, err=%d\n",
@@ -457,10 +433,8 @@ out:
 	return ret;
 }
 
-void ufs_qcom_phy_disable_ref_clk(struct phy *generic_phy)
+static void ufs_qcom_phy_disable_ref_clk(struct ufs_qcom_phy *phy)
 {
-	struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
-
 	if (phy->is_ref_clk_enabled) {
 		clk_disable_unprepare(phy->ref_clk);
 		/*
@@ -473,7 +447,6 @@ void ufs_qcom_phy_disable_ref_clk(struct phy *generic_phy)
 		phy->is_ref_clk_enabled = false;
 	}
 }
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_ref_clk);
 
 #define UFS_REF_CLK_EN	(1 << 5)
 
@@ -526,9 +499,8 @@ void ufs_qcom_phy_disable_dev_ref_clk(struct phy *generic_phy)
 EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_dev_ref_clk);
 
 /* Turn ON M-PHY RMMI interface clocks */
-int ufs_qcom_phy_enable_iface_clk(struct phy *generic_phy)
+static int ufs_qcom_phy_enable_iface_clk(struct ufs_qcom_phy *phy)
 {
-	struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
 	int ret = 0;
 
 	if (phy->is_iface_clk_enabled)
@@ -552,20 +524,16 @@ int ufs_qcom_phy_enable_iface_clk(struct phy *generic_phy)
 out:
 	return ret;
 }
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_enable_iface_clk);
 
 /* Turn OFF M-PHY RMMI interface clocks */
-void ufs_qcom_phy_disable_iface_clk(struct phy *generic_phy)
+void ufs_qcom_phy_disable_iface_clk(struct ufs_qcom_phy *phy)
 {
-	struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
-
 	if (phy->is_iface_clk_enabled) {
 		clk_disable_unprepare(phy->tx_iface_clk);
 		clk_disable_unprepare(phy->rx_iface_clk);
 		phy->is_iface_clk_enabled = false;
 	}
 }
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_iface_clk);
 
 int ufs_qcom_phy_start_serdes(struct phy *generic_phy)
 {
@@ -634,29 +602,6 @@ int ufs_qcom_phy_calibrate_phy(struct phy *generic_phy, bool is_rate_B)
 }
 EXPORT_SYMBOL_GPL(ufs_qcom_phy_calibrate_phy);
 
-int ufs_qcom_phy_remove(struct phy *generic_phy,
-			struct ufs_qcom_phy *ufs_qcom_phy)
-{
-	phy_power_off(generic_phy);
-	kfree(ufs_qcom_phy->vdda_pll.name);
-	kfree(ufs_qcom_phy->vdda_phy.name);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_remove);
-
-int ufs_qcom_phy_exit(struct phy *generic_phy)
-{
-	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
-
-	if (ufs_qcom_phy->is_powered_on)
-		phy_power_off(generic_phy);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(ufs_qcom_phy_exit);
-
 int ufs_qcom_phy_is_pcs_ready(struct phy *generic_phy)
 {
 	struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
@@ -678,7 +623,10 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
 	struct device *dev = phy_common->dev;
 	int err;
 
-	err = ufs_qcom_phy_enable_vreg(generic_phy, &phy_common->vdda_phy);
+	if (phy_common->is_powered_on)
+		return 0;
+
+	err = ufs_qcom_phy_enable_vreg(dev, &phy_common->vdda_phy);
 	if (err) {
 		dev_err(dev, "%s enable vdda_phy failed, err=%d\n",
 			__func__, err);
@@ -688,23 +636,30 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
 	phy_common->phy_spec_ops->power_control(phy_common, true);
 
 	/* vdda_pll also enables ref clock LDOs so enable it first */
-	err = ufs_qcom_phy_enable_vreg(generic_phy, &phy_common->vdda_pll);
+	err = ufs_qcom_phy_enable_vreg(dev, &phy_common->vdda_pll);
 	if (err) {
 		dev_err(dev, "%s enable vdda_pll failed, err=%d\n",
 			__func__, err);
 		goto out_disable_phy;
 	}
 
-	err = ufs_qcom_phy_enable_ref_clk(generic_phy);
+	err = ufs_qcom_phy_enable_iface_clk(phy_common);
 	if (err) {
-		dev_err(dev, "%s enable phy ref clock failed, err=%d\n",
+		dev_err(dev, "%s enable phy iface clock failed, err=%d\n",
 			__func__, err);
 		goto out_disable_pll;
 	}
 
+	err = ufs_qcom_phy_enable_ref_clk(phy_common);
+	if (err) {
+		dev_err(dev, "%s enable phy ref clock failed, err=%d\n",
+			__func__, err);
+		goto out_disable_iface_clk;
+	}
+
 	/* enable device PHY ref_clk pad rail */
 	if (phy_common->vddp_ref_clk.reg) {
-		err = ufs_qcom_phy_enable_vreg(generic_phy,
+		err = ufs_qcom_phy_enable_vreg(dev,
 					&phy_common->vddp_ref_clk);
 		if (err) {
 			dev_err(dev, "%s enable vddp_ref_clk failed, err=%d\n",
@@ -717,11 +672,13 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
 	goto out;
 
 out_disable_ref_clk:
-	ufs_qcom_phy_disable_ref_clk(generic_phy);
+	ufs_qcom_phy_disable_ref_clk(phy_common);
+out_disable_iface_clk:
+	ufs_qcom_phy_disable_iface_clk(phy_common);
out_disable_pll:
-	ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_pll);
+	ufs_qcom_phy_disable_vreg(dev, &phy_common->vdda_pll);
 out_disable_phy:
-	ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_phy);
+	ufs_qcom_phy_disable_vreg(dev, &phy_common->vdda_phy);
 out:
 	return err;
 }
@@ -731,15 +688,19 @@ int ufs_qcom_phy_power_off(struct phy *generic_phy)
 {
 	struct ufs_qcom_phy *phy_common = get_ufs_qcom_phy(generic_phy);
 
+	if (!phy_common->is_powered_on)
+		return 0;
+
 	phy_common->phy_spec_ops->power_control(phy_common, false);
 
 	if (phy_common->vddp_ref_clk.reg)
-		ufs_qcom_phy_disable_vreg(generic_phy,
+		ufs_qcom_phy_disable_vreg(phy_common->dev,
 					&phy_common->vddp_ref_clk);
-	ufs_qcom_phy_disable_ref_clk(generic_phy);
+	ufs_qcom_phy_disable_ref_clk(phy_common);
+	ufs_qcom_phy_disable_iface_clk(phy_common);
 
-	ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_pll);
-	ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_phy);
+	ufs_qcom_phy_disable_vreg(phy_common->dev, &phy_common->vdda_pll);
+	ufs_qcom_phy_disable_vreg(phy_common->dev, &phy_common->vdda_phy);
 
 	phy_common->is_powered_on = false;
 
 	return 0;


@@ -84,8 +84,8 @@ extern void zfcp_fc_link_test_work(struct work_struct *);
 extern void zfcp_fc_wka_ports_force_offline(struct zfcp_fc_wka_ports *);
 extern int zfcp_fc_gs_setup(struct zfcp_adapter *);
 extern void zfcp_fc_gs_destroy(struct zfcp_adapter *);
-extern int zfcp_fc_exec_bsg_job(struct fc_bsg_job *);
-extern int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *);
+extern int zfcp_fc_exec_bsg_job(struct bsg_job *);
+extern int zfcp_fc_timeout_bsg_job(struct bsg_job *);
 extern void zfcp_fc_sym_name_update(struct work_struct *);
 extern unsigned int zfcp_fc_port_scan_backoff(void);
 extern void zfcp_fc_conditional_port_scan(struct zfcp_adapter *);


@@ -13,6 +13,7 @@
 #include <linux/slab.h>
 #include <linux/utsname.h>
 #include <linux/random.h>
+#include <linux/bsg-lib.h>
 #include <scsi/fc/fc_els.h>
 #include <scsi/libfc.h>
 #include "zfcp_ext.h"
@@ -885,26 +886,30 @@ out_free:
 
 static void zfcp_fc_ct_els_job_handler(void *data)
 {
-	struct fc_bsg_job *job = data;
+	struct bsg_job *job = data;
 	struct zfcp_fsf_ct_els *zfcp_ct_els = job->dd_data;
 	struct fc_bsg_reply *jr = job->reply;
 
 	jr->reply_payload_rcv_len = job->reply_payload.payload_len;
 	jr->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
 	jr->result = zfcp_ct_els->status ? -EIO : 0;
-	job->job_done(job);
+	bsg_job_done(job, jr->result, jr->reply_payload_rcv_len);
 }
 
-static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct fc_bsg_job *job)
+static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct bsg_job *job)
 {
 	u32 preamble_word1;
 	u8 gs_type;
 	struct zfcp_adapter *adapter;
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_rport *rport = fc_bsg_to_rport(job);
+	struct Scsi_Host *shost;
 
-	preamble_word1 = job->request->rqst_data.r_ct.preamble_word1;
+	preamble_word1 = bsg_request->rqst_data.r_ct.preamble_word1;
 	gs_type = (preamble_word1 & 0xff000000) >> 24;
 
-	adapter = (struct zfcp_adapter *) job->shost->hostdata[0];
+	shost = rport ? rport_to_shost(rport) : fc_bsg_to_shost(job);
+	adapter = (struct zfcp_adapter *) shost->hostdata[0];
 
 	switch (gs_type) {
 	case FC_FST_ALIAS:
@@ -924,7 +929,7 @@ static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct bsg_job *job)
 
 static void zfcp_fc_ct_job_handler(void *data)
 {
-	struct fc_bsg_job *job = data;
+	struct bsg_job *job = data;
 	struct zfcp_fc_wka_port *wka_port;
 
 	wka_port = zfcp_fc_job_wka_port(job);
@@ -933,11 +938,12 @@ static void zfcp_fc_ct_job_handler(void *data)
 	zfcp_fc_ct_els_job_handler(data);
 }
 
-static int zfcp_fc_exec_els_job(struct fc_bsg_job *job,
+static int zfcp_fc_exec_els_job(struct bsg_job *job,
 				struct zfcp_adapter *adapter)
 {
 	struct zfcp_fsf_ct_els *els = job->dd_data;
-	struct fc_rport *rport = job->rport;
+	struct fc_rport *rport = fc_bsg_to_rport(job);
+	struct fc_bsg_request *bsg_request = job->request;
 	struct zfcp_port *port;
 	u32 d_id;
 
@@ -949,13 +955,13 @@ static int zfcp_fc_exec_els_job(struct bsg_job *job,
 		d_id = port->d_id;
 		put_device(&port->dev);
 	} else
-		d_id = ntoh24(job->request->rqst_data.h_els.port_id);
+		d_id = ntoh24(bsg_request->rqst_data.h_els.port_id);
 
 	els->handler = zfcp_fc_ct_els_job_handler;
 	return zfcp_fsf_send_els(adapter, d_id, els, job->req->timeout / HZ);
 }
 
-static int zfcp_fc_exec_ct_job(struct fc_bsg_job *job,
+static int zfcp_fc_exec_ct_job(struct bsg_job *job,
 			       struct zfcp_adapter *adapter)
 {
 	int ret;
@@ -978,13 +984,15 @@ static int zfcp_fc_exec_ct_job(struct bsg_job *job,
 	return ret;
 }
 
-int zfcp_fc_exec_bsg_job(struct fc_bsg_job *job)
+int zfcp_fc_exec_bsg_job(struct bsg_job *job)
 {
 	struct Scsi_Host *shost;
 	struct zfcp_adapter *adapter;
 	struct zfcp_fsf_ct_els *ct_els = job->dd_data;
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_rport *rport = fc_bsg_to_rport(job);
 
-	shost = job->rport ? rport_to_shost(job->rport) : job->shost;
+	shost = rport ? rport_to_shost(rport) : fc_bsg_to_shost(job);
 	adapter = (struct zfcp_adapter *)shost->hostdata[0];
 
 	if (!(atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_OPEN))
@@ -994,7 +1002,7 @@ int zfcp_fc_exec_bsg_job(struct bsg_job *job)
 	ct_els->resp = job->reply_payload.sg_list;
 	ct_els->handler_data = job;
 
-	switch (job->request->msgcode) {
+	switch (bsg_request->msgcode) {
 	case FC_BSG_RPT_ELS:
 	case FC_BSG_HST_ELS_NOLOGIN:
 		return zfcp_fc_exec_els_job(job, adapter);
@@ -1006,7 +1014,7 @@ int zfcp_fc_exec_bsg_job(struct bsg_job *job)
 	}
 }
 
-int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *job)
+int zfcp_fc_timeout_bsg_job(struct bsg_job *job)
 {
 	/* hardware tracks timeout, reset bsg timeout to not interfere */
 	return -EAGAIN;


@@ -263,6 +263,7 @@ config SCSI_SPI_ATTRS
 config SCSI_FC_ATTRS
 	tristate "FiberChannel Transport Attributes"
 	depends on SCSI && NET
+	select BLK_DEV_BSGLIB
 	select SCSI_NETLINK
 	help
 	  If you wish to export transport-specific information about
@@ -743,40 +744,18 @@ config SCSI_ISCI
 	  control unit found in the Intel(R) C600 series chipset.
 
 config SCSI_GENERIC_NCR5380
-	tristate "Generic NCR5380/53c400 SCSI PIO support"
-	depends on ISA && SCSI
+	tristate "Generic NCR5380/53c400 SCSI ISA card support"
+	depends on ISA && SCSI && HAS_IOPORT_MAP
 	select SCSI_SPI_ATTRS
 	---help---
-	  This is a driver for the old NCR 53c80 series of SCSI controllers
-	  on boards using PIO. Most boards such as the Trantor T130 fit this
-	  category, along with a large number of ISA 8bit controllers shipped
-	  for free with SCSI scanners. If you have a PAS16, T128 or DMX3191
-	  you should select the specific driver for that card rather than
-	  generic 5380 support.
-
-	  It is explained in section 3.8 of the SCSI-HOWTO, available from
-	  <http://www.tldp.org/docs.html#howto>.  If it doesn't work out
-	  of the box, you may have to change some settings in
-	  <file:drivers/scsi/g_NCR5380.h>.
+	  This is a driver for old ISA card SCSI controllers based on a
+	  NCR 5380, 53C80, 53C400, 53C400A, or DTC 436 device.
+	  Most boards such as the Trantor T130 fit this category, as do
+	  various 8-bit and 16-bit ISA cards bundled with SCSI scanners.
 
 	  To compile this driver as a module, choose M here: the
 	  module will be called g_NCR5380.
 
-config SCSI_GENERIC_NCR5380_MMIO
-	tristate "Generic NCR5380/53c400 SCSI MMIO support"
-	depends on ISA && SCSI
-	select SCSI_SPI_ATTRS
-	---help---
-	  This is a driver for the old NCR 53c80 series of SCSI controllers
-	  on boards using memory mapped I/O.
-
-	  It is explained in section 3.8 of the SCSI-HOWTO, available from
-	  <http://www.tldp.org/docs.html#howto>.  If it doesn't work out
-	  of the box, you may have to change some settings in
-	  <file:drivers/scsi/g_NCR5380.h>.
-
-	  To compile this driver as a module, choose M here: the
-	  module will be called g_NCR5380_mmio.
-
 config SCSI_IPS
 	tristate "IBM ServeRAID support"
 	depends on PCI && SCSI


@@ -74,7 +74,6 @@ obj-$(CONFIG_SCSI_ISCI)	+= isci/
 obj-$(CONFIG_SCSI_IPS)		+= ips.o
 obj-$(CONFIG_SCSI_FUTURE_DOMAIN)+= fdomain.o
 obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o
-obj-$(CONFIG_SCSI_GENERIC_NCR5380_MMIO) += g_NCR5380_mmio.o
 obj-$(CONFIG_SCSI_NCR53C406A)	+= NCR53c406a.o
 obj-$(CONFIG_SCSI_NCR_D700)	+= 53c700.o NCR_D700.o
 obj-$(CONFIG_SCSI_NCR_Q720)	+= NCR_Q720_mod.o


@@ -121,9 +121,10 @@
  *
  * Either real DMA *or* pseudo DMA may be implemented
  *
- * NCR5380_dma_write_setup(instance, src, count) - initialize
- * NCR5380_dma_read_setup(instance, dst, count) - initialize
- * NCR5380_dma_residual(instance); - residual count
+ * NCR5380_dma_xfer_len - determine size of DMA/PDMA transfer
+ * NCR5380_dma_send_setup - execute DMA/PDMA from memory to 5380
+ * NCR5380_dma_recv_setup - execute DMA/PDMA from 5380 to memory
+ * NCR5380_dma_residual - residual byte count
  *
  * The generic driver is initialized by calling NCR5380_init(instance),
  * after setting the appropriate host specific fields and ID. If the
@@ -178,7 +179,7 @@ static inline void initialize_SCp(struct scsi_cmnd *cmd)
 
 /**
  * NCR5380_poll_politely2 - wait for two chip register values
- * @instance: controller to poll
+ * @hostdata: host private data
  * @reg1: 5380 register to poll
 * @bit1: Bitmask to check
 * @val1: Expected value
@@ -195,18 +196,14 @@ static inline void initialize_SCp(struct scsi_cmnd *cmd)
 * Returns 0 if either or both event(s) occurred otherwise -ETIMEDOUT.
 */
 
-static int NCR5380_poll_politely2(struct Scsi_Host *instance,
-                                  int reg1, int bit1, int val1,
-                                  int reg2, int bit2, int val2, int wait)
+static int NCR5380_poll_politely2(struct NCR5380_hostdata *hostdata,
+                                  unsigned int reg1, u8 bit1, u8 val1,
+                                  unsigned int reg2, u8 bit2, u8 val2,
+                                  unsigned long wait)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
+	unsigned long n = hostdata->poll_loops;
 	unsigned long deadline = jiffies + wait;
-	unsigned long n;
-
-	/* Busy-wait for up to 10 ms */
-	n = min(10000U, jiffies_to_usecs(wait));
-	n *= hostdata->accesses_per_ms;
-	n /= 2000;
 
 	do {
 		if ((NCR5380_read(reg1) & bit1) == val1)
 			return 0;
@@ -288,6 +285,7 @@ mrs[] = {
 
 static void NCR5380_print(struct Scsi_Host *instance)
 {
+	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char status, data, basr, mr, icr, i;
 
 	data = NCR5380_read(CURRENT_SCSI_DATA_REG);
@@ -337,6 +335,7 @@ static struct {
 
 static void NCR5380_print_phase(struct Scsi_Host *instance)
 {
+	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char status;
 	int i;
 
@@ -441,14 +440,14 @@ static void prepare_info(struct Scsi_Host *instance)
 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 
 	snprintf(hostdata->info, sizeof(hostdata->info),
-	         "%s, io_port 0x%lx, n_io_port %d, "
-	         "base 0x%lx, irq %d, "
+	         "%s, irq %d, "
+	         "io_port 0x%lx, base 0x%lx, "
 	         "can_queue %d, cmd_per_lun %d, "
 	         "sg_tablesize %d, this_id %d, "
 	         "flags { %s%s%s}, "
 	         "options { %s} ",
-	         instance->hostt->name, instance->io_port, instance->n_io_port,
-	         instance->base, instance->irq,
+	         instance->hostt->name, instance->irq,
+	         hostdata->io_port, hostdata->base,
 	         instance->can_queue, instance->cmd_per_lun,
 	         instance->sg_tablesize, instance->this_id,
 	         hostdata->flags & FLAG_DMA_FIXUP ? "DMA_FIXUP " : "",
@@ -482,6 +481,7 @@ static int NCR5380_init(struct Scsi_Host *instance, int flags)
 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	int i;
 	unsigned long deadline;
+	unsigned long accesses_per_ms;
 
 	instance->max_lun = 7;
 
@@ -530,7 +530,8 @@ static int NCR5380_init(struct Scsi_Host *instance, int flags)
 		++i;
 		cpu_relax();
 	} while (time_is_after_jiffies(deadline));
-	hostdata->accesses_per_ms = i / 256;
+	accesses_per_ms = i / 256;
+	hostdata->poll_loops = NCR5380_REG_POLL_TIME * accesses_per_ms / 2;
 
 	return 0;
 }
@@ -560,7 +561,7 @@ static int NCR5380_maybe_reset_bus(struct Scsi_Host *instance)
 	case 3:
 	case 5:
 		shost_printk(KERN_ERR, instance, "SCSI bus busy, waiting up to five seconds\n");
-		NCR5380_poll_politely(instance,
+		NCR5380_poll_politely(hostdata,
 		                      STATUS_REG, SR_BSY, 0, 5 * HZ);
 		break;
 	case 2:
@@ -871,7 +872,7 @@ static void NCR5380_dma_complete(struct Scsi_Host *instance)
 	NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
 	NCR5380_read(RESET_PARITY_INTERRUPT_REG);
 
-	transferred = hostdata->dma_len - NCR5380_dma_residual(instance);
+	transferred = hostdata->dma_len - NCR5380_dma_residual(hostdata);
 	hostdata->dma_len = 0;
 
 	data = (unsigned char **)&hostdata->connected->SCp.ptr;
@@ -994,7 +995,7 @@ static irqreturn_t __maybe_unused NCR5380_intr(int irq, void *dev_id)
 		}
 		handled = 1;
 	} else {
-		shost_printk(KERN_NOTICE, instance, "interrupt without IRQ bit\n");
+		dsprintk(NDEBUG_INTR, instance, "interrupt without IRQ bit\n");
 #ifdef SUN3_SCSI_VME
 		dregs->csr |= CSR_DMA_ENABLE;
 #endif
@@ -1075,7 +1076,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
 	 */
 	spin_unlock_irq(&hostdata->lock);
 
-	err = NCR5380_poll_politely2(instance, MODE_REG, MR_ARBITRATE, 0,
+	err = NCR5380_poll_politely2(hostdata, MODE_REG, MR_ARBITRATE, 0,
 	                INITIATOR_COMMAND_REG, ICR_ARBITRATION_PROGRESS,
 	                               ICR_ARBITRATION_PROGRESS, HZ);
 	spin_lock_irq(&hostdata->lock);
@@ -1201,7 +1202,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
 	 * selection.
 	 */
 
-	err = NCR5380_poll_politely(instance, STATUS_REG, SR_BSY, SR_BSY,
+	err = NCR5380_poll_politely(hostdata, STATUS_REG, SR_BSY, SR_BSY,
 	                            msecs_to_jiffies(250));
 
 	if ((NCR5380_read(STATUS_REG) & (SR_SEL | SR_IO)) == (SR_SEL | SR_IO)) {
@@ -1247,7 +1248,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
 
 	/* Wait for start of REQ/ACK handshake */
 
-	err = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ);
+	err = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ);
 	spin_lock_irq(&hostdata->lock);
 	if (err < 0) {
 		shost_printk(KERN_ERR, instance, "select: REQ timeout\n");
@@ -1318,6 +1319,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 				unsigned char *phase, int *count,
 				unsigned char **data)
 {
+	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char p = *phase, tmp;
 	int c = *count;
 	unsigned char *d = *data;
@@ -1336,7 +1338,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 		 * valid
 		 */
 
-		if (NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ) < 0)
+		if (NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ) < 0)
 			break;
 
 		dsprintk(NDEBUG_HANDSHAKE, instance, "REQ asserted\n");
@@ -1381,7 +1383,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 			NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ACK);
 		}
 
-		if (NCR5380_poll_politely(instance,
+		if (NCR5380_poll_politely(hostdata,
 		                          STATUS_REG, SR_REQ, 0, 5 * HZ) < 0)
 			break;
 
@@ -1440,6 +1442,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 
 static void do_reset(struct Scsi_Host *instance)
 {
+	struct NCR5380_hostdata __maybe_unused *hostdata = shost_priv(instance);
 	unsigned long flags;
 
 	local_irq_save(flags);
@@ -1462,6 +1465,7 @@ static void do_reset(struct Scsi_Host *instance)
 
 static int do_abort(struct Scsi_Host *instance)
 {
+	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char *msgptr, phase, tmp;
int len; int len;
int rc; int rc;
@ -1479,7 +1483,7 @@ static int do_abort(struct Scsi_Host *instance)
* the target sees, so we just handshake. * the target sees, so we just handshake.
*/ */
rc = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ); rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ);
if (rc < 0) if (rc < 0)
goto timeout; goto timeout;
@ -1490,7 +1494,7 @@ static int do_abort(struct Scsi_Host *instance)
if (tmp != PHASE_MSGOUT) { if (tmp != PHASE_MSGOUT) {
NCR5380_write(INITIATOR_COMMAND_REG, NCR5380_write(INITIATOR_COMMAND_REG,
ICR_BASE | ICR_ASSERT_ATN | ICR_ASSERT_ACK); ICR_BASE | ICR_ASSERT_ATN | ICR_ASSERT_ACK);
rc = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, 0, 3 * HZ); rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, 0, 3 * HZ);
if (rc < 0) if (rc < 0)
goto timeout; goto timeout;
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ATN); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ATN);
@ -1575,9 +1579,9 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* starting the NCR. This is also the cleaner way for the TT. * starting the NCR. This is also the cleaner way for the TT.
*/ */
if (p & SR_IO) if (p & SR_IO)
result = NCR5380_dma_recv_setup(instance, d, c); result = NCR5380_dma_recv_setup(hostdata, d, c);
else else
result = NCR5380_dma_send_setup(instance, d, c); result = NCR5380_dma_send_setup(hostdata, d, c);
} }
/* /*
@ -1609,9 +1613,9 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* NCR access, else the DMA setup gets trashed! * NCR access, else the DMA setup gets trashed!
*/ */
if (p & SR_IO) if (p & SR_IO)
result = NCR5380_dma_recv_setup(instance, d, c); result = NCR5380_dma_recv_setup(hostdata, d, c);
else else
result = NCR5380_dma_send_setup(instance, d, c); result = NCR5380_dma_send_setup(hostdata, d, c);
} }
/* On failure, NCR5380_dma_xxxx_setup() returns a negative int. */ /* On failure, NCR5380_dma_xxxx_setup() returns a negative int. */
@ -1678,12 +1682,12 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* byte. * byte.
*/ */
if (NCR5380_poll_politely(instance, BUS_AND_STATUS_REG, if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_DRQ, BASR_DRQ, HZ) < 0) { BASR_DRQ, BASR_DRQ, HZ) < 0) {
result = -1; result = -1;
shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n"); shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n");
} }
if (NCR5380_poll_politely(instance, STATUS_REG, if (NCR5380_poll_politely(hostdata, STATUS_REG,
SR_REQ, 0, HZ) < 0) { SR_REQ, 0, HZ) < 0) {
result = -1; result = -1;
shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n"); shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n");
@ -1694,7 +1698,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* Wait for the last byte to be sent. If REQ is being asserted for * Wait for the last byte to be sent. If REQ is being asserted for
* the byte we're interested, we'll ACK it and it will go false. * the byte we're interested, we'll ACK it and it will go false.
*/ */
if (NCR5380_poll_politely2(instance, if (NCR5380_poll_politely2(hostdata,
BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ, BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, HZ) < 0) { BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, HZ) < 0) {
result = -1; result = -1;
@ -1751,22 +1755,26 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
NCR5380_dprint_phase(NDEBUG_INFORMATION, instance); NCR5380_dprint_phase(NDEBUG_INFORMATION, instance);
} }
#ifdef CONFIG_SUN3 #ifdef CONFIG_SUN3
if (phase == PHASE_CMDOUT) { if (phase == PHASE_CMDOUT &&
void *d; sun3_dma_setup_done != cmd) {
unsigned long count; int count;
if (!cmd->SCp.this_residual && cmd->SCp.buffers_residual) { if (!cmd->SCp.this_residual && cmd->SCp.buffers_residual) {
count = cmd->SCp.buffer->length; ++cmd->SCp.buffer;
d = sg_virt(cmd->SCp.buffer); --cmd->SCp.buffers_residual;
} else { cmd->SCp.this_residual = cmd->SCp.buffer->length;
count = cmd->SCp.this_residual; cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
d = cmd->SCp.ptr;
} }
if (sun3_dma_setup_done != cmd && count = sun3scsi_dma_xfer_len(hostdata, cmd);
sun3scsi_dma_xfer_len(count, cmd) > 0) {
sun3scsi_dma_setup(instance, d, count, if (count > 0) {
rq_data_dir(cmd->request)); if (rq_data_dir(cmd->request))
sun3scsi_dma_send_setup(hostdata,
cmd->SCp.ptr, count);
else
sun3scsi_dma_recv_setup(hostdata,
cmd->SCp.ptr, count);
sun3_dma_setup_done = cmd; sun3_dma_setup_done = cmd;
} }
#ifdef SUN3_SCSI_VME #ifdef SUN3_SCSI_VME
@ -1827,7 +1835,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
transfersize = 0; transfersize = 0;
if (!cmd->device->borken) if (!cmd->device->borken)
transfersize = NCR5380_dma_xfer_len(instance, cmd, phase); transfersize = NCR5380_dma_xfer_len(hostdata, cmd);
if (transfersize > 0) { if (transfersize > 0) {
len = transfersize; len = transfersize;
@ -2073,7 +2081,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
} /* switch(phase) */ } /* switch(phase) */
} else { } else {
spin_unlock_irq(&hostdata->lock); spin_unlock_irq(&hostdata->lock);
NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ); NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ);
spin_lock_irq(&hostdata->lock); spin_lock_irq(&hostdata->lock);
} }
} }
@ -2119,7 +2127,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
*/ */
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_BSY); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_BSY);
if (NCR5380_poll_politely(instance, if (NCR5380_poll_politely(hostdata,
STATUS_REG, SR_SEL, 0, 2 * HZ) < 0) { STATUS_REG, SR_SEL, 0, 2 * HZ) < 0) {
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
return; return;
@ -2130,7 +2138,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
* Wait for target to go into MSGIN. * Wait for target to go into MSGIN.
*/ */
if (NCR5380_poll_politely(instance, if (NCR5380_poll_politely(hostdata,
STATUS_REG, SR_REQ, SR_REQ, 2 * HZ) < 0) { STATUS_REG, SR_REQ, SR_REQ, 2 * HZ) < 0) {
do_abort(instance); do_abort(instance);
return; return;
@ -2204,22 +2212,25 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
} }
#ifdef CONFIG_SUN3 #ifdef CONFIG_SUN3
{ if (sun3_dma_setup_done != tmp) {
void *d; int count;
unsigned long count;
if (!tmp->SCp.this_residual && tmp->SCp.buffers_residual) { if (!tmp->SCp.this_residual && tmp->SCp.buffers_residual) {
count = tmp->SCp.buffer->length; ++tmp->SCp.buffer;
d = sg_virt(tmp->SCp.buffer); --tmp->SCp.buffers_residual;
} else { tmp->SCp.this_residual = tmp->SCp.buffer->length;
count = tmp->SCp.this_residual; tmp->SCp.ptr = sg_virt(tmp->SCp.buffer);
d = tmp->SCp.ptr;
} }
if (sun3_dma_setup_done != tmp && count = sun3scsi_dma_xfer_len(hostdata, tmp);
sun3scsi_dma_xfer_len(count, tmp) > 0) {
sun3scsi_dma_setup(instance, d, count, if (count > 0) {
rq_data_dir(tmp->request)); if (rq_data_dir(tmp->request))
sun3scsi_dma_send_setup(hostdata,
tmp->SCp.ptr, count);
else
sun3scsi_dma_recv_setup(hostdata,
tmp->SCp.ptr, count);
sun3_dma_setup_done = tmp; sun3_dma_setup_done = tmp;
} }
} }


@@ -219,27 +219,32 @@
 #define FLAG_TOSHIBA_DELAY 128 /* Allow for borken CD-ROMs */
 struct NCR5380_hostdata {
-NCR5380_implementation_fields; /* implementation specific */
-struct Scsi_Host *host; /* Host backpointer */
-unsigned char id_mask, id_higher_mask; /* 1 << id, all bits greater */
-unsigned char busy[8]; /* index = target, bit = lun */
-int dma_len; /* requested length of DMA */
-unsigned char last_message; /* last message OUT */
-struct scsi_cmnd *connected; /* currently connected cmnd */
-struct scsi_cmnd *selecting; /* cmnd to be connected */
-struct list_head unissued; /* waiting to be issued */
-struct list_head autosense; /* priority issue queue */
-struct list_head disconnected; /* waiting for reconnect */
-spinlock_t lock; /* protects this struct */
-int flags;
-struct scsi_eh_save ses;
-struct scsi_cmnd *sensing;
+NCR5380_implementation_fields; /* Board-specific data */
+u8 __iomem *io; /* Remapped 5380 address */
+u8 __iomem *pdma_io; /* Remapped PDMA address */
+unsigned long poll_loops; /* Register polling limit */
+spinlock_t lock; /* Protects this struct */
+struct scsi_cmnd *connected; /* Currently connected cmnd */
+struct list_head disconnected; /* Waiting for reconnect */
+struct Scsi_Host *host; /* SCSI host backpointer */
+struct workqueue_struct *work_q; /* SCSI host work queue */
+struct work_struct main_task; /* Work item for main loop */
+int flags; /* Board-specific quirks */
+int dma_len; /* Requested length of DMA */
+int read_overruns; /* Transfer size reduction for DMA erratum */
+unsigned long io_port; /* Device IO port */
+unsigned long base; /* Device base address */
+struct list_head unissued; /* Waiting to be issued */
+struct scsi_cmnd *selecting; /* Cmnd to be connected */
+struct list_head autosense; /* Priority cmnd queue */
+struct scsi_cmnd *sensing; /* Cmnd needing autosense */
+struct scsi_eh_save ses; /* Cmnd state saved for EH */
+unsigned char busy[8]; /* Index = target, bit = lun */
+unsigned char id_mask; /* 1 << Host ID */
+unsigned char id_higher_mask; /* All bits above id_mask */
+unsigned char last_message; /* Last Message Out */
+unsigned long region_size; /* Size of address/port range */
 char info[256];
-int read_overruns; /* number of bytes to cut from a
-* transfer to handle chip overruns */
-struct work_struct main_task;
-struct workqueue_struct *work_q;
-unsigned long accesses_per_ms; /* chip register accesses per ms */
 };
 #ifdef __KERNEL__
@@ -252,6 +257,9 @@ struct NCR5380_cmd {
 #define NCR5380_PIO_CHUNK_SIZE 256
+/* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */
+#define NCR5380_REG_POLL_TIME 15
 static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
 {
 return ((struct scsi_cmnd *)ncmd_ptr) - 1;
@@ -294,14 +302,45 @@ static void NCR5380_reselect(struct Scsi_Host *instance);
 static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *, struct scsi_cmnd *);
 static int NCR5380_transfer_dma(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
 static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
-static int NCR5380_poll_politely2(struct Scsi_Host *, int, int, int, int, int, int, int);
+static int NCR5380_poll_politely2(struct NCR5380_hostdata *,
+unsigned int, u8, u8,
+unsigned int, u8, u8, unsigned long);
-static inline int NCR5380_poll_politely(struct Scsi_Host *instance,
-int reg, int bit, int val, int wait)
+static inline int NCR5380_poll_politely(struct NCR5380_hostdata *hostdata,
+unsigned int reg, u8 bit, u8 val,
+unsigned long wait)
 {
-return NCR5380_poll_politely2(instance, reg, bit, val,
+if ((NCR5380_read(reg) & bit) == val)
+return 0;
+return NCR5380_poll_politely2(hostdata, reg, bit, val,
 reg, bit, val, wait);
 }
+static int NCR5380_dma_xfer_len(struct NCR5380_hostdata *,
+struct scsi_cmnd *);
+static int NCR5380_dma_send_setup(struct NCR5380_hostdata *,
+unsigned char *, int);
+static int NCR5380_dma_recv_setup(struct NCR5380_hostdata *,
+unsigned char *, int);
+static int NCR5380_dma_residual(struct NCR5380_hostdata *);
+static inline int NCR5380_dma_xfer_none(struct NCR5380_hostdata *hostdata,
+struct scsi_cmnd *cmd)
+{
+return 0;
+}
+static inline int NCR5380_dma_setup_none(struct NCR5380_hostdata *hostdata,
+unsigned char *data, int count)
+{
+return 0;
+}
+static inline int NCR5380_dma_residual_none(struct NCR5380_hostdata *hostdata)
+{
+return 0;
+}
 #endif /* __KERNEL__ */
 #endif /* NCR5380_H */


@@ -1246,7 +1246,6 @@ struct aac_dev
 u32 max_msix; /* max. MSI-X vectors */
 u32 vector_cap; /* MSI-X vector capab.*/
 int msi_enabled; /* MSI/MSI-X enabled */
-struct msix_entry msixentry[AAC_MAX_MSIX];
 struct aac_msix_ctx aac_msix[AAC_MAX_MSIX]; /* context */
 u8 adapter_shutdown;
 u32 handle_pci_error;


@@ -378,16 +378,12 @@ void aac_define_int_mode(struct aac_dev *dev)
 if (msi_count > AAC_MAX_MSIX)
 msi_count = AAC_MAX_MSIX;
-for (i = 0; i < msi_count; i++)
-dev->msixentry[i].entry = i;
 if (msi_count > 1 &&
 pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
 min_msix = 2;
-i = pci_enable_msix_range(dev->pdev,
-dev->msixentry,
-min_msix,
-msi_count);
+i = pci_alloc_irq_vectors(dev->pdev,
+min_msix, msi_count,
+PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
 if (i > 0) {
 dev->msi_enabled = 1;
 msi_count = i;
View File

@@ -2043,30 +2043,22 @@ int aac_acquire_irq(struct aac_dev *dev)
 int i;
 int j;
 int ret = 0;
-int cpu;
-cpu = cpumask_first(cpu_online_mask);
 if (!dev->sync_mode && dev->msi_enabled && dev->max_msix > 1) {
 for (i = 0; i < dev->max_msix; i++) {
 dev->aac_msix[i].vector_no = i;
 dev->aac_msix[i].dev = dev;
-if (request_irq(dev->msixentry[i].vector,
+if (request_irq(pci_irq_vector(dev->pdev, i),
 dev->a_ops.adapter_intr,
 0, "aacraid", &(dev->aac_msix[i]))) {
 printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
 dev->name, dev->id, i);
 for (j = 0 ; j < i ; j++)
-free_irq(dev->msixentry[j].vector,
+free_irq(pci_irq_vector(dev->pdev, j),
 &(dev->aac_msix[j]));
 pci_disable_msix(dev->pdev);
 ret = -1;
 }
-if (irq_set_affinity_hint(dev->msixentry[i].vector,
-get_cpu_mask(cpu))) {
-printk(KERN_ERR "%s%d: Failed to set IRQ affinity for cpu %d\n",
-dev->name, dev->id, cpu);
-}
-cpu = cpumask_next(cpu, cpu_online_mask);
 }
 } else {
 dev->aac_msix[0].vector_no = 0;
@@ -2096,16 +2088,9 @@ void aac_free_irq(struct aac_dev *dev)
 dev->pdev->device == PMC_DEVICE_S8 ||
 dev->pdev->device == PMC_DEVICE_S9) {
 if (dev->max_msix > 1) {
-for (i = 0; i < dev->max_msix; i++) {
-if (irq_set_affinity_hint(
-dev->msixentry[i].vector, NULL)) {
-printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
-dev->name, dev->id, cpu);
-}
-cpu = cpumask_next(cpu, cpu_online_mask);
-free_irq(dev->msixentry[i].vector,
+for (i = 0; i < dev->max_msix; i++)
+free_irq(pci_irq_vector(dev->pdev, i),
 &(dev->aac_msix[i]));
-}
 } else {
 free_irq(dev->pdev->irq, &(dev->aac_msix[0]));
 }


@@ -1071,7 +1071,6 @@ static struct scsi_host_template aac_driver_template = {
 static void __aac_shutdown(struct aac_dev * aac)
 {
 int i;
-int cpu;
 aac_send_shutdown(aac);
@@ -1087,24 +1086,13 @@ static void __aac_shutdown(struct aac_dev * aac)
 kthread_stop(aac->thread);
 }
 aac_adapter_disable_int(aac);
-cpu = cpumask_first(cpu_online_mask);
 if (aac->pdev->device == PMC_DEVICE_S6 ||
 aac->pdev->device == PMC_DEVICE_S7 ||
 aac->pdev->device == PMC_DEVICE_S8 ||
 aac->pdev->device == PMC_DEVICE_S9) {
 if (aac->max_msix > 1) {
 for (i = 0; i < aac->max_msix; i++) {
-if (irq_set_affinity_hint(
-aac->msixentry[i].vector,
-NULL)) {
-printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
-aac->name,
-aac->id,
-cpu);
-}
-cpu = cpumask_next(cpu,
-cpu_online_mask);
-free_irq(aac->msixentry[i].vector,
+free_irq(pci_irq_vector(aac->pdev, i),
 &(aac->aac_msix[i]));
 }
 } else {
@@ -1350,7 +1338,7 @@ static void aac_release_resources(struct aac_dev *aac)
 aac->pdev->device == PMC_DEVICE_S9) {
 if (aac->max_msix > 1) {
 for (i = 0; i < aac->max_msix; i++)
-free_irq(aac->msixentry[i].vector,
+free_irq(pci_irq_vector(aac->pdev, i),
 &(aac->aac_msix[i]));
 } else {
 free_irq(aac->pdev->irq, &(aac->aac_msix[0]));
@@ -1396,13 +1384,13 @@ static int aac_acquire_resources(struct aac_dev *dev)
 dev->aac_msix[i].vector_no = i;
 dev->aac_msix[i].dev = dev;
-if (request_irq(dev->msixentry[i].vector,
+if (request_irq(pci_irq_vector(dev->pdev, i),
 dev->a_ops.adapter_intr,
 0, "aacraid", &(dev->aac_msix[i]))) {
 printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
 name, instance, i);
 for (j = 0 ; j < i ; j++)
-free_irq(dev->msixentry[j].vector,
+free_irq(pci_irq_vector(dev->pdev, j),
 &(dev->aac_msix[j]));
 pci_disable_msix(dev->pdev);
 goto error_iounmap;


@@ -11030,6 +11030,9 @@ static int advansys_board_found(struct Scsi_Host *shost, unsigned int iop,
 ASC_DBG(2, "AdvInitGetConfig()\n");
 ret = AdvInitGetConfig(pdev, shost) ? -ENODEV : 0;
+#else
+share_irq = 0;
+ret = -ENODEV;
 #endif /* CONFIG_PCI */
 }


@@ -228,8 +228,11 @@ static int asd_init_scbs(struct asd_ha_struct *asd_ha)
 bitmap_bytes = (asd_ha->seq.tc_index_bitmap_bits+7)/8;
 bitmap_bytes = BITS_TO_LONGS(bitmap_bytes*8)*sizeof(unsigned long);
 asd_ha->seq.tc_index_bitmap = kzalloc(bitmap_bytes, GFP_KERNEL);
-if (!asd_ha->seq.tc_index_bitmap)
+if (!asd_ha->seq.tc_index_bitmap) {
+kfree(asd_ha->seq.tc_index_array);
+asd_ha->seq.tc_index_array = NULL;
 return -ENOMEM;
+}
 spin_lock_init(&seq->tc_index_lock);


@@ -629,7 +629,6 @@ struct AdapterControlBlock
 struct pci_dev * pdev;
 struct Scsi_Host * host;
 unsigned long vir2phy_offset;
-struct msix_entry entries[ARCMST_NUM_MSIX_VECTORS];
 /* Offset is used in making arc cdb physical to virtual calculations */
 uint32_t outbound_int_enable;
 uint32_t cdb_phyaddr_hi32;
@@ -671,8 +670,6 @@ struct AdapterControlBlock
 /* iop init */
 #define ACB_F_ABORT 0x0200
 #define ACB_F_FIRMWARE_TRAP 0x0400
-#define ACB_F_MSI_ENABLED 0x1000
-#define ACB_F_MSIX_ENABLED 0x2000
 struct CommandControlBlock * pccb_pool[ARCMSR_MAX_FREECCB_NUM];
 /* used for memory free */
 struct list_head ccb_free_list;
@@ -725,7 +722,7 @@ struct AdapterControlBlock
 atomic_t rq_map_token;
 atomic_t ante_token_value;
 uint32_t maxOutstanding;
-int msix_vector_count;
+int vector_count;
 };/* HW_DEVICE_EXTENSION */
 /*
 *******************************************************************************


@@ -720,51 +720,39 @@ static void arcmsr_message_isr_bh_fn(struct work_struct *work)
 static int
 arcmsr_request_irq(struct pci_dev *pdev, struct AdapterControlBlock *acb)
 {
-int i, j, r;
-struct msix_entry entries[ARCMST_NUM_MSIX_VECTORS];
+unsigned long flags;
+int nvec, i;
-for (i = 0; i < ARCMST_NUM_MSIX_VECTORS; i++)
-entries[i].entry = i;
-r = pci_enable_msix_range(pdev, entries, 1, ARCMST_NUM_MSIX_VECTORS);
-if (r < 0)
-goto msi_int;
-acb->msix_vector_count = r;
-for (i = 0; i < r; i++) {
-if (request_irq(entries[i].vector,
-arcmsr_do_interrupt, 0, "arcmsr", acb)) {
-pr_warn("arcmsr%d: request_irq =%d failed!\n",
-acb->host->host_no, entries[i].vector);
-for (j = 0 ; j < i ; j++)
-free_irq(entries[j].vector, acb);
-pci_disable_msix(pdev);
-goto msi_int;
-}
-acb->entries[i] = entries[i];
-}
-acb->acb_flags |= ACB_F_MSIX_ENABLED;
+nvec = pci_alloc_irq_vectors(pdev, 1, ARCMST_NUM_MSIX_VECTORS,
+PCI_IRQ_MSIX);
+if (nvec > 0) {
 pr_info("arcmsr%d: msi-x enabled\n", acb->host->host_no);
-return SUCCESS;
-msi_int:
-if (pci_enable_msi_exact(pdev, 1) < 0)
-goto legacy_int;
-if (request_irq(pdev->irq, arcmsr_do_interrupt,
-IRQF_SHARED, "arcmsr", acb)) {
-pr_warn("arcmsr%d: request_irq =%d failed!\n",
-acb->host->host_no, pdev->irq);
-pci_disable_msi(pdev);
-goto legacy_int;
-}
-acb->acb_flags |= ACB_F_MSI_ENABLED;
-pr_info("arcmsr%d: msi enabled\n", acb->host->host_no);
-return SUCCESS;
-legacy_int:
-if (request_irq(pdev->irq, arcmsr_do_interrupt,
-IRQF_SHARED, "arcmsr", acb)) {
-pr_warn("arcmsr%d: request_irq = %d failed!\n",
-acb->host->host_no, pdev->irq);
+flags = 0;
+} else {
+nvec = pci_alloc_irq_vectors(pdev, 1, 1,
+PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+if (nvec < 1)
 return FAILED;
+flags = IRQF_SHARED;
 }
+acb->vector_count = nvec;
+for (i = 0; i < nvec; i++) {
+if (request_irq(pci_irq_vector(pdev, i), arcmsr_do_interrupt,
+flags, "arcmsr", acb)) {
+pr_warn("arcmsr%d: request_irq =%d failed!\n",
+acb->host->host_no, pci_irq_vector(pdev, i));
+goto out_free_irq;
+}
+}
 return SUCCESS;
+out_free_irq:
+while (--i >= 0)
+free_irq(pci_irq_vector(pdev, i), acb);
+pci_free_irq_vectors(pdev);
+return FAILED;
 }
 static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -886,15 +874,9 @@ static void arcmsr_free_irq(struct pci_dev *pdev,
 {
 int i;
-if (acb->acb_flags & ACB_F_MSI_ENABLED) {
-free_irq(pdev->irq, acb);
-pci_disable_msi(pdev);
-} else if (acb->acb_flags & ACB_F_MSIX_ENABLED) {
-for (i = 0; i < acb->msix_vector_count; i++)
-free_irq(acb->entries[i].vector, acb);
-pci_disable_msix(pdev);
-} else
-free_irq(pdev->irq, acb);
+for (i = 0; i < acb->vector_count; i++)
+free_irq(pci_irq_vector(pdev, i), acb);
+pci_free_irq_vectors(pdev);
 }
 static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)


@@ -14,49 +14,48 @@
 #include <scsi/scsi_host.h>
 #define priv(host) ((struct NCR5380_hostdata *)(host)->hostdata)
-#define NCR5380_read(reg) cumanascsi_read(instance, reg)
-#define NCR5380_write(reg, value) cumanascsi_write(instance, reg, value)
+#define NCR5380_read(reg) cumanascsi_read(hostdata, reg)
+#define NCR5380_write(reg, value) cumanascsi_write(hostdata, reg, value)
-#define NCR5380_dma_xfer_len(instance, cmd, phase) (cmd->transfersize)
+#define NCR5380_dma_xfer_len cumanascsi_dma_xfer_len
 #define NCR5380_dma_recv_setup cumanascsi_pread
 #define NCR5380_dma_send_setup cumanascsi_pwrite
-#define NCR5380_dma_residual(instance) (0)
+#define NCR5380_dma_residual NCR5380_dma_residual_none
 #define NCR5380_intr cumanascsi_intr
 #define NCR5380_queue_command cumanascsi_queue_command
 #define NCR5380_info cumanascsi_info
 #define NCR5380_implementation_fields \
-unsigned ctrl; \
-void __iomem *base; \
-void __iomem *dma
+unsigned ctrl
+struct NCR5380_hostdata;
+static u8 cumanascsi_read(struct NCR5380_hostdata *, unsigned int);
+static void cumanascsi_write(struct NCR5380_hostdata *, unsigned int, u8);
 #include "../NCR5380.h"
-void cumanascsi_setup(char *str, int *ints)
-{
-}
 #define CTRL 0x16fc
 #define STAT 0x2004
 #define L(v) (((v)<<16)|((v) & 0x0000ffff))
 #define H(v) (((v)>>16)|((v) & 0xffff0000))
-static inline int cumanascsi_pwrite(struct Scsi_Host *host,
+static inline int cumanascsi_pwrite(struct NCR5380_hostdata *hostdata,
 unsigned char *addr, int len)
 {
 unsigned long *laddr;
-void __iomem *dma = priv(host)->dma + 0x2000;
+u8 __iomem *base = hostdata->io;
+u8 __iomem *dma = hostdata->pdma_io + 0x2000;
 if(!len) return 0;
-writeb(0x02, priv(host)->base + CTRL);
+writeb(0x02, base + CTRL);
 laddr = (unsigned long *)addr;
 while(len >= 32)
 {
 unsigned int status;
 unsigned long v;
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(!(status & 0x40))
@@ -75,12 +74,12 @@ static inline int cumanascsi_pwrite(struct NCR5380_hostdata *hostdata,
 }
 addr = (unsigned char *)laddr;
-writeb(0x12, priv(host)->base + CTRL);
+writeb(0x12, base + CTRL);
 while(len > 0)
 {
 unsigned int status;
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(status & 0x40)
@@ -90,7 +89,7 @@ static inline int cumanascsi_pwrite(struct NCR5380_hostdata *hostdata,
 break;
 }
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(status & 0x40)
@@ -101,27 +100,28 @@ static inline int cumanascsi_pwrite(struct NCR5380_hostdata *hostdata,
 }
 }
 end:
-writeb(priv(host)->ctrl | 0x40, priv(host)->base + CTRL);
+writeb(hostdata->ctrl | 0x40, base + CTRL);
 if (len)
 return -1;
 return 0;
 }
-static inline int cumanascsi_pread(struct Scsi_Host *host,
+static inline int cumanascsi_pread(struct NCR5380_hostdata *hostdata,
 unsigned char *addr, int len)
 {
 unsigned long *laddr;
-void __iomem *dma = priv(host)->dma + 0x2000;
+u8 __iomem *base = hostdata->io;
+u8 __iomem *dma = hostdata->pdma_io + 0x2000;
 if(!len) return 0;
-writeb(0x00, priv(host)->base + CTRL);
+writeb(0x00, base + CTRL);
 laddr = (unsigned long *)addr;
 while(len >= 32)
 {
 unsigned int status;
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(!(status & 0x40))
@@ -140,12 +140,12 @@ static inline int cumanascsi_pread(struct NCR5380_hostdata *hostdata,
 }
 addr = (unsigned char *)laddr;
-writeb(0x10, priv(host)->base + CTRL);
+writeb(0x10, base + CTRL);
 while(len > 0)
 {
 unsigned int status;
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(status & 0x40)
@@ -155,7 +155,7 @@ static inline int cumanascsi_pread(struct NCR5380_hostdata *hostdata,
 break;
 }
-status = readb(priv(host)->base + STAT);
+status = readb(base + STAT);
 if(status & 0x80)
 goto end;
 if(status & 0x40)
@@ -166,37 +166,45 @@ static inline int cumanascsi_pread(struct NCR5380_hostdata *hostdata,
 }
 }
 end:
-writeb(priv(host)->ctrl | 0x40, priv(host)->base + CTRL);
+writeb(hostdata->ctrl | 0x40, base + CTRL);
 if (len)
 return -1;
 return 0;
 }
-static unsigned char cumanascsi_read(struct Scsi_Host *host, unsigned int reg)
+static int cumanascsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
+struct scsi_cmnd *cmd)
 {
-void __iomem *base = priv(host)->base;
-unsigned char val;
+return cmd->transfersize;
+}
+static u8 cumanascsi_read(struct NCR5380_hostdata *hostdata,
+unsigned int reg)
+{
+u8 __iomem *base = hostdata->io;
+u8 val;
 writeb(0, base + CTRL);
 val = readb(base + 0x2100 + (reg << 2));
-priv(host)->ctrl = 0x40;
+hostdata->ctrl = 0x40;
 writeb(0x40, base + CTRL);
 return val;
 }
-static void cumanascsi_write(struct Scsi_Host *host, unsigned int reg, unsigned int value)
+static void cumanascsi_write(struct NCR5380_hostdata *hostdata,
+unsigned int reg, u8 value)
 {
-void __iomem *base = priv(host)->base;
+u8 __iomem *base = hostdata->io;
 writeb(0, base + CTRL);
 writeb(value, base + 0x2100 + (reg << 2));
-priv(host)->ctrl = 0x40;
+hostdata->ctrl = 0x40;
 writeb(0x40, base + CTRL);
 }
@@ -235,11 +243,11 @@ static int cumanascsi1_probe(struct expansion_card *ec,
 goto out_release;
 }
-priv(host)->base = ioremap(ecard_resource_start(ec, ECARD_RES_IOCSLOW),
+priv(host)->io = ioremap(ecard_resource_start(ec, ECARD_RES_IOCSLOW),
 ecard_resource_len(ec, ECARD_RES_IOCSLOW));
-priv(host)->dma = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
+priv(host)->pdma_io = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
 ecard_resource_len(ec, ECARD_RES_MEMC));
-if (!priv(host)->base || !priv(host)->dma) {
+if (!priv(host)->io || !priv(host)->pdma_io) {
 ret = -ENOMEM;
 goto out_unmap;
 }
@@ -253,7 +261,7 @@ static int cumanascsi1_probe(struct expansion_card *ec,
 NCR5380_maybe_reset_bus(host);
 priv(host)->ctrl = 0;
-writeb(0, priv(host)->base + CTRL);
+writeb(0, priv(host)->io + CTRL);
 ret = request_irq(host->irq, cumanascsi_intr, 0,
 "CumanaSCSI-1", host);
@@ -275,8 +283,8 @@ static int cumanascsi1_probe(struct expansion_card *ec,
 out_exit:
 NCR5380_exit(host);
 out_unmap:
-iounmap(priv(host)->base);
-iounmap(priv(host)->dma);
+iounmap(priv(host)->io);
+iounmap(priv(host)->pdma_io);
 scsi_host_put(host);
 out_release:
 ecard_release_resources(ec);
@@ -287,15 +295,17 @@ static int cumanascsi1_probe(struct expansion_card *ec,
 static void cumanascsi1_remove(struct expansion_card *ec)
{ {
struct Scsi_Host *host = ecard_get_drvdata(ec); struct Scsi_Host *host = ecard_get_drvdata(ec);
void __iomem *base = priv(host)->io;
void __iomem *dma = priv(host)->pdma_io;
ecard_set_drvdata(ec, NULL); ecard_set_drvdata(ec, NULL);
scsi_remove_host(host); scsi_remove_host(host);
free_irq(host->irq, host); free_irq(host->irq, host);
NCR5380_exit(host); NCR5380_exit(host);
iounmap(priv(host)->base);
iounmap(priv(host)->dma);
scsi_host_put(host); scsi_host_put(host);
iounmap(base);
iounmap(dma);
ecard_release_resources(ec); ecard_release_resources(ec);
} }


@@ -16,21 +16,18 @@

 #define priv(host)			((struct NCR5380_hostdata *)(host)->hostdata)

-#define NCR5380_read(reg) \
-	readb(priv(instance)->base + ((reg) << 2))
-#define NCR5380_write(reg, value) \
-	writeb(value, priv(instance)->base + ((reg) << 2))
+#define NCR5380_read(reg)		readb(hostdata->io + ((reg) << 2))
+#define NCR5380_write(reg, value)	writeb(value, hostdata->io + ((reg) << 2))

-#define NCR5380_dma_xfer_len(instance, cmd, phase)	(0)
+#define NCR5380_dma_xfer_len		NCR5380_dma_xfer_none
 #define NCR5380_dma_recv_setup		oakscsi_pread
 #define NCR5380_dma_send_setup		oakscsi_pwrite
-#define NCR5380_dma_residual(instance)	(0)
+#define NCR5380_dma_residual		NCR5380_dma_residual_none

 #define NCR5380_queue_command		oakscsi_queue_command
 #define NCR5380_info			oakscsi_info

-#define NCR5380_implementation_fields \
-	void __iomem *base
+#define NCR5380_implementation_fields	/* none */

 #include "../NCR5380.h"
@@ -40,10 +37,10 @@
 #define STAT	((128 + 16) << 2)
 #define DATA	((128 + 8) << 2)

-static inline int oakscsi_pwrite(struct Scsi_Host *instance,
+static inline int oakscsi_pwrite(struct NCR5380_hostdata *hostdata,
                                  unsigned char *addr, int len)
 {
-	void __iomem *base = priv(instance)->base;
+	u8 __iomem *base = hostdata->io;

 printk("writing %p len %d\n",addr, len);
@@ -55,10 +52,11 @@ printk("writing %p len %d\n",addr, len);
 	return 0;
 }

-static inline int oakscsi_pread(struct Scsi_Host *instance,
+static inline int oakscsi_pread(struct NCR5380_hostdata *hostdata,
                                 unsigned char *addr, int len)
 {
-	void __iomem *base = priv(instance)->base;
+	u8 __iomem *base = hostdata->io;

 printk("reading %p len %d\n", addr, len);
 	while(len > 0)
 	{
@@ -133,15 +131,14 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
 		goto release;
 	}

-	priv(host)->base = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
-				   ecard_resource_len(ec, ECARD_RES_MEMC));
-	if (!priv(host)->base) {
+	priv(host)->io = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
+				 ecard_resource_len(ec, ECARD_RES_MEMC));
+	if (!priv(host)->io) {
 		ret = -ENOMEM;
 		goto unreg;
 	}

 	host->irq = NO_IRQ;
-	host->n_io_port = 255;

 	ret = NCR5380_init(host, FLAG_DMA_FIXUP | FLAG_LATE_DMA_SETUP);
 	if (ret)
@@ -159,7 +156,7 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
 out_exit:
 	NCR5380_exit(host);
 out_unmap:
-	iounmap(priv(host)->base);
+	iounmap(priv(host)->io);
 unreg:
 	scsi_host_put(host);
 release:
@@ -171,13 +168,14 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
 static void oakscsi_remove(struct expansion_card *ec)
 {
 	struct Scsi_Host *host = ecard_get_drvdata(ec);
+	void __iomem *base = priv(host)->io;

 	ecard_set_drvdata(ec, NULL);
 	scsi_remove_host(host);

 	NCR5380_exit(host);
-	iounmap(priv(host)->base);
 	scsi_host_put(host);
+	iounmap(base);
 	ecard_release_resources(ec);
 }


@@ -57,6 +57,9 @@
 #define NCR5380_implementation_fields	/* none */

+static u8 (*atari_scsi_reg_read)(unsigned int);
+static void (*atari_scsi_reg_write)(unsigned int, u8);
+
 #define NCR5380_read(reg)		atari_scsi_reg_read(reg)
 #define NCR5380_write(reg, value)	atari_scsi_reg_write(reg, value)
@@ -64,14 +67,10 @@
 #define NCR5380_abort			atari_scsi_abort
 #define NCR5380_info			atari_scsi_info

-#define NCR5380_dma_recv_setup(instance, data, count) \
-	atari_scsi_dma_setup(instance, data, count, 0)
-#define NCR5380_dma_send_setup(instance, data, count) \
-	atari_scsi_dma_setup(instance, data, count, 1)
-#define NCR5380_dma_residual(instance) \
-	atari_scsi_dma_residual(instance)
-#define NCR5380_dma_xfer_len(instance, cmd, phase) \
-	atari_dma_xfer_len(cmd->SCp.this_residual, cmd, !((phase) & SR_IO))
+#define NCR5380_dma_xfer_len		atari_scsi_dma_xfer_len
+#define NCR5380_dma_recv_setup		atari_scsi_dma_recv_setup
+#define NCR5380_dma_send_setup		atari_scsi_dma_send_setup
+#define NCR5380_dma_residual		atari_scsi_dma_residual

 #define NCR5380_acquire_dma_irq(instance)	falcon_get_lock(instance)
 #define NCR5380_release_dma_irq(instance)	falcon_release_lock()
@@ -126,9 +125,6 @@ static inline unsigned long SCSI_DMA_GETADR(void)
 static void atari_scsi_fetch_restbytes(void);

-static unsigned char (*atari_scsi_reg_read)(unsigned char reg);
-static void (*atari_scsi_reg_write)(unsigned char reg, unsigned char value);
-
 static unsigned long atari_dma_residual, atari_dma_startaddr;
 static short atari_dma_active;
 /* pointer to the dribble buffer */
@@ -457,15 +453,14 @@ static int __init atari_scsi_setup(char *str)
 __setup("atascsi=", atari_scsi_setup);
 #endif /* !MODULE */

-static unsigned long atari_scsi_dma_setup(struct Scsi_Host *instance,
+static unsigned long atari_scsi_dma_setup(struct NCR5380_hostdata *hostdata,
                                           void *data, unsigned long count,
                                           int dir)
 {
 	unsigned long addr = virt_to_phys(data);

-	dprintk(NDEBUG_DMA, "scsi%d: setting up dma, data = %p, phys = %lx, count = %ld, "
-	        "dir = %d\n", instance->host_no, data, addr, count, dir);
+	dprintk(NDEBUG_DMA, "scsi%d: setting up dma, data = %p, phys = %lx, count = %ld, dir = %d\n",
+	        hostdata->host->host_no, data, addr, count, dir);

 	if (!IS_A_TT() && !STRAM_ADDR(addr)) {
 		/* If we have a non-DMAable address on a Falcon, use the dribble
@@ -522,8 +517,19 @@ static unsigned long atari_scsi_dma_setup(struct Scsi_Host *instance,
 	return count;
 }

+static inline int atari_scsi_dma_recv_setup(struct NCR5380_hostdata *hostdata,
+                                            unsigned char *data, int count)
+{
+	return atari_scsi_dma_setup(hostdata, data, count, 0);
+}
+
+static inline int atari_scsi_dma_send_setup(struct NCR5380_hostdata *hostdata,
+                                            unsigned char *data, int count)
+{
+	return atari_scsi_dma_setup(hostdata, data, count, 1);
+}
+
-static long atari_scsi_dma_residual(struct Scsi_Host *instance)
+static int atari_scsi_dma_residual(struct NCR5380_hostdata *hostdata)
 {
 	return atari_dma_residual;
 }
@@ -564,10 +570,11 @@ static int falcon_classify_cmd(struct scsi_cmnd *cmd)
  * the overrun problem, so this question is academic :-)
  */

-static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
-                                        struct scsi_cmnd *cmd, int write_flag)
+static int atari_scsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
+                                   struct scsi_cmnd *cmd)
 {
-	unsigned long possible_len, limit;
+	int wanted_len = cmd->SCp.this_residual;
+	int possible_len, limit;

 	if (wanted_len < DMA_MIN_SIZE)
 		return 0;
@@ -604,7 +611,7 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
 	 * use the dribble buffer and thus can do only STRAM_BUFFER_SIZE bytes.
 	 */

-	if (write_flag) {
+	if (cmd->sc_data_direction == DMA_TO_DEVICE) {
 		/* Write operation can always use the DMA, but the transfer size must
 		 * be rounded up to the next multiple of 512 (atari_dma_setup() does
 		 * this).
@@ -644,8 +651,8 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
 		possible_len = limit;

 	if (possible_len != wanted_len)
-		dprintk(NDEBUG_DMA, "Sorry, must cut DMA transfer size to %ld bytes "
-			   "instead of %ld\n", possible_len, wanted_len);
+		dprintk(NDEBUG_DMA, "DMA transfer now %d bytes instead of %d\n",
+			possible_len, wanted_len);

 	return possible_len;
 }
@@ -658,26 +665,38 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
  * NCR5380_write call these functions via function pointers.
  */

-static unsigned char atari_scsi_tt_reg_read(unsigned char reg)
+static u8 atari_scsi_tt_reg_read(unsigned int reg)
 {
 	return tt_scsi_regp[reg * 2];
 }

-static void atari_scsi_tt_reg_write(unsigned char reg, unsigned char value)
+static void atari_scsi_tt_reg_write(unsigned int reg, u8 value)
 {
 	tt_scsi_regp[reg * 2] = value;
 }

-static unsigned char atari_scsi_falcon_reg_read(unsigned char reg)
+static u8 atari_scsi_falcon_reg_read(unsigned int reg)
 {
-	dma_wd.dma_mode_status= (u_short)(0x88 + reg);
-	return (u_char)dma_wd.fdc_acces_seccount;
+	unsigned long flags;
+	u8 result;
+
+	reg += 0x88;
+	local_irq_save(flags);
+	dma_wd.dma_mode_status = (u_short)reg;
+	result = (u8)dma_wd.fdc_acces_seccount;
+	local_irq_restore(flags);
+	return result;
 }

-static void atari_scsi_falcon_reg_write(unsigned char reg, unsigned char value)
+static void atari_scsi_falcon_reg_write(unsigned int reg, u8 value)
 {
-	dma_wd.dma_mode_status = (u_short)(0x88 + reg);
+	unsigned long flags;
+
+	reg += 0x88;
+	local_irq_save(flags);
+	dma_wd.dma_mode_status = (u_short)reg;
 	dma_wd.fdc_acces_seccount = (u_short)value;
+	local_irq_restore(flags);
 }


@@ -3049,8 +3049,10 @@ static int beiscsi_create_eqs(struct beiscsi_hba *phba,
 		eq_vaddress = pci_alloc_consistent(phba->pcidev,
 						   num_eq_pages * PAGE_SIZE,
 						   &paddr);
-		if (!eq_vaddress)
+		if (!eq_vaddress) {
+			ret = -ENOMEM;
 			goto create_eq_error;
+		}

 		mem->va = eq_vaddress;
 		ret = be_fill_queue(eq, phba->params.num_eq_entries,
@@ -3113,8 +3115,10 @@ static int beiscsi_create_cqs(struct beiscsi_hba *phba,
 		cq_vaddress = pci_alloc_consistent(phba->pcidev,
 						   num_cq_pages * PAGE_SIZE,
 						   &paddr);
-		if (!cq_vaddress)
+		if (!cq_vaddress) {
+			ret = -ENOMEM;
 			goto create_cq_error;
+		}

 		ret = be_fill_queue(cq, phba->params.num_cq_entries,
 				    sizeof(struct sol_cqe), cq_vaddress);


@@ -111,20 +111,24 @@ struct bfa_meminfo_s {
 	struct bfa_mem_kva_s kva_info;
 };

-/* BFA memory segment setup macros */
-#define bfa_mem_dma_setup(_meminfo, _dm_ptr, _seg_sz) do {	\
-	((bfa_mem_dma_t *)(_dm_ptr))->mem_len = (_seg_sz);	\
-	if (_seg_sz)						\
-		list_add_tail(&((bfa_mem_dma_t *)_dm_ptr)->qe,	\
-			      &(_meminfo)->dma_info.qe);	\
-} while (0)
+/* BFA memory segment setup helpers */
+static inline void bfa_mem_dma_setup(struct bfa_meminfo_s *meminfo,
+				     struct bfa_mem_dma_s *dm_ptr,
+				     size_t seg_sz)
+{
+	dm_ptr->mem_len = seg_sz;
+	if (seg_sz)
+		list_add_tail(&dm_ptr->qe, &meminfo->dma_info.qe);
+}

-#define bfa_mem_kva_setup(_meminfo, _kva_ptr, _seg_sz) do {	\
-	((bfa_mem_kva_t *)(_kva_ptr))->mem_len = (_seg_sz);	\
-	if (_seg_sz)						\
-		list_add_tail(&((bfa_mem_kva_t *)_kva_ptr)->qe,	\
-			      &(_meminfo)->kva_info.qe);	\
-} while (0)
+static inline void bfa_mem_kva_setup(struct bfa_meminfo_s *meminfo,
+				     struct bfa_mem_kva_s *kva_ptr,
+				     size_t seg_sz)
+{
+	kva_ptr->mem_len = seg_sz;
+	if (seg_sz)
+		list_add_tail(&kva_ptr->qe, &meminfo->kva_info.qe);
+}

 /* BFA dma memory segments iterator */
 #define bfa_mem_dma_sptr(_mod, _i)	(&(_mod)->dma_seg[(_i)])


@@ -3130,11 +3130,12 @@ bfad_iocmd_handler(struct bfad_s *bfad, unsigned int cmd, void *iocmd,
 }

 static int
-bfad_im_bsg_vendor_request(struct fc_bsg_job *job)
+bfad_im_bsg_vendor_request(struct bsg_job *job)
 {
-	uint32_t vendor_cmd = job->request->rqst_data.h_vendor.vendor_cmd[0];
-	struct bfad_im_port_s *im_port =
-			(struct bfad_im_port_s *) job->shost->hostdata[0];
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_bsg_reply *bsg_reply = job->reply;
+	uint32_t vendor_cmd = bsg_request->rqst_data.h_vendor.vendor_cmd[0];
+	struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job));
 	struct bfad_s *bfad = im_port->bfad;
 	struct request_queue *request_q = job->req->q;
 	void *payload_kbuf;
@@ -3175,18 +3176,19 @@ bfad_im_bsg_vendor_request(struct bsg_job *job)
 	/* Fill the BSG job reply data */
 	job->reply_len = job->reply_payload.payload_len;
-	job->reply->reply_payload_rcv_len = job->reply_payload.payload_len;
-	job->reply->result = rc;
+	bsg_reply->reply_payload_rcv_len = job->reply_payload.payload_len;
+	bsg_reply->result = rc;

-	job->job_done(job);
+	bsg_job_done(job, bsg_reply->result,
+		       bsg_reply->reply_payload_rcv_len);
 	return rc;
 error:
 	/* free the command buffer */
 	kfree(payload_kbuf);
 out:
-	job->reply->result = rc;
+	bsg_reply->result = rc;
 	job->reply_len = sizeof(uint32_t);
-	job->reply->reply_payload_rcv_len = 0;
+	bsg_reply->reply_payload_rcv_len = 0;
 	return rc;
 }
@@ -3312,7 +3314,7 @@ bfad_fcxp_free_mem(struct bfad_s *bfad, struct bfad_buf_info *buf_base,
 }

 int
-bfad_fcxp_bsg_send(struct fc_bsg_job *job, struct bfad_fcxp *drv_fcxp,
+bfad_fcxp_bsg_send(struct bsg_job *job, struct bfad_fcxp *drv_fcxp,
 		   bfa_bsg_fcpt_t *bsg_fcpt)
 {
 	struct bfa_fcxp_s *hal_fcxp;
@@ -3352,27 +3354,28 @@ bfad_fcxp_bsg_send(struct bsg_job *job, struct bfad_fcxp *drv_fcxp,
 }

 int
-bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
+bfad_im_bsg_els_ct_request(struct bsg_job *job)
 {
 	struct bfa_bsg_data *bsg_data;
-	struct bfad_im_port_s *im_port =
-			(struct bfad_im_port_s *) job->shost->hostdata[0];
+	struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job));
 	struct bfad_s *bfad = im_port->bfad;
 	bfa_bsg_fcpt_t *bsg_fcpt;
 	struct bfad_fcxp *drv_fcxp;
 	struct bfa_fcs_lport_s *fcs_port;
 	struct bfa_fcs_rport_s *fcs_rport;
-	uint32_t command_type = job->request->msgcode;
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_bsg_reply *bsg_reply = job->reply;
+	uint32_t command_type = bsg_request->msgcode;
 	unsigned long flags;
 	struct bfad_buf_info *rsp_buf_info;
 	void *req_kbuf = NULL, *rsp_kbuf = NULL;
 	int rc = -EINVAL;

 	job->reply_len = sizeof(uint32_t);	/* Atleast uint32_t reply_len */
-	job->reply->reply_payload_rcv_len = 0;
+	bsg_reply->reply_payload_rcv_len = 0;

 	/* Get the payload passed in from userspace */
-	bsg_data = (struct bfa_bsg_data *) (((char *)job->request) +
+	bsg_data = (struct bfa_bsg_data *) (((char *)bsg_request) +
 					    sizeof(struct fc_bsg_request));
 	if (bsg_data == NULL)
 		goto out;
@@ -3517,13 +3520,13 @@ bfad_im_bsg_els_ct_request(struct bsg_job *job)
 	/* fill the job->reply data */
 	if (drv_fcxp->req_status == BFA_STATUS_OK) {
 		job->reply_len = drv_fcxp->rsp_len;
-		job->reply->reply_payload_rcv_len = drv_fcxp->rsp_len;
-		job->reply->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
+		bsg_reply->reply_payload_rcv_len = drv_fcxp->rsp_len;
+		bsg_reply->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
 	} else {
-		job->reply->reply_payload_rcv_len =
+		bsg_reply->reply_payload_rcv_len =
 			sizeof(struct fc_bsg_ctels_reply);
 		job->reply_len = sizeof(uint32_t);
-		job->reply->reply_data.ctels_reply.status =
+		bsg_reply->reply_data.ctels_reply.status =
 			FC_CTELS_STATUS_REJECT;
 	}
@@ -3549,20 +3552,23 @@ out_free_mem:
 	kfree(bsg_fcpt);
 	kfree(drv_fcxp);
 out:
-	job->reply->result = rc;
+	bsg_reply->result = rc;

 	if (rc == BFA_STATUS_OK)
-		job->job_done(job);
+		bsg_job_done(job, bsg_reply->result,
+			       bsg_reply->reply_payload_rcv_len);

 	return rc;
 }

 int
-bfad_im_bsg_request(struct fc_bsg_job *job)
+bfad_im_bsg_request(struct bsg_job *job)
 {
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_bsg_reply *bsg_reply = job->reply;
 	uint32_t rc = BFA_STATUS_OK;

-	switch (job->request->msgcode) {
+	switch (bsg_request->msgcode) {
 	case FC_BSG_HST_VENDOR:
 		/* Process BSG HST Vendor requests */
 		rc = bfad_im_bsg_vendor_request(job);
@@ -3575,8 +3581,8 @@ bfad_im_bsg_request(struct bsg_job *job)
 		rc = bfad_im_bsg_els_ct_request(job);
 		break;
 	default:
-		job->reply->result = rc = -EINVAL;
-		job->reply->reply_payload_rcv_len = 0;
+		bsg_reply->result = rc = -EINVAL;
+		bsg_reply->reply_payload_rcv_len = 0;
 		break;
 	}
@@ -3584,7 +3590,7 @@ bfad_im_bsg_request(struct bsg_job *job)
 }

 int
-bfad_im_bsg_timeout(struct fc_bsg_job *job)
+bfad_im_bsg_timeout(struct bsg_job *job)
 {
 	/* Don't complete the BSG job request - return -EAGAIN
 	 * to reset bsg job timeout : for ELS/CT pass thru we


@@ -166,8 +166,8 @@ extern struct device_attribute *bfad_im_vport_attrs[];

 irqreturn_t bfad_intx(int irq, void *dev_id);

-int bfad_im_bsg_request(struct fc_bsg_job *job);
-int bfad_im_bsg_timeout(struct fc_bsg_job *job);
+int bfad_im_bsg_request(struct bsg_job *job);
+int bfad_im_bsg_timeout(struct bsg_job *job);

 /*
  * Macro to set the SCSI device sdev_bflags - sdev_bflags are used by the


@@ -970,7 +970,6 @@ static int bnx2fc_libfc_config(struct fc_lport *lport)
 	       sizeof(struct libfc_function_template));
 	fc_elsct_init(lport);
 	fc_exch_init(lport);
-	fc_rport_init(lport);
 	fc_disc_init(lport);
 	fc_disc_config(lport, lport);
 	return 0;


@@ -80,7 +80,6 @@ static void bnx2fc_offload_session(struct fcoe_port *port,
 				   struct bnx2fc_rport *tgt,
 				   struct fc_rport_priv *rdata)
 {
-	struct fc_lport *lport = rdata->local_port;
 	struct fc_rport *rport = rdata->rport;
 	struct bnx2fc_interface *interface = port->priv;
 	struct bnx2fc_hba *hba = interface->hba;
@@ -160,7 +159,7 @@ ofld_err:
 tgt_init_err:
 	if (tgt->fcoe_conn_id != -1)
 		bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id);
-	lport->tt.rport_logoff(rdata);
+	fc_rport_logoff(rdata);
 }

 void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)


@@ -1411,7 +1411,7 @@ static int init_act_open(struct cxgbi_sock *csk)
 	csk->atid = cxgb4_alloc_atid(lldi->tids, csk);
 	if (csk->atid < 0) {
 		pr_err("%s, NO atid available.\n", ndev->name);
-		return -EINVAL;
+		goto rel_resource_without_clip;
 	}
 	cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
 	cxgbi_sock_get(csk);


@@ -19,6 +19,7 @@
 #include <linux/rwsem.h>
 #include <linux/types.h>
 #include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_device.h>

 extern const struct file_operations cxlflash_cxl_fops;
@@ -62,11 +63,6 @@ static inline void check_sizes(void)
 /* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */
 #define CMD_BUFSIZE	SIZE_4K

-/* flags in IOA status area for host use */
-#define B_DONE		0x01
-#define B_ERROR		0x02	/* set with B_DONE */
-#define B_TIMEOUT	0x04	/* set with B_DONE & B_ERROR */
-
 enum cxlflash_lr_state {
 	LINK_RESET_INVALID,
 	LINK_RESET_REQUIRED,
@@ -132,12 +128,9 @@ struct cxlflash_cfg {
 struct afu_cmd {
 	struct sisl_ioarcb rcb;	/* IOARCB (cache line aligned) */
 	struct sisl_ioasa sa;	/* IOASA must follow IOARCB */
-	spinlock_t slock;
-	struct completion cevent;
-	char *buf;		/* per command buffer */
 	struct afu *parent;
-	int slot;
-	atomic_t free;
+	struct scsi_cmnd *scp;
+	struct completion cevent;

 	u8 cmd_tmf:1;
@@ -147,19 +140,31 @@ struct afu_cmd {
 	 */
 } __aligned(cache_line_size());

+static inline struct afu_cmd *sc_to_afuc(struct scsi_cmnd *sc)
+{
+	return PTR_ALIGN(scsi_cmd_priv(sc), __alignof__(struct afu_cmd));
+}
+
+static inline struct afu_cmd *sc_to_afucz(struct scsi_cmnd *sc)
+{
+	struct afu_cmd *afuc = sc_to_afuc(sc);
+
+	memset(afuc, 0, sizeof(*afuc));
+	return afuc;
+}
+
 struct afu {
 	/* Stuff requiring alignment go first. */

 	u64 rrq_entry[NUM_RRQ_ENTRY];	/* 2K RRQ */

-	/*
-	 * Command & data for AFU commands.
-	 */
-	struct afu_cmd cmd[CXLFLASH_NUM_CMDS];
-
 	/* Beware of alignment till here. Preferably introduce new
 	 * fields after this point
 	 */

+	int (*send_cmd)(struct afu *, struct afu_cmd *);
+	void (*context_reset)(struct afu_cmd *);
+
 	/* AFU HW */
 	struct cxl_ioctl_start_work work;
 	struct cxlflash_afu_map __iomem *afu_map;	/* entire MMIO map */
@@ -173,10 +178,10 @@ struct afu {
 	u64 *hrrq_end;
 	u64 *hrrq_curr;
 	bool toggle;
-	bool read_room;
-	atomic64_t room;
+	atomic_t cmds_active;	/* Number of currently active AFU commands */
+	s64 room;
+	spinlock_t rrin_slock;	/* Lock to rrin queuing and cmd_room updates */
 	u64 hb;
-	u32 cmd_couts;		/* Number of command checkouts */
 	u32 internal_lun;	/* User-desired LUN mode for this AFU */

 	char version[16];


@@ -254,8 +254,14 @@ int cxlflash_manage_lun(struct scsi_device *sdev,
 		if (lli->parent->mode != MODE_NONE)
 			rc = -EBUSY;
 		else {
+			/*
+			 * Clean up local LUN for this port and reset table
+			 * tracking when no more references exist.
+			 */
 			sdev->hostdata = NULL;
 			lli->port_sel &= ~CHAN2PORT(chan);
+			if (lli->port_sel == 0U)
+				lli->in_table = false;
 		}
 	}


@ -34,67 +34,6 @@ MODULE_AUTHOR("Manoj N. Kumar <manoj@linux.vnet.ibm.com>");
MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>"); MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
/**
* cmd_checkout() - checks out an AFU command
* @afu: AFU to checkout from.
*
* Commands are checked out in a round-robin fashion. Note that since
* the command pool is larger than the hardware queue, the majority of
* times we will only loop once or twice before getting a command. The
* buffer and CDB within the command are initialized (zeroed) prior to
* returning.
*
* Return: The checked out command or NULL when command pool is empty.
*/
static struct afu_cmd *cmd_checkout(struct afu *afu)
{
int k, dec = CXLFLASH_NUM_CMDS;
struct afu_cmd *cmd;
while (dec--) {
k = (afu->cmd_couts++ & (CXLFLASH_NUM_CMDS - 1));
cmd = &afu->cmd[k];
if (!atomic_dec_if_positive(&cmd->free)) {
pr_devel("%s: returning found index=%d cmd=%p\n",
__func__, cmd->slot, cmd);
memset(cmd->buf, 0, CMD_BUFSIZE);
memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb));
return cmd;
}
}
return NULL;
}
/**
* cmd_checkin() - checks in an AFU command
* @cmd: AFU command to checkin.
*
* Safe to pass commands that have already been checked in. Several
* internal tracking fields are reset as part of the checkin. Note
* that these are intentionally reset prior to toggling the free bit
* to avoid clobbering values in the event that the command is checked
* out right away.
*/
static void cmd_checkin(struct afu_cmd *cmd)
{
cmd->rcb.scp = NULL;
cmd->rcb.timeout = 0;
cmd->sa.ioasc = 0;
cmd->cmd_tmf = false;
cmd->sa.host_use[0] = 0; /* clears both completion and retry bytes */
if (unlikely(atomic_inc_return(&cmd->free) != 1)) {
pr_err("%s: Freeing cmd (%d) that is not in use!\n",
__func__, cmd->slot);
return;
}
pr_devel("%s: released cmd %p index=%d\n", __func__, cmd, cmd->slot);
}
/** /**
* process_cmd_err() - command error handler * process_cmd_err() - command error handler
* @cmd: AFU command that experienced the error. * @cmd: AFU command that experienced the error.
@ -212,7 +151,7 @@ static void process_cmd_err(struct afu_cmd *cmd, struct scsi_cmnd *scp)
* *
* Prepares and submits command that has either completed or timed out to * Prepares and submits command that has either completed or timed out to
* the SCSI stack. Checks AFU command back into command pool for non-internal * the SCSI stack. Checks AFU command back into command pool for non-internal
* (rcb.scp populated) commands. * (cmd->scp populated) commands.
*/ */
static void cmd_complete(struct afu_cmd *cmd) static void cmd_complete(struct afu_cmd *cmd)
{ {
@ -222,19 +161,14 @@ static void cmd_complete(struct afu_cmd *cmd)
struct cxlflash_cfg *cfg = afu->parent; struct cxlflash_cfg *cfg = afu->parent;
bool cmd_is_tmf; bool cmd_is_tmf;
spin_lock_irqsave(&cmd->slock, lock_flags); if (cmd->scp) {
cmd->sa.host_use_b[0] |= B_DONE; scp = cmd->scp;
spin_unlock_irqrestore(&cmd->slock, lock_flags);
if (cmd->rcb.scp) {
scp = cmd->rcb.scp;
if (unlikely(cmd->sa.ioasc)) if (unlikely(cmd->sa.ioasc))
process_cmd_err(cmd, scp); process_cmd_err(cmd, scp);
else else
scp->result = (DID_OK << 16); scp->result = (DID_OK << 16);
cmd_is_tmf = cmd->cmd_tmf; cmd_is_tmf = cmd->cmd_tmf;
cmd_checkin(cmd); /* Don't use cmd after here */
pr_debug_ratelimited("%s: calling scsi_done scp=%p result=%X " pr_debug_ratelimited("%s: calling scsi_done scp=%p result=%X "
"ioasc=%d\n", __func__, scp, scp->result, "ioasc=%d\n", __func__, scp, scp->result,
@@ -254,49 +188,19 @@ static void cmd_complete(struct afu_cmd *cmd)
 }

 /**
- * context_reset() - timeout handler for AFU commands
+ * context_reset_ioarrin() - reset command owner context via IOARRIN register
  * @cmd:    AFU command that timed out.
- *
- * Sends a reset to the AFU.
  */
-static void context_reset(struct afu_cmd *cmd)
+static void context_reset_ioarrin(struct afu_cmd *cmd)
 {
     int nretry = 0;
     u64 rrin = 0x1;
-    u64 room = 0;
     struct afu *afu = cmd->parent;
-    ulong lock_flags;
+    struct cxlflash_cfg *cfg = afu->parent;
+    struct device *dev = &cfg->dev->dev;

     pr_debug("%s: cmd=%p\n", __func__, cmd);

-    spin_lock_irqsave(&cmd->slock, lock_flags);
-
-    /* Already completed? */
-    if (cmd->sa.host_use_b[0] & B_DONE) {
-        spin_unlock_irqrestore(&cmd->slock, lock_flags);
-        return;
-    }
-
-    cmd->sa.host_use_b[0] |= (B_DONE | B_ERROR | B_TIMEOUT);
-    spin_unlock_irqrestore(&cmd->slock, lock_flags);
-
-    /*
-     * We really want to send this reset at all costs, so spread
-     * out wait time on successive retries for available room.
-     */
-    do {
-        room = readq_be(&afu->host_map->cmd_room);
-        atomic64_set(&afu->room, room);
-        if (room)
-            goto write_rrin;
-        udelay(1 << nretry);
-    } while (nretry++ < MC_ROOM_RETRY_CNT);
-
-    pr_err("%s: no cmd_room to send reset\n", __func__);
-    return;
-
-write_rrin:
-    nretry = 0;
     writeq_be(rrin, &afu->host_map->ioarrin);
     do {
         rrin = readq_be(&afu->host_map->ioarrin);
@@ -305,93 +209,81 @@ write_rrin:
         /* Double delay each time */
         udelay(1 << nretry);
     } while (nretry++ < MC_ROOM_RETRY_CNT);
+    dev_dbg(dev, "%s: returning rrin=0x%016llX nretry=%d\n",
+        __func__, rrin, nretry);
 }
 /**
- * send_cmd() - sends an AFU command
+ * send_cmd_ioarrin() - sends an AFU command via IOARRIN register
  * @afu:    AFU associated with the host.
  * @cmd:    AFU command to send.
  *
  * Return:
  *    0 on success, SCSI_MLQUEUE_HOST_BUSY on failure
  */
-static int send_cmd(struct afu *afu, struct afu_cmd *cmd)
+static int send_cmd_ioarrin(struct afu *afu, struct afu_cmd *cmd)
 {
     struct cxlflash_cfg *cfg = afu->parent;
     struct device *dev = &cfg->dev->dev;
-    int nretry = 0;
     int rc = 0;
-    u64 room;
-    long newval;
+    s64 room;
+    ulong lock_flags;

     /*
-     * This routine is used by critical users such an AFU sync and to
-     * send a task management function (TMF). Thus we want to retry a
-     * bit before returning an error. To avoid the performance penalty
-     * of MMIO, we spread the update of 'room' over multiple commands.
+     * To avoid the performance penalty of MMIO, spread the update of
+     * 'room' over multiple commands.
      */
-retry:
-    newval = atomic64_dec_if_positive(&afu->room);
-    if (!newval) {
-        do {
-            room = readq_be(&afu->host_map->cmd_room);
-            atomic64_set(&afu->room, room);
-            if (room)
-                goto write_ioarrin;
-            udelay(1 << nretry);
-        } while (nretry++ < MC_ROOM_RETRY_CNT);
-
-        dev_err(dev, "%s: no cmd_room to send 0x%X\n",
-               __func__, cmd->rcb.cdb[0]);
-
-        goto no_room;
-    } else if (unlikely(newval < 0)) {
-        /* This should be rare. i.e. Only if two threads race and
-         * decrement before the MMIO read is done. In this case
-         * just benefit from the other thread having updated
-         * afu->room.
-         */
-        if (nretry++ < MC_ROOM_RETRY_CNT) {
-            udelay(1 << nretry);
-            goto retry;
-        }
-
-        goto no_room;
-    }
-
-write_ioarrin:
+    spin_lock_irqsave(&afu->rrin_slock, lock_flags);
+    if (--afu->room < 0) {
+        room = readq_be(&afu->host_map->cmd_room);
+        if (room <= 0) {
+            dev_dbg_ratelimited(dev, "%s: no cmd_room to send "
+                        "0x%02X, room=0x%016llX\n",
+                        __func__, cmd->rcb.cdb[0], room);
+            afu->room = 0;
+            rc = SCSI_MLQUEUE_HOST_BUSY;
+            goto out;
+        }
+        afu->room = room - 1;
+    }
+
     writeq_be((u64)&cmd->rcb, &afu->host_map->ioarrin);
 out:
+    spin_unlock_irqrestore(&afu->rrin_slock, lock_flags);
     pr_devel("%s: cmd=%p len=%d ea=%p rc=%d\n", __func__, cmd,
          cmd->rcb.data_len, (void *)cmd->rcb.data_ea, rc);
     return rc;
-
-no_room:
-    afu->read_room = true;
-    kref_get(&cfg->afu->mapcount);
-    schedule_work(&cfg->work_q);
-    rc = SCSI_MLQUEUE_HOST_BUSY;
-    goto out;
 }
 /**
  * wait_resp() - polls for a response or timeout to a sent AFU command
  * @afu:    AFU associated with the host.
  * @cmd:    AFU command that was sent.
+ *
+ * Return:
+ *    0 on success, -1 on timeout/error
  */
-static void wait_resp(struct afu *afu, struct afu_cmd *cmd)
+static int wait_resp(struct afu *afu, struct afu_cmd *cmd)
 {
+    int rc = 0;
     ulong timeout = msecs_to_jiffies(cmd->rcb.timeout * 2 * 1000);

     timeout = wait_for_completion_timeout(&cmd->cevent, timeout);
-    if (!timeout)
-        context_reset(cmd);
+    if (!timeout) {
+        afu->context_reset(cmd);
+        rc = -1;
+    }

-    if (unlikely(cmd->sa.ioasc != 0))
+    if (unlikely(cmd->sa.ioasc != 0)) {
         pr_err("%s: CMD 0x%X failed, IOASC: flags 0x%X, afu_rc 0x%X, "
                "scsi_rc 0x%X, fc_rc 0x%X\n", __func__, cmd->rcb.cdb[0],
                cmd->sa.rc.flags, cmd->sa.rc.afu_rc, cmd->sa.rc.scsi_rc,
                cmd->sa.rc.fc_rc);
+        rc = -1;
+    }
+
+    return rc;
 }
 /**
@@ -405,24 +297,15 @@ static void wait_resp(struct afu *afu, struct afu_cmd *cmd)
  */
 static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd)
 {
-    struct afu_cmd *cmd;
-
     u32 port_sel = scp->device->channel + 1;
-    short lflag = 0;
     struct Scsi_Host *host = scp->device->host;
     struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata;
+    struct afu_cmd *cmd = sc_to_afucz(scp);
     struct device *dev = &cfg->dev->dev;
     ulong lock_flags;
     int rc = 0;
     ulong to;

-    cmd = cmd_checkout(afu);
-    if (unlikely(!cmd)) {
-        dev_err(dev, "%s: could not get a free command\n", __func__);
-        rc = SCSI_MLQUEUE_HOST_BUSY;
-        goto out;
-    }
-
     /* When Task Management Function is active do not send another */
     spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
     if (cfg->tmf_active)
@@ -430,28 +313,23 @@ static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd)
                       !cfg->tmf_active,
                       cfg->tmf_slock);
     cfg->tmf_active = true;
-    cmd->cmd_tmf = true;
     spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);

+    cmd->scp = scp;
+    cmd->parent = afu;
+    cmd->cmd_tmf = true;
+
     cmd->rcb.ctx_id = afu->ctx_hndl;
+    cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
     cmd->rcb.port_sel = port_sel;
     cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
-
-    lflag = SISL_REQ_FLAGS_TMF_CMD;
-
     cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID |
-                  SISL_REQ_FLAGS_SUP_UNDERRUN | lflag);
-
-    /* Stash the scp in the reserved field, for reuse during interrupt */
-    cmd->rcb.scp = scp;
-
-    /* Copy the CDB from the cmd passed in */
+                  SISL_REQ_FLAGS_SUP_UNDERRUN |
+                  SISL_REQ_FLAGS_TMF_CMD);
     memcpy(cmd->rcb.cdb, &tmfcmd, sizeof(tmfcmd));

-    /* Send the command */
-    rc = send_cmd(afu, cmd);
+    rc = afu->send_cmd(afu, cmd);
     if (unlikely(rc)) {
-        cmd_checkin(cmd);
         spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
         cfg->tmf_active = false;
         spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
@@ -507,12 +385,12 @@ static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
     struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata;
     struct afu *afu = cfg->afu;
     struct device *dev = &cfg->dev->dev;
-    struct afu_cmd *cmd;
+    struct afu_cmd *cmd = sc_to_afucz(scp);
+    struct scatterlist *sg = scsi_sglist(scp);
     u32 port_sel = scp->device->channel + 1;
-    int nseg, i, ncount;
-    struct scatterlist *sg;
+    u16 req_flags = SISL_REQ_FLAGS_SUP_UNDERRUN;
     ulong lock_flags;
-    short lflag = 0;
+    int nseg = 0;
     int rc = 0;
     int kref_got = 0;
@@ -552,55 +430,38 @@ static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
         break;
     }

-    cmd = cmd_checkout(afu);
-    if (unlikely(!cmd)) {
-        dev_err(dev, "%s: could not get a free command\n", __func__);
-        rc = SCSI_MLQUEUE_HOST_BUSY;
-        goto out;
-    }
-
     kref_get(&cfg->afu->mapcount);
     kref_got = 1;

-    cmd->rcb.ctx_id = afu->ctx_hndl;
-    cmd->rcb.port_sel = port_sel;
-    cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
-
-    if (scp->sc_data_direction == DMA_TO_DEVICE)
-        lflag = SISL_REQ_FLAGS_HOST_WRITE;
-    else
-        lflag = SISL_REQ_FLAGS_HOST_READ;
-
-    cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID |
-                  SISL_REQ_FLAGS_SUP_UNDERRUN | lflag);
-
-    /* Stash the scp in the reserved field, for reuse during interrupt */
-    cmd->rcb.scp = scp;
-
-    nseg = scsi_dma_map(scp);
-    if (unlikely(nseg < 0)) {
-        dev_err(dev, "%s: Fail DMA map! nseg=%d\n",
-            __func__, nseg);
-        rc = SCSI_MLQUEUE_HOST_BUSY;
-        goto out;
-    }
+    if (likely(sg)) {
+        nseg = scsi_dma_map(scp);
+        if (unlikely(nseg < 0)) {
+            dev_err(dev, "%s: Fail DMA map!\n", __func__);
+            rc = SCSI_MLQUEUE_HOST_BUSY;
+            goto out;
+        }

-    ncount = scsi_sg_count(scp);
-    scsi_for_each_sg(scp, sg, ncount, i) {
         cmd->rcb.data_len = sg_dma_len(sg);
         cmd->rcb.data_ea = sg_dma_address(sg);
     }

-    /* Copy the CDB from the scsi_cmnd passed in */
+    cmd->scp = scp;
+    cmd->parent = afu;
+
+    cmd->rcb.ctx_id = afu->ctx_hndl;
+    cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
+    cmd->rcb.port_sel = port_sel;
+    cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
+
+    if (scp->sc_data_direction == DMA_TO_DEVICE)
+        req_flags |= SISL_REQ_FLAGS_HOST_WRITE;
+    cmd->rcb.req_flags = req_flags;
+
     memcpy(cmd->rcb.cdb, scp->cmnd, sizeof(cmd->rcb.cdb));

-    /* Send the command */
-    rc = send_cmd(afu, cmd);
-    if (unlikely(rc)) {
-        cmd_checkin(cmd);
+    rc = afu->send_cmd(afu, cmd);
+    if (unlikely(rc))
         scsi_dma_unmap(scp);
-    }

 out:
     if (kref_got)
         kref_put(&afu->mapcount, afu_unmap);
@@ -628,17 +489,9 @@ static void cxlflash_wait_for_pci_err_recovery(struct cxlflash_cfg *cfg)
  */
 static void free_mem(struct cxlflash_cfg *cfg)
 {
-    int i;
-    char *buf = NULL;
     struct afu *afu = cfg->afu;

     if (cfg->afu) {
-        for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
-            buf = afu->cmd[i].buf;
-            if (!((u64)buf & (PAGE_SIZE - 1)))
-                free_page((ulong)buf);
-        }
-
         free_pages((ulong)afu, get_order(sizeof(struct afu)));
         cfg->afu = NULL;
     }
@@ -650,30 +503,16 @@ static void free_mem(struct cxlflash_cfg *cfg)
  *
  * Safe to call with AFU in a partially allocated/initialized state.
  *
- * Cleans up all state associated with the command queue, and unmaps
+ * Waits for any active internal AFU commands to timeout and then unmaps
  * the MMIO space.
- *
- *  - complete() will take care of commands we initiated (they'll be checked
- *  in as part of the cleanup that occurs after the completion)
- *
- *  - cmd_checkin() will take care of entries that we did not initiate and that
- *  have not (and will not) complete because they are sitting on a [now stale]
- *  hardware queue
  */
 static void stop_afu(struct cxlflash_cfg *cfg)
 {
-    int i;
     struct afu *afu = cfg->afu;
-    struct afu_cmd *cmd;

     if (likely(afu)) {
-        for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
-            cmd = &afu->cmd[i];
-            complete(&cmd->cevent);
-            if (!atomic_read(&cmd->free))
-                cmd_checkin(cmd);
-        }
+        while (atomic_read(&afu->cmds_active))
+            ssleep(1);

         if (likely(afu->afu_map)) {
             cxl_psa_unmap((void __iomem *)afu->afu_map);
             afu->afu_map = NULL;
@@ -886,8 +725,6 @@ static void cxlflash_remove(struct pci_dev *pdev)
 static int alloc_mem(struct cxlflash_cfg *cfg)
 {
     int rc = 0;
-    int i;
-    char *buf = NULL;
     struct device *dev = &cfg->dev->dev;

     /* AFU is ~12k, i.e. only one 64k page or up to four 4k pages */
@@ -901,25 +738,6 @@ static int alloc_mem(struct cxlflash_cfg *cfg)
     }
     cfg->afu->parent = cfg;
     cfg->afu->afu_map = NULL;
-
-    for (i = 0; i < CXLFLASH_NUM_CMDS; buf += CMD_BUFSIZE, i++) {
-        if (!((u64)buf & (PAGE_SIZE - 1))) {
-            buf = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
-            if (unlikely(!buf)) {
-                dev_err(dev,
-                    "%s: Allocate command buffers fail!\n",
-                    __func__);
-                rc = -ENOMEM;
-                free_mem(cfg);
-                goto out;
-            }
-        }
-
-        cfg->afu->cmd[i].buf = buf;
-        atomic_set(&cfg->afu->cmd[i].free, 1);
-        cfg->afu->cmd[i].slot = i;
-    }
-
 out:
     return rc;
 }
@@ -1549,13 +1367,6 @@ static void init_pcr(struct cxlflash_cfg *cfg)

     /* Program the Endian Control for the master context */
     writeq_be(SISL_ENDIAN_CTRL, &afu->host_map->endian_ctrl);
-
-    /* Initialize cmd fields that never change */
-    for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
-        afu->cmd[i].rcb.ctx_id = afu->ctx_hndl;
-        afu->cmd[i].rcb.msi = SISL_MSI_RRQ_UPDATED;
-        afu->cmd[i].rcb.rrq = 0x0;
-    }
 }
 /**
@@ -1644,19 +1455,8 @@ out:
 static int start_afu(struct cxlflash_cfg *cfg)
 {
     struct afu *afu = cfg->afu;
-    struct afu_cmd *cmd;
-    int i = 0;
     int rc = 0;

-    for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
-        cmd = &afu->cmd[i];
-
-        init_completion(&cmd->cevent);
-        spin_lock_init(&cmd->slock);
-        cmd->parent = afu;
-    }
-
     init_pcr(cfg);

     /* After an AFU reset, RRQ entries are stale, clear them */
@@ -1829,6 +1629,9 @@ static int init_afu(struct cxlflash_cfg *cfg)
         goto err2;
     }

+    afu->send_cmd = send_cmd_ioarrin;
+    afu->context_reset = context_reset_ioarrin;
+
     pr_debug("%s: afu version %s, interface version 0x%llX\n", __func__,
          afu->version, afu->interface_version);
@@ -1840,7 +1643,8 @@ static int init_afu(struct cxlflash_cfg *cfg)
     }

     afu_err_intr_init(cfg->afu);
-    atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room));
+    spin_lock_init(&afu->rrin_slock);
+    afu->room = readq_be(&afu->host_map->cmd_room);

     /* Restore the LUN mappings */
     cxlflash_restore_luntable(cfg);
@@ -1884,8 +1688,8 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u,
     struct cxlflash_cfg *cfg = afu->parent;
     struct device *dev = &cfg->dev->dev;
     struct afu_cmd *cmd = NULL;
+    char *buf = NULL;
     int rc = 0;
-    int retry_cnt = 0;
     static DEFINE_MUTEX(sync_active);

     if (cfg->state != STATE_NORMAL) {
@@ -1894,27 +1698,23 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u,
     }

     mutex_lock(&sync_active);
-retry:
-    cmd = cmd_checkout(afu);
-    if (unlikely(!cmd)) {
-        retry_cnt++;
-        udelay(1000 * retry_cnt);
-        if (retry_cnt < MC_RETRY_CNT)
-            goto retry;
-        dev_err(dev, "%s: could not get a free command\n", __func__);
+    atomic_inc(&afu->cmds_active);
+    buf = kzalloc(sizeof(*cmd) + __alignof__(*cmd) - 1, GFP_KERNEL);
+    if (unlikely(!buf)) {
+        dev_err(dev, "%s: no memory for command\n", __func__);
         rc = -1;
         goto out;
     }

+    cmd = (struct afu_cmd *)PTR_ALIGN(buf, __alignof__(*cmd));
+    init_completion(&cmd->cevent);
+    cmd->parent = afu;
+
     pr_debug("%s: afu=%p cmd=%p %d\n", __func__, afu, cmd, ctx_hndl_u);

-    memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb));
-
     cmd->rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD;
-    cmd->rcb.port_sel = 0x0;    /* NA */
-    cmd->rcb.lun_id = 0x0;    /* NA */
-    cmd->rcb.data_len = 0x0;
-    cmd->rcb.data_ea = 0x0;
+    cmd->rcb.ctx_id = afu->ctx_hndl;
+    cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
     cmd->rcb.timeout = MC_AFU_SYNC_TIMEOUT;

     cmd->rcb.cdb[0] = 0xC0;    /* AFU Sync */
@@ -1924,20 +1724,17 @@ retry:
     *((__be16 *)&cmd->rcb.cdb[2]) = cpu_to_be16(ctx_hndl_u);
     *((__be32 *)&cmd->rcb.cdb[4]) = cpu_to_be32(res_hndl_u);

-    rc = send_cmd(afu, cmd);
+    rc = afu->send_cmd(afu, cmd);
     if (unlikely(rc))
         goto out;

-    wait_resp(afu, cmd);
-
-    /* Set on timeout */
-    if (unlikely((cmd->sa.ioasc != 0) ||
-             (cmd->sa.host_use_b[0] & B_ERROR)))
+    rc = wait_resp(afu, cmd);
+    if (unlikely(rc))
         rc = -1;
 out:
+    atomic_dec(&afu->cmds_active);
     mutex_unlock(&sync_active);
-    if (cmd)
-        cmd_checkin(cmd);
+    kfree(buf);
     pr_debug("%s: returning rc=%d\n", __func__, rc);
     return rc;
 }
@@ -2376,8 +2173,9 @@ static struct scsi_host_template driver_template = {
     .change_queue_depth = cxlflash_change_queue_depth,
     .cmd_per_lun = CXLFLASH_MAX_CMDS_PER_LUN,
     .can_queue = CXLFLASH_MAX_CMDS,
+    .cmd_size = sizeof(struct afu_cmd) + __alignof__(struct afu_cmd) - 1,
     .this_id = -1,
-    .sg_tablesize = SG_NONE,    /* No scatter gather support */
+    .sg_tablesize = 1,    /* No scatter gather support */
     .max_sectors = CXLFLASH_MAX_SECTORS,
     .use_clustering = ENABLE_CLUSTERING,
     .shost_attrs = cxlflash_host_attrs,
@@ -2412,7 +2210,6 @@ MODULE_DEVICE_TABLE(pci, cxlflash_pci_table);
  * Handles the following events:
  * - Link reset which cannot be performed on interrupt context due to
  *   blocking up to a few seconds
- * - Read AFU command room
  * - Rescan the host
  */
 static void cxlflash_worker_thread(struct work_struct *work)
@@ -2449,11 +2246,6 @@ static void cxlflash_worker_thread(struct work_struct *work)
         cfg->lr_state = LINK_RESET_COMPLETE;
     }

-    if (afu->read_room) {
-        atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room));
-        afu->read_room = false;
-    }
-
     spin_unlock_irqrestore(cfg->host->host_lock, lock_flags);

     if (atomic_dec_if_positive(&cfg->scan_host_needed) >= 0)

@@ -72,7 +72,7 @@ struct sisl_ioarcb {
     u16 timeout;    /* in units specified by req_flags */
     u32 rsvd1;
     u8 cdb[16];    /* must be in big endian */
-    struct scsi_cmnd *scp;
+    u64 reserved;    /* Reserved area */
 } __packed;

 struct sisl_rc {

@@ -95,7 +95,7 @@ struct alua_port_group {

 struct alua_dh_data {
     struct list_head    node;
-    struct alua_port_group    *pg;
+    struct alua_port_group    __rcu *pg;
     int            group_id;
     spinlock_t        pg_lock;
     struct scsi_device    *sdev;
@@ -371,7 +371,7 @@ static int alua_check_vpd(struct scsi_device *sdev, struct alua_dh_data *h,

     /* Check for existing port group references */
     spin_lock(&h->pg_lock);
-    old_pg = h->pg;
+    old_pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock));
     if (old_pg != pg) {
         /* port group has changed. Update to new port group */
         if (h->pg) {
@@ -390,7 +390,9 @@ static int alua_check_vpd(struct scsi_device *sdev, struct alua_dh_data *h,
     list_add_rcu(&h->node, &pg->dh_list);
     spin_unlock_irqrestore(&pg->lock, flags);

-    alua_rtpg_queue(h->pg, sdev, NULL, true);
+    alua_rtpg_queue(rcu_dereference_protected(h->pg,
+                          lockdep_is_held(&h->pg_lock)),
+            sdev, NULL, true);
     spin_unlock(&h->pg_lock);

     if (old_pg)
@@ -942,7 +944,7 @@ static int alua_initialize(struct scsi_device *sdev, struct alua_dh_data *h)
 static int alua_set_params(struct scsi_device *sdev, const char *params)
 {
     struct alua_dh_data *h = sdev->handler_data;
-    struct alua_port_group __rcu *pg = NULL;
+    struct alua_port_group *pg = NULL;
     unsigned int optimize = 0, argc;
     const char *p = params;
     int result = SCSI_DH_OK;
@@ -989,7 +991,7 @@ static int alua_activate(struct scsi_device *sdev,
     struct alua_dh_data *h = sdev->handler_data;
     int err = SCSI_DH_OK;
     struct alua_queue_data *qdata;
-    struct alua_port_group __rcu *pg;
+    struct alua_port_group *pg;

     qdata = kzalloc(sizeof(*qdata), GFP_KERNEL);
     if (!qdata) {
@@ -1053,7 +1055,7 @@ static void alua_check(struct scsi_device *sdev, bool force)
 static int alua_prep_fn(struct scsi_device *sdev, struct request *req)
 {
     struct alua_dh_data *h = sdev->handler_data;
-    struct alua_port_group __rcu *pg;
+    struct alua_port_group *pg;
     unsigned char state = SCSI_ACCESS_STATE_OPTIMAL;
     int ret = BLKPREP_OK;
@@ -1123,7 +1125,7 @@ static void alua_bus_detach(struct scsi_device *sdev)
     struct alua_port_group *pg;

     spin_lock(&h->pg_lock);
-    pg = h->pg;
+    pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock));
     rcu_assign_pointer(h->pg, NULL);
     h->sdev = NULL;
     spin_unlock(&h->pg_lock);

@@ -34,13 +34,13 @@
  * Definitions for the generic 5380 driver.
  */

-#define NCR5380_read(reg)        inb(instance->io_port + reg)
-#define NCR5380_write(reg, value)    outb(value, instance->io_port + reg)
+#define NCR5380_read(reg)        inb(hostdata->base + (reg))
+#define NCR5380_write(reg, value)    outb(value, hostdata->base + (reg))

-#define NCR5380_dma_xfer_len(instance, cmd, phase)    (0)
-#define NCR5380_dma_recv_setup(instance, dst, len)    (0)
-#define NCR5380_dma_send_setup(instance, src, len)    (0)
-#define NCR5380_dma_residual(instance)            (0)
+#define NCR5380_dma_xfer_len        NCR5380_dma_xfer_none
+#define NCR5380_dma_recv_setup        NCR5380_dma_setup_none
+#define NCR5380_dma_send_setup        NCR5380_dma_setup_none
+#define NCR5380_dma_residual        NCR5380_dma_residual_none

 #define NCR5380_implementation_fields    /* none */
@@ -71,6 +71,7 @@ static int dmx3191d_probe_one(struct pci_dev *pdev,
                   const struct pci_device_id *id)
 {
     struct Scsi_Host *shost;
+    struct NCR5380_hostdata *hostdata;
     unsigned long io;
     int error = -ENODEV;
@@ -88,7 +89,9 @@ static int dmx3191d_probe_one(struct pci_dev *pdev,
                   sizeof(struct NCR5380_hostdata));
     if (!shost)
         goto out_release_region;
-    shost->io_port = io;
+
+    hostdata = shost_priv(shost);
+    hostdata->base = io;

     /* This card does not seem to raise an interrupt on pdev->irq.
      * Steam-powered SCSI controllers run without an IRQ anyway.
@@ -125,7 +128,8 @@ out_host_put:
 static void dmx3191d_remove_one(struct pci_dev *pdev)
 {
     struct Scsi_Host *shost = pci_get_drvdata(pdev);
-    unsigned long io = shost->io_port;
+    struct NCR5380_hostdata *hostdata = shost_priv(shost);
+    unsigned long io = hostdata->base;

     scsi_remove_host(shost);
@@ -149,18 +153,7 @@ static struct pci_driver dmx3191d_pci_driver = {
     .remove        = dmx3191d_remove_one,
 };

-static int __init dmx3191d_init(void)
-{
-    return pci_register_driver(&dmx3191d_pci_driver);
-}
-
-static void __exit dmx3191d_exit(void)
-{
-    pci_unregister_driver(&dmx3191d_pci_driver);
-}
-
-module_init(dmx3191d_init);
-module_exit(dmx3191d_exit);
+module_pci_driver(dmx3191d_pci_driver);

 MODULE_AUTHOR("Massimo Piccioni <dafastidio@libero.it>");
 MODULE_DESCRIPTION("Domex DMX3191D SCSI driver");

@@ -651,7 +651,6 @@ static u32 adpt_ioctl_to_context(adpt_hba * pHba, void *reply)
     }
     spin_unlock_irqrestore(pHba->host->host_lock, flags);
     if (i >= nr) {
-        kfree (reply);
         printk(KERN_WARNING"%s: Too many outstanding "
                 "ioctl commands\n", pHba->name);
         return (u32)-1;
@@ -1754,8 +1753,10 @@ static int adpt_i2o_passthru(adpt_hba* pHba, u32 __user *arg)
     sg_offset = (msg[0]>>4)&0xf;
     msg[2] = 0x40000000; // IOCTL context
     msg[3] = adpt_ioctl_to_context(pHba, reply);
-    if (msg[3] == (u32)-1)
+    if (msg[3] == (u32)-1) {
+        kfree(reply);
         return -EBUSY;
+    }

     memset(sg_list,0, sizeof(sg_list[0])*pHba->sg_tablesize);
     if(sg_offset) {
@@ -3350,7 +3351,7 @@ static int adpt_i2o_query_scalar(adpt_hba* pHba, int tid,
     if (opblk_va == NULL) {
         dma_free_coherent(&pHba->pDev->dev, sizeof(u8) * (8+buflen),
             resblk_va, resblk_pa);
-        printk(KERN_CRIT "%s: query operatio failed; Out of memory.\n",
+        printk(KERN_CRIT "%s: query operation failed; Out of memory.\n",
             pHba->name);
         return -ENOMEM;
     }

@@ -63,6 +63,14 @@ unsigned int fcoe_debug_logging;
 module_param_named(debug_logging, fcoe_debug_logging, int, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels");

+unsigned int fcoe_e_d_tov = 2 * 1000;
+module_param_named(e_d_tov, fcoe_e_d_tov, int, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(e_d_tov, "E_D_TOV in ms, default 2000");
+
+unsigned int fcoe_r_a_tov = 2 * 2 * 1000;
+module_param_named(r_a_tov, fcoe_r_a_tov, int, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(r_a_tov, "R_A_TOV in ms, default 4000");
+
 static DEFINE_MUTEX(fcoe_config_mutex);

 static struct workqueue_struct *fcoe_wq;
@@ -582,7 +590,8 @@ static void fcoe_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb)
      * Use default VLAN for FIP VLAN discovery protocol
      */
     frame = (struct fip_frame *)skb->data;
-    if (frame->fip.fip_op == ntohs(FIP_OP_VLAN) &&
+    if (ntohs(frame->eth.h_proto) == ETH_P_FIP &&
+        ntohs(frame->fip.fip_op) == FIP_OP_VLAN &&
         fcoe->realdev != fcoe->netdev)
         skb->dev = fcoe->realdev;
     else
@@ -633,8 +642,8 @@ static int fcoe_lport_config(struct fc_lport *lport)
     lport->qfull = 0;
     lport->max_retry_count = 3;
     lport->max_rport_retry_count = 3;
-    lport->e_d_tov = 2 * 1000;    /* FC-FS default */
-    lport->r_a_tov = 2 * 2 * 1000;
+    lport->e_d_tov = fcoe_e_d_tov;
+    lport->r_a_tov = fcoe_r_a_tov;
     lport->service_params = (FCP_SPPF_INIT_FCN | FCP_SPPF_RD_XRDY_DIS |
                  FCP_SPPF_RETRY | FCP_SPPF_CONF_COMPL);
     lport->does_npiv = 1;
@@ -2160,11 +2169,13 @@ static bool fcoe_match(struct net_device *netdev)
  */
 static void fcoe_dcb_create(struct fcoe_interface *fcoe)
 {
+    int ctlr_prio = TC_PRIO_BESTEFFORT;
+    int fcoe_prio = TC_PRIO_INTERACTIVE;
+    struct fcoe_ctlr *ctlr = fcoe_to_ctlr(fcoe);
 #ifdef CONFIG_DCB
     int dcbx;
     u8 fup, up;
     struct net_device *netdev = fcoe->realdev;
-    struct fcoe_ctlr *ctlr = fcoe_to_ctlr(fcoe);
     struct dcb_app app = {
                 .priority = 0,
                 .protocol = ETH_P_FCOE
@@ -2186,10 +2197,12 @@ static void fcoe_dcb_create(struct fcoe_interface *fcoe)
             fup = dcb_getapp(netdev, &app);
         }

-        fcoe->priority = ffs(up) ? ffs(up) - 1 : 0;
-        ctlr->priority = ffs(fup) ? ffs(fup) - 1 : fcoe->priority;
+        fcoe_prio = ffs(up) ? ffs(up) - 1 : 0;
+        ctlr_prio = ffs(fup) ? ffs(fup) - 1 : fcoe_prio;
     }
 #endif
+    fcoe->priority = fcoe_prio;
+    ctlr->priority = ctlr_prio;
 }

 enum fcoe_create_link_state {

@@ -801,6 +801,8 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
 return -EINPROGRESS;
 drop:
 kfree_skb(skb);
+LIBFCOE_FIP_DBG(fip, "drop els_send op %u d_id %x\n",
+op, ntoh24(fh->fh_d_id));
 return -EINVAL;
 }
 EXPORT_SYMBOL(fcoe_ctlr_els_send);
@@ -1316,7 +1318,7 @@ drop:
 * The overall length has already been checked.
 */
 static void fcoe_ctlr_recv_clr_vlink(struct fcoe_ctlr *fip,
-struct fip_header *fh)
+struct sk_buff *skb)
 {
 struct fip_desc *desc;
 struct fip_mac_desc *mp;
@@ -1331,20 +1333,49 @@ static void fcoe_ctlr_recv_clr_vlink(struct fcoe_ctlr *fip,
 int num_vlink_desc;
 int reset_phys_port = 0;
 struct fip_vn_desc **vlink_desc_arr = NULL;
+struct fip_header *fh = (struct fip_header *)skb->data;
+struct ethhdr *eh = eth_hdr(skb);
 LIBFCOE_FIP_DBG(fip, "Clear Virtual Link received\n");
-if (!fcf || !lport->port_id) {
+if (!fcf) {
 /*
 * We are yet to select best FCF, but we got CVL in the
 * meantime. reset the ctlr and let it rediscover the FCF
 */
+LIBFCOE_FIP_DBG(fip, "Resetting fcoe_ctlr as FCF has not been "
+"selected yet\n");
 mutex_lock(&fip->ctlr_mutex);
 fcoe_ctlr_reset(fip);
 mutex_unlock(&fip->ctlr_mutex);
 return;
 }
+/*
+* If we've selected an FCF check that the CVL is from there to avoid
+* processing CVLs from an unexpected source. If it is from an
+* unexpected source drop it on the floor.
+*/
+if (!ether_addr_equal(eh->h_source, fcf->fcf_mac)) {
+LIBFCOE_FIP_DBG(fip, "Dropping CVL due to source address "
+"mismatch with FCF src=%pM\n", eh->h_source);
+return;
+}
+/*
+* If we haven't logged into the fabric but receive a CVL we should
+* reset everything and go back to solicitation.
+*/
+if (!lport->port_id) {
+LIBFCOE_FIP_DBG(fip, "lport not logged in, resoliciting\n");
+mutex_lock(&fip->ctlr_mutex);
+fcoe_ctlr_reset(fip);
+mutex_unlock(&fip->ctlr_mutex);
+fc_lport_reset(fip->lp);
+fcoe_ctlr_solicit(fip, NULL);
+return;
+}
 /*
 * mask of required descriptors. Validating each one clears its bit.
 */
@@ -1576,7 +1607,7 @@ static int fcoe_ctlr_recv_handler(struct fcoe_ctlr *fip, struct sk_buff *skb)
 if (op == FIP_OP_DISC && sub == FIP_SC_ADV)
 fcoe_ctlr_recv_adv(fip, skb);
 else if (op == FIP_OP_CTRL && sub == FIP_SC_CLR_VLINK)
-fcoe_ctlr_recv_clr_vlink(fip, fiph);
+fcoe_ctlr_recv_clr_vlink(fip, skb);
 kfree_skb(skb);
 return 0;
 drop:
@@ -2122,7 +2153,7 @@ static void fcoe_ctlr_vn_rport_callback(struct fc_lport *lport,
 LIBFCOE_FIP_DBG(fip,
 "rport FLOGI limited port_id %6.6x\n",
 rdata->ids.port_id);
-lport->tt.rport_logoff(rdata);
+fc_rport_logoff(rdata);
 }
 break;
 default:
@@ -2145,9 +2176,15 @@ static void fcoe_ctlr_disc_stop_locked(struct fc_lport *lport)
 {
 struct fc_rport_priv *rdata;
+rcu_read_lock();
+list_for_each_entry_rcu(rdata, &lport->disc.rports, peers) {
+if (kref_get_unless_zero(&rdata->kref)) {
+fc_rport_logoff(rdata);
+kref_put(&rdata->kref, fc_rport_destroy);
+}
+}
+rcu_read_unlock();
 mutex_lock(&lport->disc.disc_mutex);
-list_for_each_entry_rcu(rdata, &lport->disc.rports, peers)
-lport->tt.rport_logoff(rdata);
 lport->disc.disc_callback = NULL;
 mutex_unlock(&lport->disc.disc_mutex);
 }
@@ -2178,7 +2215,7 @@ static void fcoe_ctlr_disc_stop(struct fc_lport *lport)
 static void fcoe_ctlr_disc_stop_final(struct fc_lport *lport)
 {
 fcoe_ctlr_disc_stop(lport);
-lport->tt.rport_flush_queue();
+fc_rport_flush_queue();
 synchronize_rcu();
 }
@@ -2393,6 +2430,8 @@ static void fcoe_ctlr_vn_probe_req(struct fcoe_ctlr *fip,
 switch (fip->state) {
 case FIP_ST_VNMP_CLAIM:
 case FIP_ST_VNMP_UP:
+LIBFCOE_FIP_DBG(fip, "vn_probe_req: send reply, state %x\n",
+fip->state);
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REP,
 frport->enode_mac, 0);
 break;
@@ -2407,15 +2446,21 @@
 */
 if (fip->lp->wwpn > rdata->ids.port_name &&
 !(frport->flags & FIP_FL_REC_OR_P2P)) {
+LIBFCOE_FIP_DBG(fip, "vn_probe_req: "
+"port_id collision\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REP,
 frport->enode_mac, 0);
 break;
 }
 /* fall through */
 case FIP_ST_VNMP_START:
+LIBFCOE_FIP_DBG(fip, "vn_probe_req: "
+"restart VN2VN negotiation\n");
 fcoe_ctlr_vn_restart(fip);
 break;
 default:
+LIBFCOE_FIP_DBG(fip, "vn_probe_req: ignore state %x\n",
+fip->state);
 break;
 }
 }
@@ -2437,9 +2482,12 @@ static void fcoe_ctlr_vn_probe_reply(struct fcoe_ctlr *fip,
 case FIP_ST_VNMP_PROBE1:
 case FIP_ST_VNMP_PROBE2:
 case FIP_ST_VNMP_CLAIM:
+LIBFCOE_FIP_DBG(fip, "vn_probe_reply: restart state %x\n",
+fip->state);
 fcoe_ctlr_vn_restart(fip);
 break;
 case FIP_ST_VNMP_UP:
+LIBFCOE_FIP_DBG(fip, "vn_probe_reply: send claim notify\n");
 fcoe_ctlr_vn_send_claim(fip);
 break;
 default:
@@ -2467,26 +2515,33 @@ static void fcoe_ctlr_vn_add(struct fcoe_ctlr *fip, struct fc_rport_priv *new)
 return;
 mutex_lock(&lport->disc.disc_mutex);
-rdata = lport->tt.rport_create(lport, port_id);
+rdata = fc_rport_create(lport, port_id);
 if (!rdata) {
 mutex_unlock(&lport->disc.disc_mutex);
 return;
 }
+mutex_lock(&rdata->rp_mutex);
+mutex_unlock(&lport->disc.disc_mutex);
 rdata->ops = &fcoe_ctlr_vn_rport_ops;
 rdata->disc_id = lport->disc.disc_id;
 ids = &rdata->ids;
 if ((ids->port_name != -1 && ids->port_name != new->ids.port_name) ||
-(ids->node_name != -1 && ids->node_name != new->ids.node_name))
-lport->tt.rport_logoff(rdata);
+(ids->node_name != -1 && ids->node_name != new->ids.node_name)) {
+mutex_unlock(&rdata->rp_mutex);
+LIBFCOE_FIP_DBG(fip, "vn_add rport logoff %6.6x\n", port_id);
+fc_rport_logoff(rdata);
+mutex_lock(&rdata->rp_mutex);
+}
 ids->port_name = new->ids.port_name;
 ids->node_name = new->ids.node_name;
-mutex_unlock(&lport->disc.disc_mutex);
+mutex_unlock(&rdata->rp_mutex);
 frport = fcoe_ctlr_rport(rdata);
-LIBFCOE_FIP_DBG(fip, "vn_add rport %6.6x %s\n",
-port_id, frport->fcoe_len ? "old" : "new");
+LIBFCOE_FIP_DBG(fip, "vn_add rport %6.6x %s state %d\n",
+port_id, frport->fcoe_len ? "old" : "new",
+rdata->rp_state);
 *frport = *fcoe_ctlr_rport(new);
 frport->time = 0;
 }
@@ -2506,12 +2561,12 @@ static int fcoe_ctlr_vn_lookup(struct fcoe_ctlr *fip, u32 port_id, u8 *mac)
 struct fcoe_rport *frport;
 int ret = -1;
-rdata = lport->tt.rport_lookup(lport, port_id);
+rdata = fc_rport_lookup(lport, port_id);
 if (rdata) {
 frport = fcoe_ctlr_rport(rdata);
 memcpy(mac, frport->enode_mac, ETH_ALEN);
 ret = 0;
-kref_put(&rdata->kref, lport->tt.rport_destroy);
+kref_put(&rdata->kref, fc_rport_destroy);
 }
 return ret;
 }
@@ -2529,6 +2584,7 @@ static void fcoe_ctlr_vn_claim_notify(struct fcoe_ctlr *fip,
 struct fcoe_rport *frport = fcoe_ctlr_rport(new);
 if (frport->flags & FIP_FL_REC_OR_P2P) {
+LIBFCOE_FIP_DBG(fip, "send probe req for P2P/REC\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
 return;
 }
@@ -2536,25 +2592,37 @@
 case FIP_ST_VNMP_START:
 case FIP_ST_VNMP_PROBE1:
 case FIP_ST_VNMP_PROBE2:
-if (new->ids.port_id == fip->port_id)
+if (new->ids.port_id == fip->port_id) {
+LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
+"restart, state %d\n",
+fip->state);
 fcoe_ctlr_vn_restart(fip);
+}
 break;
 case FIP_ST_VNMP_CLAIM:
 case FIP_ST_VNMP_UP:
 if (new->ids.port_id == fip->port_id) {
 if (new->ids.port_name > fip->lp->wwpn) {
+LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
+"restart, port_id collision\n");
 fcoe_ctlr_vn_restart(fip);
 break;
 }
+LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
+"send claim notify\n");
 fcoe_ctlr_vn_send_claim(fip);
 break;
 }
+LIBFCOE_FIP_DBG(fip, "vn_claim_notify: send reply to %x\n",
+new->ids.port_id);
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_CLAIM_REP, frport->enode_mac,
 min((u32)frport->fcoe_len,
 fcoe_ctlr_fcoe_size(fip)));
 fcoe_ctlr_vn_add(fip, new);
 break;
 default:
+LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
+"ignoring claim from %x\n", new->ids.port_id);
 break;
 }
 }
@@ -2591,19 +2659,26 @@ static void fcoe_ctlr_vn_beacon(struct fcoe_ctlr *fip,
 frport = fcoe_ctlr_rport(new);
 if (frport->flags & FIP_FL_REC_OR_P2P) {
+LIBFCOE_FIP_DBG(fip, "p2p beacon while in vn2vn mode\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
 return;
 }
-rdata = lport->tt.rport_lookup(lport, new->ids.port_id);
+rdata = fc_rport_lookup(lport, new->ids.port_id);
 if (rdata) {
 if (rdata->ids.node_name == new->ids.node_name &&
 rdata->ids.port_name == new->ids.port_name) {
 frport = fcoe_ctlr_rport(rdata);
-if (!frport->time && fip->state == FIP_ST_VNMP_UP)
-lport->tt.rport_login(rdata);
+LIBFCOE_FIP_DBG(fip, "beacon from rport %x\n",
+rdata->ids.port_id);
+if (!frport->time && fip->state == FIP_ST_VNMP_UP) {
+LIBFCOE_FIP_DBG(fip, "beacon expired "
+"for rport %x\n",
+rdata->ids.port_id);
+fc_rport_login(rdata);
+}
 frport->time = jiffies;
 }
-kref_put(&rdata->kref, lport->tt.rport_destroy);
+kref_put(&rdata->kref, fc_rport_destroy);
 return;
 }
 if (fip->state != FIP_ST_VNMP_UP)
@@ -2638,11 +2713,15 @@ static unsigned long fcoe_ctlr_vn_age(struct fcoe_ctlr *fip)
 unsigned long deadline;
 next_time = jiffies + msecs_to_jiffies(FIP_VN_BEACON_INT * 10);
-mutex_lock(&lport->disc.disc_mutex);
+rcu_read_lock();
 list_for_each_entry_rcu(rdata, &lport->disc.rports, peers) {
-frport = fcoe_ctlr_rport(rdata);
-if (!frport->time)
-continue;
+if (!kref_get_unless_zero(&rdata->kref))
+continue;
+frport = fcoe_ctlr_rport(rdata);
+if (!frport->time) {
+kref_put(&rdata->kref, fc_rport_destroy);
+continue;
+}
 deadline = frport->time +
 msecs_to_jiffies(FIP_VN_BEACON_INT * 25 / 10);
 if (time_after_eq(jiffies, deadline)) {
@@ -2650,11 +2729,12 @@ static unsigned long fcoe_ctlr_vn_age(struct fcoe_ctlr *fip)
 LIBFCOE_FIP_DBG(fip,
 "port %16.16llx fc_id %6.6x beacon expired\n",
 rdata->ids.port_name, rdata->ids.port_id);
-lport->tt.rport_logoff(rdata);
+fc_rport_logoff(rdata);
 } else if (time_before(deadline, next_time))
 next_time = deadline;
+kref_put(&rdata->kref, fc_rport_destroy);
 }
-mutex_unlock(&lport->disc.disc_mutex);
+rcu_read_unlock();
 return next_time;
 }
@@ -2674,11 +2754,21 @@ static int fcoe_ctlr_vn_recv(struct fcoe_ctlr *fip, struct sk_buff *skb)
 struct fc_rport_priv rdata;
 struct fcoe_rport frport;
 } buf;
-int rc;
+int rc, vlan_id = 0;
 fiph = (struct fip_header *)skb->data;
 sub = fiph->fip_subcode;
+if (fip->lp->vlan)
+vlan_id = skb_vlan_tag_get_id(skb);
+if (vlan_id && vlan_id != fip->lp->vlan) {
+LIBFCOE_FIP_DBG(fip, "vn_recv drop frame sub %x vlan %d\n",
+sub, vlan_id);
+rc = -EAGAIN;
+goto drop;
+}
 rc = fcoe_ctlr_vn_parse(fip, skb, &buf.rdata);
 if (rc) {
 LIBFCOE_FIP_DBG(fip, "vn_recv vn_parse error %d\n", rc);
@@ -2941,7 +3031,7 @@ static void fcoe_ctlr_disc_recv(struct fc_lport *lport, struct fc_frame *fp)
 rjt_data.reason = ELS_RJT_UNSUP;
 rjt_data.explan = ELS_EXPL_NONE;
-lport->tt.seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
+fc_seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
 fc_frame_free(fp);
 }
@@ -2991,12 +3081,17 @@ static void fcoe_ctlr_vn_disc(struct fcoe_ctlr *fip)
 mutex_lock(&disc->disc_mutex);
 callback = disc->pending ? disc->disc_callback : NULL;
 disc->pending = 0;
+mutex_unlock(&disc->disc_mutex);
+rcu_read_lock();
 list_for_each_entry_rcu(rdata, &disc->rports, peers) {
+if (!kref_get_unless_zero(&rdata->kref))
+continue;
 frport = fcoe_ctlr_rport(rdata);
 if (frport->time)
-lport->tt.rport_login(rdata);
+fc_rport_login(rdata);
+kref_put(&rdata->kref, fc_rport_destroy);
 }
-mutex_unlock(&disc->disc_mutex);
+rcu_read_unlock();
 if (callback)
 callback(lport, DISC_EV_SUCCESS);
 }
@@ -3015,11 +3110,13 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
 switch (fip->state) {
 case FIP_ST_VNMP_START:
 fcoe_ctlr_set_state(fip, FIP_ST_VNMP_PROBE1);
+LIBFCOE_FIP_DBG(fip, "vn_timeout: send 1st probe request\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
 next_time = jiffies + msecs_to_jiffies(FIP_VN_PROBE_WAIT);
 break;
 case FIP_ST_VNMP_PROBE1:
 fcoe_ctlr_set_state(fip, FIP_ST_VNMP_PROBE2);
+LIBFCOE_FIP_DBG(fip, "vn_timeout: send 2nd probe request\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
 next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
 break;
@@ -3030,6 +3127,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
 hton24(mac + 3, new_port_id);
 fcoe_ctlr_map_dest(fip);
 fip->update_mac(fip->lp, mac);
+LIBFCOE_FIP_DBG(fip, "vn_timeout: send claim notify\n");
 fcoe_ctlr_vn_send_claim(fip);
 next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
 break;
@@ -3041,6 +3139,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
 next_time = fip->sol_time + msecs_to_jiffies(FIP_VN_ANN_WAIT);
 if (time_after_eq(jiffies, next_time)) {
 fcoe_ctlr_set_state(fip, FIP_ST_VNMP_UP);
+LIBFCOE_FIP_DBG(fip, "vn_timeout: send vn2vn beacon\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_BEACON,
 fcoe_all_vn2vn, 0);
 next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
@@ -3051,6 +3150,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
 case FIP_ST_VNMP_UP:
 next_time = fcoe_ctlr_vn_age(fip);
 if (time_after_eq(jiffies, fip->port_ka_time)) {
+LIBFCOE_FIP_DBG(fip, "vn_timeout: send vn2vn beacon\n");
 fcoe_ctlr_vn_send(fip, FIP_SC_VN_BEACON,
 fcoe_all_vn2vn, 0);
 fip->port_ka_time = jiffies +
@@ -3135,7 +3235,6 @@ int fcoe_libfc_config(struct fc_lport *lport, struct fcoe_ctlr *fip,
 fc_exch_init(lport);
 fc_elsct_init(lport);
 fc_lport_init(lport);
-fc_rport_init(lport);
 fc_disc_init(lport);
 fcoe_ctlr_mode_set(lport, fip, fip->mode);
 return 0;


@@ -335,16 +335,24 @@ static ssize_t store_ctlr_enabled(struct device *dev,
 const char *buf, size_t count)
 {
 struct fcoe_ctlr_device *ctlr = dev_to_ctlr(dev);
+bool enabled;
 int rc;
+if (*buf == '1')
+enabled = true;
+else if (*buf == '0')
+enabled = false;
+else
+return -EINVAL;
 switch (ctlr->enabled) {
 case FCOE_CTLR_ENABLED:
-if (*buf == '1')
+if (enabled)
 return count;
 ctlr->enabled = FCOE_CTLR_DISABLED;
 break;
 case FCOE_CTLR_DISABLED:
-if (*buf == '0')
+if (!enabled)
 return count;
 ctlr->enabled = FCOE_CTLR_ENABLED;
 break;
@@ -423,6 +431,75 @@ static FCOE_DEVICE_ATTR(ctlr, fip_vlan_responder, S_IRUGO | S_IWUSR,
 show_ctlr_fip_resp,
 store_ctlr_fip_resp);
+static ssize_t
+fcoe_ctlr_var_store(u32 *var, const char *buf, size_t count)
+{
+int err;
+unsigned long v;
+err = kstrtoul(buf, 10, &v);
+if (err || v > UINT_MAX)
+return -EINVAL;
+*var = v;
+return count;
+}
+static ssize_t store_ctlr_r_a_tov(struct device *dev,
+struct device_attribute *attr,
+const char *buf, size_t count)
+{
+struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
+struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
+if (ctlr_dev->enabled == FCOE_CTLR_ENABLED)
+return -EBUSY;
+if (ctlr_dev->enabled == FCOE_CTLR_DISABLED)
+return fcoe_ctlr_var_store(&ctlr->lp->r_a_tov, buf, count);
+return -ENOTSUPP;
+}
+static ssize_t show_ctlr_r_a_tov(struct device *dev,
+struct device_attribute *attr,
+char *buf)
+{
+struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
+struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
+return sprintf(buf, "%d\n", ctlr->lp->r_a_tov);
+}
+static FCOE_DEVICE_ATTR(ctlr, r_a_tov, S_IRUGO | S_IWUSR,
+show_ctlr_r_a_tov, store_ctlr_r_a_tov);
+static ssize_t store_ctlr_e_d_tov(struct device *dev,
+struct device_attribute *attr,
+const char *buf, size_t count)
+{
+struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
+struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
+if (ctlr_dev->enabled == FCOE_CTLR_ENABLED)
+return -EBUSY;
+if (ctlr_dev->enabled == FCOE_CTLR_DISABLED)
+return fcoe_ctlr_var_store(&ctlr->lp->e_d_tov, buf, count);
+return -ENOTSUPP;
+}
+static ssize_t show_ctlr_e_d_tov(struct device *dev,
+struct device_attribute *attr,
+char *buf)
+{
+struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
+struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
+return sprintf(buf, "%d\n", ctlr->lp->e_d_tov);
+}
+static FCOE_DEVICE_ATTR(ctlr, e_d_tov, S_IRUGO | S_IWUSR,
+show_ctlr_e_d_tov, store_ctlr_e_d_tov);
 static ssize_t
 store_private_fcoe_ctlr_fcf_dev_loss_tmo(struct device *dev,
 struct device_attribute *attr,
@@ -507,6 +584,8 @@ static struct attribute_group fcoe_ctlr_lesb_attr_group = {
 static struct attribute *fcoe_ctlr_attrs[] = {
 &device_attr_fcoe_ctlr_fip_vlan_responder.attr,
 &device_attr_fcoe_ctlr_fcf_dev_loss_tmo.attr,
+&device_attr_fcoe_ctlr_r_a_tov.attr,
+&device_attr_fcoe_ctlr_e_d_tov.attr,
 &device_attr_fcoe_ctlr_enabled.attr,
 &device_attr_fcoe_ctlr_mode.attr,
 NULL,


@@ -441,22 +441,31 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc, void (*done)(struct scsi_
 unsigned long ptr;
 spinlock_t *io_lock = NULL;
 int io_lock_acquired = 0;
+struct fc_rport_libfc_priv *rp;
 if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED)))
 return SCSI_MLQUEUE_HOST_BUSY;
 rport = starget_to_rport(scsi_target(sc->device));
+if (!rport) {
+FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+"returning DID_NO_CONNECT for IO as rport is NULL\n");
+sc->result = DID_NO_CONNECT << 16;
+done(sc);
+return 0;
+}
 ret = fc_remote_port_chkready(rport);
 if (ret) {
+FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+"rport is not ready\n");
 atomic64_inc(&fnic_stats->misc_stats.rport_not_ready);
 sc->result = ret;
 done(sc);
 return 0;
 }
-if (rport) {
-struct fc_rport_libfc_priv *rp = rport->dd_data;
+rp = rport->dd_data;
 if (!rp || rp->rp_state != RPORT_ST_READY) {
 FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
 "returning DID_NO_CONNECT for IO as rport is removed\n");
@@ -465,7 +474,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc, void (*done)(struct scsi_
 done(sc);
 return 0;
 }
-}
 if (lp->state != LPORT_ST_READY || !(lp->link_up))
 return SCSI_MLQUEUE_HOST_BUSY;
@@ -2543,7 +2551,7 @@ int fnic_reset(struct Scsi_Host *shost)
 * Reset local port, this will clean up libFC exchanges,
 * reset remote port sessions, and if link is up, begin flogi
 */
-ret = lp->tt.lport_reset(lp);
+ret = fc_lport_reset(lp);
 FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
 "Returning from fnic reset %s\n",

@@ -613,7 +613,7 @@ int fnic_fc_trace_set_data(u32 host_no, u8 frame_type,
 fc_trace_entries.rd_idx = 0;
 }
-fc_buf->time_stamp = CURRENT_TIME;
+ktime_get_real_ts64(&fc_buf->time_stamp);
 fc_buf->host_no = host_no;
 fc_buf->frame_type = frame_type;
@@ -740,7 +740,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
 len = *orig_len;
-time_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
+time64_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
 fmt = "%02d:%02d:%04ld %02d:%02d:%02d.%09lu ns%8x %c%8x\t";
 len += snprintf(fnic_dbgfs_prt->buffer + len,


@@ -72,7 +72,7 @@ struct fnic_trace_data {
 typedef struct fnic_trace_data fnic_trace_data_t;
 struct fc_trace_hdr {
-struct timespec time_stamp;
+struct timespec64 time_stamp;
 u32 host_no;
 u8 frame_type;
 u8 frame_len;


@@ -499,10 +499,7 @@ void vnic_dev_add_addr(struct vnic_dev *vdev, u8 *addr)
 err = vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a0, &a1, wait);
 if (err)
-printk(KERN_ERR
-"Can't add addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
-addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
-err);
+pr_err("Can't add addr [%pM], %d\n", addr, err);
 }
 void vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
@@ -517,10 +514,7 @@ void vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
 err = vnic_dev_cmd(vdev, CMD_ADDR_DEL, &a0, &a1, wait);
 if (err)
-printk(KERN_ERR
-"Can't del addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
-addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
-err);
+pr_err("Can't del addr [%pM], %d\n", addr, err);
 }
 int vnic_dev_notify_set(struct vnic_dev *vdev, u16 intr)


@@ -64,9 +64,9 @@ static int card[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
 module_param_array(card, int, NULL, 0);
 MODULE_PARM_DESC(card, "card type (0=NCR5380, 1=NCR53C400, 2=NCR53C400A, 3=DTC3181E, 4=HP C2502)");
+MODULE_ALIAS("g_NCR5380_mmio");
 MODULE_LICENSE("GPL");
-#ifndef SCSI_G_NCR5380_MEM
 /*
 * Configure I/O address of 53C400A or DTC436 by writing magic numbers
 * to ports 0x779 and 0x379.
@@ -88,40 +88,35 @@ static void magic_configure(int idx, u8 irq, u8 magic[])
 cfg = 0x80 | idx | (irq << 4);
 outb(cfg, 0x379);
 }
-#endif
+static unsigned int ncr_53c400a_ports[] = {
+0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
+};
+static unsigned int dtc_3181e_ports[] = {
+0x220, 0x240, 0x280, 0x2a0, 0x2c0, 0x300, 0x320, 0x340, 0
+};
+static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
+0x59, 0xb9, 0xc5, 0xae, 0xa6
+};
+static u8 hp_c2502_magic[] = { /* HP C2502 */
+0x0f, 0x22, 0xf0, 0x20, 0x80
+};
 static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 struct device *pdev, int base, int irq, int board)
 {
-unsigned int *ports;
+bool is_pmio = base <= 0xffff;
+int ret;
+int flags = 0;
+unsigned int *ports = NULL;
 u8 *magic = NULL;
-#ifndef SCSI_G_NCR5380_MEM
 int i;
 int port_idx = -1;
 unsigned long region_size;
-#endif
-static unsigned int ncr_53c400a_ports[] = {
-0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
-};
-static unsigned int dtc_3181e_ports[] = {
-0x220, 0x240, 0x280, 0x2a0, 0x2c0, 0x300, 0x320, 0x340, 0
-};
-static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
-0x59, 0xb9, 0xc5, 0xae, 0xa6
-};
-static u8 hp_c2502_magic[] = { /* HP C2502 */
-0x0f, 0x22, 0xf0, 0x20, 0x80
-};
-int flags, ret;
 struct Scsi_Host *instance;
 struct NCR5380_hostdata *hostdata;
-#ifdef SCSI_G_NCR5380_MEM
-void __iomem *iomem;
-resource_size_t iomem_size;
-#endif
-ports = NULL;
-flags = 0;
+u8 __iomem *iomem;
 switch (board) {
 case BOARD_NCR5380:
 flags = FLAG_NO_PSEUDO_DMA | FLAG_DMA_FIXUP;
@@ -140,8 +135,7 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 break;
 }
-#ifndef SCSI_G_NCR5380_MEM
-if (ports && magic) {
+if (is_pmio && ports && magic) {
 /* wakeup sequence for the NCR53C400A and DTC3181E */
 /* Disable the adapter and look for a free io port */
@@ -170,44 +164,50 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 if (ports[i]) {
 /* At this point we have our region reserved */
 magic_configure(i, 0, magic); /* no IRQ yet */
-outb(0xc0, ports[i] + 9);
-if (inb(ports[i] + 9) != 0x80) {
+base = ports[i];
+outb(0xc0, base + 9);
+if (inb(base + 9) != 0x80) {
 ret = -ENODEV;
 goto out_release;
 }
-base = ports[i];
 port_idx = i;
 } else
 return -EINVAL;
-}
-else
-{
+} else if (is_pmio) {
 /* NCR5380 - no configuration, just grab */
 region_size = 8;
 if (!base || !request_region(base, region_size, "ncr5380"))
 return -EBUSY;
-}
-#else
-iomem_size = NCR53C400_region_size;
-if (!request_mem_region(base, iomem_size, "ncr5380"))
+} else { /* MMIO */
+region_size = NCR53C400_region_size;
+if (!request_mem_region(base, region_size, "ncr5380"))
 return -EBUSY;
-iomem = ioremap(base, iomem_size);
-if (!iomem) {
-release_mem_region(base, iomem_size);
-return -ENOMEM;
 }
-#endif
-instance = scsi_host_alloc(tpnt, sizeof(struct NCR5380_hostdata));
-if (instance == NULL) {
+if (is_pmio)
+iomem = ioport_map(base, region_size);
+else
+iomem = ioremap(base, region_size);
+if (!iomem) {
 ret = -ENOMEM;
 goto out_release;
 }
+instance = scsi_host_alloc(tpnt, sizeof(struct NCR5380_hostdata));
+if (instance == NULL) {
+ret = -ENOMEM;
+goto out_unmap;
+}
 hostdata = shost_priv(instance);
-#ifndef SCSI_G_NCR5380_MEM
-instance->io_port = base;
-instance->n_io_port = region_size;
+hostdata->io = iomem;
+hostdata->region_size = region_size;
+if (is_pmio) {
+hostdata->io_port = base;
 hostdata->io_width = 1; /* 8-bit PDMA by default */
+hostdata->offset = 0;
 /*
 * On NCR53C400 boards, NCR5380 registers are mapped 8 past
@@ -215,7 +215,7 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 */
 switch (board) {
 case BOARD_NCR53C400:
-instance->io_port += 8;
+hostdata->io_port += 8;
 hostdata->c400_ctl_status = 0;
 hostdata->c400_blk_cnt = 1;
 hostdata->c400_host_buf = 4;
@@ -230,10 +230,9 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 hostdata->c400_host_buf = 8;
 break;
 }
-#else
-instance->base = base;
-hostdata->iomem = iomem;
-hostdata->iomem_size = iomem_size;
+} else {
+hostdata->base = base;
+hostdata->offset = NCR53C400_mem_base;
 switch (board) {
 case BOARD_NCR53C400:
 hostdata->c400_ctl_status = 0x100;
@@ -247,7 +246,7 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 ret = -EINVAL;
 goto out_unregister;
 }
-#endif
+}
 ret = NCR5380_init(instance, flags | FLAG_LATE_DMA_SETUP);
 if (ret)
@@ -273,11 +272,9 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
 instance->irq = NO_IRQ;
 if (instance->irq != NO_IRQ) {
-#ifndef SCSI_G_NCR5380_MEM
 /* set IRQ for HP C2502 */
 if (board == BOARD_HP_C2502)
 magic_configure(port_idx, instance->irq, magic);
-#endif
 if (request_irq(instance->irq, generic_NCR5380_intr,
 0, "NCR5380", instance)) {
 printk(KERN_WARNING "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
@@ -303,38 +300,39 @@ out_free_irq:
 NCR5380_exit(instance);
 out_unregister:
 scsi_host_put(instance);
-out_release:
-#ifndef SCSI_G_NCR5380_MEM
-release_region(base, region_size);
-#else
+out_unmap:
 iounmap(iomem);
-release_mem_region(base, iomem_size);
-#endif
+out_release:
+if (is_pmio)
+release_region(base, region_size);
+else
+release_mem_region(base, region_size);
 return ret;
 }
static void generic_NCR5380_release_resources(struct Scsi_Host *instance) static void generic_NCR5380_release_resources(struct Scsi_Host *instance)
{ {
struct NCR5380_hostdata *hostdata = shost_priv(instance);
void __iomem *iomem = hostdata->io;
unsigned long io_port = hostdata->io_port;
unsigned long base = hostdata->base;
unsigned long region_size = hostdata->region_size;
scsi_remove_host(instance); scsi_remove_host(instance);
if (instance->irq != NO_IRQ) if (instance->irq != NO_IRQ)
free_irq(instance->irq, instance); free_irq(instance->irq, instance);
NCR5380_exit(instance); NCR5380_exit(instance);
#ifndef SCSI_G_NCR5380_MEM
release_region(instance->io_port, instance->n_io_port);
#else
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
iounmap(hostdata->iomem);
release_mem_region(instance->base, hostdata->iomem_size);
}
#endif
scsi_host_put(instance); scsi_host_put(instance);
iounmap(iomem);
if (io_port)
release_region(io_port, region_size);
else
release_mem_region(base, region_size);
} }
 /**
  * generic_NCR5380_pread - pseudo DMA read
- * @instance: adapter to read from
+ * @hostdata: scsi host private data
  * @dst: buffer to read into
  * @len: buffer length
  *
@@ -342,10 +340,9 @@ static void generic_NCR5380_release_resources(struct Scsi_Host *instance)
  * controller
  */
 
-static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
+static inline int generic_NCR5380_pread(struct NCR5380_hostdata *hostdata,
                                         unsigned char *dst, int len)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	int blocks = len / 128;
 	int start = 0;
 
@@ -361,18 +358,16 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
 		while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
 			; /* FIXME - no timeout */
 
-#ifndef SCSI_G_NCR5380_MEM
-		if (hostdata->io_width == 2)
-			insw(instance->io_port + hostdata->c400_host_buf,
+		if (hostdata->io_port && hostdata->io_width == 2)
+			insw(hostdata->io_port + hostdata->c400_host_buf,
 			     dst + start, 64);
-		else
-			insb(instance->io_port + hostdata->c400_host_buf,
+		else if (hostdata->io_port)
+			insb(hostdata->io_port + hostdata->c400_host_buf,
 			     dst + start, 128);
-#else
-		/* implies SCSI_G_NCR5380_MEM */
-		memcpy_fromio(dst + start,
-			      hostdata->iomem + NCR53C400_host_buffer, 128);
-#endif
+		else
+			memcpy_fromio(dst + start,
+				      hostdata->io + NCR53C400_host_buffer, 128);
 		start += 128;
 		blocks--;
 	}
@@ -381,18 +376,16 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
 		while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
 			; /* FIXME - no timeout */
 
-#ifndef SCSI_G_NCR5380_MEM
-		if (hostdata->io_width == 2)
-			insw(instance->io_port + hostdata->c400_host_buf,
+		if (hostdata->io_port && hostdata->io_width == 2)
+			insw(hostdata->io_port + hostdata->c400_host_buf,
 			     dst + start, 64);
-		else
-			insb(instance->io_port + hostdata->c400_host_buf,
+		else if (hostdata->io_port)
+			insb(hostdata->io_port + hostdata->c400_host_buf,
 			     dst + start, 128);
-#else
-		/* implies SCSI_G_NCR5380_MEM */
-		memcpy_fromio(dst + start,
-			      hostdata->iomem + NCR53C400_host_buffer, 128);
-#endif
+		else
+			memcpy_fromio(dst + start,
+				      hostdata->io + NCR53C400_host_buffer, 128);
 		start += 128;
 		blocks--;
 	}
@@ -412,7 +405,7 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
 
 /**
  * generic_NCR5380_pwrite - pseudo DMA write
- * @instance: adapter to read from
+ * @hostdata: scsi host private data
  * @dst: buffer to read into
  * @len: buffer length
  *
@@ -420,10 +413,9 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
  * controller
  */
 
-static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
+static inline int generic_NCR5380_pwrite(struct NCR5380_hostdata *hostdata,
                                          unsigned char *src, int len)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	int blocks = len / 128;
 	int start = 0;
 
@@ -439,18 +431,17 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
 			break;
 		while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
 			; // FIXME - timeout
-#ifndef SCSI_G_NCR5380_MEM
-		if (hostdata->io_width == 2)
-			outsw(instance->io_port + hostdata->c400_host_buf,
+
+		if (hostdata->io_port && hostdata->io_width == 2)
+			outsw(hostdata->io_port + hostdata->c400_host_buf,
 			      src + start, 64);
+		else if (hostdata->io_port)
+			outsb(hostdata->io_port + hostdata->c400_host_buf,
+			      src + start, 128);
 		else
-			outsb(instance->io_port + hostdata->c400_host_buf,
-			      src + start, 128);
-#else
-		/* implies SCSI_G_NCR5380_MEM */
-		memcpy_toio(hostdata->iomem + NCR53C400_host_buffer,
-			    src + start, 128);
-#endif
+			memcpy_toio(hostdata->io + NCR53C400_host_buffer,
+				    src + start, 128);
 		start += 128;
 		blocks--;
 	}
@@ -458,18 +449,16 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
 		while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
 			; // FIXME - no timeout
 
-#ifndef SCSI_G_NCR5380_MEM
-		if (hostdata->io_width == 2)
-			outsw(instance->io_port + hostdata->c400_host_buf,
+		if (hostdata->io_port && hostdata->io_width == 2)
+			outsw(hostdata->io_port + hostdata->c400_host_buf,
 			      src + start, 64);
+		else if (hostdata->io_port)
+			outsb(hostdata->io_port + hostdata->c400_host_buf,
+			      src + start, 128);
 		else
-			outsb(instance->io_port + hostdata->c400_host_buf,
-			      src + start, 128);
-#else
-		/* implies SCSI_G_NCR5380_MEM */
-		memcpy_toio(hostdata->iomem + NCR53C400_host_buffer,
-			    src + start, 128);
-#endif
+			memcpy_toio(hostdata->io + NCR53C400_host_buffer,
+				    src + start, 128);
 		start += 128;
 		blocks--;
 	}
@@ -489,10 +478,9 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
 	return 0;
 }
 
-static int generic_NCR5380_dma_xfer_len(struct Scsi_Host *instance,
+static int generic_NCR5380_dma_xfer_len(struct NCR5380_hostdata *hostdata,
                                         struct scsi_cmnd *cmd)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	int transfersize = cmd->transfersize;
 
 	if (hostdata->flags & FLAG_NO_PSEUDO_DMA)
@@ -566,7 +554,7 @@ static struct isa_driver generic_NCR5380_isa_driver = {
 	},
 };
 
-#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
+#ifdef CONFIG_PNP
 static struct pnp_device_id generic_NCR5380_pnp_ids[] = {
 	{ .id = "DTC436e", .driver_data = BOARD_DTC3181E },
 	{ .id = "" }
@@ -600,7 +588,7 @@ static struct pnp_driver generic_NCR5380_pnp_driver = {
 	.probe = generic_NCR5380_pnp_probe,
 	.remove = generic_NCR5380_pnp_remove,
 };
-#endif /* !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP) */
+#endif /* defined(CONFIG_PNP) */
 
 static int pnp_registered, isa_registered;
 
@@ -624,7 +612,7 @@ static int __init generic_NCR5380_init(void)
 		card[0] = BOARD_HP_C2502;
 	}
 
-#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
+#ifdef CONFIG_PNP
 	if (!pnp_register_driver(&generic_NCR5380_pnp_driver))
 		pnp_registered = 1;
 #endif
@@ -637,7 +625,7 @@ static int __init generic_NCR5380_init(void)
 
 static void __exit generic_NCR5380_exit(void)
 {
-#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
+#ifdef CONFIG_PNP
 	if (pnp_registered)
 		pnp_unregister_driver(&generic_NCR5380_pnp_driver);
 #endif
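
The g_NCR5380 changes above replace compile-time `#ifdef SCSI_G_NCR5380_MEM` selection with a run-time test on `hostdata->io_port` (nonzero means a PIO card, zero means a memory-mapped NCR53C400). A minimal user-space sketch of that dispatch pattern, with fake backing arrays standing in for the kernel's port and ioremap accessors (all names here are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the I/O port space and an ioremapped window. */
static uint8_t fake_ports[0x400];
static uint8_t fake_mmio[0x4000];

struct hostdata {
	unsigned long io_port;	/* nonzero => PIO card */
	uint8_t *io;		/* MMIO window base when io_port == 0 */
	unsigned long offset;	/* register offset within the window */
};

/* One accessor replaces two compiled-in variants: the branch on io_port
 * picks PIO or MMIO at run time, as the patch's pseudo-DMA paths do with
 * insw()/insb() versus memcpy_fromio(). */
static uint8_t reg_read(struct hostdata *h, unsigned int reg)
{
	if (h->io_port)
		return fake_ports[h->io_port + reg];
	return h->io[h->offset + reg];
}

static void reg_write(struct hostdata *h, unsigned int reg, uint8_t v)
{
	if (h->io_port)
		fake_ports[h->io_port + reg] = v;
	else
		h->io[h->offset + reg] = v;
}
```

The payoff is a single driver module serving both board flavours, which is why g_NCR5380_mmio.c is deleted below.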

diff --git a/drivers/scsi/g_NCR5380.h b/drivers/scsi/g_NCR5380.h
@@ -14,49 +14,28 @@
 #ifndef GENERIC_NCR5380_H
 #define GENERIC_NCR5380_H
 
-#ifndef SCSI_G_NCR5380_MEM
 #define DRV_MODULE_NAME "g_NCR5380"
 
 #define NCR5380_read(reg) \
-	inb(instance->io_port + (reg))
+	ioread8(hostdata->io + hostdata->offset + (reg))
 #define NCR5380_write(reg, value) \
-	outb(value, instance->io_port + (reg))
+	iowrite8(value, hostdata->io + hostdata->offset + (reg))
 
 #define NCR5380_implementation_fields \
+	int offset; \
 	int c400_ctl_status; \
 	int c400_blk_cnt; \
 	int c400_host_buf; \
 	int io_width;
 
-#else
-/* therefore SCSI_G_NCR5380_MEM */
-#define DRV_MODULE_NAME "g_NCR5380_mmio"
-
 #define NCR53C400_mem_base 0x3880
 #define NCR53C400_host_buffer 0x3900
 #define NCR53C400_region_size 0x3a00
 
-#define NCR5380_read(reg) \
-	readb(((struct NCR5380_hostdata *)shost_priv(instance))->iomem + \
-	      NCR53C400_mem_base + (reg))
-#define NCR5380_write(reg, value) \
-	writeb(value, ((struct NCR5380_hostdata *)shost_priv(instance))->iomem + \
-	       NCR53C400_mem_base + (reg))
-
-#define NCR5380_implementation_fields \
-	void __iomem *iomem; \
-	resource_size_t iomem_size; \
-	int c400_ctl_status; \
-	int c400_blk_cnt; \
-	int c400_host_buf;
-
-#endif
-
-#define NCR5380_dma_xfer_len(instance, cmd, phase) \
-        generic_NCR5380_dma_xfer_len(instance, cmd)
+#define NCR5380_dma_xfer_len		generic_NCR5380_dma_xfer_len
 #define NCR5380_dma_recv_setup		generic_NCR5380_pread
 #define NCR5380_dma_send_setup		generic_NCR5380_pwrite
-#define NCR5380_dma_residual(instance)	(0)
+#define NCR5380_dma_residual		NCR5380_dma_residual_none
 
 #define NCR5380_intr generic_NCR5380_intr
 #define NCR5380_queue_command generic_NCR5380_queue_command
@@ -73,4 +52,3 @@
 #define BOARD_HP_C2502	4
 
 #endif /* GENERIC_NCR5380_H */

diff --git a/drivers/scsi/g_NCR5380_mmio.c b/drivers/scsi/g_NCR5380_mmio.c
@@ -1,10 +0,0 @@
-/*
- *	There is probably a nicer way to do this but this one makes
- *	pretty obvious what is happening. We rebuild the same file with
- *	different options for mmio versus pio.
- */
-
-#define SCSI_G_NCR5380_MEM
-
-#include "g_NCR5380.c"

diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
@@ -13,6 +13,7 @@
 #define _HISI_SAS_H_
 
 #include <linux/acpi.h>
+#include <linux/clk.h>
 #include <linux/dmapool.h>
 #include <linux/mfd/syscon.h>
 #include <linux/module.h>
@@ -110,7 +111,7 @@ struct hisi_sas_device {
 	struct domain_device	*sas_device;
 	u64 attached_phy;
 	u64 device_id;
-	u64 running_req;
+	atomic64_t running_req;
 	u8 dev_status;
 };
 
@@ -149,7 +150,8 @@ struct hisi_sas_hw {
 			struct domain_device *device);
 	struct hisi_sas_device *(*alloc_dev)(struct domain_device *device);
 	void (*sl_notify)(struct hisi_hba *hisi_hba, int phy_no);
-	int (*get_free_slot)(struct hisi_hba *hisi_hba, int *q, int *s);
+	int (*get_free_slot)(struct hisi_hba *hisi_hba, u32 dev_id,
+			     int *q, int *s);
 	void (*start_delivery)(struct hisi_hba *hisi_hba);
 	int (*prep_ssp)(struct hisi_hba *hisi_hba,
 			struct hisi_sas_slot *slot, int is_tmf,
@@ -166,6 +168,9 @@ struct hisi_sas_hw {
 	void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no);
 	void (*phy_disable)(struct hisi_hba *hisi_hba, int phy_no);
 	void (*phy_hard_reset)(struct hisi_hba *hisi_hba, int phy_no);
+	void (*phy_set_linkrate)(struct hisi_hba *hisi_hba, int phy_no,
+			struct sas_phy_linkrates *linkrates);
+	enum sas_linkrate (*phy_get_max_linkrate)(void);
 	void (*free_device)(struct hisi_hba *hisi_hba,
 			    struct hisi_sas_device *dev);
 	int (*get_wideport_bitmap)(struct hisi_hba *hisi_hba, int port_id);
@@ -183,6 +188,7 @@ struct hisi_hba {
 	u32 ctrl_reset_reg;
 	u32 ctrl_reset_sts_reg;
 	u32 ctrl_clock_ena_reg;
+	u32 refclk_frequency_mhz;
 	u8 sas_addr[SAS_ADDR_SIZE];
 
 	int n_phy;
@@ -205,7 +211,6 @@ struct hisi_hba {
 	struct hisi_sas_port port[HISI_SAS_MAX_PHYS];
 
 	int	queue_count;
-	int	queue;
 	struct hisi_sas_slot	*slot_prep;
 
 	struct dma_pool *sge_page_pool;
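
The `running_req` field above changes from a plain `u64` to `atomic64_t` because it is incremented on the submission path and decremented on the completion path, which can run concurrently; plain `++`/`--` read-modify-write sequences can lose updates there. A user-space analogue of the new scheme using C11 atomics (wrapper names here are hypothetical, mirroring the kernel's `atomic64_inc()`/`atomic64_dec()`):

```c
#include <stdatomic.h>

/* Counter shared between submission and completion contexts. */
static _Atomic long running_req;

/* atomic64_inc() analogue: indivisible read-modify-write. */
static void running_req_inc(void)
{
	atomic_fetch_add(&running_req, 1);
}

/* atomic64_dec() analogue. */
static void running_req_dec(void)
{
	atomic_fetch_sub(&running_req, 1);
}

static long running_req_read(void)
{
	return atomic_load(&running_req);
}
```

Note the patch also drops the old `if (sas_dev && sas_dev->running_req)` guard before decrementing; the inc/dec pairing alone keeps the counter balanced.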

diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -162,8 +162,8 @@ out:
 	hisi_sas_slot_task_free(hisi_hba, task, abort_slot);
 	if (task->task_done)
 		task->task_done(task);
-	if (sas_dev && sas_dev->running_req)
-		sas_dev->running_req--;
+	if (sas_dev)
+		atomic64_dec(&sas_dev->running_req);
 }
 
 static int hisi_sas_task_prep(struct sas_task *task, struct hisi_hba *hisi_hba,
@@ -232,8 +232,8 @@ static int hisi_sas_task_prep(struct sas_task *task, struct hisi_hba *hisi_hba,
 	rc = hisi_sas_slot_index_alloc(hisi_hba, &slot_idx);
 	if (rc)
 		goto err_out;
-	rc = hisi_hba->hw->get_free_slot(hisi_hba, &dlvry_queue,
-					 &dlvry_queue_slot);
+	rc = hisi_hba->hw->get_free_slot(hisi_hba, sas_dev->device_id,
+					 &dlvry_queue, &dlvry_queue_slot);
 	if (rc)
 		goto err_out_tag;
 
@@ -303,7 +303,7 @@ static int hisi_sas_task_prep(struct sas_task *task, struct hisi_hba *hisi_hba,
 
 	hisi_hba->slot_prep = slot;
 
-	sas_dev->running_req++;
+	atomic64_inc(&sas_dev->running_req);
 	++(*pass);
 
 	return 0;
@@ -369,8 +369,13 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
 	struct sas_phy *sphy = sas_phy->phy;
 
 	sphy->negotiated_linkrate = sas_phy->linkrate;
-	sphy->minimum_linkrate = phy->minimum_linkrate;
 	sphy->minimum_linkrate_hw = SAS_LINK_RATE_1_5_GBPS;
-	sphy->maximum_linkrate = phy->maximum_linkrate;
+	sphy->maximum_linkrate_hw =
+		hisi_hba->hw->phy_get_max_linkrate();
+	if (sphy->minimum_linkrate == SAS_LINK_RATE_UNKNOWN)
+		sphy->minimum_linkrate = phy->minimum_linkrate;
+	if (sphy->maximum_linkrate == SAS_LINK_RATE_UNKNOWN)
+		sphy->maximum_linkrate = phy->maximum_linkrate;
 }
 
@@ -537,7 +542,7 @@ static void hisi_sas_port_notify_formed(struct asd_sas_phy *sas_phy)
 	struct hisi_hba *hisi_hba = sas_ha->lldd_ha;
 	struct hisi_sas_phy *phy = sas_phy->lldd_phy;
 	struct asd_sas_port *sas_port = sas_phy->port;
-	struct hisi_sas_port *port = &hisi_hba->port[sas_phy->id];
+	struct hisi_sas_port *port = &hisi_hba->port[phy->port_id];
 	unsigned long flags;
 
 	if (!sas_port)
@@ -645,6 +650,9 @@ static int hisi_sas_control_phy(struct asd_sas_phy *sas_phy, enum phy_func func,
 		break;
 
 	case PHY_FUNC_SET_LINK_RATE:
+		hisi_hba->hw->phy_set_linkrate(hisi_hba, phy_no, funcdata);
+		break;
+
 	case PHY_FUNC_RELEASE_SPINUP_HOLD:
 	default:
 		return -EOPNOTSUPP;
@@ -764,7 +772,8 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
 		task = NULL;
 	}
 ex_err:
-	WARN_ON(retry == TASK_RETRY);
+	if (retry == TASK_RETRY)
+		dev_warn(dev, "abort tmf: executing internal task failed!\n");
 	sas_free_task(task);
 	return res;
 }
@@ -960,6 +969,9 @@ static int hisi_sas_query_task(struct sas_task *task)
 		case TMF_RESP_FUNC_FAILED:
 		case TMF_RESP_FUNC_COMPLETE:
 			break;
+		default:
+			rc = TMF_RESP_FUNC_FAILED;
+			break;
 		}
 	}
 	return rc;
@@ -987,8 +999,8 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, u64 device_id,
 	rc = hisi_sas_slot_index_alloc(hisi_hba, &slot_idx);
 	if (rc)
 		goto err_out;
-	rc = hisi_hba->hw->get_free_slot(hisi_hba, &dlvry_queue,
-					 &dlvry_queue_slot);
+	rc = hisi_hba->hw->get_free_slot(hisi_hba, sas_dev->device_id,
+					 &dlvry_queue, &dlvry_queue_slot);
 	if (rc)
 		goto err_out_tag;
 
@@ -1023,7 +1035,8 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, u64 device_id,
 
 	hisi_hba->slot_prep = slot;
 
-	sas_dev->running_req++;
+	atomic64_inc(&sas_dev->running_req);
+
 	/* send abort command to our chip */
 	hisi_hba->hw->start_delivery(hisi_hba);
@@ -1396,10 +1409,13 @@ static struct Scsi_Host *hisi_sas_shost_alloc(struct platform_device *pdev,
 	struct hisi_hba *hisi_hba;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = pdev->dev.of_node;
+	struct clk *refclk;
 
 	shost = scsi_host_alloc(&hisi_sas_sht, sizeof(*hisi_hba));
-	if (!shost)
-		goto err_out;
+	if (!shost) {
+		dev_err(dev, "scsi host alloc failed\n");
+		return NULL;
+	}
 	hisi_hba = shost_priv(shost);
 
 	hisi_hba->hw = hw;
@@ -1432,6 +1448,12 @@ static struct Scsi_Host *hisi_sas_shost_alloc(struct platform_device *pdev,
 			goto err_out;
 	}
 
+	refclk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(refclk))
+		dev_info(dev, "no ref clk property\n");
+	else
+		hisi_hba->refclk_frequency_mhz = clk_get_rate(refclk) / 1000000;
+
 	if (device_property_read_u32(dev, "phy-count", &hisi_hba->n_phy))
 		goto err_out;
 
@@ -1457,6 +1479,7 @@ static struct Scsi_Host *hisi_sas_shost_alloc(struct platform_device *pdev,
 	return shost;
 err_out:
+	kfree(shost);
 	dev_err(dev, "shost alloc failed\n");
 	return NULL;
 }
@@ -1483,10 +1506,8 @@ int hisi_sas_probe(struct platform_device *pdev,
 	int rc, phy_nr, port_nr, i;
 
 	shost = hisi_sas_shost_alloc(pdev, hw);
-	if (!shost) {
-		rc = -ENOMEM;
-		goto err_out_ha;
-	}
+	if (!shost)
+		return -ENOMEM;
 
 	sha = SHOST_TO_SAS_HA(shost);
 	hisi_hba = shost_priv(shost);
@@ -1496,12 +1517,13 @@ int hisi_sas_probe(struct platform_device *pdev,
 	arr_phy = devm_kcalloc(dev, phy_nr, sizeof(void *), GFP_KERNEL);
 	arr_port = devm_kcalloc(dev, port_nr, sizeof(void *), GFP_KERNEL);
-	if (!arr_phy || !arr_port)
-		return -ENOMEM;
+	if (!arr_phy || !arr_port) {
+		rc = -ENOMEM;
+		goto err_out_ha;
+	}
 
 	sha->sas_phy = arr_phy;
 	sha->sas_port = arr_port;
-	sha->core.shost = shost;
 	sha->lldd_ha = hisi_hba;
 
 	shost->transportt = hisi_sas_stt;
@@ -1546,6 +1568,7 @@ int hisi_sas_probe(struct platform_device *pdev,
 err_out_register_ha:
 	scsi_remove_host(shost);
 err_out_ha:
+	hisi_sas_free(hisi_hba);
 	kfree(shost);
 	return rc;
 }
@@ -1555,12 +1578,14 @@ int hisi_sas_remove(struct platform_device *pdev)
 {
 	struct sas_ha_struct *sha = platform_get_drvdata(pdev);
 	struct hisi_hba *hisi_hba = sha->lldd_ha;
+	struct Scsi_Host *shost = sha->core.shost;
 
 	scsi_remove_host(sha->core.shost);
 	sas_unregister_ha(sha);
 	sas_remove_host(sha->core.shost);
 
 	hisi_sas_free(hisi_hba);
+	kfree(shost);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(hisi_sas_remove);
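
The `get_free_slot` callback above gains a `dev_id` argument because the v1 hardware implementation stops rotating a shared queue index and instead derives the delivery queue statically from the device id. A sketch of that mapping (the queue count of 32 in the usage below is an assumed example value, not taken from the driver):

```c
#include <stdint.h>

/* Static device-to-queue mapping: every command for a given device
 * lands on the same delivery queue, so per-device ordering is kept
 * without a shared rotating cursor. */
static int queue_for_device(uint32_t dev_id, int queue_count)
{
	return dev_id % queue_count;
}
```

The cost is that a full queue now returns -EAGAIN immediately instead of probing the other queues, which the caller already handles.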

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -843,6 +843,49 @@ static void sl_notify_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_CONTROL, sl_control);
 }
 
+static enum sas_linkrate phy_get_max_linkrate_v1_hw(void)
+{
+	return SAS_LINK_RATE_6_0_GBPS;
+}
+
+static void phy_set_linkrate_v1_hw(struct hisi_hba *hisi_hba, int phy_no,
+		struct sas_phy_linkrates *r)
+{
+	u32 prog_phy_link_rate =
+		hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE);
+	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+	int i;
+	enum sas_linkrate min, max;
+	u32 rate_mask = 0;
+
+	if (r->maximum_linkrate == SAS_LINK_RATE_UNKNOWN) {
+		max = sas_phy->phy->maximum_linkrate;
+		min = r->minimum_linkrate;
+	} else if (r->minimum_linkrate == SAS_LINK_RATE_UNKNOWN) {
+		max = r->maximum_linkrate;
+		min = sas_phy->phy->minimum_linkrate;
+	} else
+		return;
+
+	sas_phy->phy->maximum_linkrate = max;
+	sas_phy->phy->minimum_linkrate = min;
+
+	min -= SAS_LINK_RATE_1_5_GBPS;
+	max -= SAS_LINK_RATE_1_5_GBPS;
+
+	for (i = 0; i <= max; i++)
+		rate_mask |= 1 << (i * 2);
+
+	prog_phy_link_rate &= ~0xff;
+	prog_phy_link_rate |= rate_mask;
+
+	hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE,
+			prog_phy_link_rate);
+
+	phy_hard_reset_v1_hw(hisi_hba, phy_no);
+}
+
 static int get_wideport_bitmap_v1_hw(struct hisi_hba *hisi_hba, int port_id)
 {
 	int i, bitmap = 0;
@@ -862,29 +905,23 @@ static int get_wideport_bitmap_v1_hw(struct hisi_hba *hisi_hba, int port_id)
  * The callpath to this function and upto writing the write
  * queue pointer should be safe from interruption.
  */
-static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, int *q, int *s)
+static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, u32 dev_id,
+				int *q, int *s)
 {
 	struct device *dev = &hisi_hba->pdev->dev;
 	struct hisi_sas_dq *dq;
 	u32 r, w;
-	int queue = hisi_hba->queue;
+	int queue = dev_id % hisi_hba->queue_count;
 
-	while (1) {
-		dq = &hisi_hba->dq[queue];
-		w = dq->wr_point;
-		r = hisi_sas_read32_relaxed(hisi_hba,
-				DLVRY_Q_0_RD_PTR + (queue * 0x14));
-		if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
-			queue = (queue + 1) % hisi_hba->queue_count;
-			if (queue == hisi_hba->queue) {
-				dev_warn(dev, "could not find free slot\n");
-				return -EAGAIN;
-			}
-			continue;
-		}
-		break;
+	dq = &hisi_hba->dq[queue];
+	w = dq->wr_point;
+	r = hisi_sas_read32_relaxed(hisi_hba,
+				DLVRY_Q_0_RD_PTR + (queue * 0x14));
+	if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
+		dev_warn(dev, "could not find free slot\n");
+		return -EAGAIN;
 	}
-	hisi_hba->queue = (queue + 1) % hisi_hba->queue_count;
+
 	*q = queue;
 	*s = w;
 	return 0;
@@ -1372,8 +1409,8 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba,
 	}
 
 out:
-	if (sas_dev && sas_dev->running_req)
-		sas_dev->running_req--;
+	if (sas_dev)
+		atomic64_dec(&sas_dev->running_req);
 
 	hisi_sas_slot_task_free(hisi_hba, task, slot);
 	sts = ts->stat;
@@ -1824,6 +1861,8 @@ static const struct hisi_sas_hw hisi_sas_v1_hw = {
 	.phy_enable = enable_phy_v1_hw,
 	.phy_disable = disable_phy_v1_hw,
 	.phy_hard_reset = phy_hard_reset_v1_hw,
+	.phy_set_linkrate = phy_set_linkrate_v1_hw,
+	.phy_get_max_linkrate = phy_get_max_linkrate_v1_hw,
 	.get_wideport_bitmap = get_wideport_bitmap_v1_hw,
 	.max_command_entries = HISI_SAS_COMMAND_ENTRIES_V1_HW,
 	.complete_hdr_size = sizeof(struct hisi_sas_complete_v1_hdr),
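
`phy_set_linkrate_v1_hw` above programs the low byte of PROG_PHY_LINK_RATE with one enable bit per supported rate, two bit positions apart, starting at 1.5 Gbps. A sketch of just that mask computation; the enum values are assumed to match `<scsi/sas.h>` (1.5 Gbps = 8, 3.0 = 9, 6.0 = 10), and note that, as in the driver, the mask enables every rate up to the maximum rather than using the minimum:

```c
#include <stdint.h>

/* Assumed enum sas_linkrate values, as in <scsi/sas.h>. */
enum {
	SAS_LINK_RATE_1_5_GBPS = 8,
	SAS_LINK_RATE_3_0_GBPS = 9,
	SAS_LINK_RATE_6_0_GBPS = 10,
};

/* One enable bit per rate, spaced two bits apart: bit 0 for 1.5 Gbps,
 * bit 2 for 3.0 Gbps, bit 4 for 6.0 Gbps. */
static uint32_t prog_phy_rate_mask(int max_linkrate)
{
	uint32_t mask = 0;
	int i, top = max_linkrate - SAS_LINK_RATE_1_5_GBPS;

	for (i = 0; i <= top; i++)
		mask |= 1 << (i * 2);
	return mask;
}
```

So capping a phy at 6.0 Gbps yields 0x15 (bits 0, 2, 4), and capping at 1.5 Gbps yields 0x1.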

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@ -55,10 +55,44 @@
#define HGC_DFX_CFG2 0xc0 #define HGC_DFX_CFG2 0xc0
#define HGC_IOMB_PROC1_STATUS 0x104 #define HGC_IOMB_PROC1_STATUS 0x104
#define CFG_1US_TIMER_TRSH 0xcc #define CFG_1US_TIMER_TRSH 0xcc
#define HGC_LM_DFX_STATUS2 0x128
#define HGC_LM_DFX_STATUS2_IOSTLIST_OFF 0
#define HGC_LM_DFX_STATUS2_IOSTLIST_MSK (0xfff << \
HGC_LM_DFX_STATUS2_IOSTLIST_OFF)
#define HGC_LM_DFX_STATUS2_ITCTLIST_OFF 12
#define HGC_LM_DFX_STATUS2_ITCTLIST_MSK (0x7ff << \
HGC_LM_DFX_STATUS2_ITCTLIST_OFF)
#define HGC_CQE_ECC_ADDR 0x13c
#define HGC_CQE_ECC_1B_ADDR_OFF 0
#define HGC_CQE_ECC_1B_ADDR_MSK (0x3f << HGC_CQE_ECC_1B_ADDR_OFF)
#define HGC_CQE_ECC_MB_ADDR_OFF 8
#define HGC_CQE_ECC_MB_ADDR_MSK (0x3f << HGC_CQE_ECC_MB_ADDR_OFF)
#define HGC_IOST_ECC_ADDR 0x140
#define HGC_IOST_ECC_1B_ADDR_OFF 0
#define HGC_IOST_ECC_1B_ADDR_MSK (0x3ff << HGC_IOST_ECC_1B_ADDR_OFF)
#define HGC_IOST_ECC_MB_ADDR_OFF 16
#define HGC_IOST_ECC_MB_ADDR_MSK (0x3ff << HGC_IOST_ECC_MB_ADDR_OFF)
#define HGC_DQE_ECC_ADDR 0x144
#define HGC_DQE_ECC_1B_ADDR_OFF 0
#define HGC_DQE_ECC_1B_ADDR_MSK (0xfff << HGC_DQE_ECC_1B_ADDR_OFF)
#define HGC_DQE_ECC_MB_ADDR_OFF 16
#define HGC_DQE_ECC_MB_ADDR_MSK (0xfff << HGC_DQE_ECC_MB_ADDR_OFF)
#define HGC_INVLD_DQE_INFO 0x148 #define HGC_INVLD_DQE_INFO 0x148
#define HGC_INVLD_DQE_INFO_FB_CH0_OFF 9 #define HGC_INVLD_DQE_INFO_FB_CH0_OFF 9
#define HGC_INVLD_DQE_INFO_FB_CH0_MSK (0x1 << HGC_INVLD_DQE_INFO_FB_CH0_OFF) #define HGC_INVLD_DQE_INFO_FB_CH0_MSK (0x1 << HGC_INVLD_DQE_INFO_FB_CH0_OFF)
#define HGC_INVLD_DQE_INFO_FB_CH3_OFF 18 #define HGC_INVLD_DQE_INFO_FB_CH3_OFF 18
#define HGC_ITCT_ECC_ADDR 0x150
#define HGC_ITCT_ECC_1B_ADDR_OFF 0
#define HGC_ITCT_ECC_1B_ADDR_MSK (0x3ff << \
HGC_ITCT_ECC_1B_ADDR_OFF)
#define HGC_ITCT_ECC_MB_ADDR_OFF 16
#define HGC_ITCT_ECC_MB_ADDR_MSK (0x3ff << \
HGC_ITCT_ECC_MB_ADDR_OFF)
#define HGC_AXI_FIFO_ERR_INFO 0x154
#define AXI_ERR_INFO_OFF 0
#define AXI_ERR_INFO_MSK (0xff << AXI_ERR_INFO_OFF)
#define FIFO_ERR_INFO_OFF 8
#define FIFO_ERR_INFO_MSK (0xff << FIFO_ERR_INFO_OFF)
#define INT_COAL_EN 0x19c #define INT_COAL_EN 0x19c
#define OQ_INT_COAL_TIME 0x1a0 #define OQ_INT_COAL_TIME 0x1a0
#define OQ_INT_COAL_CNT 0x1a4 #define OQ_INT_COAL_CNT 0x1a4
@ -73,13 +107,41 @@
#define ENT_INT_SRC1_D2H_FIS_CH1_MSK (0x1 << ENT_INT_SRC1_D2H_FIS_CH1_OFF) #define ENT_INT_SRC1_D2H_FIS_CH1_MSK (0x1 << ENT_INT_SRC1_D2H_FIS_CH1_OFF)
#define ENT_INT_SRC2 0x1bc #define ENT_INT_SRC2 0x1bc
#define ENT_INT_SRC3 0x1c0 #define ENT_INT_SRC3 0x1c0
#define ENT_INT_SRC3_WP_DEPTH_OFF 8
#define ENT_INT_SRC3_IPTT_SLOT_NOMATCH_OFF 9
#define ENT_INT_SRC3_RP_DEPTH_OFF 10
#define ENT_INT_SRC3_AXI_OFF 11
#define ENT_INT_SRC3_FIFO_OFF 12
#define ENT_INT_SRC3_LM_OFF 14
#define ENT_INT_SRC3_ITC_INT_OFF 15 #define ENT_INT_SRC3_ITC_INT_OFF 15
#define ENT_INT_SRC3_ITC_INT_MSK (0x1 << ENT_INT_SRC3_ITC_INT_OFF) #define ENT_INT_SRC3_ITC_INT_MSK (0x1 << ENT_INT_SRC3_ITC_INT_OFF)
#define ENT_INT_SRC3_ABT_OFF 16
#define ENT_INT_SRC_MSK1 0x1c4 #define ENT_INT_SRC_MSK1 0x1c4
#define ENT_INT_SRC_MSK2 0x1c8 #define ENT_INT_SRC_MSK2 0x1c8
#define ENT_INT_SRC_MSK3 0x1cc #define ENT_INT_SRC_MSK3 0x1cc
#define ENT_INT_SRC_MSK3_ENT95_MSK_OFF 31 #define ENT_INT_SRC_MSK3_ENT95_MSK_OFF 31
#define ENT_INT_SRC_MSK3_ENT95_MSK_MSK (0x1 << ENT_INT_SRC_MSK3_ENT95_MSK_OFF) #define ENT_INT_SRC_MSK3_ENT95_MSK_MSK (0x1 << ENT_INT_SRC_MSK3_ENT95_MSK_OFF)
#define SAS_ECC_INTR 0x1e8
#define SAS_ECC_INTR_DQE_ECC_1B_OFF 0
#define SAS_ECC_INTR_DQE_ECC_MB_OFF 1
#define SAS_ECC_INTR_IOST_ECC_1B_OFF 2
#define SAS_ECC_INTR_IOST_ECC_MB_OFF 3
#define SAS_ECC_INTR_ITCT_ECC_MB_OFF 4
#define SAS_ECC_INTR_ITCT_ECC_1B_OFF 5
#define SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF 6
#define SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF 7
#define SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF 8
#define SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF 9
#define SAS_ECC_INTR_CQE_ECC_1B_OFF 10
#define SAS_ECC_INTR_CQE_ECC_MB_OFF 11
#define SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF 12
#define SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF 13
#define SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF 14
#define SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF 15
#define SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF 16
#define SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF 17
#define SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF 18
#define SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF 19
#define SAS_ECC_INTR_MSK 0x1ec #define SAS_ECC_INTR_MSK 0x1ec
#define HGC_ERR_STAT_EN 0x238 #define HGC_ERR_STAT_EN 0x238
#define DLVRY_Q_0_BASE_ADDR_LO 0x260 #define DLVRY_Q_0_BASE_ADDR_LO 0x260
@ -94,7 +156,20 @@
#define COMPL_Q_0_DEPTH 0x4e8 #define COMPL_Q_0_DEPTH 0x4e8
#define COMPL_Q_0_WR_PTR 0x4ec #define COMPL_Q_0_WR_PTR 0x4ec
#define COMPL_Q_0_RD_PTR 0x4f0 #define COMPL_Q_0_RD_PTR 0x4f0
#define HGC_RXM_DFX_STATUS14 0xae8
#define HGC_RXM_DFX_STATUS14_MEM0_OFF 0
#define HGC_RXM_DFX_STATUS14_MEM0_MSK (0x1ff << \
HGC_RXM_DFX_STATUS14_MEM0_OFF)
#define HGC_RXM_DFX_STATUS14_MEM1_OFF 9
#define HGC_RXM_DFX_STATUS14_MEM1_MSK (0x1ff << \
HGC_RXM_DFX_STATUS14_MEM1_OFF)
#define HGC_RXM_DFX_STATUS14_MEM2_OFF 18
#define HGC_RXM_DFX_STATUS14_MEM2_MSK (0x1ff << \
HGC_RXM_DFX_STATUS14_MEM2_OFF)
#define HGC_RXM_DFX_STATUS15 0xaec
#define HGC_RXM_DFX_STATUS15_MEM3_OFF 0
#define HGC_RXM_DFX_STATUS15_MEM3_MSK (0x1ff << \
HGC_RXM_DFX_STATUS15_MEM3_OFF)
/* phy registers need init */ /* phy registers need init */
#define PORT_BASE (0x2000) #define PORT_BASE (0x2000)
@ -119,6 +194,9 @@
#define SL_CONTROL_NOTIFY_EN_MSK (0x1 << SL_CONTROL_NOTIFY_EN_OFF) #define SL_CONTROL_NOTIFY_EN_MSK (0x1 << SL_CONTROL_NOTIFY_EN_OFF)
#define SL_CONTROL_CTA_OFF 17 #define SL_CONTROL_CTA_OFF 17
#define SL_CONTROL_CTA_MSK (0x1 << SL_CONTROL_CTA_OFF) #define SL_CONTROL_CTA_MSK (0x1 << SL_CONTROL_CTA_OFF)
#define RX_PRIMS_STATUS (PORT_BASE + 0x98)
#define RX_BCAST_CHG_OFF 1
#define RX_BCAST_CHG_MSK (0x1 << RX_BCAST_CHG_OFF)
#define TX_ID_DWORD0 (PORT_BASE + 0x9c) #define TX_ID_DWORD0 (PORT_BASE + 0x9c)
#define TX_ID_DWORD1 (PORT_BASE + 0xa0) #define TX_ID_DWORD1 (PORT_BASE + 0xa0)
#define TX_ID_DWORD2 (PORT_BASE + 0xa4) #define TX_ID_DWORD2 (PORT_BASE + 0xa4)
@ -267,6 +345,8 @@
#define ITCT_HDR_RTOLT_OFF 48 #define ITCT_HDR_RTOLT_OFF 48
#define ITCT_HDR_RTOLT_MSK (0xffffULL << ITCT_HDR_RTOLT_OFF) #define ITCT_HDR_RTOLT_MSK (0xffffULL << ITCT_HDR_RTOLT_OFF)
#define HISI_SAS_FATAL_INT_NR 2
struct hisi_sas_complete_v2_hdr {
__le32 dw0;
__le32 dw1;
@@ -659,8 +739,6 @@ static void free_device_v2_hw(struct hisi_hba *hisi_hba,
qw0 &= ~(1 << ITCT_HDR_VALID_OFF);
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
ENT_INT_SRC3_ITC_INT_MSK);
-hisi_hba->devices[dev_id].dev_type = SAS_PHY_UNUSED;
-hisi_hba->devices[dev_id].dev_status = HISI_SAS_DEV_NORMAL;
/* clear the itct */
hisi_sas_write32(hisi_hba, ITCT_CLR, 0);
@@ -808,7 +886,7 @@ static void init_reg_v2_hw(struct hisi_hba *hisi_hba)
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0x7efefefe);
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0x7efefefe);
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0x7ffffffe);
-hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xfffff3c0);
+hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xfff00c30);
for (i = 0; i < hisi_hba->queue_count; i++)
hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK+0x4*i, 0);
@@ -824,7 +902,7 @@ static void init_reg_v2_hw(struct hisi_hba *hisi_hba)
hisi_sas_phy_write32(hisi_hba, i, DONE_RECEIVED_TIME, 0x10);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT0, 0xffffffff);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff);
-hisi_sas_phy_write32(hisi_hba, i, CHL_INT2, 0xffffffff);
+hisi_sas_phy_write32(hisi_hba, i, CHL_INT2, 0xfff87fff);
hisi_sas_phy_write32(hisi_hba, i, RXOP_CHECK_CFG_H, 0x1000);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xffffffff);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0x8ffffbff);
@@ -836,7 +914,9 @@ static void init_reg_v2_hw(struct hisi_hba *hisi_hba)
hisi_sas_phy_write32(hisi_hba, i, SL_RX_BCAST_CHK_MSK, 0x0);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT_COAL_EN, 0x0);
hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_OOB_RESTART_MSK, 0x0);
if (hisi_hba->refclk_frequency_mhz == 66)
hisi_sas_phy_write32(hisi_hba, i, PHY_CTRL, 0x199B694);
/* else, do nothing -> leave it how you found it */
}
for (i = 0; i < hisi_hba->queue_count; i++) {
@@ -980,6 +1060,49 @@ static void sl_notify_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
hisi_sas_phy_write32(hisi_hba, phy_no, SL_CONTROL, sl_control);
}
static enum sas_linkrate phy_get_max_linkrate_v2_hw(void)
{
return SAS_LINK_RATE_12_0_GBPS;
}
static void phy_set_linkrate_v2_hw(struct hisi_hba *hisi_hba, int phy_no,
struct sas_phy_linkrates *r)
{
u32 prog_phy_link_rate =
hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE);
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
struct asd_sas_phy *sas_phy = &phy->sas_phy;
int i;
enum sas_linkrate min, max;
u32 rate_mask = 0;
if (r->maximum_linkrate == SAS_LINK_RATE_UNKNOWN) {
max = sas_phy->phy->maximum_linkrate;
min = r->minimum_linkrate;
} else if (r->minimum_linkrate == SAS_LINK_RATE_UNKNOWN) {
max = r->maximum_linkrate;
min = sas_phy->phy->minimum_linkrate;
} else
return;
sas_phy->phy->maximum_linkrate = max;
sas_phy->phy->minimum_linkrate = min;
min -= SAS_LINK_RATE_1_5_GBPS;
max -= SAS_LINK_RATE_1_5_GBPS;
for (i = 0; i <= max; i++)
rate_mask |= 1 << (i * 2);
prog_phy_link_rate &= ~0xff;
prog_phy_link_rate |= rate_mask;
hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE,
prog_phy_link_rate);
phy_hard_reset_v2_hw(hisi_hba, phy_no);
}
static int get_wideport_bitmap_v2_hw(struct hisi_hba *hisi_hba, int port_id)
{
int i, bitmap = 0;
@@ -1010,29 +1133,24 @@ static int get_wideport_bitmap_v2_hw(struct hisi_hba *hisi_hba, int port_id)
* The callpath to this function and upto writing the write
* queue pointer should be safe from interruption.
*/
-static int get_free_slot_v2_hw(struct hisi_hba *hisi_hba, int *q, int *s)
+static int get_free_slot_v2_hw(struct hisi_hba *hisi_hba, u32 dev_id,
+int *q, int *s)
{
struct device *dev = &hisi_hba->pdev->dev;
struct hisi_sas_dq *dq;
u32 r, w;
-int queue = hisi_hba->queue;
+int queue = dev_id % hisi_hba->queue_count;
-while (1) {
dq = &hisi_hba->dq[queue];
w = dq->wr_point;
r = hisi_sas_read32_relaxed(hisi_hba,
DLVRY_Q_0_RD_PTR + (queue * 0x14));
if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
-queue = (queue + 1) % hisi_hba->queue_count;
-if (queue == hisi_hba->queue) {
-dev_warn(dev, "could not find free slot\n");
+dev_warn(dev, "full queue=%d r=%d w=%d\n",
+queue, r, w);
return -EAGAIN;
}
-continue;
-}
-break;
-}
-hisi_hba->queue = (queue + 1) % hisi_hba->queue_count;
*q = queue;
*s = w;
return 0;
@@ -1653,8 +1771,8 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot,
}
out:
-if (sas_dev && sas_dev->running_req)
-sas_dev->running_req--;
+if (sas_dev)
+atomic64_dec(&sas_dev->running_req);
hisi_sas_slot_task_free(hisi_hba, task, slot);
sts = ts->stat;
@@ -1675,6 +1793,7 @@ static u8 get_ata_protocol(u8 cmd, int direction)
case ATA_CMD_NCQ_NON_DATA:
return SATA_PROTOCOL_FPDMA;
case ATA_CMD_DOWNLOAD_MICRO:
case ATA_CMD_ID_ATA:
case ATA_CMD_PMP_READ:
case ATA_CMD_READ_LOG_EXT:
@@ -1686,18 +1805,27 @@ static u8 get_ata_protocol(u8 cmd, int direction)
case ATA_CMD_PIO_WRITE_EXT:
return SATA_PROTOCOL_PIO;
case ATA_CMD_DSM:
case ATA_CMD_DOWNLOAD_MICRO_DMA:
case ATA_CMD_PMP_READ_DMA:
case ATA_CMD_PMP_WRITE_DMA:
case ATA_CMD_READ:
case ATA_CMD_READ_EXT:
case ATA_CMD_READ_LOG_DMA_EXT:
case ATA_CMD_READ_STREAM_DMA_EXT:
case ATA_CMD_TRUSTED_RCV_DMA:
case ATA_CMD_TRUSTED_SND_DMA:
case ATA_CMD_WRITE:
case ATA_CMD_WRITE_EXT:
case ATA_CMD_WRITE_FUA_EXT:
case ATA_CMD_WRITE_QUEUED:
case ATA_CMD_WRITE_LOG_DMA_EXT:
case ATA_CMD_WRITE_STREAM_DMA_EXT:
return SATA_PROTOCOL_DMA;
-case ATA_CMD_DOWNLOAD_MICRO:
-case ATA_CMD_DEV_RESET:
case ATA_CMD_CHK_POWER:
+case ATA_CMD_DEV_RESET:
+case ATA_CMD_EDD:
case ATA_CMD_FLUSH:
case ATA_CMD_FLUSH_EXT:
case ATA_CMD_VERIFY:
@@ -1970,8 +2098,11 @@ static void phy_bcast_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
struct asd_sas_phy *sas_phy = &phy->sas_phy;
struct sas_ha_struct *sas_ha = &hisi_hba->sha;
u32 bcast_status;
hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 1);
bcast_status = hisi_sas_phy_read32(hisi_hba, phy_no, RX_PRIMS_STATUS);
if (bcast_status & RX_BCAST_CHG_MSK)
sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
CHL_INT0_SL_RX_BCST_ACK_MSK);
@@ -2005,7 +2136,8 @@ static irqreturn_t int_chnl_int_v2_hw(int irq_no, void *p)
if (irq_value1) {
if (irq_value1 & (CHL_INT1_DMAC_RX_ECC_ERR_MSK |
CHL_INT1_DMAC_TX_ECC_ERR_MSK))
-panic("%s: DMAC RX/TX ecc bad error! (0x%x)",
+panic("%s: DMAC RX/TX ecc bad error!\
+(0x%x)",
dev_name(dev), irq_value1);
hisi_sas_phy_write32(hisi_hba, phy_no,
@@ -2037,6 +2169,318 @@ static irqreturn_t int_chnl_int_v2_hw(int irq_no, void *p)
return IRQ_HANDLED;
}
static void
one_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba, u32 irq_value)
{
struct device *dev = &hisi_hba->pdev->dev;
u32 reg_val;
if (irq_value & BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_DQE_ECC_ADDR);
dev_warn(dev, "hgc_dqe_acc1b_intr found: \
Ram address is 0x%08X\n",
(reg_val & HGC_DQE_ECC_1B_ADDR_MSK) >>
HGC_DQE_ECC_1B_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_IOST_ECC_ADDR);
dev_warn(dev, "hgc_iost_acc1b_intr found: \
Ram address is 0x%08X\n",
(reg_val & HGC_IOST_ECC_1B_ADDR_MSK) >>
HGC_IOST_ECC_1B_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_ITCT_ECC_ADDR);
dev_warn(dev, "hgc_itct_acc1b_intr found: \
Ram address is 0x%08X\n",
(reg_val & HGC_ITCT_ECC_1B_ADDR_MSK) >>
HGC_ITCT_ECC_1B_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
dev_warn(dev, "hgc_iostl_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_LM_DFX_STATUS2_IOSTLIST_MSK) >>
HGC_LM_DFX_STATUS2_IOSTLIST_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
dev_warn(dev, "hgc_itctl_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_LM_DFX_STATUS2_ITCTLIST_MSK) >>
HGC_LM_DFX_STATUS2_ITCTLIST_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_CQE_ECC_ADDR);
dev_warn(dev, "hgc_cqe_acc1b_intr found: \
Ram address is 0x%08X\n",
(reg_val & HGC_CQE_ECC_1B_ADDR_MSK) >>
HGC_CQE_ECC_1B_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
dev_warn(dev, "rxm_mem0_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_RXM_DFX_STATUS14_MEM0_MSK) >>
HGC_RXM_DFX_STATUS14_MEM0_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
dev_warn(dev, "rxm_mem1_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_RXM_DFX_STATUS14_MEM1_MSK) >>
HGC_RXM_DFX_STATUS14_MEM1_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
dev_warn(dev, "rxm_mem2_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_RXM_DFX_STATUS14_MEM2_MSK) >>
HGC_RXM_DFX_STATUS14_MEM2_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS15);
dev_warn(dev, "rxm_mem3_acc1b_intr found: \
memory address is 0x%08X\n",
(reg_val & HGC_RXM_DFX_STATUS15_MEM3_MSK) >>
HGC_RXM_DFX_STATUS15_MEM3_OFF);
}
}
static void multi_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba,
u32 irq_value)
{
u32 reg_val;
struct device *dev = &hisi_hba->pdev->dev;
if (irq_value & BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_DQE_ECC_ADDR);
panic("%s: hgc_dqe_accbad_intr (0x%x) found: \
Ram address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_DQE_ECC_MB_ADDR_MSK) >>
HGC_DQE_ECC_MB_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_IOST_ECC_ADDR);
panic("%s: hgc_iost_accbad_intr (0x%x) found: \
Ram address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_IOST_ECC_MB_ADDR_MSK) >>
HGC_IOST_ECC_MB_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_ITCT_ECC_ADDR);
panic("%s: hgc_itct_accbad_intr (0x%x) found: \
Ram address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_ITCT_ECC_MB_ADDR_MSK) >>
HGC_ITCT_ECC_MB_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
panic("%s: hgc_iostl_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_LM_DFX_STATUS2_IOSTLIST_MSK) >>
HGC_LM_DFX_STATUS2_IOSTLIST_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
panic("%s: hgc_itctl_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_LM_DFX_STATUS2_ITCTLIST_MSK) >>
HGC_LM_DFX_STATUS2_ITCTLIST_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_CQE_ECC_ADDR);
panic("%s: hgc_cqe_accbad_intr (0x%x) found: \
Ram address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_CQE_ECC_MB_ADDR_MSK) >>
HGC_CQE_ECC_MB_ADDR_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
panic("%s: rxm_mem0_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_RXM_DFX_STATUS14_MEM0_MSK) >>
HGC_RXM_DFX_STATUS14_MEM0_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
panic("%s: rxm_mem1_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_RXM_DFX_STATUS14_MEM1_MSK) >>
HGC_RXM_DFX_STATUS14_MEM1_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
panic("%s: rxm_mem2_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_RXM_DFX_STATUS14_MEM2_MSK) >>
HGC_RXM_DFX_STATUS14_MEM2_OFF);
}
if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF)) {
reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS15);
panic("%s: rxm_mem3_accbad_intr (0x%x) found: \
memory address is 0x%08X\n",
dev_name(dev), irq_value,
(reg_val & HGC_RXM_DFX_STATUS15_MEM3_MSK) >>
HGC_RXM_DFX_STATUS15_MEM3_OFF);
}
}
static irqreturn_t fatal_ecc_int_v2_hw(int irq_no, void *p)
{
struct hisi_hba *hisi_hba = p;
u32 irq_value, irq_msk;
irq_msk = hisi_sas_read32(hisi_hba, SAS_ECC_INTR_MSK);
hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, irq_msk | 0xffffffff);
irq_value = hisi_sas_read32(hisi_hba, SAS_ECC_INTR);
if (irq_value) {
one_bit_ecc_error_process_v2_hw(hisi_hba, irq_value);
multi_bit_ecc_error_process_v2_hw(hisi_hba, irq_value);
}
hisi_sas_write32(hisi_hba, SAS_ECC_INTR, irq_value);
hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, irq_msk);
return IRQ_HANDLED;
}
#define AXI_ERR_NR 8
static const char axi_err_info[AXI_ERR_NR][32] = {
"IOST_AXI_W_ERR",
"IOST_AXI_R_ERR",
"ITCT_AXI_W_ERR",
"ITCT_AXI_R_ERR",
"SATA_AXI_W_ERR",
"SATA_AXI_R_ERR",
"DQE_AXI_R_ERR",
"CQE_AXI_W_ERR"
};
#define FIFO_ERR_NR 5
static const char fifo_err_info[FIFO_ERR_NR][32] = {
"CQE_WINFO_FIFO",
"CQE_MSG_FIFIO",
"GETDQE_FIFO",
"CMDP_FIFO",
"AWTCTRL_FIFO"
};
static irqreturn_t fatal_axi_int_v2_hw(int irq_no, void *p)
{
struct hisi_hba *hisi_hba = p;
u32 irq_value, irq_msk, err_value;
struct device *dev = &hisi_hba->pdev->dev;
irq_msk = hisi_sas_read32(hisi_hba, ENT_INT_SRC_MSK3);
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, irq_msk | 0xfffffffe);
irq_value = hisi_sas_read32(hisi_hba, ENT_INT_SRC3);
if (irq_value) {
if (irq_value & BIT(ENT_INT_SRC3_WP_DEPTH_OFF)) {
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 << ENT_INT_SRC3_WP_DEPTH_OFF);
panic("%s: write pointer and depth error (0x%x) \
found!\n",
dev_name(dev), irq_value);
}
if (irq_value & BIT(ENT_INT_SRC3_IPTT_SLOT_NOMATCH_OFF)) {
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 <<
ENT_INT_SRC3_IPTT_SLOT_NOMATCH_OFF);
panic("%s: iptt no match slot error (0x%x) found!\n",
dev_name(dev), irq_value);
}
if (irq_value & BIT(ENT_INT_SRC3_RP_DEPTH_OFF))
panic("%s: read pointer and depth error (0x%x) \
found!\n",
dev_name(dev), irq_value);
if (irq_value & BIT(ENT_INT_SRC3_AXI_OFF)) {
int i;
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 << ENT_INT_SRC3_AXI_OFF);
err_value = hisi_sas_read32(hisi_hba,
HGC_AXI_FIFO_ERR_INFO);
for (i = 0; i < AXI_ERR_NR; i++) {
if (err_value & BIT(i))
panic("%s: %s (0x%x) found!\n",
dev_name(dev),
axi_err_info[i], irq_value);
}
}
if (irq_value & BIT(ENT_INT_SRC3_FIFO_OFF)) {
int i;
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 << ENT_INT_SRC3_FIFO_OFF);
err_value = hisi_sas_read32(hisi_hba,
HGC_AXI_FIFO_ERR_INFO);
for (i = 0; i < FIFO_ERR_NR; i++) {
if (err_value & BIT(AXI_ERR_NR + i))
panic("%s: %s (0x%x) found!\n",
dev_name(dev),
fifo_err_info[i], irq_value);
}
}
if (irq_value & BIT(ENT_INT_SRC3_LM_OFF)) {
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 << ENT_INT_SRC3_LM_OFF);
panic("%s: LM add/fetch list error (0x%x) found!\n",
dev_name(dev), irq_value);
}
if (irq_value & BIT(ENT_INT_SRC3_ABT_OFF)) {
hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
1 << ENT_INT_SRC3_ABT_OFF);
panic("%s: SAS_HGC_ABT fetch LM list error (0x%x) found!\n",
dev_name(dev), irq_value);
}
}
hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, irq_msk);
return IRQ_HANDLED;
}
static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)
{
struct hisi_sas_cq *cq = p;
@@ -2136,6 +2580,16 @@ static irqreturn_t sata_int_v2_hw(int irq_no, void *p)
goto end;
}
/* check ERR bit of Status Register */
if (fis->status & ATA_ERR) {
dev_warn(dev, "sata int: phy%d FIS status: 0x%x\n", phy_no,
fis->status);
disable_phy_v2_hw(hisi_hba, phy_no);
enable_phy_v2_hw(hisi_hba, phy_no);
res = IRQ_NONE;
goto end;
}
if (unlikely(phy_no == 8)) {
u32 port_state = hisi_sas_read32(hisi_hba, PORT_STATE);
@@ -2190,6 +2644,11 @@ static irq_handler_t phy_interrupts[HISI_SAS_PHY_INT_NR] = {
int_chnl_int_v2_hw,
};
static irq_handler_t fatal_interrupts[HISI_SAS_FATAL_INT_NR] = {
fatal_ecc_int_v2_hw,
fatal_axi_int_v2_hw
};
/**
* There is a limitation in the hip06 chipset that we need
* to map in all mbigen interrupts, even if they are not used.
@@ -2245,6 +2704,26 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
}
}
for (i = 0; i < HISI_SAS_FATAL_INT_NR; i++) {
int idx = i;
irq = irq_map[idx + 81];
if (!irq) {
dev_err(dev, "irq init: fail map fatal interrupt %d\n",
idx);
return -ENOENT;
}
rc = devm_request_irq(dev, irq, fatal_interrupts[i], 0,
DRV_NAME " fatal", hisi_hba);
if (rc) {
dev_err(dev,
"irq init: could not request fatal interrupt %d, rc=%d\n",
irq, rc);
return -ENOENT;
}
}
for (i = 0; i < hisi_hba->queue_count; i++) {
int idx = i + 96; /* First cq interrupt is irq96 */
@@ -2303,12 +2782,26 @@ static const struct hisi_sas_hw hisi_sas_v2_hw = {
.phy_enable = enable_phy_v2_hw,
.phy_disable = disable_phy_v2_hw,
.phy_hard_reset = phy_hard_reset_v2_hw,
.phy_set_linkrate = phy_set_linkrate_v2_hw,
.phy_get_max_linkrate = phy_get_max_linkrate_v2_hw,
.max_command_entries = HISI_SAS_COMMAND_ENTRIES_V2_HW,
.complete_hdr_size = sizeof(struct hisi_sas_complete_v2_hdr),
};
static int hisi_sas_v2_probe(struct platform_device *pdev)
{
/*
* Check if we should defer the probe before we probe the
* upper layer, as it's hard to defer later on.
*/
int ret = platform_get_irq(pdev, 0);
if (ret < 0) {
if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev, "cannot obtain irq\n");
return ret;
}
return hisi_sas_probe(pdev, &hisi_sas_v2_hw);
}
@@ -2319,6 +2812,7 @@ static int hisi_sas_v2_remove(struct platform_device *pdev)
static const struct of_device_id sas_v2_of_match[] = {
{ .compatible = "hisilicon,hip06-sas-v2",},
{ .compatible = "hisilicon,hip07-sas-v2",},
{},
};
MODULE_DEVICE_TABLE(of, sas_v2_of_match);


@@ -276,6 +276,9 @@ static int hpsa_find_cfg_addrs(struct pci_dev *pdev, void __iomem *vaddr,
static int hpsa_pci_find_memory_BAR(struct pci_dev *pdev,
unsigned long *memory_bar);
static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id);
static int wait_for_device_to_become_ready(struct ctlr_info *h,
unsigned char lunaddr[],
int reply_queue);
static int hpsa_wait_for_board_state(struct pci_dev *pdev, void __iomem *vaddr,
int wait_for_ready);
static inline void finish_cmd(struct CommandList *c);
@@ -700,9 +703,7 @@ static ssize_t lunid_show(struct device *dev,
}
memcpy(lunid, hdev->scsi3addr, sizeof(lunid));
spin_unlock_irqrestore(&h->lock, flags);
-return snprintf(buf, 20, "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-lunid[0], lunid[1], lunid[2], lunid[3],
-lunid[4], lunid[5], lunid[6], lunid[7]);
+return snprintf(buf, 20, "0x%8phN\n", lunid);
}
static ssize_t unique_id_show(struct device *dev,
@@ -864,6 +865,16 @@ static ssize_t path_info_show(struct device *dev,
return output_len;
}
static ssize_t host_show_ctlr_num(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ctlr_info *h;
struct Scsi_Host *shost = class_to_shost(dev);
h = shost_to_hba(shost);
return snprintf(buf, 20, "%d\n", h->ctlr);
}
static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL);
static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL);
static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL);
@@ -887,6 +898,8 @@ static DEVICE_ATTR(resettable, S_IRUGO,
host_show_resettable, NULL);
static DEVICE_ATTR(lockup_detected, S_IRUGO,
host_show_lockup_detected, NULL);
static DEVICE_ATTR(ctlr_num, S_IRUGO,
host_show_ctlr_num, NULL);
static struct device_attribute *hpsa_sdev_attrs[] = {
&dev_attr_raid_level,
@@ -907,6 +920,7 @@ static struct device_attribute *hpsa_shost_attrs[] = {
&dev_attr_hp_ssd_smart_path_status,
&dev_attr_raid_offload_debug,
&dev_attr_lockup_detected,
&dev_attr_ctlr_num,
NULL,
};
@@ -1001,7 +1015,7 @@ static void set_performant_mode(struct ctlr_info *h, struct CommandList *c,
{
if (likely(h->transMethod & CFGTBL_Trans_Performant)) {
c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1);
-if (unlikely(!h->msix_vector))
+if (unlikely(!h->msix_vectors))
return;
if (likely(reply_queue == DEFAULT_REPLY_QUEUE))
c->Header.ReplyQueue =
@@ -2541,7 +2555,7 @@ static void complete_scsi_command(struct CommandList *cp)
if ((unlikely(hpsa_is_pending_event(cp)))) {
if (cp->reset_pending)
-return hpsa_cmd_resolve_and_free(h, cp);
+return hpsa_cmd_free_and_done(h, cp, cmd);
if (cp->abort_pending)
return hpsa_cmd_abort_and_free(h, cp, cmd);
}
@@ -2824,14 +2838,8 @@ static void hpsa_print_cmd(struct ctlr_info *h, char *txt,
const u8 *cdb = c->Request.CDB;
const u8 *lun = c->Header.LUN.LunAddrBytes;
-dev_warn(&h->pdev->dev, "%s: LUN:%02x%02x%02x%02x%02x%02x%02x%02x"
-" CDB:%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-txt, lun[0], lun[1], lun[2], lun[3],
-lun[4], lun[5], lun[6], lun[7],
-cdb[0], cdb[1], cdb[2], cdb[3],
-cdb[4], cdb[5], cdb[6], cdb[7],
-cdb[8], cdb[9], cdb[10], cdb[11],
-cdb[12], cdb[13], cdb[14], cdb[15]);
+dev_warn(&h->pdev->dev, "%s: LUN:%8phN CDB:%16phN\n",
+txt, lun, cdb);
}
static void hpsa_scsi_interpret_error(struct ctlr_info *h,
@@ -3080,6 +3088,8 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
if (unlikely(rc))
atomic_set(&dev->reset_cmds_out, 0);
else
wait_for_device_to_become_ready(h, scsi3addr, 0);
mutex_unlock(&h->reset_mutex);
return rc;
@@ -3623,8 +3633,32 @@ out:
static inline int hpsa_scsi_do_report_phys_luns(struct ctlr_info *h,
struct ReportExtendedLUNdata *buf, int bufsize)
{
-return hpsa_scsi_do_report_luns(h, 0, buf, bufsize,
+int rc;
+struct ReportLUNdata *lbuf;
+
+rc = hpsa_scsi_do_report_luns(h, 0, buf, bufsize,
HPSA_REPORT_PHYS_EXTENDED);
if (!rc || !hpsa_allow_any)
return rc;
/* REPORT PHYS EXTENDED is not supported */
lbuf = kzalloc(sizeof(*lbuf), GFP_KERNEL);
if (!lbuf)
return -ENOMEM;
rc = hpsa_scsi_do_report_luns(h, 0, lbuf, sizeof(*lbuf), 0);
if (!rc) {
int i;
u32 nphys;
/* Copy ReportLUNdata header */
memcpy(buf, lbuf, 8);
nphys = be32_to_cpu(*((__be32 *)lbuf->LUNListLength)) / 8;
for (i = 0; i < nphys; i++)
memcpy(buf->LUN[i].lunid, lbuf->LUN[i], 8);
}
kfree(lbuf);
return rc;
}
static inline int hpsa_scsi_do_report_log_luns(struct ctlr_info *h,
@@ -5488,7 +5522,7 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
dev = cmd->device->hostdata;
if (!dev) {
-cmd->result = NOT_READY << 16; /* host byte */
+cmd->result = DID_NO_CONNECT << 16;
cmd->scsi_done(cmd);
return 0;
}
@@ -5569,6 +5603,14 @@ static void hpsa_scan_start(struct Scsi_Host *sh)
if (unlikely(lockup_detected(h)))
return hpsa_scan_complete(h);
/*
* Do the scan after a reset completion
*/
if (h->reset_in_progress) {
h->drv_req_rescan = 1;
return;
}
hpsa_update_scsi_devices(h);
hpsa_scan_complete(h);
@@ -5624,7 +5666,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
sh->sg_tablesize = h->maxsgentries;
sh->transportt = hpsa_sas_transport_template;
sh->hostdata[0] = (unsigned long) h;
-sh->irq = h->intr[h->intr_mode];
+sh->irq = pci_irq_vector(h->pdev, 0);
sh->unique_id = sh->irq;
h->scsi_host = sh;
@@ -5999,11 +6041,9 @@ static int hpsa_send_reset_as_abort_ioaccel2(struct ctlr_info *h,
if (h->raid_offload_debug > 0)
dev_info(&h->pdev->dev,
-"scsi %d:%d:%d:%d %s scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
+"scsi %d:%d:%d:%d %s scsi3addr 0x%8phN\n",
h->scsi_host->host_no, dev->bus, dev->target, dev->lun,
-"Reset as abort",
-scsi3addr[0], scsi3addr[1], scsi3addr[2], scsi3addr[3],
-scsi3addr[4], scsi3addr[5], scsi3addr[6], scsi3addr[7]);
+"Reset as abort", scsi3addr);
if (!dev->offload_enabled) {
dev_warn(&h->pdev->dev,
@@ -6020,32 +6060,28 @@ static int hpsa_send_reset_as_abort_ioaccel2(struct ctlr_info *h,
/* send the reset */
if (h->raid_offload_debug > 0)
dev_info(&h->pdev->dev,
-"Reset as abort: Resetting physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-psa[0], psa[1], psa[2], psa[3],
-psa[4], psa[5], psa[6], psa[7]);
+"Reset as abort: Resetting physical device at scsi3addr 0x%8phN\n",
+psa);
rc = hpsa_do_reset(h, dev, psa, HPSA_PHYS_TARGET_RESET, reply_queue);
if (rc != 0) {
dev_warn(&h->pdev->dev,
-"Reset as abort: Failed on physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-psa[0], psa[1], psa[2], psa[3],
-psa[4], psa[5], psa[6], psa[7]);
+"Reset as abort: Failed on physical device at scsi3addr 0x%8phN\n",
+psa);
return rc; /* failed to reset */
}
/* wait for device to recover */
if (wait_for_device_to_become_ready(h, psa, reply_queue) != 0) {
dev_warn(&h->pdev->dev,
-"Reset as abort: Failed: Device never recovered from reset: 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-psa[0], psa[1], psa[2], psa[3],
-psa[4], psa[5], psa[6], psa[7]);
+"Reset as abort: Failed: Device never recovered from reset: 0x%8phN\n",
+psa);
return -1; /* failed to recover */
}
/* device recovered */
dev_info(&h->pdev->dev,
-"Reset as abort: Device recovered from reset: scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
-psa[0], psa[1], psa[2], psa[3],
-psa[4], psa[5], psa[6], psa[7]);
+"Reset as abort: Device recovered from reset: scsi3addr 0x%8phN\n",
+psa);
return rc; /* success */
}
@@ -6663,8 +6699,7 @@ static int hpsa_big_passthru_ioctl(struct ctlr_info *h, void __user *argp)
return -EINVAL;
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
-ioc = (BIG_IOCTL_Command_struct *)
-kmalloc(sizeof(*ioc), GFP_KERNEL);
+ioc = kmalloc(sizeof(*ioc), GFP_KERNEL);
if (!ioc) {
status = -ENOMEM;
goto cleanup1;
@@ -7658,67 +7693,41 @@ static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
static void hpsa_disable_interrupt_mode(struct ctlr_info *h)
{
-if (h->msix_vector) {
-if (h->pdev->msix_enabled)
-pci_disable_msix(h->pdev);
-h->msix_vector = 0;
-} else if (h->msi_vector) {
-if (h->pdev->msi_enabled)
-pci_disable_msi(h->pdev);
-h->msi_vector = 0;
-}
+pci_free_irq_vectors(h->pdev);
+h->msix_vectors = 0;
}
/* If MSI/MSI-X is supported by the kernel we will try to enable it on /* If MSI/MSI-X is supported by the kernel we will try to enable it on
* controllers that are capable. If not, we use legacy INTx mode. * controllers that are capable. If not, we use legacy INTx mode.
*/ */
static void hpsa_interrupt_mode(struct ctlr_info *h) static int hpsa_interrupt_mode(struct ctlr_info *h)
{ {
#ifdef CONFIG_PCI_MSI unsigned int flags = PCI_IRQ_LEGACY;
int err, i; int ret;
struct msix_entry hpsa_msix_entries[MAX_REPLY_QUEUES];
for (i = 0; i < MAX_REPLY_QUEUES; i++) {
hpsa_msix_entries[i].vector = 0;
hpsa_msix_entries[i].entry = i;
}
/* Some boards advertise MSI but don't really support it */ /* Some boards advertise MSI but don't really support it */
if ((h->board_id == 0x40700E11) || (h->board_id == 0x40800E11) || switch (h->board_id) {
(h->board_id == 0x40820E11) || (h->board_id == 0x40830E11)) case 0x40700E11:
goto default_int_mode; case 0x40800E11:
if (pci_find_capability(h->pdev, PCI_CAP_ID_MSIX)) { case 0x40820E11:
dev_info(&h->pdev->dev, "MSI-X capable controller\n"); case 0x40830E11:
h->msix_vector = MAX_REPLY_QUEUES; break;
if (h->msix_vector > num_online_cpus()) default:
h->msix_vector = num_online_cpus(); ret = pci_alloc_irq_vectors(h->pdev, 1, MAX_REPLY_QUEUES,
err = pci_enable_msix_range(h->pdev, hpsa_msix_entries, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
1, h->msix_vector); if (ret > 0) {
if (err < 0) { h->msix_vectors = ret;
dev_warn(&h->pdev->dev, "MSI-X init failed %d\n", err); return 0;
h->msix_vector = 0;
goto single_msi_mode;
} else if (err < h->msix_vector) {
dev_warn(&h->pdev->dev, "only %d MSI-X vectors "
"available\n", err);
} }
h->msix_vector = err;
for (i = 0; i < h->msix_vector; i++) flags |= PCI_IRQ_MSI;
h->intr[i] = hpsa_msix_entries[i].vector; break;
return;
} }
single_msi_mode:
if (pci_find_capability(h->pdev, PCI_CAP_ID_MSI)) { ret = pci_alloc_irq_vectors(h->pdev, 1, 1, flags);
dev_info(&h->pdev->dev, "MSI capable controller\n"); if (ret < 0)
if (!pci_enable_msi(h->pdev)) return ret;
h->msi_vector = 1; return 0;
else
dev_warn(&h->pdev->dev, "MSI init failed\n");
}
default_int_mode:
#endif /* CONFIG_PCI_MSI */
/* if we get here we're going to use the default interrupt mode */
h->intr[h->intr_mode] = h->pdev->irq;
} }
static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id) static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id)
@@ -8074,7 +8083,9 @@ static int hpsa_pci_init(struct ctlr_info *h)
 	pci_set_master(h->pdev);

-	hpsa_interrupt_mode(h);
+	err = hpsa_interrupt_mode(h);
+	if (err)
+		goto clean1;
 	err = hpsa_pci_find_memory_BAR(h->pdev, &h->paddr);
 	if (err)
 		goto clean2;	/* intmode+region, pci */
@@ -8110,6 +8121,7 @@ clean3:	/* vaddr, intmode+region, pci */
 	h->vaddr = NULL;
 clean2:	/* intmode+region, pci */
 	hpsa_disable_interrupt_mode(h);
+clean1:
 	/*
 	 * call pci_disable_device before pci_release_regions per
 	 * Documentation/PCI/pci.txt
@@ -8243,34 +8255,20 @@ clean_up:
 	return -ENOMEM;
 }

-static void hpsa_irq_affinity_hints(struct ctlr_info *h)
-{
-	int i, cpu;
-
-	cpu = cpumask_first(cpu_online_mask);
-	for (i = 0; i < h->msix_vector; i++) {
-		irq_set_affinity_hint(h->intr[i], get_cpu_mask(cpu));
-		cpu = cpumask_next(cpu, cpu_online_mask);
-	}
-}
-
 /* clear affinity hints and free MSI-X, MSI, or legacy INTx vectors */
 static void hpsa_free_irqs(struct ctlr_info *h)
 {
 	int i;

-	if (!h->msix_vector || h->intr_mode != PERF_MODE_INT) {
+	if (!h->msix_vectors || h->intr_mode != PERF_MODE_INT) {
 		/* Single reply queue, only one irq to free */
-		i = h->intr_mode;
-		irq_set_affinity_hint(h->intr[i], NULL);
-		free_irq(h->intr[i], &h->q[i]);
-		h->q[i] = 0;
+		free_irq(pci_irq_vector(h->pdev, 0), &h->q[h->intr_mode]);
+		h->q[h->intr_mode] = 0;
 		return;
 	}

-	for (i = 0; i < h->msix_vector; i++) {
-		irq_set_affinity_hint(h->intr[i], NULL);
-		free_irq(h->intr[i], &h->q[i]);
+	for (i = 0; i < h->msix_vectors; i++) {
+		free_irq(pci_irq_vector(h->pdev, i), &h->q[i]);
 		h->q[i] = 0;
 	}
 	for (; i < MAX_REPLY_QUEUES; i++)
@@ -8291,11 +8289,11 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 	for (i = 0; i < MAX_REPLY_QUEUES; i++)
 		h->q[i] = (u8) i;

-	if (h->intr_mode == PERF_MODE_INT && h->msix_vector > 0) {
+	if (h->intr_mode == PERF_MODE_INT && h->msix_vectors > 0) {
 		/* If performant mode and MSI-X, use multiple reply queues */
-		for (i = 0; i < h->msix_vector; i++) {
+		for (i = 0; i < h->msix_vectors; i++) {
 			sprintf(h->intrname[i], "%s-msix%d", h->devname, i);
-			rc = request_irq(h->intr[i], msixhandler,
+			rc = request_irq(pci_irq_vector(h->pdev, i), msixhandler,
 					0, h->intrname[i],
 					&h->q[i]);
 			if (rc) {
@@ -8303,9 +8301,9 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 				dev_err(&h->pdev->dev,
 					"failed to get irq %d for %s\n",
-					h->intr[i], h->devname);
+					pci_irq_vector(h->pdev, i), h->devname);
 				for (j = 0; j < i; j++) {
-					free_irq(h->intr[j], &h->q[j]);
+					free_irq(pci_irq_vector(h->pdev, j), &h->q[j]);
 					h->q[j] = 0;
 				}
 				for (; j < MAX_REPLY_QUEUES; j++)
@@ -8313,33 +8311,27 @@ static int hpsa_request_irqs(struct ctlr_info *h,
 				return rc;
 			}
 		}
-		hpsa_irq_affinity_hints(h);
 	} else {
 		/* Use single reply pool */
-		if (h->msix_vector > 0 || h->msi_vector) {
-			if (h->msix_vector)
-				sprintf(h->intrname[h->intr_mode],
-					"%s-msix", h->devname);
-			else
-				sprintf(h->intrname[h->intr_mode],
-					"%s-msi", h->devname);
-			rc = request_irq(h->intr[h->intr_mode],
+		if (h->msix_vectors > 0 || h->pdev->msi_enabled) {
+			sprintf(h->intrname[0], "%s-msi%s", h->devname,
+				h->msix_vectors ? "x" : "");
+			rc = request_irq(pci_irq_vector(h->pdev, 0),
 				msixhandler, 0,
-				h->intrname[h->intr_mode],
+				h->intrname[0],
 				&h->q[h->intr_mode]);
 		} else {
 			sprintf(h->intrname[h->intr_mode],
 				"%s-intx", h->devname);
-			rc = request_irq(h->intr[h->intr_mode],
+			rc = request_irq(pci_irq_vector(h->pdev, 0),
 				intxhandler, IRQF_SHARED,
-				h->intrname[h->intr_mode],
+				h->intrname[0],
 				&h->q[h->intr_mode]);
 		}
-		irq_set_affinity_hint(h->intr[h->intr_mode], NULL);
 	}
 	if (rc) {
 		dev_err(&h->pdev->dev, "failed to get irq %d for %s\n",
-			h->intr[h->intr_mode], h->devname);
+			pci_irq_vector(h->pdev, 0), h->devname);
 		hpsa_free_irqs(h);
 		return -ENODEV;
 	}
@@ -8640,6 +8632,14 @@ static void hpsa_rescan_ctlr_worker(struct work_struct *work)
 	if (h->remove_in_progress)
 		return;

+	/*
+	 * Do the scan after the reset
+	 */
+
+	if (h->reset_in_progress) {
+		h->drv_req_rescan = 1;
+		return;
+	}
+
 	if (hpsa_ctlr_needs_rescan(h) || hpsa_offline_devices_ready(h)) {
 		scsi_host_get(h->scsi_host);
 		hpsa_ack_ctlr_events(h);
@@ -9525,7 +9525,7 @@ static int hpsa_put_ctlr_into_performant_mode(struct ctlr_info *h)
 		return rc;
 	}

-	h->nreply_queues = h->msix_vector > 0 ? h->msix_vector : 1;
+	h->nreply_queues = h->msix_vectors > 0 ? h->msix_vectors : 1;
 	hpsa_get_max_perf_mode_cmds(h);

 	/* Performant mode ring buffer and supporting data structures */
 	h->reply_queue_size = h->max_commands * sizeof(u64);


@@ -176,9 +176,7 @@ struct ctlr_info {
 #	define DOORBELL_INT	1
 #	define SIMPLE_MODE_INT	2
 #	define MEMQ_MODE_INT	3
-	unsigned int intr[MAX_REPLY_QUEUES];
-	unsigned int msix_vector;
-	unsigned int msi_vector;
+	unsigned int msix_vectors;
 	int intr_mode; /* either PERF_MODE_INT or SIMPLE_MODE_INT */
 	struct access_method access;
@@ -466,7 +464,7 @@ static unsigned long SA5_performant_completed(struct ctlr_info *h, u8 q)
 	unsigned long register_value = FIFO_EMPTY;

 	/* msi auto clears the interrupt pending bit. */
-	if (unlikely(!(h->msi_vector || h->msix_vector))) {
+	if (unlikely(!(h->pdev->msi_enabled || h->msix_vectors))) {
 		/* flush the controller write of the reply queue by reading
 		 * outbound doorbell status register.
 		 */


@@ -32,6 +32,7 @@
 #include <linux/of.h>
 #include <linux/pm.h>
 #include <linux/stringify.h>
+#include <linux/bsg-lib.h>
 #include <asm/firmware.h>
 #include <asm/irq.h>
 #include <asm/vio.h>
@@ -1701,14 +1702,14 @@ static void ibmvfc_bsg_timeout_done(struct ibmvfc_event *evt)
 /**
  * ibmvfc_bsg_timeout - Handle a BSG timeout
- * @job:	struct fc_bsg_job that timed out
+ * @job:	struct bsg_job that timed out
  *
  * Returns:
  *	0 on success / other on failure
  **/
-static int ibmvfc_bsg_timeout(struct fc_bsg_job *job)
+static int ibmvfc_bsg_timeout(struct bsg_job *job)
 {
-	struct ibmvfc_host *vhost = shost_priv(job->shost);
+	struct ibmvfc_host *vhost = shost_priv(fc_bsg_to_shost(job));
 	unsigned long port_id = (unsigned long)job->dd_data;
 	struct ibmvfc_event *evt;
 	struct ibmvfc_tmf *tmf;
@@ -1814,41 +1815,43 @@ unlock_out:
 /**
  * ibmvfc_bsg_request - Handle a BSG request
- * @job:	struct fc_bsg_job to be executed
+ * @job:	struct bsg_job to be executed
  *
  * Returns:
  *	0 on success / other on failure
  **/
-static int ibmvfc_bsg_request(struct fc_bsg_job *job)
+static int ibmvfc_bsg_request(struct bsg_job *job)
 {
-	struct ibmvfc_host *vhost = shost_priv(job->shost);
-	struct fc_rport *rport = job->rport;
+	struct ibmvfc_host *vhost = shost_priv(fc_bsg_to_shost(job));
+	struct fc_rport *rport = fc_bsg_to_rport(job);
 	struct ibmvfc_passthru_mad *mad;
 	struct ibmvfc_event *evt;
 	union ibmvfc_iu rsp_iu;
 	unsigned long flags, port_id = -1;
-	unsigned int code = job->request->msgcode;
+	struct fc_bsg_request *bsg_request = job->request;
+	struct fc_bsg_reply *bsg_reply = job->reply;
+	unsigned int code = bsg_request->msgcode;
 	int rc = 0, req_seg, rsp_seg, issue_login = 0;
 	u32 fc_flags, rsp_len;

 	ENTER;
-	job->reply->reply_payload_rcv_len = 0;
+	bsg_reply->reply_payload_rcv_len = 0;
 	if (rport)
 		port_id = rport->port_id;

 	switch (code) {
 	case FC_BSG_HST_ELS_NOLOGIN:
-		port_id = (job->request->rqst_data.h_els.port_id[0] << 16) |
-			(job->request->rqst_data.h_els.port_id[1] << 8) |
-			job->request->rqst_data.h_els.port_id[2];
+		port_id = (bsg_request->rqst_data.h_els.port_id[0] << 16) |
+			(bsg_request->rqst_data.h_els.port_id[1] << 8) |
+			bsg_request->rqst_data.h_els.port_id[2];
 	case FC_BSG_RPT_ELS:
 		fc_flags = IBMVFC_FC_ELS;
 		break;
 	case FC_BSG_HST_CT:
 		issue_login = 1;
-		port_id = (job->request->rqst_data.h_ct.port_id[0] << 16) |
-			(job->request->rqst_data.h_ct.port_id[1] << 8) |
-			job->request->rqst_data.h_ct.port_id[2];
+		port_id = (bsg_request->rqst_data.h_ct.port_id[0] << 16) |
+			(bsg_request->rqst_data.h_ct.port_id[1] << 8) |
+			bsg_request->rqst_data.h_ct.port_id[2];
 	case FC_BSG_RPT_CT:
 		fc_flags = IBMVFC_FC_CT_IU;
 		break;
@@ -1937,13 +1940,14 @@ static int ibmvfc_bsg_request(struct bsg_job *job)
 	if (rsp_iu.passthru.common.status)
 		rc = -EIO;
 	else
-		job->reply->reply_payload_rcv_len = rsp_len;
+		bsg_reply->reply_payload_rcv_len = rsp_len;

 	spin_lock_irqsave(vhost->host->host_lock, flags);
 	ibmvfc_free_event(evt);
 	spin_unlock_irqrestore(vhost->host->host_lock, flags);
-	job->reply->result = rc;
-	job->job_done(job);
+	bsg_reply->result = rc;
+	bsg_job_done(job, bsg_reply->result,
+		       bsg_reply->reply_payload_rcv_len);
 	rc = 0;
 out:
 	dma_unmap_sg(vhost->dev, job->request_payload.sg_list,

File diff suppressed because it is too large


@@ -204,8 +204,6 @@ struct scsi_info {
 	struct list_head waiting_rsp;
 #define NO_QUEUE                    0x00
 #define WAIT_ENABLED                0X01
-	/* driver has received an initialize command */
-#define PART_UP_WAIT_ENAB           0x02
 #define WAIT_CONNECTION             0x04
 	/* have established a connection */
 #define CONNECTED                   0x08
@@ -259,6 +257,8 @@ struct scsi_info {
 #define SCHEDULE_DISCONNECT           0x00400
 	/* disconnect handler is scheduled */
 #define DISCONNECT_SCHEDULED          0x00800
+	/* remove function is sleeping */
+#define CFG_SLEEPING                  0x01000
 	u32 flags;
 	/* adapter lock */
 	spinlock_t intr_lock;
@@ -287,6 +287,7 @@ struct scsi_info {
 	struct workqueue_struct *work_q;
 	struct completion wait_idle;
+	struct completion unconfig;
 	struct device dev;
 	struct vio_dev *dma_dev;
 	struct srp_target target;


@@ -186,16 +186,16 @@ static const struct ipr_chip_cfg_t ipr_chip_cfg[] = {
 };

 static const struct ipr_chip_t ipr_chip[] = {
-	{ PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
-	{ PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_OBSIDIAN, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN_E, IPR_USE_MSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
-	{ PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_SCAMP, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] },
-	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_RATTLESNAKE, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] }
+	{ PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
+	{ PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_OBSIDIAN, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN_E, true, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[0] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
+	{ PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_SCAMP, false, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2, true, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE, true, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] },
+	{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_RATTLESNAKE, true, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] }
 };

 static int ipr_max_bus_speeds[] = {
@@ -9439,23 +9439,11 @@ static void ipr_free_mem(struct ipr_ioa_cfg *ioa_cfg)
 static void ipr_free_irqs(struct ipr_ioa_cfg *ioa_cfg)
 {
 	struct pci_dev *pdev = ioa_cfg->pdev;
+	int i;

-	if (ioa_cfg->intr_flag == IPR_USE_MSI ||
-	    ioa_cfg->intr_flag == IPR_USE_MSIX) {
-		int i;
-		for (i = 0; i < ioa_cfg->nvectors; i++)
-			free_irq(ioa_cfg->vectors_info[i].vec,
-				 &ioa_cfg->hrrq[i]);
-	} else
-		free_irq(pdev->irq, &ioa_cfg->hrrq[0]);
-
-	if (ioa_cfg->intr_flag == IPR_USE_MSI) {
-		pci_disable_msi(pdev);
-		ioa_cfg->intr_flag &= ~IPR_USE_MSI;
-	} else if (ioa_cfg->intr_flag == IPR_USE_MSIX) {
-		pci_disable_msix(pdev);
-		ioa_cfg->intr_flag &= ~IPR_USE_MSIX;
-	}
+	for (i = 0; i < ioa_cfg->nvectors; i++)
+		free_irq(pci_irq_vector(pdev, i), &ioa_cfg->hrrq[i]);
+	pci_free_irq_vectors(pdev);
 }

 /**
@@ -9883,45 +9871,6 @@ static void ipr_wait_for_pci_err_recovery(struct ipr_ioa_cfg *ioa_cfg)
 	}
 }

-static int ipr_enable_msix(struct ipr_ioa_cfg *ioa_cfg)
-{
-	struct msix_entry entries[IPR_MAX_MSIX_VECTORS];
-	int i, vectors;
-
-	for (i = 0; i < ARRAY_SIZE(entries); ++i)
-		entries[i].entry = i;
-
-	vectors = pci_enable_msix_range(ioa_cfg->pdev,
-					entries, 1, ipr_number_of_msix);
-	if (vectors < 0) {
-		ipr_wait_for_pci_err_recovery(ioa_cfg);
-		return vectors;
-	}
-
-	for (i = 0; i < vectors; i++)
-		ioa_cfg->vectors_info[i].vec = entries[i].vector;
-	ioa_cfg->nvectors = vectors;
-
-	return 0;
-}
-
-static int ipr_enable_msi(struct ipr_ioa_cfg *ioa_cfg)
-{
-	int i, vectors;
-
-	vectors = pci_enable_msi_range(ioa_cfg->pdev, 1, ipr_number_of_msix);
-	if (vectors < 0) {
-		ipr_wait_for_pci_err_recovery(ioa_cfg);
-		return vectors;
-	}
-
-	for (i = 0; i < vectors; i++)
-		ioa_cfg->vectors_info[i].vec = ioa_cfg->pdev->irq + i;
-	ioa_cfg->nvectors = vectors;
-
-	return 0;
-}
-
 static void name_msi_vectors(struct ipr_ioa_cfg *ioa_cfg)
 {
 	int vec_idx, n = sizeof(ioa_cfg->vectors_info[0].desc) - 1;
@@ -9934,19 +9883,20 @@ static void name_msi_vectors(struct ipr_ioa_cfg *ioa_cfg)
 	}
 }

-static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg)
+static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg,
+						struct pci_dev *pdev)
 {
 	int i, rc;

 	for (i = 1; i < ioa_cfg->nvectors; i++) {
-		rc = request_irq(ioa_cfg->vectors_info[i].vec,
+		rc = request_irq(pci_irq_vector(pdev, i),
 			ipr_isr_mhrrq,
 			0,
 			ioa_cfg->vectors_info[i].desc,
 			&ioa_cfg->hrrq[i]);
 		if (rc) {
 			while (--i >= 0)
-				free_irq(ioa_cfg->vectors_info[i].vec,
+				free_irq(pci_irq_vector(pdev, i),
 					&ioa_cfg->hrrq[i]);
 			return rc;
 		}
@@ -9984,8 +9934,7 @@ static irqreturn_t ipr_test_intr(int irq, void *devp)
 * ipr_test_msi - Test for Message Signaled Interrupt (MSI) support.
 * @pdev:		PCI device struct
 *
- * Description: The return value from pci_enable_msi_range() can not always be
- * trusted.  This routine sets up and initiates a test interrupt to determine
+ * Description: This routine sets up and initiates a test interrupt to determine
 * if the interrupt is received via the ipr_test_intr() service routine.
 * If the tests fails, the driver will fall back to LSI.
 *
@@ -9997,6 +9946,7 @@ static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
 	int rc;
 	volatile u32 int_reg;
 	unsigned long lock_flags = 0;
+	int irq = pci_irq_vector(pdev, 0);

 	ENTER;
@@ -10008,15 +9958,12 @@ static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
 	int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);

-	if (ioa_cfg->intr_flag == IPR_USE_MSIX)
-		rc = request_irq(ioa_cfg->vectors_info[0].vec, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
-	else
-		rc = request_irq(pdev->irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
+	rc = request_irq(irq, ipr_test_intr, 0, IPR_NAME, ioa_cfg);
 	if (rc) {
-		dev_err(&pdev->dev, "Can not assign irq %d\n", pdev->irq);
+		dev_err(&pdev->dev, "Can not assign irq %d\n", irq);
 		return rc;
 	} else if (ipr_debug)
-		dev_info(&pdev->dev, "IRQ assigned: %d\n", pdev->irq);
+		dev_info(&pdev->dev, "IRQ assigned: %d\n", irq);

 	writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE, ioa_cfg->regs.sense_interrupt_reg32);
 	int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
@@ -10033,10 +9980,7 @@ static int ipr_test_msi(struct ipr_ioa_cfg *ioa_cfg, struct pci_dev *pdev)
 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);

-	if (ioa_cfg->intr_flag == IPR_USE_MSIX)
-		free_irq(ioa_cfg->vectors_info[0].vec, ioa_cfg);
-	else
-		free_irq(pdev->irq, ioa_cfg);
+	free_irq(irq, ioa_cfg);

 	LEAVE;
@@ -10060,6 +10004,7 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
 	int rc = PCIBIOS_SUCCESSFUL;
 	volatile u32 mask, uproc, interrupts;
 	unsigned long lock_flags, driver_lock_flags;
+	unsigned int irq_flag;

 	ENTER;
@@ -10175,18 +10120,18 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
 		ipr_number_of_msix = IPR_MAX_MSIX_VECTORS;
 	}

-	if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
-			ipr_enable_msix(ioa_cfg) == 0)
-		ioa_cfg->intr_flag = IPR_USE_MSIX;
-	else if (ioa_cfg->ipr_chip->intr_type == IPR_USE_MSI &&
-			ipr_enable_msi(ioa_cfg) == 0)
-		ioa_cfg->intr_flag = IPR_USE_MSI;
-	else {
-		ioa_cfg->intr_flag = IPR_USE_LSI;
-		ioa_cfg->clear_isr = 1;
-		ioa_cfg->nvectors = 1;
-		dev_info(&pdev->dev, "Cannot enable MSI.\n");
+	irq_flag = PCI_IRQ_LEGACY;
+	if (ioa_cfg->ipr_chip->has_msi)
+		irq_flag |= PCI_IRQ_MSI | PCI_IRQ_MSIX;
+	rc = pci_alloc_irq_vectors(pdev, 1, ipr_number_of_msix, irq_flag);
+	if (rc < 0) {
+		ipr_wait_for_pci_err_recovery(ioa_cfg);
+		goto cleanup_nomem;
 	}
+	ioa_cfg->nvectors = rc;
+
+	if (!pdev->msi_enabled && !pdev->msix_enabled)
+		ioa_cfg->clear_isr = 1;

 	pci_set_master(pdev);
@@ -10199,33 +10144,23 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
 		}
 	}

-	if (ioa_cfg->intr_flag == IPR_USE_MSI ||
-	    ioa_cfg->intr_flag == IPR_USE_MSIX) {
+	if (pdev->msi_enabled || pdev->msix_enabled) {
 		rc = ipr_test_msi(ioa_cfg, pdev);
-		if (rc == -EOPNOTSUPP) {
+		switch (rc) {
+		case 0:
+			dev_info(&pdev->dev,
+				"Request for %d MSI%ss succeeded.", ioa_cfg->nvectors,
+				pdev->msix_enabled ? "-X" : "");
+			break;
+		case -EOPNOTSUPP:
 			ipr_wait_for_pci_err_recovery(ioa_cfg);
-			if (ioa_cfg->intr_flag == IPR_USE_MSI) {
-				ioa_cfg->intr_flag &= ~IPR_USE_MSI;
-				pci_disable_msi(pdev);
-			} else if (ioa_cfg->intr_flag == IPR_USE_MSIX) {
-				ioa_cfg->intr_flag &= ~IPR_USE_MSIX;
-				pci_disable_msix(pdev);
-			}
+			pci_free_irq_vectors(pdev);

-			ioa_cfg->intr_flag = IPR_USE_LSI;
 			ioa_cfg->nvectors = 1;
-		}
-		else if (rc)
+			ioa_cfg->clear_isr = 1;
+			break;
+		default:
 			goto out_msi_disable;
-		else {
-			if (ioa_cfg->intr_flag == IPR_USE_MSI)
-				dev_info(&pdev->dev,
-					"Request for %d MSIs succeeded with starting IRQ: %d\n",
-					ioa_cfg->nvectors, pdev->irq);
-			else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
-				dev_info(&pdev->dev,
-					"Request for %d MSIXs succeeded.",
-					ioa_cfg->nvectors);
 		}
 	}
@@ -10273,15 +10208,13 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
 	ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);

-	if (ioa_cfg->intr_flag == IPR_USE_MSI
-			|| ioa_cfg->intr_flag == IPR_USE_MSIX) {
+	if (pdev->msi_enabled || pdev->msix_enabled) {
 		name_msi_vectors(ioa_cfg);
-		rc = request_irq(ioa_cfg->vectors_info[0].vec, ipr_isr,
-			0,
+		rc = request_irq(pci_irq_vector(pdev, 0), ipr_isr, 0,
 			ioa_cfg->vectors_info[0].desc,
 			&ioa_cfg->hrrq[0]);
 		if (!rc)
-			rc = ipr_request_other_msi_irqs(ioa_cfg);
+			rc = ipr_request_other_msi_irqs(ioa_cfg, pdev);
 	} else {
 		rc = request_irq(pdev->irq, ipr_isr,
 			IRQF_SHARED,
@@ -10323,10 +10256,7 @@ cleanup_nolog:
 	ipr_free_mem(ioa_cfg);
 out_msi_disable:
 	ipr_wait_for_pci_err_recovery(ioa_cfg);
-	if (ioa_cfg->intr_flag == IPR_USE_MSI)
-		pci_disable_msi(pdev);
-	else if (ioa_cfg->intr_flag == IPR_USE_MSIX)
-		pci_disable_msix(pdev);
+	pci_free_irq_vectors(pdev);
 cleanup_nomem:
 	iounmap(ipr_regs);
 out_disable:


@@ -1413,10 +1413,7 @@ struct ipr_chip_cfg_t {
 struct ipr_chip_t {
 	u16 vendor;
 	u16 device;
-	u16 intr_type;
-#define IPR_USE_LSI			0x00
-#define IPR_USE_MSI			0x01
-#define IPR_USE_MSIX			0x02
+	bool has_msi;
 	u16 sis_type;
 #define IPR_SIS32			0x00
 #define IPR_SIS64			0x01
@@ -1593,11 +1590,9 @@ struct ipr_ioa_cfg {
 	struct ipr_cmnd **ipr_cmnd_list;
 	dma_addr_t *ipr_cmnd_list_dma;

-	u16 intr_flag;
 	unsigned int nvectors;

 	struct {
-		unsigned short vec;
 		char desc[22];
 	} vectors_info[IPR_MAX_MSIX_VECTORS];


@@ -2241,9 +2241,6 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
 	uint8_t minor;
 	uint8_t subminor;
 	uint8_t *buffer;
-	char hexDigits[] =
-	    { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C',
-		'D', 'E', 'F' };

 	METHOD_TRACE("ips_get_bios_version", 1);
@@ -2374,13 +2371,13 @@ ips_get_bios_version(ips_ha_t * ha, int intr)
 		}
 	}

-	ha->bios_version[0] = hexDigits[(major & 0xF0) >> 4];
+	ha->bios_version[0] = hex_asc_upper_hi(major);
 	ha->bios_version[1] = '.';
-	ha->bios_version[2] = hexDigits[major & 0x0F];
-	ha->bios_version[3] = hexDigits[subminor];
+	ha->bios_version[2] = hex_asc_upper_lo(major);
+	ha->bios_version[3] = hex_asc_upper_lo(subminor);
 	ha->bios_version[4] = '.';
-	ha->bios_version[5] = hexDigits[(minor & 0xF0) >> 4];
-	ha->bios_version[6] = hexDigits[minor & 0x0F];
+	ha->bios_version[5] = hex_asc_upper_hi(minor);
+	ha->bios_version[6] = hex_asc_upper_lo(minor);
 	ha->bios_version[7] = 0;
 }


@@ -295,7 +295,6 @@ enum sci_controller_states {
 #define SCI_MAX_MSIX_INT (SCI_NUM_MSI_X_INT*SCI_MAX_CONTROLLERS)

 struct isci_pci_info {
-	struct msix_entry msix_entries[SCI_MAX_MSIX_INT];
 	struct isci_host *hosts[SCI_MAX_CONTROLLERS];
 	struct isci_orom *orom;
 };


@@ -350,16 +350,12 @@ static int isci_setup_interrupts(struct pci_dev *pdev)
 	 */
 	num_msix = num_controllers(pdev) * SCI_NUM_MSI_X_INT;

-	for (i = 0; i < num_msix; i++)
-		pci_info->msix_entries[i].entry = i;
-
-	err = pci_enable_msix_exact(pdev, pci_info->msix_entries, num_msix);
-	if (err)
+	err = pci_alloc_irq_vectors(pdev, num_msix, num_msix, PCI_IRQ_MSIX);
+	if (err < 0)
 		goto intx;

 	for (i = 0; i < num_msix; i++) {
 		int id = i / SCI_NUM_MSI_X_INT;
-		struct msix_entry *msix = &pci_info->msix_entries[i];
 		irq_handler_t isr;

 		ihost = pci_info->hosts[id];
@@ -369,8 +365,8 @@ static int isci_setup_interrupts(struct pci_dev *pdev)
 		else
 			isr = isci_msix_isr;

-		err = devm_request_irq(&pdev->dev, msix->vector, isr, 0,
-				       DRV_NAME"-msix", ihost);
+		err = devm_request_irq(&pdev->dev, pci_irq_vector(pdev, i),
+				       isr, 0, DRV_NAME"-msix", ihost);
 		if (!err)
 			continue;
@@ -378,18 +374,19 @@ static int isci_setup_interrupts(struct pci_dev *pdev)
 		while (i--) {
 			id = i / SCI_NUM_MSI_X_INT;
 			ihost = pci_info->hosts[id];
-			msix = &pci_info->msix_entries[i];
-			devm_free_irq(&pdev->dev, msix->vector, ihost);
+			devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i),
+				      ihost);
 		}
-		pci_disable_msix(pdev);
+		pci_free_irq_vectors(pdev);
 		goto intx;
 	}
 	return 0;

 intx:
 	for_each_isci_host(i, ihost, pdev) {
-		err = devm_request_irq(&pdev->dev, pdev->irq, isci_intx_isr,
-				       IRQF_SHARED, DRV_NAME"-intx", ihost);
+		err = devm_request_irq(&pdev->dev, pci_irq_vector(pdev, 0),
+				       isci_intx_isr, IRQF_SHARED, DRV_NAME"-intx",
+				       ihost);
 		if (err)
 			break;
 	}


@@ -54,6 +54,7 @@ struct isci_orom *isci_request_oprom(struct pci_dev *pdev)
 	len = pci_biosrom_size(pdev);
 	rom = devm_kzalloc(&pdev->dev, sizeof(*rom), GFP_KERNEL);
 	if (!rom) {
+		pci_unmap_biosrom(oprom);
 		dev_warn(&pdev->dev,
 			 "Unable to allocate memory for orom\n");
 		return NULL;


@@ -66,6 +66,9 @@ const char *rnc_state_name(enum scis_sds_remote_node_context_states state)
 {
 	static const char * const strings[] = RNC_STATES;

+	if (state >= ARRAY_SIZE(strings))
+		return "UNKNOWN";
+
 	return strings[state];
 }
 #undef C
@@ -454,7 +457,7 @@ enum sci_status sci_remote_node_context_event_handler(struct sci_remote_node_con
 		 * the device since it's being invalidated anyway */
 		dev_warn(scirdev_to_dev(rnc_to_dev(sci_rnc)),
 			"%s: SCIC Remote Node Context 0x%p was "
-			"suspeneded by hardware while being "
+			"suspended by hardware while being "
 			"invalidated.\n", __func__, sci_rnc);
 		break;
 	default:
@@ -473,7 +476,7 @@ enum sci_status sci_remote_node_context_event_handler(struct sci_remote_node_con
 		 * the device since it's being resumed anyway */
 		dev_warn(scirdev_to_dev(rnc_to_dev(sci_rnc)),
 			"%s: SCIC Remote Node Context 0x%p was "
-			"suspeneded by hardware while being resumed.\n",
+			"suspended by hardware while being resumed.\n",
 			__func__, sci_rnc);
 		break;
 	default:


@@ -2473,7 +2473,7 @@ static void isci_request_process_response_iu(
 		"%s: resp_iu = %p "
 		"resp_iu->status = 0x%x,\nresp_iu->datapres = %d "
 		"resp_iu->response_data_len = %x, "
-		"resp_iu->sense_data_len = %x\nrepsonse data: ",
+		"resp_iu->sense_data_len = %x\nresponse data: ",
 		__func__,
 		resp_iu,
 		resp_iu->status,


@@ -68,10 +68,14 @@ static void fc_disc_stop_rports(struct fc_disc *disc)

 	lport = fc_disc_lport(disc);

-	mutex_lock(&disc->disc_mutex);
-	list_for_each_entry_rcu(rdata, &disc->rports, peers)
-		lport->tt.rport_logoff(rdata);
-	mutex_unlock(&disc->disc_mutex);
+	rcu_read_lock();
+	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
+		if (kref_get_unless_zero(&rdata->kref)) {
+			fc_rport_logoff(rdata);
+			kref_put(&rdata->kref, fc_rport_destroy);
+		}
+	}
+	rcu_read_unlock();
 }

 /**
@@ -150,7 +154,7 @@ static void fc_disc_recv_rscn_req(struct fc_disc *disc, struct fc_frame *fp)
 			break;
 		}
 	}
-	lport->tt.seq_els_rsp_send(fp, ELS_LS_ACC, NULL);
+	fc_seq_els_rsp_send(fp, ELS_LS_ACC, NULL);

 	/*
 	 * If not doing a complete rediscovery, do GPN_ID on
@@ -178,7 +182,7 @@ reject:
 	FC_DISC_DBG(disc, "Received a bad RSCN frame\n");
 	rjt_data.reason = ELS_RJT_LOGIC;
 	rjt_data.explan = ELS_EXPL_NONE;
-	lport->tt.seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
+	fc_seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
 	fc_frame_free(fp);
 }
@@ -289,15 +293,19 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
 	 * Skip ports which were never discovered.  These are the dNS port
 	 * and ports which were created by PLOGI.
 	 */
+	rcu_read_lock();
 	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
-		if (!rdata->disc_id)
+		if (!kref_get_unless_zero(&rdata->kref))
 			continue;
-		if (rdata->disc_id == disc->disc_id)
-			lport->tt.rport_login(rdata);
-		else
-			lport->tt.rport_logoff(rdata);
+		if (rdata->disc_id) {
+			if (rdata->disc_id == disc->disc_id)
+				fc_rport_login(rdata);
+			else
+				fc_rport_logoff(rdata);
+		}
+		kref_put(&rdata->kref, fc_rport_destroy);
 	}
+	rcu_read_unlock();
 	mutex_unlock(&disc->disc_mutex);
 	disc->disc_callback(lport, event);
 	mutex_lock(&disc->disc_mutex);
@@ -446,7 +454,7 @@ static int fc_disc_gpn_ft_parse(struct fc_disc *disc, void *buf, size_t len)
 		if (ids.port_id != lport->port_id &&
 		    ids.port_name != lport->wwpn) {
-			rdata = lport->tt.rport_create(lport, ids.port_id);
+			rdata = fc_rport_create(lport, ids.port_id);
 			if (rdata) {
 				rdata->ids.port_name = ids.port_name;
 				rdata->disc_id = disc->disc_id;
@@ -592,7 +600,6 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
 	lport = rdata->local_port;
 	disc = &lport->disc;

-	mutex_lock(&disc->disc_mutex);
 	if (PTR_ERR(fp) == -FC_EX_CLOSED)
 		goto out;
 	if (IS_ERR(fp))
@@ -607,37 +614,41 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
 			goto redisc;
 		pn = (struct fc_ns_gid_pn *)(cp + 1);
 		port_name = get_unaligned_be64(&pn->fn_wwpn);
+		mutex_lock(&rdata->rp_mutex);
 		if (rdata->ids.port_name == -1)
 			rdata->ids.port_name = port_name;
 		else if (rdata->ids.port_name != port_name) {
 			FC_DISC_DBG(disc, "GPN_ID accepted. WWPN changed. "
 				    "Port-id %6.6x wwpn %16.16llx\n",
 				    rdata->ids.port_id, port_name);
-			lport->tt.rport_logoff(rdata);
-
-			new_rdata = lport->tt.rport_create(lport,
-							   rdata->ids.port_id);
+			mutex_unlock(&rdata->rp_mutex);
+			fc_rport_logoff(rdata);
+			mutex_lock(&lport->disc.disc_mutex);
+			new_rdata = fc_rport_create(lport, rdata->ids.port_id);
+			mutex_unlock(&lport->disc.disc_mutex);
 			if (new_rdata) {
 				new_rdata->disc_id = disc->disc_id;
-				lport->tt.rport_login(new_rdata);
+				fc_rport_login(new_rdata);
 			}
 			goto out;
 		}
 		rdata->disc_id = disc->disc_id;
-		lport->tt.rport_login(rdata);
+		mutex_unlock(&rdata->rp_mutex);
+		fc_rport_login(rdata);
 	} else if (ntohs(cp->ct_cmd) == FC_FS_RJT) {
 		FC_DISC_DBG(disc, "GPN_ID rejected reason %x exp %x\n",
 			    cp->ct_reason, cp->ct_explan);
-		lport->tt.rport_logoff(rdata);
+		fc_rport_logoff(rdata);
 	} else {
 		FC_DISC_DBG(disc, "GPN_ID unexpected response code %x\n",
 			    ntohs(cp->ct_cmd));
 redisc:
+		mutex_lock(&disc->disc_mutex);
 		fc_disc_restart(disc);
+		mutex_unlock(&disc->disc_mutex);
 	}
 out:
-	mutex_unlock(&disc->disc_mutex);
-	kref_put(&rdata->kref, lport->tt.rport_destroy);
+	kref_put(&rdata->kref, fc_rport_destroy);
 }
 /**
@@ -678,7 +689,7 @@ static int fc_disc_single(struct fc_lport *lport, struct fc_disc_port *dp)
 {
 	struct fc_rport_priv *rdata;

-	rdata = lport->tt.rport_create(lport, dp->port_id);
+	rdata = fc_rport_create(lport, dp->port_id);
 	if (!rdata)
 		return -ENOMEM;
 	rdata->disc_id = 0;
@@ -708,7 +719,7 @@ static void fc_disc_stop(struct fc_lport *lport)
 static void fc_disc_stop_final(struct fc_lport *lport)
 {
 	fc_disc_stop(lport);
-	lport->tt.rport_flush_queue();
+	fc_rport_flush_queue();
 }

 /**


@@ -67,7 +67,7 @@ struct fc_seq *fc_elsct_send(struct fc_lport *lport, u32 did,
 	fc_fill_fc_hdr(fp, r_ctl, did, lport->port_id, fh_type,
		       FC_FCTL_REQ, 0);

-	return lport->tt.exch_seq_send(lport, fp, resp, NULL, arg, timer_msec);
+	return fc_exch_seq_send(lport, fp, resp, NULL, arg, timer_msec);
 }
 EXPORT_SYMBOL(fc_elsct_send);


@@ -94,6 +94,7 @@ struct fc_exch_pool {
 struct fc_exch_mgr {
 	struct fc_exch_pool __percpu *pool;
 	mempool_t	*ep_pool;
+	struct fc_lport	*lport;
 	enum fc_class	class;
 	struct kref	kref;
 	u16		min_xid;
@@ -362,8 +363,10 @@ static inline void fc_exch_timer_set_locked(struct fc_exch *ep,

 	fc_exch_hold(ep);		/* hold for timer */
 	if (!queue_delayed_work(fc_exch_workqueue, &ep->timeout_work,
-				msecs_to_jiffies(timer_msec)))
+				msecs_to_jiffies(timer_msec))) {
+		FC_EXCH_DBG(ep, "Exchange already queued\n");
 		fc_exch_release(ep);
+	}
 }

 /**
@@ -406,6 +409,8 @@ static int fc_exch_done_locked(struct fc_exch *ep)
 	return rc;
 }

+static struct fc_exch fc_quarantine_exch;
+
 /**
  * fc_exch_ptr_get() - Return an exchange from an exchange pool
  * @pool: Exchange Pool to get an exchange from
@@ -450,14 +455,17 @@ static void fc_exch_delete(struct fc_exch *ep)

 	/* update cache of free slot */
 	index = (ep->xid - ep->em->min_xid) >> fc_cpu_order;
-	if (pool->left == FC_XID_UNKNOWN)
-		pool->left = index;
-	else if (pool->right == FC_XID_UNKNOWN)
-		pool->right = index;
-	else
-		pool->next_index = index;
-	fc_exch_ptr_set(pool, index, NULL);
+	if (!(ep->state & FC_EX_QUARANTINE)) {
+		if (pool->left == FC_XID_UNKNOWN)
+			pool->left = index;
+		else if (pool->right == FC_XID_UNKNOWN)
+			pool->right = index;
+		else
+			pool->next_index = index;
+		fc_exch_ptr_set(pool, index, NULL);
+	} else {
+		fc_exch_ptr_set(pool, index, &fc_quarantine_exch);
+	}
 	list_del(&ep->ex_list);
 	spin_unlock_bh(&pool->lock);
 	fc_exch_release(ep);	/* drop hold for exch in mp */
@@ -525,8 +533,7 @@ out:
  * Note: The frame will be freed either by a direct call to fc_frame_free(fp)
  * or indirectly by calling libfc_function_template.frame_send().
  */
-static int fc_seq_send(struct fc_lport *lport, struct fc_seq *sp,
-		       struct fc_frame *fp)
+int fc_seq_send(struct fc_lport *lport, struct fc_seq *sp, struct fc_frame *fp)
 {
 	struct fc_exch *ep;
 	int error;
@@ -536,6 +543,7 @@ static int fc_seq_send(struct fc_lport *lport, struct fc_seq *sp,
 	spin_unlock_bh(&ep->ex_lock);
 	return error;
 }
+EXPORT_SYMBOL(fc_seq_send);

 /**
  * fc_seq_alloc() - Allocate a sequence for a given exchange
@@ -577,7 +585,7 @@ static struct fc_seq *fc_seq_start_next_locked(struct fc_seq *sp)
  * for a given sequence/exchange pair
  * @sp: The sequence/exchange to get a new exchange for
  */
-static struct fc_seq *fc_seq_start_next(struct fc_seq *sp)
+struct fc_seq *fc_seq_start_next(struct fc_seq *sp)
 {
 	struct fc_exch *ep = fc_seq_exch(sp);
@@ -587,15 +595,15 @@ static struct fc_seq *fc_seq_start_next(struct fc_seq *sp)
 	return sp;
 }
+EXPORT_SYMBOL(fc_seq_start_next);

 /*
  * Set the response handler for the exchange associated with a sequence.
  *
  * Note: May sleep if invoked from outside a response handler.
  */
-static void fc_seq_set_resp(struct fc_seq *sp,
-			    void (*resp)(struct fc_seq *, struct fc_frame *,
-					 void *),
-			    void *arg)
+void fc_seq_set_resp(struct fc_seq *sp,
+		     void (*resp)(struct fc_seq *, struct fc_frame *, void *),
+		     void *arg)
 {
 	struct fc_exch *ep = fc_seq_exch(sp);
@@ -615,12 +623,20 @@ static void fc_seq_set_resp(struct fc_seq *sp,
 	ep->arg = arg;
 	spin_unlock_bh(&ep->ex_lock);
 }
+EXPORT_SYMBOL(fc_seq_set_resp);

 /**
  * fc_exch_abort_locked() - Abort an exchange
  * @ep:	The exchange to be aborted
  * @timer_msec: The period of time to wait before aborting
  *
+ * Abort an exchange and sequence. Generally called because of a
+ * exchange timeout or an abort from the upper layer.
+ *
+ * A timer_msec can be specified for abort timeout, if non-zero
+ * timer_msec value is specified then exchange resp handler
+ * will be called with timeout error if no response to abort.
+ *
  * Locking notes:  Called with exch lock held
  *
  * Return value: 0 on success else error code
@@ -632,9 +648,13 @@ static int fc_exch_abort_locked(struct fc_exch *ep,
 	struct fc_frame *fp;
 	int error;

+	FC_EXCH_DBG(ep, "exch: abort, time %d msecs\n", timer_msec);
 	if (ep->esb_stat & (ESB_ST_COMPLETE | ESB_ST_ABNORMAL) ||
-	    ep->state & (FC_EX_DONE | FC_EX_RST_CLEANUP))
+	    ep->state & (FC_EX_DONE | FC_EX_RST_CLEANUP)) {
+		FC_EXCH_DBG(ep, "exch: already completed esb %x state %x\n",
+			    ep->esb_stat, ep->state);
 		return -ENXIO;
+	}

 	/*
 	 * Send the abort on a new sequence if possible.
@@ -680,8 +700,7 @@ static int fc_exch_abort_locked(struct fc_exch *ep,
  *
  * Return value: 0 on success else error code
  */
-static int fc_seq_exch_abort(const struct fc_seq *req_sp,
-			     unsigned int timer_msec)
+int fc_seq_exch_abort(const struct fc_seq *req_sp, unsigned int timer_msec)
 {
 	struct fc_exch *ep;
 	int error;
@@ -758,7 +777,7 @@ static void fc_exch_timeout(struct work_struct *work)
 	u32 e_stat;
 	int rc = 1;

-	FC_EXCH_DBG(ep, "Exchange timed out\n");
+	FC_EXCH_DBG(ep, "Exchange timed out state %x\n", ep->state);

 	spin_lock_bh(&ep->ex_lock);
 	if (ep->state & (FC_EX_RST_CLEANUP | FC_EX_DONE))
@@ -821,15 +840,19 @@ static struct fc_exch *fc_exch_em_alloc(struct fc_lport *lport,

 	/* peek cache of free slot */
 	if (pool->left != FC_XID_UNKNOWN) {
+		if (!WARN_ON(fc_exch_ptr_get(pool, pool->left))) {
 			index = pool->left;
 			pool->left = FC_XID_UNKNOWN;
 			goto hit;
 		}
+	}
 	if (pool->right != FC_XID_UNKNOWN) {
+		if (!WARN_ON(fc_exch_ptr_get(pool, pool->right))) {
 			index = pool->right;
 			pool->right = FC_XID_UNKNOWN;
 			goto hit;
 		}
+	}

 	index = pool->next_index;
 	/* allocate new exch from pool */
@@ -888,14 +911,19 @@ err:
  * EM is selected when a NULL match function pointer is encountered
  * or when a call to a match function returns true.
  */
-static inline struct fc_exch *fc_exch_alloc(struct fc_lport *lport,
-					    struct fc_frame *fp)
+static struct fc_exch *fc_exch_alloc(struct fc_lport *lport,
+				     struct fc_frame *fp)
 {
 	struct fc_exch_mgr_anchor *ema;
+	struct fc_exch *ep;

-	list_for_each_entry(ema, &lport->ema_list, ema_list)
-		if (!ema->match || ema->match(fp))
-			return fc_exch_em_alloc(lport, ema->mp);
+	list_for_each_entry(ema, &lport->ema_list, ema_list) {
+		if (!ema->match || ema->match(fp)) {
+			ep = fc_exch_em_alloc(lport, ema->mp);
+			if (ep)
+				return ep;
+		}
+	}
 	return NULL;
 }

@@ -906,14 +934,17 @@ static inline struct fc_exch *fc_exch_alloc(struct fc_lport *lport,
  */
 static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp, u16 xid)
 {
+	struct fc_lport *lport = mp->lport;
 	struct fc_exch_pool *pool;
 	struct fc_exch *ep = NULL;
 	u16 cpu = xid & fc_cpu_mask;

+	if (xid == FC_XID_UNKNOWN)
+		return NULL;
+
 	if (cpu >= nr_cpu_ids || !cpu_possible(cpu)) {
-		printk_ratelimited(KERN_ERR
-			"libfc: lookup request for XID = %d, "
-			"indicates invalid CPU %d\n", xid, cpu);
+		pr_err("host%u: lport %6.6x: xid %d invalid CPU %d\n:",
+		       lport->host->host_no, lport->port_id, xid, cpu);
 		return NULL;
 	}
@@ -921,6 +952,10 @@ static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp, u16 xid)
 		pool = per_cpu_ptr(mp->pool, cpu);
 		spin_lock_bh(&pool->lock);
 		ep = fc_exch_ptr_get(pool, (xid - mp->min_xid) >> fc_cpu_order);
+		if (ep == &fc_quarantine_exch) {
+			FC_LPORT_DBG(lport, "xid %x quarantined\n", xid);
+			ep = NULL;
+		}
 		if (ep) {
 			WARN_ON(ep->xid != xid);
 			fc_exch_hold(ep);
@@ -938,7 +973,7 @@ static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp, u16 xid)
  *
  * Note: May sleep if invoked from outside a response handler.
  */
-static void fc_exch_done(struct fc_seq *sp)
+void fc_exch_done(struct fc_seq *sp)
 {
 	struct fc_exch *ep = fc_seq_exch(sp);
 	int rc;
@@ -951,6 +986,7 @@ static void fc_exch_done(struct fc_seq *sp)
 	if (!rc)
 		fc_exch_delete(ep);
 }
+EXPORT_SYMBOL(fc_exch_done);

 /**
  * fc_exch_resp() - Allocate a new exchange for a response frame
@@ -1197,7 +1233,7 @@ static void fc_exch_set_addr(struct fc_exch *ep,
  *
  * The received frame is not freed.
  */
-static void fc_seq_els_rsp_send(struct fc_frame *fp, enum fc_els_cmd els_cmd,
-				struct fc_seq_els_data *els_data)
+void fc_seq_els_rsp_send(struct fc_frame *fp, enum fc_els_cmd els_cmd,
+			 struct fc_seq_els_data *els_data)
 {
 	switch (els_cmd) {
@@ -1217,6 +1253,7 @@ static void fc_seq_els_rsp_send(struct fc_frame *fp, enum fc_els_cmd els_cmd,
 		FC_LPORT_DBG(fr_dev(fp), "Invalid ELS CMD:%x\n", els_cmd);
 	}
 }
+EXPORT_SYMBOL_GPL(fc_seq_els_rsp_send);

 /**
  * fc_seq_send_last() - Send a sequence that is the last in the exchange
@@ -1258,8 +1295,10 @@ static void fc_seq_send_ack(struct fc_seq *sp, const struct fc_frame *rx_fp)
 	 */
 	if (fc_sof_needs_ack(fr_sof(rx_fp))) {
 		fp = fc_frame_alloc(lport, 0);
-		if (!fp)
+		if (!fp) {
+			FC_EXCH_DBG(ep, "Drop ACK request, out of memory\n");
 			return;
+		}

 		fh = fc_frame_header_get(fp);
 		fh->fh_r_ctl = FC_RCTL_ACK_1;
@@ -1312,13 +1351,18 @@ static void fc_exch_send_ba_rjt(struct fc_frame *rx_fp,
 	struct fc_frame_header *rx_fh;
 	struct fc_frame_header *fh;
 	struct fc_ba_rjt *rp;
+	struct fc_seq *sp;
 	struct fc_lport *lport;
 	unsigned int f_ctl;

 	lport = fr_dev(rx_fp);
+	sp = fr_seq(rx_fp);
 	fp = fc_frame_alloc(lport, sizeof(*rp));
-	if (!fp)
+	if (!fp) {
+		FC_EXCH_DBG(fc_seq_exch(sp),
+			    "Drop BA_RJT request, out of memory\n");
 		return;
+	}
 	fh = fc_frame_header_get(fp);
 	rx_fh = fc_frame_header_get(rx_fp);
@@ -1383,14 +1427,17 @@ static void fc_exch_recv_abts(struct fc_exch *ep, struct fc_frame *rx_fp)
 	if (!ep)
 		goto reject;

+	FC_EXCH_DBG(ep, "exch: ABTS received\n");
 	fp = fc_frame_alloc(ep->lp, sizeof(*ap));
-	if (!fp)
+	if (!fp) {
+		FC_EXCH_DBG(ep, "Drop ABTS request, out of memory\n");
 		goto free;
+	}

 	spin_lock_bh(&ep->ex_lock);
 	if (ep->esb_stat & ESB_ST_COMPLETE) {
 		spin_unlock_bh(&ep->ex_lock);
+		FC_EXCH_DBG(ep, "exch: ABTS rejected, exchange complete\n");
 		fc_frame_free(fp);
 		goto reject;
 	}
@@ -1433,7 +1480,7 @@ reject:
  * A reference will be held on the exchange/sequence for the caller, which
  * must call fc_seq_release().
  */
-static struct fc_seq *fc_seq_assign(struct fc_lport *lport, struct fc_frame *fp)
+struct fc_seq *fc_seq_assign(struct fc_lport *lport, struct fc_frame *fp)
 {
 	struct fc_exch_mgr_anchor *ema;
@@ -1447,15 +1494,17 @@ static struct fc_seq *fc_seq_assign(struct fc_lport *lport, struct fc_frame *fp)
 			break;
 	return fr_seq(fp);
 }
+EXPORT_SYMBOL(fc_seq_assign);

 /**
  * fc_seq_release() - Release the hold
  * @sp: The sequence.
  */
-static void fc_seq_release(struct fc_seq *sp)
+void fc_seq_release(struct fc_seq *sp)
 {
 	fc_exch_release(fc_seq_exch(sp));
 }
+EXPORT_SYMBOL(fc_seq_release);
 /**
  * fc_exch_recv_req() - Handler for an incoming request
@@ -1491,7 +1540,7 @@ static void fc_exch_recv_req(struct fc_lport *lport, struct fc_exch_mgr *mp,
 	 * The upper-level protocol may request one later, if needed.
 	 */
 	if (fh->fh_rx_id == htons(FC_XID_UNKNOWN))
-		return lport->tt.lport_recv(lport, fp);
+		return fc_lport_recv(lport, fp);

 	reject = fc_seq_lookup_recip(lport, mp, fp);
 	if (reject == FC_RJT_NONE) {
@@ -1512,7 +1561,7 @@ static void fc_exch_recv_req(struct fc_lport *lport, struct fc_exch_mgr *mp,
 		 * first.
 		 */
 		if (!fc_invoke_resp(ep, sp, fp))
-			lport->tt.lport_recv(lport, fp);
+			fc_lport_recv(lport, fp);
 		fc_exch_release(ep);	/* release from lookup */
 	} else {
 		FC_LPORT_DBG(lport, "exch/seq lookup failed: reject %x\n",
@@ -1562,9 +1611,6 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp)
 	if (fc_sof_is_init(sof)) {
 		sp->ssb_stat |= SSB_ST_RESP;
 		sp->id = fh->fh_seq_id;
-	} else if (sp->id != fh->fh_seq_id) {
-		atomic_inc(&mp->stats.seq_not_found);
-		goto rel;
 	}

 	f_ctl = ntoh24(fh->fh_f_ctl);
@@ -1761,7 +1807,10 @@ static void fc_exch_recv_bls(struct fc_exch_mgr *mp, struct fc_frame *fp)
 			fc_frame_free(fp);
 		break;
 	case FC_RCTL_BA_ABTS:
-		fc_exch_recv_abts(ep, fp);
+		if (ep)
+			fc_exch_recv_abts(ep, fp);
+		else
+			fc_frame_free(fp);
 		break;
 	default:			/* ignore junk */
 		fc_frame_free(fp);
@@ -1784,11 +1833,16 @@ static void fc_seq_ls_acc(struct fc_frame *rx_fp)
 	struct fc_lport *lport;
 	struct fc_els_ls_acc *acc;
 	struct fc_frame *fp;
+	struct fc_seq *sp;

 	lport = fr_dev(rx_fp);
+	sp = fr_seq(rx_fp);
 	fp = fc_frame_alloc(lport, sizeof(*acc));
-	if (!fp)
+	if (!fp) {
+		FC_EXCH_DBG(fc_seq_exch(sp),
+			    "exch: drop LS_ACC, out of memory\n");
 		return;
+	}
 	acc = fc_frame_payload_get(fp, sizeof(*acc));
 	memset(acc, 0, sizeof(*acc));
 	acc->la_cmd = ELS_LS_ACC;
@@ -1811,11 +1865,16 @@ static void fc_seq_ls_rjt(struct fc_frame *rx_fp, enum fc_els_rjt_reason reason,
 	struct fc_lport *lport;
 	struct fc_els_ls_rjt *rjt;
 	struct fc_frame *fp;
+	struct fc_seq *sp;

 	lport = fr_dev(rx_fp);
+	sp = fr_seq(rx_fp);
 	fp = fc_frame_alloc(lport, sizeof(*rjt));
-	if (!fp)
+	if (!fp) {
+		FC_EXCH_DBG(fc_seq_exch(sp),
+			    "exch: drop LS_ACC, out of memory\n");
 		return;
+	}
 	rjt = fc_frame_payload_get(fp, sizeof(*rjt));
 	memset(rjt, 0, sizeof(*rjt));
 	rjt->er_cmd = ELS_LS_RJT;
@@ -1960,8 +2019,7 @@ static void fc_exch_els_rec(struct fc_frame *rfp)
 	enum fc_els_rjt_reason reason = ELS_RJT_LOGIC;
 	enum fc_els_rjt_explan explan;
 	u32 sid;
-	u16 rxid;
-	u16 oxid;
+	u16 xid, rxid, oxid;

 	lport = fr_dev(rfp);
 	rp = fc_frame_payload_get(rfp, sizeof(*rp));
@@ -1972,18 +2030,35 @@ static void fc_exch_els_rec(struct fc_frame *rfp)
 	rxid = ntohs(rp->rec_rx_id);
 	oxid = ntohs(rp->rec_ox_id);

-	ep = fc_exch_lookup(lport,
-			    sid == fc_host_port_id(lport->host) ? oxid : rxid);
 	explan = ELS_EXPL_OXID_RXID;
-	if (!ep)
+	if (sid == fc_host_port_id(lport->host))
+		xid = oxid;
+	else
+		xid = rxid;
+	if (xid == FC_XID_UNKNOWN) {
+		FC_LPORT_DBG(lport,
+			     "REC request from %x: invalid rxid %x oxid %x\n",
+			     sid, rxid, oxid);
+		goto reject;
+	}
+	ep = fc_exch_lookup(lport, xid);
+	if (!ep) {
+		FC_LPORT_DBG(lport,
+			     "REC request from %x: rxid %x oxid %x not found\n",
+			     sid, rxid, oxid);
 		goto reject;
+	}
+	FC_EXCH_DBG(ep, "REC request from %x: rxid %x oxid %x\n",
+		    sid, rxid, oxid);
 	if (ep->oid != sid || oxid != ep->oxid)
 		goto rel;
 	if (rxid != FC_XID_UNKNOWN && rxid != ep->rxid)
 		goto rel;
 	fp = fc_frame_alloc(lport, sizeof(*acc));
-	if (!fp)
+	if (!fp) {
+		FC_EXCH_DBG(ep, "Drop REC request, out of memory\n");
 		goto out;
+	}

 	acc = fc_frame_payload_get(fp, sizeof(*acc));
 	memset(acc, 0, sizeof(*acc));
@@ -2065,6 +2140,24 @@ cleanup:
  * @arg:	The argument to be passed to the response handler
  * @timer_msec: The timeout period for the exchange
  *
+ * The exchange response handler is set in this routine to resp()
+ * function pointer. It can be called in two scenarios: if a timeout
+ * occurs or if a response frame is received for the exchange. The
+ * fc_frame pointer in response handler will also indicate timeout
+ * as error using IS_ERR related macros.
+ *
+ * The exchange destructor handler is also set in this routine.
+ * The destructor handler is invoked by EM layer when exchange
+ * is about to free, this can be used by caller to free its
+ * resources along with exchange free.
+ *
+ * The arg is passed back to resp and destructor handler.
+ *
+ * The timeout value (in msec) for an exchange is set if non zero
+ * timer_msec argument is specified. The timer is canceled when
+ * it fires or when the exchange is done. The exchange timeout handler
+ * is registered by EM layer.
+ *
  * The frame pointer with some of the header's fields must be
  * filled before calling this routine, those fields are:
  *
@@ -2075,13 +2168,12 @@ cleanup:
  * - frame control
  * - parameter or relative offset
  */
-static struct fc_seq *fc_exch_seq_send(struct fc_lport *lport,
-				       struct fc_frame *fp,
-				       void (*resp)(struct fc_seq *,
-						    struct fc_frame *fp,
-						    void *arg),
-				       void (*destructor)(struct fc_seq *,
-							  void *),
-				       void *arg, u32 timer_msec)
+struct fc_seq *fc_exch_seq_send(struct fc_lport *lport,
+				struct fc_frame *fp,
+				void (*resp)(struct fc_seq *,
+					     struct fc_frame *fp,
+					     void *arg),
+				void (*destructor)(struct fc_seq *, void *),
+				void *arg, u32 timer_msec)
 {
 	struct fc_exch *ep;
@@ -2101,7 +2193,7 @@ static struct fc_seq *fc_exch_seq_send(struct fc_lport *lport,
 	ep->resp = resp;
 	ep->destructor = destructor;
 	ep->arg = arg;
-	ep->r_a_tov = FC_DEF_R_A_TOV;
+	ep->r_a_tov = lport->r_a_tov;
 	ep->lp = lport;
 	sp = &ep->seq;
@@ -2135,6 +2227,7 @@ err:
 	fc_exch_delete(ep);
 	return NULL;
 }
+EXPORT_SYMBOL(fc_exch_seq_send);
 /**
  * fc_exch_rrq() - Send an ELS RRQ (Reinstate Recovery Qualifier) command
@@ -2176,6 +2269,7 @@ static void fc_exch_rrq(struct fc_exch *ep)
 	return;

 retry:
+	FC_EXCH_DBG(ep, "exch: RRQ send failed\n");
 	spin_lock_bh(&ep->ex_lock);
 	if (ep->state & (FC_EX_RST_CLEANUP | FC_EX_DONE)) {
 		spin_unlock_bh(&ep->ex_lock);
@@ -2218,6 +2312,8 @@ static void fc_exch_els_rrq(struct fc_frame *fp)
 	if (!ep)
 		goto reject;
 	spin_lock_bh(&ep->ex_lock);
+	FC_EXCH_DBG(ep, "RRQ request from %x: xid %x rxid %x oxid %x\n",
+		    sid, xid, ntohs(rp->rrq_rx_id), ntohs(rp->rrq_ox_id));
 	if (ep->oxid != ntohs(rp->rrq_ox_id))
 		goto unlock_reject;
 	if (ep->rxid != ntohs(rp->rrq_rx_id) &&
@@ -2385,6 +2481,7 @@ struct fc_exch_mgr *fc_exch_mgr_alloc(struct fc_lport *lport,
 		return NULL;

 	mp->class = class;
+	mp->lport = lport;
 	/* adjust em exch xid range for offload */
 	mp->min_xid = min_xid;
@@ -2558,36 +2655,9 @@ EXPORT_SYMBOL(fc_exch_recv);
  */
 int fc_exch_init(struct fc_lport *lport)
 {
-	if (!lport->tt.seq_start_next)
-		lport->tt.seq_start_next = fc_seq_start_next;
-
-	if (!lport->tt.seq_set_resp)
-		lport->tt.seq_set_resp = fc_seq_set_resp;
-
-	if (!lport->tt.exch_seq_send)
-		lport->tt.exch_seq_send = fc_exch_seq_send;
-
-	if (!lport->tt.seq_send)
-		lport->tt.seq_send = fc_seq_send;
-
-	if (!lport->tt.seq_els_rsp_send)
-		lport->tt.seq_els_rsp_send = fc_seq_els_rsp_send;
-
-	if (!lport->tt.exch_done)
-		lport->tt.exch_done = fc_exch_done;
-
 	if (!lport->tt.exch_mgr_reset)
 		lport->tt.exch_mgr_reset = fc_exch_mgr_reset;

-	if (!lport->tt.seq_exch_abort)
-		lport->tt.seq_exch_abort = fc_seq_exch_abort;
-
-	if (!lport->tt.seq_assign)
-		lport->tt.seq_assign = fc_seq_assign;
-
-	if (!lport->tt.seq_release)
-		lport->tt.seq_release = fc_seq_release;
-
 	return 0;
 }
 EXPORT_SYMBOL(fc_exch_init);


@@ -122,6 +122,7 @@ static void fc_fcp_srr_error(struct fc_fcp_pkt *, struct fc_frame *);
 #define FC_HRD_ERROR		9
 #define FC_CRC_ERROR		10
 #define FC_TIMED_OUT		11
+#define FC_TRANS_RESET		12
 
 /*
  * Error recovery timeout values.
@@ -195,7 +196,7 @@ static void fc_fcp_pkt_hold(struct fc_fcp_pkt *fsp)
  * @seq: The sequence that the FCP packet is on (required by destructor API)
  * @fsp: The FCP packet to be released
  *
- * This routine is called by a destructor callback in the exch_seq_send()
+ * This routine is called by a destructor callback in the fc_exch_seq_send()
  * routine of the libfc Transport Template. The 'struct fc_seq' is a required
  * argument even though it is not used by this routine.
  *
@@ -253,8 +254,21 @@ static inline void fc_fcp_unlock_pkt(struct fc_fcp_pkt *fsp)
  */
 static void fc_fcp_timer_set(struct fc_fcp_pkt *fsp, unsigned long delay)
 {
-	if (!(fsp->state & FC_SRB_COMPL))
+	if (!(fsp->state & FC_SRB_COMPL)) {
 		mod_timer(&fsp->timer, jiffies + delay);
+		fsp->timer_delay = delay;
+	}
+}
+
+static void fc_fcp_abort_done(struct fc_fcp_pkt *fsp)
+{
+	fsp->state |= FC_SRB_ABORTED;
+	fsp->state &= ~FC_SRB_ABORT_PENDING;
+
+	if (fsp->wait_for_comp)
+		complete(&fsp->tm_done);
+	else
+		fc_fcp_complete_locked(fsp);
 }
 
 /**
@@ -264,6 +278,8 @@ static void fc_fcp_timer_set(struct fc_fcp_pkt *fsp, unsigned long delay)
  */
 static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
 {
+	int rc;
+
 	if (!fsp->seq_ptr)
 		return -EINVAL;
@@ -271,7 +287,16 @@ static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
 	put_cpu();
 
 	fsp->state |= FC_SRB_ABORT_PENDING;
-	return fsp->lp->tt.seq_exch_abort(fsp->seq_ptr, 0);
+	rc = fc_seq_exch_abort(fsp->seq_ptr, 0);
+	/*
+	 * fc_seq_exch_abort() might return -ENXIO if
+	 * the sequence is already completed
+	 */
+	if (rc == -ENXIO) {
+		fc_fcp_abort_done(fsp);
+		rc = 0;
+	}
+	return rc;
 }
 
 /**
@@ -283,16 +308,16 @@ static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
  * fc_io_compl() will notify the SCSI-ml that the I/O is done.
  * The SCSI-ml will retry the command.
  */
-static void fc_fcp_retry_cmd(struct fc_fcp_pkt *fsp)
+static void fc_fcp_retry_cmd(struct fc_fcp_pkt *fsp, int status_code)
 {
 	if (fsp->seq_ptr) {
-		fsp->lp->tt.exch_done(fsp->seq_ptr);
+		fc_exch_done(fsp->seq_ptr);
 		fsp->seq_ptr = NULL;
 	}
 	fsp->state &= ~FC_SRB_ABORT_PENDING;
 	fsp->io_status = 0;
-	fsp->status_code = FC_ERROR;
+	fsp->status_code = status_code;
 	fc_fcp_complete_locked(fsp);
 }
 
@@ -402,8 +427,6 @@ static void fc_fcp_can_queue_ramp_down(struct fc_lport *lport)
 	if (!can_queue)
 		can_queue = 1;
 	lport->host->can_queue = can_queue;
-	shost_printk(KERN_ERR, lport->host, "libfc: Could not allocate frame.\n"
-		     "Reducing can_queue to %d.\n", can_queue);
 
 unlock:
 	spin_unlock_irqrestore(lport->host->host_lock, flags);
@@ -430,9 +453,28 @@ static inline struct fc_frame *fc_fcp_frame_alloc(struct fc_lport *lport,
 	put_cpu();
 	/* error case */
 	fc_fcp_can_queue_ramp_down(lport);
+	shost_printk(KERN_ERR, lport->host,
+		     "libfc: Could not allocate frame, "
+		     "reducing can_queue to %d.\n", lport->host->can_queue);
 	return NULL;
 }
 
+/**
+ * get_fsp_rec_tov() - Helper function to get REC_TOV
+ * @fsp: the FCP packet
+ *
+ * Returns rec tov in jiffies as rpriv->e_d_tov + 1 second
+ */
+static inline unsigned int get_fsp_rec_tov(struct fc_fcp_pkt *fsp)
+{
+	struct fc_rport_libfc_priv *rpriv = fsp->rport->dd_data;
+	unsigned int e_d_tov = FC_DEF_E_D_TOV;
+
+	if (rpriv && rpriv->e_d_tov > e_d_tov)
+		e_d_tov = rpriv->e_d_tov;
+	return msecs_to_jiffies(e_d_tov) + HZ;
+}
 /**
  * fc_fcp_recv_data() - Handler for receiving SCSI-FCP data from a target
  * @fsp: The FCP packet the data is on
@@ -536,8 +578,10 @@ crc_err:
 	 * and completes the transfer, call the completion handler.
 	 */
 	if (unlikely(fsp->state & FC_SRB_RCV_STATUS) &&
-	    fsp->xfer_len == fsp->data_len - fsp->scsi_resid)
+	    fsp->xfer_len == fsp->data_len - fsp->scsi_resid) {
+		FC_FCP_DBG(fsp, "complete out-of-order sequence\n");
 		fc_fcp_complete_locked(fsp);
+	}
 	return;
 err:
 	fc_fcp_recovery(fsp, host_bcode);
@@ -609,7 +653,7 @@ static int fc_fcp_send_data(struct fc_fcp_pkt *fsp, struct fc_seq *seq,
 	remaining = seq_blen;
 	fh_parm_offset = frame_offset = offset;
 	tlen = 0;
-	seq = lport->tt.seq_start_next(seq);
+	seq = fc_seq_start_next(seq);
 	f_ctl = FC_FC_REL_OFF;
 	WARN_ON(!seq);
@@ -687,7 +731,7 @@ static int fc_fcp_send_data(struct fc_fcp_pkt *fsp, struct fc_seq *seq,
 		/*
 		 * send fragment using for a sequence.
 		 */
-		error = lport->tt.seq_send(lport, seq, fp);
+		error = fc_seq_send(lport, seq, fp);
 		if (error) {
 			WARN_ON(1);		/* send error should be rare */
 			return error;
@@ -727,15 +771,8 @@ static void fc_fcp_abts_resp(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 		ba_done = 0;
 	}
 
-	if (ba_done) {
-		fsp->state |= FC_SRB_ABORTED;
-		fsp->state &= ~FC_SRB_ABORT_PENDING;
-
-		if (fsp->wait_for_comp)
-			complete(&fsp->tm_done);
-		else
-			fc_fcp_complete_locked(fsp);
-	}
+	if (ba_done)
+		fc_fcp_abort_done(fsp);
 }
 
 /**
@@ -764,8 +801,11 @@ static void fc_fcp_recv(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 	fh = fc_frame_header_get(fp);
 	r_ctl = fh->fh_r_ctl;
 
-	if (lport->state != LPORT_ST_READY)
+	if (lport->state != LPORT_ST_READY) {
+		FC_FCP_DBG(fsp, "lport state %d, ignoring r_ctl %x\n",
+			   lport->state, r_ctl);
 		goto out;
+	}
 	if (fc_fcp_lock_pkt(fsp))
 		goto out;
@@ -774,8 +814,10 @@ static void fc_fcp_recv(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 		goto unlock;
 	}
 
-	if (fsp->state & (FC_SRB_ABORTED | FC_SRB_ABORT_PENDING))
+	if (fsp->state & (FC_SRB_ABORTED | FC_SRB_ABORT_PENDING)) {
+		FC_FCP_DBG(fsp, "command aborted, ignoring r_ctl %x\n", r_ctl);
 		goto unlock;
+	}
 
 	if (r_ctl == FC_RCTL_DD_DATA_DESC) {
 		/*
@@ -910,7 +952,16 @@ static void fc_fcp_resp(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 			 * Wait a at least one jiffy to see if it is delivered.
 			 * If this expires without data, we may do SRR.
 			 */
-			fc_fcp_timer_set(fsp, 2);
+			if (fsp->lp->qfull) {
+				FC_FCP_DBG(fsp, "tgt %6.6x queue busy retry\n",
+					   fsp->rport->port_id);
+				return;
+			}
+			FC_FCP_DBG(fsp, "tgt %6.6x xfer len %zx data underrun "
+				   "len %x, data len %x\n",
+				   fsp->rport->port_id,
+				   fsp->xfer_len, expected_len, fsp->data_len);
+			fc_fcp_timer_set(fsp, get_fsp_rec_tov(fsp));
 			return;
 		}
 		fsp->status_code = FC_DATA_OVRRUN;
@@ -959,9 +1010,12 @@ static void fc_fcp_complete_locked(struct fc_fcp_pkt *fsp)
 		if (fsp->cdb_status == SAM_STAT_GOOD &&
 		    fsp->xfer_len < fsp->data_len && !fsp->io_status &&
 		    (!(fsp->scsi_comp_flags & FCP_RESID_UNDER) ||
-		     fsp->xfer_len < fsp->data_len - fsp->scsi_resid))
+		     fsp->xfer_len < fsp->data_len - fsp->scsi_resid)) {
+			FC_FCP_DBG(fsp, "data underrun, xfer %zx data %x\n",
+				   fsp->xfer_len, fsp->data_len);
 			fsp->status_code = FC_DATA_UNDRUN;
 		}
+	}
 
 	seq = fsp->seq_ptr;
 	if (seq) {
@@ -970,7 +1024,7 @@ static void fc_fcp_complete_locked(struct fc_fcp_pkt *fsp)
 			struct fc_frame *conf_frame;
 			struct fc_seq *csp;
 
-			csp = lport->tt.seq_start_next(seq);
+			csp = fc_seq_start_next(seq);
 			conf_frame = fc_fcp_frame_alloc(fsp->lp, 0);
 			if (conf_frame) {
 				f_ctl = FC_FC_SEQ_INIT;
@@ -979,10 +1033,10 @@ static void fc_fcp_complete_locked(struct fc_fcp_pkt *fsp)
 				fc_fill_fc_hdr(conf_frame, FC_RCTL_DD_SOL_CTL,
 					       ep->did, ep->sid,
 					       FC_TYPE_FCP, f_ctl, 0);
-				lport->tt.seq_send(lport, csp, conf_frame);
+				fc_seq_send(lport, csp, conf_frame);
 			}
 		}
-		lport->tt.exch_done(seq);
+		fc_exch_done(seq);
 	}
 	/*
 	 * Some resets driven by SCSI are not I/Os and do not have
@@ -1000,10 +1054,8 @@ static void fc_fcp_complete_locked(struct fc_fcp_pkt *fsp)
  */
 static void fc_fcp_cleanup_cmd(struct fc_fcp_pkt *fsp, int error)
 {
-	struct fc_lport *lport = fsp->lp;
-
 	if (fsp->seq_ptr) {
-		lport->tt.exch_done(fsp->seq_ptr);
+		fc_exch_done(fsp->seq_ptr);
 		fsp->seq_ptr = NULL;
 	}
 	fsp->status_code = error;
@@ -1115,19 +1167,6 @@ static int fc_fcp_pkt_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp)
 	return rc;
 }
 
-/**
- * get_fsp_rec_tov() - Helper function to get REC_TOV
- * @fsp: the FCP packet
- *
- * Returns rec tov in jiffies as rpriv->e_d_tov + 1 second
- */
-static inline unsigned int get_fsp_rec_tov(struct fc_fcp_pkt *fsp)
-{
-	struct fc_rport_libfc_priv *rpriv = fsp->rport->dd_data;
-
-	return msecs_to_jiffies(rpriv->e_d_tov) + HZ;
-}
-
 /**
  * fc_fcp_cmd_send() - Send a FCP command
  * @lport: The local port to send the command on
@@ -1165,8 +1204,7 @@ static int fc_fcp_cmd_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp,
 		       rpriv->local_port->port_id, FC_TYPE_FCP,
 		       FC_FCTL_REQ, 0);
 
-	seq = lport->tt.exch_seq_send(lport, fp, resp, fc_fcp_pkt_destroy,
-				      fsp, 0);
+	seq = fc_exch_seq_send(lport, fp, resp, fc_fcp_pkt_destroy, fsp, 0);
 	if (!seq) {
 		rc = -1;
 		goto unlock;
@@ -1196,7 +1234,7 @@ static void fc_fcp_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 		return;
 
 	if (error == -FC_EX_CLOSED) {
-		fc_fcp_retry_cmd(fsp);
+		fc_fcp_retry_cmd(fsp, FC_ERROR);
 		goto unlock;
 	}
@@ -1222,8 +1260,16 @@ static int fc_fcp_pkt_abort(struct fc_fcp_pkt *fsp)
 	int rc = FAILED;
 	unsigned long ticks_left;
 
-	if (fc_fcp_send_abort(fsp))
+	FC_FCP_DBG(fsp, "pkt abort state %x\n", fsp->state);
+	if (fc_fcp_send_abort(fsp)) {
+		FC_FCP_DBG(fsp, "failed to send abort\n");
 		return FAILED;
+	}
+
+	if (fsp->state & FC_SRB_ABORTED) {
+		FC_FCP_DBG(fsp, "target abort cmd completed\n");
+		return SUCCESS;
+	}
 
 	init_completion(&fsp->tm_done);
 	fsp->wait_for_comp = 1;
@@ -1301,7 +1347,7 @@ static int fc_lun_reset(struct fc_lport *lport, struct fc_fcp_pkt *fsp,
 
 	spin_lock_bh(&fsp->scsi_pkt_lock);
 	if (fsp->seq_ptr) {
-		lport->tt.exch_done(fsp->seq_ptr);
+		fc_exch_done(fsp->seq_ptr);
 		fsp->seq_ptr = NULL;
 	}
 	fsp->wait_for_comp = 0;
@@ -1355,7 +1401,7 @@ static void fc_tm_done(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 	if (fh->fh_type != FC_TYPE_BLS)
 		fc_fcp_resp(fsp, fp);
 	fsp->seq_ptr = NULL;
-	fsp->lp->tt.exch_done(seq);
+	fc_exch_done(seq);
 out_unlock:
 	fc_fcp_unlock_pkt(fsp);
 out:
@@ -1394,6 +1440,15 @@ static void fc_fcp_timeout(unsigned long data)
 	if (fsp->cdb_cmd.fc_tm_flags)
 		goto unlock;
 
+	if (fsp->lp->qfull) {
+		FC_FCP_DBG(fsp, "fcp timeout, resetting timer delay %d\n",
+			   fsp->timer_delay);
+		setup_timer(&fsp->timer, fc_fcp_timeout, (unsigned long)fsp);
+		fc_fcp_timer_set(fsp, fsp->timer_delay);
+		goto unlock;
+	}
+	FC_FCP_DBG(fsp, "fcp timeout, delay %d flags %x state %x\n",
+		   fsp->timer_delay, rpriv->flags, fsp->state);
 	fsp->state |= FC_SRB_FCP_PROCESSING_TMO;
 
 	if (rpriv->flags & FC_RP_FLAGS_REC_SUPPORTED)
@@ -1486,8 +1541,8 @@ static void fc_fcp_rec_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 		rjt = fc_frame_payload_get(fp, sizeof(*rjt));
 		switch (rjt->er_reason) {
 		default:
-			FC_FCP_DBG(fsp, "device %x unexpected REC reject "
-				   "reason %d expl %d\n",
+			FC_FCP_DBG(fsp,
+				   "device %x invalid REC reject %d/%d\n",
 				   fsp->rport->port_id, rjt->er_reason,
 				   rjt->er_explan);
 			/* fall through */
@@ -1503,18 +1558,23 @@ static void fc_fcp_rec_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 			break;
 		case ELS_RJT_LOGIC:
 		case ELS_RJT_UNAB:
+			FC_FCP_DBG(fsp, "device %x REC reject %d/%d\n",
+				   fsp->rport->port_id, rjt->er_reason,
+				   rjt->er_explan);
 			/*
-			 * If no data transfer, the command frame got dropped
-			 * so we just retry.  If data was transferred, we
-			 * lost the response but the target has no record,
-			 * so we abort and retry.
+			 * If response got lost or is stuck in the
+			 * queue somewhere we have no idea if and when
+			 * the response will be received. So quarantine
+			 * the xid and retry the command.
 			 */
-			if (rjt->er_explan == ELS_EXPL_OXID_RXID &&
-			    fsp->xfer_len == 0) {
-				fc_fcp_retry_cmd(fsp);
+			if (rjt->er_explan == ELS_EXPL_OXID_RXID) {
+				struct fc_exch *ep = fc_seq_exch(fsp->seq_ptr);
+				ep->state |= FC_EX_QUARANTINE;
+				fsp->state |= FC_SRB_ABORTED;
+				fc_fcp_retry_cmd(fsp, FC_TRANS_RESET);
 				break;
 			}
-			fc_fcp_recovery(fsp, FC_ERROR);
+			fc_fcp_recovery(fsp, FC_TRANS_RESET);
 			break;
 		}
 	} else if (opcode == ELS_LS_ACC) {
@@ -1608,7 +1668,9 @@ static void fc_fcp_rec_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 	switch (error) {
 	case -FC_EX_CLOSED:
-		fc_fcp_retry_cmd(fsp);
+		FC_FCP_DBG(fsp, "REC %p fid %6.6x exchange closed\n",
+			   fsp, fsp->rport->port_id);
+		fc_fcp_retry_cmd(fsp, FC_ERROR);
 		break;
 
 	default:
@@ -1622,8 +1684,8 @@ static void fc_fcp_rec_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 		 * Assume REC or LS_ACC was lost.
 		 * The exchange manager will have aborted REC, so retry.
 		 */
-		FC_FCP_DBG(fsp, "REC fid %6.6x error error %d retry %d/%d\n",
-			   fsp->rport->port_id, error, fsp->recov_retry,
+		FC_FCP_DBG(fsp, "REC %p fid %6.6x exchange timeout retry %d/%d\n",
+			   fsp, fsp->rport->port_id, fsp->recov_retry,
 			   FC_MAX_RECOV_RETRY);
 		if (fsp->recov_retry++ < FC_MAX_RECOV_RETRY)
 			fc_fcp_rec(fsp);
@@ -1642,6 +1704,7 @@ out:
  */
 static void fc_fcp_recovery(struct fc_fcp_pkt *fsp, u8 code)
 {
+	FC_FCP_DBG(fsp, "start recovery code %x\n", code);
 	fsp->status_code = code;
 	fsp->cdb_status = 0;
 	fsp->io_status = 0;
@@ -1668,7 +1731,6 @@ static void fc_fcp_srr(struct fc_fcp_pkt *fsp, enum fc_rctl r_ctl, u32 offset)
 	struct fc_seq *seq;
 	struct fcp_srr *srr;
 	struct fc_frame *fp;
-	unsigned int rec_tov;
 
 	rport = fsp->rport;
 	rpriv = rport->dd_data;
@@ -1692,10 +1754,9 @@ static void fc_fcp_srr(struct fc_fcp_pkt *fsp, enum fc_rctl r_ctl, u32 offset)
 		       rpriv->local_port->port_id, FC_TYPE_FCP,
 		       FC_FCTL_REQ, 0);
 
-	rec_tov = get_fsp_rec_tov(fsp);
-	seq = lport->tt.exch_seq_send(lport, fp, fc_fcp_srr_resp,
-				      fc_fcp_pkt_destroy,
-				      fsp, jiffies_to_msecs(rec_tov));
+	seq = fc_exch_seq_send(lport, fp, fc_fcp_srr_resp,
+			       fc_fcp_pkt_destroy,
+			       fsp, get_fsp_rec_tov(fsp));
 	if (!seq)
 		goto retry;
@@ -1706,7 +1767,7 @@ static void fc_fcp_srr(struct fc_fcp_pkt *fsp, enum fc_rctl r_ctl, u32 offset)
 	fc_fcp_pkt_hold(fsp);		/* hold for outstanding SRR */
 	return;
 retry:
-	fc_fcp_retry_cmd(fsp);
+	fc_fcp_retry_cmd(fsp, FC_TRANS_RESET);
 }
 
 /**
@@ -1730,9 +1791,9 @@ static void fc_fcp_srr_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 	fh = fc_frame_header_get(fp);
 	/*
-	 * BUG? fc_fcp_srr_error calls exch_done which would release
+	 * BUG? fc_fcp_srr_error calls fc_exch_done which would release
 	 * the ep. But if fc_fcp_srr_error had got -FC_EX_TIMEOUT,
-	 * then fc_exch_timeout would be sending an abort. The exch_done
+	 * then fc_exch_timeout would be sending an abort. The fc_exch_done
 	 * call by fc_fcp_srr_error would prevent fc_exch.c from seeing
 	 * an abort response though.
 	 */
@@ -1753,7 +1814,7 @@ static void fc_fcp_srr_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
 	}
 	fc_fcp_unlock_pkt(fsp);
 out:
-	fsp->lp->tt.exch_done(seq);
+	fc_exch_done(seq);
 	fc_frame_free(fp);
 }
@@ -1768,20 +1829,22 @@ static void fc_fcp_srr_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
 		goto out;
 	switch (PTR_ERR(fp)) {
 	case -FC_EX_TIMEOUT:
+		FC_FCP_DBG(fsp, "SRR timeout, retries %d\n", fsp->recov_retry);
 		if (fsp->recov_retry++ < FC_MAX_RECOV_RETRY)
 			fc_fcp_rec(fsp);
 		else
 			fc_fcp_recovery(fsp, FC_TIMED_OUT);
 		break;
 	case -FC_EX_CLOSED:			/* e.g., link failure */
+		FC_FCP_DBG(fsp, "SRR error, exchange closed\n");
 		/* fall through */
 	default:
-		fc_fcp_retry_cmd(fsp);
+		fc_fcp_retry_cmd(fsp, FC_ERROR);
 		break;
 	}
 	fc_fcp_unlock_pkt(fsp);
 out:
-	fsp->lp->tt.exch_done(fsp->recov_seq);
+	fc_exch_done(fsp->recov_seq);
 }
 
 /**
@@ -1832,8 +1895,13 @@ int fc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc_cmd)
 	rpriv = rport->dd_data;
 
 	if (!fc_fcp_lport_queue_ready(lport)) {
-		if (lport->qfull)
+		if (lport->qfull) {
 			fc_fcp_can_queue_ramp_down(lport);
+			shost_printk(KERN_ERR, lport->host,
+				     "libfc: queue full, "
+				     "reducing can_queue to %d.\n",
+				     lport->host->can_queue);
+		}
 		rc = SCSI_MLQUEUE_HOST_BUSY;
 		goto out;
 	}
@@ -1980,15 +2048,26 @@ static void fc_io_compl(struct fc_fcp_pkt *fsp)
 		sc_cmd->result = (DID_ERROR << 16) | fsp->cdb_status;
 		break;
 	case FC_CMD_ABORTED:
+		if (host_byte(sc_cmd->result) == DID_TIME_OUT)
+			FC_FCP_DBG(fsp, "Returning DID_TIME_OUT to scsi-ml "
+				   "due to FC_CMD_ABORTED\n");
+		else {
 			FC_FCP_DBG(fsp, "Returning DID_ERROR to scsi-ml "
 				   "due to FC_CMD_ABORTED\n");
-		sc_cmd->result = (DID_ERROR << 16) | fsp->io_status;
+			set_host_byte(sc_cmd, DID_ERROR);
+		}
+		sc_cmd->result |= fsp->io_status;
 		break;
 	case FC_CMD_RESET:
 		FC_FCP_DBG(fsp, "Returning DID_RESET to scsi-ml "
 			   "due to FC_CMD_RESET\n");
 		sc_cmd->result = (DID_RESET << 16);
 		break;
+	case FC_TRANS_RESET:
+		FC_FCP_DBG(fsp, "Returning DID_SOFT_ERROR to scsi-ml "
+			   "due to FC_TRANS_RESET\n");
+		sc_cmd->result = (DID_SOFT_ERROR << 16);
+		break;
 	case FC_HRD_ERROR:
 		FC_FCP_DBG(fsp, "Returning DID_NO_CONNECT to scsi-ml "
 			   "due to FC_HRD_ERROR\n");
@@ -2142,7 +2221,7 @@ int fc_eh_host_reset(struct scsi_cmnd *sc_cmd)
 
 	fc_block_scsi_eh(sc_cmd);
 
-	lport->tt.lport_reset(lport);
+	fc_lport_reset(lport);
 	wait_tmo = jiffies + FC_HOST_RESET_TIMEOUT;
 	while (!fc_fcp_lport_queue_ready(lport) && time_before(jiffies,
 	       wait_tmo))

--- a/drivers/scsi/libfc/fc_frame.c
+++ b/drivers/scsi/libfc/fc_frame.c

@@ -226,7 +226,7 @@ void fc_fill_reply_hdr(struct fc_frame *fp, const struct fc_frame *in_fp,
 
 	sp = fr_seq(in_fp);
 	if (sp)
-		fr_seq(fp) = fr_dev(in_fp)->tt.seq_start_next(sp);
+		fr_seq(fp) = fc_seq_start_next(sp);
 	fc_fill_hdr(fp, in_fp, r_ctl, FC_FCTL_RESP, 0, parm_offset);
 }
 EXPORT_SYMBOL(fc_fill_reply_hdr);

--- a/drivers/scsi/libfc/fc_lport.c
+++ b/drivers/scsi/libfc/fc_lport.c

@@ -149,7 +149,7 @@ static const char *fc_lport_state_names[] = {
  * @offset: The offset into the response data
  */
 struct fc_bsg_info {
-	struct fc_bsg_job *job;
+	struct bsg_job *job;
 	struct fc_lport *lport;
 	u16 rsp_code;
 	struct scatterlist *sg;
@@ -200,7 +200,7 @@ static void fc_lport_rport_callback(struct fc_lport *lport,
 				     "in the DNS or FDMI state, it's in the "
 				     "%d state", rdata->ids.port_id,
 				     lport->state);
-			lport->tt.rport_logoff(rdata);
+			fc_rport_logoff(rdata);
 		}
 		break;
 	case RPORT_EV_LOGO:
@@ -237,23 +237,26 @@ static const char *fc_lport_state(struct fc_lport *lport)
  * @remote_fid:	 The FID of the ptp rport
  * @remote_wwpn: The WWPN of the ptp rport
  * @remote_wwnn: The WWNN of the ptp rport
+ *
+ * Locking Note: The lport lock is expected to be held before calling
+ * this routine.
  */
 static void fc_lport_ptp_setup(struct fc_lport *lport,
 			       u32 remote_fid, u64 remote_wwpn,
 			       u64 remote_wwnn)
 {
-	mutex_lock(&lport->disc.disc_mutex);
 	if (lport->ptp_rdata) {
-		lport->tt.rport_logoff(lport->ptp_rdata);
-		kref_put(&lport->ptp_rdata->kref, lport->tt.rport_destroy);
+		fc_rport_logoff(lport->ptp_rdata);
+		kref_put(&lport->ptp_rdata->kref, fc_rport_destroy);
 	}
-	lport->ptp_rdata = lport->tt.rport_create(lport, remote_fid);
+	mutex_lock(&lport->disc.disc_mutex);
+	lport->ptp_rdata = fc_rport_create(lport, remote_fid);
 	kref_get(&lport->ptp_rdata->kref);
 	lport->ptp_rdata->ids.port_name = remote_wwpn;
 	lport->ptp_rdata->ids.node_name = remote_wwnn;
 	mutex_unlock(&lport->disc.disc_mutex);
 
-	lport->tt.rport_login(lport->ptp_rdata);
+	fc_rport_login(lport->ptp_rdata);
 
 	fc_lport_enter_ready(lport);
 }
@@ -409,7 +412,7 @@ static void fc_lport_recv_rlir_req(struct fc_lport *lport, struct fc_frame *fp)
 	FC_LPORT_DBG(lport, "Received RLIR request while in state %s\n",
 		     fc_lport_state(lport));
 
-	lport->tt.seq_els_rsp_send(fp, ELS_LS_ACC, NULL);
+	fc_seq_els_rsp_send(fp, ELS_LS_ACC, NULL);
 	fc_frame_free(fp);
 }
@@ -478,7 +481,7 @@ static void fc_lport_recv_rnid_req(struct fc_lport *lport,
 	if (!req) {
 		rjt_data.reason = ELS_RJT_LOGIC;
 		rjt_data.explan = ELS_EXPL_NONE;
-		lport->tt.seq_els_rsp_send(in_fp, ELS_LS_RJT, &rjt_data);
+		fc_seq_els_rsp_send(in_fp, ELS_LS_RJT, &rjt_data);
 	} else {
 		fmt = req->rnid_fmt;
 		len = sizeof(*rp);
@@ -518,7 +521,7 @@ static void fc_lport_recv_rnid_req(struct fc_lport *lport,
  */
 static void fc_lport_recv_logo_req(struct fc_lport *lport, struct fc_frame *fp)
 {
-	lport->tt.seq_els_rsp_send(fp, ELS_LS_ACC, NULL);
+	fc_seq_els_rsp_send(fp, ELS_LS_ACC, NULL);
 	fc_lport_enter_reset(lport);
 	fc_frame_free(fp);
 }
@@ -620,9 +623,9 @@ int fc_fabric_logoff(struct fc_lport *lport)
 	lport->tt.disc_stop_final(lport);
 	mutex_lock(&lport->lp_mutex);
 	if (lport->dns_rdata)
-		lport->tt.rport_logoff(lport->dns_rdata);
+		fc_rport_logoff(lport->dns_rdata);
 	mutex_unlock(&lport->lp_mutex);
-	lport->tt.rport_flush_queue();
+	fc_rport_flush_queue();
 	mutex_lock(&lport->lp_mutex);
 	fc_lport_enter_logo(lport);
 	mutex_unlock(&lport->lp_mutex);
@@ -899,7 +902,7 @@ static void fc_lport_recv_els_req(struct fc_lport *lport,
 		/*
 		 * Check opcode.
 		 */
-		recv = lport->tt.rport_recv_req;
+		recv = fc_rport_recv_req;
 		switch (fc_frame_payload_op(fp)) {
 		case ELS_FLOGI:
 			if (!lport->point_to_multipoint)
@@ -941,15 +944,14 @@ struct fc4_prov fc_lport_els_prov = {
 };
 
 /**
- * fc_lport_recv_req() - The generic lport request handler
+ * fc_lport_recv() - The generic lport request handler
  * @lport: The lport that received the request
  * @fp: The frame the request is in
  *
  * Locking Note: This function should not be called with the lport
  *		 lock held because it may grab the lock.
  */
-static void fc_lport_recv_req(struct fc_lport *lport,
-			      struct fc_frame *fp)
+void fc_lport_recv(struct fc_lport *lport, struct fc_frame *fp)
 {
 	struct fc_frame_header *fh = fc_frame_header_get(fp);
 	struct fc_seq *sp = fr_seq(fp);
@@ -978,8 +980,9 @@ drop:
 	FC_LPORT_DBG(lport, "dropping unexpected frame type %x\n", fh->fh_type);
 	fc_frame_free(fp);
 	if (sp)
-		lport->tt.exch_done(sp);
+		fc_exch_done(sp);
 }
+EXPORT_SYMBOL(fc_lport_recv);
 
 /**
  * fc_lport_reset() - Reset a local port
@@ -1007,12 +1010,14 @@ EXPORT_SYMBOL(fc_lport_reset);
  */
 static void fc_lport_reset_locked(struct fc_lport *lport)
 {
-	if (lport->dns_rdata)
-		lport->tt.rport_logoff(lport->dns_rdata);
+	if (lport->dns_rdata) {
+		fc_rport_logoff(lport->dns_rdata);
+		lport->dns_rdata = NULL;
+	}
 
 	if (lport->ptp_rdata) {
-		lport->tt.rport_logoff(lport->ptp_rdata);
-		kref_put(&lport->ptp_rdata->kref, lport->tt.rport_destroy);
+		fc_rport_logoff(lport->ptp_rdata);
+		kref_put(&lport->ptp_rdata->kref, fc_rport_destroy);
 		lport->ptp_rdata = NULL;
 	}
 
@@ -1426,13 +1431,13 @@ static void fc_lport_enter_dns(struct fc_lport *lport)
 	fc_lport_state_enter(lport, LPORT_ST_DNS);
 
 	mutex_lock(&lport->disc.disc_mutex);
-	rdata = lport->tt.rport_create(lport, FC_FID_DIR_SERV);
+	rdata = fc_rport_create(lport, FC_FID_DIR_SERV);
 	mutex_unlock(&lport->disc.disc_mutex);
 	if (!rdata)
 		goto err;
 
 	rdata->ops = &fc_lport_rport_ops;
-	lport->tt.rport_login(rdata);
+	fc_rport_login(rdata);
 	return;
 
 err:
@@ -1543,13 +1548,13 @@ static void fc_lport_enter_fdmi(struct fc_lport *lport)
 	fc_lport_state_enter(lport, LPORT_ST_FDMI);
 
 	mutex_lock(&lport->disc.disc_mutex);
-	rdata = lport->tt.rport_create(lport, FC_FID_MGMT_SERV);
+	rdata = fc_rport_create(lport, FC_FID_MGMT_SERV);
 	mutex_unlock(&lport->disc.disc_mutex);
 	if (!rdata)
 		goto err;
 
 	rdata->ops = &fc_lport_rport_ops;
-	lport->tt.rport_login(rdata);
+	fc_rport_login(rdata);
 	return;
 
 err:
@@ -1772,7 +1777,7 @@ void fc_lport_flogi_resp(struct fc_seq *sp, struct fc_frame *fp,
 		if ((csp_flags & FC_SP_FT_FPORT) == 0) {
 			if (e_d_tov > lport->e_d_tov)
 				lport->e_d_tov = e_d_tov;
-			lport->r_a_tov = 2 * e_d_tov;
+			lport->r_a_tov = 2 * lport->e_d_tov;
 			fc_lport_set_port_id(lport, did, fp);
 			printk(KERN_INFO "host%d: libfc: "
 			       "Port (%6.6x) entered "
@@ -1784,7 +1789,9 @@ void fc_lport_flogi_resp(struct fc_seq *sp, struct fc_frame *fp,
 				get_unaligned_be64(
 					&flp->fl_wwnn));
 		} else {
-			lport->e_d_tov = e_d_tov;
-			lport->r_a_tov = r_a_tov;
+			if (e_d_tov > lport->e_d_tov)
+				lport->e_d_tov = e_d_tov;
+			if (r_a_tov > lport->r_a_tov)
+				lport->r_a_tov = r_a_tov;
 			fc_host_fabric_name(lport->host) =
 				get_unaligned_be64(&flp->fl_wwnn);
@@ -1858,12 +1865,6 @@ EXPORT_SYMBOL(fc_lport_config);
  */
 int fc_lport_init(struct fc_lport *lport)
 {
-	if (!lport->tt.lport_recv)
-		lport->tt.lport_recv = fc_lport_recv_req;
-
-	if (!lport->tt.lport_reset)
-		lport->tt.lport_reset = fc_lport_reset;
-
 	fc_host_port_type(lport->host) = FC_PORTTYPE_NPORT;
 	fc_host_node_name(lport->host) = lport->wwnn;
 	fc_host_port_name(lport->host) = lport->wwpn;
@@ -1900,18 +1901,19 @@ static void fc_lport_bsg_resp(struct fc_seq *sp, struct fc_frame *fp,
 			      void *info_arg)
 {
 	struct fc_bsg_info *info = info_arg;
-	struct fc_bsg_job *job = info->job;
+	struct bsg_job *job = info->job;
+	struct fc_bsg_reply *bsg_reply = job->reply;
 	struct fc_lport *lport = info->lport;
 	struct fc_frame_header *fh;
 	size_t len;
 	void *buf;
 
 	if (IS_ERR(fp)) {
-		job->reply->result = (PTR_ERR(fp) == -FC_EX_CLOSED) ?
+		bsg_reply->result = (PTR_ERR(fp) == -FC_EX_CLOSED) ?
 			-ECONNABORTED : -ETIMEDOUT;
 		job->reply_len = sizeof(uint32_t);
-		job->state_flags |= FC_RQST_STATE_DONE;
-		job->job_done(job);
+		bsg_job_done(job, bsg_reply->result,
+			     bsg_reply->reply_payload_rcv_len);
 		kfree(info);
 		return;
 	}
@@ -1928,25 +1930,25 @@ static void fc_lport_bsg_resp(struct fc_seq *sp, struct fc_frame *fp,
 			(unsigned short)fc_frame_payload_op(fp);
 
 		/* Save the reply status of the job */
-		job->reply->reply_data.ctels_reply.status =
+		bsg_reply->reply_data.ctels_reply.status =
 			(cmd == info->rsp_code) ?
 			FC_CTELS_STATUS_OK : FC_CTELS_STATUS_REJECT;
 	}
 
-	job->reply->reply_payload_rcv_len +=
+	bsg_reply->reply_payload_rcv_len +=
 		fc_copy_buffer_to_sglist(buf, len, info->sg, &info->nents,
 					 &info->offset, NULL);
 
 	if (fr_eof(fp) == FC_EOF_T &&
 	    (ntoh24(fh->fh_f_ctl) & (FC_FC_LAST_SEQ | FC_FC_END_SEQ)) ==
 	    (FC_FC_LAST_SEQ | FC_FC_END_SEQ)) {
-		if (job->reply->reply_payload_rcv_len >
+		if (bsg_reply->reply_payload_rcv_len >
 		    job->reply_payload.payload_len)
-			job->reply->reply_payload_rcv_len =
+			bsg_reply->reply_payload_rcv_len =
 				job->reply_payload.payload_len;
-		job->reply->result = 0;
-		job->state_flags |= FC_RQST_STATE_DONE;
-		job->job_done(job);
+		bsg_reply->result = 0;
+		bsg_job_done(job, bsg_reply->result,
+			     bsg_reply->reply_payload_rcv_len);
 		kfree(info);
 	}
 	fc_frame_free(fp);
@@ -1962,7 +1964,7 @@ static void fc_lport_bsg_resp(struct fc_seq *sp, struct fc_frame *fp,
  * Locking Note: The lport lock is expected to be held before calling
  * this routine.
  */
-static int fc_lport_els_request(struct fc_bsg_job *job,
+static int fc_lport_els_request(struct bsg_job *job,
 				struct fc_lport *lport,
 				u32 did, u32 tov)
 {
@@ -2005,7 +2007,7 @@ static int fc_lport_els_request(struct fc_bsg_job *job,
 	info->nents = job->reply_payload.sg_cnt;
 	info->sg = job->reply_payload.sg_list;
 
-	if (!lport->tt.exch_seq_send(lport, fp, fc_lport_bsg_resp,
+	if (!fc_exch_seq_send(lport, fp, fc_lport_bsg_resp,
 			     NULL, info, tov)) {
 		kfree(info);
 		return -ECOMM;
@@ -2023,7 +2025,7 @@ static int fc_lport_els_request(struct fc_bsg_job *job,
  * Locking Note: The lport lock is expected to be held before calling
  * this routine.
  */
-static int fc_lport_ct_request(struct fc_bsg_job *job,
+static int fc_lport_ct_request(struct bsg_job *job,
 			       struct fc_lport *lport, u32 did, u32 tov)
 {
 	struct fc_bsg_info *info;
@ -2066,7 +2068,7 @@ static int fc_lport_ct_request(struct fc_bsg_job *job,
info->nents = job->reply_payload.sg_cnt; info->nents = job->reply_payload.sg_cnt;
info->sg = job->reply_payload.sg_list; info->sg = job->reply_payload.sg_list;
if (!lport->tt.exch_seq_send(lport, fp, fc_lport_bsg_resp, if (!fc_exch_seq_send(lport, fp, fc_lport_bsg_resp,
NULL, info, tov)) { NULL, info, tov)) {
kfree(info); kfree(info);
return -ECOMM; return -ECOMM;
@ -2079,25 +2081,27 @@ static int fc_lport_ct_request(struct fc_bsg_job *job,
* FC Passthrough requests * FC Passthrough requests
* @job: The BSG passthrough job * @job: The BSG passthrough job
*/ */
int fc_lport_bsg_request(struct fc_bsg_job *job) int fc_lport_bsg_request(struct bsg_job *job)
{ {
struct fc_bsg_request *bsg_request = job->request;
struct fc_bsg_reply *bsg_reply = job->reply;
struct request *rsp = job->req->next_rq; struct request *rsp = job->req->next_rq;
struct Scsi_Host *shost = job->shost; struct Scsi_Host *shost = fc_bsg_to_shost(job);
struct fc_lport *lport = shost_priv(shost); struct fc_lport *lport = shost_priv(shost);
struct fc_rport *rport; struct fc_rport *rport;
struct fc_rport_priv *rdata; struct fc_rport_priv *rdata;
int rc = -EINVAL; int rc = -EINVAL;
u32 did, tov; u32 did, tov;
job->reply->reply_payload_rcv_len = 0; bsg_reply->reply_payload_rcv_len = 0;
if (rsp) if (rsp)
rsp->resid_len = job->reply_payload.payload_len; rsp->resid_len = job->reply_payload.payload_len;
mutex_lock(&lport->lp_mutex); mutex_lock(&lport->lp_mutex);
switch (job->request->msgcode) { switch (bsg_request->msgcode) {
case FC_BSG_RPT_ELS: case FC_BSG_RPT_ELS:
rport = job->rport; rport = fc_bsg_to_rport(job);
if (!rport) if (!rport)
break; break;
@ -2107,7 +2111,7 @@ int fc_lport_bsg_request(struct fc_bsg_job *job)
break; break;
case FC_BSG_RPT_CT: case FC_BSG_RPT_CT:
rport = job->rport; rport = fc_bsg_to_rport(job);
if (!rport) if (!rport)
break; break;
@ -2117,25 +2121,25 @@ int fc_lport_bsg_request(struct fc_bsg_job *job)
break; break;
case FC_BSG_HST_CT: case FC_BSG_HST_CT:
did = ntoh24(job->request->rqst_data.h_ct.port_id); did = ntoh24(bsg_request->rqst_data.h_ct.port_id);
if (did == FC_FID_DIR_SERV) { if (did == FC_FID_DIR_SERV) {
rdata = lport->dns_rdata; rdata = lport->dns_rdata;
if (!rdata) if (!rdata)
break; break;
tov = rdata->e_d_tov; tov = rdata->e_d_tov;
} else { } else {
rdata = lport->tt.rport_lookup(lport, did); rdata = fc_rport_lookup(lport, did);
if (!rdata) if (!rdata)
break; break;
tov = rdata->e_d_tov; tov = rdata->e_d_tov;
kref_put(&rdata->kref, lport->tt.rport_destroy); kref_put(&rdata->kref, fc_rport_destroy);
} }
rc = fc_lport_ct_request(job, lport, did, tov); rc = fc_lport_ct_request(job, lport, did, tov);
break; break;
case FC_BSG_HST_ELS_NOLOGIN: case FC_BSG_HST_ELS_NOLOGIN:
did = ntoh24(job->request->rqst_data.h_els.port_id); did = ntoh24(bsg_request->rqst_data.h_els.port_id);
rc = fc_lport_els_request(job, lport, did, lport->e_d_tov); rc = fc_lport_els_request(job, lport, did, lport->e_d_tov);
break; break;
} }


@@ -648,6 +648,10 @@ struct lpfc_hba {
 #define HBA_FCP_IOQ_FLUSH	0x8000 /* FCP I/O queues being flushed */
 #define HBA_FW_DUMP_OP		0x10000 /* Skips fn reset before FW dump */
 #define HBA_RECOVERABLE_UE	0x20000 /* Firmware supports recoverable UE */
+#define HBA_FORCED_LINK_SPEED	0x40000 /*
+					 * Firmware supports Forced Link Speed
+					 * capability
+					 */
 
 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
 	struct lpfc_dmabuf slim2p;
@@ -746,6 +750,8 @@ struct lpfc_hba {
 	uint32_t cfg_oas_priority;
 	uint32_t cfg_XLanePriority;
 	uint32_t cfg_enable_bg;
+	uint32_t cfg_prot_mask;
+	uint32_t cfg_prot_guard;
 	uint32_t cfg_hostmem_hgp;
 	uint32_t cfg_log_verbose;
 	uint32_t cfg_aer_support;

@@ -2759,18 +2759,14 @@ LPFC_ATTR_R(enable_npiv, 1, 0, 1,
 LPFC_ATTR_R(fcf_failover_policy, 1, 1, 2,
 	"FCF Fast failover=1 Priority failover=2");
 
-int lpfc_enable_rrq = 2;
-module_param(lpfc_enable_rrq, int, S_IRUGO);
-MODULE_PARM_DESC(lpfc_enable_rrq, "Enable RRQ functionality");
-lpfc_param_show(enable_rrq);
 /*
 # lpfc_enable_rrq: Track XRI/OXID reuse after IO failures
 #	0x0 = disabled, XRI/OXID use not tracked.
 #	0x1 = XRI/OXID reuse is timed with ratov, RRQ sent.
 #	0x2 = XRI/OXID reuse is timed with ratov, No RRQ sent.
 */
-lpfc_param_init(enable_rrq, 2, 0, 2);
-static DEVICE_ATTR(lpfc_enable_rrq, S_IRUGO, lpfc_enable_rrq_show, NULL);
+LPFC_ATTR_R(enable_rrq, 2, 0, 2,
+	"Enable RRQ functionality");
 
 /*
 # lpfc_suppress_link_up:  Bring link up at initialization
@@ -2827,14 +2823,8 @@ lpfc_txcmplq_hw_show(struct device *dev, struct device_attribute *attr,
 static DEVICE_ATTR(txcmplq_hw, S_IRUGO,
 		 lpfc_txcmplq_hw_show, NULL);
 
-int lpfc_iocb_cnt = 2;
-module_param(lpfc_iocb_cnt, int, S_IRUGO);
-MODULE_PARM_DESC(lpfc_iocb_cnt,
+LPFC_ATTR_R(iocb_cnt, 2, 1, 5,
 	"Number of IOCBs alloc for ELS, CT, and ABTS: 1k to 5k IOCBs");
-lpfc_param_show(iocb_cnt);
-lpfc_param_init(iocb_cnt, 2, 1, 5);
-static DEVICE_ATTR(lpfc_iocb_cnt, S_IRUGO,
-		 lpfc_iocb_cnt_show, NULL);
 
 /*
 # lpfc_nodev_tmo: If set, it will hold all I/O errors on devices that disappear
@@ -2887,9 +2877,9 @@ lpfc_nodev_tmo_init(struct lpfc_vport *vport, int val)
 		vport->cfg_nodev_tmo = vport->cfg_devloss_tmo;
 		if (val != LPFC_DEF_DEVLOSS_TMO)
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-					 "0407 Ignoring nodev_tmo module "
-					 "parameter because devloss_tmo is "
-					 "set.\n");
+					 "0407 Ignoring lpfc_nodev_tmo module "
+					 "parameter because lpfc_devloss_tmo "
+					 "is set.\n");
 		return 0;
 	}
@@ -2948,8 +2938,8 @@ lpfc_nodev_tmo_set(struct lpfc_vport *vport, int val)
 	if (vport->dev_loss_tmo_changed ||
 	    (lpfc_devloss_tmo != LPFC_DEF_DEVLOSS_TMO)) {
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-				 "0401 Ignoring change to nodev_tmo "
-				 "because devloss_tmo is set.\n");
+				 "0401 Ignoring change to lpfc_nodev_tmo "
+				 "because lpfc_devloss_tmo is set.\n");
 		return 0;
 	}
 	if (val >= LPFC_MIN_DEVLOSS_TMO && val <= LPFC_MAX_DEVLOSS_TMO) {
@@ -2964,7 +2954,7 @@ lpfc_nodev_tmo_set(struct lpfc_vport *vport, int val)
 		return 0;
 	}
 	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-			 "0403 lpfc_nodev_tmo attribute cannot be set to"
+			 "0403 lpfc_nodev_tmo attribute cannot be set to "
 			 "%d, allowed range is [%d, %d]\n",
 			 val, LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
 	return -EINVAL;
@@ -3015,8 +3005,8 @@ lpfc_devloss_tmo_set(struct lpfc_vport *vport, int val)
 	}
 
 	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-			 "0404 lpfc_devloss_tmo attribute cannot be set to"
-			 " %d, allowed range is [%d, %d]\n",
+			 "0404 lpfc_devloss_tmo attribute cannot be set to "
+			 "%d, allowed range is [%d, %d]\n",
 			 val, LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
 	return -EINVAL;
 }
@@ -3204,6 +3194,8 @@ LPFC_VPORT_ATTR_R(scan_down, 1, 0, 1,
 # Set loop mode if you want to run as an NL_Port. Value range is [0,0x6].
 # Default value is 0.
 */
+LPFC_ATTR(topology, 0, 0, 6,
+	"Select Fibre Channel topology");
 
 /**
  * lpfc_topology_set - Set the adapters topology field
@@ -3281,11 +3273,8 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
 			phba->brd_no, val);
 	return -EINVAL;
 }
-static int lpfc_topology = 0;
-module_param(lpfc_topology, int, S_IRUGO);
-MODULE_PARM_DESC(lpfc_topology, "Select Fibre Channel topology");
+
 lpfc_param_show(topology)
-lpfc_param_init(topology, 0, 0, 6)
 static DEVICE_ATTR(lpfc_topology, S_IRUGO | S_IWUSR,
 		lpfc_topology_show, lpfc_topology_store);
@@ -3679,7 +3668,12 @@ lpfc_link_speed_store(struct device *dev, struct device_attribute *attr,
 	int nolip = 0;
 	const char *val_buf = buf;
 	int err;
-	uint32_t prev_val;
+	uint32_t prev_val, if_type;
+
+	if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf);
+	if (if_type == LPFC_SLI_INTF_IF_TYPE_2 &&
+	    phba->hba_flag & HBA_FORCED_LINK_SPEED)
+		return -EPERM;
 
 	if (!strncmp(buf, "nolip ", strlen("nolip "))) {
 		nolip = 1;
@@ -3789,6 +3783,9 @@ static DEVICE_ATTR(lpfc_link_speed, S_IRUGO | S_IWUSR,
 #       1  = aer supported and enabled (default)
 # Value range is [0,1]. Default value is 1.
 */
+LPFC_ATTR(aer_support, 1, 0, 1,
+	"Enable PCIe device AER support");
+lpfc_param_show(aer_support)
 
 /**
  * lpfc_aer_support_store - Set the adapter for aer support
@@ -3871,46 +3868,6 @@ lpfc_aer_support_store(struct device *dev, struct device_attribute *attr,
 	return rc;
 }
 
-static int lpfc_aer_support = 1;
-module_param(lpfc_aer_support, int, S_IRUGO);
-MODULE_PARM_DESC(lpfc_aer_support, "Enable PCIe device AER support");
-lpfc_param_show(aer_support)
-
-/**
- * lpfc_aer_support_init - Set the initial adapters aer support flag
- * @phba: lpfc_hba pointer.
- * @val: enable aer or disable aer flag.
- *
- * Description:
- * If val is in a valid range [0,1], then set the adapter's initial
- * cfg_aer_support field. It will be up to the driver's probe_one
- * routine to determine whether the device's AER support can be set
- * or not.
- *
- * Notes:
- * If the value is not in range log a kernel error message, and
- * choose the default value of setting AER support and return.
- *
- * Returns:
- * zero if val saved.
- * -EINVAL val out of range
- **/
-static int
-lpfc_aer_support_init(struct lpfc_hba *phba, int val)
-{
-	if (val == 0 || val == 1) {
-		phba->cfg_aer_support = val;
-		return 0;
-	}
-	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"2712 lpfc_aer_support attribute value %d out "
-			"of range, allowed values are 0|1, setting it "
-			"to default value of 1\n", val);
-	/* By default, try to enable AER on a device */
-	phba->cfg_aer_support = 1;
-	return -EINVAL;
-}
-
 static DEVICE_ATTR(lpfc_aer_support, S_IRUGO | S_IWUSR,
 		lpfc_aer_support_show, lpfc_aer_support_store);
@@ -4055,39 +4012,10 @@ lpfc_sriov_nr_virtfn_store(struct device *dev, struct device_attribute *attr,
 	return rc;
 }
 
-static int lpfc_sriov_nr_virtfn = LPFC_DEF_VFN_PER_PFN;
-module_param(lpfc_sriov_nr_virtfn, int, S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(lpfc_sriov_nr_virtfn, "Enable PCIe device SR-IOV virtual fn");
+LPFC_ATTR(sriov_nr_virtfn, LPFC_DEF_VFN_PER_PFN, 0, LPFC_MAX_VFN_PER_PFN,
+	"Enable PCIe device SR-IOV virtual fn");
+
 lpfc_param_show(sriov_nr_virtfn)
-
-/**
- * lpfc_sriov_nr_virtfn_init - Set the initial sr-iov virtual function enable
- * @phba: lpfc_hba pointer.
- * @val: link speed value.
- *
- * Description:
- * If val is in a valid range [0,255], then set the adapter's initial
- * cfg_sriov_nr_virtfn field. If it's greater than the maximum, the maximum
- * number shall be used instead. It will be up to the driver's probe_one
- * routine to determine whether the device's SR-IOV is supported or not.
- *
- * Returns:
- * zero if val saved.
- * -EINVAL val out of range
- **/
-static int
-lpfc_sriov_nr_virtfn_init(struct lpfc_hba *phba, int val)
-{
-	if (val >= 0 && val <= LPFC_MAX_VFN_PER_PFN) {
-		phba->cfg_sriov_nr_virtfn = val;
-		return 0;
-	}
-
-	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"3017 Enabling %d virtual functions is not "
-			"allowed.\n", val);
-	return -EINVAL;
-}
 static DEVICE_ATTR(lpfc_sriov_nr_virtfn, S_IRUGO | S_IWUSR,
 		lpfc_sriov_nr_virtfn_show, lpfc_sriov_nr_virtfn_store);
@@ -4251,7 +4179,8 @@ lpfc_fcp_imax_init(struct lpfc_hba *phba, int val)
 	}
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"3016 fcp_imax: %d out of range, using default\n", val);
+			"3016 lpfc_fcp_imax: %d out of range, using default\n",
+			val);
 	phba->cfg_fcp_imax = LPFC_DEF_IMAX;
 
 	return 0;
@@ -4401,8 +4330,8 @@ lpfc_fcp_cpu_map_init(struct lpfc_hba *phba, int val)
 	}
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"3326 fcp_cpu_map: %d out of range, using default\n",
-			val);
+			"3326 lpfc_fcp_cpu_map: %d out of range, using "
+			"default\n", val);
 	phba->cfg_fcp_cpu_map = LPFC_DRIVER_CPU_MAP;
 
 	return 0;
@@ -4441,12 +4370,10 @@ LPFC_VPORT_ATTR_RW(first_burst_size, 0, 0, 65536,
 # to limit the I/O completion time to the parameter value.
 # The value is set in milliseconds.
 */
-static int lpfc_max_scsicmpl_time;
-module_param(lpfc_max_scsicmpl_time, int, S_IRUGO);
-MODULE_PARM_DESC(lpfc_max_scsicmpl_time,
+LPFC_VPORT_ATTR(max_scsicmpl_time, 0, 0, 60000,
 	"Use command completion time to control queue depth");
+
 lpfc_vport_param_show(max_scsicmpl_time);
-lpfc_vport_param_init(max_scsicmpl_time, 0, 0, 60000);
 static int
 lpfc_max_scsicmpl_time_set(struct lpfc_vport *vport, int val)
 {
@@ -4691,12 +4618,15 @@ unsigned int lpfc_fcp_look_ahead = LPFC_LOOK_AHEAD_OFF;
 #	HBA supports DIX Type 1: Host to HBA  Type 1 protection
 #
 */
-unsigned int lpfc_prot_mask = SHOST_DIF_TYPE1_PROTECTION |
-			      SHOST_DIX_TYPE0_PROTECTION |
-			      SHOST_DIX_TYPE1_PROTECTION;
-
-module_param(lpfc_prot_mask, uint, S_IRUGO);
-MODULE_PARM_DESC(lpfc_prot_mask, "host protection mask");
+LPFC_ATTR(prot_mask,
+	(SHOST_DIF_TYPE1_PROTECTION |
+	 SHOST_DIX_TYPE0_PROTECTION |
+	 SHOST_DIX_TYPE1_PROTECTION),
+	0,
+	(SHOST_DIF_TYPE1_PROTECTION |
+	 SHOST_DIX_TYPE0_PROTECTION |
+	 SHOST_DIX_TYPE1_PROTECTION),
+	"T10-DIF host protection capabilities mask");
 
 /*
 # lpfc_prot_guard: i
@@ -4706,9 +4636,9 @@ MODULE_PARM_DESC(lpfc_prot_mask, "host protection mask");
 #	- Default will result in registering capabilities for all guard types
 #
 */
-unsigned char lpfc_prot_guard = SHOST_DIX_GUARD_IP;
-module_param(lpfc_prot_guard, byte, S_IRUGO);
-MODULE_PARM_DESC(lpfc_prot_guard, "host protection guard type");
+LPFC_ATTR(prot_guard,
+	SHOST_DIX_GUARD_IP, SHOST_DIX_GUARD_CRC, SHOST_DIX_GUARD_IP,
+	"T10-DIF host protection guard type");
 
 /*
  * Delay initial NPort discovery when Clean Address bit is cleared in
@@ -5828,6 +5758,8 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	phba->cfg_oas_flags = 0;
 	phba->cfg_oas_priority = 0;
 	lpfc_enable_bg_init(phba, lpfc_enable_bg);
+	lpfc_prot_mask_init(phba, lpfc_prot_mask);
+	lpfc_prot_guard_init(phba, lpfc_prot_guard);
 	if (phba->sli_rev == LPFC_SLI_REV4)
 		phba->cfg_poll = 0;
 	else


@@ -35,6 +35,7 @@
 #define LPFC_BSG_VENDOR_MENLO_DATA		9
 #define LPFC_BSG_VENDOR_DIAG_MODE_END		10
 #define LPFC_BSG_VENDOR_LINK_DIAG_TEST		11
+#define LPFC_BSG_VENDOR_FORCED_LINK_SPEED	14
 
 struct set_ct_event {
 	uint32_t command;
@@ -284,6 +285,15 @@ struct lpfc_sli_config_mbox {
 	} un;
 };
 
+#define LPFC_FORCED_LINK_SPEED_NOT_SUPPORTED	0
+#define LPFC_FORCED_LINK_SPEED_SUPPORTED	1
+struct get_forced_link_speed_support {
+	uint32_t command;
+};
+struct forced_link_speed_support_reply {
+	uint8_t supported;
+};
+
 /* driver only */
 #define SLI_CONFIG_NOT_HANDLED		0
 #define SLI_CONFIG_HANDLED		1

@@ -397,8 +397,6 @@ extern spinlock_t _dump_buf_lock;
 extern int _dump_buf_done;
 extern spinlock_t pgcnt_lock;
 extern unsigned int pgcnt;
-extern unsigned int lpfc_prot_mask;
-extern unsigned char lpfc_prot_guard;
 extern unsigned int lpfc_fcp_look_ahead;
 
 /* Interface exported by fabric iocb scheduler */
@@ -431,8 +429,8 @@ struct lpfc_sglq *__lpfc_get_active_sglq(struct lpfc_hba *, uint16_t);
 #define HBA_EVENT_LINK_DOWN		3
 
 /* functions to support SGIOv4/bsg interface */
-int lpfc_bsg_request(struct fc_bsg_job *);
-int lpfc_bsg_timeout(struct fc_bsg_job *);
+int lpfc_bsg_request(struct bsg_job *);
+int lpfc_bsg_timeout(struct bsg_job *);
 int lpfc_bsg_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
 			    struct lpfc_iocbq *);
 int lpfc_bsg_ct_unsol_abort(struct lpfc_hba *, struct hbq_dmabuf *);

@@ -7610,7 +7610,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	/* reject till our FLOGI completes */
 	if ((vport->port_state < LPFC_FABRIC_CFG_LINK) &&
 	    (cmd != ELS_CMD_FLOGI)) {
-		rjt_err = LSRJT_UNABLE_TPC;
+		rjt_err = LSRJT_LOGICAL_BSY;
 		rjt_exp = LSEXP_NOTHING_MORE;
 		goto lsrjt;
 	}

@@ -921,6 +921,7 @@ struct mbox_header {
 #define LPFC_MBOX_OPCODE_GET_PORT_NAME		0x4D
 #define LPFC_MBOX_OPCODE_MQ_CREATE_EXT		0x5A
 #define LPFC_MBOX_OPCODE_GET_VPD_DATA		0x5B
+#define LPFC_MBOX_OPCODE_SET_HOST_DATA		0x5D
 #define LPFC_MBOX_OPCODE_SEND_ACTIVATION	0x73
 #define LPFC_MBOX_OPCODE_RESET_LICENSES		0x74
 #define LPFC_MBOX_OPCODE_GET_RSRC_EXTENT_INFO	0x9A
@@ -2289,6 +2290,9 @@ struct lpfc_mbx_read_config {
 #define lpfc_mbx_rd_conf_r_a_tov_SHIFT		0
 #define lpfc_mbx_rd_conf_r_a_tov_MASK		0x0000FFFF
 #define lpfc_mbx_rd_conf_r_a_tov_WORD		word6
+#define lpfc_mbx_rd_conf_link_speed_SHIFT	16
+#define lpfc_mbx_rd_conf_link_speed_MASK	0x0000FFFF
+#define lpfc_mbx_rd_conf_link_speed_WORD	word6
 	uint32_t rsvd_7;
 	uint32_t rsvd_8;
 	uint32_t word9;
@@ -2919,6 +2923,16 @@ struct lpfc_mbx_set_feature {
 };
 
+#define LPFC_SET_HOST_OS_DRIVER_VERSION		0x2
+struct lpfc_mbx_set_host_data {
+#define LPFC_HOST_OS_DRIVER_VERSION_SIZE	48
+	struct mbox_header header;
+	uint32_t param_id;
+	uint32_t param_len;
+	uint8_t  data[LPFC_HOST_OS_DRIVER_VERSION_SIZE];
+};
+
 struct lpfc_mbx_get_sli4_parameters {
 	struct mbox_header header;
 	struct lpfc_sli4_parameters sli4_parameters;
@@ -3313,6 +3327,7 @@ struct lpfc_mqe {
 		struct lpfc_mbx_get_port_name get_port_name;
 		struct lpfc_mbx_set_feature set_feature;
 		struct lpfc_mbx_memory_dump_type3 mem_dump_type3;
+		struct lpfc_mbx_set_host_data set_host_data;
 		struct lpfc_mbx_nop nop;
 	} un;
 };
@@ -3981,7 +3996,8 @@ union lpfc_wqe128 {
 	struct gen_req64_wqe gen_req;
 };
 
-#define LPFC_GROUP_OJECT_MAGIC_NUM		0xfeaa0001
+#define LPFC_GROUP_OJECT_MAGIC_G5		0xfeaa0001
+#define LPFC_GROUP_OJECT_MAGIC_G6		0xfeaa0003
 #define LPFC_FILE_TYPE_GROUP			0xf7
 #define LPFC_FILE_ID_GROUP			0xa2
 struct lpfc_grp_hdr {

@@ -6279,34 +6279,36 @@ lpfc_setup_bg(struct lpfc_hba *phba, struct Scsi_Host *shost)
 	uint32_t old_guard;
 
 	int pagecnt = 10;
-	if (lpfc_prot_mask && lpfc_prot_guard) {
+	if (phba->cfg_prot_mask && phba->cfg_prot_guard) {
 		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
 				"1478 Registering BlockGuard with the "
 				"SCSI layer\n");
 
-		old_mask = lpfc_prot_mask;
-		old_guard = lpfc_prot_guard;
+		old_mask = phba->cfg_prot_mask;
+		old_guard = phba->cfg_prot_guard;
 
 		/* Only allow supported values */
-		lpfc_prot_mask &= (SHOST_DIF_TYPE1_PROTECTION |
+		phba->cfg_prot_mask &= (SHOST_DIF_TYPE1_PROTECTION |
 			SHOST_DIX_TYPE0_PROTECTION |
 			SHOST_DIX_TYPE1_PROTECTION);
-		lpfc_prot_guard &= (SHOST_DIX_GUARD_IP | SHOST_DIX_GUARD_CRC);
+		phba->cfg_prot_guard &= (SHOST_DIX_GUARD_IP |
+					 SHOST_DIX_GUARD_CRC);
 
 		/* DIF Type 1 protection for profiles AST1/C1 is end to end */
-		if (lpfc_prot_mask == SHOST_DIX_TYPE1_PROTECTION)
-			lpfc_prot_mask |= SHOST_DIF_TYPE1_PROTECTION;
+		if (phba->cfg_prot_mask == SHOST_DIX_TYPE1_PROTECTION)
+			phba->cfg_prot_mask |= SHOST_DIF_TYPE1_PROTECTION;
 
-		if (lpfc_prot_mask && lpfc_prot_guard) {
-			if ((old_mask != lpfc_prot_mask) ||
-			    (old_guard != lpfc_prot_guard))
+		if (phba->cfg_prot_mask && phba->cfg_prot_guard) {
+			if ((old_mask != phba->cfg_prot_mask) ||
+			    (old_guard != phba->cfg_prot_guard))
 				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 					"1475 Registering BlockGuard with the "
 					"SCSI layer: mask %d guard %d\n",
-					lpfc_prot_mask, lpfc_prot_guard);
+					phba->cfg_prot_mask,
+					phba->cfg_prot_guard);
 
-			scsi_host_set_prot(shost, lpfc_prot_mask);
-			scsi_host_set_guard(shost, lpfc_prot_guard);
+			scsi_host_set_prot(shost, phba->cfg_prot_mask);
+			scsi_host_set_guard(shost, phba->cfg_prot_guard);
 		} else
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"1479 Not Registering BlockGuard with the SCSI "
@@ -6929,6 +6931,8 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
 	struct lpfc_mbx_get_func_cfg *get_func_cfg;
 	struct lpfc_rsrc_desc_fcfcoe *desc;
 	char *pdesc_0;
+	uint16_t forced_link_speed;
+	uint32_t if_type;
 	int length, i, rc = 0, rc2;
 
 	pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@@ -7022,6 +7026,58 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
 	if (rc)
 		goto read_cfg_out;
 
+	/* Update link speed if forced link speed is supported */
+	if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf);
+	if (if_type == LPFC_SLI_INTF_IF_TYPE_2) {
+		forced_link_speed =
+			bf_get(lpfc_mbx_rd_conf_link_speed, rd_config);
+		if (forced_link_speed) {
+			phba->hba_flag |= HBA_FORCED_LINK_SPEED;
+
+			switch (forced_link_speed) {
+			case LINK_SPEED_1G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_1G;
+				break;
+			case LINK_SPEED_2G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_2G;
+				break;
+			case LINK_SPEED_4G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_4G;
+				break;
+			case LINK_SPEED_8G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_8G;
+				break;
+			case LINK_SPEED_10G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_10G;
+				break;
+			case LINK_SPEED_16G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_16G;
+				break;
+			case LINK_SPEED_32G:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_32G;
+				break;
+			case 0xffff:
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_AUTO;
+				break;
+			default:
+				lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+						"0047 Unrecognized link "
+						"speed : %d\n",
+						forced_link_speed);
+				phba->cfg_link_speed =
+					LPFC_USER_LINK_SPEED_AUTO;
+			}
+		}
+	}
+
 	/* Reset the DFT_HBA_Q_DEPTH to the max xri  */
 	length = phba->sli4_hba.max_cfg_param.max_xri -
 			lpfc_sli4_get_els_iocb_cnt(phba);
@@ -7256,6 +7312,7 @@ int
 lpfc_sli4_queue_create(struct lpfc_hba *phba)
 {
 	struct lpfc_queue *qdesc;
+	uint32_t wqesize;
 	int idx;
 
 	/*
@@ -7340,15 +7397,10 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 		phba->sli4_hba.fcp_cq[idx] = qdesc;
 
 		/* Create Fast Path FCP WQs */
-		if (phba->fcp_embed_io) {
-			qdesc = lpfc_sli4_queue_alloc(phba,
-						      LPFC_WQE128_SIZE,
-						      LPFC_WQE128_DEF_COUNT);
-		} else {
-			qdesc = lpfc_sli4_queue_alloc(phba,
-						      phba->sli4_hba.wq_esize,
-						      phba->sli4_hba.wq_ecount);
-		}
+		wqesize = (phba->fcp_embed_io) ?
+			LPFC_WQE128_SIZE : phba->sli4_hba.wq_esize;
+		qdesc = lpfc_sli4_queue_alloc(phba, wqesize,
+					      phba->sli4_hba.wq_ecount);
 		if (!qdesc) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 					"0503 Failed allocate fast-path FCP "
@@ -10260,6 +10312,7 @@ lpfc_write_firmware(const struct firmware *fw, void *context)
 	int i, rc = 0;
 	struct lpfc_dmabuf *dmabuf, *next;
 	uint32_t offset = 0, temp_offset = 0;
+	uint32_t magic_number, ftype, fid, fsize;
 
 	/* It can be null in no-wait mode, sanity check */
 	if (!fw) {
@@ -10268,18 +10321,19 @@ lpfc_write_firmware(const struct firmware *fw, void *context)
 	}
 	image = (struct lpfc_grp_hdr *)fw->data;
 
+	magic_number = be32_to_cpu(image->magic_number);
+	ftype = bf_get_be32(lpfc_grp_hdr_file_type, image);
+	fid = bf_get_be32(lpfc_grp_hdr_id, image),
+	fsize = be32_to_cpu(image->size);
+
 	INIT_LIST_HEAD(&dma_buffer_list);
-	if ((be32_to_cpu(image->magic_number) != LPFC_GROUP_OJECT_MAGIC_NUM) ||
-	    (bf_get_be32(lpfc_grp_hdr_file_type, image) !=
-	     LPFC_FILE_TYPE_GROUP) ||
-	    (bf_get_be32(lpfc_grp_hdr_id, image) != LPFC_FILE_ID_GROUP) ||
-	    (be32_to_cpu(image->size) != fw->size)) {
+	if ((magic_number != LPFC_GROUP_OJECT_MAGIC_G5 &&
+	     magic_number != LPFC_GROUP_OJECT_MAGIC_G6) ||
+	    ftype != LPFC_FILE_TYPE_GROUP || fsize != fw->size) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"3022 Invalid FW image found. "
-				"Magic:%x Type:%x ID:%x\n",
-				be32_to_cpu(image->magic_number),
-				bf_get_be32(lpfc_grp_hdr_file_type, image),
-				bf_get_be32(lpfc_grp_hdr_id, image));
+				"Magic:%x Type:%x ID:%x Size %d %zd\n",
+				magic_number, ftype, fid, fsize, fw->size);
 		rc = -EINVAL;
 		goto release_out;
 	}

@ -413,15 +413,13 @@ lpfc_new_scsi_buf_s3(struct lpfc_vport *vport, int num_to_alloc)
* struct fcp_cmnd, struct fcp_rsp and the number of bde's * struct fcp_cmnd, struct fcp_rsp and the number of bde's
* necessary to support the sg_tablesize. * necessary to support the sg_tablesize.
*/ */
psb->data = pci_pool_alloc(phba->lpfc_scsi_dma_buf_pool, psb->data = pci_pool_zalloc(phba->lpfc_scsi_dma_buf_pool,
GFP_KERNEL, &psb->dma_handle); GFP_KERNEL, &psb->dma_handle);
if (!psb->data) { if (!psb->data) {
kfree(psb); kfree(psb);
break; break;
} }
/* Initialize virtual ptrs to dma_buf region. */
memset(psb->data, 0, phba->cfg_sg_dma_buf_size);
/* Allocate iotag for psb->cur_iocbq. */ /* Allocate iotag for psb->cur_iocbq. */
iotag = lpfc_sli_next_iotag(phba, &psb->cur_iocbq); iotag = lpfc_sli_next_iotag(phba, &psb->cur_iocbq);
@@ -607,7 +605,7 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
 }
 
 /**
- * lpfc_sli4_post_scsi_sgl_list - Psot blocks of scsi buffer sgls from a list
+ * lpfc_sli4_post_scsi_sgl_list - Post blocks of scsi buffer sgls from a list
  * @phba: pointer to lpfc hba data structure.
  * @post_sblist: pointer to the scsi buffer list.
  *
@@ -736,7 +734,7 @@ lpfc_sli4_post_scsi_sgl_list(struct lpfc_hba *phba,
 }
 
 /**
- * lpfc_sli4_repost_scsi_sgl_list - Repsot all the allocated scsi buffer sgls
+ * lpfc_sli4_repost_scsi_sgl_list - Repost all the allocated scsi buffer sgls
  * @phba: pointer to lpfc hba data structure.
  *
  * This routine walks the list of scsi buffers that have been allocated and
@@ -821,13 +819,12 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
 		 * for the struct fcp_cmnd, struct fcp_rsp and the number
 		 * of bde's necessary to support the sg_tablesize.
 		 */
-		psb->data = pci_pool_alloc(phba->lpfc_scsi_dma_buf_pool,
+		psb->data = pci_pool_zalloc(phba->lpfc_scsi_dma_buf_pool,
 						GFP_KERNEL, &psb->dma_handle);
 		if (!psb->data) {
 			kfree(psb);
 			break;
 		}
-		memset(psb->data, 0, phba->cfg_sg_dma_buf_size);
 
 		/*
 		 * 4K Page alignment is CRITICAL to BlockGuard, double check
@@ -857,7 +854,7 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
 				      psb->data, psb->dma_handle);
 			kfree(psb);
 			lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-					"3368 Failed to allocated IOTAG for"
+					"3368 Failed to allocate IOTAG for"
 					" XRI:0x%x\n", lxri);
 			lpfc_sli4_free_xri(phba, lxri);
 			break;
@@ -1136,7 +1133,7 @@ lpfc_release_scsi_buf(struct lpfc_hba *phba, struct lpfc_scsi_buf *psb)
  *
  * This routine does the pci dma mapping for scatter-gather list of scsi cmnd
  * field of @lpfc_cmd for device with SLI-3 interface spec. This routine scans
- * through sg elements and format the bdea. This routine also initializes all
+ * through sg elements and format the bde. This routine also initializes all
  * IOCB fields which are dependent on scsi command request buffer.
  *
  * Return codes:
@@ -1269,13 +1266,16 @@ lpfc_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 
-/* Return if if error injection is detected by Initiator */
+/* Return BG_ERR_INIT if error injection is detected by Initiator */
 #define BG_ERR_INIT	0x1
-/* Return if if error injection is detected by Target */
+/* Return BG_ERR_TGT if error injection is detected by Target */
 #define BG_ERR_TGT	0x2
-/* Return if if swapping CSUM<-->CRC is required for error injection */
+/* Return BG_ERR_SWAP if swapping CSUM<-->CRC is required for error injection */
 #define BG_ERR_SWAP	0x10
-/* Return if disabling Guard/Ref/App checking is required for error injection */
+/**
+ * Return BG_ERR_CHECK if disabling Guard/Ref/App checking is required for
+ * error injection
+ **/
 #define BG_ERR_CHECK	0x20
 
 /**
@@ -4139,13 +4139,13 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 
 	lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
 
-	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
-	cmd->scsi_done(cmd);
-
 	spin_lock_irqsave(&phba->hbalock, flags);
 	lpfc_cmd->pCmd = NULL;
 	spin_unlock_irqrestore(&phba->hbalock, flags);
 
+	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
+	cmd->scsi_done(cmd);
+
 	/*
 	 * If there is a thread waiting for command completion
 	 * wake up the thread.
@@ -4822,7 +4822,7 @@ wait_for_cmpl:
 			ret = FAILED;
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
 					 "0748 abort handler timed out waiting "
-					 "for abortng I/O (xri:x%x) to complete: "
+					 "for aborting I/O (xri:x%x) to complete: "
 					 "ret %#x, ID %d, LUN %llu\n",
 					 iocb->sli4_xritag, ret,
 					 cmnd->device->id, cmnd->device->lun);
@@ -4945,26 +4945,30 @@ lpfc_check_fcp_rsp(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd)
  *  0x2002 - Success.
  **/
 static int
-lpfc_send_taskmgmt(struct lpfc_vport *vport, struct lpfc_rport_data *rdata,
-		   unsigned  tgt_id, uint64_t lun_id,
+lpfc_send_taskmgmt(struct lpfc_vport *vport, struct scsi_cmnd *cmnd,
+		   unsigned int tgt_id, uint64_t lun_id,
 		   uint8_t task_mgmt_cmd)
 {
 	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_scsi_buf *lpfc_cmd;
 	struct lpfc_iocbq *iocbq;
 	struct lpfc_iocbq *iocbqrsp;
-	struct lpfc_nodelist *pnode = rdata->pnode;
+	struct lpfc_rport_data *rdata;
+	struct lpfc_nodelist *pnode;
 	int ret;
 	int status;
 
-	if (!pnode || !NLP_CHK_NODE_ACT(pnode))
+	rdata = lpfc_rport_data_from_scsi_device(cmnd->device);
+	if (!rdata || !rdata->pnode || !NLP_CHK_NODE_ACT(rdata->pnode))
 		return FAILED;
+	pnode = rdata->pnode;
 
-	lpfc_cmd = lpfc_get_scsi_buf(phba, rdata->pnode);
+	lpfc_cmd = lpfc_get_scsi_buf(phba, pnode);
 	if (lpfc_cmd == NULL)
 		return FAILED;
 	lpfc_cmd->timeout = phba->cfg_task_mgmt_tmo;
 	lpfc_cmd->rdata = rdata;
+	lpfc_cmd->pCmd = cmnd;
 
 	status = lpfc_scsi_prep_task_mgmt_cmd(vport, lpfc_cmd, lun_id,
 					      task_mgmt_cmd);
@@ -5171,7 +5175,7 @@ lpfc_device_reset_handler(struct scsi_cmnd *cmnd)
 	fc_host_post_vendor_event(shost, fc_get_event_number(),
 		sizeof(scsi_event), (char *)&scsi_event, LPFC_NL_VENDOR_ID);
 
-	status = lpfc_send_taskmgmt(vport, rdata, tgt_id, lun_id,
+	status = lpfc_send_taskmgmt(vport, cmnd, tgt_id, lun_id,
 						FCP_LUN_RESET);
 
 	lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
@@ -5249,7 +5253,7 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd)
 	fc_host_post_vendor_event(shost, fc_get_event_number(),
 		sizeof(scsi_event), (char *)&scsi_event, LPFC_NL_VENDOR_ID);
 
-	status = lpfc_send_taskmgmt(vport, rdata, tgt_id, lun_id,
+	status = lpfc_send_taskmgmt(vport, cmnd, tgt_id, lun_id,
 					FCP_TARGET_RESET);
 
 	lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
@@ -5328,7 +5332,7 @@ lpfc_bus_reset_handler(struct scsi_cmnd *cmnd)
 		if (!match)
 			continue;
 
-		status = lpfc_send_taskmgmt(vport, ndlp->rport->dd_data,
+		status = lpfc_send_taskmgmt(vport, cmnd,
 					i, 0, FCP_TARGET_RESET);
 
 		if (status != SUCCESS) {


@@ -47,6 +47,7 @@
 #include "lpfc_compat.h"
 #include "lpfc_debugfs.h"
 #include "lpfc_vport.h"
+#include "lpfc_version.h"
 
 /* There are only four IOCB completion types. */
 typedef enum _lpfc_iocb_type {
@@ -2678,15 +2679,16 @@ lpfc_sli_iocbq_lookup(struct lpfc_hba *phba,
 
 	if (iotag != 0 && iotag <= phba->sli.last_iotag) {
 		cmd_iocb = phba->sli.iocbq_lookup[iotag];
-		list_del_init(&cmd_iocb->list);
 		if (cmd_iocb->iocb_flag & LPFC_IO_ON_TXCMPLQ) {
+			/* remove from txcmpl queue list */
+			list_del_init(&cmd_iocb->list);
 			cmd_iocb->iocb_flag &= ~LPFC_IO_ON_TXCMPLQ;
+			return cmd_iocb;
 		}
-		return cmd_iocb;
 	}
+
 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-			"0317 iotag x%x is out off "
+			"0317 iotag x%x is out of "
 			"range: max iotag x%x wd0 x%x\n",
 			iotag, phba->sli.last_iotag,
 			*(((uint32_t *) &prspiocb->iocb) + 7));
@@ -2721,8 +2723,9 @@ lpfc_sli_iocbq_lookup_by_tag(struct lpfc_hba *phba,
 			return cmd_iocb;
 		}
 	}
+
 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-			"0372 iotag x%x is out off range: max iotag (x%x)\n",
+			"0372 iotag x%x is out of range: max iotag (x%x)\n",
 			iotag, phba->sli.last_iotag);
 	return NULL;
 }
@@ -6291,6 +6294,25 @@ lpfc_sli4_repost_els_sgl_list(struct lpfc_hba *phba)
 	return 0;
 }
 
+void
+lpfc_set_host_data(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
+{
+	uint32_t len;
+
+	len = sizeof(struct lpfc_mbx_set_host_data) -
+		sizeof(struct lpfc_sli4_cfg_mhdr);
+	lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_COMMON,
+			 LPFC_MBOX_OPCODE_SET_HOST_DATA, len,
+			 LPFC_SLI4_MBX_EMBED);
+
+	mbox->u.mqe.un.set_host_data.param_id = LPFC_SET_HOST_OS_DRIVER_VERSION;
+	mbox->u.mqe.un.set_host_data.param_len = 8;
+	snprintf(mbox->u.mqe.un.set_host_data.data,
+		 LPFC_HOST_OS_DRIVER_VERSION_SIZE,
+		 "Linux %s v"LPFC_DRIVER_VERSION,
+		 (phba->hba_flag & HBA_FCOE_MODE) ? "FCoE" : "FC");
+}
+
 /**
  * lpfc_sli4_hba_setup - SLI4 device intialization PCI function
  * @phba: Pointer to HBA context object.
@@ -6542,6 +6564,15 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
 		goto out_free_mbox;
 	}
 
+	lpfc_set_host_data(phba, mboxq);
+
+	rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
+	if (rc) {
+		lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI,
+				"2134 Failed to set host os driver version %x",
+				rc);
+	}
+
 	/* Read the port's service parameters. */
 	rc = lpfc_read_sparam(phba, mboxq, vport->vpi);
 	if (rc) {
@@ -11781,6 +11812,8 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lpfc_hba *phba,
 	/* Look up the ELS command IOCB and create pseudo response IOCB */
 	cmdiocbq = lpfc_sli_iocbq_lookup_by_tag(phba, pring,
 				bf_get(lpfc_wcqe_c_request_tag, wcqe));
+	/* Put the iocb back on the txcmplq */
+	lpfc_sli_ringtxcmpl_put(phba, pring, cmdiocbq);
 	spin_unlock_irqrestore(&pring->ring_lock, iflags);
 
 	if (unlikely(!cmdiocbq)) {


@@ -18,7 +18,7 @@
  * included with this package.                                     *
  *******************************************************************/
 
-#define LPFC_DRIVER_VERSION "11.2.0.0."
+#define LPFC_DRIVER_VERSION "11.2.0.2"
 #define LPFC_DRIVER_NAME		"lpfc"
 
 /* Used for SLI 2/3 */


@@ -28,17 +28,15 @@
 
 /* Definitions for the core NCR5380 driver. */
 
-#define NCR5380_implementation_fields   unsigned char *pdma_base; \
-                                        int pdma_residual
+#define NCR5380_implementation_fields   int pdma_residual
 
-#define NCR5380_read(reg)               macscsi_read(instance, reg)
-#define NCR5380_write(reg, value)       macscsi_write(instance, reg, value)
+#define NCR5380_read(reg)               in_8(hostdata->io + ((reg) << 4))
+#define NCR5380_write(reg, value)       out_8(hostdata->io + ((reg) << 4), value)
 
-#define NCR5380_dma_xfer_len(instance, cmd, phase) \
-        macscsi_dma_xfer_len(instance, cmd)
+#define NCR5380_dma_xfer_len            macscsi_dma_xfer_len
 #define NCR5380_dma_recv_setup          macscsi_pread
 #define NCR5380_dma_send_setup          macscsi_pwrite
-#define NCR5380_dma_residual(instance)  (hostdata->pdma_residual)
+#define NCR5380_dma_residual            macscsi_dma_residual
 
 #define NCR5380_intr                    macscsi_intr
 #define NCR5380_queue_command           macscsi_queue_command
@@ -61,20 +59,6 @@ module_param(setup_hostid, int, 0);
 static int setup_toshiba_delay = -1;
 module_param(setup_toshiba_delay, int, 0);
 
-/*
- * NCR 5380 register access functions
- */
-
-static inline char macscsi_read(struct Scsi_Host *instance, int reg)
-{
-	return in_8(instance->base + (reg << 4));
-}
-
-static inline void macscsi_write(struct Scsi_Host *instance, int reg, int value)
-{
-	out_8(instance->base + (reg << 4), value);
-}
-
 #ifndef MODULE
 static int __init mac_scsi_setup(char *str)
 {
@@ -167,16 +151,15 @@
 		: "0"(s), "1"(d), "2"(n) \
 		: "d0")
 
-static int macscsi_pread(struct Scsi_Host *instance,
-                         unsigned char *dst, int len)
+static inline int macscsi_pread(struct NCR5380_hostdata *hostdata,
+                                unsigned char *dst, int len)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
-	unsigned char *s = hostdata->pdma_base + (INPUT_DATA_REG << 4);
+	unsigned char *s = hostdata->pdma_io + (INPUT_DATA_REG << 4);
 	unsigned char *d = dst;
 	int n = len;
 	int transferred;
 
-	while (!NCR5380_poll_politely(instance, BUS_AND_STATUS_REG,
+	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
 	                              BASR_DRQ | BASR_PHASE_MATCH,
 	                              BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
 		CP_IO_TO_MEM(s, d, n);
@@ -189,23 +172,23 @@ static int macscsi_pread(struct Scsi_Host *instance,
 			return 0;
 
 		/* Target changed phase early? */
-		if (NCR5380_poll_politely2(instance, STATUS_REG, SR_REQ, SR_REQ,
+		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
 		                           BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
 			scmd_printk(KERN_ERR, hostdata->connected,
 			            "%s: !REQ and !ACK\n", __func__);
 		if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH))
 			return 0;
 
-		dsprintk(NDEBUG_PSEUDO_DMA, instance,
+		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
 		         "%s: bus error (%d/%d)\n", __func__, transferred, len);
-		NCR5380_dprint(NDEBUG_PSEUDO_DMA, instance);
+		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
 		d = dst + transferred;
 		n = len - transferred;
 	}
 
 	scmd_printk(KERN_ERR, hostdata->connected,
 	            "%s: phase mismatch or !DRQ\n", __func__);
-	NCR5380_dprint(NDEBUG_PSEUDO_DMA, instance);
+	NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
 	return -1;
 }
@@ -270,16 +253,15 @@
 		: "0"(s), "1"(d), "2"(n) \
 		: "d0")
 
-static int macscsi_pwrite(struct Scsi_Host *instance,
-                          unsigned char *src, int len)
+static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata,
+                                 unsigned char *src, int len)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char *s = src;
-	unsigned char *d = hostdata->pdma_base + (OUTPUT_DATA_REG << 4);
+	unsigned char *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4);
 	int n = len;
 	int transferred;
 
-	while (!NCR5380_poll_politely(instance, BUS_AND_STATUS_REG,
+	while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
 	                              BASR_DRQ | BASR_PHASE_MATCH,
 	                              BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) {
 		CP_MEM_TO_IO(s, d, n);
@@ -288,7 +270,7 @@ static int macscsi_pwrite(struct Scsi_Host *instance,
 		hostdata->pdma_residual = len - transferred;
 
 		/* Target changed phase early? */
-		if (NCR5380_poll_politely2(instance, STATUS_REG, SR_REQ, SR_REQ,
+		if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ,
 		                           BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0)
 			scmd_printk(KERN_ERR, hostdata->connected,
 			            "%s: !REQ and !ACK\n", __func__);
@@ -297,7 +279,7 @@ static int macscsi_pwrite(struct Scsi_Host *instance,
 
 		/* No bus error. */
 		if (n == 0) {
-			if (NCR5380_poll_politely(instance, TARGET_COMMAND_REG,
+			if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
 			                          TCR_LAST_BYTE_SENT,
 			                          TCR_LAST_BYTE_SENT, HZ / 64) < 0)
 				scmd_printk(KERN_ERR, hostdata->connected,
@@ -305,25 +287,23 @@ static int macscsi_pwrite(struct Scsi_Host *instance,
 			return 0;
 		}
 
-		dsprintk(NDEBUG_PSEUDO_DMA, instance,
+		dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host,
 		         "%s: bus error (%d/%d)\n", __func__, transferred, len);
-		NCR5380_dprint(NDEBUG_PSEUDO_DMA, instance);
+		NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
 		s = src + transferred;
 		n = len - transferred;
 	}
 
 	scmd_printk(KERN_ERR, hostdata->connected,
 	            "%s: phase mismatch or !DRQ\n", __func__);
-	NCR5380_dprint(NDEBUG_PSEUDO_DMA, instance);
+	NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host);
 	return -1;
 }
 
-static int macscsi_dma_xfer_len(struct Scsi_Host *instance,
+static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
                                 struct scsi_cmnd *cmd)
 {
-	struct NCR5380_hostdata *hostdata = shost_priv(instance);
-
 	if (hostdata->flags & FLAG_NO_PSEUDO_DMA ||
 	    cmd->SCp.this_residual < 16)
 		return 0;
@@ -331,6 +311,11 @@ static int macscsi_dma_xfer_len(struct Scsi_Host *instance,
 	return cmd->SCp.this_residual;
 }
 
+static int macscsi_dma_residual(struct NCR5380_hostdata *hostdata)
+{
+	return hostdata->pdma_residual;
+}
+
 #include "NCR5380.c"
 
 #define DRV_MODULE_NAME "mac_scsi"
@@ -356,6 +341,7 @@ static struct scsi_host_template mac_scsi_template = {
 static int __init mac_scsi_probe(struct platform_device *pdev)
 {
 	struct Scsi_Host *instance;
+	struct NCR5380_hostdata *hostdata;
 	int error;
 	int host_flags = 0;
 	struct resource *irq, *pio_mem, *pdma_mem = NULL;
@@ -388,17 +374,18 @@ static int __init mac_scsi_probe(struct platform_device *pdev)
 	if (!instance)
 		return -ENOMEM;
 
-	instance->base = pio_mem->start;
 	if (irq)
 		instance->irq = irq->start;
 	else
 		instance->irq = NO_IRQ;
 
-	if (pdma_mem && setup_use_pdma) {
-		struct NCR5380_hostdata *hostdata = shost_priv(instance);
+	hostdata = shost_priv(instance);
+	hostdata->base = pio_mem->start;
+	hostdata->io = (void *)pio_mem->start;
 
-		hostdata->pdma_base = (unsigned char *)pdma_mem->start;
-	} else
+	if (pdma_mem && setup_use_pdma)
+		hostdata->pdma_io = (void *)pdma_mem->start;
+	else
 		host_flags |= FLAG_NO_PSEUDO_DMA;
 
 	host_flags |= setup_toshiba_delay > 0 ? FLAG_TOSHIBA_DELAY : 0;


@@ -35,8 +35,8 @@
 /*
  * MegaRAID SAS Driver meta data
  */
-#define MEGASAS_VERSION				"06.811.02.00-rc1"
-#define MEGASAS_RELDATE				"April 12, 2016"
+#define MEGASAS_VERSION				"06.812.07.00-rc1"
+#define MEGASAS_RELDATE				"August 22, 2016"
 
 /*
  * Device IDs
@@ -1429,6 +1429,8 @@ enum FW_BOOT_CONTEXT {
 #define MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT	14
 #define MR_MAX_MSIX_REG_ARRAY			16
 #define MR_RDPQ_MODE_OFFSET			0X00800000
+#define MR_CAN_HANDLE_SYNC_CACHE_OFFSET		0X01000000
+
 /*
  * register set for both 1068 and 1078 controllers
  * structure extended for 1078 registers
@@ -2118,7 +2120,6 @@ struct megasas_instance {
 	u32 ctrl_context_pages;
 	struct megasas_ctrl_info *ctrl_info;
 	unsigned int msix_vectors;
-	struct msix_entry msixentry[MEGASAS_MAX_MSIX_QUEUES];
 	struct megasas_irq_context irq_context[MEGASAS_MAX_MSIX_QUEUES];
 	u64 map_id;
 	u64 pd_seq_map_id;
@@ -2140,6 +2141,7 @@ struct megasas_instance {
 	u8 is_imr;
 	u8 is_rdpq;
 	bool dev_handle;
+	bool fw_sync_cache_support;
 };
 
 struct MR_LD_VF_MAP {
 	u32 size;


@@ -1700,11 +1700,8 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
 		goto out_done;
 	}
 
-	/*
-	 * FW takes care of flush cache on its own for Virtual Disk.
-	 * No need to send it down for VD. For JBOD send SYNCHRONIZE_CACHE to FW.
-	 */
-	if ((scmd->cmnd[0] == SYNCHRONIZE_CACHE) && MEGASAS_IS_LOGICAL(scmd)) {
+	if ((scmd->cmnd[0] == SYNCHRONIZE_CACHE) && MEGASAS_IS_LOGICAL(scmd) &&
+	    (!instance->fw_sync_cache_support)) {
 		scmd->result = DID_OK << 16;
 		goto out_done;
 	}
@@ -4840,7 +4837,7 @@ fail_alloc_cmds:
 }
 
 /*
- * megasas_setup_irqs_msix -		register legacy interrupts.
+ * megasas_setup_irqs_ioapic -		register legacy interrupts.
  * @instance:				Adapter soft state
  *
  * Do not enable interrupt, only setup ISRs.
@@ -4855,8 +4852,9 @@ megasas_setup_irqs_ioapic(struct megasas_instance *instance)
 	pdev = instance->pdev;
 	instance->irq_context[0].instance = instance;
 	instance->irq_context[0].MSIxIndex = 0;
-	if (request_irq(pdev->irq, instance->instancet->service_isr,
-		IRQF_SHARED, "megasas", &instance->irq_context[0])) {
+	if (request_irq(pci_irq_vector(pdev, 0),
+			instance->instancet->service_isr, IRQF_SHARED,
+			"megasas", &instance->irq_context[0])) {
 		dev_err(&instance->pdev->dev,
 			"Failed to register IRQ from %s %d\n",
 			__func__, __LINE__);
@@ -4877,28 +4875,23 @@ megasas_setup_irqs_ioapic(struct megasas_instance *instance)
 static int
 megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
 {
-	int i, j, cpu;
+	int i, j;
 	struct pci_dev *pdev;
 
 	pdev = instance->pdev;
 
 	/* Try MSI-x */
-	cpu = cpumask_first(cpu_online_mask);
 	for (i = 0; i < instance->msix_vectors; i++) {
 		instance->irq_context[i].instance = instance;
 		instance->irq_context[i].MSIxIndex = i;
-		if (request_irq(instance->msixentry[i].vector,
+		if (request_irq(pci_irq_vector(pdev, i),
 			instance->instancet->service_isr, 0, "megasas",
 			&instance->irq_context[i])) {
 			dev_err(&instance->pdev->dev,
 				"Failed to register IRQ for vector %d.\n", i);
-			for (j = 0; j < i; j++) {
-				if (smp_affinity_enable)
-					irq_set_affinity_hint(
-						instance->msixentry[j].vector, NULL);
-				free_irq(instance->msixentry[j].vector,
-					 &instance->irq_context[j]);
-			}
+			for (j = 0; j < i; j++)
+				free_irq(pci_irq_vector(pdev, j),
+					 &instance->irq_context[j]);
 			/* Retry irq register for IO_APIC*/
 			instance->msix_vectors = 0;
 			if (is_probe)
@@ -4906,14 +4899,6 @@ megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
 			else
 				return -1;
 		}
-		if (smp_affinity_enable) {
-			if (irq_set_affinity_hint(instance->msixentry[i].vector,
-				get_cpu_mask(cpu)))
-				dev_err(&instance->pdev->dev,
-					"Failed to set affinity hint"
-					" for cpu %d\n", cpu);
-			cpu = cpumask_next(cpu, cpu_online_mask);
-		}
 	}
 	return 0;
 }
@@ -4930,14 +4915,12 @@ megasas_destroy_irqs(struct megasas_instance *instance) {
 
 	if (instance->msix_vectors)
 		for (i = 0; i < instance->msix_vectors; i++) {
-			if (smp_affinity_enable)
-				irq_set_affinity_hint(
-					instance->msixentry[i].vector, NULL);
-			free_irq(instance->msixentry[i].vector,
+			free_irq(pci_irq_vector(instance->pdev, i),
 				 &instance->irq_context[i]);
 		}
 	else
-		free_irq(instance->pdev->irq, &instance->irq_context[0]);
+		free_irq(pci_irq_vector(instance->pdev, 0),
+			 &instance->irq_context[0]);
 }
 
 /**
@@ -5095,6 +5078,8 @@ static int megasas_init_fw(struct megasas_instance *instance)
 	msix_enable = (instance->instancet->read_fw_status_reg(reg_set) &
 		       0x4000000) >> 0x1a;
 	if (msix_enable && !msix_disable) {
+		int irq_flags = PCI_IRQ_MSIX;
+
 		scratch_pad_2 = readl
 			(&instance->reg_set->outbound_scratch_pad_2);
 		/* Check max MSI-X vectors */
@@ -5131,15 +5116,18 @@ static int megasas_init_fw(struct megasas_instance *instance)
 		/* Don't bother allocating more MSI-X vectors than cpus */
 		instance->msix_vectors = min(instance->msix_vectors,
 					     (unsigned int)num_online_cpus());
-		for (i = 0; i < instance->msix_vectors; i++)
-			instance->msixentry[i].entry = i;
-		i = pci_enable_msix_range(instance->pdev, instance->msixentry,
-					  1, instance->msix_vectors);
+		if (smp_affinity_enable)
+			irq_flags |= PCI_IRQ_AFFINITY;
+		i = pci_alloc_irq_vectors(instance->pdev, 1,
+					  instance->msix_vectors, irq_flags);
 		if (i > 0)
 			instance->msix_vectors = i;
 		else
 			instance->msix_vectors = 0;
 	}
+	i = pci_alloc_irq_vectors(instance->pdev, 1, 1, PCI_IRQ_LEGACY);
+	if (i < 0)
+		goto fail_setup_irqs;
 
 	dev_info(&instance->pdev->dev,
 		"firmware supports msix\t: (%d)", fw_msix_count);
@@ -5152,11 +5140,6 @@ static int megasas_init_fw(struct megasas_instance *instance)
 	tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
 		(unsigned long)instance);
 
-	if (instance->msix_vectors ?
-		megasas_setup_irqs_msix(instance, 1) :
-		megasas_setup_irqs_ioapic(instance))
-		goto fail_setup_irqs;
-
 	instance->ctrl_info = kzalloc(sizeof(struct megasas_ctrl_info),
 				GFP_KERNEL);
 	if (instance->ctrl_info == NULL)
@@ -5172,6 +5155,10 @@ static int megasas_init_fw(struct megasas_instance *instance)
 	if (instance->instancet->init_adapter(instance))
 		goto fail_init_adapter;
 
+	if (instance->msix_vectors ?
+		megasas_setup_irqs_msix(instance, 1) :
+		megasas_setup_irqs_ioapic(instance))
+		goto fail_init_adapter;
+
 	instance->instancet->enable_intr(instance);
@@ -5315,7 +5302,7 @@ fail_init_adapter:
 	megasas_destroy_irqs(instance);
 fail_setup_irqs:
 	if (instance->msix_vectors)
-		pci_disable_msix(instance->pdev);
+		pci_free_irq_vectors(instance->pdev);
 	instance->msix_vectors = 0;
 fail_ready_state:
 	kfree(instance->ctrl_info);
@@ -5584,7 +5571,6 @@ static int megasas_io_attach(struct megasas_instance *instance)
 	/*
 	 * Export parameters required by SCSI mid-layer
 	 */
-	host->irq = instance->pdev->irq;
 	host->unique_id = instance->unique_id;
 	host->can_queue = instance->max_scsi_cmds;
 	host->this_id = instance->init_id;
@@ -5947,7 +5933,7 @@ fail_io_attach:
 	else
 		megasas_release_mfi(instance);
 	if (instance->msix_vectors)
-		pci_disable_msix(instance->pdev);
+		pci_free_irq_vectors(instance->pdev);
fail_init_mfi:
fail_alloc_dma_buf:
 	if (instance->evt_detail)
@@ -6105,7 +6091,7 @@ megasas_suspend(struct pci_dev *pdev, pm_message_t state)
 	megasas_destroy_irqs(instance);
 
 	if (instance->msix_vectors)
-		pci_disable_msix(instance->pdev);
+		pci_free_irq_vectors(instance->pdev);
 
 	pci_save_state(pdev);
 	pci_disable_device(pdev);
@@ -6125,6 +6111,7 @@ megasas_resume(struct pci_dev *pdev)
 	int rval;
 	struct Scsi_Host *host;
 	struct megasas_instance *instance;
+	int irq_flags = PCI_IRQ_LEGACY;
 
 	instance = pci_get_drvdata(pdev);
 	host = instance->host;
@@ -6160,9 +6147,15 @@ megasas_resume(struct pci_dev *pdev)
 		goto fail_ready_state;
 
 	/* Now re-enable MSI-X */
-	if (instance->msix_vectors &&
-	    pci_enable_msix_exact(instance->pdev, instance->msixentry,
-				  instance->msix_vectors))
+	if (instance->msix_vectors) {
+		irq_flags = PCI_IRQ_MSIX;
+		if (smp_affinity_enable)
+			irq_flags |= PCI_IRQ_AFFINITY;
+	}
+	rval = pci_alloc_irq_vectors(instance->pdev, 1,
+				     instance->msix_vectors ?
+				     instance->msix_vectors : 1, irq_flags);
+	if (rval < 0)
 		goto fail_reenable_msix;
 
 	if (instance->ctrl_context) {
@@ -6245,6 +6238,34 @@ fail_reenable_msix:
 #define megasas_resume	NULL
 #endif
 
+static inline int
+megasas_wait_for_adapter_operational(struct megasas_instance *instance)
+{
+	int wait_time = MEGASAS_RESET_WAIT_TIME * 2;
+	int i;
+
+	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)
+		return 1;
+
+	for (i = 0; i < wait_time; i++) {
+		if (atomic_read(&instance->adprecovery) == MEGASAS_HBA_OPERATIONAL)
+			break;
+
+		if (!(i % MEGASAS_RESET_NOTICE_INTERVAL))
+			dev_notice(&instance->pdev->dev, "waiting for controller reset to finish\n");
+
+		msleep(1000);
+	}
+
+	if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) {
+		dev_info(&instance->pdev->dev, "%s timed out while waiting for HBA to recover.\n",
+			 __func__);
+		return 1;
+	}
+
+	return 0;
+}
+
 /**
  * megasas_detach_one -	PCI hot"un"plug entry point
  * @pdev:		PCI device structure
@ -6269,9 +6290,14 @@ static void megasas_detach_one(struct pci_dev *pdev)
if (instance->fw_crash_state != UNAVAILABLE) if (instance->fw_crash_state != UNAVAILABLE)
megasas_free_host_crash_buffer(instance); megasas_free_host_crash_buffer(instance);
scsi_remove_host(instance->host); scsi_remove_host(instance->host);
if (megasas_wait_for_adapter_operational(instance))
goto skip_firing_dcmds;
megasas_flush_cache(instance); megasas_flush_cache(instance);
megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN); megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN);
skip_firing_dcmds:
/* cancel the delayed work if this work still in queue*/ /* cancel the delayed work if this work still in queue*/
if (instance->ev != NULL) { if (instance->ev != NULL) {
struct megasas_aen_event *ev = instance->ev; struct megasas_aen_event *ev = instance->ev;
@ -6302,7 +6328,7 @@ static void megasas_detach_one(struct pci_dev *pdev)
megasas_destroy_irqs(instance); megasas_destroy_irqs(instance);
if (instance->msix_vectors) if (instance->msix_vectors)
pci_disable_msix(instance->pdev); pci_free_irq_vectors(instance->pdev);
if (instance->ctrl_context) { if (instance->ctrl_context) {
megasas_release_fusion(instance); megasas_release_fusion(instance);
@ -6385,13 +6411,19 @@ static void megasas_shutdown(struct pci_dev *pdev)
struct megasas_instance *instance = pci_get_drvdata(pdev); struct megasas_instance *instance = pci_get_drvdata(pdev);
instance->unload = 1; instance->unload = 1;
if (megasas_wait_for_adapter_operational(instance))
goto skip_firing_dcmds;
megasas_flush_cache(instance); megasas_flush_cache(instance);
megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN); megasas_shutdown_controller(instance, MR_DCMD_CTRL_SHUTDOWN);
skip_firing_dcmds:
instance->instancet->disable_intr(instance); instance->instancet->disable_intr(instance);
megasas_destroy_irqs(instance); megasas_destroy_irqs(instance);
if (instance->msix_vectors) if (instance->msix_vectors)
pci_disable_msix(instance->pdev); pci_free_irq_vectors(instance->pdev);
} }
/** /**
@ -6752,8 +6784,7 @@ static int megasas_mgmt_ioctl_fw(struct file *file, unsigned long arg)
if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) { if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) {
spin_unlock_irqrestore(&instance->hba_lock, flags); spin_unlock_irqrestore(&instance->hba_lock, flags);
dev_err(&instance->pdev->dev, "timed out while" dev_err(&instance->pdev->dev, "timed out while waiting for HBA to recover\n");
"waiting for HBA to recover\n");
error = -ENODEV; error = -ENODEV;
goto out_up; goto out_up;
} }
@ -6821,8 +6852,7 @@ static int megasas_mgmt_ioctl_aen(struct file *file, unsigned long arg)
spin_lock_irqsave(&instance->hba_lock, flags); spin_lock_irqsave(&instance->hba_lock, flags);
if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) { if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) {
spin_unlock_irqrestore(&instance->hba_lock, flags); spin_unlock_irqrestore(&instance->hba_lock, flags);
dev_err(&instance->pdev->dev, "timed out while waiting" dev_err(&instance->pdev->dev, "timed out while waiting for HBA to recover\n");
"for HBA to recover\n");
return -ENODEV; return -ENODEV;
} }
spin_unlock_irqrestore(&instance->hba_lock, flags); spin_unlock_irqrestore(&instance->hba_lock, flags);


@@ -782,7 +782,8 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
         (raid->regTypeReqOnRead != REGION_TYPE_UNUSED))))
         pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
     else if (raid->level == 1) {
-        pd = MR_ArPdGet(arRef, physArm + 1, map);
+        physArm = physArm + 1;
+        pd = MR_ArPdGet(arRef, physArm, map);
         if (pd != MR_PD_INVALID)
             *pDevHandle = MR_PdDevHandleGet(pd, map);
     }
@@ -879,7 +880,8 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
         pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
     else if (raid->level == 1) {
         /* Get alternate Pd. */
-        pd = MR_ArPdGet(arRef, physArm + 1, map);
+        physArm = physArm + 1;
+        pd = MR_ArPdGet(arRef, physArm, map);
         if (pd != MR_PD_INVALID)
             /* Get dev handle from Pd */
             *pDevHandle = MR_PdDevHandleGet(pd, map);


@@ -748,6 +748,11 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
         goto fail_fw_init;
     }
+    instance->fw_sync_cache_support = (scratch_pad_2 &
+        MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 1 : 0;
+    dev_info(&instance->pdev->dev, "FW supports sync cache\t: %s\n",
+             instance->fw_sync_cache_support ? "Yes" : "No");
+
     IOCInitMessage =
         dma_alloc_coherent(&instance->pdev->dev,
                            sizeof(struct MPI2_IOC_INIT_REQUEST),
@@ -2000,6 +2005,8 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
         io_request->DevHandle = pd_sync->seq[pd_index].devHandle;
         pRAID_Context->regLockFlags |=
             (MR_RL_FLAGS_SEQ_NUM_ENABLE|MR_RL_FLAGS_GRANT_DESTINATION_CUDA);
+        pRAID_Context->Type = MPI2_TYPE_CUDA;
+        pRAID_Context->nseg = 0x1;
     } else if (fusion->fast_path_io) {
         pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
         pRAID_Context->configSeqNum = 0;
@@ -2035,12 +2042,10 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
     pRAID_Context->timeoutValue =
         cpu_to_le16((os_timeout_value > timeout_limit) ?
                     timeout_limit : os_timeout_value);
-    if (fusion->adapter_type == INVADER_SERIES) {
-        pRAID_Context->Type = MPI2_TYPE_CUDA;
-        pRAID_Context->nseg = 0x1;
+    if (fusion->adapter_type == INVADER_SERIES)
         io_request->IoFlags |=
             cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH);
-    }
+
     cmd->request_desc->SCSIIO.RequestFlags =
         (MPI2_REQ_DESCRIPT_FLAGS_FP_IO <<
          MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
@@ -2463,11 +2468,14 @@ irqreturn_t megasas_isr_fusion(int irq, void *devp)
             /* Start collecting crash, if DMA bit is done */
             if ((fw_state == MFI_STATE_FAULT) && dma_state)
                 schedule_work(&instance->crash_init);
-            else if (fw_state == MFI_STATE_FAULT)
-                schedule_work(&instance->work_init);
+            else if (fw_state == MFI_STATE_FAULT) {
+                if (instance->unload == 0)
+                    schedule_work(&instance->work_init);
+            }
         } else if (fw_state == MFI_STATE_FAULT) {
             dev_warn(&instance->pdev->dev, "Iop2SysDoorbellInt"
                 "for scsi%d\n", instance->host->host_no);
+            if (instance->unload == 0)
                 schedule_work(&instance->work_init);
         }
     }
@@ -2823,6 +2831,7 @@ int megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
         dev_err(&instance->pdev->dev, "pending commands remain after waiting, "
             "will reset adapter scsi%d.\n",
             instance->host->host_no);
+        *convert = 1;
         retval = 1;
     }
 out:


@@ -478,6 +478,13 @@ typedef struct _MPI2_CONFIG_REPLY {
 #define MPI26_MFGPAGE_DEVID_SAS3324_3       (0x00C2)
 #define MPI26_MFGPAGE_DEVID_SAS3324_4       (0x00C3)
+
+#define MPI26_MFGPAGE_DEVID_SAS3516         (0x00AA)
+#define MPI26_MFGPAGE_DEVID_SAS3516_1       (0x00AB)
+#define MPI26_MFGPAGE_DEVID_SAS3416         (0x00AC)
+#define MPI26_MFGPAGE_DEVID_SAS3508         (0x00AD)
+#define MPI26_MFGPAGE_DEVID_SAS3508_1       (0x00AE)
+#define MPI26_MFGPAGE_DEVID_SAS3408         (0x00AF)
+
 /*Manufacturing Page 0 */
 typedef struct _MPI2_CONFIG_PAGE_MAN_0 {


@@ -849,7 +849,7 @@ _base_async_event(struct MPT3SAS_ADAPTER *ioc, u8 msix_index, u32 reply)
     ack_request->EventContext = mpi_reply->EventContext;
     ack_request->VF_ID = 0;  /* TODO */
     ack_request->VP_ID = 0;
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
 out:
@@ -1078,7 +1078,7 @@ _base_interrupt(int irq, void *bus_id)
      * new reply host index value in ReplyPostIndex Field and msix_index
      * value in MSIxIndex field.
      */
-    if (ioc->msix96_vector)
+    if (ioc->combined_reply_queue)
         writel(reply_q->reply_post_host_index | ((msix_index & 7) <<
             MPI2_RPHI_MSIX_INDEX_SHIFT),
             ioc->replyPostRegisterIndex[msix_index/8]);
@@ -1959,7 +1959,7 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
 {
     struct msix_entry *entries, *a;
     int r;
-    int i;
+    int i, local_max_msix_vectors;
     u8 try_msix = 0;
     if (msix_disable == -1 || msix_disable == 0)
@@ -1979,13 +1979,15 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
         ioc->cpu_count, max_msix_vectors);
     if (!ioc->rdpq_array_enable && max_msix_vectors == -1)
-        max_msix_vectors = 8;
+        local_max_msix_vectors = 8;
+    else
+        local_max_msix_vectors = max_msix_vectors;
-    if (max_msix_vectors > 0) {
-        ioc->reply_queue_count = min_t(int, max_msix_vectors,
+    if (local_max_msix_vectors > 0) {
+        ioc->reply_queue_count = min_t(int, local_max_msix_vectors,
             ioc->reply_queue_count);
         ioc->msix_vector_count = ioc->reply_queue_count;
-    } else if (max_msix_vectors == 0)
+    } else if (local_max_msix_vectors == 0)
         goto try_ioapic;
     if (ioc->msix_vector_count < ioc->cpu_count)
@@ -2050,7 +2052,7 @@ mpt3sas_base_unmap_resources(struct MPT3SAS_ADAPTER *ioc)
     _base_free_irq(ioc);
     _base_disable_msix(ioc);
-    if (ioc->msix96_vector) {
+    if (ioc->combined_reply_queue) {
         kfree(ioc->replyPostRegisterIndex);
         ioc->replyPostRegisterIndex = NULL;
     }
@@ -2160,7 +2162,7 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
     /* Use the Combined reply queue feature only for SAS3 C0 & higher
      * revision HBAs and also only when reply queue count is greater than 8
      */
-    if (ioc->msix96_vector && ioc->reply_queue_count > 8) {
+    if (ioc->combined_reply_queue && ioc->reply_queue_count > 8) {
         /* Determine the Supplemental Reply Post Host Index Registers
          * Addresse. Supplemental Reply Post Host Index Registers
          * starts at offset MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET and
@@ -2168,7 +2170,7 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
          * MPT3_SUP_REPLY_POST_HOST_INDEX_REG_OFFSET from previous one.
          */
         ioc->replyPostRegisterIndex = kcalloc(
-            MPT3_SUP_REPLY_POST_HOST_INDEX_REG_COUNT,
+            ioc->combined_reply_index_count,
             sizeof(resource_size_t *), GFP_KERNEL);
         if (!ioc->replyPostRegisterIndex) {
             dfailprintk(ioc, printk(MPT3SAS_FMT
@@ -2178,14 +2180,14 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
             goto out_fail;
         }
-        for (i = 0; i < MPT3_SUP_REPLY_POST_HOST_INDEX_REG_COUNT; i++) {
+        for (i = 0; i < ioc->combined_reply_index_count; i++) {
             ioc->replyPostRegisterIndex[i] = (resource_size_t *)
                 ((u8 *)&ioc->chip->Doorbell +
                 MPI25_SUP_REPLY_POST_HOST_INDEX_OFFSET +
                 (i * MPT3_SUP_REPLY_POST_HOST_INDEX_REG_OFFSET));
         }
     } else
-        ioc->msix96_vector = 0;
+        ioc->combined_reply_queue = 0;
     if (ioc->is_warpdrive) {
         ioc->reply_post_host_index[0] = (resource_size_t __iomem *)
@@ -2462,15 +2464,15 @@ _base_writeq(__u64 b, volatile void __iomem *addr, spinlock_t *writeq_lock)
 #endif
 /**
- * mpt3sas_base_put_smid_scsi_io - send SCSI_IO request to firmware
+ * _base_put_smid_scsi_io - send SCSI_IO request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @handle: device handle
  *
  * Return nothing.
  */
-void
-mpt3sas_base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
+static void
+_base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
 {
     Mpi2RequestDescriptorUnion_t descriptor;
     u64 *request = (u64 *)&descriptor;
@@ -2486,15 +2488,15 @@ _base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle)
 }
 /**
- * mpt3sas_base_put_smid_fast_path - send fast path request to firmware
+ * _base_put_smid_fast_path - send fast path request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @handle: device handle
  *
  * Return nothing.
  */
-void
-mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+static void
+_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
     u16 handle)
 {
     Mpi2RequestDescriptorUnion_t descriptor;
@@ -2511,14 +2513,14 @@ _base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 }
 /**
- * mpt3sas_base_put_smid_hi_priority - send Task Managment request to firmware
+ * _base_put_smid_hi_priority - send Task Management request to firmware
  * @ioc: per adapter object
  * @smid: system request message index
  * @msix_task: msix_task will be same as msix of IO incase of task abort else 0.
  * Return nothing.
  */
-void
-mpt3sas_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+static void
+_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
     u16 msix_task)
 {
     Mpi2RequestDescriptorUnion_t descriptor;
@@ -2535,14 +2537,14 @@ _base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 }
 /**
- * mpt3sas_base_put_smid_default - Default, primarily used for config pages
+ * _base_put_smid_default - Default, primarily used for config pages
  * @ioc: per adapter object
  * @smid: system request message index
  *
  * Return nothing.
  */
-void
-mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+static void
+_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 {
     Mpi2RequestDescriptorUnion_t descriptor;
     u64 *request = (u64 *)&descriptor;
@@ -2556,6 +2558,95 @@ _base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
         &ioc->scsi_lookup_lock);
 }
+/**
+ * _base_put_smid_scsi_io_atomic - send SCSI_IO request to firmware using
+ *   Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @handle: device handle, unused in this function, for function type match
+ *
+ * Return nothing.
+ */
+static void
+_base_put_smid_scsi_io_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+    u16 handle)
+{
+    Mpi26AtomicRequestDescriptor_t descriptor;
+    u32 *request = (u32 *)&descriptor;
+
+    descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
+    descriptor.MSIxIndex = _base_get_msix_index(ioc);
+    descriptor.SMID = cpu_to_le16(smid);
+
+    writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_fast_path_atomic - send fast path request to firmware
+ *   using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @handle: device handle, unused in this function, for function type match
+ *
+ * Return nothing.
+ */
+static void
+_base_put_smid_fast_path_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+    u16 handle)
+{
+    Mpi26AtomicRequestDescriptor_t descriptor;
+    u32 *request = (u32 *)&descriptor;
+
+    descriptor.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
+    descriptor.MSIxIndex = _base_get_msix_index(ioc);
+    descriptor.SMID = cpu_to_le16(smid);
+
+    writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_hi_priority_atomic - send Task Management request to
+ *   firmware using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @msix_task: msix_task will be same as msix of IO incase of task abort else 0
+ *
+ * Return nothing.
+ */
+static void
+_base_put_smid_hi_priority_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid,
+    u16 msix_task)
+{
+    Mpi26AtomicRequestDescriptor_t descriptor;
+    u32 *request = (u32 *)&descriptor;
+
+    descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
+    descriptor.MSIxIndex = msix_task;
+    descriptor.SMID = cpu_to_le16(smid);
+
+    writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
+
+/**
+ * _base_put_smid_default_atomic - Default, primarily used for config pages
+ *   using Atomic Request Descriptor
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Return nothing.
+ */
+static void
+_base_put_smid_default_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+{
+    Mpi26AtomicRequestDescriptor_t descriptor;
+    u32 *request = (u32 *)&descriptor;
+
+    descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
+    descriptor.MSIxIndex = _base_get_msix_index(ioc);
+    descriptor.SMID = cpu_to_le16(smid);
+
+    writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
+}
 /**
  * _base_display_OEMs_branding - Display branding string
  * @ioc: per adapter object
@@ -4070,7 +4161,7 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
         mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET)
         ioc->ioc_link_reset_in_progress = 1;
     init_completion(&ioc->base_cmds.done);
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     wait_for_completion_timeout(&ioc->base_cmds.done,
         msecs_to_jiffies(10000));
     if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
@@ -4170,7 +4261,7 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
     ioc->base_cmds.smid = smid;
     memcpy(request, mpi_request, sizeof(Mpi2SepReply_t));
     init_completion(&ioc->base_cmds.done);
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     wait_for_completion_timeout(&ioc->base_cmds.done,
         msecs_to_jiffies(10000));
     if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
@@ -4355,6 +4446,8 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc)
     if ((facts->IOCCapabilities &
         MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE))
         ioc->rdpq_array_capable = 1;
+    if (facts->IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ)
+        ioc->atomic_desc_capable = 1;
     facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
     facts->IOCRequestFrameSize =
         le16_to_cpu(mpi_reply.IOCRequestFrameSize);
@@ -4582,7 +4675,7 @@ _base_send_port_enable(struct MPT3SAS_ADAPTER *ioc)
     mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
     init_completion(&ioc->port_enable_cmds.done);
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ);
     if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) {
         pr_err(MPT3SAS_FMT "%s: timeout\n",
@@ -4645,7 +4738,7 @@ mpt3sas_port_enable(struct MPT3SAS_ADAPTER *ioc)
     memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t));
     mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     return 0;
 }
@@ -4764,7 +4857,7 @@ _base_event_notification(struct MPT3SAS_ADAPTER *ioc)
         mpi_request->EventMasks[i] =
             cpu_to_le32(ioc->event_masks[i]);
     init_completion(&ioc->base_cmds.done);
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ);
     if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
         pr_err(MPT3SAS_FMT "%s: timeout\n",
@@ -5138,7 +5231,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc)
     /* initialize reply post host index */
     list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
-        if (ioc->msix96_vector)
+        if (ioc->combined_reply_queue)
             writel((reply_q->msix_index & 7)<<
                 MPI2_RPHI_MSIX_INDEX_SHIFT,
                 ioc->replyPostRegisterIndex[reply_q->msix_index/8]);
@@ -5280,9 +5373,23 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
         ioc->build_sg = &_base_build_sg_ieee;
         ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee;
         ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t);
         break;
     }
+
+    if (ioc->atomic_desc_capable) {
+        ioc->put_smid_default = &_base_put_smid_default_atomic;
+        ioc->put_smid_scsi_io = &_base_put_smid_scsi_io_atomic;
+        ioc->put_smid_fast_path = &_base_put_smid_fast_path_atomic;
+        ioc->put_smid_hi_priority = &_base_put_smid_hi_priority_atomic;
+    } else {
+        ioc->put_smid_default = &_base_put_smid_default;
+        ioc->put_smid_scsi_io = &_base_put_smid_scsi_io;
+        ioc->put_smid_fast_path = &_base_put_smid_fast_path;
+        ioc->put_smid_hi_priority = &_base_put_smid_hi_priority;
+    }
+
     /*
      * These function pointers for other requests that don't
      * the require IEEE scatter gather elements.
@@ -5332,6 +5439,21 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
         goto out_free_resources;
     }
+
+    /* allocate memory for pending OS device add list */
+    ioc->pend_os_device_add_sz = (ioc->facts.MaxDevHandle / 8);
+    if (ioc->facts.MaxDevHandle % 8)
+        ioc->pend_os_device_add_sz++;
+    ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz,
+        GFP_KERNEL);
+    if (!ioc->pend_os_device_add)
+        goto out_free_resources;
+
+    ioc->device_remove_in_progress_sz = ioc->pend_os_device_add_sz;
+    ioc->device_remove_in_progress =
+        kzalloc(ioc->device_remove_in_progress_sz, GFP_KERNEL);
+    if (!ioc->device_remove_in_progress)
+        goto out_free_resources;
+
     ioc->fwfault_debug = mpt3sas_fwfault_debug;
     /* base internal command bits */
@@ -5414,6 +5536,8 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
     kfree(ioc->reply_post_host_index);
     kfree(ioc->pd_handles);
     kfree(ioc->blocking_handles);
+    kfree(ioc->device_remove_in_progress);
+    kfree(ioc->pend_os_device_add);
     kfree(ioc->tm_cmds.reply);
     kfree(ioc->transport_cmds.reply);
     kfree(ioc->scsih_cmds.reply);
@@ -5455,6 +5579,8 @@ mpt3sas_base_detach(struct MPT3SAS_ADAPTER *ioc)
     kfree(ioc->reply_post_host_index);
     kfree(ioc->pd_handles);
     kfree(ioc->blocking_handles);
+    kfree(ioc->device_remove_in_progress);
+    kfree(ioc->pend_os_device_add);
     kfree(ioc->pfacts);
     kfree(ioc->ctl_cmds.reply);
    kfree(ioc->ctl_cmds.sense);


@@ -73,9 +73,9 @@
 #define MPT3SAS_DRIVER_NAME     "mpt3sas"
 #define MPT3SAS_AUTHOR  "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
 #define MPT3SAS_DESCRIPTION     "LSI MPT Fusion SAS 3.0 Device Driver"
-#define MPT3SAS_DRIVER_VERSION  "13.100.00.00"
-#define MPT3SAS_MAJOR_VERSION   13
-#define MPT3SAS_MINOR_VERSION   100
+#define MPT3SAS_DRIVER_VERSION  "14.101.00.00"
+#define MPT3SAS_MAJOR_VERSION   14
+#define MPT3SAS_MINOR_VERSION   101
 #define MPT3SAS_BUILD_VERSION   0
 #define MPT3SAS_RELEASE_VERSION 00
@@ -300,7 +300,8 @@
  * There are twelve Supplemental Reply Post Host Index Registers
  * and each register is at offset 0x10 bytes from the previous one.
  */
-#define MPT3_SUP_REPLY_POST_HOST_INDEX_REG_COUNT 12
+#define MPT3_SUP_REPLY_POST_HOST_INDEX_REG_COUNT_G3     12
+#define MPT3_SUP_REPLY_POST_HOST_INDEX_REG_COUNT_G35    16
 #define MPT3_SUP_REPLY_POST_HOST_INDEX_REG_OFFSET       (0x10)
 /* OEM Identifiers */
@@ -375,7 +376,6 @@ struct MPT3SAS_TARGET {
  * per device private data
  */
 #define MPT_DEVICE_FLAGS_INIT   0x01
-#define MPT_DEVICE_TLR_ON       0x02
 #define MFG_PAGE10_HIDE_SSDS_MASK       (0x00000003)
 #define MFG_PAGE10_HIDE_ALL_DISKS       (0x00)
@@ -736,7 +736,10 @@ typedef void (*MPT_BUILD_SG)(struct MPT3SAS_ADAPTER *ioc, void *psge,
 typedef void (*MPT_BUILD_ZERO_LEN_SGE)(struct MPT3SAS_ADAPTER *ioc,
         void *paddr);
+/* To support atomic and non atomic descriptors */
+typedef void (*PUT_SMID_IO_FP_HIP) (struct MPT3SAS_ADAPTER *ioc, u16 smid,
+    u16 funcdep);
+typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid);
 /* IOC Facts and Port Facts converted from little endian to cpu */
 union mpi3_version_union {
@@ -1079,6 +1082,9 @@ struct MPT3SAS_ADAPTER {
     void        *pd_handles;
     u16         pd_handles_sz;
+    void        *pend_os_device_add;
+    u16         pend_os_device_add_sz;
+
     /* config page */
     u16         config_page_sz;
     void        *config_page;
@@ -1156,7 +1162,8 @@ struct MPT3SAS_ADAPTER {
     u8          reply_queue_count;
     struct list_head reply_queue_list;
-    u8          msix96_vector;
+    u8          combined_reply_queue;
+    u8          combined_reply_index_count;
     /* reply post register index */
     resource_size_t **replyPostRegisterIndex;
@@ -1187,6 +1194,15 @@ struct MPT3SAS_ADAPTER {
     struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event;
     struct SL_WH_SCSI_TRIGGERS_T diag_trigger_scsi;
     struct SL_WH_MPI_TRIGGERS_T diag_trigger_mpi;
+    void        *device_remove_in_progress;
+    u16         device_remove_in_progress_sz;
+    u8          is_gen35_ioc;
+    u8          atomic_desc_capable;
+    PUT_SMID_IO_FP_HIP put_smid_scsi_io;
+    PUT_SMID_IO_FP_HIP put_smid_fast_path;
+    PUT_SMID_IO_FP_HIP put_smid_hi_priority;
+    PUT_SMID_DEFAULT put_smid_default;
 };
 typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
@@ -1232,13 +1248,6 @@ u16 mpt3sas_base_get_smid_scsiio(struct MPT3SAS_ADAPTER *ioc, u8 cb_idx,
 u16 mpt3sas_base_get_smid(struct MPT3SAS_ADAPTER *ioc, u8 cb_idx);
 void mpt3sas_base_free_smid(struct MPT3SAS_ADAPTER *ioc, u16 smid);
-void mpt3sas_base_put_smid_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid,
-    u16 handle);
-void mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid,
-    u16 handle);
-void mpt3sas_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc,
-    u16 smid, u16 msix_task);
-void mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid);
 void mpt3sas_base_initialize_callback_handler(void);
 u8 mpt3sas_base_register_callback_handler(MPT_CALLBACK cb_func);
 void mpt3sas_base_release_callback_handler(u8 cb_idx);


@@ -384,7 +384,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
     memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
     _config_display_some_debug(ioc, smid, "config_request", NULL);
     init_completion(&ioc->config_cmds.done);
-    mpt3sas_base_put_smid_default(ioc, smid);
+    ioc->put_smid_default(ioc, smid);
     wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
     if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
         pr_err(MPT3SAS_FMT "%s: timeout\n",


@@ -654,6 +654,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 	size_t data_in_sz = 0;
 	long ret;
 	u16 wait_state_count;
+	u16 device_handle = MPT3SAS_INVALID_DEVICE_HANDLE;
 
 	issue_reset = 0;
@@ -738,10 +739,13 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 		data_in_sz = karg.data_in_size;
 
 	if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST ||
-	    mpi_request->Function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH) {
-		if (!le16_to_cpu(mpi_request->FunctionDependent1) ||
-		    le16_to_cpu(mpi_request->FunctionDependent1) >
-		    ioc->facts.MaxDevHandle) {
+	    mpi_request->Function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH ||
+	    mpi_request->Function == MPI2_FUNCTION_SCSI_TASK_MGMT ||
+	    mpi_request->Function == MPI2_FUNCTION_SATA_PASSTHROUGH) {
+		device_handle = le16_to_cpu(mpi_request->FunctionDependent1);
+		if (!device_handle || (device_handle >
+		    ioc->facts.MaxDevHandle)) {
 			ret = -EINVAL;
 			mpt3sas_base_free_smid(ioc, smid);
 			goto out;
@@ -797,14 +801,20 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
 		scsiio_request->SenseBufferLowAddress =
 		    mpt3sas_base_get_sense_buffer_dma(ioc, smid);
 		memset(ioc->ctl_cmds.sense, 0, SCSI_SENSE_BUFFERSIZE);
+		if (test_bit(device_handle, ioc->device_remove_in_progress)) {
+			dtmprintk(ioc, pr_info(MPT3SAS_FMT
+				"handle(0x%04x) :ioctl failed due to device removal in progress\n",
+				ioc->name, device_handle));
+			mpt3sas_base_free_smid(ioc, smid);
+			ret = -EINVAL;
+			goto out;
+		}
 		ioc->build_sg(ioc, psge, data_out_dma, data_out_sz,
 		    data_in_dma, data_in_sz);
 		if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)
-			mpt3sas_base_put_smid_scsi_io(ioc, smid,
-			    le16_to_cpu(mpi_request->FunctionDependent1));
+			ioc->put_smid_scsi_io(ioc, smid, device_handle);
 		else
-			mpt3sas_base_put_smid_default(ioc, smid);
+			ioc->put_smid_default(ioc, smid);
 		break;
 	}
 	case MPI2_FUNCTION_SCSI_TASK_MGMT:
case MPI2_FUNCTION_SCSI_TASK_MGMT: case MPI2_FUNCTION_SCSI_TASK_MGMT:
@ -827,11 +837,19 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
} }
} }
if (test_bit(device_handle, ioc->device_remove_in_progress)) {
dtmprintk(ioc, pr_info(MPT3SAS_FMT
"handle(0x%04x) :ioctl failed due to device removal in progress\n",
ioc->name, device_handle));
mpt3sas_base_free_smid(ioc, smid);
ret = -EINVAL;
goto out;
}
mpt3sas_scsih_set_tm_flag(ioc, le16_to_cpu( mpt3sas_scsih_set_tm_flag(ioc, le16_to_cpu(
tm_request->DevHandle)); tm_request->DevHandle));
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz); data_in_dma, data_in_sz);
mpt3sas_base_put_smid_hi_priority(ioc, smid, 0); ioc->put_smid_hi_priority(ioc, smid, 0);
break; break;
} }
case MPI2_FUNCTION_SMP_PASSTHROUGH: case MPI2_FUNCTION_SMP_PASSTHROUGH:
@ -862,16 +880,30 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
} }
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma, ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz); data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid); ioc->put_smid_default(ioc, smid);
break; break;
} }
case MPI2_FUNCTION_SATA_PASSTHROUGH: case MPI2_FUNCTION_SATA_PASSTHROUGH:
{
if (test_bit(device_handle, ioc->device_remove_in_progress)) {
dtmprintk(ioc, pr_info(MPT3SAS_FMT
"handle(0x%04x) :ioctl failed due to device removal in progress\n",
ioc->name, device_handle));
mpt3sas_base_free_smid(ioc, smid);
ret = -EINVAL;
goto out;
}
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz);
ioc->put_smid_default(ioc, smid);
break;
}
case MPI2_FUNCTION_FW_DOWNLOAD: case MPI2_FUNCTION_FW_DOWNLOAD:
case MPI2_FUNCTION_FW_UPLOAD: case MPI2_FUNCTION_FW_UPLOAD:
{ {
ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma, ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
data_in_sz); data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid); ioc->put_smid_default(ioc, smid);
break; break;
} }
case MPI2_FUNCTION_TOOLBOX: case MPI2_FUNCTION_TOOLBOX:
@ -886,7 +918,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz); data_in_dma, data_in_sz);
} }
mpt3sas_base_put_smid_default(ioc, smid); ioc->put_smid_default(ioc, smid);
break; break;
} }
case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL: case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
@ -905,7 +937,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
default: default:
ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz, ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
data_in_dma, data_in_sz); data_in_dma, data_in_sz);
mpt3sas_base_put_smid_default(ioc, smid); ioc->put_smid_default(ioc, smid);
break; break;
} }
@@ -1064,6 +1096,9 @@ _ctl_getiocinfo(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
 		break;
 	case MPI25_VERSION:
 	case MPI26_VERSION:
+		if (ioc->is_gen35_ioc)
+			karg.adapter_type = MPT3_IOCTL_INTERFACE_SAS35;
+		else
 		karg.adapter_type = MPT3_IOCTL_INTERFACE_SAS3;
 		strcat(karg.driver_version, MPT3SAS_DRIVER_VERSION);
 		break;
@@ -1491,7 +1526,7 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
 		    cpu_to_le32(ioc->product_specific[buffer_type][i]);
 
 	init_completion(&ioc->ctl_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->ctl_cmds.done,
 	    MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
@@ -1838,7 +1873,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
 	mpi_request->VP_ID = 0;
 
 	init_completion(&ioc->ctl_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->ctl_cmds.done,
 	    MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
@@ -2105,7 +2140,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
 	mpi_request->VP_ID = 0;
 
 	init_completion(&ioc->ctl_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->ctl_cmds.done,
 	    MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);


@@ -143,6 +143,7 @@ struct mpt3_ioctl_pci_info {
 #define MPT2_IOCTL_INTERFACE_SAS2	(0x04)
 #define MPT2_IOCTL_INTERFACE_SAS2_SSS6200	(0x05)
 #define MPT3_IOCTL_INTERFACE_SAS3	(0x06)
+#define MPT3_IOCTL_INTERFACE_SAS35	(0x07)
 #define MPT2_IOCTL_VERSION_LENGTH	(32)
 
 /**
